# The geometry of \(\Phi_{(3)}\)-harmonic maps

Shuxiang Feng, Yingbo Han, Kaige Jiang, Shihshu Walter Wei

Published: 2023-05-31 | http://arxiv.org/abs/2305.19503v1
###### Abstract
In this paper, we motivate and extend the study of harmonic maps or \(\Phi_{(1)}\)-harmonic maps (cf. [15], Remark 1.3 (iii)) and \(\Phi\)-harmonic maps or \(\Phi_{(2)}\)-harmonic maps (cf. [24], Remark 1.3 (v)), and explore geometric properties of \(\Phi_{(3)}\)-harmonic maps by unified geometric analytic methods. We define the notion of \(\Phi_{(3)}\)-harmonic maps and obtain the first variation formula and the second variation formula of the \(\Phi_{(3)}\)-energy functional \(E_{\Phi_{(3)}}\). By using a stress-energy tensor and the asymptotic assumption of maps at infinity, we prove Liouville type results for \(\Phi_{(3)}\)-harmonic maps. We introduce the notion of \(\Phi_{(3)}\)-Superstrongly Unstable (\(\Phi_{(3)}\)-SSU) manifold and provide many interesting examples. By using an extrinsic average variational method in the calculus of variations (cf. [51, 49]), we find \(\Phi_{(3)}\)-SSU manifolds and prove that any stable \(\Phi_{(3)}\)-harmonic map from a compact \(\Phi_{(3)}\)-SSU manifold (into any compact Riemannian manifold), or (from any compact Riemannian manifold) into a compact \(\Phi_{(3)}\)-SSU manifold, must be constant. We also prove that the homotopy class of any map from a compact \(\Phi_{(3)}\)-SSU manifold (into any compact Riemannian manifold), or (from any compact Riemannian manifold) into a compact \(\Phi_{(3)}\)-SSU manifold, contains elements of arbitrarily small \(\Phi_{(3)}\)-energy. We say that a compact Riemannian manifold \(M\) is \(\Phi_{(3)}\)-strongly unstable (\(\Phi_{(3)}\)-SU) if it is neither the target nor the domain of a nonconstant stable \(\Phi_{(3)}\)-harmonic map (from or into any compact Riemannian manifold) and the homotopy class of any map to or from \(M\) (from or into any compact Riemannian manifold) contains elements of arbitrarily small \(\Phi_{(3)}\)-energy. We prove that every compact \(\Phi_{(3)}\)-SSU manifold is \(\Phi_{(3)}\)-SU. As consequences, we obtain topological vanishing theorems and sphere theorems by employing a \(\Phi_{(3)}\)-harmonic map as a catalyst. This is in contrast to the approaches of utilizing a geodesic ([45]), minimal surface, stable rectifiable current ([34, 29, 50]), \(p\)-harmonic map (cf. [53]), etc., as catalysts. These mysterious phenomena are analogs of those for harmonic maps or \(\Phi_{(1)}\)-harmonic maps, \(p\)-harmonic maps, \(\Phi_{S}\)-harmonic maps, \(\Phi_{S,p}\)-harmonic maps, \(\Phi_{(2)}\)-harmonic maps, etc. (cf. [28, 52, 55, 53, 18, 19]).
Footnote †: Keywords: \(\Phi_{(3)}\)-harmonic maps, Liouville type results, variation formula, \(\Phi_{(3)}\)-SSU manifold.
## 1 Introduction
Harmonic maps or \(\Phi_{(1)}\)-harmonic maps (cf. [15], Remark 1.3 (iii)), which appear in a broad spectrum of contexts in mathematics and physics, have had wide-ranging consequences and influenced developments in other fields (see, e.g., [3, 8, 15, 27, 57]). From an algebraic invariant point of view, a harmonic map (or \(\Phi_{(1)}\)-harmonic map) \(u:(M,g)\rightarrow(N,h)\) between Riemannian manifolds \((M,g)\) and \((N,h)\) can be viewed as a critical point of the energy or \(\Phi_{(1)}\)-energy functional, given by the integral of one half \((=\frac{1}{1\cdot 2})\) of the _first_ elementary symmetric function \(\sigma_{1}\) of the eigenvalues of the pullback metric tensor \(u^{*}h\) relative to the metric \(g\). Similarly, a \(\Phi\)-harmonic map or \(\Phi_{(2)}\)-harmonic map (cf. [24], Remark 1.3 (v)) is a critical point of the \(\Phi\)-energy or \(\Phi_{(2)}\)-energy functional, given by the integral of one quarter \((=\frac{1}{2\cdot 2})\) of the _second_ elementary symmetric function \(\sigma_{2}\) of the eigenvalues of the pullback metric tensor \(u^{*}h\) relative to the metric \(g\). Liouville type results for \(p\)-harmonic maps, \(F\)-harmonic maps, \(F\)-stationary maps, CC-harmonic maps, \(\Phi\)-harmonic maps, \(\Phi_{S}\)-harmonic maps and \(\Phi_{S,p}\)-harmonic maps have been proved by several authors (see [9, 18, 19, 22, 24, 35, 46] for more details). In contrast to the usual method of proving Liouville type results by assuming the finiteness of the energy of the map or the smallness of the whole image of the domain manifold under the map, Jin [31] and Feng-Han [17] obtain Liouville type results under natural conditions on the asymptotic behavior of maps at infinity.
We propose an extrinsic, average variational method as an approach to confront and resolve problems in global, nonlinear analysis and geometry (cf. [51, 49]). In contrast to an average method in PDE that we applied in [7] to obtain sharp growth estimates for warping functions in multiply warped product manifolds, we employ _an extrinsic average variational method_ in the calculus of variations ([51]), find a large class of manifolds of positive Ricci curvature that enjoy rich properties, and introduce the notions of _superstrongly unstable_ (SSU) _manifolds_ ([52]) and _\(p\)-superstrongly unstable_ (\(p\)-SSU) _manifolds_ ([55]). Wei-Yau ([55]) discuss Liouville type theorems and the regularity of \(p\)-minimizers into \(p\)-SSU manifolds, extending ground-breaking work of Hardt-Lin ([25]) and Luckhaus ([36]). The method of [51] can be carried over to more general settings: Han and Wei find \(\Phi\)-SSU manifolds (or \(\Phi_{(2)}\)-SSU manifolds) [24] and prove that every compact \(\Phi\)-SSU manifold must be \(\Phi\)-strongly unstable (\(\Phi\)-SU), or \(\Phi_{(2)}\)-strongly unstable (\(\Phi_{(2)}\)-SU); that is, every compact \(\Phi\)-SSU manifold can be neither the domain nor the target of any nonconstant smooth stable \(\Phi\)-harmonic map between two compact Riemannian manifolds, and the homotopy class of any map (between two compact Riemannian manifolds) from or into a \(\Phi\)-SSU manifold contains a map of arbitrarily small \(\Phi\)-energy. These results generalize the cases of \(S^{m}\), for \(m>5\), and of compact minimal submanifolds of \(S^{m+p}\) with \(Ric_{g}>\frac{3}{4}mg\), due to Kawai and Nakauchi [32, 33], whose nonexistence results for nonconstant stable \(\Phi\)-harmonic maps are extended to the \(\Phi\)-SU (or \(\Phi_{(2)}\)-SU) setting in [24]. Examples of SSU manifolds (or \(\Phi_{(1)}\)-SSU manifolds), such as spheres and products of spheres of appropriate dimensions, can be found in [49]. For some results on the \(\Phi\)-energy functional, we refer to [37, 39]. In [18], Feng-Han-Li-Wei prove that every compact \(\Phi_{S}\)-SSU manifold is \(\Phi_{S}\)-SU. In particular, any stable \(\Phi_{S}\)-harmonic map between compact Riemannian manifolds is a constant map if the domain manifold or the target manifold is a \(\Phi_{S}\)-SSU manifold. In addition, they obtain properties of weakly conformal \(\Phi_{S}\)-harmonic maps and horizontally weakly conformal \(\Phi_{S}\)-harmonic maps. Feng-Han-Wei [19] introduce the notions of \(\Phi_{S,p}\)-harmonic maps and stable \(\Phi_{S,p}\)-harmonic maps, and prove that every compact \(\Phi_{S,p}\)-SSU manifold must be \(\Phi_{S,p}\)-SU. They also obtain several Liouville type theorems by extending the method of Jin [31].
For a symmetric 2-covariant tensor field \(\alpha\) on a hypersurface \(M\) in \(\mathbb{R}^{m+1}\), at any fixed point \(x_{0}\in M\), \(\alpha\) has eigenvalues \(\lambda\) relative to the metric \(g\) of \(M\); i.e., the \(m\) real roots of the equation \(\det(\delta_{ij}\lambda-\alpha_{ij})=0\), where \(\alpha_{ij}=\alpha(e_{i},e_{j})\) and \(\{e_{1},\cdots,e_{m}\}\) is a basis for \(T_{x_{0}}(M)\). The _algebraic_ invariants \(\sigma_{k}(\alpha_{x_{0}})\), \(1\leq k\leq m\) (the \(k\)-th elementary symmetric functions of the eigenvalues of \(\alpha\) at \(x_{0}\)) frequently carry _geometric_ meaning for the manifold \(M\), with analytic, topological and physical impacts. For example, if we take \(\alpha\) to be the second fundamental form of \(M\) in \(\mathbb{R}^{m+1}\), then \(\frac{1}{m}\sigma_{1}(\alpha),\frac{2}{m(m-1)}\sigma_{2}(\alpha),\) and \(\sigma_{m}(\alpha)\) are the mean curvature, scalar curvature, and the Gauss-Kronecker curvature of \(M\), respectively, and are central themes of the Yamabe problem ([1, 30, 40, 47]), special Lagrangian graphs ([26]), geometric aspects of the
theory of fully nonlinear elliptic equations (e.g., [43]), and conformal geometry (e.g., [6], [12]), etc. If we take \(\alpha\) to be the Schouten tensor, then the study of \(\sigma_{2}(\alpha)\) leads to a generalized Yamabe problem ([5]). In the study of prescribed curvature problems in PDE, the existence of closed starshaped hypersurfaces of prescribed mean curvature in Euclidean space was proved by A.E. Treibergs and S.W. Wei [48], solving a problem of F. Almgren and S.T. Yau [58]. While the case of prescribed Gauss-Kronecker curvature was studied by V.I. Oliker [38] and P. Delanoë [10], the case of prescribed \(k\)-th mean curvature, in particular the intermediate cases \(2\leq k\leq m-1\), was treated by L. Caffarelli, L. Nirenberg and J. Spruck [4].
These results motivate us, from the viewpoint of geometric mapping theory \(u:(M^{m},g)\rightarrow(N^{n},h)\) with \(\alpha=u^{*}h\), to extend in this paper the study of harmonic maps or \(\Phi_{(1)}\)-harmonic maps (cf. [15]) and \(\Phi\)-harmonic maps or \(\Phi_{(2)}\)-harmonic maps (cf. [24]), and to explore geometric properties of \(\Phi_{(3)}\)-harmonic maps by unified geometric analytic methods. We define the notion of \(\Phi_{(3)}\)-harmonic maps and obtain the first variation formula and the second variation formula of the \(\Phi_{(3)}\)-energy functional \(E_{\Phi_{(3)}}\). _In fact, a \(\Phi_{(3)}\)-harmonic map (cf. [24]) is a critical point of the \(\Phi_{(3)}\)-energy functional, given by the integral of one sixth \((=\frac{1}{3\cdot 2})\) of the third elementary symmetric function \(\sigma_{3}\) of the eigenvalues of the pullback metric tensor \(u^{*}h\) relative to the metric \(g\)._ We introduce the notion of \(\Phi_{(3)}\)-SSU manifold and provide many interesting examples. By using an extrinsic average variational method in the calculus of variations (cf. [51, 52]), we find \(\Phi_{(3)}\)-SSU manifolds, and prove that every compact \(\Phi_{(3)}\)-SSU manifold is \(\Phi_{(3)}\)-SU. As consequences, we prove topological vanishing theorems and sphere theorems by employing a \(\Phi_{(3)}\)-harmonic map as a catalyst. This is in contrast to the approaches of utilizing a geodesic ([45]), minimal surface, stable rectifiable current ([34, 29, 50]), \(p\)-harmonic map (cf. [53]), etc., as catalysts. These mysterious phenomena are analogs of those for harmonic maps or \(\Phi_{(1)}\)-harmonic maps, \(p\)-harmonic maps, \(\Phi_{S}\)-harmonic maps, \(\Phi_{S,p}\)-harmonic maps, \(\Phi_{(2)}\)-harmonic maps, etc. (cf. [28, 52, 55, 53, 18, 19]).
While the differential of \(u\), denoted by \(du\), can be viewed as a \(1\)-form \(d_{(1)}u\) with values in the pullback bundle \(u^{-1}TN\), in this paper we introduce the following unified notions.
**Definition 1.1**.: _Let \(d_{(1)}u,d_{(2)}u\) and \(d_{(3)}u\) be \(1\)-forms with values in the pullback bundle \(u^{-1}TN\) given by_
\[\begin{split} d_{(1)}u(X)&=du(X)\,,\\ d_{(2)}u(X)&=\sum_{j=1}^{m}h\big{(}du(X),du(e_{j}) \big{)}du(e_{j})\,,\quad\text{and}\\ d_{(3)}u(X)&=\sum_{j,k=1}^{m}h\big{(}du(X),du(e_{j}) \big{)}h\big{(}du(e_{j}),du(e_{k})\big{)}du(e_{k})\,,\end{split} \tag{1}\]
_respectively, for any smooth vector field \(X\) on \((M,g)\), where \(\{e_{i}\}\) is a local orthonormal frame field on \((M,g)\), with the following corresponding norms_
\[\begin{split}||d_{(1)}u||^{2}&=\sum_{i=1}^{m}h \big{(}d_{(1)}u(e_{i}),du(e_{i})\big{)}=\sum_{i=1}^{m}h\big{(}du(e_{i}),du(e_{ i})\big{)}\,,\\ ||d_{(2)}u||^{2}&=\sum_{i=1}^{m}h\big{(}d_{(2)}u(e_{ i}),du(e_{i})\big{)}=\sum_{i,j=1}^{m}h\big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}du(e_{ j}),du(e_{i})\big{)}\,,\quad\text{and}\\ ||d_{(3)}u||^{2}&=\sum_{i=1}^{m}h\big{(}d_{(3)}u(e _{i}),du(e_{i})\big{)}=\sum_{i,j,k=1}^{m}h\big{(}du(e_{i}),du(e_{j})\big{)}h \big{(}du(e_{j}),du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{i})\big{)}.\end{split}\]
_The \(\Phi_{(1)}\)-energy density \(e_{\Phi_{(1)}}(u)\), \(\Phi_{(2)}\)-energy density \(e_{\Phi_{(2)}}(u)\), and \(\Phi_{(3)}\)-energy density \(e_{\Phi_{(3)}}(u)\) of \(u\) are given by_
\[e_{\Phi_{(1)}}(u) =\frac{||d_{(1)}u||^{2}}{2}, \tag{2}\] \[e_{\Phi_{(2)}}(u) =\frac{||d_{(2)}u||^{2}}{4},\quad\text{and}\] \[e_{\Phi_{(3)}}(u) =\frac{||d_{(3)}u||^{2}}{6},\quad\text{respectively}\,.\]
_The \(\Phi_{(1)}\)-energy \(E_{\Phi_{(1)}}(u)\), \(\Phi_{(2)}\)-energy \(E_{\Phi_{(2)}}(u)\), and \(\Phi_{(3)}\)-energy \(E_{\Phi_{(3)}}(u)\) of \(u\) are given by_
\[E_{\Phi_{(1)}}(u) =\int_{M}e_{\Phi_{(1)}}(u)dv_{g}, \tag{3}\] \[E_{\Phi_{(2)}}(u) =\int_{M}e_{\Phi_{(2)}}(u)dv_{g},\quad\text{and}\] \[E_{\Phi_{(3)}}(u) =\int_{M}e_{\Phi_{(3)}}(u)dv_{g},\quad\text{respectively}\,.\]
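To make Definition 1.1 concrete, the following minimal numerical sketch (our illustration, not part of the paper) computes the three energy densities at a point from the Jacobian of \(u\) expressed in local orthonormal frames: there the Gram matrix \(A_{ij}=h\big{(}du(e_{i}),du(e_{j})\big{)}\) of \(u^{*}h\) satisfies \(\|d_{(k)}u\|^{2}=\operatorname{tr}(A^{k})\), so \(e_{\Phi_{(k)}}(u)=\operatorname{tr}(A^{k})/(2k)\).

```python
# Hypothetical sketch: Phi_(k)-energy densities from a pointwise Jacobian.
import numpy as np

def phi_energy_densities(J: np.ndarray):
    """J is the n x m Jacobian of u at a point, in orthonormal frames on
    (M, g) and (N, h); returns (e_{Phi_(1)}, e_{Phi_(2)}, e_{Phi_(3)})."""
    A = J.T @ J  # Gram matrix A_{ij} = h(du(e_i), du(e_j)) of u^*h
    return tuple(np.trace(np.linalg.matrix_power(A, k)) / (2.0 * k)
                 for k in (1, 2, 3))

# The identity map of R^m has A = I, so e_{Phi_(k)} = m/(2k):
print(phi_energy_densities(np.eye(4)))  # (2.0, 1.0, 0.666...)
```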
**Definition 1.2**.: _For \(i=1,2,3\), a smooth map \(u\) is said to be a \(\Phi_{(i)}\)-harmonic map if it is a critical point of the \(\Phi_{(i)}\)-energy functional \(E_{\Phi_{(i)}}\) with respect to any smooth compactly supported variation of \(u\); stable \(\Phi_{(i)}\)-harmonic, or simply \(\Phi_{(i)}\)-stable, if \(u\) is a local minimum of \(E_{\Phi_{(i)}}\); and \(\Phi_{(i)}\)-unstable if \(u\) is not \(\Phi_{(i)}\)-stable._
**Remark 1.3**.: (i) _The norm \(||d_{(1)}u||\) is the Hilbert-Schmidt norm of the differential \(du\), i.e., \(||d_{(1)}u||=|du|\,.\)_ (ii) _The \(\Phi_{(1)}\)-energy density \(e_{\Phi_{(1)}}(u)=e(u)\) is the energy density of \(u\)._ (iii) _A \(\Phi_{(1)}\)-harmonic map is an ordinary harmonic map (cf. [15])._ (iv) _The \(\Phi_{(2)}\)-energy density \(e_{\Phi_{(2)}}(u)=e_{\Phi}(u)\) is the \(\Phi\)-energy density of \(u\)._ (v) _A \(\Phi_{(2)}\)-harmonic map is a \(\Phi\)-harmonic map (cf. [24])._ (vi) _Definition 1.2 extends to \(4\leq i\leq m=\dim M\). Hence, for any integer \(1\leq i\leq m\), a smooth map \(u\) is said to be a \(\Phi_{(i)}\)-harmonic map if it is a critical point of the \(\Phi_{(i)}\)-energy functional \(E_{\Phi_{(i)}}\) with respect to any smooth compactly supported variation of \(u\); stable \(\Phi_{(i)}\)-harmonic, or simply \(\Phi_{(i)}\)-stable, if \(u\) is a local minimum of \(E_{\Phi_{(i)}}\); and \(\Phi_{(i)}\)-unstable if \(u\) is not \(\Phi_{(i)}\)-stable._
We recall
**Definition 1.4** ([53]).: _A Riemannian manifold \(M^{m}\) is said to be superstrongly unstable_ (SSU) _if there exists an isometric immersion of \(M^{m}\) in \(\mathbb{R}^{q}\) with its second fundamental form \(B\) such that for all unit tangent vectors \(v\) to \(M^{m}\) at every point \(x\in M^{m},\) the following functional is negative valued._
\[\langle Q_{x}^{M}(v),v\rangle_{M}=\sum_{i=1}^{m}\big{(}2\langle B(v,e_{i}),B( v,e_{i})\rangle-\langle B(v,v),B(e_{i},e_{i})\rangle\big{)},\]
_where \(\{e_{1},\cdots,e_{m}\}\) is a local orthonormal frame field on \(M^{m}\) near \(x\). A Riemannian manifold \(M\) is said to be \(p\)-superstrongly unstable_ (\(p\)-SSU) _for \(p\geq 2\) if the following functional is negative valued._
\[F_{p,x}(v)=(p-2)\langle B(v,v),B(v,v)\rangle+\langle Q_{x}^{M}(v),v\rangle_{M}.\]
The notion of \(\Phi_{(3)}\)-SSU is defined as follows.
**Definition 1.5**.: _A Riemannian manifold \(M^{m}\) is said to be \(\Phi_{(3)}\)-superstrongly unstable \((\Phi_{(3)}\)-_SSU_) if there exists an isometric immersion of \(M^{m}\) in \(\mathbb{R}^{q}\) with its second fundamental form \(B\) such that for all unit tangent vectors \(v\) to \(M^{m}\) at every point \(x\in M^{m},\) the following functional is negative valued._
\[F_{\Phi_{(3)},x}(v)=\sum_{i=1}^{m}\big{(}6\langle B(v,e_{i}),B(v,e_{i})\rangle_{ \mathbb{R}^{q}}-\langle B(v,v),B(e_{i},e_{i})\rangle_{\mathbb{R}^{q}}\big{)}, \tag{4}\]
_where \(\{e_{1},\cdots,e_{m}\}\) is a local orthonormal frame field on \(M^{m}\) near \(x\)._
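As a first example (our computation, offered as an illustration of Definition 1.5): for the unit sphere \(S^{m}\subset\mathbb{R}^{m+1}\), the second fundamental form is \(B(v,w)=-\langle v,w\rangle\,\nu\) with \(\nu\) the outward unit normal, so \(F_{\Phi_{(3)},x}(v)=6-m\) for every unit tangent vector \(v\); this is negative precisely when \(m\geq 7\), suggesting \(S^{m}\) is \(\Phi_{(3)}\)-SSU in that range. The sketch below checks (4) numerically.

```python
# Hypothetical numerical check of (4) on the unit sphere S^m at the north pole.
import numpy as np

def F_phi3_sphere(m: int, trials: int = 50) -> float:
    x = np.zeros(m + 1); x[m] = 1.0      # north pole; T_x S^m = span(e_1..e_m)
    frame = np.eye(m + 1)[:m]            # orthonormal tangent frame at x
    B = lambda v, w: -np.dot(v, w) * x   # second fundamental form of S^m
    worst = -np.inf
    for _ in range(trials):
        v = np.random.randn(m + 1); v[m] = 0.0
        v /= np.linalg.norm(v)           # random unit tangent vector
        F = sum(6 * B(v, e) @ B(v, e) - B(v, v) @ B(e, e) for e in frame)
        worst = max(worst, F)
    return worst                         # equals 6 - m for every v

for m in (6, 7, 10):
    print(m, round(F_phi3_sphere(m), 6))  # 0.0, -1.0, -4.0
```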
**Definition 1.6**.: _A compact Riemannian manifold \(M^{m}\) is \(\Phi_{(3)}\)-strongly unstable \((\Phi_{(3)}\)-_SU_) if it is neither the domain nor the target of any nonconstant smooth stable \(\Phi_{(3)}\)-harmonic map \((\)into or from any compact Riemannian manifold\()\), and the homotopy class of any map from or into \(M\)\((\)into or from any compact Riemannian manifold\()\) contains a map of arbitrarily small \(\Phi_{(3)}\)-energy \(E_{\Phi_{(3)}}\)._
This leads to the study of the identity map on a Riemannian manifold. In particular, if \(M\) is \(\Phi_{(3)}\)-SU, then the identity map of \(M\) is \(\Phi_{(3)}\)-unstable. For convenience, we make the following
**Definition 1.7**.: _A Riemannian manifold \(M^{m}\) is \(\Phi_{(3)}\)-unstable \((\Phi_{(3)}\)-_U_) if the identity map \(\text{Id}_{M}\) on \(M^{m}\) is \(\Phi_{(3)}\)-unstable._
This has natural analogs in the settings of \(\Phi_{(1)}\)-harmonic maps and \(\Phi_{(2)}\)-harmonic maps; for instance:
**Proposition 1.8**.: _Let \(M\) be a compact manifold. Then_
\[M\;\text{is}\;\Phi_{(2)}\text{-SSU}\quad\Rightarrow\quad M\;\text{is}\;\Phi_{(2)}\text{-SU}\quad\Rightarrow\quad M\;\text{is}\;\Phi_{(2)}\text{-U}\,.\]
We organize this paper in the following way. In Section 2, we obtain some fundamental results which will be used in the subsequent sections, such as the first variation formulas (I) and (II) in two different settings and the stress-energy tensor \(S_{\Phi_{(3)}}\) with respect to the functional \(E_{\Phi_{(3)}}\). In Section 3, we extend Jin's method to solve a uniqueness problem for \(\Phi_{(3)}\)-harmonic maps. To begin with, we obtain the lower energy growth rates of \(\Phi_{(3)}\)-harmonic maps by using the monotonicity formulas given in [21] (cf. Proposition 3.2). Furthermore, we obtain the upper energy growth rates of these maps by using the asymptotic assumption on the maps at infinity (cf. Proposition 3.3). The two bounds are contradictory if the \(\Phi_{(3)}\)-harmonic map is not a constant map (cf. Theorem 3.4). In Section 4, we calculate the second variation formula of the functional \(E_{\Phi_{(3)}}\) (cf. Theorem 4.1) and give the concept of stable \(\Phi_{(3)}\)-harmonic maps with respect to the \(\Phi_{(3)}\)-energy \(E_{\Phi_{(3)}}(u)\). In Section 5, we describe the motivating examples of \(\Phi_{(3)}\)-SSU manifolds and obtain the relation between \(\Phi_{(3)}\)-SSU manifolds and \(p\)-SSU manifolds. Furthermore, we prove topological vanishing theorems and sphere theorems by employing a \(\Phi_{(3)}\)-harmonic map as a tool. In Section 6, by using an extrinsic average variational method in the calculus of variations [51], we prove that every stable \(\Phi_{(3)}\)-harmonic map from a compact \(\Phi_{(3)}\)-SSU manifold into any compact Riemannian manifold is constant (cf. Theorem 6.1). In Section 7, by a method similar to that of Theorem 6.1, we prove that every stable \(\Phi_{(3)}\)-harmonic map from any compact Riemannian manifold into a compact \(\Phi_{(3)}\)-SSU manifold is constant (cf. Theorem 7.1). In Section 8, we prove that the homotopy class of any map from any compact Riemannian manifold into a compact \(\Phi_{(3)}\)-SSU manifold contains elements of arbitrarily small \(\Phi_{(3)}\)-energy (cf. Theorem 8.2). Finally, in Section 9, we prove that the homotopy class of any map from a compact \(\Phi_{(3)}\)-SSU manifold into any compact Riemannian manifold contains elements of arbitrarily small \(\Phi_{(3)}\)-energy (cf. Theorem 9.2). For \(i=1,2,3\), we prove that every compact \(\Phi_{(i)}\)-SSU manifold is \(\Phi_{(i)}\)-SU, and hence is \(\Phi_{(i)}\)-U (cf. Theorem 9.3). This generalizes Proposition 1.8, in which \(i=2\).
## 2 Preliminaries
In this section, we give some results which will be used in this paper, and extend the following concept from harmonic maps and \(\Phi\)-harmonic maps.
**Definition 2.1**.: _The divergence of the \(1\)-form \(d_{(3)}u\)\((\)resp. \(d_{(2)}u\), \(d_{(1)}u)\) with values in the pull-back bundle \(u^{-1}(TN)\), denoted by \(\tau_{\Phi_{(3)}}(u)\)\((\)resp. \(\tau_{\Phi_{(2)}}(u)\), \(\tau_{\Phi_{(1)}}(u))\), is said to be the \(\Phi_{(3)}\)-tension field \((\)resp. \(\Phi_{(2)}\)-tension field\(,\)\(\Phi_{(1)}\)-tension field\()\) of \(u\); that is,_
\[\begin{split}\tau_{\Phi_{(3)}}(u)&=\operatorname{ div}\big{(}d_{(3)}u\big{)}\\ &=\sum_{i,j,k=1}^{m}\bigg{(}\widetilde{\nabla}_{e_{i}}\big{(}h \big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}du(e_{j}),du(e_{k})\big{)}du(e_{k}) \big{)}\bigg{)}\\ \bigg{(}resp.&\tau_{\Phi_{(2)}}(u)&= \operatorname{div}\big{(}d_{(2)}u\big{)}=\sum_{i,j=1}^{m}\bigg{(}\widetilde{ \nabla}_{e_{i}}\big{(}h\big{(}du(e_{i}),du(e_{j})\big{)}du(e_{j})\big{)}\bigg{)} \bigg{)},\\ \tau_{\Phi_{(1)}}(u)&=\operatorname{div}\big{(}d_{( 1)}u\big{)}=\sum_{i=1}^{m}\big{(}\widetilde{\nabla}_{e_{i}}du(e_{i})\big{)} \qquad\bigg{)}\,.\end{split} \tag{5}\]
We note that \(\tau_{\Phi_{(1)}}(u)\) is the same as the tension field \(\tau(u)\) of \(u\) (cf. [15]) and \(\tau_{\Phi_{(2)}}(u)\) is the same as the \(\Phi\)-tension field of \(u\), denoted by \(\tau_{\Phi}(u)\) (cf. [24]).
**Theorem 2.2** (The first variation formula \((I)\)).: _Let \(u:(M^{m},g)\to(N,h)\) be a smooth map and let \(u_{t}:(M^{m},g)\to(N,h)\), \((-\delta<t<\delta)\), be a family of compactly supported variations such that \(u_{0}=u\) and \(v=\frac{\partial u_{t}}{\partial t}\big{|}_{{}_{t=0}}\). Then we have_
\[\frac{dE_{\Phi_{(3)}}(u_{t})}{dt}\big{|}_{{}_{t=0}}=-\int_{M}h\big{(}v,\tau_{ \Phi_{(3)}}(u)\big{)}\,dv_{g}. \tag{6}\]
Proof.: We extend the vector fields \(\frac{\partial}{\partial t}\) on \((-\delta,\delta)\) and \(X\) on \(M\) naturally to \((-\delta,\delta)\times M\), and denote them also by \(\frac{\partial}{\partial t}\) and \(X\). Let \(\nabla\) and \(\widetilde{\nabla}\) be the Levi-Civita connection on \((-\delta,\delta)\times M\) and the induced connection on \(u_{t}^{-1}TN\), respectively.
Note that
\[\frac{\partial}{\partial t}\frac{||d_{(3)}u_{t}||^{2}}{6}\] \[=\frac{1}{6}\frac{\partial}{\partial t}\sum_{i=1}^{m}h\big{(}d_{(3 )}u_{t}(e_{i}),du_{t}(e_{i})\big{)}\] \[=\sum_{i,j,k=1}^{m}h\bigg{(}\widetilde{\nabla}_{\frac{\partial}{ \partial t}}(du_{t}(e_{i})),du_{t}(e_{k})\bigg{)}h\big{(}du_{t}(e_{i}),du_{t}(e _{j})\big{)}h\big{(}du_{t}(e_{j}),du_{t}(e_{k})\big{)}\] \[=\sum_{i,j,k=1}^{m}h\bigg{(}\widetilde{\nabla}_{e_{i}}(du_{t}( \frac{\partial}{\partial t})),du_{t}(e_{k})\bigg{)}h\big{(}du_{t}(e_{i}),du_{t }(e_{j})\big{)}h\big{(}du_{t}(e_{j}),du_{t}(e_{k})\big{)}\] \[=\sum_{i,j,k=1}^{m}\bigg{(}e_{i}\,h\big{(}du_{t}(\frac{\partial}{ \partial t}),du_{t}(e_{k})\big{)}-h\big{(}du_{t}(\frac{\partial}{\partial t}), \nabla_{e_{i}}du_{t}(e_{k})\big{)}\bigg{)}h\big{(}du_{t}(e_{i}),du_{t}(e_{j}) \big{)}h\big{(}du_{t}(e_{j}),du_{t}(e_{k})\big{)}\] \[=\sum_{i,j,k=1}^{m}e_{i}\,h\big{(}du_{t}(\frac{\partial}{\partial t }),du_{t}(e_{k})\big{)}h\big{(}du_{t}(e_{i}),du_{t}(e_{j})\big{)}h\big{(}du_{t }(e_{j}),du_{t}(e_{k})\big{)},\] \[\qquad-h\bigg{(}du_{t}(\frac{\partial}{\partial t}),h\big{(}du_{ t}(e_{i}),du_{t}(e_{j})\big{)}h\big{(}du_{t}(e_{j}),du_{t}(e_{k})\big{)}\, \widetilde{\nabla}_{e_{i}}du_{t}(e_{k})\bigg{)}\] \[=\sum_{i,j,k=1}^{m}e_{i}\,h\big{(}du_{t}(\frac{\partial}{\partial t }),du_{t}(e_{k})\big{)}h\big{(}du_{t}(e_{i}),du_{t}(e_{j})\big{)}h\big{(}du_{t }(e_{j}),du_{t}(e_{k})\big{)}\] \[\qquad-h\bigg{(}du_{t}(\frac{\partial}{\partial t}),\widetilde{ \nabla}_{e_{i}}\big{(}h\big{(}du_{t}(e_{i}),du_{t}(e_{j})\big{)}h\big{(}du_{t }(e_{j}),du_{t}(e_{k})\big{)}\,du_{t}(e_{k})\big{)}\bigg{)}\] \[\qquad+h\bigg{(}du_{t}(\frac{\partial}{\partial t}),e_{i}\big{(} h\big{(}du_{t}(e_{i}),du_{t}(e_{j})\big{)}\,h\big{(}du_{t}(e_{j}),du_{t}(e_{k}) \big{)}\,du_{t}(e_{k})\big{)}\bigg{)}\] \[\qquad+h\bigg{(}du_{t}(\frac{\partial}{\partial t}),h\big{(}du_{ t}(e_{i}),du_{t}(e_{j})\big{)}\,e_{i}\big{(}h\big{(}du_{t}(e_{j}),du_{t}(e_{k}) \big{)}\,du_{t}(e_{k})\bigg{)}\] \[=\sum_{i,j,k=1}^{m}e_{i}\bigg{(}h\big{(}du_{t}(\frac{\partial}{ \partial t}),du_{t}(e_{k})\big{)}h\big{(}du_{t}(e_{i}),du_{t}(e_{j})\big{)}h \big{(}du_{t}(e_{j}),du_{t}(e_{k})\big{)}\bigg{)}\] \[\qquad-h\bigg{(}du_{t}(\frac{\partial}{\partial t}),\widetilde{ \nabla}_{e_{i}}\big{(}d_{(3)}u_{t}\,(e_{i})\big{)}\bigg{)},\]
where we use
\[\widetilde{\nabla}_{\frac{\partial}{\partial t}}\big{(}du_{t}(e_{i})\big{)}- \widetilde{\nabla}_{e_{i}}\bigg{(}du_{t}(\frac{\partial}{\partial t})\bigg{)} =du_{t}[\frac{\partial}{\partial t},e_{i}]=0\]
for the third equality, and (1) for the last step. Let \(X_{t}\) be a compactly supported vector field on \(M\) such that \(g(X_{t},Y)=\sum_{j,k=1}^{m}h\big{(}du_{t}(\frac{\partial}{\partial t}),du_{t}(e_ {k})\big{)}h\big{(}du_{t}(Y),du_{t}(e_{j})\big{)}h\big{(}du_{t}(e_{j}),du_{t}(e_ {k})\big{)}\) for
any vector field \(Y\) on \(M.\) Then
\[\frac{\partial}{\partial t}\frac{||d_{(3)}u_{t}||^{2}}{6}= \sum_{i=1}^{m}\big{(}e_{i}g(X_{t},e_{i})\big{)}-h\bigg{(}du_{t}( \frac{\partial}{\partial t}),\widetilde{\nabla}_{e_{i}}\big{(}d_{(3)}u_{t} \left(e_{i}\right)\big{)}\bigg{)}\] \[= \sum_{i=1}^{m}\big{(}g(\nabla_{e_{i}}X_{t},e_{i})+g(X_{t},\nabla_ {e_{i}}e_{i})\big{)}-h\bigg{(}du_{t}(\frac{\partial}{\partial t}),\widetilde{ \nabla}_{e_{i}}\big{(}d_{(3)}u_{t}\left(e_{i}\right)\big{)}\bigg{)}\] \[= \operatorname{div}_{g}(X_{t})-h\bigg{(}du_{t}(\frac{\partial}{ \partial t}),\tau_{\Phi_{(3)}}(u_{t})\bigg{)}.\]
By Stokes' theorem, we have
\[\frac{dE_{\Phi_{(3)}}(u_{t})}{dt}\big{|}_{t=0} =\int_{M}\frac{\partial}{\partial t}\frac{||d_{(3)}u_{t}||^{2}}{ 6}\big{|}_{t=0}dv_{g}\] \[=-\int_{M}h\big{(}v,\tau_{\Phi_{(3)}}(u)\big{)}dv_{g}.\]
**Proposition 2.3**.: _A smooth map \(u:(M^{m},g)\to(N,h)\) is \(\Phi_{(3)}\)-harmonic if and only if \(u\) is a solution of the Euler-Lagrange equation for the \(\Phi_{(3)}\)-energy functional \(E_{\Phi_{(3)}}\)_
\[\tau_{\Phi_{(3)}}(u)=0. \tag{7}\]
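For orientation (our special case, obtained directly from (5) with flat metrics and the ordinary derivative in place of \(\widetilde{\nabla}\)): for a map \(u:\mathbb{R}^{m}\to\mathbb{R}^{n}\), the Euler-Lagrange equation (7) reads

\[\tau_{\Phi_{(3)}}(u)=\sum_{i,j,k=1}^{m}\frac{\partial}{\partial x_{i}}\Big{(}\langle\partial_{i}u,\partial_{j}u\rangle\,\langle\partial_{j}u,\partial_{k}u\rangle\,\partial_{k}u\Big{)}=0\,,\]

so, in particular, every affine map is \(\Phi_{(3)}\)-harmonic, each factor in the divergence being constant.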
We introduce
**Definition 2.4**.: _The stress-energy tensor \(S_{\Phi_{(3)}}\) of \(u\) with respect to the functional \(E_{\Phi_{(3)}}(u)\) is the symmetric \(2\)-tensor on \(M^{m}\) given by_
\[S_{\Phi_{(3)}}=e_{\Phi_{(3)}}g-(d_{(3)}u)^{-1}h. \tag{8}\]
_That is,_
\[S_{\Phi_{(3)}}(X,Y) = \frac{||d_{(3)}u||^{2}}{6}g(X,Y)-h\big{(}d_{(3)}u(X),du(Y)\big{)} \tag{9}\] \[= \frac{1}{6}\sum_{i,j,k=1}^{m}h\big{(}du(e_{i}),du(e_{j})\big{)}h \big{(}du(e_{j}),du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{i})\big{)}g(X,Y)\] \[-\sum_{j,k=1}^{m}h\big{(}du(X),du(e_{j})\big{)}h\big{(}du(e_{j}), du(e_{k})\big{)}h\big{(}du(e_{k}),du(Y)\big{)}\]
_for all smooth vector fields \(X,Y\) on \(M\)._
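In an orthonormal frame, with the Gram matrix \(A\) as above, (9) takes the compact form \(S_{\Phi_{(3)}}=\frac{\operatorname{tr}(A^{3})}{6}I-A^{3}\), since \(h\big{(}d_{(3)}u(e_{i}),du(e_{j})\big{)}=(A^{3})_{ij}\). A minimal sketch (our illustration, not from the paper):

```python
# Hypothetical sketch: the stress-energy tensor S_{Phi_(3)} of (9) at a point.
import numpy as np

def stress_energy_phi3(J: np.ndarray) -> np.ndarray:
    """J is the n x m Jacobian of u in orthonormal frames; returns the
    m x m matrix of S_{Phi_(3)} in that frame."""
    A = J.T @ J
    A3 = np.linalg.matrix_power(A, 3)
    return (np.trace(A3) / 6.0) * np.eye(A.shape[0]) - A3

J = np.random.randn(5, 4)   # a map from a 4-manifold into a 5-manifold, at a point
S = stress_energy_phi3(J)
print(np.allclose(S, S.T))  # True: S_{Phi_(3)} is symmetric, as in Definition 2.4
```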
**Definition 2.5**.: _A map \(u:(M^{m},g)\to(N,h)\) is said to satisfy the \(\Phi_{(3)}\)-conservation law if \(S_{\Phi_{(3)}}\) is divergence free, i.e._
\[\operatorname{div}\,S_{\Phi_{(3)}}=0. \tag{10}\]
**Proposition 2.6**.: _For any smooth vector field \(X\) on \(M\), we have_
\[(\operatorname{div}\,S_{\Phi_{(3)}})(X)=-h\big{(}\tau_{\Phi_{(3)}}(u),du(X) \big{)}. \tag{11}\]
Proof.: We choose a local orthonormal frame field \(\{e_{i}\}_{i=1}^{m}\) around a point \(x_{0}\) in \(M\) with \(\nabla_{e_{i}}e_{j}\big{|}_{x_{0}}=0.\) Let \(X\) be a vector field on \(M.\) At \(x_{0},\) we have
\[\big{(}\operatorname{div}\,S_{\Phi_{(3)}}\big{)}(X)\] \[=\sum_{i=1}^{m}(\nabla_{e_{i}}S_{\Phi_{(3)}})(e_{i},X)\] \[=\sum_{i=1}^{m}\bigg{(}e_{i}S_{\Phi_{(3)}}(e_{i},X)-S_{\Phi_{(3)} }\big{(}e_{i},\nabla_{e_{i}}X\big{)}\bigg{)}\] \[=\sum_{i=1}^{m}\bigg{(}e_{i}\big{(}\frac{\|d_{(3)}u\|^{2}}{6}g(e_ {i},X)\big{)}-e_{i}\big{(}h\big{(}d_{(3)}u(e_{i}),du(X)\big{)}\big{)}\] \[\quad-\frac{\|d_{(3)}u\|^{2}}{6}g\big{(}e_{i},\nabla_{e_{i}}X \big{)}+h\big{(}d_{(3)}u(e_{i}),du(\nabla_{e_{i}}X)\big{)}\bigg{)}\] \[=\sum_{i=1}^{m}\bigg{(}e_{i}\big{(}\frac{\|d_{(3)}u\|^{2}}{6} \big{)}g(e_{i},X)-h\big{(}\widetilde{\nabla}_{e_{i}}d_{(3)}u(e_{i}),du(X)\big{)}\] \[\quad-h\big{(}d_{(3)}u(e_{i}),\widetilde{\nabla}_{e_{i}}du(X) \big{)}+h\big{(}d_{(3)}u(e_{i}),du(\nabla_{e_{i}}X)\big{)}\bigg{)}\] \[=\sum_{i,j,k=1}^{m}\bigg{(}\frac{1}{6}X\big{(}h\big{(}du(e_{i}), du(e_{j})\big{)}h\big{(}du(e_{j}),du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{i}) \big{)}\big{)}\] \[\quad-h\big{(}\tau_{\Phi_{(3)}}(u),du(X)\big{)}-h\big{(}d_{(3)}u( e_{i}),(\nabla_{e_{i}}du)(X)\big{)}\bigg{)}\] \[=\sum_{i,j,k=1}^{m}\bigg{(}h\big{(}du(e_{i}),du(e_{j})\big{)}h \big{(}du(e_{j}),du(e_{k})\big{)}h\big{(}du(e_{k}),\nabla_{X}du(e_{i})\big{)}\] \[\quad-h\big{(}\tau_{\Phi_{(3)}}(u),du(X)\big{)}-h\big{(}d_{(3)}u( e_{i}),(\nabla_{e_{i}}du)(X)\big{)}\bigg{)}\] \[=\sum_{i=1}^{m}\bigg{(}h\big{(}d_{(3)}u(e_{i}),(\nabla_{X}du)(e_{ i})\big{)}-h\big{(}d_{(3)}u(e_{i}),(\nabla_{e_{i}}du)(X)\big{)}\bigg{)}-h \big{(}\tau_{\Phi_{(3)}}(u),du(X)\big{)}\] \[=-h\big{(}\tau_{\Phi_{(3)}}(u),du(X)\big{)}.\]
The last equality holds because \(\nabla du\) is symmetric, i.e.,
\[(\nabla_{X}du)(e_{i})=(\nabla_{e_{i}}du)(X).\]
This concludes the proof of Proposition 2.6.
Let \(X,Y\in\Gamma(TM)\). We denote the dual one-form of \(X\) by \(X^{\flat}\), that is, \(X^{\flat}(Y)=g(X,Y).\) The covariant derivative of \(X^{\flat}\) gives a \(2\)-tensor field \(\nabla X^{\flat}\):
\[(\nabla X^{\flat})(Y,Z)=(\nabla_{Y}X^{\flat})(Z)=g(\nabla_{Y}X,Z). \tag{12}\]
If \(X=\nabla f\) is the gradient field of some \(C^{2}\) function \(f\) on \(M,\) then \(X^{\flat}=df\) and \(\nabla X^{\flat}=\operatorname{Hess}\,f.\)
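In the flat case (a sanity check of (12), ours and not from the paper), the covariant derivative is the ordinary one, so for \(X=\nabla f\) on \(\mathbb{R}^{m}\) the tensor \(\nabla X^{\flat}\) is just the Hessian matrix of \(f\); a finite-difference sketch:

```python
# Hypothetical flat-space check that (nabla X^flat) = Hess f for X = grad f.
import numpy as np

def hess_fd(f, x, eps=1e-5):
    """Central finite-difference Hessian of f at x."""
    m = len(x)
    H = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            ei, ej = np.eye(m)[i], np.eye(m)[j]
            H[i, j] = (f(x + eps*(ei + ej)) - f(x + eps*(ei - ej))
                       - f(x - eps*(ei - ej)) + f(x - eps*(ei + ej))) / (4*eps**2)
    return H

f = lambda x: np.sum(x**4)        # f(x) = sum_i x_i^4
x = np.array([0.3, -1.2, 0.7])
print(np.allclose(hess_fd(f, x), np.diag(12*x**2), atol=1e-3))  # True
```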
**Lemma 2.7** ([2, 13]).: _Let \(X\) be a vector field and \(T\) be a symmetric \((0,2)\)-type tensor field. Then_
\[\operatorname{div}(i_{X}T)=(\operatorname{div}T)(X)+\frac{1}{2} \langle T,L_{X}g\rangle, \tag{13}\]
_where \(L_{X}\) is the Lie derivative with respect to the direction \(X.\)_
Let \(D\) be any bounded domain of \(M\) with \(C^{1}\) boundary, and let \(\nu\) be the unit outward normal vector field along \(\partial D.\) By Stokes' theorem, we get
\[\int_{\partial D}T(X,\nu)ds_{g}=\int_{D}\big{(}\langle T,\frac{1}{2}L_{X}g \rangle+(\operatorname{div}T)(X)\big{)}dv_{g}. \tag{14}\]
According to (10) and (14), for a map satisfying the \(\Phi_{(3)}\)-conservation law we have
\[\int_{\partial D}S_{\Phi_{(3)}}(X,\nu)ds_{g}=\int_{D}\langle S_{ \Phi_{(3)}},\frac{1}{2}L_{X}g\rangle dv_{g}. \tag{15}\]
Let \(X\) be a smooth vector field on \(M\), generating a \(1\)-parameter family of diffeomorphisms \(\varphi_{t}\) of \(M\). Then \(u_{t}=u\circ\varphi_{t}\) is a deformation of \(u\) as in Theorem 2.2. Next we derive the first variation formula via such \(1\)-parameter families of diffeomorphisms.
**Theorem 2.8** (The first variation formula \((II)\)).: \[\frac{d}{dt}E_{\Phi_{(3)}}(u_{t})_{\big{|}_{t=0}}=-\int_{M}\langle S_{\Phi_{ (3)}},\frac{1}{2}L_{X}g\rangle dv_{g},\]
_where \(L_{X}\) is the Lie derivative with respect to the direction \(X.\)_
Proof.: By the result of Theorem 2.2 and \(u_{t}=u\circ\varphi_{t}\), we know that \(du(X)\) is the variation vector field for the deformation \(u_{t}.\)
\[\begin{split}\frac{d}{dt}E_{\Phi_{(3)}}(u_{t})_{\big{|}_{t=0}}& =-\int_{M}\langle du(X),\tau_{\Phi_{(3)}}(u)\rangle dv_{g}\\ &=\int_{M}\sum_{i=1}^{m}\langle\widetilde{\nabla}_{e_{i}}du(X),d _{(3)}u(e_{i})\rangle dv_{g}.\end{split} \tag{16}\]
Let \(\{e_{i}\}\) be a local orthonormal frame around a fixed point \(x_{0}\) on \(M\) such that \(\nabla_{e_{i}}e_{j}\big{|}_{x_{0}}=0.\) We have
\[\begin{split}\langle S_{\Phi_{(3)}},\frac{1}{2}L_{X}g\rangle& =\frac{1}{2}\sum_{i,j}S_{\Phi_{(3)}}(e_{i},e_{j})L_{X}g(e_{i},e_{ j})\\ &=\sum_{i,j}S_{\Phi_{(3)}}(e_{i},e_{j})g(\nabla_{e_{i}}X,e_{j}). \end{split}\]
Hence, at \(x_{0}\), we get
\[\frac{d}{dt}E_{\Phi_{(3)}}(u_{t})\big{|}_{t=0} =\int_{M}\sum_{i=1}^{m}\langle\widetilde{\nabla}_{e_{i}}du(X),d_{(3 )}u(e_{i})\rangle dv_{g}\] \[=\int_{M}\sum_{i=1}^{m}\bigg{(}h\big{(}(\nabla_{e_{i}}du)(X),d_{( 3)}u(e_{i})\big{)}+h\big{(}du(\nabla_{e_{i}}X),d_{(3)}u(e_{i})\big{)}\bigg{)}dv_ {g}\] \[=\int_{M}\sum_{i=1}^{m}\bigg{(}h\big{(}(\nabla_{X}du)(e_{i}),d_{( 3)}u(e_{i})\big{)}+h\big{(}du(\nabla_{e_{i}}X),d_{(3)}u(e_{i})\big{)}\bigg{)}dv _{g}\] \[=\int_{M}\sum_{i=1}^{m}\bigg{(}h\big{(}\widetilde{\nabla}_{X}du(e _{i}),d_{(3)}u(e_{i})\big{)}+h\big{(}du(\nabla_{e_{i}}X),d_{(3)}u(e_{i})\big{)} \bigg{)}dv_{g}\] \[=\int_{M}\sum_{i=1}^{m}\bigg{(}\nabla_{X}(\frac{\|d_{(3)}u\|^{2} }{6})+h\big{(}du(\nabla_{e_{i}}X),d_{(3)}u(e_{i})\big{)}\bigg{)}dv_{g}\] \[=\int_{M}\sum_{i=1}^{m}\bigg{(}L_{X}(\frac{\|d_{(3)}u\|^{2}}{6})+ h\big{(}du(\nabla_{e_{i}}X),d_{(3)}u(e_{i})\big{)}\bigg{)}dv_{g}\] \[=-\int_{M}\frac{\|d_{(3)}u\|^{2}}{6}L_{X}(dv_{g})+\int_{M}\sum_{i =1}^{m}h\big{(}du(\nabla_{e_{i}}X),d_{(3)}u(e_{i})\big{)}dv_{g}\] \[=-\int_{M}\frac{\|d_{(3)}u\|^{2}}{6}\operatorname{div}Xdv_{g}+ \int_{M}\sum_{i=1}^{m}h\big{(}du(\nabla_{e_{i}}X),d_{(3)}u(e_{i})\big{)}dv_{g}\] \[=-\int_{M}\frac{\|d_{(3)}u\|^{2}}{6}\sum_{i,j=1}^{m}g(e_{i},e_{j} )g\big{(}\nabla_{e_{i}}X,e_{j}\big{)}dv_{g}\] \[\quad+\int_{M}\sum_{i,j=1}^{m}h\big{(}du(e_{j}),d_{(3)}u(e_{i}) \big{)}g\big{(}\nabla_{e_{i}}X,e_{j}\big{)}dv_{g}\] \[=-\int_{M}\sum_{i,j=1}^{m}S_{\Phi_{(3)}}(e_{i},e_{j})g\big{(} \nabla_{e_{i}}X,e_{j}\big{)}dv_{g}\] \[=-\int_{M}\langle S_{\Phi_{(3)}},\frac{1}{2}L_{X}g\rangle dv_{g}.\]
Therefore, we have the desired result.
**Theorem 2.9**.: \((i)\) _If \(u:(M^{m},g)\to(N,h)\) is a smooth \(\Phi_{(3)}\)-harmonic map, then \(u\) satisfies the \(\Phi_{(3)}\)-conservation law. \((ii)\) Conversely, if \(u:(M^{m},g)\to(N,h)\) is a smooth map satisfying the \(\Phi_{(3)}\)-conservation law and \(u\) is an immersion, then \(u\) is a smooth \(\Phi_{(3)}\)-harmonic map._
**Remark 2.10**.: _This is an analog and extension of the harmonic map case, in which the stress-energy tensor unifies the theory of harmonic maps (cf. [3])._
Proof.: According to Definition 2.5, Proposition 2.3 and Proposition 2.6, we can obtain the desired results.
## 3 Liouville type results
We denote the \(g\)-distance function relative to the pole \(x_{0}\) by \(r(x)\), that is, \(r(x)=dist_{g}(x,x_{0})\). Denote \(B(r)=\{x\in M^{m}:r(x)\leq r\}.\) Obviously, \(\frac{\partial}{\partial r}\) is an eigenvector of \(\mathrm{Hess}_{g}(r^{2})\) with eigenvalue \(2\). Let \(\lambda_{\max}\) (resp. \(\lambda_{\min}\)) be the maximum (resp. minimum) eigenvalue of \(\mathrm{Hess}_{g}(r^{2})-2dr\otimes dr\), restricted to the orthogonal complement of \(\frac{\partial}{\partial r}\), at every point of \(M\backslash\{x_{0}\}.\)
**Theorem 3.1** (Monotonicity Formula).: _Let \(u:(M^{m},g)\rightarrow(N^{n},h)\) be a smooth \(\Phi_{(3)}\)-harmonic map. If_
\[1+\frac{m-1}{2}\lambda_{\min}-3\max\{2,\lambda_{\max}\}\geq\zeta, \tag{17}\]
_where \(\zeta>0\) is a constant, then we have_
\[\frac{1}{\rho_{1}^{\zeta}}\int_{B(\rho_{1})}\frac{\|d_{(3)}u\|^{2}}{6}dv_{g} \leq\frac{1}{\rho_{2}^{\zeta}}\int_{B(\rho_{2})}\frac{\|d_{(3)}u\|^{2}}{6}dv_{g} \tag{18}\]
_for any \(0<\rho_{1}\leq\rho_{2}.\)_
Proof.: We choose \(D=B(r)\) and \(X=r\frac{\partial}{\partial r}=\frac{1}{2}\nabla r^{2}\) in (15). Hence, we have
\[\int_{\partial B(r)}S_{\Phi_{(3)}}\big{(}r\frac{\partial}{\partial r},\nu \big{)}ds_{g}=\int_{B(r)}\langle S_{\Phi_{(3)}},\frac{1}{2}L_{r\frac{\partial} {\partial r}}g\rangle dv_{g}.\]
From the Coarea formula, we get
\[\int_{\partial B(r)}S_{\Phi_{(3)}}\big{(}r\frac{\partial}{\partial r },\nu\big{)}ds_{g}\] \[=r\int_{\partial B(r)}\frac{\|d_{(3)}u\|^{2}}{6}ds_{g}-\int_{ \partial B(r)}rh\big{(}d_{(3)}u(\frac{\partial}{\partial r}),du(\frac{\partial }{\partial r})\big{)}ds_{g}\] \[=r\int_{\partial B(r)}\frac{\|d_{(3)}u\|^{2}}{6}ds_{g}-r\int_{ \partial B(r)}\|d_{(3)}u(\frac{\partial}{\partial r})\|^{2}ds_{g}\] \[\leq r\int_{\partial B(r)}\frac{\|d_{(3)}u\|^{2}}{6}ds_{g} \tag{19}\] \[=r\frac{d}{dr}\int_{B(r)}\frac{\|d_{(3)}u\|^{2}}{6}dv_{g}.\]
Suppose \(\{e_{i}\}_{i=1}^{m}\) is an orthonormal basis with respect to \(g\) in which \(\mathrm{Hess}_{g}(r^{2})\) is diagonal and \(e_{m}=\frac{\partial}{\partial r}\). By (17), we have
\[\frac{1}{2}\langle S_{\Phi_{(3)}},L_{r\frac{\partial}{\partial r}}g\rangle\] \[= \frac{1}{2}\sum_{i,j=1}^{m}S_{\Phi_{(3)}}(e_{i},e_{j})\big{(}L_{r \frac{\partial}{\partial r}}g\big{)}(e_{i},e_{j})\] \[= \frac{1}{2}\sum_{i,j=1}^{m}\bigg{(}\frac{\|d_{(3)}u\|^{2}}{6}g(e_ {i},e_{j})-h\big{(}d_{(3)}u(e_{i}),du(e_{j})\big{)}\bigg{)}\big{(}L_{r\frac{ \partial}{\partial r}}g\big{)}(e_{i},e_{j})\] \[= \frac{1}{2}\bigg{(}\sum_{i=1}^{m}\frac{\|d_{(3)}u\|^{2}}{6}\, \mathrm{Hess}_{g}(r^{2})(e_{i},e_{i})-\sum_{i,j=1}^{m}h\big{(}d_{(3)}u(e_{i}), du(e_{j})\big{)}\,\mathrm{Hess}_{g}(r^{2})(e_{i},e_{j})\bigg{)} \tag{20}\] \[\geq \frac{1}{2}\bigg{(}\frac{\|d_{(3)}u\|^{2}}{6}\bigg{(}2+(m-1) \lambda_{\min}\bigg{)}-\sum_{i=1}^{m}\max\{2,\lambda_{\max}\}h\big{(}d_{(3)}u (e_{i}),du(e_{i})\big{)}\bigg{)}\] \[= \frac{\|d_{(3)}u\|^{2}}{6}\bigg{(}1+\frac{m-1}{2}\lambda_{\min}- 3\max\{2,\lambda_{\max}\}\bigg{)}\] \[\geq \zeta\frac{\|d_{(3)}u\|^{2}}{6}.\]
Using (19) and (20), we get
\[r\frac{d}{dr}\int_{B(r)}\frac{\|d_{(3)}u\|^{2}}{6}dv_{g}\geq\zeta\int_{B(r)} \frac{\|d_{(3)}u\|^{2}}{6}dv_{g}.\]
Thus,

\[\frac{\frac{d}{dr}\int_{B(r)}\frac{\|d_{(3)}u\|^{2}}{6}dv_{g}}{\int_{B(r)}\frac{\|d_{(3)}u\|^{2}}{6}dv_{g}}\geq\frac{\zeta}{r}\,,\]

and integrating with respect to \(r\) over the interval \([\rho_{1},\rho_{2}]\), by the fundamental theorem of calculus we obtain the desired monotonicity formula (18).
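In detail, writing \(F(r)=\int_{B(r)}\frac{\|d_{(3)}u\|^{2}}{6}dv_{g}\), the displayed inequality says \((\log F)'(r)\geq\zeta/r\), so integration over \([\rho_{1},\rho_{2}]\) gives

\[\log\frac{F(\rho_{2})}{F(\rho_{1})}\geq\zeta\log\frac{\rho_{2}}{\rho_{1}},\qquad\text{i.e.}\qquad\frac{F(\rho_{1})}{\rho_{1}^{\zeta}}\leq\frac{F(\rho_{2})}{\rho_{2}^{\zeta}}\,,\]

which is (18).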
From Theorem 3.1, we have
**Proposition 3.2**.: _Suppose \(u:(M^{m},g)\to(N^{n},h)\) is a smooth \(\Phi_{(3)}\)-harmonic map and \(r(x)\) satisfies the condition (17). If \(u\) is a nonconstant map, then for all sufficiently large \(R\) we have_
\[\int_{B(R)}\frac{\|d_{(3)}u\|^{2}}{6}dv_{g}\geq c(u)R^{\zeta}, \tag{21}\]
_where \(c(u)>0\) is a constant._
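To illustrate the hypothesis (17) (our example, with the eigenvalues computed on the orthogonal complement of \(\frac{\partial}{\partial r}\)): on \(M=\mathbb{R}^{m}\) one has \(\operatorname{Hess}_{g}(r^{2})=2g\), so \(\lambda_{\min}=\lambda_{\max}=2\) and

\[1+\frac{m-1}{2}\cdot 2-3\max\{2,2\}=m-6\,,\]

hence (17) holds with \(\zeta=m-6\) whenever \(m\geq 7\), and (21) then gives growth of the \(\Phi_{(3)}\)-energy at rate at least \(R^{m-6}\) for nonconstant \(\Phi_{(3)}\)-harmonic maps on \(\mathbb{R}^{m}\).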
In local coordinates, we obtain
\[E^{R}_{\Phi_{(3)}}(u) =\int_{B(R)}\frac{\|d_{(3)}u\|^{2}}{6}dv_{g}\] \[=\int_{B(R)}\frac{1}{6}\sum_{i,j,k}h\big{(}du(e_{i}),du(e_{j}) \big{)}h\big{(}du(e_{j}),du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{i})\big{)}dv_{g}\] \[=\int_{B(R)}\frac{1}{6}\sum_{i,j,k,a,b,c,\alpha,\beta,\gamma, \delta,\mu,\nu}g^{ia}g^{jb}g^{kc}h_{\alpha\beta}\frac{\partial u^{\alpha}}{ \partial x_{i}}\frac{\partial u^{\beta}}{\partial x_{j}}h_{\gamma\delta} \frac{\partial u^{\gamma}}{\partial x_{b}}\frac{\partial u^{\delta}}{ \partial x_{k}}h_{\mu\nu}\frac{\partial u^{\mu}}{\partial x_{c}}\frac{ \partial u^{\nu}}{\partial x_{a}}dv_{g}.\]
Extending the method employed by Jin [31] for \(\Phi_{(1)}\)-harmonic maps (i.e., harmonic maps) on conformally Euclidean domains to \(\Phi_{(3)}\)-harmonic maps on Riemannian manifolds, we obtain
**Proposition 3.3**.: _Suppose \(u:(M^{m},g)\to(N,h)\) is a smooth \(\Phi_{(3)}\)-harmonic map, \(r(x)\) satisfies the condition (17), and for all sufficiently large \(R\),_

\[\int_{R}^{\infty}\frac{1}{\operatorname{Vol}_{g}^{\frac{1}{5}}\big{(}\partial B(r)\big{)}}dr\geq CR^{-\frac{\zeta}{5}}.\]

_If \(u(x)\to p_{0}\in N\) as \(r(x)\to\infty\), then we have_

\[E_{\Phi_{(3)}}^{R}(u)=o(R^{\zeta}),\quad\text{as}\quad R\to\infty. \tag{22}\]
Proof.: We may suppose \(u\) is a nonconstant map; otherwise (22) is trivial. From Proposition 3.2, we get \(E_{\Phi_{(3)}}^{R}(u)\to\infty\) as \(R\to\infty\). We choose a local coordinate neighbourhood \((U,\varphi)\) of \(p_{0}\) in \(N\), such that \(\varphi(p_{0})=0\) and
\[h=h_{\alpha\beta}(y)dy^{\alpha}\otimes dy^{\beta},\quad y\in U,\]
satisfies
\[\bigg{(}\frac{\partial h_{\alpha\beta}(y)}{\partial y^{\gamma}}y^{\gamma}+2h _{\alpha\beta}(y)\bigg{)}\geq\big{(}h_{\alpha\beta}(y)\big{)}\quad\text{on} \quad U\]
in the matrix sense (that is, for two \(n\times n\) matrices \(A,B\), by \(A\geq B\) we mean that \(A-B\) is a positive semi-definite matrix). Since \(u(x)\to 0\) as \(r(x)\to\infty\) (identifying \(u\) with \(\varphi\circ u\)), there exists an \(R_{1}\) such that for \(r(x)>R_{1}\), \(u(x)\in U\) and
\[\bigg{(}\frac{\partial h_{\alpha\beta}(u)}{\partial y^{\gamma}}u^{\gamma}+2h _{\alpha\beta}(u)\bigg{)}\geq\big{(}h_{\alpha\beta}(u)\big{)}. \tag{23}\]
For \(w\in C_{0}^{2}(M\backslash B(R_{1}),\varphi(U))\) and sufficiently small \(t\), the variation \(u+tw:M^{m}\to N\) is defined by
\[(u+tw)(q)=\begin{cases}u(q)&q\in B(R_{1});\\ \varphi^{-1}\big{(}\varphi(u)+tw\big{)}(q)&q\in M\backslash B(R_{1}).\end{cases}\]
By (6) and (7), we have
\[\frac{d}{dt}E_{\Phi_{(3)}}(u+tw)_{\big{|}_{t=0}}=0,\]
that is, using Einstein notation, we obtain,
\[\int_{M\backslash B(R_{1})}g^{is}g^{jk}g^{lr}\bigg{(}\frac{\partial h_{\alpha \beta}}{\partial y^{\xi}}w^{\xi}\frac{\partial u^{\alpha}}{\partial x_{i}} \frac{\partial u^{\beta}}{\partial x_{j}}+2h_{\alpha\beta}\frac{\partial u^{ \alpha}}{\partial x_{i}}\frac{\partial w^{\beta}}{\partial x_{j}}\bigg{)}h_{ \gamma\delta}\frac{\partial u^{\gamma}}{\partial x_{k}}\frac{\partial u^{ \delta}}{\partial x_{l}}h_{\mu\nu}\frac{\partial u^{\mu}}{\partial x_{r}} \frac{\partial u^{\nu}}{\partial x_{s}}dv_{g}=0. \tag{24}\]
Let \(\phi(t)\) be a smooth function on \((R_{1},\infty)\). Choosing \(w(x)=\phi\big{(}r(x)\big{)}u(x)\) in (24), we have
\[\int_{M\backslash B(R_{1})}g^{is}g^{jk}g^{lr}\bigg{(}\frac{ \partial h_{\alpha\beta}}{\partial y^{\xi}}u^{\xi}+2h_{\alpha\beta}\bigg{)} \frac{\partial u^{\alpha}}{\partial x_{i}}\frac{\partial u^{\beta}}{\partial x _{j}}\phi\big{(}r(x)\big{)}h_{\gamma\delta}\frac{\partial u^{\gamma}}{ \partial x_{k}}\frac{\partial u^{\delta}}{\partial x_{l}}h_{\mu\nu}\frac{ \partial u^{\mu}}{\partial x_{r}}\frac{\partial u^{\nu}}{\partial x_{s}}dv_{g}\] \[=-2\int_{M\backslash B(R_{1})}g^{is}g^{jk}g^{lr}h_{\alpha\beta} \frac{\partial u^{\alpha}}{\partial x_{i}}\frac{\partial\phi\big{(}r(x)\big{)} }{\partial x_{j}}u^{\beta}h_{\gamma\delta}\frac{\partial u^{\gamma}}{ \partial x_{k}}\frac{\partial u^{\delta}}{\partial x_{l}}h_{\mu\nu}\frac{ \partial u^{\mu}}{\partial x_{r}}\frac{\partial u^{\nu}}{\partial x_{s}}dv_{g}. \tag{25}\]
The above equation also holds for Lipschitz functions \(\phi\) with compact support, by an approximation argument.
For \(0<\varepsilon\leq 1\), we define
\[\varphi_{\varepsilon}(t)=\begin{cases}1&t\leq 1;\\ 1+\frac{1-t}{\varepsilon}&1<t<1+\varepsilon;\\ 0&t\geq 1+\varepsilon\end{cases}\]
and choose
\[\phi\big{(}r(x)\big{)}=\varphi_{\varepsilon}\bigg{(}\frac{r(x)}{R}\bigg{)} \bigg{(}1-\varphi_{1}\bigg{(}\frac{r(x)}{R_{1}}\bigg{)}\bigg{)},\]
where \(R>2R_{1}.\) Let \(R_{2}=2R_{1}\). Substituting into (25), we obtain
\[\int_{M\setminus B(R_{1})}g^{is}g^{jk}g^{lr}\bigg{(}\frac{\partial h _{\alpha\beta}}{\partial y^{\xi}}u^{\xi}+2h_{\alpha\beta}\bigg{)}\frac{ \partial u^{\alpha}}{\partial x_{i}}\frac{\partial u^{\beta}}{\partial x_{j}} \phi\big{(}r(x)\big{)}h_{\gamma\delta}\frac{\partial u^{\gamma}}{\partial x_{ k}}\frac{\partial u^{\delta}}{\partial x_{l}}h_{\mu\nu}\frac{\partial u^{\mu}}{ \partial x_{r}}\frac{\partial u^{\nu}}{\partial x_{s}}dv_{g}\] \[= \int_{B(R_{2})\setminus B(R_{1})}g^{is}g^{jk}g^{lr}\bigg{(}\frac{ \partial h_{\alpha\beta}}{\partial y^{\xi}}u^{\xi}+2h_{\alpha\beta}\bigg{)} \frac{\partial u^{\alpha}}{\partial x_{i}}\frac{\partial u^{\beta}}{\partial x _{j}}\bigg{(}1-\varphi_{1}\bigg{(}\frac{r(x)}{R_{1}}\bigg{)}\bigg{)}\] \[\times\quad h_{\gamma\delta}\frac{\partial u^{\gamma}}{\partial x _{k}}\frac{\partial u^{\delta}}{\partial x_{l}}h_{\mu\nu}\frac{\partial u^{\mu }}{\partial x_{r}}\frac{\partial u^{\nu}}{\partial x_{s}}dv_{g} \tag{26}\] \[+ \int_{B(R)\setminus B(R_{2})}g^{is}g^{jk}g^{lr}\bigg{(}\frac{ \partial h_{\alpha\beta}}{\partial y^{\xi}}u^{\xi}+2h_{\alpha\beta}\bigg{)} \frac{\partial u^{\alpha}}{\partial x_{i}}\frac{\partial u^{\beta}}{\partial x _{j}}h_{\gamma\delta}\frac{\partial u^{\gamma}}{\partial x_{k}}\frac{\partial u ^{\delta}}{\partial x_{l}}h_{\mu\nu}\frac{\partial u^{\mu}}{\partial x_{r}} \frac{\partial u^{\nu}}{\partial x_{s}}dv_{g}\] \[+ \int_{B((1+\varepsilon)R)\setminus B(R)}g^{is}g^{jk}g^{lr}\bigg{(} \frac{\partial h_{\alpha\beta}}{\partial y^{\xi}}u^{\xi}+2h_{\alpha\beta} \bigg{)}\frac{\partial u^{\alpha}}{\partial x_{i}}\frac{\partial u^{\beta}}{ \partial x_{j}}\varphi_{\varepsilon}\bigg{(}\frac{r(x)}{R}\bigg{)}h_{\gamma \delta}\frac{\partial u^{\gamma}}{\partial x_{k}}\frac{\partial u^{\delta}}{ \partial x_{l}}h_{\mu\nu}\frac{\partial u^{\mu}}{\partial x_{r}}\frac{ \partial u^{\nu}}{\partial x_{s}}dv_{g}\]
and
\[-2\int_{M\setminus B(R_{1})}g^{is}g^{jk}g^{lr}h_{\alpha\beta} \frac{\partial u^{\alpha}}{\partial x_{i}}\frac{\partial\phi\big{(}r(x)\big{)} }{\partial x_{j}}u^{\beta}h_{\gamma\delta}\frac{\partial u^{\gamma}}{\partial x _{k}}\frac{\partial u^{\delta}}{\partial x_{l}}h_{\mu\nu}\frac{\partial u^{ \mu}}{\partial x_{r}}\frac{\partial u^{\nu}}{\partial x_{s}}dv_{g}\] \[= 2\int_{B(R_{2})\setminus B(R_{1})}g^{is}g^{jk}g^{lr}h_{\alpha \beta}\frac{\partial u^{\alpha}}{\partial x_{i}}\frac{\partial\varphi_{1} \bigg{(}\frac{r(x)}{R_{1}}\bigg{)}}{\partial x_{j}}u^{\beta}h_{\gamma\delta} \frac{\partial u^{\gamma}}{\partial x_{k}}\frac{\partial u^{\delta}}{\partial x _{l}}h_{\mu\nu}\frac{\partial u^{\mu}}{\partial x_{r}}\frac{\partial u^{\nu} }{\partial x_{s}}dv_{g}\] \[-2\int_{B((1+\varepsilon)R)\setminus B(R)}g^{is}g^{jk}g^{lr}h_{ \alpha\beta}\frac{\partial u^{\alpha}}{\partial x_{i}}\frac{\partial\varphi_{ \varepsilon}\bigg{(}\frac{r(x)}{R}\bigg{)}}{\partial x_{j}}u^{\beta}h_{\gamma \delta}\frac{\partial u^{\gamma}}{\partial x_{k}}\frac{\partial u^{\delta}}{ \partial x_{l}}h_{\mu\nu}\frac{\partial u^{\mu}}{\partial x_{r}}\frac{ \partial u^{\nu}}{\partial x_{s}}dv_{g}\] \[= 2\int_{B(R_{2})\setminus B(R_{1})}g^{is}g^{jk}g^{lr}h_{\alpha \beta}\frac{\partial u^{\alpha}}{\partial x_{i}}\frac{\partial\varphi_{1} \bigg{(}\frac{r(x)}{R_{1}}\bigg{)}}{\partial x_{j}}u^{\beta}h_{\gamma\delta} \frac{\partial u^{\gamma}}{\partial x_{k}}\frac{\partial u^{\delta}}{\partial x _{l}}h_{\mu\nu}\frac{\partial u^{\mu}}{\partial x_{r}}\frac{\partial u^{\nu}}{ \partial x_{s}}dv_{g} \tag{27}\] \[+2\frac{1}{R\varepsilon}\int_{B((1+\varepsilon)R)\setminus B(R)}g ^{is}g^{jk}g^{lr}h_{\alpha\beta}\frac{\partial u^{\alpha}}{\partial x_{i}} \frac{\partial r(x)}{\partial x_{j}}u^{\beta}h_{\gamma\delta}\frac{\partial u^{ \gamma}}{\partial x_{k}}\frac{\partial u^{\delta}}{\partial x_{l}}h_{\mu\nu} \frac{\partial u^{\mu}}{\partial x_{r}}\frac{\partial u^{\nu}}{\partial x_{s}}dv _{g}.\]
As \(\varepsilon\to 0\),
\[2\frac{1}{R\varepsilon}\int_{B((1+\varepsilon)R)\setminus B(R)}g ^{is}g^{jk}g^{lr}h_{\alpha\beta}\frac{\partial u^{\alpha}}{\partial x_{i}} \frac{\partial r(x)}{\partial x_{j}}u^{\beta}h_{\gamma\delta}\frac{\partial u^{ \gamma}}{\partial x_{k}}\frac{\partial u^{\delta}}{\partial x_{l}}h_{\mu\nu} \frac{\partial u^{\mu}}{\partial x_{r}}\frac{\partial u^{\nu}}{\partial x_{s}} dv_{g} \tag{28}\] \[\rightarrow 2\int_{\partial B(R)}g^{is}g^{jk}g^{lr}h_{\alpha\beta}\frac{ \partial u^{\alpha}}{\partial x_{i}}\frac{\partial r(x)}{\partial x_{j}}u^{ \beta}h_{\gamma\delta}\frac{\partial u^{\gamma}}{\partial x_{k}}\frac{ \partial u^{\delta}}{\partial x_{l}}h_{\mu\nu}\frac{\partial u^{\mu}}{\partial x _{r}}\frac{\partial u^{\nu}}{\partial x_{s}}ds_{g}.\]
From (25)-(28), we get
\[\int_{B(R)\backslash B(R_{2})}g^{is}g^{jk}g^{lr}\bigg{(}\frac{\partial h _{\alpha\beta}}{\partial y\xi}u^{\xi}+2h_{\alpha\beta}\bigg{)}\frac{\partial u^{ \alpha}}{\partial x_{i}}\frac{\partial u^{\beta}}{\partial x_{j}}h_{\gamma \delta}\frac{\partial u^{\gamma}}{\partial x_{k}}\frac{\partial u^{\delta}}{ \partial x_{l}}h_{\mu\nu}\frac{\partial u^{\mu}}{\partial x_{r}}\frac{ \partial u^{\nu}}{\partial x_{s}}dv_{g}\] \[=2\int_{\partial B(R)}g^{is}g^{jk}g^{lr}h_{\alpha\beta}\frac{ \partial u^{\alpha}}{\partial x_{i}}\frac{\partial r(x)}{\partial x_{j}}u^{ \beta}h_{\gamma\delta}\frac{\partial u^{\gamma}}{\partial x_{k}}\frac{\partial u ^{\delta}}{\partial x_{l}}h_{\mu\nu}\frac{\partial u^{\mu}}{\partial x_{r}} \frac{\partial u^{\nu}}{\partial x_{s}}ds_{g}-D(R_{1}), \tag{29}\]
where
\[D(R_{1}) =-2\int_{B(R_{2})\backslash B(R_{1})}g^{is}g^{jk}g^{lr}h_{\alpha \beta}\frac{\partial u^{\alpha}}{\partial x_{i}}\frac{\partial\varphi_{1}( \frac{r(x)}{R_{1}})}{\partial x_{j}}u^{\beta}h_{\gamma\delta}\frac{\partial u ^{\gamma}}{\partial x_{k}}\frac{\partial u^{\delta}}{\partial x_{l}}h_{\mu\nu} \frac{\partial u^{\mu}}{\partial x_{r}}\frac{\partial u^{\nu}}{\partial x_{s} }dv_{g}\] \[\quad+\int_{B(R_{2})\backslash B(R_{1})}g^{is}g^{jk}g^{lr}\bigg{(} \frac{\partial h_{\alpha\beta}}{\partial y\xi}u^{\xi}+2h_{\alpha\beta}\bigg{)} \frac{\partial u^{\alpha}}{\partial x_{i}}\frac{\partial u^{\beta}}{\partial x _{j}}\bigg{(}1-\varphi_{1}\bigg{(}\frac{r(x)}{R_{1}}\bigg{)}\bigg{)}\] \[\quad\times\quad h_{\gamma\delta}\frac{\partial u^{\gamma}}{ \partial x_{k}}\frac{\partial u^{\delta}}{\partial x_{l}}h_{\mu\nu}\frac{ \partial u^{\mu}}{\partial x_{r}}\frac{\partial u^{\nu}}{\partial x_{s}}dv_{g}.\]
Next we estimate the term
\[2\int_{\partial B(R)}g^{is}g^{jk}g^{lr}h_{\alpha\beta}\frac{ \partial u^{\alpha}}{\partial x_{i}}\frac{\partial r(x)}{\partial x_{j}}u^{ \beta}h_{\gamma\delta}\frac{\partial u^{\gamma}}{\partial x_{k}}\frac{ \partial u^{\delta}}{\partial x_{l}}h_{\mu\nu}\frac{\partial u^{\mu}}{\partial x _{r}}\frac{\partial u^{\nu}}{\partial x_{s}}ds_{g}. \tag{30}\]
Since the integrand in (30) does not depend on the choice of coordinate systems on \(M^{m}\) and \(N^{n}\), at any point \(p\in\partial B(R)\) we may take coordinate systems around \(p\) and \(u(p)\) such that \(g_{ij}(p)=\delta_{ij}\), \(g^{ij}(p)=\delta^{ij}\) and \(h_{\alpha\beta}\big{(}u(p)\big{)}=\delta_{\alpha\beta}.\) At \(p\), by Hölder's inequality, we have
\[g^{is}g^{jk}g^{lr}h_{\alpha\beta}\frac{\partial u^{\alpha}}{ \partial x_{i}}\frac{\partial r(x)}{\partial x_{j}}u^{\beta}h_{\gamma\delta} \frac{\partial u^{\gamma}}{\partial x_{k}}\frac{\partial u^{\delta}}{\partial x _{l}}h_{\mu\nu}\frac{\partial u^{\mu}}{\partial x_{r}}\frac{\partial u^{\nu} }{\partial x_{s}}\] \[=\sum_{i,j,l}\bigg{(}\sum_{\alpha}\frac{\partial u^{\alpha}}{ \partial x_{i}}u^{\alpha}\frac{\partial r(x)}{\partial x_{j}}\bigg{)}\bigg{(} \sum_{\gamma,\mu}\frac{\partial u^{\gamma}}{\partial x_{j}}\frac{\partial u^{ \gamma}}{\partial x_{l}}\frac{\partial u^{\mu}}{\partial x_{l}}\frac{\partial u ^{\mu}}{\partial x_{i}}\bigg{)}\] \[=\sum_{i,j}\bigg{(}\sum_{\alpha}\frac{\partial u^{\alpha}}{ \partial x_{i}}u^{\alpha}\frac{\partial r(x)}{\partial x_{j}}\bigg{)}\bigg{(} \sum_{l}\bigg{(}\sum_{\gamma}\frac{\partial u^{\gamma}}{\partial x_{j}}\frac{ \partial u^{\gamma}}{\partial x_{l}}\bigg{)}\bigg{(}\sum_{\mu}\frac{\partial u ^{\mu}}{\partial x_{l}}\frac{\partial u^{\mu}}{\partial x_{i}}\bigg{)}\bigg{)}\] \[\leq\sum_{i,j}\bigg{(}\sum_{\alpha}\frac{\partial u^{\alpha}}{ \partial x_{i}}u^{\alpha}\frac{\partial r(x)}{\partial x_{j}}\bigg{)}\bigg{(} \sum_{l}\bigg{(}\sum_{\gamma}\frac{\partial u^{\gamma}}{\partial x_{j}}\frac{ \partial u^{\gamma}}{\partial x_{l}}\bigg{)}^{2}\bigg{)}^{\frac{1}{2}}\bigg{(} \sum_{l}\bigg{(}\sum_{\mu}\frac{\partial u^{\mu}}{\partial x_{l}}\frac{\partial u ^{\mu}}{\partial x_{i}}\bigg{)}^{2}\bigg{)}^{\frac{1}{2}}\] \[\leq\bigg{(}\sum_{i,j}\bigg{(}\sum_{\alpha}\frac{\partial u^{\alpha} }{\partial x_{i}}u^{\alpha}\frac{\partial r(x)}{\partial x_{j}}\bigg{)}^{2} \bigg{)}^{\frac{1}{2}}\bigg{(}\sum_{j,l}\bigg{(}\sum_{\gamma}\frac{\partial u ^{\gamma}}{\partial x_{j}}\frac{\partial u^{\gamma}}{\partial x_{l}}\bigg{)}^{2} \bigg{)}^{\frac{1}{2}}\bigg{(}\sum_{i,l}\bigg{(}\sum_{\mu}\frac{\partial u^{\mu} }{\partial x_{l}}\frac{\partial u^{\mu}}{\partial x_{i}}\bigg{)}^{2}\bigg{)}^{ \frac{1}{2}}\] \[\leq\bigg{(}\sum_{i}\bigg{(}\sum_{\alpha}\frac{\partial u^{\alpha}}{ \partial x_{i}}u^{\alpha}\bigg{)}^{2}\bigg{)}^{\frac{1}{2}}\bigg{(}\sum_{j,l} \bigg{(}\sum_{\gamma}\frac{\partial u^{\gamma}}{\partial x_{j}}\frac{\partial u^{ \gamma}}{\partial x_{l}}\bigg{)}^{2}\bigg{)}^{\frac{1}{2}}\bigg{(}\sum_{i,l} \bigg{(}\sum_{\mu}\frac{\partial u^{\mu}}{\partial x_{l}}\frac{\partial u^{\mu}}{ \partial x_{i}}\bigg{)}^{2}\bigg{)}^{\frac{1}{2}}\] \[\leq\bigg{(}\sum_{i}\bigg{(}\sum_{\alpha}\frac{\partial u^{\alpha}}{ \partial x_{i}}\frac{\partial u^{\alpha}}{\partial x_{i}}\bigg{)}^{\frac{1}{2}} \bigg{(}\sum_{\alpha}\big{(}u^{\alpha}\big{)}^{2}\bigg{)}^{\frac{1}{2}}\bigg{(} \sum_{j,l}\bigg{(}\sum_{\gamma}\frac{\partial u^{\gamma}}{\partial x_{j}}\frac{ \partial u^{\gamma}}{\partial x_{l}}\bigg{)}^{2}\bigg{)}^{\frac{1}{2}}\bigg{(} \sum_{i,l}\bigg{(}\sum_{\mu}\frac{\partial u^{\mu}}{\partial x_{l}}\frac{ \partial u^{\mu}}{\partial x_{i}}\bigg{)}^{2}\bigg{)}^{\frac{1}{2}}\]
\[\leq m^{\frac{13}{12}}\bigg{(}\sum_{\alpha}\left(u^{\alpha}\right)^{2} \bigg{)}^{\frac{1}{2}}\bigg{(}\sum_{i,j}\bigg{(}h\big{(}du(\frac{\partial}{ \partial x_{i}}),du(\frac{\partial}{\partial x_{j}})\big{)}\bigg{)}^{3}\bigg{)}^ {\frac{1}{6}}\] \[\quad\times\bigg{(}\sum_{j,l}\bigg{(}h\big{(}du(\frac{\partial}{ \partial x_{j}}),du(\frac{\partial}{\partial x_{l}})\big{)}\bigg{)}^{3}\bigg{)}^ {\frac{1}{3}}\bigg{(}\sum_{i,l}\bigg{(}h\big{(}du(\frac{\partial}{\partial x_{l} }),du(\frac{\partial}{\partial x_{i}})\big{)}\bigg{)}^{3}\bigg{)}^{\frac{1}{3}}\] \[\leq m^{\frac{13}{12}}\bigg{(}\sum_{\alpha}\left(u^{\alpha}\right)^ {2}\bigg{)}^{\frac{1}{2}}\bigg{(}\sum_{i,j,l}h\big{(}du(\frac{\partial}{ \partial x_{i}}),du(\frac{\partial}{\partial x_{j}})\big{)}h\big{(}du(\frac{ \partial}{\partial x_{l}}),du(\frac{\partial}{\partial x_{i}})\big{)}\bigg{)} \bigg{)}^{\frac{1}{6}}\] \[\quad\times\bigg{(}\sum_{i,j,l}h\big{(}du(\frac{\partial}{ \partial x_{i}}),du(\frac{\partial}{\partial x_{j}})\big{)}h\big{(}du(\frac{ \partial}{\partial x_{j}}),du(\frac{\partial}{\partial x_{l}})\big{)}h\big{(} du(\frac{\partial}{\partial x_{l}}),du(\frac{\partial}{\partial x_{i}})\big{)}\bigg{)}^{ \frac{1}{3}}\] \[\quad\times\bigg{(}\sum_{i,j,l}h\big{(}du(\frac{\partial}{ \partial x_{i}}),du(\frac{\partial}{\partial x_{j}})\big{)}h\big{(}du(\frac{ \partial}{\partial x_{j}}),du(\frac{\partial}{\partial x_{l}})\big{)}h\big{(} du(\frac{\partial}{\partial x_{l}}),du(\frac{\partial}{\partial x_{i}})\big{)}\bigg{)}^{ \frac{1}{3}}\] \[=m^{\frac{13}{12}}\bigg{(}\sum_{\alpha}\left(u^{\alpha}\right)^{2 }\bigg{)}^{\frac{1}{2}}\|d_{(3)}u\|^{\frac{5}{3}}=m^{\frac{13}{12}}\|d_{(3)}u \|^{\frac{5}{3}}\bigg{(}\sum_{\alpha,\beta}h_{\alpha\beta}u^{\alpha}u^{\beta }\bigg{)}^{\frac{1}{2}},\]
where
\[g^{ij}\frac{\partial r}{\partial x_{i}}\frac{\partial r}{\partial x_{j}}=| \nabla r|^{2}=1.\]
Therefore, by Hölder's inequality, we have
\[2\int_{\partial B(R)}g^{is}g^{jk}g^{lr}h_{\alpha\beta}\frac{\partial u^{\alpha}}{\partial x_{i}}\frac{\partial r(x)}{\partial x_{j}}u^{\beta}h_{\gamma\delta}\frac{\partial u^{\gamma}}{\partial x_{k}}\frac{\partial u^{\delta}}{\partial x_{l}}h_{\mu\nu}\frac{\partial u^{\mu}}{\partial x_{r}}\frac{\partial u^{\nu}}{\partial x_{s}}ds_{g}\] \[\leq \int_{\partial B(R)}2m^{\frac{13}{12}}\|d_{(3)}u\|^{\frac{5}{3}}\bigg{(}\sum_{\alpha,\beta}h_{\alpha\beta}u^{\alpha}u^{\beta}\bigg{)}^{\frac{1}{2}}ds_{g}\] \[\leq 2m^{\frac{13}{12}}\bigg{(}\int_{\partial B(R)}\|d_{(3)}u\|^{2}ds_{g}\bigg{)}^{\frac{5}{6}}\bigg{(}\int_{\partial B(R)}\bigg{(}\sum_{\alpha,\beta}h_{\alpha\beta}u^{\alpha}u^{\beta}\bigg{)}^{3}ds_{g}\bigg{)}^{\frac{1}{6}}. \tag{31}\]
By (23), we get
\[\int_{B(R)\backslash B(R_{2})}g^{is}g^{jk}g^{lr}\bigg{(}\frac{ \partial h_{\alpha\beta}}{\partial y^{\xi}}u^{\xi}+2h_{\alpha\beta}\bigg{)} \frac{\partial u^{\alpha}}{\partial x_{i}}\frac{\partial u^{\beta}}{\partial x _{j}}h_{\gamma\delta}\frac{\partial u^{\gamma}}{\partial x_{k}}\frac{\partial u ^{\delta}}{\partial x_{l}}h_{\mu\nu}\frac{\partial u^{\mu}}{\partial x_{r}} \frac{\partial u^{\nu}}{\partial x_{s}}dv_{g}\] \[\geq \int_{B(R)\backslash B(R_{2})}g^{is}g^{jk}g^{lr}h_{\alpha\beta} \frac{\partial u^{\alpha}}{\partial x_{i}}\frac{\partial u^{\beta}}{\partial x _{j}}h_{\gamma\delta}\frac{\partial u^{\gamma}}{\partial x_{k}}\frac{\partial u ^{\delta}}{\partial x_{l}}h_{\mu\nu}\frac{\partial u^{\mu}}{\partial x_{r}} \frac{\partial u^{\nu}}{\partial x_{s}}dv_{g} \tag{32}\] \[= \int_{B(R)\backslash B(R_{2})}\|d_{(3)}u\|^{2}dv_{g}.\]
Denote
\[Z(R)=\int_{B(R)\backslash B(R_{2})}\|d_{(3)}u\|^{2}dv_{g}+D(R_{1}). \tag{33}\]
Then, by the coarea formula,
\[Z^{\prime}(R)=\int_{\partial B(R)}\|d_{(3)}u\|^{2}ds_{g}. \tag{34}\]
According to (29)-(34), we get
\[Z(R) \leq C_{1}\bigg{(}Z^{\prime}(R)\bigg{)}^{\frac{5}{6}}\bigg{(} \int_{\partial B(R)}\bigg{(}\sum_{\alpha,\beta}h_{\alpha\beta}u^{\alpha}u^{ \beta}\bigg{)}^{3}ds_{g}\bigg{)}^{\frac{1}{6}}, \tag{35}\] \[\leq C_{1}\bigg{(}Z^{\prime}(R)\bigg{)}^{\frac{5}{6}}\eta^{\frac{ 1}{6}}(R)\cdot\operatorname{Vol}_{g}^{\frac{1}{6}}(\partial B(R))\,,\]
where \(C_{1}=2m^{\frac{13}{12}}\) is a positive constant, and
\[\eta(R)=\max_{r(x)=R}\left\{\bigg{(}\sum_{\alpha,\beta}h_{\alpha\beta}u^{ \alpha}u^{\beta}\bigg{)}^{3}\right\}. \tag{36}\]
Since \(u(x)\to 0\) as \(r(x)\to\infty\), we note \(\eta(R)\to 0\) as \(R\to\infty\), and \(\eta(R)\) is nonincreasing for sufficiently large \(R\). Furthermore,
\[Z(R)-D(R_{1})=\int_{B(R)\backslash B(R_{2})}\|d_{(3)}u\|^{2}dv_{g}. \tag{37}\]
By Proposition 3.2, there exists \(R_{3}\geq R_{2}\) such that \(Z(R)>0\) for any \(R>R_{3}\).
It follows from (35) that for any \(R_{4}\geq R\geq R_{3}\), we have
\[Z^{\frac{6}{5}}(R)\leq C_{2}Z^{\prime}(R)\big{(}\eta(R)\operatorname{Vol}_{g}(\partial B(R))\big{)}^{\frac{1}{5}},\]
and hence
\[\int_{R}^{R_{4}}\frac{Z^{\prime}(r)}{Z^{\frac{6}{5}}(r)}dr \geq\frac{1}{C_{2}}\int_{R}^{R_{4}}\big{(}\eta(r)\operatorname{Vol}_{g}(\partial B(r))\big{)}^{-\frac{1}{5}}dr\] \[\geq\frac{1}{C_{2}}\int_{R}^{R_{4}}\operatorname{Vol}_{g}^{-\frac{1}{5}}(\partial B(r))dr\cdot\eta^{-\frac{1}{5}}(R)\] \[\geq\frac{C}{C_{2}}R^{-\frac{\zeta}{5}}\eta^{-\frac{1}{5}}(R),\]
where \(C_{2}=C_{1}^{\frac{6}{5}}\). Since \(\int_{R}^{R_{4}}Z^{\prime}(r)Z^{-\frac{6}{5}}(r)dr=5\big{(}Z^{-\frac{1}{5}}(R)-Z^{-\frac{1}{5}}(R_{4})\big{)}\leq 5Z^{-\frac{1}{5}}(R)\), letting \(R_{4}\to\infty\), we get
\[\frac{1}{Z^{\frac{1}{5}}(R)}\geq\frac{C}{5C_{2}}R^{-\frac{\zeta}{5}}\eta^{-\frac{1}{5}}(R),\]
which implies that
\[Z(R)\leq C_{3}R^{\zeta}\eta(R), \tag{38}\]
for the constant \(C_{3}=\big{(}5C_{2}/C\big{)}^{5}\). Hence, according to (37) and (38), we get
\[\int_{B(R)}\frac{\|d_{(3)}u\|^{2}}{6}dv_{g} =E_{\Phi_{(3)}}^{R}(u)=\frac{Z(R)}{6}+\int_{B(R_{2})}\frac{\|d_{( 3)}u\|^{2}}{6}dv_{g}-\frac{D(R_{1})}{6}\] \[=o(R^{\zeta})\quad\text{as}\quad R\to\infty.\]
By Proposition 3.3 and the monotonicity formula (18), we have
**Theorem 3.4** (Liouville Theorem).: _Suppose \(u:(M^{m},g)\to(N^{n},h)\) is a smooth \(\Phi_{(3)}\)-harmonic map and \(r(x)\) satisfies the condition (17). If \(u(x)\to p_{0}\in N^{n}\) as \(r(x)\to\infty\) and_
\[\int_{R}^{\infty}\frac{1}{\operatorname{Vol}_{g}^{\frac{1}{5}}\left(\partial B (r)\right)}dr\geq CR^{-\frac{\zeta}{5}}\]
_for \(R\) large enough, then \(u\) is a constant map._
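For instance (a consistency check, not part of the statement): in Euclidean space \(\operatorname{Vol}_{g}(\partial B(r))=c_{m}r^{m-1}\) for a constant \(c_{m}>0\), so for \(m>6\)

\[\int_{R}^{\infty}\frac{1}{\operatorname{Vol}_{g}^{\frac{1}{5}}\left(\partial B(r)\right)}dr=c_{m}^{-\frac{1}{5}}\int_{R}^{\infty}r^{-\frac{m-1}{5}}dr=\frac{5c_{m}^{-\frac{1}{5}}}{m-6}R^{-\frac{m-6}{5}},\]

and the hypothesis above holds for all large \(R\) whenever \(\zeta\geq m-6\).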
**Lemma 3.5** ([11, 13, 14, 16, 20, 23, 39, 54]).: _Suppose \((M^{m},g)\) is a complete Riemannian manifold with a pole \(x_{0}.\) We denote the radial curvature of \(M^{m}\) by \(K_{r}.\)_
(1) _If \(-\alpha^{2}\leq K_{r}\leq-\beta^{2}\) with \(\alpha\geq\beta\geq 0\), then_
\[\beta\coth(\beta r)[g-dr\otimes dr]\leq\operatorname{Hess}(r)\leq\alpha\coth (\alpha r)[g-dr\otimes dr].\]
(2) _If \(-\frac{A}{(1+r^{2})^{1+\varepsilon}}\leq K_{r}\leq\frac{B}{(1+r^{2})^{1+ \varepsilon}}\) with \(\varepsilon>0,\)\(A\geq 0\) and \(0\leq B<2\varepsilon,\) then_
\[\frac{1-B/2\varepsilon}{r}[g-dr\otimes dr]\leq\operatorname{Hess}(r)\leq \frac{e^{A/2\varepsilon}}{r}[g-dr\otimes dr].\]
(3) _If \(-\frac{a^{2}}{1+r^{2}}\leq K_{r}\leq\frac{b^{2}}{1+r^{2}}\) with \(a\geq 0\) and \(b^{2}\in[0,\frac{1}{4}],\) then_
\[\frac{1+\sqrt{1-4b^{2}}}{2r}[g-dr\otimes dr]\leq\operatorname{Hess}(r)\leq \frac{1+\sqrt{1+4a^{2}}}{2r}[g-dr\otimes dr].\]
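As a consistency check for (1): letting \(\beta\to 0\) gives \(\beta\coth(\beta r)\to\frac{1}{r}\), so the lower bound recovers the Euclidean identity \(\operatorname{Hess}(r)=\frac{1}{r}[g-dr\otimes dr]\).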
By Lemma 3.5, we have
**Lemma 3.6**.: _Suppose \((M^{m},g)\) is a complete Riemannian manifold with a pole \(x_{0}.\) We denote the radial curvature of \(M^{m}\) by \(K_{r}.\)\((i)\) If \(-\alpha^{2}\leq K_{r}\leq-\beta^{2}\) with \(\alpha\geq\beta\geq 0\) and \((m-1)\beta-6\alpha>0,\) then_
\[1+\frac{m-1}{2}\lambda_{\min}-3\max\{2,\lambda_{\max}\}\geq m-\frac{6\alpha}{\beta}.\]
\((ii)\) _If \(-\frac{A}{(1+r^{2})^{1+\varepsilon}}\leq K_{r}\leq\frac{B}{(1+r^{2})^{1+ \varepsilon}}\) with \(\varepsilon>0,\)\(A\geq 0\) and \(0\leq B<2\varepsilon,\) then_
\[1+\frac{m-1}{2}\lambda_{\min}-3\max\{2,\lambda_{\max}\}\geq 1+(m-1)(1-\frac{B}{2\varepsilon})-6e^{\frac{A}{2\varepsilon}}.\]
\((iii)\) _If \(-\frac{a^{2}}{1+r^{2}}\leq K_{r}\leq\frac{b^{2}}{1+r^{2}}\) with \(a\geq 0\) and \(0\leq b^{2}\leq\frac{1}{4},\) then_
\[1+\frac{m-1}{2}\lambda_{\min}-3\max\{2,\lambda_{\max}\}\geq 1+(m-1)\frac{1+\sqrt{1-4b^{2}}}{2}-6\frac{1+\sqrt{1+4a^{2}}}{2}.\]
Proof.: If \(K_{r}\) satisfies \((i)\), then by Lemma 3.5, we have, on \(B(r)\backslash\{x_{0}\}\) for every \(r>0\),
\[1+\frac{m-1}{2}\lambda_{\min}-3\max\{2,\lambda_{max}\}\] \[\geq 1+(m-1)\beta r\coth(\beta r)-6\alpha r\coth(\alpha r)\] \[= 1+\beta r\coth(\beta r)\bigg{(}m-1-\frac{6\alpha}{\beta}\frac{ \coth(\alpha r)}{\coth(\beta r)}\bigg{)}\] \[\geq m-\frac{6\alpha}{\beta}.\]
The last inequality holds because \(\beta r\coth(\beta r)\) is increasing in \(r\) with \(\beta r\coth(\beta r)\to 1\) as \(r\to 0\), and \(\frac{\coth(\alpha r)}{\coth(\beta r)}<1\) for \(0<\beta<\alpha\). By the same method as in \((i)\), the corresponding inequalities hold in cases \((ii)\) and \((iii)\) on \(B(r)\).
Theorem 3.1 and Lemma 3.6 immediately yield the following theorem.
**Theorem 3.7**.: _Suppose \((M^{m},g)\) is a complete Riemannian manifold with a pole \(x_{0}\) such that the radial curvature \(K_{r}\) of \(M\) satisfies one of the following conditions: \((i)\)\(-\alpha^{2}\leq K_{r}\leq-\beta^{2}\) with \(\alpha\geq\beta\geq 0\) and \((m-1)\beta-6\alpha>0\), \((ii)\)\(-\frac{A}{(1+r^{2})^{1+\varepsilon}}\leq K_{r}\leq\frac{B}{(1+r^{2})^{1+ \varepsilon}}\) with \(\varepsilon>0,\)\(A\geq 0,\)\(0\leq B<2\varepsilon\) and \(1+(m-1)(1-\frac{B}{2\varepsilon})-6e^{\frac{A}{2\varepsilon}}>0,\)\((iii)\)\(-\frac{a^{2}}{1+r^{2}}\leq K_{r}\leq\frac{b^{2}}{1+r^{2}}\) with \(a\geq 0\), \(0\leq b^{2}\leq\frac{1}{4}\) and \(1+(m-1)\frac{1+\sqrt{1-4b^{2}}}{2}-6\frac{1+\sqrt{1+4a^{2}}}{2}>0.\) Suppose \(u:(M^{m},g)\rightarrow(N^{n},h)\) is a smooth \(\Phi_{(3)}\)-harmonic map. If \(u(x)\to p_{0}\in N^{n}\) as \(r(x)\rightarrow\infty\) and_
\[\int_{R}^{\infty}\frac{1}{\operatorname{Vol}_{g}^{\frac{1}{5}}\big{(}\partial B(r)\big{)}}dr\geq CR^{-\frac{\Lambda}{5}}\]
_for \(R\) large enough, then \(u\) is a constant map, where_
\[\Lambda=\begin{cases}m-\frac{6\alpha}{\beta}&\text{if $K_{r}$ satisfies (i)};\\ 1+(m-1)(1-\frac{B}{2\varepsilon})-6e^{\frac{A}{2\varepsilon}}&\text{if $K_{r}$ satisfies (ii)};\\ 1+(m-1)\frac{1+\sqrt{1-4b^{2}}}{2}-6\frac{1+\sqrt{1+4a^{2}}}{2}&\text{if $K_{r}$ satisfies (iii)}.\end{cases}\]
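For instance (a simple arithmetic check): in case (i) with \(m=13\) and \(\alpha=\frac{3}{2}\beta>0\), we have \((m-1)\beta-6\alpha=3\beta>0\) and \(\Lambda=13-9=4\), so the hypothesis of Theorem 3.7 reads \(\int_{R}^{\infty}\operatorname{Vol}_{g}^{-\frac{1}{5}}(\partial B(r))dr\geq CR^{-\frac{4}{5}}\).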
## 4 The second variation formula
For any smooth map \(\Psi:(-\varepsilon,\varepsilon)\times(-\varepsilon,\varepsilon)\times M\to N\), we write \(u_{s,t}(x)=\Psi(s,t,x)\), where \(\Psi(0,0,x)=u(x)\). Let
\[V=d\Psi(\frac{\partial}{\partial t})\big{|}_{s,t=0},\quad W=d\Psi(\frac{ \partial}{\partial s})\big{|}_{s,t=0}\]
be the variation vector fields of the deformation \(u_{s,t}.\)
**Theorem 4.1** (The second variation formula).: _Suppose \(u:M^{m}\to N\) is a \(\Phi_{(3)}\)-harmonic map for the functional \(E_{\Phi_{(3)}}\) and \(u_{s,t}:M^{m}\to N,(-\varepsilon<s,t<\varepsilon)\) is a compactly supported two-parameter variation. Then we have_
\[I(V,W) =\frac{\partial^{2}}{\partial s\partial t}E_{\Phi_{(3)}}(u_{s,t})\big{|}_{s,t=0}\] \[=\int_{M}\sum_{i=1}^{m}h\big{(}R^{N}\big{(}V,du(e_{i})\big{)}W,d_{(3)}u(e_{i})\big{)}dv_{g}\] \[\quad+\int_{M}\sum_{i,j,k=1}^{m}h\big{(}\widetilde{\nabla}_{e_{i}}V,\widetilde{\nabla}_{e_{k}}W\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}du(e_{j}),du(e_{k})\big{)}dv_{g}\] \[\quad+\int_{M}\sum_{i,j,k=1}^{m}h\big{(}\widetilde{\nabla}_{e_{i}}V,du(e_{k})\big{)}h\big{(}\widetilde{\nabla}_{e_{i}}W,du(e_{j})\big{)}h\big{(}du(e_{j}),du(e_{k})\big{)}dv_{g}\] \[\quad+\int_{M}\sum_{i,j,k=1}^{m}h\big{(}\widetilde{\nabla}_{e_{i}}V,du(e_{k})\big{)}h\big{(}du(e_{i}),\widetilde{\nabla}_{e_{j}}W\big{)}h\big{(}du(e_{j}),du(e_{k})\big{)}dv_{g}\] \[\quad+\int_{M}\sum_{i,j,k=1}^{m}h\big{(}\widetilde{\nabla}_{e_{i}}V,du(e_{k})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}\widetilde{\nabla}_{e_{j}}W,du(e_{k})\big{)}dv_{g}\] \[\quad+\int_{M}\sum_{i,j,k=1}^{m}h\big{(}\widetilde{\nabla}_{e_{i}}V,du(e_{k})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}du(e_{j}),\widetilde{\nabla}_{e_{k}}W\big{)}dv_{g},\]
_where \(R^{N}\) denotes the curvature tensor of \(N.\)_
A \(\Phi_{(3)}\)-harmonic map is called stable if \(I(V,V)\geq 0\) for any compactly supported vector field \(V\) along \(u\).
Proof.: We still use the symbols \(\nabla\) and \(\widetilde{\nabla}\) to denote the Levi-Civita connection on \((-\varepsilon,\varepsilon)\times(-\varepsilon,\varepsilon)\times M\) and the induced connection on \(\Psi^{-1}TN\), respectively. We use \(\{e_{i}\}\) to denote a local orthonormal frame on \(M\) and fix a point \(x_{0}\in M\) such that \(\nabla_{e_{i}}e_{j}\big{|}_{x_{0}}=0\) for all \(i,j\).
From Theorem 2.2 and Proposition 2.3, at \(x_{0}\), we have
\[\frac{\partial^{2}}{\partial s\partial t}E_{\Phi_{(3)}}(u_{s,t}) \big{|}_{s,t=0}\] \[= -\frac{\partial}{\partial s}\int_{M}h\big{(}d\Psi(\frac{\partial }{\partial t}),\tau_{\Phi_{(3)}}u_{s,t}\big{)}\big{|}_{s,t=0}dv_{g}\] \[= -\sum_{i=1}^{m}\frac{\partial}{\partial s}\int_{M}h\big{(}d\Psi( \frac{\partial}{\partial t}),\widetilde{\nabla}_{e_{i}}d_{(3)}u_{s,t}(e_{i}) \big{)}\big{|}_{s,t=0}dv_{g}\]
\[= -\sum_{i=1}^{m}\int_{M}h\big{(}d\Psi(\frac{\partial}{\partial t}), \widetilde{\nabla}_{\frac{\partial}{\partial s}}\widetilde{\nabla}_{e_{i}}d_{(3)} u_{s,t}(e_{i})\big{)}_{\big{|}_{s,t=0}}dv_{g}\] \[= -\sum_{i=1}^{m}\int_{M}h\big{(}d\Psi(\frac{\partial}{\partial t}), \widetilde{\nabla}_{e_{i}}\widetilde{\nabla}_{\frac{\partial}{\partial s}}d_{(3) }u_{s,t}(e_{i})\big{)}_{\big{|}_{s,t=0}}dv_{g} \tag{39}\] \[+\sum_{i=1}^{m}\int_{M}h\big{(}d\Psi(\frac{\partial}{\partial t}),R^{N}\big{(}d\Psi(\frac{\partial}{\partial s}),d\Psi(e_{i})\big{)}d_{(3)}u_{s,t}(e_{i})\big{)}_{\big{|}_{s,t=0}}dv_{g}\] \[= -\sum_{i=1}^{m}\int_{M}e_{i}h\big{(}d\Psi(\frac{\partial}{ \partial t}),\widetilde{\nabla}_{\frac{\partial}{\partial s}}d_{(3)}u_{s,t}( e_{i})\big{)}_{\big{|}_{s,t=0}}dv_{g}\] \[+\sum_{i=1}^{m}\int_{M}h\big{(}\widetilde{\nabla}_{e_{i}}d\Psi( \frac{\partial}{\partial t}),\widetilde{\nabla}_{\frac{\partial}{\partial s}} d_{(3)}u_{s,t}(e_{i})\big{)}_{\big{|}_{s,t=0}}dv_{g}\] \[+\sum_{i=1}^{m}\int_{M}h\big{(}d\Psi(\frac{\partial}{\partial t}),R^{N}\big{(}d\Psi(\frac{\partial}{\partial s}),d\Psi(e_{i})\big{)}d_{(3)}u_{ s,t}(e_{i})\big{)}_{\big{|}_{s,t=0}}dv_{g}.\]
We compute the second term on the right-hand side of (39):
\[\sum_{i=1}^{m}\int_{M}h\big{(}\widetilde{\nabla}_{e_{i}}d\Psi( \frac{\partial}{\partial t}),\widetilde{\nabla}_{\frac{\partial}{\partial s}} d_{(3)}u_{s,t}(e_{i})\big{)}_{\big{|}_{s,t=0}}dv_{g} \tag{40}\] \[= \int_{M}\sum_{i,j,k=1}^{m}h\big{(}\widetilde{\nabla}_{e_{i}}V, \widetilde{\nabla}_{e_{k}}W\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}h\big{(} du(e_{j}),du(e_{k})\big{)}dv_{g}\] \[+\int_{M}\sum_{i,j,k=1}^{m}h\big{(}\widetilde{\nabla}_{e_{i}}V, du(e_{k})\big{)}h\big{(}\widetilde{\nabla}_{e_{i}}W,du(e_{j})\big{)}h\big{(} du(e_{j}),du(e_{k})\big{)}dv_{g}\] \[+\int_{M}\sum_{i,j,k=1}^{m}h\big{(}\widetilde{\nabla}_{e_{i}}V, du(e_{k})\big{)}h\big{(}du(e_{i}),\widetilde{\nabla}_{e_{j}}W\big{)}h\big{(} du(e_{j}),du(e_{k})\big{)}dv_{g}\] \[+\int_{M}\sum_{i,j,k=1}^{m}h\big{(}\widetilde{\nabla}_{e_{i}}V, du(e_{k})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}\widetilde{\nabla}_{e_{j}}W, du(e_{k})\big{)}dv_{g}\] \[+\int_{M}\sum_{i,j,k=1}^{m}h\big{(}\widetilde{\nabla}_{e_{i}}V, du(e_{k})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}du(e_{j}),\widetilde{ \nabla}_{e_{k}}W\big{)}dv_{g}.\]
The integrand of the first term on the right-hand side of (39) is
\[e_{i}h\big{(}d\Psi(\frac{\partial}{\partial t}),\widetilde{ \nabla}_{\frac{\partial}{\partial s}}d_{(3)}u_{s,t}(e_{i})\big{)}\] \[= e_{i}h\bigg{(}d\Psi(\frac{\partial}{\partial t}),\widetilde{ \nabla}_{\frac{\partial}{\partial s}}\bigg{(}h\big{(}d\Psi(e_{i}),d\Psi(e_{j}) \big{)}h\big{(}d\Psi(e_{j}),d\Psi(e_{k})\big{)}d\Psi(e_{k})\bigg{)}\bigg{)}\] \[= e_{i}\bigg{(}h\big{(}d\Psi(\frac{\partial}{\partial t}), \widetilde{\nabla}_{e_{k}}d\Psi(\frac{\partial}{\partial s})\big{)}h\big{(}d \Psi(e_{i}),d\Psi(e_{j})\big{)}h\big{(}d\Psi(e_{j}),d\Psi(e_{k})\big{)}\bigg{)}\] \[+e_{i}\bigg{(}h\big{(}d\Psi(\frac{\partial}{\partial t}),d\Psi(e_{ k})\big{)}h\big{(}\widetilde{\nabla}_{e_{i}}d\Psi(\frac{\partial}{\partial s }),d\Psi(e_{j})\big{)}h\big{(}d\Psi(e_{j}),d\Psi(e_{k})\big{)}\bigg{)}\]
\[+e_{i}\bigg{(}h\big{(}d\Psi(\frac{\partial}{\partial t}),d\Psi(e_{k})\big{)}h\big{(}d\Psi(e_{i}),\widetilde{\nabla}_{e_{j}}d\Psi(\frac{\partial}{\partial s})\big{)}h\big{(}d\Psi(e_{j}),d\Psi(e_{k})\big{)}\bigg{)}\] \[+e_{i}\bigg{(}h\big{(}d\Psi(\frac{\partial}{\partial t}),d\Psi(e_{k})\big{)}h\big{(}d\Psi(e_{i}),d\Psi(e_{j})\big{)}h\big{(}d\Psi(e_{j}),\widetilde{\nabla}_{e_{k}}d\Psi(\frac{\partial}{\partial s})\big{)}\bigg{)}. \tag{41}\]
Define compactly supported vector fields \(X_{1}\), \(X_{2}\), \(X_{3}\), \(X_{4}\) and \(X_{5}\) on \(M\) by requiring that, for every vector field \(Y\) on \(M\),
\[g(X_{1},Y) =h\big{(}V,\widetilde{\nabla}_{e_{k}}W\big{)}h\big{(}du(Y),du(e_ {j})\big{)}h\big{(}du(e_{j}),du(e_{k})\big{)},\] \[g(X_{2},Y) =h\big{(}V,du(e_{k})\big{)}h\big{(}\widetilde{\nabla}_{Y}W,du(e_ {j})\big{)}h\big{(}du(e_{j}),du(e_{k})\big{)},\] \[g(X_{3},Y) =h\big{(}V,du(e_{k})\big{)}h\big{(}du(Y),\widetilde{\nabla}_{e_{ j}}W\big{)}h\big{(}du(e_{j}),du(e_{k})\big{)},\] \[g(X_{4},Y) =h\big{(}V,du(e_{k})\big{)}h\big{(}du(Y),du(e_{j})\big{)}h\big{(} \widetilde{\nabla}_{e_{j}}W,du(e_{k})\big{)},\] \[g(X_{5},Y) =h\big{(}V,du(e_{k})\big{)}h\big{(}du(Y),du(e_{j})\big{)}h\big{(} du(e_{j}),\widetilde{\nabla}_{e_{k}}W\big{)}.\]
Hence, when \(s=0\) and \(t=0\), (41) becomes
\[\sum_{i=1}^{m}\bigg{(}e_{i}g(X_{1},e_{i})+e_{i}g(X_{2},e_{i})+e_ {i}g(X_{3},e_{i})+e_{i}g(X_{4},e_{i})+e_{i}g(X_{5},e_{i})\bigg{)} \tag{42}\] \[= \operatorname{div}(X_{1})+\operatorname{div}(X_{2})+\operatorname {div}(X_{3})+\operatorname{div}(X_{4})+\operatorname{div}(X_{5}).\]
Since the variation is compactly supported, the integrals of the divergence terms in (42) vanish; the result then follows from (39)-(42).
## 5 Examples of \(\Phi_{(3)}\)-SSU manifolds
Proceeding as in [52], we obtain many examples of \(\Phi_{(3)}\)-SSU manifolds.
**Theorem 5.1**.: _A hypersurface \(M\) in Euclidean space is \(\Phi_{(3)}\)-SSU if and only if its principal curvatures satisfy_
\[0<\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{m}<\frac{1}{5}(\lambda_{1}+ \cdots+\lambda_{m-1}).\]
Proof.: Similar to the proof of Theorem 5.1 in [21], from the definition of \(\Phi_{(3)}\)-SSU, for a unit principal direction \(v=e_{k}\) we have

\[\sum_{i=1}^{m}\big{(}6\langle B(v,e_{i}),B(v,e_{i})\rangle-\langle B(v,v),B(e_{i},e_{i})\rangle\big{)}=\lambda_{k}\big{(}6\lambda_{k}-\sum_{i=1}^{m}\lambda_{i}\big{)}\leq\lambda_{k}\big{(}6\lambda_{m}-\sum_{i=1}^{m}\lambda_{i}\big{)}=\lambda_{k}\big{(}5\lambda_{m}-\sum_{i=1}^{m-1}\lambda_{i}\big{)}.\]

This is negative for every \(k\) if and only if \(\lambda_{m}<\frac{1}{5}(\lambda_{1}+\cdots+\lambda_{m-1})\): sufficiency follows from the estimate above, and necessity follows by taking \(k=m\). This completes the proof.
Then we have
**Corollary 5.2**.: _The standard sphere \(S^{m}\) is \(\Phi_{(3)}\)-SSU if and only if \(m>6.\)_
Proof.: As \(S^{m}\) is a compact convex hypersurface in \(\mathbb{R}^{m+1}\), its principal curvatures satisfy

\[\lambda_{1}=\lambda_{2}=\cdots=\lambda_{m}=1.\]

According to Theorem 5.1, \(S^{m}\) is therefore \(\Phi_{(3)}\)-SSU if and only if \(1<\frac{1}{5}(m-1)\), that is, \(m>6\). This completes the proof.
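As an illustration only (not part of the paper's argument), the following Python sketch tests the criterion of Theorem 5.1 numerically for a given list of principal curvatures; the function name is ours.

```python
# Numerical sketch (ours, for illustration): test the hypersurface criterion
# of Theorem 5.1, namely
#   0 < l_1 <= ... <= l_m   and   l_m < (l_1 + ... + l_{m-1}) / 5,
# for a given list of principal curvatures.
def is_phi3_ssu_hypersurface(curvatures):
    ls = sorted(curvatures)
    if ls[0] <= 0:  # all principal curvatures must be positive
        return False
    return ls[-1] < sum(ls[:-1]) / 5.0

# Unit spheres, where all principal curvatures equal 1:
print(is_phi3_ssu_hypersurface([1.0] * 7))  # S^7: True, since m = 7 > 6
print(is_phi3_ssu_hypersurface([1.0] * 6))  # S^6: False, since m = 6
```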
**Corollary 5.3**.: _The graph of \(f(x)=x_{1}^{2}+\cdots+x_{m}^{2},\)\(x=(x_{1},\cdots,x_{m})\in\mathbb{R}^{m}\) is \(\Phi_{(3)}\)-\(\mathrm{SSU}\) if and only if \(m>6.\)_
**Lemma 5.4** ([55]).: _A Euclidean hypersurface is \(p\)-\(\mathrm{SSU}\) if and only if its principal curvatures satisfy_
\[0<\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{m}<\frac{1}{p-1}(\lambda_{ 1}+\cdots+\lambda_{m-1}).\]
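Note that the case \(p=6\) of Lemma 5.4 recovers precisely the criterion of Theorem 5.1; this is consistent with Theorem 5.5 below.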
**Theorem 5.5**.: _Every \(\Phi_{(3)}\)-\(\mathrm{SSU}\) manifold \(M\) is \(p\)-\(\mathrm{SSU}\) for any \(2\leq p\leq 6.\)_
Proof.: By definition, a \(\Phi_{(3)}\)-\(\mathrm{SSU}\) manifold satisfies
\[F_{\Phi_{(3)},x}(v)=\sum_{i=1}^{m}\left(6\langle B(v,e_{i}),B(v,e_{i})\rangle_ {\mathbb{R}^{q}}-\langle B(v,v),B(e_{i},e_{i})\rangle_{\mathbb{R}^{q}}\right)<0 \tag{43}\]
for all unit tangent vectors \(v\in T_{x}(M)\). It follows that
\[\begin{split} F_{p,x}(v)&=(p-2)\langle B(v,v),B(v,v)\rangle_{\mathbb{R}^{q}}+\langle Q_{x}^{M}(v),v\rangle_{M}\\ &\leq(p-2)\sum_{i=1}^{m}\langle B(v,e_{i}),B(v,e_{i})\rangle_{\mathbb{R}^{q}}\\ &\quad+\sum_{i=1}^{m}\left(2\langle B(v,e_{i}),B(v,e_{i})\rangle_{\mathbb{R}^{q}}-\langle B(v,v),B(e_{i},e_{i})\rangle_{\mathbb{R}^{q}}\right)\\ &=\sum_{i=1}^{m}\left(p\langle B(v,e_{i}),B(v,e_{i})\rangle-\langle B(v,v),B(e_{i},e_{i})\rangle\right)\\ &\leq\sum_{i=1}^{m}\left(6\langle B(v,e_{i}),B(v,e_{i})\rangle-\langle B(v,v),B(e_{i},e_{i})\rangle\right)<0,\end{split} \tag{44}\]
for \(2\leq p\leq 6\). So, \(M\) is \(p\)-\(\mathrm{SSU}\) for any \(2\leq p\leq 6\).
**Theorem 5.6** (Topological Vanishing Theorems).: _Every compact \(\Phi_{(3)}\)-\(\mathrm{SSU}\) manifold \(M\) is \(6\)-connected, i.e.,_
\[\pi_{1}(M)=\cdots=\pi_{6}(M)=0. \tag{45}\]
Proof.: Since every compact \(p\)-\(\mathrm{SSU}\) manifold is \([p]\)-connected (cf. [53], Theorem 3.10) and \(M\) is \(6\)-\(\mathrm{SSU}\) by the previous theorem, the result follows.
**Theorem 5.7**.: _The dimension of any compact \(\Phi_{(3)}\)-\(\mathrm{SSU}\) manifold \(M\) is greater than \(6\)._
Proof.: Suppose that \(m\leq 6\). Then \(M\) is not a \(6\)-\(\mathrm{SSU}\) manifold (cf. [55] Theorem 3.10). By the preceding theorem, \(M\) is not a \(\Phi_{(3)}\)-\(\mathrm{SSU}\) manifold. Hence, the dimension of any compact \(\Phi_{(3)}\)-\(\mathrm{SSU}\) manifold \(M\) is greater than \(6\).
**Theorem 5.8** (Sphere Theorems).: _Every compact \(\Phi_{(3)}\)-SSU manifold \(M\) of dimension \(m\leq 13\) is homeomorphic to an \(m\)-sphere._
Proof.: In view of Theorem 5.6, \(M\) is \(6\)-connected. By the Hurewicz isomorphism theorem, the \(6\)-connectedness of \(M\) implies that the homology groups \(H_{1}(M)=\cdots=H_{6}(M)=0\). It follows from the Poincaré duality theorem and the Hurewicz isomorphism theorem (cf. E. Spanier [44]) again that \(H_{m-6}(M)=\cdots=H_{m-1}(M)=0\), \(H_{m}(M)\neq 0\) for \(m\leq 13\), and \(M\) is \((m-1)\)-connected. Hence, \(M\) is a homotopy \(m\)-sphere for \(m\leq 13\). Since \(M\) is a \(\Phi_{(3)}\)-SSU manifold, \(m\geq 7\). Consequently, a homotopy \(m\)-sphere \(M\) with \(m\geq 7\) is homeomorphic to an \(m\)-sphere by a theorem of S. Smale [42].
**Theorem 5.9**.: _Suppose that \(\widetilde{M}\) is a compact convex hypersurface of \(\mathbb{R}^{q}\) and the principal curvatures of \(\widetilde{M}\) satisfy_
\[0<\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{q-1}.\]
_If \(\langle Ric^{M}(v),v\rangle>\frac{5}{6}k\lambda_{q-1}^{2}\), where \(M\) is a compact connected minimal \(k\)-submanifold of \(\widetilde{M}\) and \(v\) is any unit tangent vector to \(M\), then \(M\) is \(\Phi_{(3)}\)-SSU._
Proof.: We denote the second fundamental forms of \(M\) in \(\mathbb{R}^{q}\), of \(M\) in \(\widetilde{M}\), and of \(\widetilde{M}\) in \(\mathbb{R}^{q}\) by \(B\), \(B_{1}\) and \(\widetilde{B}\), respectively. According to the Gauss equation, we get
\[B(X,Y)=B_{1}(X,Y)+\widetilde{B}(X,Y)\vartheta, \tag{46}\]
where \(\vartheta\) is the unit normal field of \(\widetilde{M}\) in \(\mathbb{R}^{q}\). By the definition of minimal submanifold, we have
\[\sum_{i=1}^{k}B(e_{i},e_{i})=\sum_{i=1}^{k}B_{1}(e_{i},e_{i})+\sum_{i=1}^{k}\widetilde{B}(e_{i},e_{i})\vartheta=\sum_{i=1}^{k}\widetilde{B}(e_{i},e_{i})\vartheta, \tag{47}\]
where \(\{e_{i}\}_{i=1}^{k}\) is a local orthonormal frame on \(M\). Denote \(\widetilde{B}(e_{i},e_{j})=\lambda_{i}\delta_{ij}\).
Hence,
\[\sum_{i=1}^{k}\left(6\langle B(v,e_{i}),B(v,e_{i})\rangle-\langle B(v,v),B(e_{ i},e_{i})\rangle\right)\]
\[= -6\langle Ric^{M}(v),v\rangle+5\sum_{i=1}^{k}\langle B(v,v),B(e_{i},e_{i})\rangle\] \[= -6\langle Ric^{M}(v),v\rangle+5\sum_{i=1}^{k}\widetilde{B}(v,v) \widetilde{B}(e_{i},e_{i})\] \[\leq -6\langle Ric^{M}(v),v\rangle+5\sum_{i=1}^{k}\lambda_{i}\lambda_ {q-1}\] \[\leq -6\langle Ric^{M}(v),v\rangle+5k\lambda_{q-1}^{2}<0.\]
The first equality holds due to the Gauss equation. This gives the desired result.
The following lemma will be used in a later proof. For the second fundamental form of an ellipsoid in \(\mathbb{R}^{m+1}\), we have
**Lemma 5.10** ([24, 41, 56]).: _Let \(\{\lambda_{i}\}_{i=1}^{m}\) be the principal curvatures of \(E^{m}\) in \(\mathbb{R}^{m+1}\) with \(0<\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{m}\). Then_
\[\frac{\min(a_{i})}{\big{(}\max(a_{i})\big{)}^{2}}\leq\lambda_{1}\leq\lambda_{2 }\leq\cdots\leq\lambda_{m}\leq\frac{\max(a_{i})}{\big{(}\min(a_{i})\big{)}^{2}},\]
_where_
\[E^{m}=\left\{(x_{1},\cdots,x_{m+1})\in\mathbb{R}^{m+1}:\frac{x_{1}^{2}}{a_{1}^ {2}}+\cdots+\frac{x_{m+1}^{2}}{a_{m+1}^{2}}=1,a_{i}>0,1\leq i\leq m+1\right\}.\]
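For example (a quick consistency check): if \(a_{1}=\cdots=a_{m+1}=a\), then \(E^{m}\) is the round sphere of radius \(a\), and both bounds in Lemma 5.10 collapse to \(\frac{a}{a^{2}}=\frac{1}{a}\), the common value of the principal curvatures.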
The following results can be proved by using Theorem 5.9 in the same way; we omit the details.
**Theorem 5.11**.: _Suppose that \(M\) is a compact minimal \(k\)-submanifold of an ellipsoid \(E^{q-1}\) in \(\mathbb{R}^{q}\) and \(v\) is any unit tangent vector to \(M.\) Then \(M\) is \(\Phi_{(3)}\)-SSU when_
\[\langle Ric^{M}(v),v\rangle>\frac{5}{6}\frac{\big{(}\max_{1\leq i\leq q}(a_{i} )\big{)}^{2}}{\big{(}\min_{1\leq i\leq q}(a_{i})\big{)}^{4}}k.\]
**Corollary 5.12**.: _Suppose that \(M\) is a compact minimal \(k\)-submanifold of the unit sphere \(S^{q-1}\) and \(v\) is any unit tangent vector to \(M.\) Then \(M\) is \(\Phi_{(3)}\)-SSU when \(\langle Ric^{M}(v),v\rangle>\frac{5}{6}k.\)_
**Theorem 5.13**.: _Suppose that \(M\) is a compact \(k\)-submanifold of the unit sphere \(S^{q-1}\) and \(B_{1}\) is the second fundamental form of \(M\) in \(S^{q-1}\). Then \(M\) is \(\Phi_{(3)}\)-SSU when_
\[\|B_{1}\|^{2}<\frac{k-6}{\sqrt{k}+6},\]
_where \(k>6.\)_
Proof.: By Theorem 5.9 and Cauchy-Schwarz inequality, we get
\[\sum_{i=1}^{k}\big{(}6\langle B(v,e_{i}),B(v,e_{i})\rangle-\langle B (v,v),B(e_{i},e_{i})\rangle\big{)}\] \[= \sum_{i=1}^{k}\big{(}6\langle B_{1}(v,e_{i}),B_{1}(v,e_{i}) \rangle-\langle B_{1}(v,v),B_{1}(e_{i},e_{i})\rangle\big{)}-(k-6)\] \[\leq \sum_{i=1}^{k}6\langle B_{1}(v,e_{i}),B_{1}(v,e_{i})\rangle+|B_{ 1}(v,v)|\big{(}\big{|}\sum_{i=1}^{k}B_{1}(e_{i},e_{i})\big{|}^{2}\big{)}^{\frac {1}{2}}-(k-6)\] \[\leq 6\|B_{1}\|^{2}+\sqrt{k}|B_{1}(v,v)|\big{(}\sum_{i=1}^{k}|B_{1}( e_{i},e_{i})|^{2}\big{)}^{\frac{1}{2}}-(k-6)\] \[\leq (6+\sqrt{k})\|B_{1}\|^{2}-(k-6)<0.\]
Hence, by the definition of the \(\Phi_{(3)}\)-SSU, we get the desired result.
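For a concrete instance (our arithmetic, for illustration): with \(k=10\), the hypothesis of Theorem 5.13 reads \(\|B_{1}\|^{2}<\frac{4}{\sqrt{10}+6}\approx 0.44\); in particular, a totally geodesic \(S^{10}\subset S^{q-1}\) (with \(B_{1}=0\)) satisfies it.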
## 6 Stable \(\Phi_{(3)}\)-harmonic maps from \(\Phi_{(3)}\)-SSU manifolds
We recall some definitions and facts about submanifolds that will be used in the following results; see [24].
Let \(M^{m}\) be isometrically immersed in the Euclidean space \(\mathbb{R}^{q}\) and let \(B\) be the second fundamental form of \(M\) in \(\mathbb{R}^{q}\). We denote the standard flat connection of \(\mathbb{R}^{q}\) and the Riemannian connection on \(M\) by \(\overline{\nabla}\) and \(\nabla\), respectively. These are related by
\[\overline{\nabla}_{X}Y=\nabla_{X}Y+B(X,Y),\]
where \(X,Y\) are smooth vector fields on \(M.\) The tensors \(A\) and \(B\) are related by
\[\langle A^{\eta}X,Y\rangle=\langle B(X,Y),\eta\rangle, \tag{48}\]
where \(A^{\eta}\) is the Weingarten map associated with the normal vector field \(\eta\in T^{\perp}M\).
For each \(x\in M\), we denote an orthonormal basis for the normal space \(T^{\perp}_{x}M\) to \(M\) at \(x\) by \(\{e_{m+1},\cdots,e_{q}\}.\) Let \(v\in T_{x}M\). The Ricci tensor \(Ric^{M}:T_{x}M\to T_{x}M\) is defined by
\[Ric^{M}(v)=\sum_{i=1}^{m}R(v,e_{i})e_{i}.\]
From the Gauss curvature equation, we have
\[Ric^{M}=\sum_{\alpha=m+1}^{q}\mathrm{trace}(A^{e_{\alpha}})A^{e_{\alpha}}- \sum_{\alpha=m+1}^{q}A^{e_{\alpha}}A^{e_{\alpha}}. \tag{49}\]
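As a sanity check of (49): for the unit sphere \(S^{m}\subset\mathbb{R}^{m+1}\) we have \(A^{e_{m+1}}=\pm\operatorname{id}\) (depending on the orientation of the normal), so (49) gives \(Ric^{M}=m\operatorname{id}-\operatorname{id}=(m-1)\operatorname{id}\), the familiar Ricci curvature of \(S^{m}\).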
Then we have
**Theorem 6.1**.: _Let \((M^{m},g)\) be a compact \(\Phi_{(3)}\)-SSU manifold and \((N,h)\) be a compact Riemannian manifold. Then every stable \(\Phi_{(3)}\)-harmonic map \(u:(M^{m},g)\rightarrow(N,h)\) is constant._
Proof.: Let \(\{v_{\ell}^{\top}\}\) be the tangential projections onto \(M\) of an orthonormal frame field \(\{v_{\ell}\}_{\ell=1}^{q}\) in \(\mathbb{R}^{q}\). For convenience, we choose \(\{v_{1},\cdots,v_{m}\}=\{e_{1},\cdots,e_{m}\}\) to be tangential to \(M\), \(\{v_{m+1},\cdots,v_{q}\}=\{e_{m+1},\cdots,e_{q}\}\) to be normal to \(M\), and \(\nabla_{e_{j}}e_{i}=0\) at a point \(x_{0}\in M\). Since \(v_{\ell}^{\top}=v_{\ell}-v_{\ell}^{\perp}\) and the \(v_{\ell}\) are parallel in \(\mathbb{R}^{q}\), we have
\[\begin{split}\nabla^{u}_{e_{i}}du(v_{\ell}^{\top})& =(\nabla^{u}_{e_{i}}du)(v_{\ell}^{\top})+du(\nabla^{M}_{e_{i}}v_{ \ell}^{\top})=(\nabla^{u}_{e_{i}}du)(v_{\ell}^{\top})+du\bigg{(}\left(\nabla^{ \mathbb{R}^{q}}_{e_{i}}(v_{\ell}-v_{\ell}^{\perp})\right)^{\top}\bigg{)}\\ &=(\nabla^{u}_{e_{i}}du)(v_{\ell}^{\top})+du\left(A^{v_{\ell}^{ \perp}}(e_{i})\right)\,.\end{split} \tag{50}\]
In view of (48), we have
\[\nabla^{u}_{e_{i}}du(v_{\ell}^{\top})=(\nabla^{u}_{e_{i}}du)(v_{\ell}^{\top})+\sum_{k=1}^{m}B^{\ell}_{ik}du(e_{k}), \tag{51}\]

where \(B^{\ell}_{ij}=\langle B(e_{i},e_{j}),e_{\ell}\rangle\) are the components of \(B\), with the convention \(B^{\ell}_{ij}=0\) for \(1\leq\ell\leq m\).
According to Proposition 2.3, we have
\[\sum_{\ell=1}^{q}\int_{M}\langle(\Delta du)(v_{\ell}),d_{(3)}u(v_{ \ell})\rangle dv_{g}=\int_{M}\sum_{i,j,\ell}\delta_{ij}\langle(\Delta du)(e_{i}),d_{(3)}u(e_{j})\rangle dv_{g}\] \[=\int_{M}\sum_{i=1}^{m}\langle(\Delta du)(e_{i}),d_{(3)}u(e_{i}) \rangle dv_{g}=\int_{M}\langle(\Delta du),d_{(3)}u\rangle dv_{g}=\int_{M} \langle\delta du,\delta d_{(3)}u\rangle dv_{g}\] \[=-\int_{M}\langle\delta du,\tau_{\Phi_{(3)}}(u)\rangle dv_{g}=0. \tag{52}\]
By using the Weitzenböck formula, we have
\[-\sum_{k}R^{N}\big{(}du(X),du(e_{k})\big{)}du(e_{k})+du\big{(}Ric^{M}(X)\big{)} =(\Delta du)(X)+(\nabla^{2}du)(X),\]
where \(X\) is a smooth vector field on \(M\). We assume \(i,j,k,\hbar,\imath\in\{1,\cdots,m\}\), \(\alpha,\beta\in\{m+1,\cdots,q\}\), and \(\ell\in\{1,\cdots,q\}\). Hence,
\[\sum_{\ell=1}^{q}I\big{(}du(v_{\ell}^{\top}),du(v_{\ell}^{\top}) \big{)}\] \[= -\int_{M}\sum_{i=1}^{m}h\big{(}du(Ric^{M}(e_{i})),d_{(3)}u(e_{i}) \big{)}dv_{g}+\int_{M}\sum_{i=1}^{m}h\big{(}(\nabla^{2}du)(e_{i}),d_{(3)}u(e_{ i})\big{)}dv_{g}\] \[+\int_{M}\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}}du(v _{\ell}^{\top}),\widetilde{\nabla}_{e_{k}}du(v_{\ell})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}dv_{g}\] \[+\int_{M}\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}}du(v _{\ell}^{\top}),du(e_{k})\big{)}h\big{(}\widetilde{\nabla}_{e_{k}}du(v_{\ell} ),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}dv_{g} \tag{53}\] \[+\int_{M}\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}}du(v _{\ell}^{\top}),du(e_{k})\big{)}h\big{(}du(e_{k}),\widetilde{\nabla}_{e_{j}} du(v_{\ell})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}dv_{g}\] \[+\int_{M}\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}}du(v _{\ell}^{\top}),du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(} \widetilde{\nabla}_{e_{i}}du(v_{\ell}^{\top}),du(e_{j})\big{)}dv_{g}\] \[+\int_{M}\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}}du(v _{\ell}^{\top}),du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{ i}),\widetilde{\nabla}_{e_{j}}du(v_{\ell}^{\top})\big{)}dv_{g}.\]
We compute at \(x_{0}\). Via (49), the first integrand in (53) is
\[\sum_{i=1}^{m}h\big{(}du(Ric^{M}(e_{i})),d_{(3)}u(e_{i})\big{)} \tag{54}\] \[= \sum_{i,\alpha}h\big{(}du(\mathrm{trace}(A^{e_{\alpha}})A^{e_{ \alpha}}(e_{i})),d_{(3)}u(e_{i})\big{)}-\sum_{i,\alpha}h\big{(}du(A^{e_{\alpha} }A^{e_{\alpha}}(e_{i})),d_{(3)}u(e_{i})\big{)}.\]
The second integrand in (53) is
\[\begin{split}&\sum_{i=1}^{m}h\big{(}(\nabla^{2}du)(e_{i}),d_{(3)}u(e_{i})\big{)}\\ =&\sum_{i,j,k}h\big{(}(\nabla^{2}du)(e_{i}),du(e_{k})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}du(e_{j}),du(e_{k})\big{)}\\ =&\sum_{i,j,k,\hbar}e_{\hbar}\bigg{(}h\big{(}\nabla_{e_{\hbar}}du(e_{i}),du(e_{k})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}du(e_{j}),du(e_{k})\big{)}\bigg{)}\\ &-\sum_{i,j,k,\hbar}h\big{(}\nabla_{e_{\hbar}}du(e_{i}),\nabla_{e_{\hbar}}du(e_{k})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}du(e_{j}),du(e_{k})\big{)}\\ &-\sum_{i,j,k,\hbar}h\big{(}\nabla_{e_{\hbar}}du(e_{i}),du(e_{k})\big{)}h\big{(}\nabla_{e_{\hbar}}du(e_{i}),du(e_{j})\big{)}h\big{(}du(e_{j}),du(e_{k})\big{)}\\ &-\sum_{i,j,k,\hbar}h\big{(}\nabla_{e_{\hbar}}du(e_{i}),du(e_{k})\big{)}h\big{(}du(e_{i}),\nabla_{e_{\hbar}}du(e_{j})\big{)}h\big{(}du(e_{j}),du(e_{k})\big{)}\\ &-\sum_{i,j,k,\hbar}h\big{(}\nabla_{e_{\hbar}}du(e_{i}),du(e_{k})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}\nabla_{e_{\hbar}}du(e_{j}),du(e_{k})\big{)}\\ &-\sum_{i,j,k,\hbar}h\big{(}\nabla_{e_{\hbar}}du(e_{i}),du(e_{k})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}du(e_{j}),\nabla_{e_{\hbar}}du(e_{k})\big{)}.\end{split} \tag{55}\]
The third integrand in (53) is
\[\begin{split}&\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}} du(v_{\ell}^{\top}),\widetilde{\nabla}_{e_{k}}du(v_{\ell}^{\top})\big{)}h\big{(}du(e_{k}),du (e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}\\ =&\sum_{i,j,k,h,\alpha,\beta}h\bigg{(}B_{ih}^{\alpha }du(e_{h})+\widetilde{\nabla}_{e_{i}}du(e_{h}),B_{ki}^{\beta}du(e_{i})\\ &+\widetilde{\nabla}_{e_{k}}du(e_{i})\bigg{)}h\big{(}du(e_{k}), du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}\\ =&\sum_{i,j,k,h,\imath,\alpha}B_{ih}^{\alpha}B_{ki} ^{\alpha}h\big{(}du(e_{h}),du(e_{\imath})\big{)}h\big{(}du(e_{k}),du(e_{j}) \big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}\\ &+\sum_{i,j,k,h}h\big{(}(\nabla_{e_{h}}du)(e_{i}),(\nabla_{e_{h} }du)(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j })\big{)}\\ =&\sum_{i,j,k,\alpha}h\big{(}du(A^{e_{\alpha}}(e_{i} )),du(A^{e_{\alpha}}(e_{k}))\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(} du(e_{i}),du(e_{j})\big{)}\\ &+\sum_{i,j,k,h}h\big{(}(\nabla_{e_{h}}du)(e_{i}),(\nabla_{e_{h} }du)(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j })\big{)}.\end{split} \tag{56}\]
The fourth integrand in (53) is
\[\begin{split}&\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}} du(v_{\ell}^{\top}),du(e_{k})\big{)}h\big{(}\widetilde{\nabla}_{e_{k}}du(v_{\ell}^{ \top}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}\\ =&\sum_{i,j,k,h,\alpha,\beta}h\big{(}B_{ih}^{\alpha }du(e_{h})+\widetilde{\nabla}_{e_{i}}du(e_{h}),du(e_{k})\big{)}\\ &\cdot h\big{(}B_{ki}^{\beta}du(e_{\imath})+\widetilde{\nabla}_{e _{k}}du(e_{\imath}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}\end{split}\]
\[=\sum_{i,j,k,\alpha}B^{\alpha}_{i\bar{h}}B^{\alpha}_{k\bar{h}}h\big{(} du(e_{h}),du(e_{k})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j}) \big{)} \tag{57}\] \[\quad+\sum_{i,j,k,h}h\big{(}\widetilde{\nabla}_{e_{i}}du(e_{h}), du(e_{k})\big{)}h\big{(}\widetilde{\nabla}_{e_{k}}du(e_{h}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j}) \big{)}\] \[=\sum_{i,j,k.\alpha}h\big{(}du(A^{e_{\alpha}}(e_{i})),du(e_{k}) \big{)}h\big{(}du(A^{e_{\alpha}}(e_{k})),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e _{j})\big{)}\] \[\quad+\sum_{i,j,k,h}h\big{(}(\nabla_{e_{h}}du)(e_{i}),du(e_{k}) \big{)}h\big{(}(\nabla_{e_{h}}du)(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),du( e_{j})\big{)}.\]
The fifth integrand in (53) is
\[\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}}du(v_{\ell}^{ \top}),du(e_{k})\big{)}h\big{(}du(e_{k}),\widetilde{\nabla}_{e_{j}}du(v_{\ell}^ {\top})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)} \tag{58}\] \[=\sum_{i,j,k,h,i,\alpha,\beta}h\big{(}B^{\alpha}_{i\bar{h}}du(e_{ \bar{h}})+\widetilde{\nabla}_{e_{i}}du(e_{\bar{h}}),du(e_{k})\big{)}\] \[\quad\cdot h\big{(}du(e_{k}),B^{\beta}_{j_{1}}du(e_{\bar{\imath}} )+\widetilde{\nabla}_{e_{j}}du(e_{\bar{\imath}})\big{)}h\big{(}du(e_{i}),du( e_{j})\big{)}\] \[=\sum_{i,j,k,h,\alpha}B^{\alpha}_{i\bar{h}}B^{\alpha}_{j_{1}}h \big{(}du(e_{\bar{h}}),du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{\bar{\imath}}) \big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}\] \[\quad+\sum_{i,j,k,h}h\big{(}\widetilde{\nabla}_{e_{i}}du(e_{h}), du(e_{k})\big{)}h\big{(}du(e_{k}),\widetilde{\nabla}_{e_{j}}du(e_{h})\big{)}h \big{(}du(e_{i}),du(e_{j})\big{)}\] \[=\sum_{i,j,k.\alpha}h\big{(}du(A^{e_{\alpha}}(e_{i})),du(e_{k}) \big{)}h\big{(}du(e_{k}),du(A^{e_{\alpha}}(e_{j}))\big{)}h\big{(}du(e_{i}),du (e_{j})\big{)}\] \[\quad+\sum_{i,j,k,h}h\big{(}(\nabla_{e_{h}}du)(e_{i}),du(e_{k}) \big{)}h\big{(}du(e_{k}),(\nabla_{e_{h}}du)(e_{j})\big{)}h\big{(}du(e_{i}),du (e_{j})\big{)}.\]
The sixth integrand in (53) is
\[\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}}du(v_{\ell}^{ \top}),du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}\widetilde{ \nabla}_{e_{i}}du(v_{\ell}^{\top}),du(e_{j})\big{)} \tag{59}\] \[=\sum_{i,j,k,h,\alpha}h\big{(}B^{\alpha}_{i\bar{h}}du(e_{h})+ \widetilde{\nabla}_{e_{i}}du(e_{h}),du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j} )\big{)}\] \[\quad\cdot h\big{(}B^{\alpha}_{i\bar{h}}du(e_{h})+\widetilde{ \nabla}_{e_{i}}du(e_{h}),du(e_{j})\big{)}\] \[=\sum_{i,j,k,h.\alpha}B^{\alpha}_{i\bar{h}}B^{\alpha}_{i\bar{h}}h \big{(}du(e_{h}),du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du( e_{h}),du(e_{j})\big{)}\] \[\quad+\sum_{i,j,k,h}h\big{(}\widetilde{\nabla}_{e_{i}}du(e_{h}),du( e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}\widetilde{\nabla}_{e_{i}}du( e_{h}),du(e_{j})\big{)}\] \[=\sum_{i,j,k.\alpha}h\big{(}du(A^{e_{\alpha}}(e_{i})),du(e_{k}) \big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(A^{e_{\alpha}}(e_{i})),du( e_{j})\big{)}\] \[\quad+\sum_{i,j,k,h}h\big{(}(\nabla_{e_{h}}du)(e_{i}),du(e_{k}) \big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}(\nabla_{e_{h}}du)(e_{i}),du( e_{j})\big{)}.\]
The seventh integrand in (53) is
\[\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}}du(v_{\ell}^{ \top}),du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}), \widetilde{\nabla}_{e_{j}}du(v_{\ell}^{\top})\big{)} \tag{60}\] \[= \sum_{i,j,k,h,\alpha,\alpha}h\big{(}B_{ih}^{\alpha}du(e_{h})+ \widetilde{\nabla}_{e_{i}}du(e_{h}),du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j}) \big{)}\] \[\cdot h\big{(}du(e_{i}),B_{ji}^{\beta}du(e_{i})+\widetilde{\nabla}_ {e_{j}}du(e_{i})\big{)}\] \[= \sum_{i,j,k,h,\alpha}B_{ih}^{\alpha}B_{ji}^{\alpha}h\big{(}du(e_{ h}),du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{i}) \big{)}\] \[+\sum_{i,j,k,h}h\big{(}\widetilde{\nabla}_{e_{i}}du(e_{h}),du(e_ {k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),\widetilde{ \nabla}_{e_{j}}du(e_{h})\big{)}\] \[= \sum_{i,j,k,\alpha}h\big{(}du(A^{e_{\alpha}}(e_{i})),du(e_{k}) \big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(A^{e_{\alpha}}(e _{j}))\big{)}\] \[+\sum_{i,j,k,h}h\big{(}(\nabla_{e_{h}}du)(e_{i}),du(e_{k})\big{)} h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),(\nabla_{e_{h}}du)(e_{j}) \big{)}.\]
Combining (53)-(60), we have
\[\sum_{\ell=1}^{q}I\big{(}du(v_{\ell}^{\top}),du(v_{\ell}^{\top}) \big{)} \tag{61}\] \[= -\int_{M}\sum_{i,\alpha}h\big{(}du(trace(A^{e_{\alpha}})A^{e_{ \alpha}}(e_{i})),d_{(3)}u(e_{i})\big{)}dv_{g}\] \[+\int_{M}\sum_{i,\alpha}h\big{(}du(A^{e_{\alpha}}A^{e_{\alpha}}( e_{i})),d_{(3)}u(e_{i})\big{)}dv_{g}\] \[+\int_{M}\sum_{i,j,k,\alpha}h\big{(}du(A^{e_{\alpha}}(e_{i})),du( A^{e_{\alpha}}(e_{k}))\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),du (e_{j})\big{)}dv_{g}\] \[+\int_{M}\sum_{i,j,k,\alpha}h\big{(}du(A^{e_{\alpha}}(e_{i})),du( e_{k})\big{)}h\big{(}du(A^{e_{\alpha}}(e_{k})),du(e_{j})\big{)}h\big{(}du(e_{i}),du (e_{j})\big{)}dv_{g}\] \[+\int_{M}\sum_{i,j,k,\alpha}h\big{(}du(A^{e_{\alpha}}(e_{i})),du( e_{k})\big{)}h\big{(}du(e_{k}),du(A^{e_{\alpha}}(e_{j}))\big{)}h\big{(}du(e_{i}),du (e_{j})\big{)}dv_{g}\] \[+\int_{M}\sum_{i,j,k,\alpha}h\big{(}du(A^{e_{\alpha}}(e_{i})),du( e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j}) \big{)}dv_{g}\] \[+\int_{M}\sum_{i,j,k,\alpha}h\big{(}du(A^{e_{\alpha}}(e_{i})),du( e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(A^{e_{\alpha}}(e_{j}) \big{)}\big{)}dv_{g}.\]
As the matrix \(\big{(}h\big{(}du(e_{i}),du(e_{j})\big{)}\big{)}_{i,j=1}^{m}\) is symmetric, we can take a local orthonormal frame \(\{e_{i}\}_{i=1}^{m}\) such that
\[h\big{(}du(e_{i}),du(e_{j})\big{)}=\lambda_{i}^{2}\delta_{ij},\qquad i,j=1, \cdots,m. \tag{62}\]
The first integrand in (61) is
\[\sum_{i,\alpha}h\big{(}du\big{(}\mathrm{trace}(A^{e_{\alpha}})A^{e_{ \alpha}}(e_{i})\big{)},d_{(3)}u(e_{i})\big{)}\] \[=\sum_{i,j,k,\alpha}h\big{(}du\big{(}\mathrm{trace}(A^{e_{\alpha}}) A^{e_{\alpha}}(e_{i})\big{)},du(e_{k})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}h \big{(}du(e_{j}),du(e_{k})\big{)}\] \[=\sum_{i,j,k,\hbar,\alpha}\langle A^{e_{\alpha}}(e_{\hbar}),e_{ \hbar}\rangle\langle A^{e_{\alpha}}(e_{i}),e_{\imath}\rangle h\big{(}du(e_{i}), du(e_{k})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}du(e_{j}),du(e_{k})\big{)}\] \[=\sum_{i,j,k,\hbar,\imath}\langle B(e_{\hbar},e_{\hbar}),B(e_{i}, e_{\imath})\rangle\lambda_{i}^{2}\delta_{\imath k}\lambda_{i}^{2}\delta_{ij} \lambda_{j}^{2}\delta_{jk} \tag{63}\] \[=\sum_{i,j}\lambda_{i}^{6}\langle B(e_{i},e_{i}),B(e_{j},e_{j})\rangle.\]
The second integrand in (61) is
\[\sum_{i,\alpha}h\big{(}du\big{(}A^{e_{\alpha}}A^{e_{\alpha}}(e_{i })\big{)},d_{(3)}u(e_{i})\big{)}\] \[=\sum_{i,j,k,\alpha}h\big{(}du\big{(}A^{e_{\alpha}}A^{e_{\alpha}} (e_{i})\big{)},du(e_{k})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}du(e_{ j}),du(e_{k})\big{)}\] \[=\sum_{i,j,k,\hbar,\imath,\alpha}\langle A^{e_{\alpha}}(e_{i}),e_ {\imath}\rangle\langle A^{e_{\alpha}}(e_{\imath}),e_{\imath}\rangle h\big{(}du (e_{\hbar}),du(e_{k})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}du(e_{j} ),du(e_{k})\big{)}\] \[=\sum_{i,j,k,\hbar,\imath}\langle B(e_{i},e_{\imath}),B(e_{ \imath},e_{\imath})\rangle\lambda_{\hbar}^{2}\delta_{\imath k}\lambda_{i}^{2} \delta_{ij}\lambda_{j}^{2}\delta_{jk} \tag{64}\] \[=\sum_{i,j}\lambda_{i}^{6}\langle B(e_{i},e_{j}),B(e_{i},e_{j})\rangle.\]
The third integrand in (61) is
\[\begin{split}&\sum_{i,j,k,\alpha}h\big{(}du\big{(}A^{e_{\alpha}}(e_{i})\big{)},du\big{(}A^{e_{\alpha}}(e_{k})\big{)}\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}\\ =&\sum_{i,j,k,\hbar,\imath,\alpha}\langle A^{e_{\alpha}}(e_{i}),e_{\hbar}\rangle\langle A^{e_{\alpha}}(e_{k}),e_{\imath}\rangle h\big{(}du(e_{\hbar}),du(e_{\imath})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}\\ =&\sum_{i,j,k,\hbar,\imath}\langle B(e_{i},e_{\hbar}),B(e_{k},e_{\imath})\rangle\lambda_{\hbar}^{2}\delta_{\hbar\imath}\lambda_{k}^{2}\delta_{kj}\lambda_{i}^{2}\delta_{ij}\\ =&\sum_{i,j}\lambda_{i}^{4}\lambda_{j}^{2}\langle B(e_{i},e_{j}),B(e_{i},e_{j})\rangle\\ \leq&\sum_{i,j}\big{(}\frac{2}{3}\lambda_{i}^{6}+\frac{1}{3}\lambda_{j}^{6}\big{)}\langle B(e_{i},e_{j}),B(e_{i},e_{j})\rangle\\ =&\sum_{i,j}\lambda_{i}^{6}\langle B(e_{i},e_{j}),B(e_{i},e_{j})\rangle.\end{split} \tag{65}\]
The fourth integrand in (61) is
\[\sum_{i,j,k,\alpha}h\big{(}du\big{(}A^{e_{\alpha}}(e_{i})\big{)},du( e_{k})\big{)}h\big{(}du(A^{e_{\alpha}}(e_{k})),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j}) \big{)}\] \[= \sum_{i,j,k,h,\alpha}\langle A^{e_{\alpha}}(e_{i}),e_{h}\rangle \langle A^{e_{\alpha}}(e_{k}),e_{i}\rangle h\big{(}du(e_{h}),du(e_{k})\big{)}h \big{(}du(e_{i}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}\] \[= \sum_{i,j,k,h,\imath}\langle B(e_{i},e_{h}),B(e_{k},e_{i})\rangle \lambda_{\hbar}^{2}\delta_{hk}\lambda_{i}^{2}\delta_{rj}\lambda_{i}^{2}\delta_ {ij}\] \[= \sum_{i,j}\lambda_{i}^{4}\lambda_{j}^{2}\langle B(e_{i},e_{j}),B( e_{i},e_{j})\rangle \tag{66}\] \[\leq \sum_{i,j}\big{(}\frac{2}{3}\lambda_{i}^{6}+\frac{1}{3}\lambda_{ j}^{6}\big{)}\langle B(e_{i},e_{j}),B(e_{i},e_{j})\rangle\] \[= \sum_{i,j}\lambda_{i}^{6}\langle B(e_{i},e_{j}),B(e_{i},e_{j})\rangle.\]
The fifth integrand in (61) is
\[\sum_{i,j,k,\alpha}h\big{(}du\big{(}A^{e_{\alpha}}(e_{i})\big{)}, du(e_{k})\big{)}h\big{(}du(e_{k}),du\big{(}A^{e_{\alpha}}(e_{j})\big{)}\big{)}h \big{(}du(e_{i}),du(e_{j})\big{)}\] \[= \sum_{i,j,k,h,\imath}\langle A^{e_{\alpha}}(e_{i}),e_{h}\rangle \langle A^{e_{\alpha}}(e_{j}),e_{i}\rangle h\big{(}du(e_{h}),du(e_{k})\big{)}h \big{(}du(e_{k}),du(e_{i})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}\] \[= \sum_{i,j,k,h,\imath}\langle B(e_{i},e_{h}),B(e_{j},e_{i})\rangle \lambda_{\hbar}^{2}\delta_{hk}\lambda_{\hbar}^{2}\delta_{k\imath}\lambda_{i}^{ 2}\delta_{ij}\] \[= \sum_{i,j}\lambda_{i}^{2}\lambda_{j}^{4}\langle B(e_{i},e_{j}),B (e_{i},e_{j})\rangle\] \[\leq \sum_{i,j}\big{(}\frac{1}{3}\lambda_{i}^{6}+\frac{2}{3}\lambda_{ j}^{6}\big{)}\langle B(e_{i},e_{j}),B(e_{i},e_{j})\rangle \tag{67}\] \[= \sum_{i,j}\lambda_{i}^{6}\langle B(e_{i},e_{j}),B(e_{i},e_{j})\rangle.\]
The sixth integrand in (61) is
\[\sum_{i,j,k,\alpha}h\big{(}du\big{(}A^{e_{\alpha}}(e_{i})\big{)}, du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du\big{(}A^{e_{\alpha}}(e_{i })\big{)},du(e_{j})\big{)}\] \[= \sum_{i,j,k,\hbar,\imath}\langle A^{e_{\alpha}}(e_{i}),e_{h} \rangle\langle A^{e_{\alpha}}(e_{i}),e_{i}\rangle h\big{(}du(e_{h}),du(e_{k}) \big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}\] \[= \sum_{i,j,k,\hbar,\imath}\langle B(e_{i},e_{h}),B(e_{i},e_{i}) \rangle\lambda_{\hbar}^{2}\delta_{hk}\lambda_{k}^{2}\delta_{kj}\lambda_{i}^{ 2}\delta_{ij} \tag{68}\] \[= \sum_{i,j}\lambda_{i}^{6}\langle B(e_{i},e_{j}),B(e_{i},e_{j})\rangle.\]
The seventh integrand in (61) is
\[\begin{split}&\sum_{i,j,k,\alpha}h\big{(}du\big{(}A^{e_{\alpha}}(e_{i})\big{)},du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),du\big{(}A^{e_{\alpha}}(e_{j})\big{)}\big{)}\\ =&\sum_{i,j,k,\hbar,\imath,\alpha}\langle A^{e_{\alpha}}(e_{i}),e_{\hbar}\rangle\langle A^{e_{\alpha}}(e_{j}),e_{\imath}\rangle h\big{(}du(e_{\hbar}),du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{\imath})\big{)}\\ =&\sum_{i,j,k,\hbar,\imath}\langle B(e_{i},e_{\hbar}),B(e_{j},e_{\imath})\rangle\lambda_{\hbar}^{2}\delta_{\hbar k}\lambda_{k}^{2}\delta_{kj}\lambda_{i}^{2}\delta_{i\imath}\\ =&\sum_{i,j}\lambda_{i}^{2}\lambda_{j}^{4}\langle B(e_{i},e_{j}),B(e_{i},e_{j})\rangle\\ \leq&\sum_{i,j}\big{(}\frac{1}{3}\lambda_{i}^{6}+\frac{2}{3}\lambda_{j}^{6}\big{)}\langle B(e_{i},e_{j}),B(e_{i},e_{j})\rangle\\ =&\sum_{i,j}\lambda_{i}^{6}\langle B(e_{i},e_{j}),B(e_{i},e_{j})\rangle.\end{split} \tag{69}\]
Combining (61)-(69), we have
\[\sum_{\ell=1}^{q}I\big{(}du(v_{\ell}^{\top}),du(v_{\ell}^{\top}) \big{)} \tag{70}\] \[\leq \int_{M}\sum_{i,j}\lambda_{i}^{6}\big{(}6\langle B(e_{i},e_{j}), B(e_{i},e_{j})\rangle-\langle B(e_{i},e_{i}),B(e_{j},e_{j})\rangle\big{)}dv_{g}.\]
If \(u\) is not constant on \(M\), then
\[\sum_{\ell=1}^{q}I\big{(}du(v_{\ell}^{\top}),du(v_{\ell}^{\top})\big{)}<0,\]
that is, there exists a variation vector field \(du(v_{\ell}^{\top})\), for some \(1\leq\ell\leq q\), along which the \(E_{\Phi_{(3)}}\)-energy decreases. Therefore \(u\) cannot be a stable \(\Phi_{(3)}\)-harmonic map. This contradiction proves that \(u\) is constant.
By the results of Section 5 and Theorem 6.1, we have
**Corollary 6.2**.: _If \(M^{m}\) is a compact manifold satisfying the conditions of one of the examples in Section 5, then every stable \(\Phi_{(3)}\)-harmonic map \(u:(M^{m},g)\to(N,h)\) into any compact Riemannian manifold \(N\) is constant._
## 7 Stable \(\Phi_{(3)}\)-harmonic maps into \(\Phi_{(3)}\)-SSU manifolds
In this section, we prove the following theorem.
**Theorem 7.1**.: _Suppose \((N^{n},h)\) is a compact \(\Phi_{(3)}\)-SSU manifold and \((M^{m},g)\) is any compact manifold. Then every stable \(\Phi_{(3)}\)-harmonic map \(u:(M^{m},g)\to(N^{n},h)\) is constant._
Proof.: We choose a local orthonormal frame field \(\{e_{1},\cdots,e_{m}\}\) on \(M\). Let \(\mathsf{v}\), \(\mathsf{v}^{\top}\), \(\mathsf{v}^{\bot}\) denote a unit vector in \(\mathbb{R}^{q}\), the tangential projection of \(\mathsf{v}\) onto \(N\), and the normal projection of \(\mathsf{v}\) onto \(N\), respectively. We can choose an adapted orthonormal basis \(\{\mathsf{v}_{\ell}\}_{\ell=1}^{q}\) of \(\mathbb{R}^{q}\) such that \(\{\mathsf{v}_{\ell}\}_{\ell=1}^{n}\) is tangent to \(N\) and \(\{\mathsf{v}_{\ell}\}_{\ell=n+1}^{q}\) is normal to \(N\) at a point in \(N\). Denote by \(\mathsf{f}_{t}^{\mathsf{v}_{\ell}^{\top}}\) the flow generated by \(\mathsf{v}_{\ell}^{\top}\). Set \(du(e_{i})=\sum_{\alpha=1}^{n}u_{i\alpha}\mathsf{e}_{\alpha}\quad\text{for}\quad 1\leq i\leq m\,.\) As \(\mathsf{v}_{\ell}\) is parallel in \(\mathbb{R}^{q}\), we have
\[\begin{split}\nabla_{e_{i}}^{u}\mathsf{v}_{\ell}^{\top}& =\nabla_{du(e_{i})}^{N}\mathsf{v}_{\ell}^{\top}=\left(\nabla_{du (e_{i})}^{\mathbb{R}^{q}}\mathsf{v}_{\ell}^{\top}\right)^{\top}=\left(\nabla_{ du(e_{i})}^{\mathbb{R}^{q}}(\mathsf{v}_{\ell}-\mathsf{v}_{\ell}^{\bot})\right)^{\top}\\ &=\mathsf{A}^{\mathsf{v}_{\ell}^{\bot}}\big{(}du(e_{i})\big{)}\,. \end{split} \tag{71}\]
Hence, if \(\mathsf{v}_{\ell}=\mathsf{e}_{\nu}\) for some \(\nu\geq n+1\) at a point in \(N\), then
\[\nabla_{e_{i}}^{u}\mathsf{v}_{\ell}^{\top}=\sum_{\alpha,\beta=1}^{n}u_{i\alpha}\mathsf{B}_{\alpha\beta}^{\nu}\mathsf{e}_{\beta}, \tag{72}\]
where \(\mathsf{B}_{\alpha\beta}^{\nu}=\langle\mathsf{B}(\mathsf{e}_{\alpha},\mathsf{e}_{\beta}),\mathsf{e}_{\nu}\rangle\) are the components of the second fundamental form of \(N\) in \(\mathbb{R}^{q}\).
We note the matrix
\[\bigg{(}\sum_{i=1}^{m}u_{i\alpha}u_{i\beta}\bigg{)}_{\alpha,\beta=1,\cdots,n}= \left(u_{i\alpha}\right)^{T}\cdot\left(u_{i\alpha}\right) \tag{73}\]
is symmetric. We take local orthonormal frame fields \(\{e_{1},\cdots,e_{m}\}\) on \(M\) and \(\{\mathsf{e}_{1},\cdots,\mathsf{e}_{n}\}\) on \(N\) so that this symmetric matrix is diagonal. Namely,
\[\sum_{i=1}^{m}u_{i\alpha}u_{i\beta}=\lambda_{\alpha}^{2}\delta_{\alpha\beta}. \tag{74}\]
Suppose that \(i,j,k\in\{1,\cdots,m\}\), \(\alpha,\beta,\gamma,\eta,\iota,\kappa,\sigma,\tau\in\{1,\cdots,n\}\), \(\nu\in\{n+1,\cdots,q\}\), and \(\ell\in\{1,\cdots,q\}.\) Using the second variation formula, and the extrinsic average variation method, we have
\[\begin{split}&\sum_{\ell=1}^{q}I\big{(}\mathsf{v}_{\ell}^{\top}, \mathsf{v}_{\ell}^{\top}\big{)}\\ =&\int_{M}\sum_{i,\ell}h\big{(}R^{N}(\mathsf{v}_{ \ell}^{\top},du(e_{i}))\mathsf{v}_{\ell}^{\top},d_{(3)}u(e_{i})\big{)}dv_{g}\\ &+\int_{M}\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}} \mathsf{v}_{\ell}^{\top},\widetilde{\nabla}_{e_{k}}\mathsf{v}_{\ell}^{\top} \big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}dv_{g }\\ &+\int_{M}\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}} \mathsf{v}_{\ell}^{\top},du(e_{k})\big{)}h\big{(}\widetilde{\nabla}_{e_{k}} \mathsf{v}_{\ell}^{\top},du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}dv_{g }\\ &+\int_{M}\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}} \mathsf{v}_{\ell}^{\top},du(e_{k})\big{)}h\big{(}du(e_{k}),\widetilde{\nabla}_{ e_{j}}\mathsf{v}_{\ell}^{\top}\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}dv_{g}\\ &+\int_{M}\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}} \mathsf{v}_{\ell}^{\top},du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h \big{(}\widetilde{\nabla}_{e_{i}}\mathsf{v}_{\ell}^{\top},du(e_{j})\big{)}dv_{g }\\ &+\int_{M}\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}} \mathsf{v}_{\ell}^{\top},du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h \big{(}du(e_{i}),\widetilde{\nabla}_{e_{j}}\mathsf{v}_{\ell}^{\top}\big{)}dv_{g }.\end{split} \tag{75}\]
Hence, at \(x_{0}\), we estimate each term in (75):
\[\begin{split}&\sum_{i,\ell}h\big{(}R^{N}\big{(}\mathsf{v}_{\ell}^{ \top},du(e_{i})\big{)}\mathsf{v}_{\ell}^{\top},d_{(3)}u(e_{i})\big{)}\\ =&\sum_{i,j,k,\ell}h\big{(}R^{N}\big{(}\mathsf{v}_{ \ell}^{\top},du(e_{i})\big{)}\mathsf{v}_{\ell}^{\top},du(e_{k})\big{)}h\big{(} du(e_{i}),du(e_{j})\big{)}h\big{(}du(e_{j}),du(e_{k})\big{)}\\ =&\sum_{i,j,k,\alpha,\beta,\gamma,\eta,\iota,\kappa, \sigma,\tau}u_{i\beta}u_{k\kappa}u_{i\tau}u_{j\sigma}u_{j\eta}u_{k\gamma}h \big{(}R^{N}(\mathsf{e}_{\alpha},\mathsf{e}_{\beta})\mathsf{e}_{\iota},\mathsf{ e}_{\kappa}\big{)}h(\mathsf{e}_{\tau},\mathsf{e}_{\sigma})h(\mathsf{e}_{\eta}, \mathsf{e}_{\gamma})\\ =&\sum_{\alpha,\beta,\gamma,\eta,\kappa,\sigma} \lambda_{\beta}^{2}\delta_{\beta\gamma}\lambda_{\sigma}^{2}\delta_{\sigma\eta} \lambda_{\kappa}^{2}\delta_{\kappa\gamma}\delta_{\tau\sigma}\delta_{\eta\gamma }h\big{(}R^{N}(\mathsf{e}_{\alpha},\mathsf{e}_{\beta})\mathsf{e}_{\alpha}, \mathsf{e}_{\kappa}\big{)}\\ =&\sum_{\alpha,\beta}\lambda_{\alpha}^{6}h\big{(}R^{ N}(\mathsf{e}_{\alpha},\mathsf{e}_{\beta})\mathsf{e}_{\alpha},\mathsf{e}_{\beta} \big{)}\\ =&\sum_{\alpha,\beta}\lambda_{\alpha}^{6}\big{(} \langle\mathsf{B}(\mathsf{e}_{\alpha},\mathsf{e}_{\beta}),\mathsf{B}(\mathsf{e }_{\alpha},\mathsf{e}_{\beta})\rangle-\langle\mathsf{B}(\mathsf{e}_{\alpha}, \mathsf{e}_{\alpha}),\mathsf{B}(\mathsf{e}_{\beta},\mathsf{e}_{\beta})\rangle \big{)},\end{split} \tag{76}\]
and
\[\begin{split}&\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}} \mathsf{v}_{\ell}^{\top},\widetilde{\nabla}_{e_{k}}\mathsf{v}_{\ell}^{\top} \big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}\\ =&\sum_{i,j,k,\alpha,\beta,\ell}u_{i\alpha}u_{k\beta }h\big{(}\nabla_{\mathsf{e}_{\alpha}}v_{\ell}^{\top},\nabla_{\mathsf{e}_{ \beta}}v_{\ell}^{\top}\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i} ),du(e_{j})\big{)}\\ =&\sum_{i,j,k,\alpha,\beta,\gamma,\eta,\iota,\kappa, \sigma,\tau,\nu}u_{i\alpha}u_{k\beta}\mathsf{B}_{\alpha\gamma}^{\nu}\mathsf{B }_{\beta\eta}^{\nu}u_{k}^{\iota}u_{j\kappa}u_{i\sigma}u_{j\tau}h(\mathsf{e}_{ \gamma},\mathsf{e}_{\eta})h(\mathsf{e}_{\iota},\mathsf{e}_{\kappa})h(\mathsf{e }_{\sigma},\mathsf{e}_{\tau})\\ =&\sum_{\alpha,\beta,\gamma,\eta,\iota,\kappa, \sigma,\tau,\nu}\lambda_{\alpha}^{2}\lambda_{\kappa}^{2}\lambda_{\beta}^{2} \delta_{\beta\iota}\delta_{\kappa\tau}\delta_{\alpha\sigma}\delta_{\gamma\eta }\delta_{\iota\kappa}\delta_{\sigma\tau}\mathsf{B}_{\alpha\gamma}^{\nu}\mathsf{ B}_{\beta\eta}^{\nu}\\ =&\sum_{\alpha,\gamma,\nu}\lambda_{\alpha}^{6} \mathsf{B}_{\alpha\gamma}^{\nu}\mathsf{B}_{\alpha\gamma}^{\nu}\\ =&\sum_{\alpha,\beta}\lambda_{\alpha}^{6}\langle \mathsf{B}(\mathsf{e}_{\alpha},\mathsf{e}_{\beta}),\mathsf{B}(\mathsf{e}_{ \alpha},\mathsf{e}_{\beta})\rangle,\end{split} \tag{77}\]
and
\[\begin{split}&\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}} \mathsf{v}_{\ell}^{\top},du(e_{k})\big{)}h\big{(}\widetilde{\nabla}_{e_{k}} \mathsf{v}_{\ell}^{\top},du(e_{j})\big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}\\ =&\sum_{i,j,k,\alpha,\beta,\ell}u_{i\alpha}u_{k\beta }h\big{(}\nabla_{\mathsf{e}_{\alpha}}\mathsf{v}_{\ell}^{\top},du(e_{k})\big{)}h \big{(}\nabla_{\mathsf{e}_{\beta}}v_{\ell}^{\top},du(e_{j})\big{)}h\big{(}du(e_ {i}),du(e_{j})\big{)}\\ =&\sum_{i,j,k,\alpha,\beta,\gamma,\eta,\iota,\kappa, \sigma,\tau,\nu}u_{i\alpha}u_{j\beta}u_{k\eta}u_{k\iota}u_{i\tau}u_{j\sigma} \mathsf{B}_{\alpha\gamma}^{\nu}\mathsf{B}_{\beta\sigma}^{\nu}h(\mathsf{e}_{ \gamma},\mathsf{e}_{\iota})h(\mathsf{e}_{\tau},\mathsf{e}_{\kappa})\\ =&\sum_{\alpha,\beta,\gamma,\eta,\iota,\kappa, \sigma,\tau,\nu}\lambda_{\alpha}^{2}\lambda_{\iota}^{2}\lambda_{\beta}^{2} \delta_{\beta\tau}\delta_{\iota\kappa}\delta_{\beta\eta}\delta_{\gamma\eta} \delta_{\sigma\iota}\delta_{\kappa\tau}\mathsf{B}_{\alpha\gamma}^{\alpha} \mathsf{B}_{\beta\sigma}^{\nu}\\ =&\sum_{\alpha,\beta,\nu}\lambda_{\alpha}^{4}\lambda_{ \beta}^{2}B_{\alpha\beta}^{\nu}B_{\alpha\beta}^{\nu}\\ =&\sum_{\alpha,\beta}\lambda_{\alpha}^{4}\lambda_{ \beta}^{2}\langle\mathsf{B}(\mathsf{e}_{\alpha},\mathsf{e}_{\beta}),\mathsf{B}( \mathsf{e}_{\alpha},\mathsf{e}_{\beta})\rangle\\ \end{split}\]
\[\begin{split}\leq&\sum_{\alpha,\beta}\big{(}\frac{2}{3} \lambda_{\alpha}^{6}+\frac{1}{3}\lambda_{\beta}^{6}\big{)}\langle\mathsf{B}( \mathsf{e}_{\alpha},\mathsf{e}_{\beta}),\mathsf{B}(\mathsf{e}_{\alpha},\mathsf{ e}_{\beta})\rangle\\ =&\sum_{\alpha,\beta}\lambda_{\alpha}^{6}\langle \mathsf{B}(\mathsf{e}_{\alpha},\mathsf{e}_{\beta}),\mathsf{B}(\mathsf{e}_{ \alpha},\mathsf{e}_{\beta})\rangle,\end{split} \tag{78}\]
and
\[\begin{split}&\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}} \mathsf{v}_{\ell}^{\top},du(e_{k})\big{)}h\big{(}du(e_{k}),\widetilde{\nabla}_ {e_{j}}\mathsf{v}_{\ell}^{\top}\big{)}h(du(e_{i}),du(e_{j}))\\ =&\sum_{i,j,k,\alpha,\beta,\ell}u_{i\alpha}u_{j}^{ \beta}h\big{(}\nabla_{\mathsf{e}_{\alpha}}\mathsf{v}_{\ell}^{\top},du(e_{k}) \big{)}h\big{(}du(e_{k}),\nabla_{\mathsf{e}_{\beta}}\mathsf{v}_{\ell}^{\top} \big{)}h\big{(}du(e_{i}),du(e_{j})\big{)}\\ =&\sum_{i,j,k,\alpha,\beta,\gamma,\eta,\iota,\kappa, \sigma,\tau,\nu}u_{i\alpha}u_{j\beta}u_{k\eta}u_{k\iota}u_{i\kappa}u_{j\sigma} \mathsf{B}_{\beta\gamma}^{\nu}\mathsf{B}_{\beta\tau}^{\nu}h(\mathsf{e}_{\gamma },\mathsf{e}_{\eta})h(\mathsf{e}_{\iota},\mathsf{e}_{\tau})h(\mathsf{e}_{\kappa },\mathsf{e}_{\sigma})\\ =&\sum_{\alpha,\beta,\gamma,\eta,\iota,\kappa, \sigma,\tau,\nu}\lambda_{\alpha}^{2}\lambda_{\beta}^{2}\lambda_{\eta}^{2} \delta_{\alpha\kappa}\delta_{\beta\sigma}\delta_{\eta\ell}\delta_{\gamma\eta} \delta_{i\tau}\delta_{\kappa\sigma}\mathsf{B}_{\alpha\gamma}^{\nu}\mathsf{B}_{ \beta\tau}^{\nu}\\ =&\sum_{\alpha,\beta,\nu}\lambda_{\alpha}^{4}\lambda _{\beta}^{2}\mathsf{B}_{\alpha\beta}^{\nu}\mathsf{B}_{\alpha\beta}^{\nu}\\ =&\sum_{\alpha,\beta,\nu}\lambda_{\alpha}^{4} \lambda_{\beta}^{2}\langle\mathsf{B}(\mathsf{e}_{\alpha},\mathsf{e}_{\beta}), \mathsf{B}(\mathsf{e}_{\alpha},\mathsf{e}_{\beta})\rangle\\ \leq&\sum_{\alpha,\beta}\big{(}\frac{2}{3}\lambda_{ \alpha}^{6}+\frac{1}{3}\lambda_{\beta}^{6}\big{)}\langle\mathsf{B}(\mathsf{e}_ {\alpha},\mathsf{e}_{\beta}),\mathsf{B}(\mathsf{e}_{\alpha},\mathsf{e}_{\beta})\rangle \\ =&\sum_{\alpha,\beta}\lambda_{\alpha}^{6}\langle \mathsf{B}(\mathsf{e}_{\alpha},\mathsf{e}_{\beta}),\mathsf{B}(\mathsf{e}_{ \alpha},\mathsf{e}_{\beta})\rangle,\end{split} \tag{79}\]
and
\[\begin{split}&\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}} \mathsf{v}_{\ell}^{\top},du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h \big{(}\widetilde{\nabla}_{e_{i}}\mathsf{v}_{\ell}^{\top},du(e_{j})\big{)}\\ =&\sum_{i,j,k,\alpha,\beta,\ell}u_{i\alpha}u_{i \beta}h\big{(}\nabla_{\mathsf{e}_{\alpha}}\mathsf{v}_{\ell}^{\top},du(e_{k}) \big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}\nabla_{\mathsf{e}_{\beta}} \mathsf{v}_{\ell}^{\top},du(e_{j})\big{)}\\ =&\sum_{i,j,k,\alpha,\beta,\gamma,\eta,\iota,\kappa, \sigma,\tau,\nu}u_{i\alpha}u_{i\beta}u_{k\eta}u_{k\iota}u_{j\tau}u_{j\sigma} \mathsf{B}_{\alpha\gamma}^{\nu}\mathsf{B}_{\beta\kappa}^{\nu}h(\mathsf{e}_{ \gamma},\mathsf{e}_{\eta})h(\mathsf{e}_{\iota},\mathsf{e}_{\tau})h(\mathsf{e}_{ \kappa},\mathsf{e}_{\sigma})\\ =&\sum_{\alpha,\beta,\gamma,\eta,\iota,\kappa, \sigma,\tau,\nu}\lambda_{\alpha}^{2}\lambda_{\eta}^{2}\lambda_{\tau}^{2} \delta_{\alpha\beta}\delta_{\eta\ell}\delta_{\tau\sigma}\delta_{\gamma\eta} \delta_{\iota\tau}\delta_{\kappa\sigma}\mathsf{B}_{\alpha\gamma}^{\nu}\mathsf{B} _{\alpha\kappa}^{\nu}\\ =&\sum_{\alpha,\beta,\nu}\lambda_{\alpha}^{2}\lambda_{ \beta}^{4}\mathsf{B}_{\alpha,\beta}^{\nu}\mathsf{B}_{\alpha,\beta}^{\nu}\\ \leq&\sum_{\alpha,\beta}\big{(}\frac{1}{3}\lambda_{ \alpha}^{6}+\frac{2}{3}\lambda_{\beta}^{6}\big{)}\langle\mathsf{B}(\mathsf{e}_ {\alpha},\mathsf{e}_{\beta}),\mathsf{B}(\mathsf{e}_{\alpha},\mathsf{e}_{\beta})\rangle \\ =&\sum_{\alpha,\beta}\lambda_{\alpha}^{6}\langle\mathsf{B}( \mathsf{e}_{\alpha},\mathsf{e}_{\beta}),\mathsf{B}(\mathsf{e}_{\alpha},\mathsf{e}_{ \beta})\rangle,\end{split} \tag{80}\]
and
\[\sum_{i,j,k,\ell}h\big{(}\widetilde{\nabla}_{e_{i}}\mathsf{v}_{\ell}^ {\top},du(e_{k})\big{)}h\big{(}du(e_{k}),du(e_{j})\big{)}h\big{(}du(e_{i}), \widetilde{\nabla}_{e_{j}}\mathsf{v}_{\ell}^{\top}\big{)} \tag{81}\] \[= \sum_{i,j,k,\alpha,\beta,\ell}u_{i\alpha}u_{j\beta}h\big{(}\nabla_ {\mathsf{e}_{\alpha}}\mathsf{v}_{\ell}^{\top},du(e_{k})\big{)}h\big{(}du(e_{k}), du(e_{j})\big{)}h\big{(}du(e_{i}),\nabla_{\mathsf{e}_{\beta}}\mathsf{v}_{\ell}^{ \top}\big{)}\] \[= \sum_{i,j,k,\alpha,\beta,\gamma,\eta,\iota,\kappa,\sigma,\tau,\nu} u_{i\alpha}u_{j\beta}u_{k\eta}u_{k\iota}u_{j\tau}u_{i\kappa}\mathsf{B}_{\alpha \gamma}^{\nu}\mathsf{B}_{\beta\sigma}^{\nu}h(\mathsf{e}_{\gamma},\mathsf{e}_{ \eta})h(\mathsf{e}_{\iota},\mathsf{e}_{\tau})h(\mathsf{e}_{\kappa},\mathsf{e}_ {\sigma})\] \[= \sum_{\alpha,\beta,\gamma,\eta,\iota,\kappa,\sigma,\tau,\nu} \lambda_{\alpha}^{2}\lambda_{\beta}^{2}\lambda_{\alpha}^{2}\delta_{\beta\tau} \delta_{\eta\iota}\delta_{\gamma\eta}\delta_{i\tau}\delta_{\kappa\sigma} \mathsf{B}_{\alpha\gamma}^{\nu}\mathsf{B}_{\beta\sigma}^{\nu}\] \[= \sum_{\alpha,\beta,\nu}\lambda_{\alpha}^{2}\lambda_{\beta}^{4} \mathsf{B}_{\alpha\beta}^{\nu}\mathsf{B}_{\alpha\beta}^{\nu}\] \[\leq \sum_{\alpha,\beta}\big{(}\frac{1}{3}\lambda_{\alpha}^{6}+\frac{2 }{3}\lambda_{\beta}^{6}\big{)}\langle B(\mathsf{e}_{\alpha},\mathsf{e}_{\beta} ),B(\mathsf{e}_{\alpha},\mathsf{e}_{\beta})\rangle\] \[= \sum_{\alpha,\beta}\lambda_{\alpha}^{6}\langle\mathsf{B}(\mathsf{ e}_{\alpha},\mathsf{e}_{\beta}),\mathsf{B}(\mathsf{e}_{\alpha},\mathsf{e}_{ \beta})\rangle.\]
Combining (73)-(81), we have
\[\sum_{\ell=1}^{q}I\big{(}\mathsf{v}_{\ell}^{\top},\mathsf{v}_{\ell}^{\top} \big{)}\leq\int_{M}\sum_{\alpha,\beta}\lambda_{\alpha}^{6}\big{(}6\langle \mathsf{B}(\mathsf{e}_{\alpha},\mathsf{e}_{\beta}),\mathsf{B}(\mathsf{e}_{ \alpha},\mathsf{e}_{\beta})\rangle-\langle\mathsf{B}(\mathsf{e}_{\alpha}, \mathsf{e}_{\alpha}),\mathsf{B}(\mathsf{e}_{\beta},\mathsf{e}_{\beta})\rangle \big{)}dv_{g}. \tag{82}\]
As \(N\) is a \(\Phi_{(3)}\)-SSU manifold, we know that if \(u\) is nonconstant, then
\[\sum_{\ell=1}^{q}I\big{(}\mathsf{v}_{\ell}^{\top},\mathsf{v}_{\ell}^{\top} \big{)}<0.\]
That is, making a variation of \(u\) along the vector field \(\mathsf{v}_{\ell}^{\top}\) decreases the \(E_{\Phi_{(3)}}\)-energy for some \(1\leq\ell\leq q\). Hence, \(u\) is not a stable \(\Phi_{(3)}\)-harmonic map, a contradiction. Consequently, \(u\) is constant.
By Section 5 and Theorem 7.1, we have
**Corollary 7.2**.: _If \(N^{n}\) is a compact manifold satisfying the conditions of one of the examples in Section 5 and \(M^{m}\) is any compact Riemannian manifold, then every stable \(\Phi_{(3)}\)-harmonic map \(u:(M^{m},g)\to(N^{n},h)\) is a constant map._
The Infimum of \(\Phi_{(3)}\)-energy in the homotopic class of maps into \(\Phi_{(3)}\)-SSU manifolds
**Lemma 8.1**.: _If \(N\) is a compact \(\Phi_{(3)}\)-SSU manifold, then there is a number \(0<\rho<1\) such that for any compact manifold \(M\) and any map \(u:M\to N\) there is a map \(u_{1}:M\to N\) homotopic to \(u\) with \(E_{\Phi_{(3)}}(u_{1})\leq\rho E_{\Phi_{(3)}}(u)\)._
Proof.: Let \(\widehat{T}_{y}N\) be the space of the unit tangent vectors to \(N\) at the point \(y\in N\). Since \(N\) is \(\Phi_{(3)}\)-SSU and \(\widehat{T}_{y}N\) is compact, by (4), there exists \(\kappa>0\) such that for every \(y\in N\,\) and every \(x\in\widehat{T}_{y}N\,,\)
\[F_{\Phi_{(3)}x}(v)<-q\kappa. \tag{83}\]
Similar to the method of [24], let \(\{v_{\ell}\}_{\ell=1}^{q}\) be an orthonormal basis of \(\mathbb{R}^{q}\) and let \(v_{\ell}^{\top}\) denote the tangential part of \(v_{\ell}\) along \(N\subset\mathbb{R}^{q}\).
It follows from (4), (82) and (83) that
\[\sum_{\ell=1}^{q}\frac{d^{2}}{dt^{2}}E_{\Phi_{(3)}}(f_{t}^{v_{\ell}^{\top}} \circ u)_{\big{|}_{t=0}}\leq-6q\kappa\int_{M}e_{\Phi_{(3)}}(u)\,dv_{g}=-6q \kappa E_{\Phi_{(3)}}(u). \tag{84}\]
We now proceed in steps.
**Step 1**. There is a number \(\xi\geq 5\kappa>0\) such that for \(1\leq\ell\leq q\), \(|t|\leq 1\) and all \(X,Y,Z\in\Gamma(TN)\),
\[\begin{split}&\bigg{|}\frac{d^{3}}{dt^{3}}\bigg{(}\langle df_{t}^{v_{ \ell}^{\top}}\left(X\right),df_{t}^{v_{\ell}^{\top}}\left(Y\right)\rangle \langle df_{t}^{v_{\ell}^{\top}}\left(Y\right),df_{t}^{v_{\ell}^{\top}}\left( Z\right)\rangle\langle df_{t}^{v_{\ell}^{\top}}\left(Z\right),df_{t}^{v_{\ell}^{ \top}}\left(X\right)\rangle\bigg{)}\bigg{|}\\ \leq&\xi\bigg{|}\langle X,Y\rangle\langle Y,Z \rangle\langle Z,X\rangle\bigg{|}.\end{split} \tag{85}\]
Proof.: Let \(SN\) be the unit sphere bundle of \(N\). Then the function defined on the compact set \([-1,1]\times SN\times SN\times SN\) by
\[(t,x,y,z)\mapsto\max_{1\leq\ell\leq q}\bigg{|}\frac{d^{3}}{dt^{3}}\big{(} \langle df_{t}^{v_{\ell}^{\top}}\left(x\right),df_{t}^{v_{\ell}^{\top}}\left( y\right)\rangle\langle df_{t}^{v_{\ell}^{\top}}\left(y\right),df_{t}^{v_{\ell}^{ \top}}\left(z\right)\rangle\langle df_{t}^{v_{\ell}^{\top}}\left(z\right),df_{ t}^{v_{\ell}^{\top}}\left(x\right)\rangle\big{)}\bigg{|} \tag{86}\]
is continuous and thus has a maximum. Let \(\xi_{0}\) be this maximum and \(\xi=\max\{5\kappa,\xi_{0}\}\). Then (85) follows by homogeneity.
**Step 2**. There is a smooth vector field \(V\) on \(N\) such that if \(\xi\) is as in Step 1, then we have
\[\frac{d}{dt}E_{\Phi_{(3)}}(f_{t}^{V}\circ u)_{\big{|}_{t=0}}\leq 0, \tag{87}\]
\[\frac{d^{2}}{dt^{2}}E_{\Phi_{(3)}}(f_{t}^{V}\circ u)_{\big{|}_{t=0}}\leq-6 \kappa E_{\Phi_{(3)}}(u), \tag{88}\]
and
\[\bigg{|}\frac{d^{3}}{dt^{3}}E_{\Phi_{(3)}}(f_{t}^{V}\circ u)\bigg{|}\leq\xi E _{\Phi_{(3)}}(u),\quad\text{for}\quad|t|\leq 1. \tag{89}\]
Proof.: From (84) it is seen that
\[\frac{d^{2}}{dt^{2}}E_{\Phi_{(3)}}(f_{t}^{v_{\ell}^{\top}}\circ u)_{\big{|}_{ t=0}}\leq-6\kappa E_{\Phi_{(3)}}(u), \tag{90}\]
for some \(1\leq\ell\leq q\). Otherwise we would have \(\sum_{\ell=1}^{q}\frac{d^{2}}{dt^{2}}E_{\Phi_{(3)}}(f_{t}^{v_{\ell}^{\top}} \circ u)_{\big{|}_{t=0}}>-6q\kappa E_{\Phi_{(3)}}(u)\,,\) contradicting (84). If \(\frac{d}{dt}E_{\Phi_{(3)}}(f_{t}^{v_{\ell}^{\top}}\circ u)_{\big{|}_{t=0}}\leq 0\), set \(V=v_{\ell}^{\top}\); otherwise, set \(V=-v_{\ell}^{\top}\). Then (87) and (88) hold.
From (85), we have
\[\begin{split}&\bigg{|}\frac{d^{3}}{dt^{3}}E_{\Phi_{(3)}}(f_{t}^{V}\circ u)\bigg{|}\\ &\leq\frac{1}{6}\int_{M}\sum_{i,j,k=1}^{m}\bigg{|}\frac{d^{3}}{dt^{3}}\bigg{(} \langle df_{t}^{V}\big{(}du(e_{i})\big{)},df_{t}^{V}\big{(}du(e_{j})\big{)}\rangle\langle df_{t}^{V}\big{(}du(e_{j})\big{)},df_{t}^{V}\big{(}du(e_{k})\big{)}\rangle\\ &\quad\times\langle df_{t}^{V}\big{(}du(e_{k})\big{)},df_{t}^{V}\big{(}du(e_{i})\big{)}\rangle\bigg{)}\bigg{|}\,dv_{g}\\ &\leq\frac{\xi}{6}\int_{M}\sum_{i,j,k=1}^{m}\langle du(e_{i}),du(e_ {j})\rangle\langle du(e_{j}),du(e_{k})\rangle\langle du(e_{k}),du(e_{i})\rangle \,dv_{g}\\ &=\xi E_{\Phi_{(3)}}(u),\end{split} \tag{91}\]
where the second inequality uses (85) in a frame diagonalizing \(du\). This proves (89).
**Step 3**. Let \(\zeta=\frac{5\kappa}{\xi}\) (\(\zeta\leq 1\), as \(5\kappa\leq\xi\)), \(\rho=1-\frac{\kappa\zeta^{2}}{2}\), and \(V\) be as in Step 2. Then \(0<\rho<1\) and
\[E_{\Phi_{(3)}}(f_{\zeta}^{V}\circ u)\leq\rho E_{\Phi_{(3)}}(u). \tag{92}\]
Proof.: Let \(E_{\Phi_{(3)}}(t)=E_{\Phi_{(3)}}(f_{t}^{V}\circ u)\). Then by Step 2 for \(0\leq t\leq\zeta\), we have
\[E_{\Phi_{(3)}}^{\prime\prime}(t)=E_{\Phi_{(3)}}^{\prime\prime}(0)+\int_{0}^{t }E_{\Phi_{(3)}}^{\prime\prime\prime}(s)ds\leq-6\kappa E_{\Phi_{(3)}}(u)+\xi \zeta E_{\Phi_{(3)}}(u)=-\kappa E_{\Phi_{(3)}}(u).\]
Thus
\[E_{\Phi_{(3)}}^{\prime}(t)=E_{\Phi_{(3)}}^{\prime}(0)+\int_{0}^{t }E_{\Phi_{(3)}}^{\prime\prime}(s)ds\leq-\kappa tE_{\Phi_{(3)}}(u) \tag{93}\]
and
\[E_{\Phi_{(3)}}(\zeta)=E_{\Phi_{(3)}}(0)+\int_{0}^{\zeta}E_{\Phi_{ (3)}}^{\prime}(s)ds\leq\bigg{(}1-\frac{\kappa\zeta^{2}}{2}\bigg{)}E_{\Phi_{(3 )}}(u)=\rho E_{\Phi_{(3)}}(u). \tag{94}\]
As both \(E_{\Phi_{(3)}}(\zeta)\) and \(E_{\Phi_{(3)}}(u)\) are positive, this inequality implies \(\rho\) is positive. Let \(u_{1}=f_{\zeta}^{V}\circ u\). Then \(u_{1}\) is homotopic to \(u\) and we have just shown \(E_{\Phi_{(3)}}(u_{1})\leq\rho E_{\Phi_{(3)}}(u)\).
Steps 1, 2 and 3 together complete the proof of the lemma.
**Theorem 8.2**.: _If \(N\) is a compact \(\Phi_{(3)}\)-\(\mathrm{SSU}\) manifold, then for every compact Riemannian manifold \(M\), the homotopic class of any map from \(M\) into \(N\) contains elements of arbitrarily small \(\Phi_{(3)}\)-energy._
Proof.: Let \(u:M\to N\) be any smooth map from \(M\) to \(N\). By Lemma 8.1, we can find a map \(u_{1}:M\to N\) which is homotopic to \(u\) with \(E_{\Phi_{(3)}}(u_{1})\leq\rho E_{\Phi_{(3)}}(u)\). Another application of the lemma gives a map \(u_{2}\) homotopic to \(u_{1}\) with \(E_{\Phi_{(3)}}(u_{2})\leq\rho E_{\Phi_{(3)}}(u_{1})\leq\rho^{2}E_{\Phi_{(3)}}(u)\). By induction, there are maps \(u_{\ell}\) (\(\ell=1,2,\cdots\)) homotopic to \(u\) with \(E_{\Phi_{(3)}}(u_{\ell})\leq\rho^{\ell}E_{\Phi_{(3)}}(u)\). Since \(0<\rho<1\), we conclude \(\lim_{\ell\to\infty}E_{\Phi_{(3)}}(u_{\ell})=0\), as required.
**Corollary 8.3**.: _If \(N\) is a compact \(\Phi_{(3)}\)-\(\mathrm{SSU}\) manifold, then the infimum of the \(\Phi_{(3)}\)-energy \(E_{\Phi_{(3)}}\) is zero among maps homotopic to the identity map on \(N\,\)._
Proof.: This follows at once from Theorem 8.2 by choosing \(M=N\) and the smooth map to be the identity map on \(N\,\).
The Infimum of \(\Phi_{(3)}\)-energy in the homotopic class of maps from \(\Phi_{(3)}\)-SSU manifolds
**Lemma 9.1**.: _If \(N\) is a compact Riemannian manifold such that the infimum of the \(\Phi_{(3)}\)-energy \(E_{\Phi_{(3)}}\) is zero among maps homotopic to the identity and if \(M\) is a compact Riemannian manifold, then the infimum of the \(\Phi_{(3)}\)-energy \(E_{\Phi_{(3)}}\) is zero in each homotopy class of maps from \(N\) to \(M\)._
Proof.: Let \(K,M,N\) be three Riemannian manifolds of dimensions \(k,m\) and \(n\) respectively, with \(M\) compact. Let \(K\xrightarrow{\psi}N\xrightarrow{u}M\) be smooth maps. Denote by \(U\) the symmetric \(n\times n\) matrix with \(i\)-\(j\) entry \(U_{ij}=\langle du(e_{i}),du(e_{j})\rangle_{M}\,,\) and by \(\Psi\) the symmetric \(k\times k\) matrix with \(i\)-\(j\) entry \(\Psi_{ij}=\langle d\psi(e_{i}),d\psi(e_{j})\rangle_{N}\,.\) Then the \(\Phi_{(3)}\)-energy density \(e_{\Phi_{(3)}}(u)\) of \(u\) satisfies
\[\begin{split} e_{\Phi_{(3)}}(u)&=\frac{1}{6}\sum_{i,j,k=1}^{n}\langle du(e_{i}),du(e_{j})\rangle\langle du(e_{j}),du(e_{k})\rangle \langle du(e_{k}),du(e_{i})\rangle\\ &=\frac{1}{6}\mathrm{trace}(U\cdot U\cdot U)\,.\end{split} \tag{95}\]
Similarly, \(e_{\Phi_{(3)}}(\psi)=\frac{1}{6}\mathrm{trace}(\Psi^{3})\,,\) and
\[\begin{split} e_{\Phi_{(3)}}(u\circ\psi)&=\frac{1}{6 }\mathrm{trace}(U^{3}\cdot\Psi^{3})\\ &\leq\max_{x\in N}|U_{ij}(x)|^{3}\cdot\frac{1}{6}\mathrm{trace}\, \Psi^{3}\\ &=\max_{x\in N}|U_{ij}(x)|^{3}\cdot e_{\Phi_{(3)}}(\psi)\,. \end{split} \tag{96}\]
Thus, if \(u:N\to M\) is any smooth map and \(\psi^{\ell}:N\to N\) (\(\ell=1,2,\cdots\)) are maps homotopic to the identity map on \(N\) with \(E_{\Phi_{(3)}}(\psi^{\ell})\to 0\) as \(\ell\to\infty\), then
\[\begin{split} E_{\Phi_{(3)}}(u\circ\psi^{\ell})&\leq \max_{x\in N}|U_{ij}(x)|^{3}\cdot E_{\Phi_{(3)}}(\psi^{\ell})\\ &\to 0\quad\text{as}\quad\ell\to\infty\,.\end{split} \tag{97}\]

Since each \(u\circ\psi^{\ell}\) is homotopic to \(u\), the infimum of \(E_{\Phi_{(3)}}\) in the homotopy class of \(u\) is zero.
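For a quick numerical illustration of the trace formula (95) (Python, purely illustrative and not part of the argument): in a frame diagonalizing \(du\) the matrix \(U\) is \(\mathrm{diag}(\lambda_{1}^{2},\ldots,\lambda_{n}^{2})\), so \(e_{\Phi_{(3)}}(u)=\frac{1}{6}\sum_{i}\lambda_{i}^{6}\).

```python
import numpy as np

# U_ij = <du(e_i), du(e_j)>; in a diagonalizing frame U = diag(lambda_i^2)
U = np.diag([1.0, 0.5, 0.25])
e_phi3 = np.trace(U @ U @ U) / 6.0
# trace(U^3) = sum of the cubes of the diagonal entries = sum_i lambda_i^6
print(e_phi3, sum(l ** 3 for l in (1.0, 0.5, 0.25)) / 6.0)  # identical values
```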
**Theorem 9.2**.: _If \(N\) is a compact \(\Phi_{(3)}\)-\(\mathrm{SSU}\) manifold, then for every compact Riemannian manifold \(M\), the homotopic class of any map from \(N\) into \(M\) contains elements of arbitrarily small \(\Phi_{(3)}\)-energy \(E_{\Phi_{(3)}}\)._
Proof.: This follows at once from Corollary 8.3 and Lemma 9.1.
By virtue of Theorems 6.1, 7.1, 8.2, 9.2, Definition 1.6, [52, p.131-132, Definition and Theorem A], and [24, p.5, Theorem 1.2], we have the following result.
**Theorem 9.3**.: _For \(i=1,2,3\), every compact \(\Phi_{(i)}\)-\(\mathrm{SSU}\) manifold is \(\Phi_{(i)}\)-\(\mathrm{SU}\), and hence is \(\Phi_{(i)}\)-\(\mathrm{U}\)._
This generalizes Proposition 1.8.
**Acknowledgements**: The authors wish to thank Professors Sun-Yung Alice Chang and Paul C. Yang for references and communications, the editor for the editorship, and the referee for helpful comments, suggestions and remarks which make the present form of the paper possible. Work supported in part by the National Natural Science Foundation of China (Grant No.11971415, 11771456), the NSF (DMS-1447008), and Nanhu Scholars Program for Young Scholars of Xinyang Normal University.
# Mean curvature flow with generic low-entropy initial data II

Otis Chodosh, Christos Mantoulidis, Felix Schulze. arXiv:2309.03856v1, http://arxiv.org/abs/2309.03856v1
###### Abstract.
We prove that the mean curvature flow of a generic closed embedded hypersurface in \(\mathbb{R}^{4}\) or \(\mathbb{R}^{5}\) with entropy \(\leq 2\), or with entropy \(\leq\lambda(\mathbb{S}^{1})\) if in \(\mathbb{R}^{6}\), encounters only generic singularities.
## 1. Introduction
Mean curvature flow is the gradient flow of area. A family of hypersurfaces \(M(t)\subset\mathbb{R}^{n+1}\) is flowing by mean curvature flow if the following equation is satisfied
\[\left(\tfrac{\partial}{\partial t}\mathbf{x}\right)^{\perp}=\mathbf{H}_{M(t)} (\mathbf{x}). \tag{1.1}\]
Here, \(\mathbf{H}_{M(t)}(\mathbf{x})\) denotes the mean curvature vector of \(M(t)\) at \(\mathbf{x}\). When the initial data \(M(0)\) is compact, mean curvature flow is guaranteed to become singular in finite time, so understanding the nature of such singularities is a fundamental problem.
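As a concrete illustration (a standard computation, not needed elsewhere): for a round initial sphere, (1.1) reduces to an ODE for the radius,

\[M(t)=\mathbb{S}^{n}(r(t))\subset\mathbb{R}^{n+1},\qquad r^{\prime}(t)=-\frac{n}{r(t)}\;\Longrightarrow\;r(t)=\sqrt{r_{0}^{2}-2nt},\]

so the flow becomes extinct at time \(T=r_{0}^{2}/(2n)\), and parabolically rescaling about the extinction point yields the self-shrinking sphere \(\mathbb{S}^{n}(\sqrt{2n})\) of Definition 1.8 below.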
A well-known conjecture of Huisken suggests that the singularities of a generic mean curvature flow should be as simple as possible: spherical and cylindrical [11, #8]. One approach to this issue is to study only the singularities that persist under a generic perturbation of the initial data \(M(0)\). In this article we study this problem under a low-entropy condition (see Definition 1.6 for the definition of entropy). In particular, we obtain the following result (see Corollary 1.18 for the precise statement):
**Theorem 1.1** (Low entropy generic flow in \(\mathbb{R}^{4}\), informal).: _If \(M^{3}\subset\mathbb{R}^{4}\) is a closed embedded hypersurface with entropy \(\lambda(M)\leq 2\) then there exist arbitrarily small \(C^{\infty}\) graphs \(M^{\prime}\) over \(M\) so that the mean curvature flow starting from \(M^{\prime}\) has only multiplicity-one singularities of \(\mathbb{S}^{3},\mathbb{S}^{2}\times\mathbb{R}\), and \(\mathbb{S}^{1}\times\mathbb{R}^{2}\)-type._
_Remark 1.2_.: This implies that any mean curvature flow of \(M^{\prime}\) will be smooth for a.e. time \(t\), see Corollary 1.23 below.
Together with K. Choi, we proved Theorem 1.1 for \(M^{2}\subset\mathbb{R}^{3}\) with \(\lambda(M)\leq 2\) in [CCMS21, Theorem 1.1]. The generalization to \(\mathbb{R}^{4}\) presents significant new challenges. In particular, the flow of a closed embedded \(M^{3}\subset\mathbb{R}^{4}\) with \(\lambda(M)\leq 2\) may have singularities modeled on _singular_ self-shrinkers, such as the cone over the Clifford torus.1 Singular shrinkers were a major obstacle in many previous works (cf. \((\star_{n,\Lambda})\) in [1, 1] and \((\dagger_{n,\Lambda})\) in [CCMS21]), and precisely informed our previous strong entropy \(\leq\lambda(\mathbb{S}^{1})\) assumption for \(M^{3}\subset\mathbb{R}^{4}\) in [CCMS21, Theorem 1.2].
We also have results analogous to Theorem 1.1 in higher dimensions (see Corollaries 1.19 and 1.20 for the precise statements):
**Theorem 1.3** (Low entropy generic flow in \(\mathbb{R}^{5}\), informal).: _If \(M^{4}\subset\mathbb{R}^{5}\) is a closed embedded hypersurface with entropy \(\lambda(M)\leq 2\) then there exist arbitrarily small \(C^{\infty}\) graphs \(M^{\prime}\) over \(M\) so that the mean curvature flow starting from \(M^{\prime}\) has only multiplicity-one singularities of \(\mathbb{S}^{4},\mathbb{S}^{3}\times\mathbb{R}\), \(\mathbb{S}^{2}\times\mathbb{R}^{2}\), and \(\mathbb{S}^{1}\times\mathbb{R}^{3}\)-type._
**Theorem 1.4** (Low entropy generic flow in \(\mathbb{R}^{6}\), informal).: _If \(M^{5}\subset\mathbb{R}^{6}\) is a closed embedded hypersurface with entropy \(\lambda(M)\leq\lambda(\mathbb{S}^{1})\) then there exist arbitrarily small \(C^{\infty}\) graphs \(M^{\prime}\) over \(M\) so that the mean curvature flow starting from \(M^{\prime}\) has only multiplicity-one singularities of \(\mathbb{S}^{5},\mathbb{S}^{4}\times\mathbb{R}\), \(\mathbb{S}^{3}\times\mathbb{R}^{2}\), \(\mathbb{S}^{2}\times\mathbb{R}^{3}\)-type._
Our results also imply a topological classification of low-entropy hypersurfaces (see Corollary 1.22 for a more general statement and proof).
**Corollary 1.5** (Mean curvature flow as a smooth isotopy in low entropy).: _Let \(n\in\{2,3,4,5\}\) and \(M^{n}\subset\mathbb{R}^{n+1}\) be a closed connected embedded hypersurface._
1. _If_ \(\lambda(M)\leq\lambda(\mathbb{S}^{n-1})\) _then perhaps after a small initial_ \(C^{\infty}\) _perturbation, the mean curvature flow provides a smooth isotopy from_ \(M\) _to a standard_ \(\mathbb{S}^{n}\) _in_ \(\mathbb{R}^{n+1}\)_._
2. _If_ \(\lambda(M)\leq\lambda(\mathbb{S}^{n-2})\) _then perhaps after a small initial_ \(C^{\infty}\) _perturbation, the mean curvature flow with surgery of_ [10] _(cf._ [12, HK17b]_) provides a smooth isotopy from_ \(M\) _to the boundary of a standard handlebody that is either a standard ball_ \(B^{n}\) _or a boundary connect sum of finitely many_ \(B^{n-1}\times\mathbb{S}^{1}\)_'s._
This result is only new when \(n\in\{4,5\}\). When \(n\in\{2,3\}\), a slightly different version of (a) was first proven by Bernstein-Wang [14, 15], and as presented (a) and (b) both follow from our work with K. Choi [11, 12]. Our work relied on various insights from the Bernstein-Wang program (see also [1, 15, 16, 17, 18, 19, 20, 21, 22, 23]).
At the moment, serious obstacles remain to extend the isotopy construction (Corollary 1.5) to the case \(\lambda(M)\leq 2\) using Theorem 1.1, even assuming a (hypothetical) construction of a flow with surgery (cf. [10, CHH21, CHH23, DH23, CDD\({}^{+}\)22]).
We refer the reader to [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25] for other results on mean curvature flow.
### The low entropy assumption and a strong multiplicity-one conjecture
As a consequence of our work with K. Choi [11, 12, 13] (building on [16, 15, 17, 18, 19]), the "only" remaining obstacle to a theory of generic mean curvature flows in \(\mathbb{R}^{3}\) without any entropy assumptions is the possible occurrence of smooth tangent flows with integer multiplicity--which the well-known "multiplicity-one conjecture" of Ilmanen [16] posits should not occur.
In Theorem A.5 of Appendix A, we show how Theorems 1.1 and 1.3 can be generalized to give generic mean curvature flows in \(\mathbb{R}^{4}\), \(\mathbb{R}^{5}\) without assuming \(\lambda(M)\leq 2\), but instead assuming a stronger form of the multiplicity-one conjecture.
### Generic regularity of minimizing hypersurfaces
In [10] we proved generic regularity for solutions to the Plateau problem (for hypersurfaces) in ambient \(9\)- and \(10\)-dimensions (see also [10]). The present paper can be considered as a parabolic analogue of [10], but there are several new serious issues in this paper that were not present there.
In the elliptic problem, a crucial feature is that singular points cannot limit to regular points (by density considerations). In the parabolic problem, we do _not_ know in general whether non-generic singularities can limit to generic ones (but see [12] for a related result); it is only known that this cannot happen if the limiting singularity is \(\mathbb{S}^{n}(\sqrt{2n})\) (then apply [13]) or \(\mathbb{S}^{n-1}(\sqrt{2(n-1)})\times\mathbb{R}\) (then apply [14, 12, 13]). This creates serious issues when working with an entropy bound \(\lambda(M)\leq 2\) (as we do in this paper) rather than \(\lambda(M)\leq\lambda(\mathbb{S}^{n-1})\) (as in [10]). We expect the techniques we have developed here to handle this (cf. Section 6) to be applicable in other situations.
There are two more serious issues in going from the elliptic to the parabolic problem:
1. The dimension of the spine of a singularity versus the eigenvalue estimate are _coupled_ (cf. Propositions 3.6 and 5.2), necessitating a more complicated covering argument to prove the final result.
2. The parabolic Harnack inequality requires one to "go back in time," significantly complicating the proof of the separation estimates (cf. Proposition 4.12).
### Related work in other settings
We also mention here some other related work. Figalli-Ros-Oton-Serra [15] have proven generic regularity of solutions to the obstacle problem (see also [16]). They have subsequently applied these techniques to study the Stefan problem [15]. These papers have some features in common with the present paper (and [10, 10]), particularly the general strategy of (super-linear) space-time Holder estimates versus estimates for the space-time singular set, although the technical details are very different. In particular, the serious issue of "generic" versus "non-generic" singularities faced here does not appear to have an analog in the previously cited works.
Finally, we also note the recent work of Fernandez-Real-Yu [17] establishing improved generic regularity for free-boundary problems by using a strategy that seems to have some relation with the technique used in [10, 10].
### Definitions
Before precisely stating our main results we first recall several definitions, some of which are now standard.
**Definition 1.6** (Colding-Minicozzi entropy).: Following Colding-Minicozzi [12], we define the entropy of \(M^{n}\subset\mathbb{R}^{n+1}\) to be:
\[\lambda(M):=\sup_{\begin{subarray}{c}\mathbf{x}_{0}\in\mathbb{R}^{n+1}\\ t_{0}>0\end{subarray}}\int_{M}(4\pi t_{0})^{-\frac{n}{2}}e^{-\frac{1}{4t_{0}} \left|\mathbf{x}-\mathbf{x}_{0}\right|^{2}}\,d\mathcal{H}^{n}(\mathbf{x})\,.\]
By Huisken's monotonicity of Gaussian area, \(t\mapsto\lambda(M(t))\) is non-increasing when \(M(t)\) is a family of closed hypersurfaces flowing smoothly by mean curvature flow, or more generally a Brakke flow with bounded area ratios (which holds, including for blowups, assuming that it holds for the initial data).
_Remark 1.7_.: It is not hard to see that if \(M^{n}\subset\mathbb{R}^{n+1}\) is a closed hypersurface, then \(\lambda(M)=\lambda(\mathbf{z}_{0}+O(M\times\mathbb{R}^{\ell}))\) for all \(\ell\geq 1\), \(O\in O(n+\ell+1)\), \(\mathbf{z}_{0}\in\mathbb{R}^{n+\ell+1}\).
**Definition 1.8** (Shrinkers).: A (possibly incomplete) smooth hypersurface \(\Sigma\subset\mathbb{R}^{n+1}\) is said to be a shrinker if it satisfies \(\mathbf{H}+\frac{1}{2}\mathbf{x}^{\perp}=\mathbf{0}\). By Huisken's monotonicity formula, these model the smooth parts of singularity models in mean curvature flow. We consider the following sets of shrinkers:
1. \(\mathcal{S}_{n}\), the set of _complete_ smooth self-shrinkers in \(\mathbb{R}^{n+1}\) with \(\lambda(\Sigma)<\infty\).
2. \(\mathcal{S}_{n}^{*}\), the set of non-flat elements of \(\mathcal{S}_{n}\).
3. \(\mathcal{S}_{n}^{\mathrm{gen}}\), the set of "generic" shrinkers, i.e., round self-shrinking spheres and cylinders over them in \(\mathbb{R}^{n+1}\): \[\mathcal{S}_{n}^{\mathrm{gen}}:=\left\{O(\mathbb{S}^{j}(\sqrt{2j})\times \mathbb{R}^{n-j})\in\mathcal{S}_{n}:j=1,\ldots,n,\,O\in O(n+1)\right\}.\]
When working with entropy upper bounds, we need only consider low-entropy shrinkers. For \(\Lambda>0\), we define:
\[\mathcal{S}_{n}(\Lambda) :=\{\Sigma\in\mathcal{S}_{n}:\lambda(\Sigma)<\Lambda\},\] \[\mathcal{S}_{n}^{*}(\Lambda) :=\mathcal{S}_{n}(\Lambda)\cap\mathcal{S}_{n}^{*}.\]
_Remark 1.9_ (Generic shrinker entropies).: Note that Stone [11] computed the entropies of spheres--and thus all generic shrinkers by Remark 1.7--to be
\[2>\lambda(\mathbb{S}^{1})=\sqrt{\frac{2\pi}{e}}\approx 1.52>\frac{3}{2}> \lambda(\mathbb{S}^{2})=\frac{4}{e}\approx 1.47>\cdots>\lambda(\mathbb{S}^{n}).\]
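These values can be recomputed directly from Definition 1.6. For a round sphere the supremum over \(\mathbf{x}_{0}\) is attained at the center (this symmetry reduction is the only fact assumed here), so one only maximizes the Gaussian area over the scale \(t_{0}\). The following Python sketch (illustrative only; not part of the paper) reproduces Stone's values:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gamma

def sphere_area(n, r):
    # |S^n(r)| = 2 pi^((n+1)/2) r^n / Gamma((n+1)/2)
    return 2.0 * np.pi ** ((n + 1) / 2) * r ** n / gamma((n + 1) / 2)

def gaussian_area(n, r, t0):
    # F-functional of S^n(r) in R^(n+1), centered at the origin
    return (4 * np.pi * t0) ** (-n / 2) * np.exp(-r ** 2 / (4 * t0)) * sphere_area(n, r)

def sphere_entropy(n, r=1.0):
    # maximize over t0 > 0, parametrized as t0 = exp(s) to keep t0 positive
    res = minimize_scalar(lambda s: -gaussian_area(n, r, np.exp(s)))
    return -res.fun

print(sphere_entropy(1), np.sqrt(2 * np.pi / np.e))  # both ~1.5203
print(sphere_entropy(2), 4 / np.e)                   # both ~1.4715
```

The maximizing scale is \(t_{0}=r^{2}/(2n)\), matching the extinction time of the shrinking sphere.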
**Definition 1.10** (Forwards-backwards parabolic balls).: Throughout the paper, for spacetime points \(X=(\mathbf{x},t)\in\mathbb{R}^{n+1}\times\mathbb{R}\) and radii \(r>0\),
\[P_{r}(X)=B_{r}(\mathbf{x})\times(t-r^{2},t+r^{2})\]
denotes the forwards-backwards parabolic ball of radius \(r\) about \(X\).
**Definition 1.11** (Regular vs singular and generic vs non-generic points).: Let \(\mathcal{M}\) be an integral \(n\)-Brakke flow in \(\mathbb{R}^{n+1}\) with support \(\operatorname{supp}\mathcal{M}\).
1. Let \(\operatorname{reg}\mathcal{M}\) be the set of all _regular points_\(X\in\operatorname{supp}\mathcal{M}\), i.e., those for which there exists \(r>0\) so that \(\mathcal{M}\lfloor P_{r}(X)\) is the multiplicity one Brakke flow associated to a smooth mean curvature flow.
2. Let \(\operatorname{sing}\mathcal{M}\) be the set of all _singular points_\(X\in\operatorname{supp}\mathcal{M}\setminus\operatorname{reg}\mathcal{M}\) so that \(\mathcal{M}\) is defined for times slightly before \(t\).
3. Decompose \(\operatorname{sing}\mathcal{M}\) into "generic" and "non-generic" singular points: \[\operatorname{sing}\mathcal{M}=\operatorname{sing}_{\mathrm{gen}}\mathcal{M} \cup\operatorname{sing}_{\mathrm{non-gen}}\mathcal{M},\] where a point \(X\in\operatorname{sing}\mathcal{M}\) is declared to be in \(\operatorname{sing}_{\mathrm{gen}}\mathcal{M}\) if some (and thus any by [1, 1]) tangent flow is the self-similar flow associated to \(\Sigma\in\mathcal{S}_{n}^{\mathrm{gen}}\) with multiplicity one; otherwise \(X\in\operatorname{sing}_{\mathrm{non-gen}}\mathcal{M}\).
Note that, by this definition, if \(\mathcal{M}\) is a Brakke flow starting at \(t=0\), then we have the disjoint union decomposition
\[\operatorname{supp}\mathcal{M}=\operatorname{supp}\mathcal{M}(0)\cup \operatorname{reg}\mathcal{M}\cup\operatorname{sing}\mathcal{M}.\]
We now refer the reader to [13, Section 2] for the complete definitions of the terms used below.
**Definition 1.12**.: Let \(M^{n}\subset\mathbb{R}^{n+1}\) be a closed embedded hypersurface.
* We denote by \(\hat{\mathfrak{F}}(M)\) the set of unit-regular cyclic integral Brakke flows \(\mathcal{M}\) with \(\mathcal{M}(0)=\mathcal{H}^{n}\lfloor M\).
* We denote by \(\mathfrak{F}(M)\subset\hat{\mathfrak{F}}(M)\) the set of \(\mathcal{M}\in\hat{\mathfrak{F}}(M)\) with the following property: there exist closed embedded hypersurfaces \(M_{i}\subset\mathbb{R}^{n+1}\), each disjoint from \(M\), but converging smoothly to \(M\), and \(\mathcal{M}_{i}\rightharpoonup\mathcal{M}\) for some \(\mathcal{M}_{i}\in\hat{\mathfrak{F}}(M_{i})\).
Note that \(\mathfrak{F}(M)\neq\emptyset\) by [14, 15] (see also [14, Appendix B]).
### Main theorem and discussion of main hypotheses
**Theorem 1.13**.: _Let \(n\in\{2,3,\ldots\}\), \(\Lambda\in(0,2]\), and assume that our two main hypotheses, \((\diamondsuit_{n,\Lambda})\) and \((\heartsuit_{n,\Lambda})\) (discussed below), both hold._
_Then, for every closed embedded hypersurface \(M^{n}\subset\mathbb{R}^{n+1}\) with \(\lambda(M)\leq\Lambda\) there exist arbitrarily small \(C^{\infty}\) graphs \(M^{\prime}\) over \(M\) so that \(\operatorname{sing}_{\operatorname{non-gen}}\mathcal{M}^{\prime}=\emptyset\) for all \(\mathcal{M}^{\prime}\in\mathfrak{F}(M^{\prime})\)._
_Remark 1.14_.: The case \(n=2\) already follows from [13, Theorem 1.1], but it is recovered by our more general techniques, so we include it for generality.
_First main hypothesis_, \((\diamondsuit_{n,\Lambda})\). Not every shrinker one encounters along a weak mean curvature flow is a complete smooth hypersurface, as considered in \(\mathcal{S}_{n}\). Instead, it may itself have singular points, which (after a blow-up argument) are modeled by minimal cones. On _certain_ occasions (see Proposition 3.1) these cones are, in fact, stable. Together with a dimension reduction argument in geometric measure theory due to Federer, this naturally leads us to consider the following class of objects:
**Definition 1.15** (Regular stable cones).: We define \(\mathcal{RSC}_{k}^{*}\) to be the set of regular stable stationary cones \(\mathcal{C}^{k}\subset\mathbb{R}^{k+1}\), i.e., those cones that are stationary with \(\operatorname{sing}\mathcal{C}=\{\mathbf{0}\}\) (in particular, \(\mathcal{C}\) is non-flat), and so that \(\operatorname{reg}\mathcal{C}\) is stable as a minimal surface.
By [15], \(\mathcal{RSC}_{k}^{*}=\emptyset\) for \(2\leq k\leq 6\). On the other hand, \(\mathcal{RSC}_{k}^{*}\neq\emptyset\) for \(k\geq 7\). As with shrinkers, we will only be interested in cones with entropy below our upper bound \(\Lambda>0\):
\[\mathcal{RSC}_{k}^{*}(\Lambda):=\{\mathcal{C}\in\mathcal{RSC}_{k}^{*}:\lambda( \mathcal{C})<\Lambda\}.\]
Our first main hypothesis is that there are _no_ such cones below our entropy threshold:
\[(\diamondsuit_{n,\Lambda})\qquad\mathcal{RSC}_{k}^{*}(\Lambda)=\emptyset\qquad\text{ for }k=7,\ldots,n.\]
_Remark 1.16_.: Note that \((\diamondsuit_{n,\Lambda})\) is vacuously true for \(n\leq 6\) and \(\Lambda>0\). It is a well-known open problem to estimate \(\Lambda>0\) so that \((\diamondsuit_{n,\Lambda})\) holds if \(n\geq 7\) (cf. [14]).
_Second main hypothesis_, \((\heartsuit_{n,\Lambda})\). For a smooth but possibly incomplete shrinker \(\Sigma\), we define \(\mu(\Sigma)\) to be the first eigenvalue of the \(L\)-operator, namely
\[\mu(\Sigma):=\inf\left\{\int_{\Sigma}(|\nabla f|^{2}-|A|^{2}f^{2}-\tfrac{1}{2 }f^{2})e^{-\frac{1}{4}|\cdot|^{2}}:f\in C_{c}^{\infty}(\Sigma),\int_{\Sigma}f ^{2}e^{-\frac{1}{4}|\cdot|^{2}}=1\right\}. \tag{1.2}\]
We then define
\[\mathcal{S}_{n}^{*}(\Lambda,\mu):=\{\Sigma\in\mathcal{S}_{n}^{*}(\Lambda):\mu (\Sigma)\geq\mu\}.\]
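For orientation, a standard computation serving as a sanity check on the normalization in (1.2): on the round shrinker \(\mathbb{S}^{n}(\sqrt{2n})\) the position vector is everywhere normal, so the Gaussian weight is constant on \(\Sigma\) and the drift term vanishes; taking \(f\equiv\mathrm{const}\) (admissible by compactness, and a first eigenfunction since \(|A|^{2}\) is constant) gives

\[\mu(\mathbb{S}^{n}(\sqrt{2n}))=-|A|^{2}-\tfrac{1}{2}=-\tfrac{n}{2n}-\tfrac{1}{2}=-1,\]

while non-flat non-generic shrinkers satisfy the strict inequality \(\mu<-1\) (cf. the proof of Proposition 3.6 below).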
The second main hypothesis is that low-entropy non-generic shrinkers are "sufficiently" unstable:
\[(\heartsuit_{n,\Lambda})\qquad\mathcal{S}_{k}^{*}(\Lambda,-\tfrac{n-k}{2})\subset\mathcal{S}_{k}^{\text{gen}}\qquad\text{ for }k=2,\ldots,n-3.\]
_Remark 1.17_.: Note that (\(\heartsuit_{n,\Lambda}\)) does hold for:
1. \(n\leq 4\) and all \(\Lambda\), vacuously;
2. \(n=5\) and \(\Lambda\leq\lambda(\mathbb{S}^{1})\), since \(\mathcal{S}_{2}^{*}(\lambda(\mathbb{S}^{1}))\subset\mathcal{S}_{2}^{\text{gen}}\) by [1, Corollary 1.2].
### Applications of main theorem
The following are all immediate corollaries of our main theorem and Remarks 1.16, 1.17. We skip the low dimensional case \(n=2\) since it was already stated in [10]. Everywhere below, \(M\) denotes a closed embedded hypersurface.
**Corollary 1.18**.: _For \(M^{3}\subset\mathbb{R}^{4}\) with \(\lambda(M)\leq 2\) there exist arbitrarily small \(C^{\infty}\) graphs \(M^{\prime}\) over \(M\) so that \(\operatorname{sing}_{\text{\rm non-gen}}\mathcal{M}^{\prime}=\emptyset\) for all \(\mathcal{M}^{\prime}\in\mathfrak{F}(M^{\prime})\)._
**Corollary 1.19**.: _For \(M^{4}\subset\mathbb{R}^{5}\) with \(\lambda(M)\leq 2\) there exist arbitrarily small \(C^{\infty}\) graphs \(M^{\prime}\) over \(M\) so that \(\operatorname{sing}_{\text{\rm non-gen}}\mathcal{M}^{\prime}=\emptyset\) for all \(\mathcal{M}^{\prime}\in\mathfrak{F}(M^{\prime})\)._
**Corollary 1.20**.: _For \(M^{5}\subset\mathbb{R}^{6}\) with \(\lambda(M)\leq\lambda(\mathbb{S}^{1})\) there exist arbitrarily small \(C^{\infty}\) graphs \(M^{\prime}\) over \(M\) so that \(\operatorname{sing}_{\text{\rm non-gen}}\mathcal{M}^{\prime}=\emptyset\) for all \(\mathcal{M}^{\prime}\in\mathfrak{F}(M^{\prime})\)._
_Remark 1.21_.: To relax the entropy bound \(\lambda(\mathbb{S}^{1})\) to \(2\) in Corollary 1.20, one could try to see whether or not (\(\heartsuit_{5,2}\)) holds. This is an independently interesting question: we are asking if all \(\Sigma\in\mathcal{S}_{2}^{*}\setminus\mathcal{S}_{2}^{\text{gen}}\) with entropy \(\lambda(\Sigma)<2\) have \(\mu(\Sigma)<-\frac{3}{2}\). Note that numerics [1, 1, 1] suggest that Angenent's torus has \(\lambda\sim 1.85\), \(\mu\sim-3.74\).
Here is a topological application of our theorem:
**Corollary 1.22**.: _Take \(n\in\{2,3,\ldots\}\), \(\Lambda\in(0,2]\) such that \((\diamondsuit_{n,\Lambda})\) and \((\heartsuit_{n,\Lambda})\) hold. Let \(M^{n}\subset\mathbb{R}^{n+1}\) be a closed connected embedded hypersurface with \(\lambda(M)\leq\Lambda\)._
1. _If_ \(\Lambda\leq\lambda(\mathbb{S}^{n-1})\) _then perhaps after a small initial_ \(C^{\infty}\) _perturbation, the mean curvature flow provides a smooth isotopy from_ \(M\) _to a standard_ \(\mathbb{S}^{n}\) _in_ \(\mathbb{R}^{n+1}\)_._
2. _If_ \(\Lambda\leq\lambda(\mathbb{S}^{n-2})\) _then perhaps after a small initial_ \(C^{\infty}\) _perturbation, the mean curvature flow with surgery of_ [1] _provides a smooth isotopy from_ \(M\) _to the boundary of a standard handlebody that is either a standard ball_ \(B^{n}\) _or a boundary connect sum of finitely many_ \(B^{n-1}\times\mathbb{S}^{1}\)_'s._
By Remarks 1.16 and 1.17, Corollary 1.22 holds unconditionally (cf. Corollary 1.5) when \(n\leq 5\).
Proof.: Combining the argument used in the beginning of Section 6 with Theorem 1.13, either \(M\) is a round sphere (in which case we are done), or we can find a small \(C^{\infty}\) graph \(M^{\prime}\) over \(M\) so that \(\lambda(M^{\prime})<\Lambda\) and so that there is \(\mathcal{M}^{\prime}\in\mathfrak{F}(M^{\prime})\) with \(\operatorname{sing}_{\text{\rm non-gen}}\mathcal{M}^{\prime}=\emptyset\).
Then, in case (a), we have \(\lambda(\mathcal{M}^{\prime})<\lambda(\mathbb{S}^{n-1})\) so \(\mathcal{M}^{\prime}\) can only have multiplicity-one \(\mathbb{S}^{n}\)-type singularities. Thus, the mean curvature flow is completely smooth until it becomes a round sphere. In case (b), we additionally use the surgery of [1].
Here is an application to regularity of the flow (by convention, \(\lambda(\mathbb{S}^{-1})=\lambda(\mathbb{S}^{0})=2\)):
**Corollary 1.23**.: _Take \(n\in\{2,\dots\}\) and \(\Lambda\in(0,\lambda(\mathbb{S}^{n-3})]\) such that \((\diamondsuit_{n,\Lambda})\) and \((\heartsuit_{n,\Lambda})\) hold. Let \(M^{n}\subset\mathbb{R}^{n+1}\) be a closed connected embedded hypersurface with \(\lambda(M)\leq\Lambda\). There is an arbitrarily small graph \(M^{\prime}\) over \(M\) so that any mean curvature flow \(\mathcal{M}^{\prime}\in\mathfrak{F}(M^{\prime})\) is completely smooth for a.e. time \(t\) and each connected component of \(\operatorname{sing}\mathcal{M}^{\prime}\) is contained in some time-slice._
Proof.: Combine Theorem 1.13 with [1].
By Remarks 1.16 and 1.17, Corollary 1.23 holds unconditionally when \(n\leq 5\).
### Organization
Section 2 contains some preliminary discussions about rescaled flows, regularity, smoothly crossing flows, and parabolic covering. Section 3 contains regularity and eigenvalue estimates for low-entropy \(F\)-stationary varifolds. In Section 4 we establish the main local estimates for nearly ancient one-sided flows near nearly-self-similar flows. We then use this in Section 5 to prove the central density drop result. We then show how to handle the issue of non-generic singularities limiting to bubble-sheet generic singularities in Section 6, completing the proof of the main result. Appendix A discusses a form of the multiplicity-one conjecture as it relates to our results here. In Appendix B we estimate the first eigenvalue of the \(L\)-operator on non-flat stable cones (this is not used elsewhere in the paper).
### Acknowledgements
We are grateful to Kyeongsu Choi and Brian White for helpful discussions. O.C. was supported by a Terman Fellowship and an NSF grant (DMS-2304432). C.M. was supported by an NSF grant (DMS-2147521).
## 2. Preliminaries and notation
### \(F\)-functional and density
Let us begin by recalling a few basic notions in the smooth setting. First, for smooth hypersurfaces \(M^{n}\subset\mathbb{R}^{n+1}\) one has the \(F\)-functional,
\[F(M)=(4\pi)^{-\frac{n}{2}}\int_{M}e^{-\frac{1}{4}|\cdot|^{2}}\,d\mathcal{H}^{ n}, \tag{2.1}\]
which in turn gives rise to the Colding-Minicozzi entropy (Definition 1.6) via
\[\lambda(M)=\sup_{\begin{subarray}{c}\mathbf{x}_{0}\in\mathbb{R}^{n+1}\\ t_{0}>0\end{subarray}}F\left(\frac{1}{\sqrt{t_{0}}}(M-\mathbf{x}_{0})\right). \tag{2.2}\]
Moreover, for a smooth mean curvature flow with bounded area ratios \(\mathcal{M}:t\mapsto M(t)\), the \(F\)-functional also gives rise to the density function, defined as
\[\Theta_{\mathcal{M}}(X,r)=F\left(\frac{1}{r}(M(t-r^{2})-\mathbf{x})\right),\; X\in\mathbb{R}^{n+1}\times\mathbb{R},\;r>0, \tag{2.3}\]
which is nondecreasing in \(r\) by Huisken's well-known monotonicity formula. In particular,
\[\Theta_{\mathcal{M}}(X)=\lim_{r\to 0}\Theta_{\mathcal{M}}(X,r),\;X\in\mathbb{R}^{ n+1}\times\mathbb{R}, \tag{2.4}\]
is well-defined.
For our proofs, we will need to work with weaker objects than smooth hypersurfaces, namely \(n\)-varifolds in \(\mathbb{R}^{n+1}\) (see [22]) and Brakke flows (see [19] and [13, Section 2]). In this setting, (2.1), (2.2) extend to \(n\)-varifolds in \(\mathbb{R}^{n+1}\) (by looking at
their induced Radon measures), and the monotonicity underlying (2.3), (2.4) extends to \(n\)-Brakke flows in \(\mathbb{R}^{n+1}\) with bounded area ratios (cf. [12, Lemma 7]).
### \(F\)-stationary varifolds
It is a simple computation that all elements of \(\mathcal{S}_{n}\) are stationary for the \(F\)-functional (cf. [13, §3]) or equivalently for the area functional with respect to the conformal metric \(e^{-|\cdot|^{2}/(2n)}g_{\mathbb{R}^{n+1}}\) on \(\mathbb{R}^{n+1}\). Thus, Definition 2.1 below precisely generalizes the notion of shrinkers to the varifold setting.
**Definition 2.1**.: An \(n\)-varifold \(V\) in \(\mathbb{R}^{n+1}\) is called \(F\)-stationary if it is stationary with respect to the conformal metric \(e^{-|\cdot|^{2}/(2n)}g_{\mathbb{R}^{n+1}}\) on \(\mathbb{R}^{n+1}\).
**Definition 2.2**.: Let \(V\) be an \(F\)-stationary \(n\)-varifold in \(\mathbb{R}^{n+1}\). We define an associated \(n\)-Brakke flow \(\mathcal{M}_{V}\) as follows: if \(V\) is a cone we define \(\mathcal{M}_{V}\) as a static flow
\[\mathcal{M}_{V}(t)=V\text{ for all }t\in\mathbb{R},\]
while if \(V\) is not a cone, we define \(\mathcal{M}_{V}\) as a shrinking flow that disappears at \(t=0\):
\[\mathcal{M}_{V}(t)=\begin{cases}\sqrt{-t}V&\text{ for }t<0,\\ 0&\text{ for }t\geq 0.\end{cases}\]
In the latter case \(\mathcal{M}_{V}\) may only be unit-regular for times \(t<0\); e.g., if \(V\) is a non-conical self-shrinker with asymptotically conical ends, then \(\mathcal{M}_{V}\) vanishes "unexpectedly" at \(t=0\). This will not pose any issue in the sequel; cf. the \(T\) parameter in Lemma 2.10.
It is not hard to see, using the monotonicity formula on \(\mathcal{M}_{V}\), that
\[\lambda(V)=F(V)\]
for all \(F\)-stationary \(n\)-varifolds \(V\) in \(\mathbb{R}^{n+1}\).
### Regularity scale of Brakke flows
**Definition 2.3** (Regularity scale).: Let \(\mathcal{M}\) be an integral \(n\)-Brakke flow in \(\mathbb{R}^{n+1}\) and \(X=(\mathbf{x},t)\in\operatorname{reg}\mathcal{M}\). We define the _regularity scale_ of \(\mathcal{M}\) at \(X\) by
\[r_{\mathcal{M}}(X):=\sup\{r>0:P_{r}(X)\cap\operatorname{supp}\mathcal{M} \subset\operatorname{reg}\mathcal{M},\;|A|\leq r^{-1}\text{ on }P_{r}(X)\}.\]
For \(\rho,r>0\) and \((\mathbf{x},t)\in\mathbb{R}^{n+1}\times\mathbb{R}\) we set
\[\mathcal{R}_{\rho,r}\mathcal{M}(\mathbf{x},t):=\{\mathbf{y}\in\operatorname {supp}\mathcal{M}(t)\cap B_{r}(\mathbf{x}):r_{\mathcal{M}}(\mathbf{y},t)>\rho \}\subset\operatorname{reg}\mathcal{M}(t).\]
For an integral \(F\)-stationary varifold we abuse notation and write \(r_{V}(\mathbf{x}):=r_{\mathcal{M}_{V}}(\mathbf{x},-1)\) and \(\mathcal{R}_{\rho,r}V(\mathbf{x}):=\mathcal{R}_{\rho,r}\mathcal{M}_{V}( \mathbf{x},-1)\).
_Remark 2.4_.: For \(\lambda>0\), the rescaled flows \(\mathcal{M}_{\lambda}(t)=\lambda\mathcal{M}(\lambda^{-2}t)\) satisfy
\[\mathcal{R}_{\lambda\rho,\lambda r}\mathcal{M}_{\lambda}(\lambda^{-1}\mathbf{ x},\lambda^{-2}t)=\lambda\mathcal{R}_{\rho,r}\mathcal{M}(\mathbf{x},t).\]
Furthermore, for \(\rho^{\prime}\leq\rho,r\leq r^{\prime}\) we have
\[\mathcal{R}_{\rho,r}\mathcal{M}(\mathbf{x},t)\subset\mathcal{R}_{\rho^{ \prime},r^{\prime}+|\mathbf{x}-\mathbf{x}^{\prime}|}\mathcal{M}(\mathbf{x}^{ \prime},t).\]
_Remark 2.5_ (Continuity of regularity scale).: It is standard to show, under the assumption that the entropy is uniformly bounded above by \(2-\varepsilon\) with \(\varepsilon>0\), that \(r_{\mathcal{M}}(X)\) is continuous with respect to convergence of the point \(X\) and the \(n\)-Brakke flow \(\mathcal{M}\) if, e.g., the initial conditions converge smoothly. See [12, Lemma 2.4] for the analogous result for minimizing hypersurfaces (note that the upper bound on the entropy prevents issues with multiplicity, which is not a concern for minimizing hypersurfaces).
### Rescaled Brakke flows
**Definition 2.6**.: Given a Brakke flow \(\mathcal{N}\) and a spacetime point \(X_{0}=(\mathbf{x}_{0},t_{0})\) we can define the _rescaled Brakke flow around \(X_{0}\) at scale \(\beta\)_ by the usual recipe
\[\tilde{\mathcal{N}}(\tau):=\beta^{-1}e^{\tau/2}(\mathcal{N}(t_{0}-\beta^{2}e ^{-\tau})-\mathbf{x}_{0}).\]
Note that \(\tau=0\) corresponds to \(t=t_{0}-\beta^{2}\) (if \(\mathcal{N}\) is defined then).
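For orientation, a routine computation (with \(\beta=1\), \(\mathbf{x}_{0}=\mathbf{0}\) for simplicity): if \(\mathcal{N}\) satisfies (1.1) and \(\tilde{\mathbf{x}}(\tau)=e^{\tau/2}\mathbf{x}(t_{0}-e^{-\tau})\), then \(\frac{dt}{d\tau}=e^{-\tau}\) and the scaling law \(\mathbf{H}_{\lambda M}(\lambda\mathbf{x})=\lambda^{-1}\mathbf{H}_{M}(\mathbf{x})\) give

\[\Big{(}\tfrac{\partial}{\partial\tau}\tilde{\mathbf{x}}\Big{)}^{\perp}=\tfrac{1}{2}\tilde{\mathbf{x}}^{\perp}+e^{-\tau/2}\,\mathbf{H}_{\mathcal{N}(t)}(\mathbf{x})=\mathbf{H}_{\tilde{\mathcal{N}}(\tau)}(\tilde{\mathbf{x}})+\tfrac{1}{2}\tilde{\mathbf{x}}^{\perp},\]

so the rescaled flow satisfies the rescaled mean curvature flow equation, whose fixed points are exactly the shrinkers of Definition 1.8.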
**Definition 2.7**.: Consider a rescaled Brakke flow \(\tilde{\mathcal{N}}\) at scale \(\beta=1\) for simplicity. We define the regularity scale \(r_{\tilde{\mathcal{N}}}(\mathbf{x},\tau)\) as follows. Define a non-rescaled flow
\[\mathcal{N}(t)=\sqrt{-t}\tilde{\mathcal{N}}(\tau-\log(-t))\]
and then set \(r_{\tilde{\mathcal{N}}}(\mathbf{x},\tau):=r_{\mathcal{N}}(\mathbf{x},-1)\).
### Regularity of low-entropy \(F\)-stationary varifolds
**Lemma 2.8**.: _Let \(V\) be a cyclic integral \(F\)-stationary \(n\)-varifold in \(\mathbb{R}^{n+1}\) with \(\lambda(V)<2\). Then:_
1. _We have_ \(\dim_{H}\operatorname{sing}V\leq n-3\)_._
2. _If_ \(\operatorname{sing}V\neq\emptyset\)_, then_ \(V\) _has an iterated tangent cone of the form (up to rotation)_ \[\mathbb{R}^{n-k}\times\mathcal{C}^{k},\;k\geq 3,\] _with_ \(\mathcal{C}^{k}\subset\mathbb{R}^{k+1}\) _a smooth multiplicity-one non-flat minimal cone._
Proof.: The assumption \(\lambda(V)<2\) rules out tangent cones given by higher-multiplicity planes or unions of four or more half-planes. The union of three half-planes is ruled out by the cyclic assumption thanks to [15]. The assertion then follows from standard dimension reduction arguments (cf. [15]).
### Smoothly crossing Brakke flows
We recall here
**Definition 2.9** ([12, Definition B.1]).: Consider integral \(n\)-Brakke flows \(\mathcal{M}\), \(\mathcal{M}^{\prime}\) in \(\mathbb{R}^{n+1}\). We say that \(\mathcal{M}\), \(\mathcal{M}^{\prime}\)_cross smoothly_ at \(X=(\mathbf{x},t)\in\operatorname{reg}\mathcal{M}\cap\operatorname{reg} \mathcal{M}^{\prime}\) if there is \(r>0\) so that \(\mathcal{M}\lfloor P(X,r),\mathcal{M}^{\prime}\lfloor P(X,r)\) are equal to smooth connected multiplicity-one flows \(\Gamma(s),\Gamma^{\prime}(s)\) so that \(\Gamma(t)\) has points on both sides of \(\Gamma^{\prime}(t)\) in any small neighborhood of \(\mathbf{x}\). We also say that \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) cross smoothly (with no reference to a point) if they cross smoothly at some \(X\in\operatorname{reg}\mathcal{M}\cap\operatorname{reg}\mathcal{M}^{\prime}\).
**Lemma 2.10** ([12, Lemma B.2]).: _Consider integral \(n\)-Brakke flows \(\mathcal{M}_{i}\), \(\mathcal{M}^{\prime}_{i}\) in \(\mathbb{R}^{n+1}\) that are unit-regular for times \(t<T\) (possibly \(T=\infty\)). Assume that \(\mathcal{M}_{i}\rightharpoonup\mathcal{M}\), \(\mathcal{M}^{\prime}_{i}\rightharpoonup\mathcal{M}^{\prime}\), and that \(\mathcal{M}\), \(\mathcal{M}^{\prime}\) cross smoothly at \(X=(\mathbf{x},t)\) with \(t<T\). Then, there are \(X_{j}\to X\) so that \(\mathcal{M}_{j}\), \(\mathcal{M}^{\prime}_{j}\) cross smoothly at \(X_{j}\) for all sufficiently large \(j\)._
Proof.: The proof in the reference (where no \(T\) parameter was present) still applies.
**Lemma 2.11**.: _Let \(M,M^{\prime}\subset\mathbb{R}^{n+1}\) be closed embedded hypersurfaces which are disjoint or coincide. Then, no pair \(\mathcal{M}\in\mathfrak{F}(M)\), \(\mathcal{M}^{\prime}\in\mathfrak{F}(M^{\prime})\) can cross smoothly._
Proof.: If \(M\neq M^{\prime}\), then \(M\cap M^{\prime}=\emptyset\) implies \(\operatorname{supp}\mathcal{M}\cap\operatorname{supp}\mathcal{M}^{\prime}=\emptyset\) by the avoidance principle (cf. [12, Lemma 10.6]).
It remains to consider \(M=M^{\prime}\). Using the definition of \(\mathfrak{F}(M)\), pick any \(M_{i}\) converging smoothly to \(M\) with \(M_{i}\cap M=\emptyset\), and its corresponding \(\mathcal{M}_{i}\in\hat{\mathfrak{F}}(M_{i})\) converging weakly to \(\mathcal{M}\). Do the same with \(M^{\prime}\), \(M^{\prime}_{i}\), \(\mathcal{M}^{\prime}_{i}\). Passing to a subsequence of \(\{M^{\prime}_{i}\}\), we may assume that \(M_{i}\cap M^{\prime}_{i}=\emptyset\), and thus \(\operatorname{supp}\mathcal{M}_{i}\cap\operatorname{supp}\mathcal{M}^{\prime }_{i}=\emptyset\) by avoidance. The result follows from Lemma 2.10.
Abusing notation, we'll say that two \(F\)-stationary \(n\)-varifolds \(V\), \(V^{\prime}\) in \(\mathbb{R}^{n+1}\) cross smoothly if their associated shrinking Brakke flows \(\mathcal{M}_{V}\), \(\mathcal{M}_{V^{\prime}}\) cross smoothly in the sense of Definition 2.9. Equivalently, \(V\), \(V^{\prime}\) cross smoothly at \(\mathbf{x}\) if there exists \(r>0\) so that \(V\lfloor B_{r}(\mathbf{x})\), \(V^{\prime}\lfloor B_{r}(\mathbf{x})\) are smooth connected multiplicity-one self-shrinkers \(\Sigma\), \(\Sigma^{\prime}\) so that \(\Sigma\) has points on both sides of \(\Sigma^{\prime}\) in any small neighborhood of \(\mathbf{x}\).
**Lemma 2.12**.: _If \(V\), \(V^{\prime}\) are cyclic integral \(F\)-stationary \(n\)-varifolds in \(\mathbb{R}^{n+1}\) with \(\lambda(V)\), \(\lambda(V^{\prime})<2\), then either \(V=V^{\prime}\) or \(V\), \(V^{\prime}\) cross smoothly._
Proof.: By the Frankel property for self-shrinkers (cf. [13, Corollary C.4]) we have \(\operatorname{supp}V\cap\operatorname{supp}V^{\prime}\neq\emptyset\). Suppose that \(\operatorname{reg}V\cap\operatorname{reg}V^{\prime}=\emptyset\). Since
\[\dim_{H}\operatorname{sing}V,\;\dim_{H}\operatorname{sing}V^{\prime}\leq n-3 \tag{2.5}\]
by Lemma 2.8, this would imply that \(\dim_{H}(\operatorname{supp}V\cap\operatorname{supp}V^{\prime})\leq n-3\). This contradicts the strong maximum principle for varifolds [12, Theorem A(i)] (cf. [14]).
We may thus consider \(\mathbf{x}\in\operatorname{reg}V\cap\operatorname{reg}V^{\prime}\). If \(V\), \(V^{\prime}\) cross smoothly at \(\mathbf{x}\), we are done. If they do not, then \(V\) lies (weakly) to one-side of \(V^{\prime}\) near \(\mathbf{x}\). The strong maximum principle then implies that \(V=V^{\prime}\) near \(\mathbf{x}\). Since \(\operatorname{reg}V,\operatorname{reg}V^{\prime}\) are connected (this holds by combining the Frankel property for self-shrinkers with [12, Theorem A(ii)]) we thus see that \(\operatorname{reg}V=\operatorname{reg}V^{\prime}\). Combined with (2.5) and \(\lambda(V)\), \(\lambda(V^{\prime})<2\), we thus see that \(V=V^{\prime}\).
**Corollary 2.13**.: _Let \(M\subset\mathbb{R}^{n+1}\) be a closed embedded hypersurface with \(\lambda(M)<2\). Suppose that \(\mathcal{M},\mathcal{M}^{\prime}\in\mathfrak{F}(M)\) and \(X\in\operatorname{supp}\mathcal{M}\cap\operatorname{supp}\mathcal{M}^{\prime }\cap\{t>0\}\). Then:_
1. \(\Theta_{\mathcal{M}}(X)=\Theta_{\mathcal{M}^{\prime}}(X)\)_, and_
2. _all tangent flows at_ \(X\) _for_ \(\mathcal{M}\)_,_ \(\mathcal{M}^{\prime}\) _coincide for_ \(t<0\)_._
Proof.: Note that (b) implies (a), so we can just prove (b). To that end, it suffices to show that any tangent flow \(\tilde{\mathcal{M}}\) to \(\mathcal{M}\) at \(X\) is also a tangent flow to \(\mathcal{M}^{\prime}\) at \(X\). The result then follows by swapping \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\).
Let \(\tilde{\mathcal{M}}\) be a tangent flow to \(\mathcal{M}\) at \(X\) obtained by parabolically dilating by \(\lambda_{i}\to\infty\). Let the corresponding dilated flows of \(\mathcal{M}\) be \(\tilde{\mathcal{M}}_{j}\), and those of \(\mathcal{M}^{\prime}\) be \(\tilde{\mathcal{M}}^{\prime}_{j}\), so that
\[\tilde{\mathcal{M}}_{j}\rightharpoonup\tilde{\mathcal{M}}\]
and, after passing to a subsequence,
\[\tilde{\mathcal{M}}^{\prime}_{j}\rightharpoonup\tilde{\mathcal{M}}^{\prime},\]
for a tangent flow \(\tilde{\mathcal{M}}^{\prime}\) of \(\mathcal{M}^{\prime}\) at \(X\). By Lemma 2.11, \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) cannot cross smoothly, and thus neither can their dilations \(\tilde{\mathcal{M}}_{j}\), \(\tilde{\mathcal{M}}^{\prime}_{j}\). Thus, by Lemma 2.10, \(\tilde{\mathcal{M}}\), \(\tilde{\mathcal{M}}^{\prime}\) cannot cross smoothly either.
If \(V\) and \(V^{\prime}\) are the \(F\)-stationary \(n\)-varifolds associated with \(\tilde{\mathcal{M}}\) and \(\tilde{\mathcal{M}}^{\prime}\) then \(\lambda(M)<2\) implies that \(\lambda(V)\) and \(\lambda(V^{\prime})<2\), so in particular \(V=V^{\prime}\) by Lemma 2.12. Thus, the set of tangent flows to \(\mathcal{M}\) at \(X\) is the same as the set of tangent flows to \(\mathcal{M}^{\prime}\) at \(X\), at least in their \(t<0\) portions that are uniquely determined by \(V\), \(V^{\prime}\).
### Parabolic planes and a covering lemma
We will be relying on estimates on the parabolic Hausdorff dimension of the singular set of a mean curvature flow obtained in [11, 10]. These estimates, as in the case of minimizing hypersurfaces, rely on the notion of the _spine_ of a tangent flow (see Definition 3.3 below). To that end, we introduce some notation for the set of possible spines.
**Definition 2.14**.: We denote by \(\mathscr{S}\) the set of subspaces \(\Pi\) of \(\mathbb{R}^{n+1}\times\mathbb{R}\) that are either of the form \(\Pi=\Pi^{\prime}\times\{0\}\) or \(\Pi=\Pi^{\prime}\times\mathbb{R}\) for \(\Pi^{\prime}\subset\mathbb{R}^{n+1}\) a subspace. Write \(D(\Pi)=\dim\Pi^{\prime}\) in the first case and \(D(\Pi)=\dim\Pi^{\prime}+2\) in the second (this is just the _parabolic_ dimension of \(\Pi\subset\mathbb{R}^{n+1}\times\mathbb{R}\)).
_Remark 2.15_.: In the general parabolic setting spines need not be elements of \(\mathscr{S}\) due to the possibility of quasi-static cones arising as tangent flows. However, unit-regular flows with entropy \(<2\) do not have quasi-static cones arising as tangent flows, so Definition 2.14 precisely suffices for our consideration of spines.
**Lemma 2.16** (cf. [1, (3.14)]).: _There is \(C=C(n)\in[1,\infty)\) with the following property._
_For any \(\Pi\in\mathscr{S}\) and \(\gamma\in(0,1)\), if \(\mathfrak{X}\subset U_{\gamma}(\Pi)\cap P((\mathbf{0},0),1)\) is an arbitrary nonempty subset, then there are points \(X_{1},\dots,X_{K}\in\mathfrak{X}\) so that_
\[\mathfrak{X}\subset\cup_{k=1}^{K}P(X_{k},\gamma)\text{ with }K\leq C\gamma^{-D(\Pi)}.\]
Proof.: This follows in a standard way from the Vitali covering lemma.
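To make the covering concrete: with respect to the parabolic metric \(d\big{(}(\mathbf{x},t),(\mathbf{y},s)\big{)}=\max\{|\mathbf{x}-\mathbf{y}|,|t-s|^{1/2}\}\), a greedy maximal \(\gamma\)-separated subset of \(\mathfrak{X}\) automatically covers \(\mathfrak{X}\) by parabolic \(\gamma\)-balls, and a Vitali-type volume count bounds its cardinality by \(C\gamma^{-D(\Pi)}\). The following Python sketch (purely illustrative; all names are ours, with the sample spine \(\Pi=\mathbb{R}^{2}\times\{0\}\times\{0\}\), so \(D(\Pi)=2\)) demonstrates the \(\gamma^{-D(\Pi)}\) scaling numerically:

```python
import numpy as np

def parabolic_dist(P, Q):
    # parabolic metric on R^{n+1} x R: spatial gap vs. square root of time gap
    (x, t), (y, s) = P, Q
    return max(np.linalg.norm(x - y), np.sqrt(abs(t - s)))

def greedy_parabolic_net(points, gamma):
    # keep a point iff it is gamma-far from all kept points; the kept
    # centers then cover every input point by parabolic gamma-balls
    centers = []
    for P in points:
        if all(parabolic_dist(P, C) >= gamma for C in centers):
            centers.append(P)
    return centers

rng = np.random.default_rng(0)
for gamma in (0.4, 0.2, 0.1):
    # sample points in U_gamma(Pi) for Pi = R^2 x {0} x {0}, inside P((0,0),1)
    pts = [(np.array([rng.uniform(-1, 1), rng.uniform(-1, 1),
                      rng.uniform(-gamma, gamma)]),
            rng.uniform(-gamma ** 2, gamma ** 2))
           for _ in range(5000)]
    K = len(greedy_parabolic_net(pts, gamma))
    print(gamma, K, K * gamma ** 2)  # K * gamma^{D(Pi)} stays bounded
```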
## 3. Low-entropy \(F\)-stationary varifolds
In this section we consider a cyclic integral \(F\)-stationary \(n\)-varifold \(V\) in \(\mathbb{R}^{n+1}\) with \(\lambda(V)<2\). We will write \(\mu(V)=\mu(\operatorname{reg}V)\) (see (1.2)).
**Proposition 3.1**.: _If \(\mu(V)>-\infty\) then any blow-up limit of \(V\) has stable regular part._
Proof.: Consider a blow-up limit \(\tilde{V}_{i}=\lambda_{i}(V-\mathbf{x}_{i})\rightharpoonup\tilde{V}\) with \(\lambda_{i}\to\infty\), \(\mathbf{x}_{i}\to\mathbf{x}_{\infty}\). Assume, for contradiction, that \(\tilde{V}\) has unstable regular part. Let \(U\) be an open set such that
\[\operatorname{sing}\tilde{V}\cap U=\emptyset\text{ and }\operatorname{reg} \tilde{V}\cap U\text{ is unstable.}\]
Fix some \(\delta>0\) and a nonzero \(\tilde{f}\in C_{c}^{\infty}(\operatorname{reg}\tilde{V}\cap U)\) with
\[\int_{\operatorname{reg}\tilde{V}}\left(|\nabla\tilde{f}(\mathbf{y})|^{2}-|A( \mathbf{y})|^{2}\tilde{f}(\mathbf{y})^{2}\right)d\mathbf{y}\leq-2\delta\int_ {\operatorname{reg}\tilde{V}}\tilde{f}(\mathbf{y})^{2}\,d\mathbf{y}.\]
Since \(\operatorname{sing}\tilde{V}\cap U=\emptyset\) we can find non-zero \(\tilde{f}_{i}\in C_{c}^{\infty}(\operatorname{reg}\tilde{V}_{i}\cap U)\) converging to \(\tilde{f}\) smoothly. Observing that as \(i\to\infty\),
\[e^{\frac{1}{4}|\mathbf{x}_{i}|^{2}}e^{-\frac{1}{4}|\mathbf{x}_{i}+\lambda_{i}^ {-1}\mathbf{y}|^{2}}=e^{-\frac{1}{2}\lambda_{i}^{-1}\mathbf{x}_{i}\cdot\mathbf{ y}-\frac{1}{4}\lambda_{i}^{-2}|\mathbf{y}|^{2}}\to 1\text{ uniformly for }|\mathbf{y}|\text{ bounded,}\]
we thus conclude that, for \(i\) sufficiently large,
\[\int_{\operatorname{reg}\tilde{V}_{i}}\big{(}|\nabla\tilde{f}_{i}|(\mathbf{y})^{2 }-|A(\mathbf{y})|^{2}\tilde{f}_{i}(\mathbf{y})^{2}\big{)}e^{-\tfrac{1}{4}| \mathbf{x}_{i}+\lambda_{i}^{-1}\mathbf{y}|^{2}}\,d\mathbf{y}\leq-\delta\int_{ \operatorname{reg}\tilde{V}_{i}}\tilde{f}_{i}(\mathbf{y})^{2}e^{-\tfrac{1}{4} |\mathbf{x}_{i}+\lambda_{i}^{-1}\mathbf{y}|^{2}}\,d\mathbf{y}.\]
Define \(f_{i}\in C_{c}^{\infty}(\operatorname{reg}V)\) by pulling \(\tilde{f}_{i}\) back to the original scale, i.e. \(f_{i}(\mathbf{x})=\tilde{f}_{i}(\lambda_{i}(\mathbf{x}-\mathbf{x}_{i}))\). We have:
\[\int_{\operatorname{reg}V}\big{(}|\nabla f_{i}|(\mathbf{x})^{2}-|A(\mathbf{x} )|^{2}f_{i}(\mathbf{x})^{2}-\tfrac{1}{2}f_{i}(\mathbf{x})^{2}\big{)}e^{- \tfrac{1}{4}|\mathbf{x}|^{2}}\,d\mathbf{x}\leq(-\delta\lambda_{i}^{2}-\tfrac{ 1}{2})\int_{\operatorname{reg}V}f_{i}(\mathbf{x})^{2}e^{-\tfrac{1}{4}| \mathbf{x}|^{2}}\,d\mathbf{x}.\]
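Here the factor \(\lambda_{i}^{2}\) is just bookkeeping from the change of variables \(\mathbf{y}=\lambda_{i}(\mathbf{x}-\mathbf{x}_{i})\): along the surface (with \(\nabla\tilde{f}_{i}\) and \(A\) on the right computed on the rescaled surface \(\operatorname{reg}\tilde{V}_{i}\)),

\[|\nabla f_{i}|(\mathbf{x})=\lambda_{i}\,|\nabla\tilde{f}_{i}|(\mathbf{y}),\qquad|A(\mathbf{x})|^{2}=\lambda_{i}^{2}\,|A(\mathbf{y})|^{2},\qquad d\mathbf{x}=\lambda_{i}^{-n}\,d\mathbf{y},\]

so the gradient and curvature terms carry a factor \(\lambda_{i}^{2}\) relative to the zeroth-order terms, and the preceding inequality for \(\tilde{f}_{i}\) converts into the displayed one for \(f_{i}\) with \(-\delta\) replaced by \(-\delta\lambda_{i}^{2}\).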
Since \(\lambda_{i}\to\infty\) we find that \(\mu(V)=-\infty\). This completes the proof.
Arguing as in the proof of Lemma 2.8 we have the following result.
**Corollary 3.2**.: _Suppose that \((\diamondsuit_{n,\Lambda})\) holds. If \(\lambda(V)<\Lambda\) and \(\mu(V)>-\infty\) then \(\operatorname{sing}V=\emptyset\), i.e., \(V\) is the varifold associated to some smooth self-shrinker \(\Sigma\in\mathcal{S}_{n}\)._
We recall Definition 2.2 for the shrinking flow \(\mathcal{M}_{V}\) associated to \(V\).
**Definition 3.3**.: For any \(F\)-stationary varifold \(V\) we set
\[\operatorname{spine}V:=\{X\in\mathbb{R}^{n+1}\times\mathbb{R}:\mathcal{M}_{V}+ X=\mathcal{M}_{V}\}. \tag{3.1}\]
It follows from [10, §8] and Remark 2.15 that \(\operatorname{spine}V\in\mathscr{S}\) and \(\Theta_{\mathcal{M}_{V}}(X)=F(V)\) if and only if \(X\in\operatorname{spine}V\). By abuse of notation we will write
\[D(V)=D(\operatorname{spine}V),\]
where \(D(\operatorname{spine}V)\) is as in Definition 2.14.
**Lemma 3.4**.: _Suppose that \(V_{j}\rightharpoonup V\) is a convergent sequence of \(F\)-stationary integral varifolds. Then:_
1. \(\limsup_{j}D(V_{j})\leq D(V)\)_, and_
2. \(\limsup_{j}\mu(V_{j})\leq\mu(V)\)_._
Proof.: Suppose that \(\mathcal{M}_{V_{j}}+X_{j}=\mathcal{M}_{V_{j}}\) for \(X_{j}\in\operatorname{spine}V_{j}\). Without loss of generality, \(|X_{j}|=1\) for all \(j\) and thus \(X_{j}\to X\) after passing to a subsequence. Passing \(\mathcal{M}_{V_{j}}+X_{j}=\mathcal{M}_{V_{j}}\) to the limit, too, we get \(\mathcal{M}_{V}+X=\mathcal{M}_{V}\), so \(X\in\operatorname{spine}V\). This proves (a).
Assertion (b) follows from (1.2) since on every open set \(U\Subset\mathbb{R}^{n+1}\setminus\operatorname{sing}V\) we have smooth convergence of \(V_{j}\to V\).
The following estimate is straightforward to prove by cutting off in the \(\mathbb{R}^{\ell}\)-factors and using the finiteness of Gaussian area (see Footnote 2 below).
Footnote 2: With some extra effort, it should be possible to show that there is equality in the above lemma, but we will not need this fact here.
**Lemma 3.5**.: _For \(\Sigma\in\mathcal{S}_{k}\) we have \(\mu(\mathbb{R}^{\ell}\times\Sigma)\leq\mu(\Sigma)\)._
The following result is a crucial ingredient in this paper. We estimate the size of the spine versus the first eigenvalue of the \(L\) operator. It is crucial that we prove a uniform bound, and this necessarily complicates the statement and proof.
**Proposition 3.6**.: _Fix \(n\in\{2,3,\ldots\}\), \(\Lambda\in(0,2]\) so that (\(\heartsuit_{n,\Lambda}\)) and (\(\diamondsuit_{n,\Lambda}\)) hold and let \(\varepsilon>0\). There are constants \(\kappa=\kappa(n,\Lambda,\varepsilon)>0,\rho_{0}=\rho_{0}(n,\Lambda,\varepsilon )>0,r_{0}=r_{0}(n,\Lambda,\varepsilon)>2\) so that the following holds._
_Let \(V\) be a non-flat and non-generic \(F\)-stationary cyclic integral \(n\)-varifold in \(\mathbb{R}^{n+1}\) with \(\lambda(V)\leq\Lambda-\varepsilon\). Then:_
1. \(\mathcal{R}_{3\rho_{0},r_{0}/3}V(\mathbf{0})\neq\emptyset\)_._
2. _For any_ \(\mathcal{R}_{3\rho_{0},r_{0}/3}V(\mathbf{0})\subset\Omega\subset\operatorname {reg}V\)_, we have_ \[2\mu(\Omega)+D(V)<-2\kappa.\]
Proof.: Suppose for contradiction that (a) fails, i.e., there is a sequence of \(F\)-stationary cyclic integral \(n\)-varifolds in \(\mathbb{R}^{n+1}\), \(V_{j}\), with
\[\mathcal{R}_{3j^{-1},j/3}V_{j}(\mathbf{0})=\emptyset.\]
Pass to a subsequential limit \(V_{j}\rightharpoonup V\). Note that \(V\) is a non-flat \(F\)-stationary cyclic integral \(n\)-varifold with \(\lambda(V)\leq\Lambda-\varepsilon\) (by a cutoff argument) and \(V\not\in\mathcal{S}_{n}^{\operatorname{gen}}\) (by [15]). In particular, \(\mathcal{R}_{\rho,r}V\neq\emptyset\) for \(\rho\) sufficiently small and \(r\) sufficiently large. This is a contradiction, proving (a).
Note that (a) remains valid if we decrease \(\rho_{0}\) and increase \(r_{0}\). As such, to prove (b) it suffices to consider \(V_{j}\) with \(\mathcal{R}_{3j^{-1},j/3}V_{j}(\mathbf{0})\subset\Omega_{j}\subset \operatorname{reg}V_{j}\) so that
\[0\leq\limsup_{j}(2\mu(\Omega_{j})+D(V_{j})) \tag{3.2}\]
Pass to a subsequential limit \(V_{j}\rightharpoonup V\) as above.
_Claim 3.7_.: We have \(2\mu(V)+D(V)<0\).
Assume this claim for now. Choose \(\rho>0\), \(r>2\), \(\Omega^{\prime}\subset\mathcal{R}_{4\rho,r/4}V(\mathbf{0})\subset \operatorname{reg}V\) so that
\[2\mu(\Omega^{\prime})+D(V)<0.\]
Allard's theorem yields, for sufficiently large \(j\), regions \(\Omega^{\prime}_{j}\subset\mathcal{R}_{3\rho,r/3}V_{j}(\mathbf{0})\subset \Omega_{j}\subset\operatorname{reg}V_{j}\) converging as smooth graphs over \(\Omega^{\prime}\). Transplanting test functions on \(\Omega^{\prime}\) to \(\Omega^{\prime}_{j}\) we find that \(\limsup_{j}\mu(\Omega^{\prime}_{j})\leq\mu(\Omega^{\prime})\). Combined with \(\mu(\Omega_{j})\leq\mu(\Omega^{\prime}_{j})\) and Lemma 3.4 we find
\[\limsup_{j}(2\mu(\Omega_{j})+D(V_{j}))\leq\limsup_{j}(2\mu(\Omega^{\prime}_{j} )+D(V_{j}))\leq 2\mu(\Omega^{\prime})+D(V)<0.\]
This contradicts (3.2), and proves (b).
It remains to prove Claim 3.7. Note it trivially holds if \(\mu(V)=-\infty\), so by Corollary 3.2 we can take \(V\) to be the varifold associated to a smooth self-shrinker \(\Sigma\in\mathcal{S}_{n}^{*}\setminus\mathcal{S}_{n}^{\operatorname{gen}}\). Then, \(\mu(\Sigma)<-1\) by [15, Theorems 9.36 and 0.17], so Claim 3.7 automatically holds if \(D(V)\leq 2\). We can thus consider \(D(V)\geq 3\). We know that
\[\Sigma=\Pi^{\prime}\times\Sigma^{\prime}\text{ for }\Sigma^{\prime}\in\mathcal{S }_{n-D(V)}^{*}\setminus\mathcal{S}_{n-D(V)}^{\operatorname{gen}}.\]
Set \(k:=n-D(V)\) and note that \(2\leq k\leq n-3\) (the first inequality follows from \(\mathcal{S}_{1}^{*}=\mathcal{S}_{1}^{\operatorname{gen}}\) and the second from \(D(V)\geq 3\)). Since \(\Sigma^{\prime}\in\mathcal{S}_{k}^{*}(\Lambda)\setminus\mathcal{S}_{k}^{ \operatorname{gen}}\), (\(\heartsuit_{n,\Lambda}\)) implies
\[2\mu(\Sigma^{\prime})<-n+k=-D(V).\]
Lemma 3.5 implies that \(\mu(\Sigma)\leq\mu(\Sigma^{\prime})\). Combining this with the last display (and recalling that \(\mu(V)\) is the first eigenvalue of the \(L\) operator on \(\operatorname{reg}V=\Sigma\)),
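\[2\mu(V)+D(V)=2\mu(\Sigma)+D(V)\leq 2\mu(\Sigma^{\prime})+D(V)<-D(V)+D(V)=0,\]
yielding Claim 3.7. This completes the proof.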
## 4. One-sided flows
### Crossing between non-generic shrinkers and their translates
The following result is [1, Proposition 2.2] generalized to the current setting of singular tangent flows. Recall Definition 2.2 for the notion of associated shrinking flows.
**Proposition 4.1**.: _Take \(n\in\{2,3,\ldots\}\), \(\Lambda\in(0,2]\) such that \((\diamondsuit_{n,\Lambda})\) holds._
_Let \(V\), \(V^{\prime}\) be cyclic integral \(F\)-stationary \(n\)-varifolds in \(\mathbb{R}^{n+1}\) with \(\lambda(V)\), \(\lambda(V^{\prime})<\Lambda\) and associated shrinking flows \(\mathcal{M}_{V},\mathcal{M}_{V^{\prime}}\). If:_
* \(V\) _is non-flat and non-generic, and_
* _there is_ \((\mathbf{0},0)\neq X^{\prime}\in\mathbb{R}^{n+1}\times\mathbb{R}\) _so that_ \(\mathcal{M}_{V}\)_,_ \(\mathcal{M}_{V^{\prime}}+X^{\prime}\) _do not cross smoothly,_
_then:_
1. \(V^{\prime}=V\)_, and_
2. \(X^{\prime}\in\operatorname{\mathrm{spine}}V\)_._
Proof.: Note that the tangent flow at \(-\infty\) to \(\mathcal{M}_{V}\) is \(\mathcal{M}_{V}\) and the tangent flow at \(-\infty\) to \(\mathcal{M}_{V^{\prime}}+X^{\prime}\) is \(\mathcal{M}_{V^{\prime}}\). By Lemma 2.10 and the definition of smoothly crossing \(F\)-stationary varifolds, \(V\) and \(V^{\prime}\) do not cross smoothly. Lemma 2.12 thus implies that \(V=V^{\prime}\).
It remains to prove that \(X^{\prime}\in\operatorname{\mathrm{spine}}V\). Write \(X^{\prime}=(\mathbf{x}^{\prime},t^{\prime})\). Since \(\mathcal{M}_{V}\) is invariant under parabolic dilation, we note that \(\mathcal{M}_{V}\) and \(\mathcal{M}_{V}+(\lambda\mathbf{x}^{\prime},\lambda^{2}t^{\prime})\) do not cross smoothly for any \(\lambda>0\). By considering \(t=-1\), we see that the speed of the family
\[\lambda\mapsto\sqrt{1+\lambda^{2}t^{\prime}}\operatorname{reg}V+\lambda \mathbf{x}^{\prime}\]
at \(\lambda=0\) must not change sign. Up to changing the sign of the unit normal \(\nu\) on \(\operatorname{reg}V\) (recalling \(\operatorname{reg}V\) is connected) we thus find that \(\mathbf{x}^{\prime}\cdot\nu\geq 0\) on \(\operatorname{reg}V\). Since \(\mathbf{x}^{\prime}\cdot\nu\) satisfies the elliptic PDE \(L(\mathbf{x}^{\prime}\cdot\nu)=\frac{1}{2}\mathbf{x}^{\prime}\cdot\nu\) along \(\operatorname{reg}V\) (cf. [13, Theorem 5.2]), the strong maximum principle implies that either \(\mathbf{x}^{\prime}\cdot\nu>0\) on \(\operatorname{reg}V\) or \(\mathbf{x}^{\prime}\cdot\nu\equiv 0\) on \(\operatorname{reg}V\).
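(For the speed assertion: the velocity of the family above at \(\lambda=0\) is
\[\frac{d}{d\lambda}\Big|_{\lambda=0}\Big(\sqrt{1+\lambda^{2}t^{\prime}}\,\mathbf{x}+\lambda\mathbf{x}^{\prime}\Big)=\Big(\frac{\lambda t^{\prime}}{\sqrt{1+\lambda^{2}t^{\prime}}}\,\mathbf{x}+\mathbf{x}^{\prime}\Big)\Big|_{\lambda=0}=\mathbf{x}^{\prime},\]
since the \(t^{\prime}\)-term is quadratic in \(\lambda\); the normal speed at \(\lambda=0\) is therefore \(\mathbf{x}^{\prime}\cdot\nu\).)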
We begin by considering the first case \(\mathbf{x}^{\prime}\cdot\nu>0\) on \(\operatorname{reg}V\). By the argument in [10, Lemma 2.1] we find that \(\mu(V)\geq-\frac{1}{2}\) (recalling that \(\mu(V)\) is the first eigenvalue of the \(L\) operator on \(\operatorname{reg}V\)). Thus Corollary 3.2 implies that \(\operatorname{sing}V=\emptyset\). Hence, by [10, Theorem 1.1], \(V\) is a hyperplane. This contradicts that \(V\) is non-flat.
Thus, we find that \(\mathbf{x}^{\prime}\cdot\nu\equiv 0\). We first assume that \(\mathbf{x}^{\prime}\neq 0\). This implies that \(\operatorname{reg}V\) splits a line in the \(\mathbf{x}^{\prime}\) direction. By Lemma 2.8, we see that \(V\) splits a line in the \(\mathbf{x}^{\prime}\) direction. If \(t^{\prime}=0\), we see that \(X^{\prime}\in\operatorname{\mathrm{spine}}V\). If \(t^{\prime}\neq 0\) then, since \(V\) splits a line in the \(\mathbf{x}^{\prime}\) direction, \(\mathcal{M}_{V}+(\mathbf{0},t^{\prime})=\mathcal{M}_{V}+(\mathbf{x}^{\prime},t^{\prime})\) does not cross \(\mathcal{M}_{V}\) smoothly, and we are reduced to the case \(\mathbf{x}^{\prime}=0\) treated below.
As such, it remains to consider the case that \(\mathbf{x}^{\prime}=0\) and \(t^{\prime}\neq 0\). In this case, we can argue as above (by considering \(t=-1\)) to see that the speed of
\[\lambda\mapsto(1+\lambda t^{\prime})\operatorname{reg}V\]
does not change sign on \(\operatorname{reg}V\). This implies (up to changing the sign of the unit normal) that the mean curvature \(H\geq 0\) on \(\operatorname{reg}V\).
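Concretely (a sketch, assuming the shrinker equation is normalized as \(H=\frac{1}{2}\mathbf{x}\cdot\nu\) on \(\operatorname{reg}V\); sign conventions may differ): the velocity of this family at \(\lambda=0\) is \(t^{\prime}\mathbf{x}\), so the normal speed is
\[t^{\prime}\,\mathbf{x}\cdot\nu=2t^{\prime}H,\]
and a fixed sign for the speed is exactly a fixed sign for \(H\). As above, we recall that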
\[LH=H\]
along \(\operatorname{reg}V\), so the strong maximum principle (and connectedness of \(\operatorname{reg}V\)) yields either \(H>0\) or \(H\equiv 0\). As above, if \(H>0\) then \(\operatorname{sing}V=\emptyset\) (by arguing as in [10, Lemma 2.1] and using Lemma 2.8). In this case, by [13, Theorem 0.17] (cf. [11])
\(V\in\mathcal{S}_{n}^{\mathrm{gen}}\), a contradiction. Thus, \(H\equiv 0\) along \(\operatorname{reg}V\). By Lemma 2.8, \(V\) is then a stationary cone, so \((\mathbf{0},t^{\prime})\in\operatorname{spine}V\). This completes the proof.
### Nearly self-similar non-generic flows
Fix \(n\in\{2,3,\ldots\}\), \(\Lambda\in(0,2]\), \(\varepsilon>0\), and consider a class of nearly self-similar flows about a non-generic singular point:
**Definition 4.2**.: For \(\eta\in(0,1]\) define \(\mathscr{P}(\eta)\) to be the set of pairs \((X,\mathcal{M})\) so that:
1. \(X\in P((\mathbf{0},0),1)\),
2. \(\mathcal{M}\) is a unit-regular cyclic integral \(n\)-Brakke flow in \(\mathbb{R}^{n+1}\) for \(t\in[-\eta^{-2},\infty)\),
3. \(\lambda(\mathcal{M})\leq\Lambda-\varepsilon\),
4. \(X\in\operatorname{sing}_{\mathrm{non\text{-}gen}}\mathcal{M}\),
5. \(\Theta_{\mathcal{M}}(X)\geq\Theta_{\mathcal{M}}(X,\eta^{-1})-4\eta\).
_Remark 4.3_.: It is useful to observe that for all \((X,\mathcal{M})\in\mathscr{P}(\eta)\), \(r\in(0,\eta^{-1}]\),
\[\Theta_{\mathcal{M}}(X)\geq\Theta_{\mathcal{M}}(X,r)-4\eta.\]
This follows from monotonicity and (5).
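Explicitly, since \(r\mapsto\Theta_{\mathcal{M}}(X,r)\) is non-decreasing by monotonicity, for \(r\in(0,\eta^{-1}]\) we have
\[\Theta_{\mathcal{M}}(X,r)\leq\Theta_{\mathcal{M}}(X,\eta^{-1})\leq\Theta_{\mathcal{M}}(X)+4\eta.\]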
**Lemma 4.4**.: _Fix \(\Lambda\in(0,2]\), \(\varepsilon>0\). Suppose that \(((\mathbf{0},0),\mathcal{M}_{j})\in\mathscr{P}(j^{-1})\) and \(\mathcal{M}_{j}\rightharpoonup\mathcal{M}\). Then, there exists a non-flat and non-generic \(F\)-stationary cyclic integral \(n\)-varifold \(V\) with \(\lambda(V)\leq\Lambda-\varepsilon\), and \(\mathcal{M}=\mathcal{M}_{V}\) whenever the latter is nonvanishing (see Definition 2.2)._
Proof.: It is clear that \(\mathcal{M}\) is a unit-regular cyclic integral \(n\)-Brakke flow in \(\mathbb{R}^{n+1}\) defined for \((-\infty,\infty)\). Due to Remark 4.3 and upper-semicontinuity of density
\[\Theta_{\mathcal{M}}((\mathbf{0},0))\geq\Theta_{\mathcal{M}}((\mathbf{0},0),r)\]
for all \(r>0\). Thus, by monotonicity,
\[\mathcal{M}(t)=\mathcal{M}_{V}(t)\text{ for all }t<0,\]
where \(V\) is an \(F\)-stationary varifold with \(\lambda(V)\leq\Lambda-\varepsilon\). Note that if \(F\) is a stationary cone, this holds for all \(t\in\mathbb{R}\) by unit-regularity and \(\Lambda-\varepsilon<2\), and that \(V\) is non-flat since \((\mathbf{0},0)\in\operatorname{sing}\mathcal{M}_{j}\) for all \(j\).
It remains to prove that \(V\not\in\mathcal{S}_{n}^{\mathrm{gen}}\). This follows from [3] as we now explain (cf. [3, Theorem 0.2]). First recall [3, (2.11)]: fix a countable dense subset \(\{f_{k}\}\) of the unit ball in \(C_{c}^{0}(\mathbb{R}^{n+1})\) and set
\[d_{*}(\mu_{1},\mu_{2}):=\sum_{k}2^{-k}\left|\int f_{k}e^{-\frac{1}{4}|\cdot|^ {2}}d\mu_{1}-\int f_{k}e^{-\frac{1}{4}|\cdot|^{2}}d\mu_{2}\right|\]
for \(\mu_{1},\mu_{2}\) Radon measures with \(F(\mu_{i})<\infty\). (This is denoted \(d_{V}\) in [3].) One can check that \(d_{*}\) metrizes the weak-\(*\) topology on the space of Radon measures \(\mu\) with \(F(\mu)<\infty\). By [3, Corollary 2.12] there is \(c_{0}\) so that if \(V\) is an \(F\)-stationary varifold with \(F(V)\leq 2-\varepsilon\), then
\[d_{*}(V,\mathcal{S}_{n}^{\mathrm{gen}})\leq 2c_{0}\qquad\implies\qquad V\in \mathcal{S}_{n}^{\mathrm{gen}}, \tag{4.1}\]
where \(d_{*}\) is evaluated on a varifold \(V\) via its induced Radon measure \(\|V\|\).
Returning to the above setup, assume, for contradiction, that \(V\in\mathcal{S}_{n}^{\mathrm{gen}}\). For each \(j\), let \(\tilde{\mathcal{M}}_{j}\) denote the rescaled Brakke flow around \((\mathbf{0},0)\) at scale \(1\) corresponding to \(\mathcal{M}_{j}\). We will use a few times that, by Remark 4.3, we have for any sequence \(\tau_{j}\in[0,\infty)\),
\[\tilde{\mathcal{M}}_{j}(\cdot+\tau_{j})\rightharpoonup V^{\prime}, \tag{4.2}\]
where \(V^{\prime}\) is an \(F\)-stationary \(n\)-varifold. By assumption, \(V^{\prime}=V\) if \(\tau_{j}\equiv 0\). Since \((\mathbf{0},0)\not\in\operatorname{sing}_{\mathrm{gen}}\mathcal{M}_{j}\) for each \(j\), there is a sequence \(\tau^{\prime}_{k,j}\to\infty\) so that (4.2) gives
\[\tilde{\mathcal{M}}_{j}(\cdot+\tau^{\prime}_{k,j})\rightharpoonup V^{\prime}_{ j}\not\in\mathcal{S}_{n}^{\mathrm{gen}}\text{ as }k\to\infty,\]
so, by (4.1),
\[d_{*}(\tilde{\mathcal{M}}_{j}(\cdot+\tau^{\prime}_{k,j}),\mathcal{S}_{n}^{ \mathrm{gen}})\geq 2c_{0}-o(1)\text{ as }k\to\infty. \tag{4.3}\]
Take \(\bar{\tau}_{j}\in[0,\infty]\) to be as large as possible so that
\[d_{*}(\tilde{\mathcal{M}}_{j}(\tau),\mathcal{S}_{n}^{\mathrm{gen}})\leq c_{0} \text{ for all }\tau\in[0,\bar{\tau}_{j}).\]
By (4.3), \(\bar{\tau}_{j}<\infty\) for all \(j\). Moreover, (4.2) with \(\tau_{j}\equiv 0\) gives that \(\bar{\tau}_{j}\to\infty\) as \(j\to\infty\). Up to a subsequence, we can assume from (4.2) that \(\tilde{\mathcal{M}}_{j}(\cdot+\bar{\tau}_{j})\rightharpoonup\bar{V}\), where \(\bar{V}\) is an \(F\)-stationary \(n\)-varifold. Then, by our choice of \(\bar{\tau}_{j}\),
\[d_{*}(\bar{V},\mathcal{S}_{n}^{\mathrm{gen}})\leq d_{*}(\tilde{\mathcal{M}}_{ j}(\bar{\tau}_{j}-1),\mathcal{S}_{n}^{\mathrm{gen}})+o(1)\leq c_{0}+o(1),\]
so \(\bar{V}\in\mathcal{S}_{n}^{\mathrm{gen}}\) by (4.1). However, this yields
\[c_{0}<\sup_{\tau\in[0,1]}d_{*}(\tilde{\mathcal{M}}_{j}(\bar{\tau}_{j}+\tau), \mathcal{S}_{n}^{\mathrm{gen}})\leq\sup_{\tau\in[0,1]}d_{*}(\tilde{\mathcal{ M}}_{j}(\bar{\tau}_{j}+\tau),\bar{V})=o(1).\]
This is a contradiction (the middle inequality uses that \(\bar{V}\in\mathcal{S}_{n}^{\mathrm{gen}}\), so \(d_{*}(\cdot,\mathcal{S}_{n}^{\mathrm{gen}})\leq d_{*}(\cdot,\bar{V})\)), completing the proof.
**Lemma 4.5**.: _Assume that \((\diamondsuit_{n,\Lambda})\) holds and take any \(((\mathbf{0},0),\mathcal{M}_{j})\), \((X^{\prime}_{j},\mathcal{M}^{\prime}_{j})\in\mathscr{P}(j^{-1})\) such that:_
* \(\mathcal{M}_{j}\)_,_ \(\mathcal{M}^{\prime}_{j}\) _don't cross smoothly,_
* \(\mathcal{M}_{j}\rightharpoonup\mathcal{M}\)_,_
* \(\mathcal{M}^{\prime}_{j}\rightharpoonup\mathcal{M}^{\prime}\)_,_
* \(X^{\prime}_{j}\to X^{\prime}\neq(\mathbf{0},0)\)_,_
_and let \(V\) be as in Lemma 4.4 (applied with \(\mathcal{M}_{j}\)). Then,_
1. \(\mathcal{M}^{\prime}=\mathcal{M}_{V}\) _whenever the latter is nonvanishing, and_
2. \(X^{\prime}\in\operatorname{spine}V\)_._
Proof.: Denote \(X^{\prime}=(\mathbf{x}^{\prime},t^{\prime})\).
By Lemma 4.4 (applied to \(\mathcal{M}_{j},\mathcal{M}^{\prime}_{j}-X^{\prime}_{j}\)), there exist two non-flat and non-generic \(F\)-stationary cyclic integral \(n\)-varifolds \(V\), \(V^{\prime}\) with \(\lambda(V),\lambda(V^{\prime})\leq\Lambda-\varepsilon\) so that
\[\mathcal{M}=\mathcal{M}_{V}\text{ and }\mathcal{M}^{\prime}=\mathcal{M}_{V^{ \prime}}(\cdot+t^{\prime})+\mathbf{x}^{\prime},\]
whenever the right hand sides are nonvanishing.
By Lemma 2.10, \(\mathcal{M}\) and \(\mathcal{M}^{\prime}\) do not cross smoothly (otherwise some \(\mathcal{M}_{j}\), \(\mathcal{M}^{\prime}_{j}\) would). Hence, Proposition 4.1 implies that \(V=V^{\prime}\) and \(X^{\prime}\in\operatorname{spine}V\), as desired.
### Graphical distance estimate for nearby rescaled flows
Going forward we fix \(n\in\{2,3,\dots\}\), \(\Lambda\in(0,2]\), \(\varepsilon>0\), so that \((\heartsuit_{n,\Lambda})\) and \((\diamondsuit_{n,\Lambda})\) both hold. Recall that constants \(\rho_{0},r_{0}>0\) were then chosen in Proposition 3.6 (depending only on \(n,\Lambda,\varepsilon\)).
**Lemma 4.6**.: _Let \(\tilde{\mathcal{M}}^{\prime}_{j}\), \(\tilde{\mathcal{M}}^{\prime\prime}_{j}\) be sequences of unit-regular cyclic integral rescaled Brakke flows in \(\mathbb{R}^{n+1}\), defined for \(\tau\in[T_{j},\infty)\) with \(T_{j}\to-\infty\), which satisfy_
* \(\lambda(\tilde{\mathcal{M}}^{\prime}_{j})\)_,_ \(\lambda(\tilde{\mathcal{M}}^{\prime\prime}_{j})\leq\Lambda-\varepsilon\)_,_
* \(\tilde{\mathcal{M}}^{\prime}_{j}\)_,_ \(\tilde{\mathcal{M}}^{\prime\prime}_{j}\) _do not cross smoothly,_
* \(\tilde{\mathcal{M}}^{\prime}_{j},\tilde{\mathcal{M}}^{\prime\prime}_{j}\rightharpoonup \tilde{\mathcal{M}}_{V}\) _for some_ \(F\)_-stationary cyclic integral_ \(n\)_-varifold_ \(V\)_, where_ \(\tilde{\mathcal{M}}_{V}\) _denotes the rescaling about_ \((\mathbf{0},0)\) _(see Definition_ 2.6_) of the flow_ \(\mathcal{M}_{V}\) _associated to_ \(V\) _(see Definition_ 2.2_)._
_Then, for sufficiently large \(j\):_
1. \(\mathcal{R}_{\rho_{0},r_{0}}\tilde{\mathcal{M}}^{\prime}_{j}(\mathbf{0},0)\neq\emptyset\)_._
2. _There is an increasing sequence of relatively open sets_ \(\tilde{W}_{j}\) _exhausting_ \(\operatorname{reg}\tilde{\mathcal{M}}_{V}\)_, with the property that the portions of_ \(\tilde{\mathcal{M}}^{\prime}_{j}\)_,_ \(\tilde{\mathcal{M}}^{\prime\prime}_{j}\) _within a fixed vertical distance from_ \(\tilde{W}_{j}\) _are normal graphs of_ \(\tilde{w}^{\prime}_{j}\)_,_ \(\tilde{w}^{\prime\prime}_{j}\in C^{\infty}(\tilde{W}_{j})\)_, and_ \[\tilde{w}^{\prime}_{j},\;\tilde{w}^{\prime\prime}_{j}\to 0\text{ in }C^{\infty}_{ \operatorname{loc}}(\operatorname{reg}\tilde{\mathcal{M}}_{V}).\] _Up to switching the unit normal of_ \(\operatorname{reg}\tilde{\mathcal{M}}_{V}\) _we can assume that_ \(\tilde{w}^{\prime}_{j}\geq\tilde{w}^{\prime\prime}_{j}\)_._
3. _There is a sequence of second order elliptic operators_ \(L_{j}\) _on_ \(\tilde{W}_{j}\) _so that_ \[(\partial_{\tau}-L_{j})(\tilde{w}^{\prime}_{j}-\tilde{w}^{\prime\prime}_{j})=0 \text{ on }\tilde{W}_{j},\] \[L_{j}\to L\text{ in }C^{\infty}_{\operatorname{loc}}(\operatorname{reg} \tilde{\mathcal{M}}_{V}),\] _where_ \(L\) _is the operator from (_1.2_)._
_Remark 4.7_.: If \(\tilde{\mathcal{M}}_{V}\) is as above, then \(\operatorname{reg}\tilde{\mathcal{M}}_{V}=\operatorname{reg}V\times\mathbb{R}\).
Proof of Lemma 4.6.: By Proposition 3.6, \(\mathcal{R}_{2\rho_{0},r_{0}/2}\tilde{\mathcal{M}}_{V}(\mathbf{0},0)\neq\emptyset\). This yields the first assertion. The remaining claims follow from unit-regularity and standard geometric arguments (recall that \(\operatorname{reg}\tilde{\mathcal{M}}_{V}\) is connected).
Consider the following geometric variant of the infimum of vertical distances:
**Definition 4.8**.: For nonempty subsets \(A,B\subset\mathbb{R}^{n+1}\) we write
\[d(A,B)=\inf\{|\mathbf{a}-\mathbf{b}|:\mathbf{a}\in A,\mathbf{b}\in B\}.\]
Note that \(d(A,B)=0\) does not mean that \(A=B\); indeed, two disjoint sets can have \(d(A,B)=0\).
**Lemma 4.9**.: _There is \(H=H(n,\Lambda,\varepsilon)\in[1,\infty)\) with the following property._
_Consider \(\tilde{\mathcal{M}}^{\prime}_{j},\tilde{\mathcal{M}}^{\prime\prime}_{j}, \tilde{\mathcal{M}}_{V},\tilde{W}_{j},\tilde{w}^{\prime}_{j},\tilde{w}^{\prime \prime}_{j}\) as in Lemma 4.6. Assume that_
\[\tilde{d}_{j}:=d(\mathcal{R}_{\rho_{0},r_{0}}\tilde{\mathcal{M}}^{\prime}_{j}( \mathbf{0},0),\operatorname{supp}\tilde{\mathcal{M}}^{\prime\prime}_{j}(0))>0\]
_for \(j=1,2,\dots\), and define_
\[\tilde{u}_{j}:=\tilde{d}_{j}^{-1}(\tilde{w}^{\prime}_{j}-\tilde{w}^{\prime \prime}_{j}).\]
_Then, after perhaps passing to a subsequence:_
(a) \(\tilde{u}_{j}\to\tilde{u}\) _in_ \(C^{\infty}_{\operatorname{loc}}(\operatorname{reg}V\times(-\infty,0))\) _with_ \(\partial_{\tau}\tilde{u}=L\tilde{u}\) _and_ \(\tilde{u}\geq 0\)_._
(b) _At rescaled time_ \(\tau=-1\)_,_ \[\sup_{\mathcal{R}_{\rho_{0},r_{0}}V(\mathbf{0})}\tilde{u}(\cdot,-1)\leq H.\]
(c) _For each rescaled time_ \(\tau_{0}<-1\)_,_ \[\sup_{\mathcal{R}_{3\rho_{0},r_{0}/3}V(\mathbf{0})}\tilde{u}(\cdot,\tau_{0})\leq H \inf_{\mathcal{R}_{\rho_{0},r_{0}}V(\mathbf{0})}\tilde{u}(\cdot,\tau_{0}+1).\]
Proof.: By Lemma 4.6, the functions \(\tilde{u}_{j}\) satisfy a second order parabolic PDE
\[(\partial_{\tau}-L_{j})\tilde{u}_{j}=0\text{ on }\tilde{W}_{j},\]
\[L_{j}\to L\text{ in }C^{\infty}_{\operatorname{loc}}(\operatorname{reg} \tilde{\mathcal{M}}_{V}).\]
Recall, per Remark 4.7, that \(\operatorname{reg}\tilde{\mathcal{M}}_{V}=\operatorname{reg}V\times\mathbb{R}\). By a standard geometric argument relating graphical separation to distance we find
\[\inf_{\mathcal{R}_{\rho_{0}/2,2r_{0}}V(\mathbf{0})}\tilde{u}_{j}(\cdot,0)\leq 1 +o(1). \tag{4.4}\]
We can now apply parabolic Harnack and Schauder theory in a standard way to conclude (a). The proofs of (b) and (c) will follow similarly except we must argue that the constant \(H\) can be chosen to only depend on \(n,\Lambda,\varepsilon\).
We first consider (b). Suppose for contradiction that for each \(k=1,2,\dots\), there is a sequence \(\tilde{\mathcal{M}}^{\prime}_{j,k},\tilde{\mathcal{M}}^{\prime\prime}_{j,k}, \tilde{\mathcal{M}}_{V_{k}},\tilde{W}_{j,k},\tilde{w}^{\prime}_{j,k},\tilde{w} ^{\prime\prime}_{j,k}\), with \(j=1,2,\dots\), as in Lemma 4.6 so that
\[\tilde{u}_{j,k}=\tilde{d}_{j,k}^{-1}(\tilde{w}^{\prime}_{j,k}-\tilde{w}^{ \prime\prime}_{j,k})\]
converges as \(j\to\infty\) in \(C^{\infty}_{\operatorname{loc}}(\operatorname{reg}V_{k}\times(-\infty,0))\) to \(\tilde{u}_{k}\) with \(\partial_{\tau}\tilde{u}_{k}=L_{V_{k}}\tilde{u}_{k}\) but so that there are \(\mathbf{x}_{k}\in\mathcal{R}_{\rho_{0},r_{0}}V_{k}(\mathbf{0})\) with
\[\tilde{u}_{k}(\mathbf{x}_{k},-1)\geq 2k. \tag{4.5}\]
We can pass to a subsequence so that \(V_{k}\rightharpoonup V\). Choose \(j=j(k)\) so that
\[\tilde{u}_{j(k),k}(\mathbf{x}_{k},-1)\geq k. \tag{4.6}\]
Using (4.4) we also choose \(\mathbf{y}_{k}\in\mathcal{R}_{\rho_{0}/2,2r_{0}}V_{k}(\mathbf{0})\) with
\[\tilde{u}_{j(k),k}(\mathbf{y}_{k},0)\leq 2. \tag{4.7}\]
Up to passing to a subsequence, \(\mathbf{x}_{k}\to\mathbf{x},\mathbf{y}_{k}\to\mathbf{y}\in\operatorname{reg}V\). Writing \(L\) for the \(L\)-operator on \(\operatorname{reg}V\), we can choose a finite chain of overlapping coordinate balls from \(\mathbf{x}\) to \(\mathbf{y}\) in \(\operatorname{reg}V\) so that in each ball, the \(L\)-operator has controlled ellipticity and coefficients. Since the \(L_{j}\) operators converge smoothly to the \(L\) operator on \(\operatorname{reg}\tilde{\mathcal{M}}_{V}\), the same holds for \(L_{j}\). Now, we can apply the parabolic Harnack inequality on the chain of balls in the usual way to bound
\[\tilde{u}_{j(k),k}(\mathbf{x}_{k},-1)\leq\tilde{H}\tilde{u}_{j(k),k}(\mathbf{ y}_{k},0),\]
where \(\tilde{H}\) depends only on \(V\) and the chosen chain of balls, but not on \(k\). However, this contradicts (4.6) and (4.7). This proves (b).
Finally, to prove (c) we argue in the same way, except we replace (4.5) by
\[\tilde{u}_{k}(\mathbf{x}_{k},\tau_{k})\geq(2k)\,\tilde{u}_{k}(\mathbf{y}_{k},\tau_{k}+1)\]
for some \(\tau_{k}<-1\). The argument is the same as before except we shift rescaled time by \(+\tau_{k}\) to account for the possibility that \(\tau_{k}\to-\infty\). Since the rescaled flow of a shrinker is stationary, this does not change the above analysis. This completes the proof.
### Separation and packing estimates
We first need an elementary growth lemma about nonnegative solutions of \((\partial_{\tau}-L)\tilde{u}=0\) on compact subdomains of shrinkers, which we will then "de-linearize." Below is a parabolic analogue of the linear estimates obtained in [10] and [10, Appendix A].
**Lemma 4.10**.: _Let \(\Sigma\subset\mathbb{R}^{n+1}\) be a compact self-shrinker with (possibly empty) boundary. Let \(\varphi>0\) denote the first eigenfunction of the \(L\)-operator on \(\Sigma\) with Dirichlet boundary conditions. Suppose that \(\tilde{u}\in C^{\infty}(\Sigma\times[\tau_{0},\tau_{1}])\) satisfies_
\[\partial_{\tau}\tilde{u}=L\tilde{u},\;\tilde{u}\geq 0.\]
_Then, for all \(\tau\in[\tau_{0},\tau_{1}]\) we have_
\[\int_{\Sigma}\tilde{u}(\mathbf{x},\tau)\varphi(\mathbf{x})e^{-\tfrac{1}{4}| \mathbf{x}|^{2}}\,d\mathbf{x}\geq e^{-\mu(\Sigma)\cdot(\tau-\tau_{0})}\int_{ \Sigma}\tilde{u}(\mathbf{x},\tau_{0})\varphi(\mathbf{x})e^{-\tfrac{1}{4}| \mathbf{x}|^{2}}\,d\mathbf{x}.\]
Proof.: Set
\[\tilde{U}(\tau):=\int_{\Sigma}\tilde{u}(\mathbf{x},\tau)\varphi(\mathbf{x})e^ {-\tfrac{1}{4}|\mathbf{x}|^{2}}\,d\mathbf{x}.\]
Differentiating under the integral sign and then integrating by parts we find
\[\tilde{U}^{\prime}(\tau) =\int_{\Sigma}(L\tilde{u}(\mathbf{x},\tau))\varphi(\mathbf{x})e^ {-\tfrac{1}{4}|\mathbf{x}|^{2}}\] \[=\int_{\Sigma}\tilde{u}(\mathbf{x},\tau)(L\varphi(\mathbf{x}))e^ {-\tfrac{1}{4}|\mathbf{x}|^{2}}-\int_{\partial\Sigma}\tilde{u}(\mathbf{x}, \tau)(\partial_{\zeta}\varphi(\mathbf{x}))e^{-\tfrac{1}{4}|\mathbf{x}|^{2}}\] \[\geq-\mu(\Sigma)\int_{\Sigma}\tilde{u}(\mathbf{x},\tau)\varphi( \mathbf{x})e^{-\tfrac{1}{4}|\mathbf{x}|^{2}}=-\mu(\Sigma)\tilde{U}(\tau).\]
Above, \(\zeta\) is the outward pointing unit conormal and we used that \(\partial_{\zeta}\varphi\leq 0\) due to the Dirichlet boundary conditions and \(\varphi\geq 0\). Integrating this proves the assertion.
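In detail (the usual Grönwall argument): the differential inequality above gives
\[\frac{d}{d\tau}\Big(e^{\mu(\Sigma)\tau}\tilde{U}(\tau)\Big)=e^{\mu(\Sigma)\tau}\big(\tilde{U}^{\prime}(\tau)+\mu(\Sigma)\tilde{U}(\tau)\big)\geq 0,\]
so \(e^{\mu(\Sigma)\tau}\tilde{U}(\tau)\geq e^{\mu(\Sigma)\tau_{0}}\tilde{U}(\tau_{0})\) for \(\tau\in[\tau_{0},\tau_{1}]\), which rearranges to the asserted inequality.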
Now continue to fix \(n\in\{2,3,\dots\}\), \(\Lambda\in(0,2]\), \(\varepsilon>0\) so that \((\heartsuit_{n,\Lambda})\) and \((\diamondsuit_{n,\Lambda})\) hold.
_Remark 4.11_.: We have chosen several other constants depending only on \(n,\Lambda,\varepsilon\), as we recall here:
1. The covering constant \(C\) was fixed in Lemma 2.16.
2. The gain of decay constant \(\kappa\) and regularity scale constants \(\rho_{0},r_{0}\) were fixed in Proposition 3.6.
3. The Harnack constant \(H\) was fixed in Lemma 4.9.
We now fix the scale size constant \(\gamma\in(0,1)\) by
\[\gamma=\min\left\{\left(\tfrac{1}{2}H^{-2}(\tfrac{5}{4e})^{2\kappa+n+2}\right) ^{1/\kappa},(2C)^{-1/\kappa},\tfrac{\sqrt{3}}{2e}\right\} \tag{4.8}\]
Note that \(\gamma=\gamma(n,\Lambda,\varepsilon)<\tfrac{1}{2}\).
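For later use we record two immediate consequences of (4.8), coming from the first and second entries of the minimum, respectively:
\[H^{2}\gamma^{\kappa}\leq\tfrac{1}{2}\big(\tfrac{5}{4e}\big)^{2\kappa+n+2}\qquad\text{and}\qquad C\gamma^{\kappa}\leq\tfrac{1}{2}<1.\]
The first yields the contradiction at the end of the proof of Proposition 4.12, and the second drives the covering estimate in the proof of Proposition 5.2.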
With these preparations, we can now discuss the key covering/separation result. To state it, it is convenient to consider the projection operator:
\[\pi:(X,\mathcal{M})\mapsto X.\]
**Proposition 4.12**.: _There exists \(\eta_{0}\in(0,1)\) with the following property._
_Let \(\mathcal{Q}\subset\mathscr{P}(\eta_{0})\) be such that:_
* \((\mathbf{0},0)\in\pi(\mathcal{Q})\)_, and_
* _if_ \((X^{\prime},\mathcal{M}^{\prime})\) _and_ \((X^{\prime\prime},\mathcal{M}^{\prime\prime})\in\mathcal{Q}\) _then_ \(\mathcal{M}^{\prime}\) _and_ \(\mathcal{M}^{\prime\prime}\) _don't cross smoothly._
_Then there exist \(D\in\mathbb{N}\) and points \(X_{1}=(\mathbf{x}_{1},t_{1}),\ldots,X_{K}=(\mathbf{x}_{K},t_{K})\in\pi( \mathcal{Q})\) such that:_
(a) \(\pi(\mathcal{Q})\subset\cup_{i=1}^{K}P(X_{i},\gamma)\) _with_ \(K\leq C\gamma^{-D}\)_;_
(b) _for_ \((X^{\prime},\mathcal{M}^{\prime})\in\mathcal{Q}\cap\pi^{-1}(P(X_{i},\gamma))\) _it holds that_ \[\mathcal{R}_{2\rho_{0},2r_{0}}\mathcal{M}^{\prime}(\mathbf{0},-4)\neq\emptyset,\] \[\mathcal{R}_{2\gamma\rho_{0},2\gamma r_{0}}\mathcal{M}^{\prime}( \mathbf{x}_{i},t_{i}-4\gamma^{2})\neq\emptyset;\text{ and,}\]
(c) _for_ \((X^{\prime},\mathcal{M}^{\prime}),(X^{\prime\prime},\mathcal{M}^{\prime \prime})\in\mathcal{Q}\cap\pi^{-1}(P(X_{i},\gamma))\) _we have:_ \[\gamma^{-(\kappa+D)}\cdot 2^{-1}d(\mathcal{R}_{2\rho_{0},2r_{0}} \mathcal{M}^{\prime}(\mathbf{0},-4),\operatorname{supp}\mathcal{M}^{\prime \prime}(-4))\] \[\leq(2\gamma)^{-1}d(\mathcal{R}_{2\gamma\rho_{0},2\gamma r_{0}} \mathcal{M}^{\prime}(\mathbf{x}_{i},t_{i}-4\gamma^{2}),\operatorname{supp} \mathcal{M}^{\prime\prime}(t_{i}-4\gamma^{2})).\]
_Remark 4.13_.: One could have replaced \(\gamma^{-(\kappa+D)}\) by \(\gamma^{-((2-\delta)\kappa+D)}\) with any \(\delta\in(0,2)\) in (c); cf. Proposition 3.6 (b).
Proof of Proposition 4.12.: Suppose for contradiction that no such \(\eta_{0}\) exists. Then there would exist a sequence \(\mathcal{Q}_{j}\subset\mathscr{P}(j^{-1})\), with \(j\) large, so that
* there is \(((\mathbf{0},0),\mathcal{M}_{j})\in\mathcal{Q}_{j}\) and
* if \((X^{\prime},\mathcal{M}^{\prime}),(X^{\prime\prime},\mathcal{M}^{\prime\prime })\in\mathcal{Q}_{j}\) then \(\mathcal{M}^{\prime}\) and \(\mathcal{M}^{\prime\prime}\) don't cross smoothly,
but for which there exist no \(D\) and collection of points in \(\pi(\mathcal{Q}_{j})\) so that (a), (b), and (c) all hold.
Pass to a subsequence so that \(\mathcal{M}_{j}\rightharpoonup\mathcal{M}\). By Lemma 4.4, there exists a non-flat and non-generic \(F\)-stationary cyclic integral \(n\)-varifold \(V\) with \(\lambda(V)\leq\Lambda-\varepsilon\) and
\[\mathcal{M}=\mathcal{M}_{V}\]
whenever the right hand side is nonvanishing. Set \(\Pi=\operatorname{spine}V\) and \(D=D(\Pi)\).
For \((X^{\prime}_{j},\mathcal{M}^{\prime}_{j})\in\mathcal{Q}_{j}\), we can pass to a subsequence so that \(X^{\prime}_{j}\to X^{\prime}\) and \(\mathcal{M}^{\prime}_{j}\rightharpoonup\mathcal{M}^{\prime}\). Then, by Lemma 4.5, \(X^{\prime}\in\Pi\) and
\[\mathcal{M}^{\prime}=\mathcal{M}=\mathcal{M}_{V}\]
whenever the right hand side is nonvanishing. After discarding finitely many \(j\), we can assume that
\[\pi(\mathcal{Q}_{j})\subset U_{\gamma}(\Pi)\cap P((\mathbf{0},0),1).\]
By Lemma 2.16 we can thus find \(X_{j,1},\ldots,X_{j,K_{j}}\in\pi(\mathcal{Q}_{j})\) so that
\[\pi(\mathcal{Q}_{j})\subset\cup_{i=1}^{K_{j}}P(X_{j,i},\gamma)\text{ and }K_{j}\leq C\gamma^{-D}.\]
Thus, (a) holds for this collection of points.
Passing to a further subsequence (not relabeled) if necessary, we can assume that either (b) or (c) fails for every \(j\) and some \(i=i(j)\). We will then derive a contradiction. For notational simplicity, we will write \(X_{j}=(\mathbf{x}_{j},t_{j})\) instead of \(X_{j,i}\).
We start with (b). Take \((X^{\prime}_{j},\mathcal{M}^{\prime}_{j})\in\mathcal{Q}_{j}\cap\pi^{-1}(P(X_{j },\gamma))\). We claim that, for all large \(j\),
\[\mathcal{R}_{2\rho_{0},2r_{0}}\mathcal{M}^{\prime}_{j}(\mathbf{0},-4)\neq\emptyset \text{ and }\mathcal{R}_{2\gamma\rho_{0},2\gamma r_{0}}\mathcal{M}^{\prime}_{j}( \mathbf{x}_{j},t_{j}-4\gamma^{2})\neq\emptyset.\]
We just prove the latter statement; the former follows similarly. Let \(\tilde{\mathcal{M}}^{\prime}_{j}\) denote the rescaled Brakke flow corresponding to \(\mathcal{M}^{\prime}_{j}\) centered at \(X_{j}=(\mathbf{x}_{j},t_{j})\) at scale \(2\gamma\) (see Definition 2.6). In other words, we set
\[\tilde{\mathcal{M}}^{\prime}_{j}(\tau)=(2\gamma)^{-1}e^{\tau/2}(\mathcal{M}^{ \prime}_{j}(t_{j}-4\gamma^{2}e^{-\tau})-\mathbf{x}_{j}).\]
Since \(\gamma\) is fixed, we find that \(\tilde{\mathcal{M}}^{\prime}_{j}\rightharpoonup\tilde{\mathcal{M}}_{V}\). Lemma 4.6 implies that, for \(j\) large,
\[\mathcal{R}_{\rho_{0},r_{0}}\tilde{\mathcal{M}}^{\prime}_{j}(\mathbf{0},0)\neq\emptyset.\]
This yields the desired nonemptiness after returning to the unrescaled flow (see Remark 2.4). This is a contradiction to our assumption that (b) failed for all \(j\).
We can thus assume (c) fails for all \(j\). By our setup, there exist
\[(X^{\prime}_{j},\mathcal{M}^{\prime}_{j}),(X^{\prime\prime}_{j},\mathcal{M}^{ \prime\prime}_{j})\in\mathcal{Q}_{j}\cap\pi^{-1}(P(X_{j},\gamma))\]
so that
\[(2\gamma)^{-1}d(\mathcal{R}_{2\gamma\rho_{0},2\gamma r_{0}} \mathcal{M}^{\prime}_{j}(\mathbf{x}_{j},t_{j}-4\gamma^{2}),\operatorname{ supp}\mathcal{M}^{\prime\prime}_{j}(t_{j}-4\gamma^{2}))\\ <\gamma^{-(\kappa+D)}\cdot 2^{-1}d(\mathcal{R}_{2\rho_{0},2r_{0}} \mathcal{M}^{\prime}_{j}(\mathbf{0},-4),\operatorname{supp}\mathcal{M}^{ \prime\prime}_{j}(-4)). \tag{4.9}\]
We now derive a contradiction to (4.9) (this will complete the proof of the proposition). It is important to shift the spatial center on the right-hand side of (4.9). We note that
\[\mathcal{R}_{2\rho_{0},2r_{0}-|\mathbf{x}_{j}|}\mathcal{M}^{\prime}_{j}( \mathbf{x}_{j},-4)\subset\mathcal{R}_{2\rho_{0},2r_{0}}\mathcal{M}^{\prime}_{ j}(\mathbf{0},-4)\]
by Remark 2.4. Thus, we have3
Footnote 3: Proposition 3.6 gives that \(\mathcal{R}_{3\rho_{0},r_{0}/3}V(\mathbf{0})\neq\emptyset\); this ensures that the sets on the right-hand-side of (4.10), (4.11), (4.12) are nonempty for large \(j\).
\[(2\gamma)^{-1}d(\mathcal{R}_{2\gamma\rho_{0},2\gamma r_{0}}\mathcal{M}^{ \prime}_{j}(\mathbf{x}_{j},t_{j}-4\gamma^{2}),\operatorname{supp}\mathcal{M}^ {\prime\prime}_{j}(t_{j}-4\gamma^{2}))\\ <\gamma^{-(\kappa+D)}\cdot 2^{-1}d(\mathcal{R}_{2\rho_{0},2r_{0}-| \mathbf{x}_{j}|}\mathcal{M}^{\prime}_{j}(\mathbf{x}_{j},-4),\operatorname{ supp}\mathcal{M}^{\prime\prime}_{j}(-4)). \tag{4.10}\]
Let's translate (4.10) into a statement about the rescaled flows \(\tilde{\mathcal{M}}^{\prime}_{j},\tilde{\mathcal{M}}^{\prime\prime}_{j}\) around \(X_{j}\) at scale \(2\gamma\). Note that under this rescaling, non-rescaled time turns into rescaled time as:
\[t=t_{j}-4\gamma^{2}\mapsto\tau=0,\]
\[t=-4\mapsto\tau=\tau_{j}:=2\log\gamma-\log(1+t_{j}/4).\]
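Indeed, from the formula for \(\tilde{\mathcal{M}}^{\prime}_{j}\) above, the rescaled time \(\tau\) corresponding to unrescaled time \(t\) solves \(t=t_{j}-4\gamma^{2}e^{-\tau}\); setting \(t=-4\) gives
\[e^{-\tau_{j}}=\frac{4+t_{j}}{4\gamma^{2}},\qquad\text{i.e.,}\qquad\tau_{j}=2\log\gamma-\log(1+t_{j}/4).\]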
In particular, using \(|t_{j}|\leq 1\), the dilation factor in the rescaled flow at \(\tau_{j}\) satisfies
\[(2\gamma)^{-1}e^{\tau_{j}/2}=(1+t_{j}/4)^{-\frac{1}{2}}:=\beta_{j}.\]
As such, (4.10) rescales (see Remark 2.4) to yield
\[d(\mathcal{R}_{\rho_{0},r_{0}}\tilde{\mathcal{M}}^{\prime}_{j}( \mathbf{0},0),\operatorname{supp}\tilde{\mathcal{M}}^{\prime\prime}_{j}(0))\\ <\gamma^{-(\kappa+D)}\cdot d(\mathcal{R}_{\beta_{j}\rho_{0}, \beta_{j}(r_{0}-|\mathbf{x}_{j}|/2)}\tilde{\mathcal{M}}^{\prime}_{j}(\mathbf{ 0},\tau_{j}),\operatorname{supp}\tilde{\mathcal{M}}^{\prime\prime}_{j}(\tau_{ j})). \tag{4.11}\]
Combining with Remark 2.4, (4.11), \(\beta_{j}\in(\frac{2}{\sqrt{5}},\frac{2}{\sqrt{3}})\), \(r_{0}>2\), \(|\mathbf{x}_{j}|\leq 1\), we thus find that there exists a fixed \(\theta\in(2,3)\) such that
\[\tilde{d}_{j}:=d(\mathcal{R}_{\rho_{0},r_{0}}\tilde{\mathcal{M}}^{\prime}_{j}(\mathbf{0},0),\operatorname{supp}\tilde{\mathcal{M}}^{\prime\prime}_{j}(0))\\ <\gamma^{-(\kappa+D)}\cdot d(\mathcal{R}_{\theta\rho_{0},r_{0}/\theta}\tilde{\mathcal{M}}^{\prime}_{j}(\mathbf{0},\tau_{j}),\operatorname{supp}\tilde{\mathcal{M}}^{\prime\prime}_{j}(\tau_{j})). \tag{4.12}\]
Observe that (4.12) and the avoidance principle imply \(\tilde{d}_{j}>0\).
We pass to a further subsequence so that \(\tau_{j}\to\tau_{0}\) with \(\tau_{0}-2\log\gamma\in[\log\frac{4}{5},\log\frac{4}{3}]\). Note that (4.8) guarantees that \(\tau_{0}\leq-2\): indeed, \(\gamma\leq\frac{\sqrt{3}}{2e}\) gives \(\tau_{0}\leq 2\log\gamma+\log\frac{4}{3}\leq\log\frac{3}{4}-2+\log\frac{4}{3}=-2\). Note also that
\[\tilde{\mathcal{M}}^{\prime}_{j},\tilde{\mathcal{M}}^{\prime\prime}_{j}\rightharpoonup \tilde{\mathcal{M}}_{V}\]
satisfy the hypotheses of Lemma 4.6. Let \(\tilde{W}_{j}\), \(\tilde{w}^{\prime}_{j}\), \(\tilde{w}^{\prime\prime}_{j}\) be as in that lemma with \(\tilde{w}^{\prime}_{j}\geq\tilde{w}^{\prime\prime}_{j}\). Apply Lemma 4.9 to pass
\[\tilde{u}_{j}:=\tilde{d}_{j}^{-1}(\tilde{w}^{\prime}_{j}-\tilde{w}^{\prime \prime}_{j})\]
to a \(C^{\infty}_{\operatorname{loc}}(\operatorname{reg}V\times(-\infty,0))\) limit that satisfies
\[\sup_{\mathcal{R}_{\rho_{0},r_{0}}V(\mathbf{0})}\tilde{u}(\cdot,-1)\leq H. \tag{4.13}\]
and
\[\sup_{\mathcal{R}_{3\rho_{0},r_{0}/3}V(\mathbf{0})}\tilde{u}(\cdot,\tau_{0}) \leq H\inf_{\mathcal{R}_{\rho_{0},r_{0}}V(\mathbf{0})}\tilde{u}(\cdot,\tau_{0}+ 1). \tag{4.14}\]
Finally, we note that using \(\theta\in(2,3)\), (4.12) passes to the limit to yield
\[\inf_{\mathcal{R}_{3\rho_{0},r_{0}/3}V(\mathbf{0})}\tilde{u}(\cdot,\tau_{0}) \geq\gamma^{\kappa+D}. \tag{4.15}\]
Combining (4.14) and (4.15) we find
\[H^{-1}\gamma^{\kappa+D}\leq\inf_{\mathcal{R}_{\rho_{0},r_{0}}V(\mathbf{0})} \tilde{u}(\cdot,\tau_{0}+1). \tag{4.16}\]
Fix a relatively open set \(\Omega\subset\operatorname{reg}V\) with smooth boundary such that
\[\mathcal{R}_{2\rho_{0},r_{0}/2}V(\mathbf{0})\subset\Omega\subset\mathcal{R}_ {\rho_{0},r_{0}}V(\mathbf{0}).\]
By Proposition 3.6,
\[2\mu(\Omega)<-2\kappa-D=:2\mu_{0}.\]
We now apply Lemma 4.10 to \(\Sigma=\overline{\Omega}\) on the interval \([\tau_{0}+1,-1]\) to find
\[\int_{\Omega}\tilde{u}(\cdot,-1)\varphi e^{-\frac{1}{4}|\cdot|^{2}}\geq e^{ \mu_{0}(\tau_{0}+2)}\int_{\Omega}\tilde{u}(\cdot,\tau_{0}+1)\varphi e^{-\frac{ 1}{4}|\cdot|^{2}}, \tag{4.17}\]
where \(\varphi>0\) is the first Dirichlet eigenfunction of \(L\) on \(\Omega\) and \(\int_{\Omega}\varphi e^{-\frac{1}{4}|\cdot|^{2}}=1\). We then claim that we have the following chain of inequalities:
\[H \geq\sup_{\mathcal{R}_{\rho_{0},r_{0}}V(\mathbf{0})}\tilde{u}(\cdot,-1)\\ \geq\int_{\Omega}\tilde{u}(\cdot,-1)\varphi e^{-\frac{1}{4}|\cdot|^{2}}\\ \geq e^{\mu_{0}(\tau_{0}+2)}\int_{\Omega}\tilde{u}(\cdot,\tau_{0}+1)\varphi e^{-\frac{1}{4}|\cdot|^{2}}\\ \geq e^{\mu_{0}(\tau_{0}+2)}\inf_{\mathcal{R}_{\rho_{0},r_{0}}V(\mathbf{0})}\tilde{u}(\cdot,\tau_{0}+1)\\ \geq H^{-1}\gamma^{\kappa+D}e^{\mu_{0}(\tau_{0}+2)}\\ \geq H^{-1}\gamma^{\kappa+D+2\mu_{0}}(4e/5)^{2\mu_{0}}.\]
The first inequality follows from (4.13), the second and fourth from the normalization of \(\varphi\) and \(\Omega\subset\mathcal{R}_{\rho_{0},r_{0}}V(\mathbf{0})\), the third from (4.17), the second to last from (4.16), and the final inequality from \(e^{\tau_{0}}\geq(4/5)\gamma^{2}\).
Noting that \(D\leq n+2\) (by definition of \(D\)), we have \(2\mu_{0}\geq-2\kappa-n-2\). Thus, we can rearrange the previous expression to read
\[(\tfrac{5}{4e})^{2\kappa+n+2}\leq H^{2}\gamma^{\kappa}.\]
This contradicts (4.8), completing the proof.
## 5. Density drop
### Setup
We continue to fix \(n\in\{2,3,\dots\}\), \(\Lambda\in(0,2]\), \(\varepsilon>0\) so that \((\heartsuit_{n,\Lambda})\) and \((\diamondsuit_{n,\Lambda})\) hold. Recall the definition of the constants \(C,\kappa,\rho_{0},H\) (cf. Remark 4.11). We have also fixed \(\gamma\) in (4.8) and \(\eta_{0}\) in Proposition 4.12. The constants \(C,\kappa,\rho_{0},H,\gamma,\eta_{0}\) all depend only on the values of \(n,\Lambda,\varepsilon\).
In this section, we consider a closed interval \(S\) with \(0\in S\) and \(|S|>0\), and a local unit-speed foliation \(\{M_{s}\}_{s\in S}\), where the \(M_{s}\subset\mathbb{R}^{n+1}\) are closed connected embedded hypersurfaces having
\[\lambda(M_{s})\leq\Lambda-\varepsilon. \tag{5.1}\]
Shrinking \(S\), we can assume that the \(M_{s}\) are uniformly smooth and embedded; in particular, there is \(t_{0}>0\) so that if \((\mathbf{x},t)\in\operatorname{sing}\mathcal{M}\) for some \(\mathcal{M}\in\mathfrak{F}(M_{s})\), \(s\in S\), then \(t\geq t_{0}\).
### Singular points on foliation Brakke flows
For any \(Q\subset S\) we will write
\[\operatorname{sing}(Q) :=\cup_{s\in Q}\cup_{\mathcal{M}\in\mathfrak{F}(M_{s})} \operatorname{sing}\mathcal{M},\] \[\operatorname{sing}_{\operatorname{gen}}(Q) :=\cup_{s\in Q}\cup_{\mathcal{M}\in\mathfrak{F}(M_{s})} \operatorname{sing}_{\operatorname{gen}}\mathcal{M}\] \[\operatorname{sing}_{\operatorname{non-gen}}(Q) :=\cup_{s\in Q}\cup_{\mathcal{M}\in\mathfrak{F}(M_{s})} \operatorname{sing}_{\operatorname{non-gen}}\mathcal{M}. \tag{5.2}\]
By Corollary 2.13 we see that if
\[\mathcal{M},\mathcal{M}^{\prime}\in\cup_{s\in S}\mathfrak{F}(M_{s})\text{ and }X=(\mathbf{x},t)\in\operatorname{supp}\mathcal{M}\cap\operatorname{supp} \mathcal{M}^{\prime}\text{ has }t>0,\]
then:
* \(\mathcal{M}\), \(\mathcal{M}^{\prime}\in\mathfrak{F}(M_{s_{0}})\) for some \(s_{0}\in S\),
* the set of tangent flows to \(\mathcal{M}\), \(\mathcal{M}^{\prime}\) at \(X\) agree,
* as do the densities \(\Theta_{\mathcal{M}}(X)\), \(\Theta_{\mathcal{M}^{\prime}}(X)\).
In particular, just as, for each \(\mathcal{M}\in\cup_{s\in S}\mathfrak{F}(M_{s})\),
\[\operatorname{sing}\mathcal{M}=\operatorname{sing}_{\operatorname{gen}} \mathcal{M}\cup\operatorname{sing}_{\operatorname{non-gen}}\mathcal{M}\]
is a disjoint union (by [15, 15]), so is
\[\operatorname{sing}(S)=\operatorname{sing}_{\operatorname{gen}}(S)\cup \operatorname{sing}_{\operatorname{non-gen}}(S).\]
Moreover, for any \(X\in\operatorname{sing}(S)\iff X\in\operatorname{sing}\mathcal{M}\), \(\mathcal{M}\in\mathfrak{F}(M_{s})\), \(s\in S\), we can define
\[\Theta(X):=\Theta_{\mathcal{M}}(X)\]
unambiguously without reference to \(\mathcal{M}\).
### The improved quantity
Let \(\mathfrak{N}\subset\operatorname{sing}_{\operatorname{non-gen}}(S)\) be a closed subset with respect to the parabolic topology and define,
\[\mathcal{D}_{\mathfrak{N}}:S\to\{-\infty\}\cup[1,\infty),\]
\[\mathcal{D}_{\mathfrak{N}}(s):=\sup\{\Theta(X):X\in\mathfrak{N}\cap \operatorname{sing}\mathcal{M},\ \mathcal{M}\in\mathfrak{F}(M_{s})\},\]
where we use the standard convention \(\sup\emptyset=-\infty\).
**Lemma 5.1**.: _The function \(s\mapsto\mathcal{D}_{\mathfrak{N}}(s)\) is upper semi-continuous, i.e., for all \(s_{0}\in S\),_
\[\limsup_{s\to s_{0}}\mathcal{D}_{\mathfrak{N}}(s)\leq\mathcal{D}_{\mathfrak{ N}}(s_{0}).\]
Proof.: Choose \(s_{i}\to s_{0}\), \(s_{i}\neq s_{0}\), \(\mathcal{M}_{i}\in\mathfrak{F}(M_{s_{i}})\), \(X_{i}\in\operatorname{sing}\mathcal{M}_{i}\cap\mathfrak{N}\) with
\[\lim_{i}\Theta(X_{i})=\limsup_{s\to s_{0}}\mathcal{D}_{\mathfrak{N}}(s).\]
Passing to a subsequence, \(\mathcal{M}_{i}\rightharpoonup\mathcal{M}\in\mathfrak{F}(M_{s_{0}})\) and \(X_{i}\to X\). Since \(\mathfrak{N}\) is closed, \(X\in\mathfrak{N}\). Thus,
\[\mathcal{D}_{\mathfrak{N}}(s_{0})\geq\Theta(X)\geq\lim_{i}\Theta(X_{i})= \limsup_{s\to s_{0}}\mathcal{D}_{\mathfrak{N}}(s)\]
by upper semicontinuity of density. This completes the proof.
The main result of this section is as follows. Below, \(\eta_{0}\) is given by Proposition 4.12.
**Proposition 5.2**.: _We have_
\[\liminf_{s\to s_{0}}\mathcal{D}_{\mathfrak{N}}(s)\leq\mathcal{D}_{\mathfrak{ N}}(s_{0})-\eta_{0}\]
_for any \(s_{0}\in S\)._
**Corollary 5.3**.: _There is a relatively open dense subset \(S^{\prime}\subset S\) with \(\operatorname{sing}(S^{\prime})\cap\mathfrak{N}=\emptyset\)._
Proof.: Note that if \(\mathcal{D}_{\mathfrak{N}}(s)<1\) then \(\mathcal{D}_{\mathfrak{N}}(s)=-\infty\) by integrality. Thus
\[S^{\prime}:=\mathcal{D}_{\mathfrak{N}}^{-1}(-\infty)=\mathcal{D}_{\mathfrak{ N}}^{-1}([-\infty,1))\]
is open using the upper-semicontinuity from Lemma 5.1. On the other hand, iterating Proposition 5.2 finitely many times (at most \(\lceil(\Lambda-\varepsilon)/\eta_{0}\rceil\) iterations suffice, since \(\mathcal{D}_{\mathfrak{N}}\leq\Lambda-\varepsilon\) by (5.1) and monotonicity, and \(\mathcal{D}_{\mathfrak{N}}=-\infty\) once \(\mathcal{D}_{\mathfrak{N}}<1\)), we see that \(S^{\prime}\) is dense. This completes the proof.
The remainder of this section is devoted to the proof of Proposition 5.2. Note that the failure of Proposition 5.2 for some \(s_{0}\in S\) would mean that
\[\liminf_{s\to s_{0}}\mathcal{D}_{\mathfrak{N}}(s)>\mathcal{D}_{\mathfrak{N}}( s_{0})-\eta_{0}. \tag{5.3}\]
As such, by combining (5.3) with Lemma 5.1, we see that if Proposition 5.2 were false, we could shrink \(S\) while preserving \(s_{0}\in S\) so that
\[\mathcal{D}_{\mathfrak{N}}(s_{0})-\eta_{0}\leq\mathcal{D}_{\mathfrak{N}}(s) \leq\mathcal{D}_{\mathfrak{N}}(s_{0})+\eta_{0}\text{ for all }s\in S. \tag{5.4}\]
For the sake of contradiction, we can thus assume that (5.4) holds in the remainder of the proof.
**Lemma 5.4**.: _There is \(r_{0}\in(0,\frac{1}{2}\sqrt{t_{0}})\) so that_
\[\Theta_{\mathcal{M}}(X,r_{0})\leq\mathcal{D}_{\mathfrak{N}}(s)+3\eta_{0}\]
_for every \(s\in S\), \(\mathcal{M}\in\mathfrak{F}(M_{s})\), and \(X\in\mathfrak{N}\cap\operatorname{sing}\mathcal{M}\)._
Proof.: If not, there exist \(X_{i}\), \(\mathcal{M}_{i}\), \(s_{i}\) as above, and \(r_{i}\to 0\) so that
\[\Theta_{\mathcal{M}_{i}}(X_{i},r_{i})>\mathcal{D}_{\mathfrak{N}}(s_{i})+3\eta_{ 0}.\]
Passing to a subsequence (not relabeled), let \(s_{i}\to s_{\infty}\in S\). Using (5.4) twice we get
\[\mathcal{D}_{\mathfrak{N}}(s_{i})\geq\mathcal{D}_{\mathfrak{N}}(s_{0})-\eta_{ 0}\geq\mathcal{D}_{\mathfrak{N}}(s_{\infty})-2\eta_{0}.\]
Thus, we find that
\[\Theta_{\mathcal{M}_{i}}(X_{i},r_{i})>\mathcal{D}_{\mathfrak{N}}(s_{\infty})+ \eta_{0}.\]
For any \(\rho\in(0,\sqrt{t_{0}})\), monotonicity yields
\[\Theta_{\mathcal{M}_{i}}(X_{i},\rho)>\mathcal{D}_{\mathfrak{N}}(s_{\infty})+ \eta_{0}\]
for large \(i\).
for large \(i\). Pass to a further subsequence (not relabeled) so that \(\mathcal{M}_{i}\rightharpoonup\mathcal{M}_{\infty}\in\mathfrak{F}(M_{s_{ \infty}})\) and \(X_{i}\to X_{\infty}\). Then,
\[\Theta_{\mathcal{M}_{\infty}}(X_{\infty},\rho)\geq\mathcal{D}_{\mathfrak{N}} (s_{\infty})+\eta_{0},\]
and sending \(\rho\to 0\), we find
\[\Theta(X_{\infty})\geq\mathcal{D}_{\mathfrak{N}}(s_{\infty})+\eta_{0}. \tag{5.5}\]
However, \(\mathfrak{N}\) is closed, so \(X_{\infty}\in\mathfrak{N}\cap\operatorname{sing}\mathcal{M}_{\infty}\), and hence \(\Theta(X_{\infty})\leq\mathcal{D}_{\mathfrak{N}}(s_{\infty})\) by the definition of \(\mathcal{D}_{\mathfrak{N}}\), contradicting (5.5). This completes the proof.
Up to performing a (single) global scaling, we will assume below that \(r_{0}=\eta_{0}^{-1}\). In particular, by Lemma 5.4, we see that
\[\operatorname{sing}(S)\cap\{X=(\mathbf{x},t):t\in(0,4\eta_{0}^{-2}]\}=\emptyset. \tag{5.6}\]
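(To see (5.6): Lemma 5.4 provides \(r_{0}<\frac{1}{2}\sqrt{t_{0}}\), so after the rescaling making \(r_{0}=\eta_{0}^{-1}\) we have
\[t_{0}>4r_{0}^{2}=4\eta_{0}^{-2},\]
and by the choice of \(t_{0}\) in the setup above there are no singular points with \(t<t_{0}\).)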
### Covers of high-density points in \(\mathfrak{N}\)
We wish to construct appropriate covers of the set of nearly top density points in \(\mathfrak{N}\):
\[\mathfrak{N}_{+}:=\{X\in\mathfrak{N}:X\in\operatorname{sing}\mathcal{M}\text{ for some }\mathcal{M}\in\mathfrak{F}(M_{s}),\,s\in S,\text{ with }\Theta(X)\geq\mathcal{D}_{\mathfrak{N}}(s)-\eta_{0}\}.\]
Let us introduce the notation we will use. For \(\ell\in\{0,1,\dots\}\) we will construct a finite set \(\mathcal{T}[\ell]\) together with "geometric realization maps"
\[\iota_{\ell}:\mathcal{T}[\ell]\to\mathfrak{N}_{+}.\]
We will also choose "parent" maps
\[\mathfrak{p}:\mathcal{T}[\ell+1]\to\mathcal{T}[\ell].\]
We will write \(\mathfrak{p}^{(k)}\) for the iterated parent (grandparent) map, with the usual convention that \(\mathfrak{p}^{(0)}=\operatorname{id}:\mathcal{T}[\ell]\to\mathcal{T}[\ell]\).
_Remark 5.5_.: Informally, we can think of \(\cup_{\ell}\iota_{\ell}(\mathcal{T}[\ell])\) as a tree of points in \(\mathfrak{N}_{+}\). However, it is technically useful to allow multiple elements to map to the same point in \(\mathfrak{N}_{+}\), but allowing them to have different parent elements.
Now, assuming that we have inductively constructed \(\mathcal{T}[0]\xleftarrow{\mathfrak{p}}\mathcal{T}[1]\xleftarrow{\mathfrak{ p}}\ldots\xleftarrow{\mathfrak{p}}\mathcal{T}[\ell]\), we proceed to define the following for each \(\mathcal{X}\in\mathcal{T}[\ell]\):
* the spacetime neighborhood \[P_{\ell}(\mathcal{X}):=\bigcap_{j=0}^{\ell}P(\iota_{j}(\mathfrak{p}^{(\ell-j)} (\mathcal{X})),\gamma^{j});\]
* the restriction of \(\mathfrak{N}_{+}\) to it \[\mathfrak{N}_{+,\ell}(\mathcal{X}):=\mathfrak{N}_{+}\cap P_{\ell}(\mathcal{X});\]
* the set of pairs (point, flow) such that the flow has a high-density point lying in \(\mathfrak{N}_{+,\ell}(\mathcal{X})\), after the translation/scaling taking \(P(\iota_{\ell}(\mathcal{X}),\gamma^{\ell})\mapsto P((\mathbf{0},0),1)\): \[\mathcal{Q}_{\ell}(\mathcal{X}):=\{(\operatorname{ParDil}_{ \gamma^{-\ell}}(X^{\prime}-\iota_{\ell}(\mathcal{X})),\operatorname{ParDil}_{ \gamma^{-\ell}}(\mathcal{M}^{\prime}-\iota_{\ell}(\mathcal{X}))):\] \[X^{\prime}\in\mathfrak{N}_{+,\ell}(\mathcal{X})\cap\operatorname{ sing}\mathcal{M}^{\prime},\mathcal{M}^{\prime}\in\mathfrak{F}(M_{s}),s\in S\},\]
where \(\operatorname{ParDil}_{\lambda}\) is the standard parabolic dilation of a Brakke flow around the space-time origin \((\mathbf{0},0)\) by a factor \(\lambda>0\).
Recalling that \(\mathscr{P}(\eta_{0})\) is as in Definition 4.2 and Proposition 4.12, and that
\[\pi:(X,\mathcal{M})\mapsto X,\]
denotes the projection onto the first coordinate in \(\mathscr{P}(\eta_{0})\), we have the following result:
**Lemma 5.6**.: _If \(\mathcal{X}\in\mathcal{T}[\ell]\), then:_
(a) \(\mathcal{Q}_{\ell}(\mathcal{X})\subset\mathscr{P}(\eta_{0})\)_, and_
(b) _the assumptions of Proposition_ 4.12 _are satisfied with_ \(\mathcal{Q}_{\ell}(\mathcal{X})\) _in place of_ \(\mathcal{Q}\)_._
Proof.: We first prove (a), i.e., that \(\mathcal{Q}_{\ell}(\mathcal{X})\) satisfies (1)-(5) in Definition 4.2 (with \(\eta=\eta_{0}\)). Note that since \(\mathfrak{N}_{+,\ell}(\mathcal{X})\subset P_{\ell}(\mathcal{X})\subset P(\iota_{\ell}(\mathcal{X}),\gamma^{\ell})\), we have
\[\operatorname{ParDil}_{\gamma^{-\ell}}(\mathfrak{N}_{+,\ell}(\mathcal{X})- \iota_{\ell}(\mathcal{X}))\subset\operatorname{ParDil}_{\gamma^{-\ell}}(P_{ \ell}(\mathcal{X})-\iota_{\ell}(\mathcal{X}))\subset P((\mathbf{0},0),1),\]
so (1) holds. Condition (2) follows from \(\gamma<1\) and (5.6). Condition (3) follows from (5.1). Condition (4) follows from the inclusion \(\mathfrak{N}\subset\operatorname{sing}_{\text{non-gen}}(S)\). Finally, for any \(\mathcal{M}^{\prime}\in\mathfrak{F}(M_{s})\) and \(X^{\prime}\in\mathfrak{N}_{+,\ell}(\mathcal{X})\cap\operatorname{sing} \mathcal{M}^{\prime}\), we have that
\[\Theta_{\mathcal{M}^{\prime}}(X^{\prime})\geq\mathcal{D}_{\mathfrak{N}}(s)- \eta_{0}\]
by definition of \(\mathfrak{N}_{+}\) and
\[\Theta_{\mathcal{M}^{\prime}}(X^{\prime},\eta_{0}^{-1})\leq\mathcal{D}_{ \mathfrak{N}}(s)+3\eta_{0}\]
by Lemma 5.4 and the fact that we scaled so that \(r_{0}=\eta_{0}^{-1}\). Putting this together, we find using monotonicity that
\[\Theta_{\mathcal{M}^{\prime}}(X^{\prime},\eta_{0}^{-1}\gamma^{\ell})\leq\Theta _{\mathcal{M}^{\prime}}(X^{\prime},\eta_{0}^{-1})\leq\Theta_{\mathcal{M}^{ \prime}}(X^{\prime})+4\eta_{0}.\]
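Since Gaussian density ratios are invariant under parabolic dilation about the base point, the last display can be read off on the dilated pair:
\[\Theta_{\operatorname{ParDil}_{\gamma^{-\ell}}(\mathcal{M}^{\prime}-\iota_{\ell}(\mathcal{X}))}\big(\operatorname{ParDil}_{\gamma^{-\ell}}(X^{\prime}-\iota_{\ell}(\mathcal{X})),\eta_{0}^{-1}\big)=\Theta_{\mathcal{M}^{\prime}}(X^{\prime},\eta_{0}^{-1}\gamma^{\ell})\leq\Theta_{\mathcal{M}^{\prime}}(X^{\prime})+4\eta_{0}.\]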
This is precisely condition (5). Thus (a) holds.
Now consider (b). The first bullet point follows from how we translated and scaled the pairs, and the second bullet point follows from Lemma 2.11.
Thus, Proposition 4.12 applies to each \(\mathcal{Q}_{\ell}(\mathcal{X})\), \(\mathcal{X}\in\mathcal{T}[\ell]\) and yields the following corollary (after undoing the various definitions of \(\mathcal{Q}_{\ell}(\mathcal{X})\), etc.):
**Corollary 5.7**.: _Let \(\mathcal{X}\in\mathcal{T}[\ell]\). Then, there exist \(D_{\ell}(\mathcal{X})\in\mathbb{N}\) and points \(X_{1}^{\mathcal{X}},\ldots,X_{K(\mathcal{X})}^{\mathcal{X}}\in\mathfrak{N}_{+, \ell}(\mathcal{X})\) so that:_
(a) \(\mathfrak{N}_{+,\ell}(\mathcal{X})\subset\cup_{i=1}^{K(\mathcal{X})}P(X_{i}^{ \mathcal{X}},\gamma^{\ell+1})\) _with_ \(K(\mathcal{X})\leq C\gamma^{-D_{\ell}(\mathcal{X})}\)_; and denoting_ \(\iota_{\ell}(\mathcal{X})=(\mathbf{x},t)\) _and_ \(X_{i}^{\mathcal{X}}=(\mathbf{x}_{i},t_{i})\) _we also have:_
(b) _if_ \(\mathcal{M}^{\prime}\in\cup_{s\in S}\mathfrak{F}(M_{s})\) _and_ \(\mathfrak{N}_{+}\cap P_{\ell}(\mathcal{X})\cap P(X_{i}^{\mathcal{X}},\gamma^{ \ell+1})\) _intersects_ \(\operatorname{sing}\mathcal{M}^{\prime}\)_, then_ \[\mathcal{R}_{2\gamma^{\ell}\rho_{0},2\gamma^{\ell}r_{0}}\mathcal{M}^{\prime}( \mathbf{x},t-4\gamma^{2\ell})\neq\emptyset,\] \[\mathcal{R}_{2\gamma^{\ell+1}\rho_{0},2\gamma^{\ell+1}r_{0}}\mathcal{M}^{\prime }(\mathbf{x}_{i},t_{i}-4\gamma^{2(\ell+1)})\neq\emptyset,\text{ and},\]
(c) _if_ \(\mathcal{M}^{\prime},\mathcal{M}^{\prime\prime}\in\cup_{s\in S}\mathfrak{F} (M_{s})\) _and_ \(\mathfrak{N}_{+}\cap P_{\ell}(\mathcal{X})\cap P(X_{i}^{\mathcal{X}},\gamma^{ \ell+1})\) _intersects both_ \(\operatorname{sing}\mathcal{M}^{\prime}\) _and_ \(\operatorname{sing}\mathcal{M}^{\prime\prime}\)_, then_ \[\gamma^{-(\kappa+D_{\ell}(\mathcal{X}))}\cdot(2\gamma^{\ell})^{-1}d( \mathcal{R}_{2\gamma^{\ell}\rho_{0},2\gamma^{\ell}r_{0}}\mathcal{M}^{\prime}( \mathbf{x},t-4\gamma^{2\ell}),\operatorname{supp}\mathcal{M}^{\prime\prime}( t-4\gamma^{2\ell}))\\ \leq(2\gamma^{\ell+1})^{-1}d(\mathcal{R}_{2\gamma^{\ell+1}\rho_{0 },2\gamma^{\ell+1}r_{0}}\mathcal{M}^{\prime}(\mathbf{x}_{i},t_{i}-4\gamma^{2( \ell+1)}),\operatorname{supp}\mathcal{M}^{\prime\prime}(t_{i}-4\gamma^{2( \ell+1)})).\]
**Definition 5.8**.: Let us now define the \(\mathcal{T}[\ell]\), \(\ell\in\{0,1,2,\ldots\}\) and parent maps \(\mathfrak{p}\):
* For \(\ell=0\), take \(\mathcal{T}[0]\subset\mathfrak{N}_{+}\) arbitrarily so that \(\mathfrak{N}_{+}\subset\cup_{X\in\mathcal{T}[0]}P(X,1)\).
* Then, inductively define \[\mathcal{T}[\ell+1]:=\bigcup_{\mathcal{X}\in\mathcal{T}[\ell]}\{X_{1}^{ \mathcal{X}},\ldots,X_{K(\mathcal{X})}^{\mathcal{X}}\}\times\{\mathcal{X}\}\] and \[\iota_{\ell+1}:\mathcal{T}[\ell+1]\to\mathfrak{N}_{+},\] \[\mathfrak{p}:\mathcal{T}[\ell+1]\to\mathcal{T}[\ell],\] to be projection onto the first and second factors, respectively.
**Corollary 5.9**.: _For \(\ell\in\{0,1,\ldots\}\) it holds that_
\[\mathfrak{N}_{+}\subset\bigcup_{\mathcal{X}\in\mathcal{T}[\ell]}P_{\ell}( \mathcal{X}).\]
Proof.: We induct on \(\ell\). When \(\ell=0\), since \(P_{0}(X)=P(\iota_{0}(X),1)\), the assertion follows from the definition of \(\mathcal{T}[0]\). If the assertion holds for \(\ell\), then
\[\mathfrak{N}_{+}=\mathfrak{N}_{+}\cap\bigcup_{\mathcal{X}\in\mathcal{T}[\ell ]}P_{\ell}(\mathcal{X})=\bigcup_{\mathcal{X}\in\mathcal{T}[\ell]}\mathfrak{N }_{+,\ell}(\mathcal{X}).\]
By (a) in Corollary 5.7 and our definition of \(\mathfrak{p}\) and \(\iota_{\ell+1}\),
\[\mathfrak{N}_{+,\ell}(\mathcal{X})\subset P_{\ell}(\mathcal{X})\cap\bigcup_{i=1 }^{K(\mathcal{X})}P(X_{i}^{\mathcal{X}},\gamma^{\ell+1})=\bigcup_{\mathcal{X}^ {\prime}\in\mathfrak{p}^{-1}(\mathcal{X})}P_{\ell+1}(\mathcal{X}^{\prime}).\]
Therefore, since \(\cup_{\mathcal{X}\in\mathcal{T}[\ell]}\mathfrak{p}^{-1}(\mathcal{X})=\mathcal{ T}[\ell+1]\),
\[\mathfrak{N}_{+}=\bigcup_{\mathcal{X}\in\mathcal{T}[\ell]}\mathfrak{N}_{+, \ell}(\mathcal{X})\subset\bigcup_{\mathcal{X}\in\mathcal{T}[\ell]}\bigcup_{ \mathcal{X}^{\prime}\in\mathfrak{p}^{-1}(\mathcal{X})}P_{\ell+1}(\mathcal{X} ^{\prime})=\bigcup_{\mathcal{X}^{\prime}\in\mathcal{T}[\ell+1]}P_{\ell+1}(\mathcal{X}^{\prime}).\]
This proves the assertion.
**Proposition 5.10**.: _There is \(\ell_{0}\) sufficiently large so that the following holds for \(\ell\geq\ell_{0}\)._
_Suppose that \(\mathcal{X}\in\mathcal{T}[\ell]\) and \(\iota_{\ell}(\mathcal{X})\in\operatorname{sing}\mathcal{M}\) for \(\mathcal{M}\in\mathfrak{F}(M_{s})\), \(s\in S\), and suppose that \(\mathcal{M}^{\prime}\in\mathfrak{F}(M_{s^{\prime}})\), \(s^{\prime}\in S\), is such that \(\mathfrak{N}_{+,\ell}(\mathcal{X})\cap\operatorname{sing}\mathcal{M}^{\prime}\neq\emptyset\). Then,_
\[|s^{\prime}-s|<16r_{0}\gamma^{\ell\kappa}\gamma^{\sum_{j=0}^{\ell-1}D_{j}( \mathfrak{p}^{(\ell-j)}(\mathcal{X}))}.\]
Proof.: We begin by considering any \(\ell,\mathcal{X},\mathcal{M},\mathcal{M}^{\prime},s,s^{\prime}\) as in the statement of the proposition. Later, we will assume a sequence of counterexamples exists with \(\ell\to\infty\) and derive a contradiction.
Write \(\iota_{j}(\mathfrak{p}^{(\ell-j)}(\mathcal{X}))=(\mathbf{x}_{j},t_{j})\) for \(j=0,\dots,\ell\). Apply Corollary 5.7 (c) at level \(j\) (in place of \(\ell\) there) centered at \(\mathfrak{p}^{(\ell-j)}(\mathcal{X})\) (in place of \(\mathcal{X}\) there) and \((\mathbf{x}_{j+1},t_{j+1})\) (in place of \(X_{i}^{\mathcal{X}}\) there) and \(\mathcal{M}\) (in place of \(\mathcal{M}^{\prime\prime}\) there) to obtain
\[\gamma^{-(\kappa+D_{j}(\mathfrak{p}^{(\ell-j)}(\mathcal{X})))}(2 \gamma^{j})^{-1}d(\mathcal{R}_{2\gamma^{j}\rho_{0},2\gamma^{j}r_{0}}\mathcal{ M}^{\prime}(\mathbf{x}_{j},t_{j}-4\gamma^{2j}),\operatorname{supp}\mathcal{M}(t_{j}-4 \gamma^{2j}))\\ \leq(2\gamma^{j+1})^{-1}d(\mathcal{R}_{2\gamma^{j+1}\rho_{0},2 \gamma^{j+1}r_{0}}\mathcal{M}^{\prime}(\mathbf{x}_{j+1},t_{j+1}-4\gamma^{2(j+ 1)}),\operatorname{supp}\mathcal{M}(t_{j+1}-4\gamma^{2(j+1)})).\]
We iterate this from \(j=0\) to \(j=\ell-1\). Together with the trivial estimate
\[d(\mathcal{R}_{2\gamma^{\ell}\rho_{0},2\gamma^{\ell}r_{0}}\mathcal{M}^{\prime }(\mathbf{x}_{\ell},t_{\ell}-4\gamma^{2\ell}),\operatorname{supp}\mathcal{M}( t_{\ell}-4\gamma^{2\ell}))\leq 4\gamma^{\ell}r_{0},\]
we obtain
\[d(\mathcal{R}_{2\rho_{0},2r_{0}}\mathcal{M}^{\prime}(\mathbf{x}_{0},t_{0}-4), \operatorname{supp}\mathcal{M}(t_{0}-4))\leq 8r_{0}\gamma^{\ell\kappa}\gamma^{ \sum_{j=0}^{\ell-1}D_{j}(\mathfrak{p}^{(\ell-j)}(\mathcal{X}))}.\]
Using that the distance between the supports of codimension-one Brakke flows is non-decreasing [10, 10.6], we get
\[d(\operatorname{supp}\mathcal{M}^{\prime}(0),\operatorname{supp}\mathcal{M}( 0))\leq 8r_{0}\gamma^{\ell\kappa}\gamma^{\sum_{j=0}^{\ell-1}D_{j}(\mathfrak{p} ^{(\ell-j)}(\mathcal{X}))}.\]
We now use this to prove the proposition.
Assume that for \(\ell\) sufficiently large there are \(\mathcal{X}_{\ell},\mathcal{M}_{\ell},\mathcal{M}^{\prime}_{\ell},s_{\ell},s ^{\prime}_{\ell}\) as in the setup but with
\[|s^{\prime}_{\ell}-s_{\ell}|\geq 16r_{0}\gamma^{\ell\kappa}\gamma^{\sum_{j=0}^ {\ell-1}D_{j}(\mathfrak{p}^{(\ell-j)}(\mathcal{X}_{\ell}))}. \tag{5.7}\]
The analysis in the previous paragraph applies to the above setting to thus yield
\[d(\operatorname{supp}\mathcal{M}^{\prime}_{\ell}(0),\operatorname{supp} \mathcal{M}_{\ell}(0))\leq 8r_{0}\gamma^{\ell\kappa}\gamma^{\sum_{j=0}^{\ell-1}D_{j}( \mathfrak{p}^{(\ell-j)}(\mathcal{X}_{\ell}))}. \tag{5.8}\]
By combining (5.7) and (5.8), we find that
\[d(\operatorname{supp}\mathcal{M}^{\prime}_{\ell}(0),\operatorname{supp} \mathcal{M}_{\ell}(0))\leq\tfrac{1}{2}|s^{\prime}_{\ell}-s_{\ell}|. \tag{5.9}\]
However, (5.9) contradicts the fact that our foliation at \(t=0\) is a unit speed foliation. This completes the proof.
### The proof of Proposition 5.2
For \(\ell\in\{0,1,\dots\}\) fixed, we form an open cover of \(S\) as follows: for \(\mathcal{X}\in\mathcal{T}[\ell]\) we fix \(s\in S\) with \(\iota_{\ell}(\mathcal{X})\in\operatorname{sing}\mathcal{M}\) for some \(\mathcal{M}\in\mathfrak{F}(M_{s})\). Then, let \(U_{\ell}(\mathcal{X})\subset S\) denote the relatively open interval centered at \(s\) with length
\[|U_{\ell}(\mathcal{X})|=32r_{0}\gamma^{\ell\kappa}\gamma^{\sum_{j=0}^{\ell-1}D_ {j}(\mathfrak{p}^{(\ell-j)}(\mathcal{X}))},\]
or, if \(s\) is close to \(\partial S\), then part of \(U_{\ell}(\mathcal{X})\) is cut off and the equality above is replaced by \(\leq\); this goes in the right direction below.
Assume now that \(\ell\geq\ell_{0}\), where \(\ell_{0}\) is as in Proposition 5.10. Then,
\[S=\bigcup_{\mathcal{X}\in\mathcal{T}[\ell]}U_{\ell}(\mathcal{X}).\]
Indeed, for \(s^{\prime}\in S\), (5.4) implies that there is \(\mathcal{M}^{\prime}\in\mathfrak{F}(M_{s^{\prime}})\) and \(X^{\prime}\in\mathfrak{N}_{+}\cap\operatorname{sing}\mathcal{M}^{\prime}\). By Corollary 5.9, \(X^{\prime}\in P_{\ell}(\mathcal{X})\) for some \(\mathcal{X}\in\mathcal{T}[\ell]\), so \(s^{\prime}\in U_{\ell}(\mathcal{X})\) by Proposition 5.10.
It follows that
\[|S| \leq\sum_{\mathcal{X}\in\mathcal{T}[\ell]}|U_{\ell}(\mathcal{X})|\leq 32r_{0}\gamma^{\ell\kappa}\sum_{\mathcal{X}\in\mathcal{T}[\ell]}\gamma^{\sum_{j=0}^{\ell-1}D_{j}(\mathfrak{p}^{(\ell-j)}(\mathcal{X}))}=32r_{0}\gamma^{\ell\kappa}\sum_{\mathcal{X}_{0}\in\mathcal{T}[0]}\sum_{\mathcal{X}_{1}\in\mathfrak{p}^{-1}(\mathcal{X}_{0})}\ldots\sum_{\mathcal{X}_{\ell}\in\mathfrak{p}^{-1}(\mathcal{X}_{\ell-1})}\gamma^{\sum_{j=0}^{\ell-1}D_{j}(\mathcal{X}_{j})}\]
By Corollary 5.7 and the definition of \(\mathfrak{p}\) we have for \(j\in\{0,1,\ldots,\ell-1\}\) that
\[|\mathfrak{p}^{-1}(\mathcal{X}_{j})|=K(\mathcal{X}_{j})\leq C\gamma^{-D_{j}( \mathcal{X}_{j})}.\]
Therefore,
\[|S|\leq 32r_{0}|\mathcal{T}[0]|\gamma^{\ell\kappa}C^{\ell}=32r_{0}|\mathcal{ T}[0]|(C\gamma^{\kappa})^{\ell}.\]
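Here the passage from the nested sums to \(C^{\ell}\) uses that each level contributes a factor of at most \(C\): since the weight does not depend on the innermost summation index,
\[\sum_{\mathcal{X}_{j+1}\in\mathfrak{p}^{-1}(\mathcal{X}_{j})}\gamma^{D_{j}(\mathcal{X}_{j})}=K(\mathcal{X}_{j})\,\gamma^{D_{j}(\mathcal{X}_{j})}\leq C,\]
and one iterates this from \(j=\ell-1\) down to \(j=0\).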
By (4.8), it holds that \(C\gamma^{\kappa}\leq\frac{1}{2}<1\). Thus, the right hand side of this chain of inequalities is \(o(1)\) as \(\ell\to\infty\). This contradicts \(|S|>0\), completing the proof.
## 6. Proof of Theorem 1.13
We fix \(n\in\{2,3,\ldots\}\), \(\Lambda\in(0,2]\) so that \((\heartsuit_{n,\Lambda})\) and \((\diamondsuit_{n,\Lambda})\) hold, and fix any closed embedded \(M^{n}\subset\mathbb{R}^{n+1}\) with \(\lambda(M)\leq\Lambda\). We need some preparations for the proof.
### Low-entropy local foliation
Flowing \(M\) by (smooth) mean curvature flow for a short time produces \(M^{\prime}\), an arbitrarily small \(C^{\infty}\)-graph over \(M\) with \(\lambda(M^{\prime})<\lambda(M)\leq\Lambda\), unless \(M\) is a self-shrinker. If \(M\) is a self-shrinker then either it is a round sphere (in which case Theorem 1.13 follows trivially) or, by [12, Theorem 0.7], there is an arbitrarily small \(C^{\infty}\)-graph \(M^{\prime}\) with \(\lambda(M^{\prime})<\lambda(M)\leq\Lambda\). As such, up to replacing \(M\) by the \(M^{\prime}\) considered above, we can assume that, for some \(\varepsilon>0\),
\[\lambda(M)\leq\Lambda-2\varepsilon.\]
For \(S\) a closed interval with \(0\in S\) and \(|S|>0\), let \(\{M_{s}\}_{s\in S}\) denote a unit-speed (local) foliation so that the \(M_{s}\) are uniformly smooth and embedded with
\[\lambda(M_{s})\leq\Lambda-\varepsilon\text{ for all }s\in S.\]
### The generic strata
Recall the definitions of \(\operatorname{sing}(S),\operatorname{sing}_{\operatorname{gen}}(S), \operatorname{sing}_{\operatorname{non-gen}}(S)\subset\mathbb{R}^{n+1}\times \mathbb{R}\) from (5.2) and define, for \(k=0,\ldots,n-1\), the \(k\)-th generic stratum by
\[\mathfrak{G}^{k}:=\{X\in\operatorname{sing}_{\operatorname{gen}}(S):\Theta(X) =\lambda(\mathbb{S}^{n-k}\times\mathbb{R}^{k})\}\subset\mathbb{R}^{n+1} \times\mathbb{R}.\]
Note that
\[\operatorname{sing}_{\operatorname{gen}}(S)=\cup_{k=0}^{n-1}\mathfrak{G}^{k}.\]
Fix \(\eta_{1}=\eta_{1}(n)\) so that if \(X_{\ell}\in\operatorname{sing}_{\operatorname{non-gen}}(S)\) and \(X_{\ell}\to X\in\mathfrak{G}^{k}\), \(k=0,1,\ldots,n-1\), then
\[\limsup_{\ell}\Theta(X_{\ell})\leq\lambda(\mathbb{S}^{n-k})-\eta_{1}.\]
(Recall that \(\lambda(\mathbb{S}^{n-k}\times\mathbb{R}^{k})=\lambda(\mathbb{S}^{n-k})\), since entropy is unchanged by taking products with lines.)
The existence of such \(\eta_{1}\) follows from a straightforward adaptation of [12, Theorem 0.2] to the present setting.
### Closed subsets of non-generic singular points
For \(\alpha>0\) define the following subsets of the set of non-generic singular points:
\[\tilde{\mathfrak{N}}_{\alpha}:=\operatorname{sing}_{\text{non-gen}}(S)\setminus U _{\alpha}(\operatorname{sing}_{\text{gen}}(S))\]
and, for \(k=0,\dots,n-1\),
\[\mathfrak{N}_{\alpha}^{k}:=\operatorname{sing}_{\text{non-gen}}(S)\cap \overline{\mathfrak{G}^{k}}\setminus\cup_{j=k+1}^{n-1}U_{\alpha}(\mathfrak{G }^{j}).\]
Here, \(\bar{\cdot}\) denotes the closure and \(U_{\alpha}(\cdot)\) the open \(\alpha\)-neighborhood in spacetime \(\mathbb{R}^{n+1}\times\mathbb{R}\), with respect to the parabolic metric. Finally, define
\[\mathfrak{N}_{\alpha}:=\tilde{\mathfrak{N}}_{\alpha}\cup\mathfrak{N}_{\alpha }^{n-1}\cup\dots\cup\mathfrak{N}_{\alpha}^{0}.\]
**Lemma 6.1**.: \(\mathfrak{N}_{\alpha}\subset\mathbb{R}^{n+1}\times\mathbb{R}\) _is closed._
Proof.: We show that each set in the union is closed. The case of \(\tilde{\mathfrak{N}}_{\alpha}\) is straightforward: if \(X_{\ell}\in\tilde{\mathfrak{N}}_{\alpha}\) are such that \(X_{\ell}\to X\), then \(X\) belongs to the closed set \(\operatorname{sing}(S)\) but clearly \(X\not\in U_{\alpha}(\operatorname{sing}_{\text{gen}}(S))\); in particular \(X\not\in\operatorname{sing}_{\text{gen}}(S)\), so \(X\in\tilde{\mathfrak{N}}_{\alpha}\).
Next, consider \(\mathfrak{N}_{\alpha}^{k}\). Take \(X_{\ell}\in\mathfrak{N}_{\alpha}^{k}\) with \(X_{\ell}\to X\). Then, \(X\in\overline{\mathfrak{G}^{k}}\setminus\cup_{j=k+1}^{n-1}U_{\alpha}( \mathfrak{G}^{j})\) is immediate. It remains to prove that \(X\in\operatorname{sing}_{\text{non-gen}}(S)\).
If not, then \(X\in\operatorname{sing}_{\text{gen}}(S)\), so \(X\in\cup_{m=0}^{k}\mathfrak{G}^{m}\) since \(X\not\in\cup_{j=k+1}^{n-1}U_{\alpha}(\mathfrak{G}^{j})\). Note that
\[X_{\ell},\;X\in\overline{\mathfrak{G}^{k}}\implies\Theta(X_{\ell}),\;\Theta(X )\geq\lambda(\mathbb{S}^{n-k}) \tag{6.1}\]
by upper semicontinuity of density. Combining (6.1) and \(\lambda(\mathbb{S}^{1})>\lambda(\mathbb{S}^{2})>\dots>\lambda(\mathbb{S}^{n})\), we have that \(X\in\mathfrak{G}^{k}\). Thus, by the choice of \(\eta_{1}\), we have
\[\limsup_{\ell}\Theta(X_{\ell})\leq\lambda(\mathbb{S}^{n-k})-\eta_{1}.\]
This contradicts the fact that \(\Theta(X_{\ell})\geq\lambda(\mathbb{S}^{n-k})\) by (6.1).
### The perturbation
It follows from Lemma 6.1 that Corollary 5.3 applies with \(\mathfrak{N}_{\alpha}\) in place of \(\mathfrak{N}\) and yields a relatively open dense subset \(S^{\prime}_{\alpha}\subset S\) with \(\operatorname{sing}(S^{\prime}_{\alpha})\cap\mathfrak{N}_{\alpha}=\emptyset\).
Consider \(S^{\prime}=\cap_{\alpha\in\mathbb{Q}\cap(0,1]}S^{\prime}_{\alpha}\), which is dense by the Baire category theorem. Theorem 1.13 now follows from Lemma 6.2 below.
**Lemma 6.2**.: \(\operatorname{sing}_{\text{non-gen}}(S^{\prime})=\emptyset\)_._
Proof.: Suppose, for contradiction, that there exists some \(X\in\operatorname{sing}_{\text{non-gen}}(S^{\prime})\), and take any sequence \(\alpha_{i}\in\mathbb{Q}\cap(0,1]\) with \(\alpha_{i}\to 0\). Fix some \(i\). Using that
\[S^{\prime}\subset S^{\prime}_{\alpha_{i}}\text{ and }\operatorname{sing}(S^{\prime}_{\alpha_{i}})\cap\mathfrak{N}_{\alpha_{i}}=\emptyset, \tag{6.2}\]
we see that \(X\not\in\mathfrak{N}_{\alpha_{i}}\). But \(\mathfrak{N}_{\alpha_{i}}\supset\tilde{\mathfrak{N}}_{\alpha_{i}}\), so \(X\not\in\tilde{\mathfrak{N}}_{\alpha_{i}}\), so in particular \(X\in U_{\alpha_{i}}(\operatorname{sing}_{\text{gen}}(S))\). Since \(i\) was arbitrary,
\[X\in\cap_{i}U_{\alpha_{i}}(\operatorname{sing}_{\text{gen}}(S))=\overline{ \operatorname{sing}_{\text{gen}}(S)}=\overline{\cup_{k=0}^{n-1}\mathfrak{G}^{ k}}.\]
As a result, there exists some \(k=0,1,\dots,n-1\) so that
\[X\in\overline{\mathfrak{G}^{k}}\setminus\cup_{j=k+1}^{n-1}\overline{ \mathfrak{G}^{j}}. \tag{6.3}\]
Fix \(i\) again. Using (6.2) again, now together with \(\mathfrak{N}_{\alpha_{i}}\supset\mathfrak{N}_{\alpha_{i}}^{k}\), it follows that \(X\not\in\mathfrak{N}_{\alpha_{i}}^{k}\). Combined with \(X\in\operatorname{sing}_{\text{non-gen}}(S^{\prime})\) we see that
\[X\notin\overline{\mathfrak{G}^{k}}\setminus\cup_{j=k+1}^{n-1}U_{\alpha_{i}}( \mathfrak{G}^{j}).\]
Since \(X\in\overline{\mathfrak{G}^{k}}\) by (6.3), it must hold that
\[X\in\cup_{j=k+1}^{n-1}U_{\alpha_{i}}(\mathfrak{G}^{j}).\]
But since \(i\) was arbitrary, this implies that
\[X\in\cap_{i}\cup_{j=k+1}^{n-1}U_{\alpha_{i}}(\mathfrak{G}^{j})=\cup_{j=k+1}^{n- 1}\overline{\mathfrak{G}^{j}},\]
contradicting (6.3). This completes the proof.
## Appendix A The generic strong multiplicity-one property
In this appendix we consider the following condition:
**Definition A.1**.: We say that a closed embedded hypersurface \(M^{n}\subset\mathbb{R}^{n+1}\) satisfies the _strong multiplicity-one property_ if the following holds. Consider:
* hypersurfaces \(M_{j}\subset\mathbb{R}^{n+1}\) converging smoothly to \(M\),
* flows \(\mathcal{M}_{j}\in\mathfrak{F}(M_{j})\),
* space-time points \(X_{j}\to X\in\mathbb{R}^{n+1}\times(0,\infty)\), and
* scales \(\lambda_{j}\to\infty\),
so that \(\operatorname{ParDil}_{\lambda_{j}}(\mathcal{M}_{j}-X_{j})\rightharpoonup\tilde {\mathcal{M}}\). Then \(\operatorname{reg}\tilde{\mathcal{M}}\) has multiplicity one and the parabolic dimension of the singular set satisfies \(\dim_{H}\operatorname{sing}\tilde{\mathcal{M}}\leq n\).
In particular, polyhedral cones other than the flat multiplicity one hyperplane cannot arise as limit flows of perturbations of \(M\).
_Remark A.2_.: Bamler has recently proven that a condition of this type does indeed hold in the Ricci flow setting [1, 1].
It is important to observe that, by dimension reduction [10] (and the cyclic property of \(M\) [10]), if \(M\) fails to have the strong multiplicity-one property then we can adjust \(X_{j},\lambda_{j}\) so as to ensure that either:
1. \(\tilde{\mathcal{M}}\) is a static/quasi-static multiplicity \(\geq 2\) hyperplane, or
2. \(\tilde{\mathcal{M}}\) is a union of \(\geq 4\) multiplicity-one half \(n\)-planes meeting along an \((n-1)\)-plane.
(Thus, e.g., all \(M\) with \(\lambda(M)<2\) automatically have the strong multiplicity-one property; cf. Lemma 2.8.)
**Lemma A.3** (Openness of the strong multiplicity-one property).: _Suppose that \(M_{k}\) are smooth closed embedded hypersurfaces in \(\mathbb{R}^{n+1}\) smoothly converging to \(M\). If \(M\) satisfies the strong multiplicity-one property then so does \(M_{k}\) for \(k\) sufficiently large._
Proof.: If not, there are \(M_{k,j}\) smoothly converging to \(M_{k}\), flows \(\mathcal{M}_{k,j}\in\mathfrak{F}(M_{k,j})\), space-time points \(X_{k,j}\to X_{k}\in\mathbb{R}^{n+1}\times(0,\infty)\), and scales \(\lambda_{k,j}\to\infty\) so that
\[\operatorname{ParDil}_{\lambda_{k,j}}(\mathcal{M}_{k,j}-X_{k,j})\rightharpoonup \tilde{\mathcal{M}}_{k}\]
as \(j\to\infty\), where \(\tilde{\mathcal{M}}_{k}\) satisfies either (1) or (2) above. If (1) is satisfied for infinitely many \(k\), we can pass to a subsequence so that \(\tilde{\mathcal{M}}_{k}\rightharpoonup\tilde{\mathcal{M}}\), still with \(\tilde{\mathcal{M}}\) satisfying (1). If (2) is satisfied for infinitely many \(k\), then either \(\tilde{\mathcal{M}}_{k}\rightharpoonup\tilde{\mathcal{M}}\), still with \(\tilde{\mathcal{M}}\) satisfying (2) as is, or (2) holds where two (or more) of the half-planes overlap.
In any case, we can (possibly after adjusting \(X_{k,j},\lambda_{k,j}\) and passing to an un-labeled subsequence) find a sequence \(j_{0}(k)\to\infty\) so that if \(j(k)\geq j_{0}(k)\) then
\[\operatorname{ParDil}_{\lambda_{k,j(k)}}(\mathcal{M}_{k,j(k)}-X_{k,j(k)})\rightharpoonup \tilde{\mathcal{M}}\]
where \(\tilde{\mathcal{M}}\) satisfies either (1) or (2). Indeed, the only case that is not obvious is the final possibility, in which \(\tilde{\mathcal{M}}_{k}\) is a union of half-planes with two or more of them converging together. In this case we just have to re-center the points and scales on the converging half-planes so as to arrange that we are in case (1).
Taking \(j(k)\geq j_{0}(k)\) sufficiently large, we can ensure that \(M_{k,j(k)}\) converges smoothly to \(M\). In particular, the flows \(\mathcal{M}_{k,j(k)}\) are smooth for a definite interval of time, and thus the points \(X_{k,j(k)}\) are bounded away from time \(0\). The data \(M_{k,j(k)}\), \(\mathcal{M}_{k,j(k)}\), \(X_{k,j(k)}\), \(\lambda_{k,j(k)}\) contradict the assumed strong multiplicity-one property of \(M\). This completes the proof.
We will work with initial data that can be approximated by hypersurfaces with the strong multiplicity-one property:
**Definition A.4**.: We say that a closed embedded hypersurface \(M^{n}\subset\mathbb{R}^{n+1}\) satisfies the _generic strong multiplicity-one property_ if there are \(M_{k}\) smoothly converging to \(M\) with each \(M_{k}\) having the strong multiplicity-one property.
The techniques in this paper used to prove Corollaries 1.18 and 1.19 can be used to prove the following result, which drops all entropy bounds and replaces them with the generic strong multiplicity-one assumption. (A stronger partial result holds for \(n=2\) by [10, 11]; see [10, Theorem 9.2] for a precise statement.)
**Theorem A.5**.: _If \(M^{n}\subset\mathbb{R}^{n+1}\), \(n\in\{2,3,4\}\), has the generic strong multiplicity-one property then there exist arbitrarily small \(C^{\infty}\) graphs \(M^{\prime}\) over \(M\) so that \(\operatorname{sing}_{\text{\rm non-gen}}\mathcal{M}^{\prime}=\emptyset\) for all \(\mathcal{M}^{\prime}\in\mathfrak{F}(M^{\prime})\)._
Note that a natural generalization of Ilmanen's multiplicity-one conjecture would be for _all_ initial data themselves to have the strong multiplicity-one property directly; a weaker one is for all initial data to satisfy the _generic_ strong multiplicity-one property.
Let us explain the necessary modifications to prove Theorem A.5. First, by assumption we can perturb \(M\) (not relabeled) so that \(M\) satisfies the strong multiplicity-one property. Now, following the setup in Section 6, we embed \(M\) in a local unit speed foliation \(\{M_{s}\}_{s\in S}\). By Lemma A.3 we can shrink \(S\) (still with \(0\in S\)) so that all \(M_{s}\), \(s\in S\), have the strong multiplicity-one property.
We make the following modifications to the argument used in the rest of the paper:
1. Any \(\mathcal{M}\in\mathfrak{F}(M_{s})\), \(s\in S\), has \(\operatorname{reg}\mathcal{M}\) connected and multiplicity one by [10, Corollary F.4] and the strong multiplicity-one property (here we just have to consider tangent flows to \(\mathcal{M}\) without varying the parameter \(s\)). In particular, the regularity scale continues to be continuous (cf. Remark 2.5).
2. Instead of considering arbitrary \(F\)-stationary varifolds we always consider \(F\)-stationary varifolds that correspond to self-similar flows arising as limits of \(\mathcal{M}_{s_{j}}\in\mathfrak{F}(M_{s_{j}})\), \(s_{j}\in S\). The conclusion of Lemma 2.8 continues to hold in this setting. The \(\lambda(V)<2\) assumption in subsequent results (e.g. Lemma 2.12, Corollary 2.13, Proposition 3.1, etc.) is simply used to refer to a regularity result along the lines of Lemma 2.8, and thus these results continue to hold in this more general setting.
3. The class \(\mathscr{P}(\eta)\) as in Definition 4.2 should be replaced with the smaller set of such pairs that arise as rescalings of flows \(\mathcal{M}\in\mathfrak{F}(M_{s})\), \(s\in S\) (and condition (3) should be removed). A similar modification (only considering flows that arise as rescalings of flows starting at the \(M_{s}\), \(s\in S\)) should be made in subsequent arguments (e.g. Lemmas 4.6 and 4.9, as well as Proposition 4.12).
The remainder of the argument is unchanged.
_Remark A.6_.: It would suffice to assume that \(M\) satisfies a different form of the generic strong multiplicity-one property in which one only considers initial data that are leaves of the local foliation \(\{M_{s}\}\).
_Remark A.7_.: Note that we do not know if the second main hypothesis \((\heartsuit_{n,\Lambda})\) holds when \(n=5\) and \(\Lambda\) is large (in other words, we do not know if there is a non-generic shrinker \(\Sigma^{2}\subset\mathbb{R}^{3}\) with \(\mu(\Sigma)\geq-\frac{3}{2}\)). Because of this, it is unclear if Theorem A.5 holds in \(\mathbb{R}^{6}\) (even assuming the generic strong multiplicity-one property for \(M\)).
## Appendix B Stable minimal cones as shrinkers
In this appendix, we prove an estimate for the first eigenvalue of the \(L\)-operator on a stable minimal cone. This is not used elsewhere in the paper. This should be related to the question of whether or not the Simons cone can arise generically as a (static) tangent flow (cf. [21, 17, 16]).
Lemma 3.1 has the following consequence.
**Corollary B.1**.: _Let \(V\) be a conical stationary cyclic integral \(n\)-varifold in \(\mathbb{R}^{n+1}\) with \(\lambda(V)<2\). If \(\operatorname{reg}V\) is unstable with respect to the Euclidean area functional, then \(\mu(V)=-\infty\)._
If \(\operatorname{reg}V\) is stable, the conclusion of Corollary B.1 need not hold. For example it is easy to check that \(\mu(\mathbb{R}^{n})=-\frac{1}{2}\). We now show that non-flat stable cones satisfy an improved inequality.
**Proposition B.2**.: _Take \(n\geq 2\) and let \(V\) be a conical stationary cyclic integral \(n\)-varifold in \(\mathbb{R}^{n+1}\) with\({}^{4}\)\(\lambda(V)<2\). If \(V\) is not a flat hyperplane, then_
Footnote 4: Actually, the result here would hold for a stationary minimal cone satisfying Wickramasekera’s \(\alpha\)-structural hypothesis [21] (it is easy to check that a stationary integral cyclic varifold \(V\) with \(\lambda(V)<2\) satisfies the \(\alpha\)-structural hypothesis).
\[\mu(V)\leq-1-\delta_{n},\]
_for some \(\delta_{n}>0\) depending only on \(n\)._
Proof.: By [14, Lemma 6.1.1] and dimension reduction we can assume that \(n\geq 7\) (otherwise Corollary B.1 gives \(\mu(V)=-\infty\)).
Let \(\Sigma\subset\mathbb{S}^{n}\) denote the regular part of the link of \(V\). By [15] (cf. [14]) we can find \(\Sigma^{\prime}\Subset\Sigma\) so that \(\Sigma^{\prime}\) has smooth boundary and so that there is \(u\in C^{\infty}(\Sigma^{\prime})\) with \(u=0\) on \(\partial\Sigma^{\prime}\) (if nonempty) and
(B.1) \[\Delta_{\Sigma}u+(|A_{\Sigma}|^{2}-(n-1))u\geq 0.\]
Indeed, by [22, Theorem 0.1] (since \(\Sigma\) is not a totally geodesic \(\mathbb{S}^{n-1}\) by assumption) it holds that
\[\inf\left\{\int_{\Sigma}|\nabla u|^{2}-(|A_{\Sigma}|^{2}-(n-1))u^{2}:u\in C_{c}^{\infty}(\Sigma),\int_{\Sigma}u^{2}=1\right\}\leq 0\]
with equality if and only if \(V\) is the cone over a Clifford hypersurface (and in particular \(\operatorname{sing}V=\{\mathbf{0}\}\)). As such, if \(\operatorname{sing}V\neq\{\mathbf{0}\}\) this equality is strict, so we can exhaust \(\Sigma\) by smooth regions; the first Dirichlet eigenfunction on these regions will eventually satisfy (B.1). Conversely, if \(\operatorname{sing}V=\{\mathbf{0}\}\) we can find \(u\) satisfying (B.1) with \(\Sigma^{\prime}=\Sigma\) (in this case, this is a consequence of [20, Lemma 6.1.7]).
We now consider \(f(r,\omega)=w(r)u(\omega)\) for \(u\) as in (B.1) and \(w\in C_{c}^{\infty}((0,\infty))\) in (1.2). We find
\[\mu(V)\int_{0}^{\infty}w(r)^{2}e^{-\tfrac{1}{4}r^{2}}r^{n-1}dr\leq\int_{0}^{ \infty}\left(w^{\prime}(r)^{2}-((n-1)r^{-2}+\tfrac{1}{2})w(r)^{2}\right)e^{- \tfrac{1}{4}r^{2}}r^{n-1}dr.\]
A standard argument shows that for any \(\alpha>\frac{2-n}{2}\) we can take
\[w(r)=\begin{cases}0&r\in(0,\varepsilon^{2}]\\ \varepsilon^{\alpha}(2-\frac{\log r}{\log\varepsilon})&r\in(\varepsilon^{2},\varepsilon]\\ r^{\alpha}&r\in(\varepsilon,\varepsilon^{-1}]\\ \varepsilon^{-\alpha}(2-\varepsilon r)&r\in(\varepsilon^{-1},2\varepsilon^{-1}]\\ 0&r\in(2\varepsilon^{-1},\infty).\end{cases}\]
This yields
\[\mu(V)\int_{0}^{\infty}e^{-\tfrac{1}{4}r^{2}}r^{n+2\alpha-1}dr\] \[\leq\int_{0}^{\infty}\left((\alpha^{2}-(n-1))r^{-2}-\tfrac{1}{2} \right)e^{-\tfrac{1}{4}r^{2}}r^{n+2\alpha-1}dr+o(\varepsilon^{n+2\alpha-2}).\]
We perform the change of variables \(r=2\sqrt{s}\) in both integrals:
\[\mu(V)\int_{0}^{\infty}e^{-s}s^{\frac{n+2\alpha}{2}-1}ds\] \[\leq\frac{1}{2}\int_{0}^{\infty}\left(\tfrac{\alpha^{2}-(n-1)}{2}s^{-1}-1\right)e^{-s}s^{\frac{n+2\alpha}{2}-1}ds+o(\varepsilon^{n+2\alpha-2}).\]
Recalling that for \(\operatorname{Re}(z)>0\) the Gamma function \(\Gamma(z):=\int_{0}^{\infty}s^{z-1}e^{-s}ds\) satisfies the recurrence relation \(\Gamma(z+1)=z\Gamma(z)\), and hence \(\Gamma(\frac{n+2\alpha}{2}-1)/\Gamma(\frac{n+2\alpha}{2})=\frac{2}{n+2\alpha-2}\), we find
\[\mu(V) \leq\frac{1}{2}\left(\frac{\alpha^{2}-(n-1)}{2}\frac{\Gamma( \frac{n+2\alpha}{2}-1)}{\Gamma(\frac{n+2\alpha}{2})}-1\right)+o(\varepsilon^ {n+2\alpha-2})\] \[=\frac{1}{2}\left(\frac{\alpha^{2}-(n-1)}{n+2\alpha-2}-1\right)+ o(\varepsilon^{n+2\alpha-2})\] \[=\frac{1}{2}\frac{\alpha^{2}-2\alpha-2n+3}{n+2\alpha-2}+o( \varepsilon^{n+2\alpha-2})\]
Thus, we find that
\[\mu(V)\leq\inf_{\alpha>\frac{2-n}{2}}\frac{1}{2}\frac{\alpha^{2}-2\alpha-2n+3}{n+2 \alpha-2}.\]
(Note that for \(n\leq 6\) this infimum is \(-\infty\); compare this with Corollary B.1.) Furthermore,
\[\inf_{\alpha>\frac{2-n}{2}}\frac{1}{2}\frac{\alpha^{2}-2\alpha-2n +3}{n+2\alpha-2} =\frac{1}{4}\left(-n+\sqrt{8-8n+n^{2}}\right)\] \[=-1-\underbrace{\frac{1}{4}\left(\sqrt{16-8n+n^{2}}-\sqrt{8-8n+n ^{2}}\right)}_{:=\delta_{n}}\]
(since \(n\geq 7\)). This completes the proof.
Note that \(\delta_{7}=\frac{1}{2}\), \(\delta_{8}\approx 0.29\), and \(\delta_{n}\to 0\) as \(n\to\infty\).
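For the reader's convenience, here is the elementary minimization behind the last two displays (a routine calculus check; we write \(D:=n^{2}-8n+8\), which is positive for \(n\geq 7\)). Setting the derivative of \(g(\alpha):=\frac{\alpha^{2}-2\alpha-2n+3}{n+2\alpha-2}\) to zero reduces to
\[\alpha^{2}+(n-2)\alpha+(n-1)=0,\qquad\text{so}\qquad\alpha_{*}=\frac{2-n+\sqrt{D}}{2}\]
is the admissible critical point. Using \(\alpha_{*}^{2}=-(n-2)\alpha_{*}-(n-1)\) one computes
\[n+2\alpha_{*}-2=\sqrt{D},\qquad\alpha_{*}^{2}-2\alpha_{*}-2n+3=\frac{\sqrt{D}(\sqrt{D}-n)}{2},\]
so \(\frac{1}{2}g(\alpha_{*})=\frac{1}{4}\left(-n+\sqrt{D}\right)\), as claimed; in particular \(\delta_{7}=\frac{1}{4}(\sqrt{9}-\sqrt{1})=\frac{1}{2}\).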
|
2303.17878 | Fused Depthwise Tiling for Memory Optimization in TinyML Deep Neural
Network Inference | Memory optimization for deep neural network (DNN) inference gains high
relevance with the emergence of TinyML, which refers to the deployment of DNN
inference tasks on tiny, low-power microcontrollers. Applications such as audio
keyword detection or radar-based gesture recognition are heavily constrained by
the limited memory on such tiny devices because DNN inference requires large
intermediate run-time buffers to store activations and other intermediate data,
which leads to high memory usage. In this paper, we propose a new Fused
Depthwise Tiling (FDT) method for the memory optimization of DNNs, which,
compared to existing tiling methods, reduces memory usage without inducing any
run time overhead. FDT applies to a larger variety of network layers than
existing tiling methods that focus on convolutions. It improves TinyML memory
optimization significantly by reducing memory of models where this was not
possible before and additionally providing alternative design points for models
that show high run time overhead with existing methods. In order to identify
the best tiling configuration, an end-to-end flow with a new path discovery
method is proposed, which applies FDT and existing tiling methods in a fully
automated way, including the scheduling of the operations and planning of the
layout of buffers in memory. Out of seven evaluated models, FDT achieved
significant memory reduction for two models by 76.2% and 18.1% where existing
tiling methods could not be applied. Two other models showed a significant run
time overhead with existing methods and FDT provided alternative design points
with no overhead but reduced memory savings. | Rafael Stahl, Daniel Mueller-Gritschneder, Ulf Schlichtmann | 2023-03-31T08:26:17Z | http://arxiv.org/abs/2303.17878v1 | # Fused Depthwise Tiling for Memory Optimization in TinyML
###### Abstract.
Memory optimization for deep neural network (DNN) inference gains high relevance with the emergence of TinyML, which refers to the deployment of DNN inference tasks on tiny, low-power microcontrollers. Applications such as audio keyword detection or radar-based gesture recognition are heavily constrained by the limited memory on such tiny devices because DNN inference requires large intermediate run-time buffers to store activations and other intermediate data, which leads to high memory usage. In this paper, we propose a new Fused Depthwise Tiling (FDT) method for the memory optimization of DNNs, which, compared to existing tiling methods, reduces memory usage without inducing any run time overhead. FDT applies to a larger variety of network layers than existing tiling methods that focus on convolutions. It improves TinyML memory optimization significantly by reducing memory of models where this was not possible before and additionally providing alternative design points for models that show high run time overhead with existing methods. In order to identify the best tiling configuration, an end-to-end flow with a new path discovery method is proposed, which applies FDT and existing tiling methods in a fully automated way, including the scheduling of the operations and planning of the layout of buffers in memory. Out of seven evaluated models, FDT achieved significant memory reduction for two models by 76.2% and 18.1% where existing tiling methods could not be applied. Two other models showed a significant run time overhead with existing methods and FDT provided alternative design points with no overhead but reduced memory savings.
neural networks, embedded software
The main contributions of this paper are:
1. The new Fused Depthwise Tiling (FDT) method for the memory optimization of DNNs, which, compared to existing tiling methods, reduces memory usage without inducing any run time overhead.
2. An automated exploration with a new block-based path discovery to find suitable tiling configurations, a memory-aware scheduling and optimal memory layout planning.
In a sample of seven models that benefit from fused tiling, FDT achieved significant memory reduction for two models by 76.2% and 18.1% where existing tiling methods could not be applied. Two other models showed a significant run time overhead with existing methods and FDT provided an alternative design point with no overhead but reduced memory savings.
## 2. Related Work
Inference on resource-constrained devices can be tackled in a number of ways. Offloading computation to other infrastructure such as the cloud is widely used, but introduces challenges of high bandwidth and energy requirements for data transfer, network latency and privacy concerns (Krishnan et al., 2017). Orthogonal methods to fused tiling are quantization (Krishnan et al., 2017), pruning (Krishnan et al., 2017) and NAS (Beng et al., 2017; Chen et al., 2017).
Tiling is the splitting of DNN graph operations such that individual partitions can be computed independently of each other. It is used primarily within a single DNN operation to accelerate execution (Chen et al., 2017). Another application of tiling is the partitioning of DNNs so that they can be run in a distributed manner across several devices (Krishnan et al., 2017) or can be offloaded to one or more accelerators (Chen et al., 2017). A novel aspect of these works is their use of fusing to keep consecutive tiled partitions independent of each other. The basic principle is summarized in Fig. 1 on the left and will be referred to as Fused Feature Map Tiling (FFMT). The figure shows two consecutive convolutional operations as part of a DNN. The three sets of feature maps are the input of the first operation, an intermediate buffer and the output of the second operation. Since the intermediate buffer is larger than the input and output, tiling it could reduce memory requirements. FFMT does this by splitting all feature maps of the intermediate tensor buffer into partitions. Convolution operations have spatial locality, which makes it possible to produce output feature maps from split inputs mostly independently. However, convolution kernels larger than 1x1 cause an overlap in the input partitions that accumulates additively over all tiled operations. Two partitions are shown in Fig. 1 in different colors/patterns, and their overlap caused by 3x3 convolutions is marked in purple/crossed. FFMT was first employed for reducing peak memory usage in (Chen et al., 2017), but their path discovery requires partially manual user effort. Other works that use FFMT with automated path discovery are (Chen et al., 2017; Chen et al., 2017; Chen et al., 2017; Li et al., 2017; Li et al., 2017). FFMT along with tiling in the depthwise dimension for single layers without operator fusion was explored in (Chen et al., 2017; Li et al., 2017). Based on these, the work in (Li et al., 2017) identifies the full FFMT search space of loop scheduling in a memory hierarchy and adds a new cost model. (Li et al., 2017) further states: "in most convolution layers all input channels are required to calculate any output feature, which makes cross-layer tiling across the channel dimensions impossible", which we challenge with this work. Our distinction from existing work is summarized in Table 1. Our work is the first to exploit FDT for optimization of working memory (RAM) in DNN inference while also keeping the path discovery fully automated. FDT will be explained in full detail in the following section.
## 3. Fused Depthwise Tiling (FDT)
FDT is first proposed in (Chen et al., 2017) (named _Fused Layer Partitioning_) for partitioning DNN weights of fully connected and convolutional layers that have a large number of weights. Our work is the first that applies it for the optimization of working memory, i.e. RAM, where the original work targeted the static parameters, i.e. ROM.
The primary goal of fused tiling for memory optimization is the splitting of large intermediate tensor buffers so that their partitions can be computed independently with reduced memory demand. As shown in Fig. 1 on the right, FDT does this in the depthwise dimension instead of along the feature maps as with FFMT. Switching to the depthwise dimension avoids any overlap in the intermediate buffer and makes the method independent of the kernel size because the two dimensional convolutions are not split. However, it requires that the input and output buffers are fully available to every partition, because every single output feature map is the result of summing all input feature maps after applying a convolutional filter. Fig. 2 helps explain this concept with two consecutive dense (also called fully connected) layers tiled into two partitions.
\begin{table}
\begin{tabular}{|l|c c|} \hline
**Work** & **FFMT** & **FDT** \\ \hline Distributed Inference (Krishnan et al., 2017) & RAM reduction & - \\ Full Distributed Inference (Chen et al., 2017) & RAM reduction & ROM reduction \\ Partly Manual Tiling (Chen et al., 2017; Chen et al., 2017) & RAM reduction & - \\ Automated Tiling (Chen et al., 2017; Chen et al., 2017; Li et al., 2017; Li et al., 2017) & RAM reduction & - \\ **This work** & RAM reduction & RAM reduction \\ \hline \end{tabular}
\end{table}
Table 1. Fused tiling methods and their use in related work compared to this work.
Part of the output neurons of the first layer (_FDT Fan-Out_) are computed in each partition using all input neurons. For the second layer (_FDT Fan-In_), the output neurons can only be computed partially, because not all input neurons are available to every partition. However, since a dense operation is a sum of products, all partial values of all partitions can be recombined by summing them element-wise and applying the activation function in a new appended _Merge_ operation. Since activation functions are nonlinear, this imposes a limit of two FDT-partitioned operations for each tiled sequence. In FFMT there is no inherent limit to the number of consecutive convolutions until the overlap becomes too large to achieve memory savings or the run time overhead becomes impractical. We call such a tiled sequence a _path_; paths may contain other operations interleaved with the FFMT/FDT ones. For example, element-wise or pooling operations can be inserted, because they do not introduce cross-dependencies between partitions. FFMT requires spatial locality, while FDT can be applied to a wider range of operations where all output elements depend on all input elements as long as there is no interdependence among the output elements. Examples of operations that can only be tiled by FDT are dense operations and pairs of embedding lookup (e.g. TensorFlow gather function) and axis reduction (e.g. by taking the mean).
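To make the Fan-Out/Fan-In mechanics concrete, the following minimal NumPy sketch tiles two dense layers with FDT; the shapes, names and the ReLU activation are illustrative assumptions, since the actual implementation transforms TVM's intermediate representation (see Section 4).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)              # input buffer
W1 = rng.standard_normal((128, 64))      # first dense layer (FDT Fan-Out)
W2 = rng.standard_normal((10, 128))      # second dense layer (FDT Fan-In)
relu = lambda v: np.maximum(v, 0.0)

# Reference: untiled execution needs the full 128-element intermediate buffer.
ref = relu(W2 @ relu(W1 @ x))

# FDT with N partitions: each partition produces a depthwise slice of the
# intermediate buffer and a *partial* pre-activation output; the partials
# are summed element-wise in the appended Merge operation, where the
# nonlinear activation is finally applied.
N = 2
partials = []
for idx in np.split(np.arange(128), N):
    h = relu(W1[idx, :] @ x)             # Fan-Out: rows of W1 for this partition
    partials.append(W2[:, idx] @ h)      # Fan-In: matching columns of W2
out = relu(np.sum(partials, axis=0))     # Merge: sum partials, then activate

assert np.allclose(ref, out)             # exact: no overlap, no recomputation
```

Each partition only ever holds a 64-element slice of the intermediate buffer; the price is that the input and output buffers must remain fully available to every partition, as discussed above.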
## 4. Automated Tiling Exploration
It is not meaningful to demonstrate the theoretical memory usage of fused tiling methods in isolation, because the practical memory usage is heavily affected by the entire end-to-end deployment flow with the interdependent problems of tiling configuration, operation scheduling and layout planning. Each of these problems will be addressed in this section. The entire automated tiling exploration flow is shown in Fig. 3. Firstly, the operations of DNN graph \(G_{in}\) are scheduled in a memory-optimized order \(S\). After the schedule has been fixed, all required intermediate buffers are placed into a linear memory space such that the total required peak memory is minimal. From the resulting memory layout \(L\) a list is extracted that consists of intermediate buffer candidates \(B_{i}\) which may reduce the total memory usage if they were to be tiled. These \(B_{i}\) are passed to the path discovery in descending order by their size. The path discovery step identifies tiling configuration candidates \(C_{j}\) for the first buffer candidate. All configuration candidates are passed to the actual graph transformation pass that applies tiling on the DNN graph to produce graph candidates \(G_{j}\). These are again evaluated by performing scheduling and memory layout planning. If the memory size of the smallest found layout \(L_{min}\) is smaller than that of the current layout \(L\), the corresponding tiling configuration did improve memory usage and the currently best graph candidate \(G_{opt}\) is updated. If no configuration could be found that reduces the memory usage, the next buffer candidate is tested. The newly generated tiled DNN graph \(G_{opt}\) is evaluated iteratively. The flow terminates when no buffer candidate \(B_{i}\) produces a tiling configuration that reduces the layout size further. In the following, we describe each step in detail, starting with the scheduling and layout planning, as these are the prerequisites for path discovery.
### Memory-aware Scheduling
For many DNNs, scheduling is trivial because their graphs do not contain any branches. The operation nodes are scheduled in the order in which they are located on the single path of the graph. However, with tiling, parallel paths are introduced in the DNN graph and different schedules become possible that determine the lifetime of the intermediate buffers and hence, peak memory. While optimal memory-aware scheduling has been achieved before in (Bauer et al., 2017), tiled graphs with a large number of partitions and many split operations can quickly cause unmanageable run times. Tiled DNNs resemble _series-parallel graphs (SP-graphs)_, i.e. graphs that consist only of series and parallel compositions of other SP-graphs and the base case of a single node. Optimal memory-aware scheduling of SP-graphs has been solved with a polynomial-time algorithm by (Kraus et al., 2018) based on (Kraus et al., 2018). We implemented this algorithm and adjusted the task model to match that of DNN inference. In contrast to typical task models in distributed computing, the output of an operation can be used by all subsequent operations without distinct buffers for each edge. For non-SP-graphs, we formulated a Mixed Integer Linear Program (MILP), because we deemed it easier than the method by (Bauer et al., 2017). If the SP-graph algorithm (it is still \(\mathcal{O}(n^{3})\)) times out, we use a simple heuristic based on _hill-valley segments_ introduced in (Kraus et al., 2018), but compromising optimality for trivial run time complexity. For each parallel path, the heuristic determines the node \(N_{i,max}\) with the maximum memory usage and the node \(N_{i,min}\) with the minimum memory usage which is also a descendant of \(N_{i,max}\). The paths are now scheduled in their descending order of \(N_{i,diff}=N_{i,max}-N_{i,min}\) and used as-is, instead of merging them as in the optimal algorithm.
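A minimal sketch of this hill-valley heuristic follows; representing each parallel path as a plain list of per-node memory usages is a simplification we make for illustration, as the real flow operates on the scheduled graph.

```python
def order_parallel_paths(paths):
    """Order parallel paths by descending N_diff = N_max - N_min, where N_max
    is the peak memory usage on the path and N_min is the minimum usage among
    the descendants of that peak node."""
    def n_diff(path):
        i_max = max(range(len(path)), key=lambda i: path[i])  # node N_max
        return path[i_max] - min(path[i_max:])                # minus descendant N_min
    return sorted(paths, key=n_diff, reverse=True)

# The second path drops from 10 to 2 after its peak, so it is scheduled first.
print(order_parallel_paths([[4, 6, 5], [3, 10, 2]]))  # [[3, 10, 2], [4, 6, 5]]
```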
### Memory Layout Planning
After the optimal schedule has been determined, all intermediate buffers of the DNN graph have to be mapped to concrete memory locations. This is a nontrivial task because buffers can overlap in memory, as long as they are not live at the same time. The DNN graph describes the dependencies between buffers and operations, and the schedule indicates in what order these operations are executed. Together, these two determine the exact lifetime and, therefore, conflicts that exist between any buffers. The following MILP is formulated to perform optimal memory layout planning.
\[\min_{\mathbf{e}}\quad\max_{i}(e_{i}) \tag{1}\]
\[\text{s.t.}\quad e_{i}\geq s_{i},\quad e_{i}\in\mathbb{N}\quad\forall i=1\ldots N \tag{2}\]
\[e_{u}-s_{u}\geq e_{v}\;\lor\;e_{v}-s_{v}\geq e_{u}\quad(u,v)\in c_{j}\quad\forall j=1\ldots C \tag{3}\]
The \(i\)-th of a total of \(N\) buffers has the ending offset \(e_{i}\) and the size \(s_{i}\). The \(j\)-th of a total of \(C\) conflicts is described by \(c_{j}\) and contains the indices \(u\) and \(v\) that refer to the buffer list. The objective function (1) minimizes the largest ending offset of all buffers, which is equal to the peak memory usage of all mapped buffers. The constraint (2) ensures that all buffers start at or after address zero. Finally, the constraint (3) ensures that there are no address overlaps in the list of conflicting buffers. The nonlinear disjunctions are modeled with the _Big M Method_. The final offsets of each buffer are obtained trivially as \(e_{i}-s_{i}\).
Figure 3. Automated Tiling Exploration Flow.
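For illustration, the following is a small runnable sketch of (1)-(3) using OR-Tools CP-SAT; the buffer sizes and conflict list are made-up example data, and we use CP-SAT's reified constraints in place of the Big M disjunctions (the actual implementation, described in Section 5, uses OR-Tools with the Gurobi MILP solver).

```python
from ortools.sat.python import cp_model

sizes = [96, 64, 64, 32]                 # s_i, hypothetical buffer sizes
conflicts = [(0, 1), (1, 2), (0, 3)]     # pairs of buffers live at the same time

model = cp_model.CpModel()
horizon = sum(sizes)
end = [model.NewIntVar(s, horizon, f"e{i}") for i, s in enumerate(sizes)]  # (2)

for u, v in conflicts:                   # (3): no overlap for conflicting pairs
    b = model.NewBoolVar(f"b_{u}_{v}")
    model.Add(end[u] - sizes[u] >= end[v]).OnlyEnforceIf(b)        # u above v
    model.Add(end[v] - sizes[v] >= end[u]).OnlyEnforceIf(b.Not())  # v above u

peak = model.NewIntVar(0, horizon, "peak")
model.AddMaxEquality(peak, end)          # (1): minimize the largest end offset
model.Minimize(peak)

solver = cp_model.CpSolver()
assert solver.Solve(model) == cp_model.OPTIMAL
offsets = [solver.Value(e) - s for e, s in zip(end, sizes)]
print("peak memory:", solver.Value(peak), "start offsets:", offsets)
```

Non-conflicting buffers may freely overlap in the resulting layout, which is exactly what makes this placement problem nontrivial.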
### Block-based Path Discovery
Path discovery has the goal of proposing optimized fused tiling configurations that dictate where and how DNN operations are tiled. The process starts at a buffer that should be split into multiple partitions, called the _critical_ buffer. It then walks the DNN graph up and down to find suitable split and merge points. After memory-aware scheduling and memory layout planning, the critical buffers are identified by selecting buffers from the memory layout that would reduce the total layout size if their size were to be reduced. This is achieved by checking whether a buffer is the sole one responsible for the final layout size. In our work, the input or output buffers of the model cannot be tiled because they are assumed to be written and read as a whole by the application. The method can be adapted easily if this were not the case. All critical buffers are considered for path discovery, but the largest ones are checked first.
The blocks of our block-based path discovery along with their supported operations are shown in Fig. 4. The input I is the start of any path and can either be split explicitly by a _SPLIT_ operation or implicitly through an _FDT Fan-Out_ operation. _SPLIT_ may produce depthwise partitioned (PD\({}_{\text{D}}\)) or feature map partitioned (PD\({}_{\text{FM}}\)) values. Partitioned operations (_PART_) compute part of the output values by using their respective input values and are compatible with any partitioned values. The concatenation operation (_CONCAT_) concatenates multiple partitioned values back into the original non-partitioned buffer O. The same can be realized implicitly with the _FDT Fan-In_ which includes the second partitioned operation of FDT along with the final merge operation as discussed in Section 3. _FFMT_ represents an operation that is split with FFMT and is only applicable to convolutional operations. _FDT Fan-Out_, _PART_, _FDT Fan-In_ and _FFMT_ replace their original operation with the tiled variant, while _SPLIT_ and _CONCAT_ are additionally inserted operations to build a valid path.
Fig. 5 shows an example DNN at the top with a highlighted critical buffer. At the critical buffer, multiple candidate paths are proposed for type PD\({}_{\text{D}}\) or PD\({}_{\text{FM}}\) if possible. One proposal is created for each number of partitions \(N\in\{2,...,25\}\) with the upper limit chosen to reduce overheads while observing that higher limits rarely provide additional memory savings. For _FFMT_, quadratic two-dimensional tiling configurations are added as \(N\in\{2x2,3x3,4x4,5x5\}\). Next, the path is discovered starting from the critical buffer in both directions where any compatible block can be chosen. Whenever the _FDT Fan-In_ method is used, one version of the path without _FDT Fan-In_ is kept, because a _CONCAT_ could require less memory than continuing with partial values. Whenever an _FFMT_-partitioned operation that has overlap is encountered, one version that stops before that operation is kept and finalized with _SPLIT_ or _CONCAT_. This is done because overlaps that become too large may cause inferior paths compared to shorter ones. The discovery has to stop at any operation that is incompatible with fused tiling (e.g. softmax, slice, concat). For each of the proposed path candidates, the operation before the critical buffer with the lowest input buffer size is selected as start of the path and the operation after the critical buffer with the lowest output buffer size is selected as end of the path. If no such operation could be determined before and after the critical buffer, the path is discarded and if no valid paths are left, the discovery fails. The second and third graphs in Fig. 5 show the longest paths of the example for FDT and FFMT respectively. Note that initially the FFMT path included the outermost convolutions, but since their input/output buffer is larger than the one before, the path terminals are selected as shown. In the final step, path discovery determines the path that is expected to cause the lowest memory usage. As mentioned in the overview, this is done by evaluating the memory size with memory-aware scheduling and memory layout planning. The best configuration is the one with the lowest memory size.
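The proposal enumeration described above amounts to a simple loop; a sketch follows, where the tuple encoding and flag names are our own and which proposal types apply depends on the operations surrounding the critical buffer.

```python
def tiling_proposals(supports_pd_d, supports_pd_fm, supports_ffmt_2d):
    """Enumerate candidate tiling configurations for one critical buffer."""
    cands = []
    if supports_pd_d:
        cands += [("PD_D", n) for n in range(2, 26)]       # N in {2, ..., 25}
    if supports_pd_fm:
        cands += [("PD_FM", n) for n in range(2, 26)]
    if supports_ffmt_2d:
        cands += [("FFMT", (k, k)) for k in range(2, 6)]   # 2x2 ... 5x5
    return cands

print(len(tiling_proposals(True, True, True)))  # 24 + 24 + 4 = 52 proposals
```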
### Automated Graph Transformation
Once the best path configuration has been determined, it is applied by transforming the DNN graph with the given parameters. At the start of the split path, either an explicit or implicit split has to be realized. For an explicit split, a new operation has to be inserted that slices the input into partitions according to the tiling configuration. An implicit split is implemented by replicating the convolutional or dense layer by the number of partitions and splitting their weight dimension that is responsible for producing outputs. Any following operations are also replicated on each partition and need their parameters changed to match their new input dimensions. For example, a bias addition no longer adds its original constants, but only the ones corresponding to the respective partition. Another example is padding operations, whose padding needs to be eliminated at split boundaries to preserve the original DNN behavior. Depthwise convolutions can be split trivially along the channel dimension as tiling method _PART_, since every output channel only depends on its respective input channel. The associated filter weights must still be split accordingly. The exact splitting logic for every operator has to be determined on a case-by-case basis. However, it is possible to define categories with similar splitting logic. _FDT Fan-In_ operations are split equivalently to _FDT Fan-Out_ ones, just that the input channel dimension of the weight tensor is split. Care has to be taken to prohibit automatic fusing of the last operations on the split paths with the _CONCAT_ or _FDT Fan-In_ operation, because that would lead to keeping their inputs alive on multiple split paths. After all transformations have been applied to the graph, the flow goes back to scheduling it as shown in Fig. 3.
Figure 4. Path discovery blocks with supported operations.
Figure 5. Path discovery applying FDT and FFMT.
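As a concrete picture of the dense-layer case of this splitting logic, consider the following NumPy sketch; the helper name and array-based representation are our own, as the real pass rewrites TVM IR nodes and their constant weight tensors.

```python
import numpy as np

def split_dense(W, b, n_parts, fan="out"):
    """Split a dense layer's weights W (out_features x in_features) for FDT.

    fan="out": split the output dimension (FDT Fan-Out); the bias is split
    along with it. fan="in": split the input dimension (FDT Fan-In); the
    bias must instead be added once, in the final Merge operation."""
    axis = 0 if fan == "out" else 1
    Ws = np.array_split(W, n_parts, axis=axis)
    bs = np.array_split(b, n_parts) if fan == "out" else [None] * n_parts
    return list(zip(Ws, bs))

parts = split_dense(np.ones((128, 64)), np.zeros(128), 2, fan="out")
print([p[0].shape for p in parts])  # [(64, 64), (64, 64)]
```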
### Implementation
The complete end-to-end flow for comparing FDT to FMT has been implemented in _Apache TVM_(Das et al., 2017). TVM is a state-of-the-art machine learning compiler that takes DNNs and converts them into its own Intermediate Representation (IR). This IR is used to optimize the DNN through compiler transformations that are aware of the machine learning domain. Finally, various backends are able to produce output for different deployment targets like GPUs or microcontrollers. We chose TVM because its IR is very suitable for implementing complex transformation passes. To achieve competitive results compared to widely-used frameworks like _TensorFlow Lite for Microcontrollers_, we chose the _Ahead-of-Time (AoT) TVM backend_ that generates static code that is able to run the DNN inference without the full TVM run-time libraries. In TVM, many DNN operations are fused to eliminate intermediate buffers entirely. For example, a convolution with bias addition and activation function is carried out by adding the bias and applying the activation function while calculating each individual convolution output value. All intermediate buffers between such fused operations do not contribute to the peak memory usage. Therefore, when analyzing a DNN for critical buffers, only the buffers of non-fused operations are taken into consideration. However, during path discovery, all fused operations are transformed into their fine-grained operations because they may contain operations that are suitable as terminals of the split path. While this may have an effect on inference latency through increased number of memory accesses, the goal of the optimization is having small buffers at the path terminals, so that those possible extra accesses will never dominate other accesses. After the graph transformation step, operations are fused again at every possible opportunity.
## 5. Results
From a wide range of models, the following subset was identified that benefits from fused tiling. All models are quantized to 8 bits.
1. _Keyword Spotting (KWS):_ Detection of keywords from audio. Part of the MLPerf Tiny benchmark (Das et al., 2017).
2. _Text Sentiment Analysis (TXT):_ (Kumar et al., 2018; Wang et al., 2019).
3. _Magic Wand (MW):_ TinyML gesture recognition with an accelerometer (Kumar et al., 2018).
4. _PoseNet (POS):_ Pose estimation (Wang et al., 2019).
5. _MobileNet V2 SSDLite (SSD):_ COCO classifier (Wang et al., 2019).
6. _Cifar10 classifier (CIF):_ Own CNN (Kumar et al., 2018).
7. _Radar Gesture Recognition (RAD):_ Own TinyML CNN for gesture recognition with a radar sensor.
The target architecture for all experiments was RISC-V in the _RV32GC_ configuration. The GNU toolchain at version 11.1.0 was used with the optimization flag set to -Os and options to prune all unused code and data. The RAM and ROM usage is determined from the section sizes in the compiled binary. The run time is estimated by statically determining the number of multiply-accumulate (MAC) operations required in the final optimized DNN graph. This gives a good estimate because the computational cost of DNNs is dominated by matrix multiplications and therefore MACs (Kumar et al., 2018). This is not equivalent to the run time after deployment, but is sufficient for a relative comparison. Dynamic instruction counts were also gathered on an instruction set simulator, but they showed high sensitivity to TVM's automatic kernel generation, rather than the chosen tiling configurations. This could be explained by TVM's lack of operator schedules that are optimized for RISC-V. The MILPs were implemented in ORTools 9.3 (Wang et al., 2019) using the Gurobi 9.1.2 solver (Gurobi, 2018).
### Automated Tiling Exploration
Our optimal memory layout planning algorithm was compared to the best-performing heuristic approach in TVM that uses hill-climbing and simulated annealing. The heuristic finds the optimum for most models, but in one case (the TXT model) we achieved a memory reduction of 16.8% compared to the heuristic.
Our MILP memory-aware scheduling solution is optimal, as defined by its cost function. The work in (Das et al., 2017) reports a run time of 37.9 seconds for the SwiftNet model (Das et al., 2017). When running our MILP scheduling with the same SwiftNet model, we measured a run time of 37 seconds. While these numbers cannot be compared directly due to the different machines used, our result on an _AMD Ryzen 9 3900X_ processor shows comparable performance.
Our path discovery is able to traverse a large variety of models and selects the optimal solution within its search space. This search space ranges from zero to hundreds of configurations, depending on the critical buffer dimensions and the operations used to create a path. Further factors are the variants with early path stops and the iterative application of tiling. The innermost operations of graph transformation, scheduling and layout planning have to be executed that number of times. For the evaluated models, the entire flow has a run time of 3 minutes for the RAD model (38 tiling configurations) up to an hour for the POS model (172 tiling configurations). (Das et al., 2017; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) do not provide flow run times. The work of (Das et al., 2017) reports 82 to 375 seconds for searching nine configurations, while still having to manually select the number of partitions and their axes. This shows that the implemented flow runs efficiently and, in contrast to existing work, requires no manual choice for the tiling configuration.
### Fused Depthwise Tiling
The results in Table 2 show the working memory (RAM) usage and estimated MAC operations for each untiled network and the improvements by applying FFMT or FDT individually. The first two models are only able to be tiled by FDT. In the case of KWS, the critical buffer is involved in a sequence of convolutions that reduce the feature map size down to 1x1, which cannot be split by FFMT. The TXT model's critical buffer exists within an embedding lookup followed by a mean axis reduction that can only be tiled by FDT. The remaining models are all CNNs with sufficient feature map sizes such that either method is applicable. FDT never incurs any run time overhead at the cost of lower memory reduction compared to FFMT. The average memory savings are 32.7% for FFMT and 24.7% for FDT with the highest savings achieved for the TXT model with FDT at 76.2%. Mostly, the run time overheads of both methods are negligible, but the POS and CIF models with FFMT showed a significant overhead of 45.1% and 9.0% because they contain larger chains of fused operations that cause more redundant calculations from overlapping partitions. In these cases, FDT offers an alternative design point without any overhead, but often reduced memory savings. For the remaining three models, FDT did not achieve higher memory savings than FFMT and FFMT did not cause a significant run time overhead. The limitation of FDT is therefore its limited applicability to models in general. The ROM overheads are not shown because they are negligible with impacts below 1%. Although (Xu et al., 2021) used FDT and FFMT as well, that work only investigated memory usage without inference, which mainly amounts to ROM usage.
Enhancing an FFMT-only TinyML deployment flow with FDT expands the tiling design space for memory and performance goals. In the case of a memory-optimized design, the fused tiling method with the highest memory savings can be selected. In the case of a performance-optimized design, the highest memory savings should be selected with the constraint that the run time overhead may not exceed a certain threshold. The exploration also found tiling configurations in which FFMT and FDT are applied in conjunction. However, the results were at best as good as the best configuration with a single tiling method. Still, for possible new models, the combination could also yield benefits.
## 6. Conclusion
In this paper, we applied Fused Depthwise Tiling to DNN graphs for memory optimization for the first time and built a state-of-the-art, end-to-end deployment flow for its evaluation. In TinyML scenarios, integrating this new tiling method reduced the memory usage of two evaluated models significantly and offered additional design points for two other models that eliminate the run time overhead at reduced memory savings.
## Acknowledgments
This work was supported in part by the German Federal Ministry of Education and Research (BMBF) within the project Scale4Edge under contract no. 16ME0131.
|
2309.10180 | Double Deep Q-Learning-based Path Selection and Service Placement for
Latency-Sensitive Beyond 5G Applications | Nowadays, as the need for capacity continues to grow, entirely novel services
are emerging. A solid cloud-network integrated infrastructure is necessary to
supply these services in a real-time responsive, and scalable way. Due to their
diverse characteristics and limited capacity, communication and computing
resources must be collaboratively managed to unleash their full potential.
Although several innovative methods have been proposed to orchestrate the
resources, most ignored network resources or relaxed the network as a simple
graph, focusing only on cloud resources. This paper fills the gap by studying
the joint problem of communication and computing resource allocation, dubbed
CCRA, including function placement and assignment, traffic prioritization, and
path selection considering capacity constraints and quality requirements, to
minimize total cost. We formulate the problem as a non-linear programming model
and propose two approaches, dubbed B\&B-CCRA and WF-CCRA, based on the Branch
\& Bound and Water-Filling algorithms to solve it when the system is fully
known. Then, for partially known systems, a Double Deep Q-Learning (DDQL)
architecture is designed. Numerical simulations show that B\&B-CCRA optimally
solves the problem, whereas WF-CCRA delivers near-optimal solutions in a
substantially shorter time. Furthermore, it is demonstrated that DDQL-CCRA
obtains near-optimal solutions in the absence of request-specific information. | Masoud Shokrnezhad, Tarik Taleb, Patrizio Dazzi | 2023-09-18T22:17:23Z | http://arxiv.org/abs/2309.10180v1 | Double Deep Q-Learning-based Path Selection and Service Placement for Latency-Sensitive Beyond 5G Applications
###### Abstract
Nowadays, as the need for capacity continues to grow, entirely novel services are emerging. A solid cloud-network integrated infrastructure is necessary to supply these services in a real-time responsive, and scalable way. Due to their diverse characteristics and limited capacity, communication and computing resources must be collaboratively managed to unleash their full potential. Although several innovative methods have been proposed to orchestrate the resources, most ignored network resources or relaxed the network as a simple graph, focusing only on cloud resources. This paper fills the gap by studying the joint problem of communication and computing resource allocation, dubbed CCRA, including function placement and assignment, traffic prioritization, and path selection considering capacity constraints and quality requirements, to minimize total cost. We formulate the problem as a non-linear programming model and propose two approaches, dubbed B&B-CCRA and WF-CCRA, based on the Branch & Bound and Water-Filling algorithms to solve it when the system is fully known. Then, for partially known systems, a Double Deep Q-Learning (DDQL) architecture is designed. Numerical simulations show that B&B-CCRA optimally solves the problem, whereas WF-CCRA delivers near-optimal solutions in a substantially shorter time. Furthermore, it is demonstrated that DDQL-CCRA obtains near-optimal solutions in the absence of request-specific information.
Beyond 5G, 6G, Computing First Networking, Cloud-Network Integration, Cloud Network Fabric, Resource Allocation, Path Selection, Traffic Prioritization, VNF Placement, Optimization Theory, Reinforcement Learning, and Q-learning.
## I Introduction
Nowadays, an increase in data flow has resulted in a 1000-fold increase in network capacity, which is the primary driver of network evolution. While this demand for capacity will continue to grow, the Internet of Everything is forcing a paradigm shift to new-born perceptions, bringing a range of novel services with rigorous deterministic criteria, such as connected robotics, smart healthcare, autonomous transportation, and extended reality [1]. These services will be provisioned by establishing functional components, Virtual Network Functions (VNFs), which will generate and consume vast amounts of data that must be processed in real-time to ensure service responsiveness and scalability.
In these circumstances, a distributed cloud architecture is essential [2], which could be implemented via a solid cloud-network integrated infrastructure built of distinct domains in Beyond 5G (B5G) [3]. These domains can be distinguished by the technology employed, including radio access, transport, and core networks, as well as edge, access, aggregation, regional, and central clouds. Moreover, these resources can be virtualized using technologies such as Network Function Virtualization (NFV), which enables the construction of separate virtual entities on top of this physical infrastructure [4, 5]. Since distributed cloud and network domains would be diverse in terms of characteristics but limited in terms of capability, communication and computing resources should be jointly allocated, prioritized, and scheduled to ensure maximum Quality of Service (QoS) satisfaction while maximizing resource sharing and maintaining a deterministic system state, resulting in energy savings as one of the most significant examples of cost minimization objectives [6].
The joint problem of resource allocation in cloud-network integrated infrastructures has been extensively studied in the literature. Emu _et al._[7] analyzed the VNF placement problem as an Integer Linear Programming (ILP) model that guarantees low End-to-End (E2E) latency while preserving QoS requirements by not exceeding an acceptable latency violation limit. They proposed an approach based on neural networks and demonstrated that it can result in near-optimal solutions in a timely way. Vasilakos _et al._[8] examined the same problem and proposed a hierarchical Reinforcement Learning (RL) method with local prediction modules as well as a global learning component. They demonstrated that their method significantly outperforms conventional approaches. Sami _et al._[9] investigated a similar topic to minimize the cost of allocations, and a Markov decision process design was provided. They claimed that the proposed method provides efficient placements. Performing cost-effective services was also investigated by Liu _et al._[10] and He _et al._[11]. In the former, the authors considered the cost of computing and networking resources as well as the cost of using VNFs and proposed a heuristic algorithm, whereas, in the latter, they considered latency as a cost and proposed a Deep Reinforcement Learning (DRL) solution to the problem. Iwamoto _et al._[12] investigated the problem of scheduling VNF migrations in order to optimize the QoS degradation of all traffic flows and proposed a stochastic method on the basis of the load degree of VNF instances.
Although innovative techniques for addressing computing resource restrictions have been proposed by the above-mentioned authors, the network is solely considered as a pipeline in their studies, with no cognitive ability to the cloud domains. Nevertheless, there are additional studies in the literature that have been concentrating on communication and computing resources jointly. Kuo _et al._[17] studied the joint problem of VNF placement and path selection in order to better utilize the network resources, and a heuristic approach was proposed to tackle it. Mada _et al._[18] and Zhang _et al._[19] addressed the problem of VNF placement with the objective of maximizing the sum rate of accepted requests. Mada _et al._ solved the problem by using an optimization solver, and Zhang _et al._ adopted a heuristic strategy. Yuan, Tang and You [20] formulated the latency-optimal placement of functions as an ILP problem and proposed a genetic metaheuristic algorithm to solve it. Gao _et al._[21] focused on the VNF placement and scheduling to reduce the cost of computing resources by proposing a latency-aware heuristic algorithm. Minimizing the cost of allocations was also investigated by Miyamura _et al._[15] and Yang _et al._[16]. They took into account traffic routing constraints and proposed heuristic approaches to address the problem. By considering energy consumption as the most significant cost associated with networking and computing resources, Xuan _et al._[14] addressed the same problem by proposing an algorithm based on a multi-agent DRL and a self-adaptation division strategy. Nguyen _et al._[13] investigated the problem of VNF placement, where requests are weighted according to their priority and the goal is to maximize the total weight of services accepted for deployment on the infrastructure.
The methods presented in the cited studies are effective for resolving the resource allocation problem. However, such approaches cannot be utilized in B5G systems. Due to the stringent QoS requirements in the delay-reliability-rate space [22], the large number of concurrent services and requests, and the ever-changing dynamics of both infrastructure and end-user service usage behavior in terms of time and space, every detail of communication and computing resources must be determined and controlled in order to realize a deterministic B5G system [3]. In some studies, latency-related limitations and requirements were simply ignored [17, 15, 16, 13]. Although delay is addressed in the other studies mentioned, they simplified it to a link attribute, and the queuing delay in network devices is entirely neglected. Furthermore, path selection is disregarded in some studies [18, 19, 20], and cost optimization is overlooked in others [19, 20].
This paper fills the gap in the current literature by investigating the joint problem of allocating communication and computing resources, including VNF placement and assignment, traffic prioritization, and path selection. The problem is tackled while taking into account capacity constraints as well as link and queuing delays, with the objective of minimizing the overall cost. As an extension of the work presented in [23], the primary contributions of this research are the following:
* Formulating the joint resource allocation problem of the cloud-network integrated infrastructure as a Mixed Integer Non-Linear Programming (MINLP) problem.
* Proposing a method based on the Branch & Bound (B&B) algorithm to discover the optimal solution of the problem, and devising a heuristic approach based on the Water-Filling (WF) algorithm in order to identify near-optimal solutions to the problem. When the system is fully known, both techniques can be applied to solve the problem.
* Developing an architecture based on the Double Deep Q-learning (DDQL) technique comprising agent design, training procedure, and decision-making strategy for allocating resources when the system is only partially known, i.e., there is no prior knowledge about the requests' requirements.
The remainder of this paper is organized as follows. Section II introduces the system model. The resource allocation problem is formulated in Section III. Next, the B&B and heuristic approaches are presented in Section IV. Section V presents a DDQL-based resource allocation architecture. Finally, numerical results are illustrated and analyzed in Section VI, followed by concluding remarks in Section VII.
## II System Model
In the following, we describe the main components of the system envisioned in this paper. As depicted in Fig. 1, the system consists of an infrastructure (integrated networking and computing resources), services running on computing resources, and end-user requests that must be connected to the services via networking resources. The parameters defined in this section are summarized in Table II.
### _Infrastructure Model_
The considered infrastructure is composed of the edge (non-radio side) and core network domains consisting of \(\mathcal{V}\) nodes, \(\mathcal{L}\) links, and \(\mathcal{P}\) paths denoted by \(\mathcal{G}=\langle\boldsymbol{\mathcal{V}},\boldsymbol{\mathcal{L}},\boldsymbol{\mathcal{P}}\rangle\). \(\boldsymbol{\mathcal{V}}=\{v|v\in\{1,2,...,\mathcal{V}\}\}\) is the set of nodes. \(\boldsymbol{\mathcal{L}}\subset\{l:(v,v^{\prime})|v,v^{\prime}\in\boldsymbol{\mathcal{V}}\}\) indicates the set of links, where the bandwidth of link \(l\) is constrained by \(\widehat{B_{l}}\), and it costs \(\Xi_{l}\) per capacity unit. Although a variety of factors (distance, technology, redundancy, accessibility, etc.) contribute to this cost as a capital expenditure, the energy used by network devices to process the traffic carried by this link is one of the significant operating expenses affecting this cost and must be precisely addressed in order to realize future networks [24]. \(\boldsymbol{\mathcal{P}}\) is the set of candidate paths, where each path \(p\) is a loop-free sequence of links from its head node \(\vdash_{p}\) to its tail node \(\dashv_{p}\), and the binary parameter \(\delta_{p,l}\) equals \(1\) if link \(l\) belongs to path \(p\). Each node \(v\in\boldsymbol{\mathcal{V}}\) offers a computing capacity \(\widehat{c_{v}}\) at cost \(\Psi_{v}\) per capacity unit. A VNF of service \(s\in\boldsymbol{\mathcal{S}}\) requires \(\widetilde{c_{s}}\) computing capacity and can serve requests up to its service capacity \(\widehat{c_{s}}\). Each request \(r\in\boldsymbol{\mathcal{R}}\) targets service \(s_{r}\) and enters the infrastructure at node \(v_{r}\); its required service capacity \(\widetilde{\mathcal{C}_{r}}\), bandwidth \(\widetilde{\mathcal{B}_{r}}\), traffic burstiness \(\widetilde{\mathcal{T}_{r}}\), E2E delay requirement \(\widetilde{\mathcal{D}_{r}}\), and \(\widetilde{\mathcal{H}_{r}}\), the maximum
packet size for request \(r\), are also assumed to be known a priori. Utilizing historical data, along with predictive data analytics methods, is one of the viable options for obtaining such accurate and realistic statistical estimates of traffic.
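To make the notation concrete, the following is a minimal sketch of the infrastructure and request parameters as Python data classes; the class and field names are illustrative choices for this sketch, not identifiers from the paper.

```python
from dataclasses import dataclass

@dataclass
class Link:
    src: int            # v
    dst: int            # v'
    bandwidth: float    # B-hat_l, bandwidth limit
    cost: float         # Xi_l, cost per capacity unit

@dataclass
class Path:
    links: list[int]    # ordered link indices; delta_{p,l} = 1 iff l in links
    head: int           # first node of the path
    tail: int           # last node of the path

@dataclass
class Request:
    service: int        # s_r
    entry_node: int     # v_r
    capacity: float     # C-tilde_r, required service capacity
    bandwidth: float    # B-tilde_r
    burstiness: float   # T-tilde_r
    delay_req: float    # D-tilde_r, E2E delay requirement
    packet_size: float  # H-tilde_r, maximum packet size
```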
## III Problem Definition
This section describes the joint problem of VNF placement and assignment, traffic prioritization, and path selection. In what follows, the constraints and objective function are formulated as a MINLP problem and the problem is stated at the end of the section. The variables and parameters defined in this section are summarized in Table II.
### _VNF Placement and Assignment Constraints_
To arrange VNFs, each request must be first assigned a single node to serve as its service location (C1). This assignment is acceptable if the assigned node hosts a VNF for the requested service (C2). When the requests of a specific service are assigned to a particular node, they will be handled by a shared VNF. C3 ensures that the total service capacity required by these requests does not surpass the VNF's capacity. Additionally, C4 guarantees that the computing capacity of a node is not exceeded by the VNFs placed on it. Without these two constraints, both VNFs and nodes are at risk of becoming overloaded, leading to the potential termination of VNFs and congestion of requests. Such a scenario would significantly decrease the system's reliability and availability. The problem formulation becomes as follows:
\[\sum_{\boldsymbol{\mathcal{V}}}g_{r,v}=1,\ \forall r\in\boldsymbol{\mathcal{R}},\] (C1) \[g_{r,v}\leq z_{s_{r},v},\ \forall r,v\in\boldsymbol{\mathcal{R}},\boldsymbol{\mathcal{V}},\] (C2) \[\sum_{\{r|r\in\boldsymbol{\mathcal{R}}\wedge s_{r}=s\}}\widetilde{\mathcal{C}_{r}}\,g_{r,v}\leq\widehat{c_{s}},\ \forall v,s\in\boldsymbol{\mathcal{V}},\boldsymbol{\mathcal{S}},\] (C3) \[\sum_{\boldsymbol{\mathcal{S}}}\widetilde{c_{s}}\,z_{s,v}\leq\widehat{c_{v}},\ \forall v\in\boldsymbol{\mathcal{V}},\] (C4)
where \(g_{r,v}\) and \(z_{s,v}\) are binary variables. \(g_{r,v}\) is \(1\) if node \(v\) is selected as the service node of request \(r\), and \(z_{s,v}\) is \(1\) if service \(s\) is replicated on node \(v\).
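As a sanity check of C1-C4, a helper along the following lines could verify a candidate placement; the data layout (nested dicts of 0/1 decisions) is an assumption made for illustration, not the authors' implementation.

```python
def placement_feasible(g, z, req_cap, vnf_cap, vnf_need, node_cap, s_of,
                       R, V, S):
    """Check C1-C4 for a candidate placement.

    g[r][v], z[s][v]: 0/1 decisions; req_cap[r] = C-tilde_r;
    vnf_cap[s] = c-hat_s; vnf_need[s] = c-tilde_s; node_cap[v] = c-hat_v;
    s_of[r] = s_r."""
    for r in R:                                   # C1: exactly one node
        if sum(g[r][v] for v in V) != 1:
            return False
    for r in R:                                   # C2: node must host the VNF
        for v in V:
            if g[r][v] > z[s_of[r]][v]:
                return False
    for v in V:                                   # C3: shared-VNF capacity
        for s in S:
            if sum(req_cap[r] * g[r][v] for r in R if s_of[r] == s) > vnf_cap[s]:
                return False
    for v in V:                                   # C4: node computing capacity
        if sum(vnf_need[s] * z[s][v] for s in S) > node_cap[v]:
            return False
    return True
```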
### _Traffic Prioritization and Path Selection Constraints_
To direct traffic, we must first ensure that each request is assigned to exactly one priority level (C5). Then, each request's (inquiry and response) paths are determined (C6 and C7). For each request, a single inquiry path is chosen that starts at the request's entry node and ends at the request's VNF node. The response path follows the same logic but in reverse order. The following two constraints guarantee that the two paths are chosen on the priority level assigned to each request (C8 and C9). Finally, the constraints maintaining the maximum capacity of links and queues are enforced (C10 and C11). With C10, the sum of the required bandwidth for all requests whose inquiry or response path, or both, contains link \(l\) is guaranteed to be less than or equal to the link's capacity. In C11, the capacity of queues is guaranteed in the same way for each link and each priority level. The set of constraints is:
\[\sum_{\boldsymbol{\mathcal{K}}}\varrho_{r,k}=1,\ \forall r\in\boldsymbol{\mathcal{R}},\] (C5) \[\sum_{\{p|p\in\boldsymbol{\mathcal{P}}\wedge\vdash_{p}=v_{r}\wedge\dashv_{p}=v\},\boldsymbol{\mathcal{K}}}\overrightarrow{f_{r,p,k}}=g_{r,v},\ \forall r,v\in\boldsymbol{\mathcal{R}},\boldsymbol{\mathcal{V}},\] (C6) \[\sum_{\{p|p\in\boldsymbol{\mathcal{P}}\wedge\vdash_{p}=v\wedge\dashv_{p}=v_{r}\},\boldsymbol{\mathcal{K}}}\overleftarrow{f_{r,p,k}}=g_{r,v},\ \forall r,v\in\boldsymbol{\mathcal{R}},\boldsymbol{\mathcal{V}},\] (C7) \[\sum_{\boldsymbol{\mathcal{P}}}\overrightarrow{f_{r,p,k}}=\varrho_{r,k},\ \forall r,k\in\boldsymbol{\mathcal{R}},\boldsymbol{\mathcal{K}},\] (C8) \[\sum_{\boldsymbol{\mathcal{P}}}\overleftarrow{f_{r,p,k}}=\varrho_{r,k},\ \forall r,k\in\boldsymbol{\mathcal{R}},\boldsymbol{\mathcal{K}},\] (C9) \[\sum_{\boldsymbol{\mathcal{R}}}\widetilde{\mathcal{B}_{r}}\sum_{\boldsymbol{\mathcal{P}},\boldsymbol{\mathcal{K}}}\delta_{p,l}\cdot(\overrightarrow{f_{r,p,k}}+\overleftarrow{f_{r,p,k}})\leq\widehat{B_{l}},\ \forall l\in\boldsymbol{\mathcal{L}},\] (C10) \[\sum_{\boldsymbol{\mathcal{R}}}\widetilde{\mathcal{T}_{r}}\sum_{\boldsymbol{\mathcal{P}}}\delta_{p,l}\cdot(\overrightarrow{f_{r,p,k}}+\overleftarrow{f_{r,p,k}})\leq\widehat{\mathcal{T}_{k}},\ \forall k,l\in\boldsymbol{\mathcal{K}},\boldsymbol{\mathcal{L}},\] (C11)
where \(\varrho_{r,k}\) is a binary variable that equals 1 only when the priority level assigned to request \(r\) is \(k\), and \(\overrightarrow{f_{r,p,k}}\) and \(\overleftarrow{f_{r,p,k}}\) are binary variables that reflect the inquiry and response paths for request \(r\) on priority level \(k\), respectively.
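A hedged sketch of the link-level checks C10-C11 is shown below; `f_in`/`f_out` stand for the inquiry and response path indicators and, like the other argument names, are illustrative.

```python
def links_feasible(f_in, f_out, delta, bw, burst, B_hat, T_hat, R, P, K, L):
    """Check link capacity (C10) and per-priority burstiness (C11).

    f_in[r][p][k] / f_out[r][p][k]: 0/1 inquiry / response path indicators;
    delta[p][l]: path-link incidence; bw[r] = B-tilde_r; burst[r] = T-tilde_r;
    B_hat[l] and T_hat[k] are the link and queue capacities."""
    for l in L:
        load = sum(bw[r] * delta[p][l] * (f_in[r][p][k] + f_out[r][p][k])
                   for r in R for p in P for k in K)
        if load > B_hat[l]:                      # C10
            return False
        for k in K:
            q = sum(burst[r] * delta[p][l] * (f_in[r][p][k] + f_out[r][p][k])
                    for r in R for p in P)
            if q > T_hat[k]:                     # C11
                return False
    return True
```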
### _Delay Constraints_
To guarantee the delay requirements of requests, the following constraints must be satisfied:
\[D_{r,s_{r}}\,\widetilde{\mathcal{C}_{r}}=\widetilde{\mathcal{H}_{r}},\ \forall r\in\boldsymbol{\mathcal{R}},\] (C12) \[D_{r,k,l}=\frac{\sum_{\boldsymbol{\mathcal{R}_{1}}}\widetilde{\mathcal{T}_{r^{\prime}}}+\bigwedge_{\boldsymbol{\mathcal{R}_{2}}}\widetilde{\mathcal{H}_{r^{\prime}}}}{\widehat{B_{l}}-\sum_{\boldsymbol{\mathcal{R}_{3}}}\widetilde{\mathcal{B}_{r^{\prime}}}}+\frac{\widetilde{\mathcal{H}_{r}}}{\widehat{B_{l}}},\ \forall r,k,l\in\boldsymbol{\mathcal{R}},\boldsymbol{\mathcal{K}},\boldsymbol{\mathcal{L}},\] (C13) \[D_{r}=\sum_{\boldsymbol{\mathcal{P}},\boldsymbol{\mathcal{L}},\boldsymbol{\mathcal{K}}}D_{r,k,l}\,\delta_{p,l}\cdot(\overrightarrow{f_{r,p,k}}+\overleftarrow{f_{r,p,k}})+D_{r,s_{r}},\ \forall r\in\boldsymbol{\mathcal{R}},\] (C14) \[D_{r}\leq\widetilde{\mathcal{D}_{r}},\ \forall r\in\boldsymbol{\mathcal{R}},\] (C15)
where \(D_{r,k,l}\), \(D_{r,s_{r}}\) and \(D_{r}\) are continuous variables denoting the delay experienced by a given flow of request \(r\) associated
with priority level \(k\) passing through ATS-based link \(l\)[25], its computing delay, and the corresponding E2E delay calculated as the sum of the delays on the links that comprise both paths of the request plus its computing delay. Besides, \(\bigwedge\) is a function which returns the max value over the given set, \(\boldsymbol{\mathcal{R}_{1}}\) equals \(\{r^{\prime}|r^{\prime}\in\boldsymbol{\mathcal{R}}\wedge k_{r^{\prime}}\leq k\wedge\delta_{p,l}(\overrightarrow{f_{r^{\prime},p,k_{r^{\prime}}}}+\overleftarrow{f_{r^{\prime},p,k_{r^{\prime}}}})>0\}\), \(\boldsymbol{\mathcal{R}_{2}}\) represents \(\{r^{\prime}|r^{\prime}\in\boldsymbol{\mathcal{R}}\wedge k_{r^{\prime}}>k\wedge\delta_{p,l}(\overrightarrow{f_{r^{\prime},p,k_{r^{\prime}}}}+\overleftarrow{f_{r^{\prime},p,k_{r^{\prime}}}})>0\}\), and \(\boldsymbol{\mathcal{R}_{3}}\) denotes \(\{r^{\prime}|r^{\prime}\in\boldsymbol{\mathcal{R}}\wedge k_{r^{\prime}}<k\wedge\delta_{p,l}(\overrightarrow{f_{r^{\prime},p,k_{r^{\prime}}}}+\overleftarrow{f_{r^{\prime},p,k_{r^{\prime}}}})>0\}\). These sets represent requests that share the same link as request \(r\): \(\boldsymbol{\mathcal{R}_{1}}\) includes requests with a higher or equal priority, \(\boldsymbol{\mathcal{R}_{2}}\) contains requests with a lower priority, and \(\boldsymbol{\mathcal{R}_{3}}\) contains requests with a strictly higher priority.
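The per-hop bound of C13 can be written out directly; a sketch follows, reusing the `Request` fields from the earlier data-class sketch. The argument names and the `on_link` callback are assumptions for illustration; C10 keeps the denominator positive for any feasible allocation.

```python
def ats_link_delay(r, k, l, reqs, k_of, on_link, B_hat):
    """Delay bound of request r at priority k on ATS-based link l (cf. C13).

    reqs[r2] carries .burstiness, .packet_size, .bandwidth; k_of[r2] is the
    assigned priority (lower k = higher priority, as in R1-R3); on_link(r2, l)
    says whether request r2 routes any flow over l; B_hat[l] is the link rate."""
    sharing = [r2 for r2 in reqs if on_link(r2, l)]
    higher_eq = [r2 for r2 in sharing if k_of[r2] <= k]   # R1
    lower = [r2 for r2 in sharing if k_of[r2] > k]        # R2
    higher = [r2 for r2 in sharing if k_of[r2] < k]       # R3
    num = (sum(reqs[r2].burstiness for r2 in higher_eq)
           + max((reqs[r2].packet_size for r2 in lower), default=0.0))
    den = B_hat[l] - sum(reqs[r2].bandwidth for r2 in higher)
    return num / den + reqs[r].packet_size / B_hat[l]
```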
### _Objective Function_
The objective function is to minimize the total cost of allocated computing nodes and network links, that is:
\[\sum\nolimits_{\boldsymbol{\mathcal{R}},\boldsymbol{\mathcal{V}}}\Psi_{v}\,g_{r,v}+\sum\nolimits_{\boldsymbol{\mathcal{R}},\boldsymbol{\mathcal{L}}}\Xi_{l}\sum\nolimits_{\boldsymbol{\mathcal{P}},\boldsymbol{\mathcal{K}}}\delta_{p,l}(\overrightarrow{f_{r,p,k}}+\overleftarrow{f_{r,p,k}}).\] (OF)
As mentioned in Section II, this cost is directly related to the energy consumption of networking and computing elements, and its reduction is a crucial open challenge that must be carefully addressed to enable B5G systems [28, 29, 30, 31].
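For reference, a direct evaluation of OF could look like the sketch below, following the same illustrative data layout as the earlier constraint checks.

```python
def total_cost(g, f_in, f_out, delta, Psi, Xi, R, V, L, P, K):
    """Objective OF: node cost of placements plus link cost of all
    selected inquiry and response paths."""
    node_cost = sum(Psi[v] * g[r][v] for r in R for v in V)
    link_cost = sum(Xi[l] * delta[p][l] * (f_in[r][p][k] + f_out[r][p][k])
                    for r in R for l in L for p in P for k in K)
    return node_cost + link_cost
```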
### _Problem_
Considering the constraints and objective function, the problem of Communication and Computing Resource Allocation (CCRA) is:
\[\text{CCRA: }\min\text{ OF }s.t.\text{ C1 - C15.} \tag{1}\]
## IV Fully-Informed Methods
In this section, the system is assumed to be fully known, i.e., the list of services and their characteristics are available, and the current state of the network and cloud resources as well as requests and their requirements are being monitored and collected on a regular basis. This could be the case of an industrial environment whereby tasks and communications among robots and devices are pre-planned [32, 33, 34]. Under such scenarios, the following section proposes two methods, B&B-CCRA and WF-CCRA, to solve the problem specified by (1). Clearly, an efficient strategy for implementing these methods is to centralize their development as system orchestrator components. Then, when end-users request access to the services, the methods can be executed, and the resulting decisions can be applied to the network and cloud resources using Software-Defined Networking (SDN) and NFV technologies.
### _B&B-CCRA_
Suppose that C1 and C5 - C15 are eliminated from (1) and only C2 - C4 affect the problem. Given this, the problem can be reformulated as minimizing the cost of assigned nodes within the capacity constraints of nodes and VNFs, that is \(min\sum_{\boldsymbol{\mathcal{R}},\boldsymbol{\mathcal{V}}}\Psi_{v}g_{r,v}\)_s.t._ C2 - C4. If a new parameter denoted \(\Psi^{\prime}_{v}=\mathcal{M}-\Psi_{v}\), where \(\mathcal{M}\) is a big positive number, is defined and substituted for \(\Psi_{v}\), the relaxed problem can be rewritten equivalently as \(max\sum_{\boldsymbol{\mathcal{R}},\boldsymbol{\mathcal{V}}}\Psi^{\prime}_{v}g_ {r,v}\)_s.t._ C2 - C4, which is the Multi-Dimensional Knapsack (MDK) problem with at least \(\mathcal{S}\) items and \(\mathcal{V}\) knapsacks. Since the MDK problem is NP-hard [35] and a relaxed version of our problem is as hard as this problem, it is proved that our problem is also NP-hard, and finding its optimal solution in polynomial time is mathematically intractable. One potential strategy for addressing such a problem is to restrict its solution space using the B&B algorithm, which relaxes and solves the problem to obtain lower bounds, and then improves the bounds using mathematical cuts to reach acceptable solutions. The method is described in Algorithm 1. In this algorithm, the solution space is discovered by maintaining an unexplored candidate list \(\boldsymbol{\mathcal{N}}=\{N_{t}|t\geq 1\}\), where each node \(N_{t}\) contains a problem, denoted by \(\Phi_{t}\), and \(t\) is the iteration number. This list only contains \(N_{1}\), the root candidate, at the beginning with the primary problem to be solved. To reduce its enormous computational complexity, instead of directly applying the B&B algorithm to CCRA, we consider its integer linear transformation as the problem of \(N_{1}\).
CCRA comprises non-linear constraints C13 and C14. To linearize C13, the summations and max function with variable boundaries should be converted to a linear form. A simple, effective technique is to replace each term with an approximated upper bound. Since the aggregated traffic burstiness is bounded by \(\widehat{\mathcal{T}_{k}}\) for each priority level \(k\) in C11, \(\sum_{\boldsymbol{\mathcal{R}_{1}}}\widetilde{\mathcal{T}_{r^{\prime}}}\) can be replaced by the sum of this bound for all priority levels greater than or equal to \(k\), that is \(\sum_{\{k^{\prime}|k^{\prime}\leq k\}}\widehat{\mathcal{T}_{k^{\prime}}}\). In a similar way, we define a new constraint (C13\({}^{\prime}\)) for the aggregated bandwidth allowed on priority level \(k\) over link \(l\), dubbed \(\widehat{f_{l,k}}\), and replace the sum of allocated bandwidths with \(\sum_{\{k^{\prime}|k^{\prime}<k\}}\widehat{f_{l,k^{\prime}}}\). Besides, the maximum packet size for a particular subset of requests can be replaced by the maximum permitted packet size in the network, denoted by \(\widehat{\mathcal{H}}\). Therefore, the following constraints define the linear transformation of C13:
\[\sum\nolimits_{\boldsymbol{\mathcal{R}}}\widetilde{\mathcal{B}_{r}}\sum\nolimits_{\boldsymbol{\mathcal{P}}}\delta_{p,l}(\overrightarrow{f_{r,p,k}}+\overleftarrow{f_{r,p,k}})\leq\widehat{f_{l,k}},\ \forall k\in\boldsymbol{\mathcal{K}},\forall l\in\boldsymbol{\mathcal{L}},\] (C13\({}^{\prime}\)) \[\widehat{D_{k,l}}=\frac{\sum_{\boldsymbol{\mathcal{K}_{1}}}\widehat{\mathcal{T}_{k^{\prime}}}+\widehat{\mathcal{H}}}{\widehat{B_{l}}-\sum_{\boldsymbol{\mathcal{K}_{2}}}\widehat{f_{l,k^{\prime}}}}+\frac{\widehat{\mathcal{H}}}{\widehat{B_{l}}},\ \forall k\in\boldsymbol{\mathcal{K}},\forall l\in\boldsymbol{\mathcal{L}},\] (C13\({}^{\prime\prime}\))
where \(\widehat{D_{k,l}}\) is the delay upper bound on link \(l\) with priority level \(k\), \(\boldsymbol{\mathcal{K}_{1}}\) is \(\{k^{\prime}|k^{\prime}\leq k\}\), and \(\boldsymbol{\mathcal{K}_{2}}\) is \(\{k^{\prime}|k^{\prime}<k\}\). Since \(D_{r,s_{r}}\) is linear, C14 can be linearized by substituting the upper bound derived in C13\({}^{\prime\prime}\) for the actual delay, and the new constraint for the E2E delay is:
\[D_{r}=\sum\nolimits_{\boldsymbol{\mathcal{P}},\boldsymbol{\mathcal{L}},\boldsymbol{\mathcal{K}}}\widehat{D_{k,l}}\,\delta_{p,l}(\overrightarrow{f_{r,p,k}}+\overleftarrow{f_{r,p,k}})+D_{r,s_{r}},\ \forall r\in\boldsymbol{\mathcal{R}}.\] (C14\({}^{\prime}\))
Given this, the linear transformation of CCRA, dubbed LiCCRA, is as follows:
\[\text{LiCCRA: }\min\text{ OF }s.t.\text{ C1 - C12, C13${}^{\prime}$, C13${}^{\prime\prime}$, C14${}^{\prime}$, C15}. \tag{2}\]
Now, with LiCCRA as \(\Phi_{1}\), each iteration of the B&B algorithm begins with the selection and removal of a candidate from the unexplored list. Then, the problem of this candidate is naturally relaxed and solved, i.e., all the integer variables in the set \(\{0,1\}\) are replaced with their continuous equivalents restricted by the box constraint \([0,1]\), and the relaxed problem
is solved using a Linear Programming (LP) solver to obtain the solution of the relaxed problem \((\mathbf{\mu}_{t}^{\star},\mathbf{\lambda}_{t}^{\star})\) and the optimal objective value \(\phi_{t}^{\star}\), where \(\mathbf{\mu}\) is the relaxed integer variables set, and \(\mathbf{\lambda}\) is the set of continuous variables. Next, if all relaxed variables have integer values, the obtained objective in this iteration is considered to update the best explored integer solution. Otherwise, a variable index \(j\) is selected such that \(\mathbf{\mu}_{t}^{\star}[j]\) is fractional, and the feasible constraints set \(\pi_{t}\) is divided into two parts as \(\pi_{t}^{1}=\pi_{t}\cap\{\mathbf{\mu}_{t}[j]\leq\lfloor\mathbf{\mu}_{t}^{\star}[j]\rfloor\}\) and \(\pi_{t}^{2}=\pi_{t}\cap\{\mathbf{\mu}_{t}[j]\geq\lceil\mathbf{\mu}_{t}^{\star}[j]\rceil\}\). Then, two problems are formed as \(\Phi_{t}^{1}=min\) OF _s.t._\(\pi_{t}^{1}\) and \(\Phi_{t}^{2}=min\) OF _s.t._\(\pi_{t}^{2}\). Now, two child nodes \(N_{t}^{1}\) and \(N_{t}^{2}\), whose problems are \(\Phi_{t}^{1}\) and \(\Phi_{t}^{2}\) respectively, are put into the unexplored list. The B&B algorithm is iterated until \(\mathbf{\mathcal{N}}\) is empty.
Alternatively, we can run this algorithm until a desired solving time is reached or an acceptable objective value is acquired. The key advantage of this algorithm is that it produces at least a lower bound even when the solving time is limited. As a result, it may be used to establish baselines allowing for the evaluation of alternative approaches.
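A compact, generic sketch of this loop over 0/1 variables is shown below, using SciPy's LP solver for the natural relaxation. It is a didactic skeleton under the assumption of a pure inequality-form problem, not the authors' implementation, and omits the time- and objective-based stopping criteria mentioned above.

```python
import math
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, n_int, tol=1e-6):
    """Minimal B&B for min c.x s.t. A_ub x <= b_ub with n_int 0/1 variables.

    Keeps an unexplored node list, solves the box-relaxed LP for bounds,
    and branches on a fractional variable with floor/ceiling cuts."""
    best_x, best_val = None, math.inf
    nodes = [[(0.0, 1.0)] * len(c)]          # root: relaxed box constraints
    while nodes:
        bounds = nodes.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        if not res.success or res.fun >= best_val:
            continue                          # infeasible or pruned by bound
        frac = [j for j in range(n_int)
                if abs(res.x[j] - round(res.x[j])) > tol]
        if not frac:                          # integral: update incumbent
            best_x, best_val = res.x, res.fun
            continue
        j = frac[0]                           # branch: x_j <= floor / >= ceil
        lo = bounds.copy(); lo[j] = (bounds[j][0], math.floor(res.x[j]))
        hi = bounds.copy(); hi[j] = (math.ceil(res.x[j]), bounds[j][1])
        nodes += [lo, hi]
    return best_x, best_val
```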
### _WF-CCRA_
Since the B&B method searches the problem's solution space for the optimal solution, its complexity can grow up to the size of the solution space in the worst case [36]. Given that the size of the solution space in CCRA (or LiCCRA) for each request is \(\mathcal{V}^{2}\mathcal{P}^{2}\mathcal{K}\) considering its integer variables, the problem's overall size is \(\mathcal{R}!\mathcal{V}^{2}\mathcal{P}^{2}\mathcal{K}\), considering the number of permutations of \(\mathcal{R}\) requests. Therefore, finding its optimal solution for large-scale instances using B&B is impractical in a timely manner, and the goal of this section is to devise an efficient approach based on the WF concept in order to identify near-optimal solutions for this problem.
The WF-CCRA method is elaborated in Algorithm 2. The first step is to initialize the vectors of parameters and variables used in (1) (or in (2)). Following that, two empty sets, \(\mathbf{\mathcal{R}}^{\prime}\) and \(\mathbf{\Omega}\), are established. The former maintains the set of accepted requests, and the latter stores the feasible resource combinations for each request during its iteration. Now, the algorithm iterates through each request in \(\mathbf{\mathcal{R}}\), starting with the one with the most stringent delay requirement, and keeps track of the feasible allocations of VNF, priority, as well as inquiry and response paths based on the constraints of (1) (or (2)). The final steps of each iteration are to choose the allocation with the lowest cost and fix it for the request, as well as to update remaining resources and the set of pending and accepted requests. When there is no pending request, the algorithm terminates.
```
1: initialize variable and parameter vectors
2: \(\boldsymbol{\mathcal{R}}^{\prime}\leftarrow\{\}\), \(\boldsymbol{\Omega}\leftarrow\{\}\)
3: sort \(\boldsymbol{\mathcal{R}}\) in ascending order according to \(\widetilde{\mathcal{D}_{r}}\)
4: while \(\boldsymbol{\mathcal{R}}\) is not empty do
5:   \(r\leftarrow\) the first request in \(\boldsymbol{\mathcal{R}}\), \(\boldsymbol{\Omega}\leftarrow\{\}\)
6:   for \(v\in\boldsymbol{\mathcal{V}}\) do
7:     if \(z_{s_{r},v}==1\) and \(\widetilde{\mathcal{C}_{r}}\leq\widehat{c_{s_{r}}}\) on \(v\) then
8:       \(g_{r,v}=1\)
9:     else if \(z_{s_{r},v}\neq 1\) and \(\widetilde{c_{s_{r}}}\leq\widehat{c_{v}}\) then
10:      \(z_{s_{r},v}=1\), \(g_{r,v}=1\)
11:    else go to the next iteration
12:    for \(k\in\boldsymbol{\mathcal{K}}\) do
13:      \(\varrho_{r,k}=1\)
14:      for \(p\in\boldsymbol{\mathcal{P}}\wedge\vdash_{p}=v_{r}\wedge\dashv_{p}=v\) do
15:        if \(\widetilde{\mathcal{B}_{r}}\leq\widehat{B_{l}}\) and \(\widetilde{\mathcal{T}_{r}}\leq\widehat{\mathcal{T}_{k}}\) on \(l\), \(\forall l\in\boldsymbol{\mathcal{L}}\wedge\delta_{p,l}=1\) then
16:          if the E2E delay of \((v,k,p)\) satisfies C15 then
17:            add the feasible allocation \((v,k,p)\) and its cost to \(\boldsymbol{\Omega}\)
18:  fix the lowest-cost allocation in \(\boldsymbol{\Omega}\) for \(r\)
19:  update the remaining resources
20:  move \(r\) from \(\boldsymbol{\mathcal{R}}\) to \(\boldsymbol{\mathcal{R}}^{\prime}\)
```
## V Partially-Informed Method

In this section, we consider scenarios in which the system is only partially known, i.e., there is no prior knowledge about the requests' requirements. To solve the problem stated in (1) under these conditions, we employ the DDQL technique, proposed by Google DeepMind [37]. In what follows, the DDQL concept and its agent, which serves as the core building block of the learning logic, are briefly introduced. The design of the learning algorithm and the architecture of the DDQL-based resource allocation approach are then discussed, along with an analysis of various implementation strategies.
### _Double Deep Q-Learning Agent_
RL is a technique wherein an agent is trained to tackle sequential decision problems through trial-and-error interactions with the environment. Q-learning is a widely used RL algorithm wherein the agent learns the value of each action, defined as the sum of future rewards associated with performing that action, and then follows the optimal policy, which is choosing the action with the highest value in each state.
According to Watkins and Dayan [38], one method for obtaining the optimal action-value function is to define a Bellman equation as a straightforward value iteration update using the weighted average of the old value and the new information, that is
\[Q(\theta_{\tau},a_{\tau})\leftarrow Q(\theta_{\tau},a_{\tau})+\sigma\left[Y_{\tau}^{Q}-Q(\theta_{\tau},a_{\tau})\right], \tag{3}\]
where \(\theta_{\tau}\) and \(a_{\tau}\) are the agent's state and action at time slot \(\tau\) respectively, \(\sigma\) is a scalar step size, and \(Y_{\tau}^{Q}\) is the target, defined by
\[\footnotesize Y_{\tau}^{Q}=\beta_{\tau+1}+\gamma\,\text{max}_{a\in\mathbf{ \mathcal{A}}}Q(\theta_{\tau+1},a), \tag{4}\]
where \(\beta_{\tau+1}\) is the reward at time slot \(\tau+1\), \(\gamma\in[0,1]\) is a discount factor that balances the importance of immediate and later rewards, and \(\mathbf{\mathcal{A}}\) is the set of actions. Since most interesting problems are too large to discover all possible combinations of states and actions and learn all action-values, one potential alternative is to use a Deep Neural Network (DNN) to approximate the action-value function. In a Deep Q-Network (DQN), the state is given as the input and the \(Q\) function of all possible actions, denoted by \(Q(\theta,.;\mathbf{\mathcal{W}})\), is generated as the output, where \(\mathbf{\mathcal{W}}\) is the set of DNN parameters. The target of the DQN is as follows:
\[\footnotesize Y_{\tau}^{DQN}=\beta_{\tau+1}+\gamma\,\text{max}_{a\in\mathbf{ \mathcal{A}}}Q(\theta_{\tau+1},a,\mathbf{\mathcal{W}}_{\tau}). \tag{5}\]
and the update function of \(\mathbf{\mathcal{W}}\) is
\[\footnotesize\mathbf{\mathcal{W}}_{\tau+1}=\mathbf{\mathcal{W}}_{\tau}+\sigma[Y_{ \tau}^{DQN}-Q(\theta_{\tau},a_{\tau};\mathbf{\mathcal{W}}_{\tau})]\nabla_{\mathbf{ \mathcal{W}}_{\tau}}Q(\theta_{\tau},a_{\tau};\mathbf{\mathcal{W}}_{\tau}). \tag{6}\]
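To make the update rules concrete, a toy tabular version of (3)-(4) is sketched below; the step size and discount values are illustrative defaults, not values from the paper.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, sigma=0.1, gamma=0.95):
    """One tabular step of the value-iteration update in (3)-(4).

    Q is a (num_states, num_actions) array of action values."""
    y = r + gamma * np.max(Q[s_next])     # target Y^Q from (4)
    Q[s, a] += sigma * (y - Q[s, a])      # weighted-average update (3)
    return Q
```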
To further enhance the efficiency of DQN, it is necessary to consider two additional improvements. The first is the use of an experience memory [39], wherein the observed transitions are stored in a memory bank, and the neural network is updated by randomly sampling from this pool. The authors demonstrated that the concept of experience memory significantly improves the DQN algorithm's performance. The second is to employ the concept of Double Deep Q-Learning (DDQL), introduced in [37]. In both standard Q-learning and DQNs, the max operator selects and evaluates actions using the same values (or the same \(Q\)). Consequently, overestimated values are more likely to be selected, resulting in overoptimistic value estimations. DDQL implements decoupled selection and evaluation processes. The following is the definition of the target in DDQL:
\[\footnotesize Y_{\tau}^{DDQL}=\beta_{\tau+1}+\gamma\,\widehat{Q}(\theta_{ \tau+1},a^{\prime},\mathbf{\mathcal{W}}_{\tau}^{-}), \tag{7}\]
where \(a^{\prime}=argmax_{a\in\mathbf{\mathcal{A}}}Q(\theta_{\tau+1},a,\mathbf{\mathcal{W}}_{ \tau})\), and the update function is
\[\footnotesize\mathbf{\mathcal{W}}_{\tau+1}= \mathbf{\mathcal{W}}_{\tau}+\sigma[Y_{\tau}^{DDQL}\] \[-Q(\theta_{\tau},a_{\tau};\mathbf{\mathcal{W}}_{\tau})]\nabla_{\mathbf{ \mathcal{W}}_{\tau}}Q(\theta_{\tau},a_{\tau};\mathbf{\mathcal{W}}_{\tau}). \tag{8}\]
In this model, \(\mathbf{\mathcal{W}}\) is the set of weights for the main (or evaluation) \(Q\) and is updated in each step, whereas \(\mathbf{\mathcal{W}}^{-}\) is for the target \(\widehat{Q}\) and is replaced with the weights of the main network every \(t\) steps. In other words, \(\widehat{Q}\) remains a periodic copy of \(Q\). The authors demonstrated that the DDQL algorithm not only mitigates observed overestimations but also significantly improves accuracy. The training procedure of the DDQL agent is depicted in Fig. 3, which includes receiving the environment response and storing it in the memory bank, passing transitions to the evaluation network and updating its weights with the update function, and adjusting the weights of the target network. In this figure, \(\theta^{\prime}\) is the resulting state after applying action \(a\).
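The decoupling in (7) can be expressed in a few lines; in this sketch, `q_main` and `q_target` are assumed to be callables mapping a state to a vector of action values.

```python
import numpy as np

def ddql_target(reward, s_next, q_main, q_target, gamma=0.95):
    """DDQL target (7): select the action with the main network (W) and
    evaluate it with the target network (W^-), mitigating overestimation."""
    a_star = int(np.argmax(q_main(s_next)))           # action selection
    return reward + gamma * q_target(s_next)[a_star]  # action evaluation
```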
### _DDQL-CCRA_
Since the CCRA problem comprises different sets of variables and their corresponding constraints, to solve it based on the DDQL agent depicted in Fig. 3, the first step is to design a chain of agents, each of which is responsible for addressing one group of the variables. Our proposed chain consists of four DDQL agents. The first agent, denoted by \(\Lambda^{SP}\), is intended to determine the location of service VNFs in response to requests (\(\mathbf{g}\) and \(\mathbf{z}\)), and thus its action set is the set of network nodes. In other words, \(a^{SP}\in\mathbf{\mathcal{A}}^{SP}=\mathbf{\mathcal{V}}\). \(\Lambda^{PA}\) is the second agent with action set \(\mathbf{\mathcal{K}}\), and it is responsible for assigning the priority level of traffic. The remaining two agents route traffic by determining the inquiry path from the entering node to its VNF location and the response path in the opposite direction, denoted by \(\Lambda^{QPS}\) and \(\Lambda^{PPS}\), respectively. The action set of these agents comprises all possible network paths. To interact with the system, each agent provides an action that contains the index of the request for which it is attempting to satisfy its resource requirements and a value from its action space.
Fig. 3: DDQL Agent.
For example, \(a^{SP}=\{r:1,\xi:3\}\) means that the VNF for request \(1\) should be located in node \(3\), or \(g_{1,3}=1\). Moreover, \(\mathbf{a}=\{a^{SP},a^{PA},a^{QPS},a^{PPS}\}\) represents the set of all agents' actions.
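A possible bookkeeping of the four action spaces is sketched below; the set sizes are placeholders, not values from the paper.

```python
# Illustrative action containers for the four chained agents.
V, K, P = range(10), range(3), range(40)   # example nodes, priorities, paths

action_spaces = {
    "SP":  list(V),   # Lambda^SP: candidate VNF nodes
    "PA":  list(K),   # Lambda^PA: priority levels
    "QPS": list(P),   # Lambda^QPS: inquiry paths
    "PPS": list(P),   # Lambda^PPS: response paths
}

def joint_action(r, choices):
    """Bundle one decision per agent for request r; e.g. choices["SP"] = 3
    mirrors a^SP = {r: 1, xi: 3}, i.e., g[1][3] = 1."""
    return {agent: {"r": r, "xi": choices[agent]} for agent in action_spaces}
```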
Next, the system state has to be formulated. As the infrastructure is the only side of the system that is known, the state is a collection of network and cloud resources, that is:
\[\theta=\left[\widehat{c_{v}}\ \forall v\in\boldsymbol{\mathcal{V}}\right]\oplus\left[\Psi_{v}\ \forall v\in\boldsymbol{\mathcal{V}}\right]\oplus\left[\widehat{B_{l}}\ \forall l\in\boldsymbol{\mathcal{L}}\right]\oplus\left[\Xi_{l}\ \forall l\in\boldsymbol{\mathcal{L}}\right]\oplus\left[\widehat{D_{k,l}}\ \forall k\in\boldsymbol{\mathcal{K}},\forall l\in\boldsymbol{\mathcal{L}}\right], \tag{9}\]
where \(\oplus\) returns the concatenation of two arrays. When the system receives actions, the state of the available network and cloud resources is updated by deactivating the resources assigned to the associated request, and the resulting state \(\theta^{\prime}\) is generated.
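Constructing the state of (9) is a plain concatenation; a minimal sketch, assuming the resource vectors are already NumPy arrays:

```python
import numpy as np

def build_state(c_hat, psi, b_hat, xi, d_hat):
    """Concatenate the resource arrays of (9) into one flat state vector.

    c_hat, psi: per-node remaining capacity and cost; b_hat, xi: per-link
    remaining bandwidth and cost; d_hat: a K x L array of delay bounds."""
    return np.concatenate([c_hat, psi, b_hat, xi, np.ravel(d_hat)])
```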
The final step is to design the reward, which reflects the effectiveness of an action after it is applied and the system shifts from state \(\theta\) to the resulting state \(\theta^{\prime}\). In other words, agents are wired to the system via the reward. To address the problem defined in (1), we propose the following reward for request \(r\):
\[\beta=100\left(1-\frac{\text{OF}_{r,\mathbf{a}}-min\text{OF}_{r}}{max\text{OF}_{ r}-min\text{OF}_{r}}\right)\chi_{r,\mathbf{a}} \tag{10}\]
where \(max\) OF\({}_{r}\) and \(min\) OF\({}_{r}\) are the maximum and minimum costs that can be achieved by allocating the available network and cloud resources to request \(r\) without considering any constraints or requirements, OF\({}_{r,\mathbf{a}}\) is the cost of the allocations provided by the agents, and \(\chi_{r,\mathbf{a}}\) represents the response of request \(r\) to actions \(\mathbf{a}\). \(\chi_{r,\mathbf{a}}\) ensures that all constraints of (1) are met. Consider \(\mathbf{a}\) containing an action that violates one of the constraints (for example, a node or a VNF or a path is overloaded, or a priority level is assigned in such a way that the E2E delay requirement is violated). In this circumstance, the affected request will respond with \(\chi_{r,\mathbf{a}}=0\), and the reward for \(\mathbf{a}\) will be \(0\). Therefore, the probability of selecting that action decreases, and after a certain number of iterations, actions with infeasible allocations are implicitly removed from the set of possible actions. Besides, OF\({}_{r,\mathbf{a}}\) controls the efficiency of \(\mathbf{a}\). Similarly, after a number of iterations, allocations with lower costs will have a greater chance of being selected. Therefore, after training, agents will choose feasible actions (within the constraints of (1)) with lower costs (minimizing OF).
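The reward in (10) translates directly to code; in this sketch, `feasible` stands for the binary response \(\chi_{r,\mathbf{a}}\) reported by the request.

```python
def reward(of_ra, of_min, of_max, feasible):
    """Reward (10): cost efficiency scaled to [0, 100], zeroed whenever a
    constraint of (1) is violated (chi = 0)."""
    chi = 1.0 if feasible else 0.0
    return 100.0 * (1.0 - (of_ra - of_min) / (of_max - of_min)) * chi
```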
Now, Algorithm 3 details DDQL-CCRA, the learning algorithm proposed to solve the CCRA problem based on DDQL. The algorithm is divided into two phases:
#### V-B1 Training Phase
In this phase (lines 1 to 24), \(T\) represents the number of training steps, whereas \(\epsilon^{\prime}\) and \(\widetilde{\epsilon}\) are small positive numbers that control the \(\epsilon\)-greedy algorithm. Through each step, the set of actions is determined and transmitted to the system, after which the reward and the updated state are received and used to train the agents employing the ADAM optimizer [40] and update their DNN weights via the memory bank. This process is repeated over the set of requests until the specified maximum number of steps is reached. It is worth mentioning that the action in each agent is selected by an \(\epsilon\)-greedy policy that follows the evaluation function of the corresponding agent with probability \((1-\epsilon)\) and selects a random action with probability \(\epsilon\). The probability is decreased linearly from \(\epsilon^{\prime}\) to \(\widetilde{\epsilon}\) during the training process. Using the \(\epsilon\)-greedy method and the ADAM optimizer ensures the convergence of DDQL-CCRA to feasible, low-cost solutions (based on the defined reward) [41].
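The exploration schedule and action selection just described admit a short sketch; the function names are illustrative.

```python
import random

def eps_at(step, total_steps, eps_start, eps_end):
    """Linear decay of the exploration rate over the training phase."""
    frac = min(step / total_steps, 1.0)
    return eps_start + frac * (eps_end - eps_start)

def eps_greedy(q_values, eps):
    """Random action with probability eps, otherwise the greedy action."""
    if random.random() < eps:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```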
#### V-B2 Decision Making Phase
In each step of this phase (lines 25 to 36), one request is selected, and its required resources are allotted by the agents. The decision is then transmitted to the system to collect the responses of the infrastructure and the request. Following this, the reward and the mean reward, denoted by \(\overline{\beta}\), are determined. Fig. 4 depicts the actions generated by the agents, their transmission to the environment, and their subsequent return to the agents in preparation for the next decision-making. Due to the fact that we have no knowledge of the requests' requirements, every change in the criteria is managed by examining the average reward; if it falls below a specified threshold, denoted by \(\widetilde{\beta}\), it indicates that end-users have adopted a new policy and the training phase must be repeated. This procedure continues until the required resources for each request have been determined.
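The mean-reward check that triggers retraining can be sketched as follows; the sliding window size is an illustrative choice.

```python
from collections import deque

class DriftMonitor:
    """Track the mean reward over a sliding window and flag retraining when
    it drops below the threshold (beta tilde)."""

    def __init__(self, threshold, window=100):
        self.threshold = threshold
        self.rewards = deque(maxlen=window)

    def update(self, beta):
        self.rewards.append(beta)
        mean = sum(self.rewards) / len(self.rewards)
        return mean < self.threshold   # True => repeat the training phase
```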
### _DDQL-CCRA Resource Allocation Architecture_
The architecture of the DDQL-CCRA resource allocation method is depicted in Fig. 5. Due to the fact that the characteristics of different services may be entirely different, an isolated DDQL-CCRA algorithm is designed to be executed for each service. The broker receives requests, classifies them, and forwards each service's requests to its respective controller. In addition, the broker collects the most recent state of the network and cloud resources from the resource orchestrator and transmits it to the controllers. The controller is responsible for executing the DDQL-CCRA algorithm by implementing the memory bank, maintaining the state of requests, calculating the reward, and returning action sets to the broker. Action sets are collected by the broker from all controllers and relayed to the resource orchestrator to apply to the infrastructure. Since actions are chosen at random during the training phase, digital twins could be used to evaluate them to prevent the infrastructure from entering unpredictable states that result in disruptions to its operation [42].
In order to enhance the scalability of this architecture, rather than considering the set of all nodes as the action set of \(\Lambda^{SP}\) and the set of all paths of the network as the action set of \(\Lambda^{QPS}\) and \(\Lambda^{PPS}\), these spaces can be pruned to create
Fig. 4: Data flow between the agents and the system.
fixed-size sets consisting of the most likely options for VNF placement and path selection.
* For \(\Lambda^{SP}\), the lower and upper boundaries of the QoS requirements for each service can be extracted (or considered inputs to the problem), and then a set with size \(\mathcal{V}^{\prime}\), named \(\boldsymbol{\mathcal{V}^{\prime}}\), including feasible nodes to maintain the QoS boundaries at the lowest cost (\(\Psi_{v}\)) is generated.
* For \(\Lambda^{QPS}\) and \(\Lambda^{PPS}\), a set of size \(\mathcal{P}^{\prime}\) is created for each service containing feasible paths in order to maintain the QoS boundaries at the lowest cost (\(\Xi_{l}\)). Note that these paths should begin at the edge devices (the entry nodes of requests) and terminate at one of the nodes of \(\boldsymbol{\mathcal{V}^{\prime}}\) for \(\Lambda^{QPS}\). In \(\Lambda^{PPS}\), the same logic is followed, but in reverse order.
The complexity and accuracy of the DDQL-CCRA algorithm can be modified by adjusting the size of these sets. \(\mathcal{V}^{\prime}\) and \(\mathcal{P}^{\prime}\) can be set to large numbers if high precision is required or if the complexity of running the DDQL-CCRA algorithm can be handled by high-powered software/hardware. Alternatively, small sets can be utilized to return the result in a relatively shorter amount of time.
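The node-set pruning described in the first bullet could be realized along these lines; `qos_ok` stands for a service-specific feasibility test and is an assumed callback.

```python
def prune_nodes(nodes, qos_ok, cost, size):
    """Keep the `size` cheapest nodes satisfying the service's QoS
    boundaries, shrinking Lambda^SP's action set from V to V'."""
    feasible = [v for v in nodes if qos_ok(v)]
    return sorted(feasible, key=lambda v: cost[v])[:size]
```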
## VI Numerical Results
In this section, the efficiency of the proposed methods is numerically investigated. The system model parameters are
Fig. 5: DDQL-CCRA Resource Allocation Architecture.
listed in Table III, and the configuration of the agents' training procedure is shown in Table IV. Note that the results are obtained on a computer with 8 processing cores, 16 GB of memory, and a 64-bit operating system.
The accuracy of the B&B-CCRA and WF-CCRA methods is illustrated in Fig. 6. The methods are evaluated based on the accuracy of the solutions they provide. Note that the accuracy of a solution with objective value \(\eta\) is defined as \(1-\left((\eta-\eta^{\star})/\eta^{\star}\right)\), where \(\eta^{\star}\) is the scenario's optimal solution, which is obtained by solving it with CPLEX 12.10. In Fig. 6.A, the accuracy of B&B-CCRA is plotted vs. the solving time (in logarithmic scale) for five scenarios with different network sizes. As illustrated, the accuracy of B&B-CCRA starts at \(80\%\) after the first iteration, which is obtained by solving the LP transformation of LiCCRA with CPLEX 12.10 in just a few milliseconds, and increases as the solving time passes, reaching \(92\%\) for all samples after 100 seconds. This shows that the method can be easily applied to provide baseline solutions for small and medium size use cases. However, the accuracy growth is slowed by increasing the network size, which is expected given the problem's NP-hardness and complexity. In the two remaining subfigures, the accuracy of WF-CCRA is depicted against the number of requests contending for system resources, known as request burstiness, and network size. It is evident that regardless of network size, WF-CCRA has an average accuracy greater than \(99\%\), implying that it can be used to allocate resources in a near-optimal manner even for large networks. For different numbers of requests, the average accuracy remains significantly high and greater than \(96\%\). It does, however, slightly decrease as the number of requests increases, which is the cost of simplifying the problem by allocating resources to each request in isolation. Since the decrease is negligible, it is expected that the algorithm is capable of allocating resources efficiently for large numbers of requests.
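For clarity, the accuracy metric used throughout this section is simply:

```python
def accuracy(eta, eta_star):
    """Solution accuracy 1 - ((eta - eta*) / eta*) relative to the
    CPLEX optimum eta_star (a minimization objective)."""
    return 1.0 - (eta - eta_star) / eta_star
```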
The DDQL-CCRA resource allocation architecture, depicted in Fig. 5, is examined in Fig. 7. In this figure, the mean cost and E2E delay per each supported request, as well as the percentage of supported requests, are plotted against the DDQL-CCRA iteration counter for three scenarios with varying E2E delay requirements. In order to supplement the analysis, this figure additionally includes the outcomes of WF-CCRA in parallel to R-, CM-, and DM-CCRA. In R-CCRA, all allocations are determined at random, but in CM- and DM-CCRA, allocations are made to minimize cost and delay, respectively, without considering other constraints. Note that in order to implement DDQL-CCRA, we deployed the DDQL-CCRA resource allocation architecture on all edge devices (the entry nodes).
When \(\widetilde{D_{r}}\) is less than \(1\) ms, the only feasible solution is to assign all requests to the most costly nodes of the first tier. Consequently, the mean cost for all techniques is high, with the exception of CM-CCRA, which attempts to fit all requests into one of the third-tier nodes with the lowest cost, resulting in the inability to support any request and the mean cost of \(0\). Since the mean delay for all nodes in the first tier is too low, the average delay per each supported request for all methods excluding CM-CCRA is less than \(1\) ms and similar. However, the supported request rate is entirely different for each method. R-CCRA, which assigns nodes evenly to requests, places a third of requests on the first tier, therefore its rate is approximately \(33\%\). DM-CCRA selects the node with the shortest E2E delay; hence, its support rate is the number of requests that can be serviced by a single node in the first tier, which is approximately \(45\%\). Given that DDQL-CCRA employs the \(\epsilon\)-greedy technique, it also generates random results during the initial learning iterations. However, as the learning progresses, it receives the reward based on end-user responses and begins to place more and more requests on the first tier until it reaches the near-optimal solutions supplied by WF-CCRA.
When the E2E delay requirement threshold is changed to \(3\) ms, both the first and second tier nodes can be occupied to support requests. Since DM- and CM-CCRA always select a node in the first and third tiers, respectively, their outcomes are identical to those of the preceding scenario. R-CCRA doubles the percentage of supported requests because it randomly assigns \(66\%\) of requests to the first and second tiers. In addition, its mean delay is slightly smaller than that of WF- and DDQL-CCRA since it utilizes the first tier nodes more than these two cost-effective approaches. Note that the difference is negligible, as the delay of nodes in the first tier is vanishingly small and cannot significantly affect the mean delay. In contrast, when DDQL-CCRA identifies a changing
Fig. 6: Solution accuracy of A) B&B-CCRA vs. solving time, B) WF-CCRA vs. network size, and C) WF-CCRA vs. request burstiness. In subfigures A and B, the number of requests is set to \(200\), and the number of network nodes in subfigure C is \(20\). In subfigure B and C, for each number of nodes or requests, \(50\) random systems are generated, and the problem is solved using WF-CCRA, with B&B-CCRA providing the optimal solution. The results of random samples are represented by blue dots, and the aggregated results are represented through boxplots, where red points indicate medians.
need (lines 35 and 36 of Algorithm 3), it restarts the learning process and enables the \(\epsilon\)-greedy technique. Therefore, it begins anew with random results and optimizes allocation by fitting as many requests as feasible into the second-tier nodes in ascending cost order. Also in this scenario, it can be observed that the learning technique yields near-optimal efficiency outcomes.
The final scenario eliminates the delay requirement, releasing the entire infrastructure to serve requests. In this case, although the results for DM-CCRA are identical, the support rate for CM-CCRA is approximately \(50\%\), indicating that the node with the lowest cost can service approximately \(50\%\) of requests. Similar to the prior scenario, the outcomes of R-CCRA are enhanced. Now it can support all requests, but its mean cost and delay are not optimal because it consumes the resources of all tiers equally. The trend for DDQL-CCRA is similar. As soon as it senses a change in requirements, it begins to randomly assign resources, recognizing that requests should be sent as much as possible to the core clouds. It initially determines that the node with the lowest cost yields the best outcome. Therefore, it places all requests on a single node, thereby reducing the number of supported requests while improving the mean delay and cost. Subsequently, the reward of this allocation begins to decline as certain requests cannot be supported, the value of dispersing requests throughout the third tier increases progressively, and the optimal policy leads to an increase in the support rate, coupled with a reduction in the mean cost and delay. In Fig. 7, it is evident that the DDQL-CCRA approach in partially known systems can approach the near-optimal solutions obtained when those systems are fully known.
In Fig. 8, DDQL-CCRA is investigated with regard to request burstiness. The mean cost and E2E delay of each supported request are depicted in Fig. 8.A and Fig. 8.C respectively, whereas Fig. 8.B illustrates the number of supported requests. In this figure, the results of DDQL-CCRA are compared with those of WF-CCRA, FSA [13], BSA [13], CEP [10], A-DDPG [11], and MDRL-SaDS [14]. FSA is a heuristic algorithm that randomly assigns resources to requests in descending order of their required computing capacity. BSA is a similar method that assigns resources in descending order of their remaining capacity to the sorted requests. In CEP, resources are allocated with the aim of minimizing the total cost of links. A-DDPG is an RL method that adjusts the reward for each request to maximize its overall utility. In this solution, utility is defined as the profit of serving the request as a function of its required bandwidth minus the E2E path delay experienced. MDRL-SaDS is another RL technique in which the reward is the computing and networking cost of serving each request divided by the total cost of allocated resources across the infrastructure. This strategy seeks to minimize the cost of allocated resources in relation to their energy consumption.
Evidently, the number of requests supported by FSA is relatively high, as are its mean cost and E2E delay. FSA distributes requests across all tiers, resulting in significant utilization of all resources, a high mean cost, and a high mean E2E delay. Since all links have a similar cost, CEP exhibits comparable performance; however, because it considers the feasibility of links, it achieves slightly better results. A-DDPG, where requests are assigned to nodes with a lower E2E delay, is another costly method. Increasing the number of requests
Fig. 7: DDQL-CCRA convergence compared with WF-CCRA, R-CCRA, CM-CCRA, and DM-CCRA. The number of requests is \(150\), and the number of network nodes is \(30\), and all requests can be serviced by a single tier’s resources. The results are calculated as a moving average with a window size of \(100\) in order to capture trend lines.
causes second- and first-tier nodes to become occupied and requests to be assigned to the resources of other tiers, thereby increasing the E2E delay. In BSA, because the performance metric is the remaining capacity of computing nodes and the nodes are ordered from low capacity to high capacity across the tiers, it occupies the nodes from the cloud to the edge, resulting in outcomes with very low cost and moderate delay. A-DDPG and BSA cannot support a substantial number of requests because the feasibility of links is not explicitly evaluated. MDRL-SaDS is the most inefficient method by which requests are routed to the node with the lowest cost. Therefore, the number of supported requests is proportional to the node's capacity and delay. The behavior of the two remaining methods, DDQL- and WF-CCRA, is comparable. They support requests by initially assigning third-tier nodes. Then, once this tier is occupied, they proceed to occupy the second tier, resulting in an exponential cost increase. The mean cost converges to a fixed value when all resources are occupied and the first tier is in use. This approach results in a very low E2E delay because it assigns request priorities based on their delay requirements (unlike other approaches, which are unaware of ATS queues). DDQL-CCRA can provide near-optimal solutions regardless of the number of requests received, as demonstrated.
The final figure compares the proposed approaches to the approaches depicted in the preceding figure for various network sizes. In this scenario, if \(\mathcal{V}\leq 15\), the first-tier network has a very high capacity, whereas if \(\mathcal{V}>15\), there are sufficient resources to fulfill all requests. Therefore, CEP and A-DDPG, which focus on minimizing the cost of allocated links and E2E delay respectively, as well as FSA, which allocates resources randomly, can support a large number of requests despite the high cost of allocations. Since the capacity of the low-cost tier for BSA and MDRL-SaDS is not excessive (and the capacity ratio of this tier to the others is less than in the previous figure), the request support rate is unpromising despite the low cost. When the infrastructure is full (\(\mathcal{V}\leq 15\)), the results of WF- and DDQL-CCRA are comparable to those of other algorithms. However, when there are more resources (\(\mathcal{V}>15\)), these two approaches move requests to low-cost resources, thereby reducing the total cost of allocations. When it comes to E2E delay, even though the results are similar for all methods, by adding a node to the initial network, the resources of the first tier are extended and more requests can be supported with smaller delays, resulting in a sudden decrease for \(\mathcal{V}=10\). By adding more nodes, however, more resources are added to the other tiers, and FSA (which allocates resources randomly), BSA (which assigns resources in descending order of their remaining capacity), and A-DDPG (which tries to maximize the overall utility) migrate requests to the lower-cost tiers, resulting in a slight increase in E2E delay. Despite the increase in WF- and DDQL-CCRA techniques, their outcome is the lowest E2E delay because they
Fig. 8: A) Mean cost of each supported request, B) number of supported requests, and C) mean E2E delay of each supported request for DDQL-CCRA, WF-CCRA, FSA & BSA [13], CEP [10], A-DDPG [11], and MDRL-SaDS [14] vs. request burstiness. The delay requirement of requests is \(10\) ms, and the number of network nodes is \(9\). The results are calculated as a moving average with a window size of \(100\), where each sample is the average of \(50\) arbitrary systems.
Fig. 9: A) Mean cost of each supported request, B) number of supported requests, and C) mean E2E delay of each supported request for DDQL-CCRA, WF-CCRA, FSA [13], BSA [13], CEP [10], A-DDPG [11], and MDRL-SaDS [14] vs. network size. In this scenario, the first four nodes are added to the first tier, followed by the second four nodes to the second tier, and then the last four nodes to the third tier. The delay requirement is \(10\) ms, and there are a total of \(300\) requests. The results are calculated as a moving average with a window size of \(20\), where each sample is the average of \(50\) arbitrary systems.
manage priority queues according to the delay requirements of requests.
## VII Conclusion
In this paper, the joint problem of communication and computing resource allocation comprising VNF placement and assignment, traffic prioritization, and path selection considering capacity and delay constraints was studied. The primary objective was to minimize the total cost of allocations. We initially formulated the problem as a MINLP model, and used a method, named B&B-CCRA, to solve it optimally. Then, due to the complexity of B&B-CCRA, a WF-based approach was developed to find near-optimal solutions in a timely manner. These two methods can be utilized to solve the problem when the system is fully known. However, for scenarios wherein there is no request-specific information, a DDQL-based architecture was presented that yields near-optimal solutions. The efficiency of the proposed methods was demonstrated by numerical results.
As potential future work, we intend to address the problem by accounting for more dynamic environments in which end-users are mobile and all of their needs are subject to change. In addition, the proposed methods could be supplemented by taking into account dynamic infrastructure resources, in which the cost of resources (such as their energy consumption) or their availability can fluctuate over time. In such highly dynamic scenarios, we intend to enhance the proposed DDQL-based method with Continual Learning in order to reduce the adaptation time required to adjust agents after each change. Another possible research direction is to extend the problem to include radio domain resources (such as power control, channel assignments, rate control, and relay selection in multi-hop scenarios), thereby providing end-user-to-end-user resource allocations. Furthermore, we intend to improve the proposed method for allocating resources to VNF chains rather than individual VNFs.
## Acknowledgment
This research work is partially supported by the Business Finland 6Bridge 6Core project under Grant No. 8410/31/2022, the Academy of Finland IDEA-MILL project under Grant No. 352428, the European Union's Horizon 2020 ICT Cloud Computing program under the ACCORDION project with grant agreement No. 871793, and the Academy of Finland 6G Flagship program under Grant No. 346208.
|
2309.16297 | The relation between the cool-core radius and the host galaxy clusters:
thermodynamic properties and cluster mass | We present a detailed study of cool-core systems in a sample of four galaxy
clusters (RXCJ1504.1-0248, A3112, A4059, and A478) using archival X-ray data
from the Chandra X-ray Observatory. Cool cores are frequently observed at the
centers of galaxy clusters and are considered to be formed by radiative cooling
of the intracluster medium (ICM). Cool cores are characterized by a significant
drop in the ICM temperature toward the cluster center. We extract and analyze
X-ray spectra of the ICM to measure the radial profiles of the ICM
thermodynamic properties including temperature, density, pressure, entropy, and
radiative cooling time. We define the cool-core radius as the turnover radius
in the ICM temperature profile and investigate the relation between the
cool-core radius and the properties of the host galaxy clusters. In our sample,
we observe that the radiative cooling time of the ICM at the cool-core radius
exceeds 10 Gyr, with RXCJ1504.1-0248 exhibiting a radiative cooling time of
$32^{+5}_{-11}$ Gyr at its cool-core radius. These results indicate that not
only radiative cooling but also additional mechanisms such as gas sloshing may
play an important role in determining the size of cool cores. Additionally, we
find that the best-fit relation between the cool-core radius and the cluster
mass ($M_{500}$) is consistent with a linear relation. Our findings suggest
that cool cores are linked to the evolution of their host galaxy clusters. | FanLam Ng, Shutaro Ueda | 2023-09-28T09:50:04Z | http://arxiv.org/abs/2309.16297v2 | The relation between the cool-core radius and the host galaxy clusters: thermodynamic properties and cluster mass
###### Abstract
We present a detailed study of cool-core systems in a sample of four galaxy clusters (RXCJ1504.1-0248, A3112, A4059, and A478) using archival X-ray data from the _Chandra_ X-ray Observatory. Cool cores are frequently observed at the centers of galaxy clusters and are considered to be formed by radiative cooling of the intracluster medium (ICM). Cool cores are characterized by a significant drop in the ICM temperature toward the cluster center. We extract and analyze X-ray spectra of the ICM to measure the radial profiles of the ICM thermodynamic properties including temperature, density, pressure, entropy, and radiative cooling time. We define the cool-core radius as the turnover radius in the ICM temperature profile and investigate the relation between the cool-core radius and the properties of the host galaxy clusters. In our sample, we observe that the radiative cooling time of the ICM at the cool-core radius exceeds \(10\,\mathrm{Gyr}\), with RXCJ1504.1-0248 exhibiting a radiative cooling time of \(32^{+5}_{-11}\,\mathrm{Gyr}\) at its cool-core radius. These results indicate that not only radiative cooling but also additional mechanisms such as gas sloshing may play an important role in determining the size of cool cores. Additionally, we find that the best-fit relation between the cool-core radius and the cluster mass (\(M_{500}\)) is consistent with a linear relation. Our findings suggest that cool cores are linked to the evolution of their host galaxy clusters.
Intracluster medium (858), Galaxy clusters (584), X-ray astronomy (1810)
## 1 Introduction
Galaxy clusters contain a large amount of diffuse, hot X-ray emitting gas known as the intracluster medium (ICM, \(T\sim 10^{7-8}\,\mathrm{K}\)), which is trapped and thermalized in the deep gravitational potential well dominated by dark matter. Because of its high temperature, the ICM emits X-ray radiation through thermal bremsstrahlung. Since the X-ray emissivity of the ICM is proportional to the square of the electron density, the ICM loses its thermal energy by X-ray radiation faster in the central region of galaxy clusters than in the outskirts.
The radiative cooling time of the ICM at the centers of galaxy clusters exhibiting strong X-ray surface brightness peaks is much shorter than the inferred age of galaxy clusters1 (e.g., Peterson & Fabian, 2006, for a review). Thus, it was expected that runaway cooling occurs and triggers massive star formation in the centers of galaxy clusters (Fabian, 1994). However, this expectation is inconsistent with observational evidence. Neither such expected massive star formation nor a large amount of cooled ICM that serves as fuel for star formation is found (e.g., Tamura et al., 2001; Peterson et al., 2001; O'Dea et al., 2008; McDonald et al., 2011, 2018). This inconsistency indicates that runaway cooling must be suppressed, meaning the presence of heating sources.
Footnote 1: 7.7 Gyr (\(z\sim 1\)) is often adopted as the age of low-\(z\) galaxy clusters.
Although a heating source is required, X-ray observations have revealed that the temperature of the ICM drops toward the cluster center. The ICM temperature at the center is measured at a few keV, corresponding to \(30-50\,\mathrm{\char 37}\) of the peak value of the ICM temperature profile (e.g., Vikhlinin et al.
2011). Such a region, consisting of cooler ICM (or cooling ICM), is referred to as a cool core (Molendi & Pizzolato, 2001). The presence of cool cores indicates that not only the cooling of the ICM is still dominant in cool cores but also the heating is balanced with the cooling of the ICM at least within a specific region. Therefore, exploring cool-core systems in galaxy clusters is important to understand not only the the origin of cool cores but also the evolution of cool cores under conditions of the balance between cooling and heating.
Cool cores are characterized by a significant drop in the ICM temperature toward the cluster center (Molendi & Pizzolato, 2001). However, various alternative observables of the ICM have been used to identify cool cores: central electron density (Hudson et al., 2010; Barnes et al., 2018), central radiative cooling time (e.g., Bauer et al., 2005; Hudson et al., 2010; Wang et al., 2023), central entropy profile (e.g., Bauer et al., 2005; Hudson et al., 2010), classical mass deposition rate (e.g., Chen et al., 2007), and X-ray luminosity ratio (e.g., Sayers et al., 2013; Shitanishi et al., 2018). These observables have also been used to distinguish between cool-core and non-cool-core clusters. For instance, Su et al. (2020) used the central radiative cooling time of the ICM as an indicator to identify cool-core clusters from a sample of galaxy clusters obtained from cosmological simulations such as IllustrisTNG. Additionally, Barnes et al. (2018) investigated the cool-core status of galaxy clusters in a sample from IllustrisTNG using six parameters: central electron density, radiative cooling time, entropy profile, X-ray concentration parameters within a certain or fiducial radius, and the cuspiness parameter of the X-ray morphology. Lagana et al. (2019) also investigated the optimal parameters for identifying cool cores using observables including the cuspiness of the gas density profile, the central gas density, and properties of the brightest cluster galaxies (BCG).
Hudson et al. (2010) presented an in-depth study of the ICM properties in the central regions of galaxy clusters listed in a catalog of the extended HIghest X-ray FLUx Galaxy Cluster Sample (HIFLUGCS: Reiprich & Bohringer, 2002). They applied 16 cool-core diagnostic parameters to their sample to find the most appropriate parameter for characterizing cool-core clusters. They found that the radiative cooling time at the center and the cuspiness serve as the most effective indicators for low-\(z\) and high-\(z\) cool-core clusters, respectively. In addition, they introduced two categories of cool cores: strong and weak cool cores, based on the radiative cooling time at the center. Strong cool cores are defined as those with a radiative cooling time at the center shorter than \(1\,\mathrm{Gyr}\), while weak cool cores are identified when the radiative cooling time (\(t_{\mathrm{cool}}\)) at the center falls within the range \(1\,\mathrm{Gyr}<t_{\mathrm{cool}}<7.7\,\mathrm{Gyr}\).
A universal form in the ICM temperature profile has been reported (e.g., Allen et al., 2001; Vikhlinin et al., 2005; Sanderson et al., 2006). Allen et al. (2001) analyzed the ICM temperature profiles scaled by \(r_{2500}\)2 for six cool-core clusters and found an approximately universal form in the scaled temperature profiles. Vikhlinin et al. (2005) measured the ICM temperature profiles from the central region to the outskirts in 13 nearby, relaxed galaxy clusters and groups, and revealed a similar trend in the temperature profiles scaled by \(r_{180}\). Sanderson et al. (2006) also measured the ICM temperature profiles in 20 galaxy clusters and found a possible universal form in the temperature profiles scaled by \(r_{500}\).
Footnote 2: We adopt \(M_{\Delta}\) as the mass enclosed within a sphere of radius \(r_{\Delta}\), whose mean density is \(\Delta\) times the critical density of the Universe at the redshift of the galaxy cluster.
Such previous studies aimed to investigate possible relations between the temperature profile and the total mass of galaxy clusters. However, both cooling and heating processes have a significant impact on cool cores, indicating that baryon physics in the centers of galaxy clusters likely plays an important role in shaping the observed characteristics of cool cores. Therefore, it is essential to focus on cool-core systems and characterize cool cores by the ICM temperature.
In this paper, we aim to characterize cool cores in our sample using the cool-core radius that is defined as the turnover radius in the ICM temperature profile. In addition, we aim to study possible relations between the cool-core radius and the properties of the host galaxy clusters, including the ICM thermodynamic properties and cluster mass. To this end, we first analyze the X-ray spectra of the ICM extracted from annular regions determined by the morphology of X-ray surface brightness and X-ray photon counts. Then, we measure the radial profiles of the ICM thermodynamic properties and determine the cool-core radius by analyzing the ICM temperature profile. Finally, we measure thermodynamic perturbations in the ICM to constrain the turbulent velocity of the ICM within the cool cores in our sample.
This paper is organized as follows. Section 2 provides a brief summary of our sample and its properties. Section 3 presents detailed information regarding _Chandra_ observations and data reduction procedures. Section 4 describes the X-ray spectral analysis and measurements of the radial profiles of the ICM thermodynamic properties. Section 5 presents the study of possible relations between the cool-core radius and the properties of the host galaxy clusters and the analysis of thermodynamic perturbations in the ICM. Finally, a summary is given in Section 6.
Throughout the paper, we assume \(\Omega_{\rm m}=0.3\), \(\Omega_{\Lambda}=0.7\), and the Hubble constant of \(H_{0}=70\,{\rm km\,s^{-1}\,Mpc^{-1}}\). Unless stated otherwise, quoted errors correspond to \(1\,\sigma\) uncertainties.
## 2 Sample
In this paper, we focus on four galaxy clusters: RXCJ1504.1-0248, Abell 3112, Abell 4059, and Abell 478. These galaxy clusters are included in the HIFLUGCS catalog (Reiprich & Bohringer, 2002). According to Hudson et al. (2010), all galaxy clusters in our sample are classified as strong cool-core clusters based on the radiative cooling time of the ICM at the center. Here, we provide a brief summary of previous studies on each galaxy cluster. The values of \(M_{500}\) and \(r_{500}\) mentioned below are extracted from the Meta-Catalogue of X-ray detected Clusters of galaxies (MCXC; Piffaretti et al., 2011).
### Rxcj1504.1-0248
This cluster is located at a redshift of \(z=0.2153\) and is known as one of the most massive galaxy clusters with \(M_{500}=12.47\times 10^{14}\,M_{\odot}\) (\(r_{500}=1.52\,{\rm Mpc}\)). This cluster hosts an extreme cool core characterized by a short radiative cooling time (e.g., Bohringer et al., 2005; Hlavacek-Larrondo & Fabian, 2011). Bohringer et al. (2005) analyzed the X-ray surface brightness profile using a \(\beta\)-model (Cavaliere & Fusco-Femiano, 1976, 1978; Ettori, 2000) and found that the core radius of the \(\beta\)-model is \(\sim 30\,h_{70}^{-1}\,{\rm kpc}\), which is significantly smaller than the cooling radius of \(\sim 140\,{\rm kpc}\), defined as the radius at which the radiative cooling time is \(10\,{\rm Gyr}\). Furthermore, Bohringer et al. (2005) found a significant drop in the ICM temperature profile toward the cluster center. The ICM temperature at the center is measured to be below \(5\,{\rm keV}\), while the temperature of the ambient ICM is \(\sim 10.5\,{\rm keV}\). Hlavacek-Larrondo & Fabian (2011) reported that there is no obvious X-ray point source associated with the BCG. Giacintucci et al. (2011) found the presence of a radio mini-halo in the central region of this cluster.
### Abell 3112
This cluster is located at a redshift of \(z=0.075\) and has a powerful radio source, PKS 0316-444, in the center. Takizawa et al. (2003) measured the radial profiles of the ICM thermodynamic properties, including temperature, abundance, electron density, pressure, and radiative cooling time with _Chandra_, and found a temperature drop toward the cluster center. Bulbul et al. (2012) also studied the temperature profile of the ICM from the center to the outskirts. The ICM temperature at the center is measured at \(\sim 3.4\,\) keV, while the peak value in the temperature profile is measured at \(\sim 5.1\,{\rm keV}\), which is consistent with that reported by Ezer et al. (2017). The mass of this cluster is estimated at \(M_{500}=4.39\times 10^{14}\,M_{\odot}\) (\(r_{500}=1.13\,{\rm Mpc}\)), which is consistent with that estimated by Nulsen et al. (2010) and Bulbul et al. (2012).
### Abell 4059
This cluster is located at a redshift of \(z=0.049\). The ICM temperature profile shows a temperature drop from \(\sim 4\,{\rm keV}\) at \(\sim 100\,{\rm kpc}\) away from the center to \(\sim 2\,{\rm keV}\) at the center (e.g., Huang & Sarazin, 1998; Choi et al., 2004; Reynolds et al., 2008; Mernier et al., 2015). Lagana et al. (2019) conducted a detailed study of the spatial distributions of the ICM temperature, entropy, pressure, and abundance. The mass of this cluster is estimated as \(M_{500}=2.67\times 10^{14}\,M_{\odot}\) (\(r_{500}=0.96\,{\rm Mpc}\)), which is consistent with that estimated by Vikhlinin et al. (2009). Reynolds et al. (2008) reported a slight dip in the pressure profile at \(r\sim 15\,{\rm kpc}\), which is likely associated with an X-ray cavity caused by AGN feedback. Thus, the local deficit in thermal pressure may be compensated by non-thermal pressure support.
### Abell 478
This cluster is located at a redshift of \(z=0.088\). Sun et al. (2003) analyzed the central \(500\,{\rm kpc}\) region with _Chandra_ and reported the radial profiles of the ICM thermodynamic properties, finding that the peak value in the temperature profile is \(\sim 8.5\,{\rm keV}\), while the temperature at the center is \(\sim 3\,{\rm keV}\). These results are in agreement with those measured by Pointecouteau et al. (2004) and Vikhlinin et al. (2005). X-ray cavities are observed in the central \(15\,{\rm kpc}\) region, with two weak and small (\(\sim 4\,{\rm kpc}\)) radio lobes spatially associated with the X-ray cavities (Sun et al., 2003). The mass of this cluster is estimated as \(M_{500}=6.42\times 10^{14}\,M_{\odot}\) (\(r_{500}=1.28\,{\rm Mpc}\)).
## 3 Observation and Data Reduction
We analyzed archival X-ray data of the sample taken with the Advanced CCD Imaging Spectrometer (ACIS; Garmire et al., 2003) on board the _Chandra_ X-ray Observatory. The observation identification numbers (ObsIDs) and corresponding information of _Chandra_ observations in this study are summarized in Table 1. We
used version 4.11 of the _Chandra_ Interactive Analysis of Observations software (CIAO; Fruscione et al., 2006) and version 4.8.3 of the calibration database (CALDB). To ensure data quality, we examined the light curve of each dataset using the lc_clean task in CIAO and filtered out flare intervals. The blanksky data provided by the CALDB were adopted as background data for the spectral analysis. Point sources were identified by the wavdetect task in CIAO and were subsequently masked. We extracted X-ray spectra of the ICM from each dataset using the specextract task in CIAO and combined them after making individual spectrum, response, and ancillary response files for the spectral fitting. We used XSPEC version 12.11.0f (Arnaud, 1996) and the atomic database (AtomDB) for plasma emission modeling version 3.0.9 in the X-ray spectral analysis, assuming that the ICM is in collisional ionization equilibrium (Smith et al., 2001; Foster et al., 2012). The abundance table of Anders & Grevesse (1989) was used in XSPEC. Here, the abundance of a given element is defined as \(Z_{i}=(n_{i,\rm obs}/n_{\rm H,obs})/(n_{i,\odot}/n_{\rm H,\odot})\), where \(n_{i}\) and \(n_{\rm H}\) represent the number densities of the \(i\)th element and hydrogen, respectively. The iron abundance of the ICM is used to represent the ICM metal abundance, such that the abundances of other elements are tied to the iron abundance as \(Z_{i}=Z_{\rm Fe}\) (Ueda et al., 2021).
## 4 Analysis and Results
To determine the cool-core radius for the sample, our initial goal is to obtain and analyze the ICM temperature profile of each cluster. We define annular regions for X-ray spectral analysis based on the morphology of the X-ray surface brightness of the sample. Next, we extract X-ray spectra of the ICM from each defined annular region and carry out spectral analysis to measure the ICM thermodynamic properties. Then, we perform fitting of the ICM temperature profile to determine the cool-core radius. We also study the radial profiles of the ICM electron number density, pressure, entropy, and radiative cooling time.
### X-ray imaging analysis
Figure 1 shows the X-ray surface brightness images of the sample in the \(0.5-7.0\) keV band taken with _Chandra_ after subtracting the background and correcting the exposure time. Since the morphology of their X-ray surface brightness appears axially symmetric (e.g., Ueda et al., 2021), we adopt the same elliptical model as that used in Ueda et al. (2021) to define annular regions for spectral analysis. The position of the center, the position angle, and the axis ratio of the elliptical model for each cluster are summarized in Table 2. Note that Ueda et al. (2021) calculated a mean surface brightness by applying the concentric ellipse fitting algorithm constructed by Ueda et al. (2017), which minimizes the variance of the X-ray surface brightness relative to the ellipse model. To determine the width of each radial bin, we ensure that the net photon counts for each bin fall within the range of \(5000-10000\) in the \(0.4-7.0\) keV band for better statistics. For several smaller regions near the center and outer regions, we adjust the net photon counts in the range of \(1500-5000\). The detailed information regarding our region selection is summarized in Appendix A (see Table 10).
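To make the binning procedure concrete, here is a minimal sketch (not the authors' actual pipeline) of how annulus edges can be chosen so that each bin accumulates a target number of net counts; the `target` value and the synthetic photon radii are illustrative assumptions.

```python
import numpy as np

def annuli_edges(radii, target=7500):
    """Return annulus edges such that each bin holds ~`target` photons.

    `radii`: per-photon elliptical radii (e.g., kpc along the major axis);
    `target`: chosen inside the 5000-10000 count range quoted in the text.
    """
    rs = np.sort(np.asarray(radii))
    cuts = np.arange(target, rs.size, target)
    return np.concatenate(([0.0], rs[cuts], [rs[-1]]))

# Illustrative usage with synthetic photons spread over ~500 kpc
rng = np.random.default_rng(0)
edges = annuli_edges(500.0 * np.sqrt(rng.random(60000)))
```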
### X-ray spectral analysis
We extract X-ray spectra of the ICM from each elliptical annulus defined in Section 4.1. The X-ray spectra in the \(0.4-7.0\) keV band are analyzed using the model of phabs * apec in XSPEC. The redshift of each cluster is fixed to the value in Table 1, which is taken from NASA/IPAC Extragalactic Database (NED)4. The column densities of the Galactic absorption (i.e., \(N_{\rm H}\)) toward RXCJ1504.1-0248 and A3112 are fixed to the values measured by HI4PI Collaboration et al. (2016), respectively. However, for A4059 and A478, Choi et al. (2004) and Sun et al. (2003) reported that \(N_{\rm H}\) varies with radius and is systematically larger than that derived from HI4PI Collaboration et al. (2016). In fact, the measured values of \(N_{\rm H}\) for A4059 and A478 in our spectral analysis are a factor of 2 larger than those of HI4PI Collaboration et al. (2016) and consistent with the previous measurements (Choi et al., 2004; Sun et al., 2003). Therefore, we allow \(N_{\rm H}\) for A4059 and A478 to vary in the spectral analysis.
Footnote 4: [http://ned.ipac.caltech.edu/](http://ned.ipac.caltech.edu/)
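As an illustration of this setup, a PyXspec sketch of the fit described above might look as follows; the spectrum file name and starting values are hypothetical, and only the \(0.4-7.0\) keV band, the phabs*apec model, and the Anders & Grevesse (1989) table are taken from the text.

```python
import xspec

xspec.Xset.abund = "angr"                 # Anders & Grevesse (1989) table
s = xspec.Spectrum("a3112_annulus1.pi")   # hypothetical spectrum file
s.ignore("**-0.4 7.0-**")                 # keep the 0.4-7.0 keV band

m = xspec.Model("phabs*apec")
m.phabs.nH.values = 0.013                 # 10^22 cm^-2; illustrative value
m.phabs.nH.frozen = True                  # fixed here; left free for A4059/A478
m.apec.Redshift.values = 0.075            # fixed to the value in Table 1
m.apec.Redshift.frozen = True
xspec.Fit.perform()
print(m.apec.kT.values[0], m.apec.Abundanc.values[0])
```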
The observed radial profiles of the temperature and electron number density for the sample are shown in Figures 2 and 3, respectively. To estimate the ICM electron number density, we assume a line-of-sight length of \(L=1\,{\rm Mpc}\). Additionally, the observed radial profiles of the ICM metal abundance are presented in Appendix B (Figure 10). We also calculate the ICM pressure \(p_{\rm e}\) and entropy \(K_{\rm e}\) as
\[p_{\rm e}=kT\times n_{\rm e} \tag{1}\]
and
\[K_{\rm e}=kT\times n_{\rm e}^{-\frac{2}{3}}, \tag{2}\]
respectively, where \(kT\) is the ICM temperature and \(n_{\rm e}\) is the ICM electron number density. The radial profiles of the pressure and entropy are shown in Figures 4 and 5, respectively. In addition, following the approach of McDonald et al. (2019), we calculate the radiative cooling
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Cluster & Redshift & Expo. time (ksec) & Scale (kpc arcsec\({}^{-1}\)) & ObsID \\ \hline RXCJ1504.1-0248 & 0.215 & 161.7 & 3.51 & 4935, 5793, 17197, 17669, 17670 \\ A3112 & 0.075 & 133.7 & 1.43 & 2216, 2516, 6972, 7323, 7324, 13135 \\ A4059 & 0.049 & 120.0 & 0.96 & 897, 5785 \\ A478 & 0.088 & 50.1 & 1.65 & 1669, 6102 \\ \hline \end{tabular}
\end{table}
Table 1: Summary of our sample: cluster name, redshift, net exposure time, physical scale, and datasets taken with _Chandra_.
Figure 1: X-ray surface brightness of the sample: RXCJ1504.1-0248 (top left), A3112 (top right), A4059 (bottom left), and A478 (bottom right). The X-ray surface brightness in the \(0.5-7.0\,\mathrm{keV}\) band is shown on a logarithmic scale in units of \(\mathrm{photon\,sec^{-1}\,arcsec^{-2}\,cm^{-2}}\). Each image is smoothed with a Gaussian kernel of \(2.3^{\prime\prime}\) FWHM. A dashed, white ellipse shows the cool-core radius, \(r_{\mathrm{cool}}\), derived from the best-fit results of the analysis of the ICM temperature profile.
time of the ICM, \(t_{\rm cool}\), as
\[t_{\rm cool}=\frac{3}{2}\frac{(n_{\rm e}+n_{\rm p})kT}{n_{\rm e}n_{\rm p}\Lambda(T, Z)}, \tag{3}\]
where \(n_{\rm p}\) is the proton number density, and \(\Lambda(T,Z)\) is the cooling function. To convert from \(n_{\rm e}n_{\rm p}\) to \(n_{\rm e}\), McDonald et al. (2019) assumed \(n_{\rm e}/n_{\rm p}=1.196\), based on the discussion by Markevitch (2007), namely, \(n_{\rm e}/n_{\rm p}=1+2x+x_{\rm eh}\), where \(x\equiv n_{\rm He}/n_{\rm p}\) represents the helium abundance, and \(x_{\rm eh}\) represents electrons from elements heavier than helium. The contribution of \(x_{\rm eh}\) is negligible: \(x_{\rm eh}\approx 0.005\) for an assumed ICM abundance of \(0.3-0.5\) solar, compared with \(x=0.098\) from Anders & Grevesse (1989) at 1 solar abundance. Therefore, we adopt the same approach as McDonald et al. (2019) in calculating the radiative cooling time. We compute the cooling function using the pyatomdb task in AtomDB (Smith et al., 2001; Foster et al., 2012) with the best-fit parameters of the ICM temperature and abundance in each radial bin. The radial profiles of the radiative cooling time for the sample are presented in Figure 6. To show all the radial profiles, we use the distance from the center along the direction of the major axis of the elliptical model as the values on the horizontal axis.5 Note that we here show the radial profiles of the ICM properties derived from projected spectral analysis, not spectral deprojection analysis.
Footnote 5: In this paper, we adopt \(r=\frac{r_{\rm MAJ,inner}+r_{\rm MAJ,outer}}{2}\) as the distance from the center, where \(r_{\rm MAJ,inner}\) and \(r_{\rm MAJ,outer}\) denote the major axes of the inner and outer annuli, respectively.
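For reference, Equations (1)-(3) amount to the following numpy sketch; the cooling function values are assumed to be supplied externally (e.g., precomputed with pyatomdb as described above), and the unit constants are standard.

```python
import numpy as np

KEV_TO_ERG = 1.602e-9   # erg per keV
GYR_IN_S = 3.156e16     # seconds per Gyr

def pressure(kT, ne):
    """Eq. (1): p_e = kT * n_e, in keV cm^-3."""
    return kT * ne

def entropy(kT, ne):
    """Eq. (2): K_e = kT * n_e^(-2/3), in keV cm^2."""
    return kT * ne ** (-2.0 / 3.0)

def cooling_time(kT, ne, lam, ne_over_np=1.196):
    """Eq. (3) in Gyr, with n_e/n_p = 1.196 (Markevitch 2007).

    kT [keV], ne [cm^-3], lam = Lambda(T, Z) [erg cm^3 s^-1].
    """
    n_p = ne / ne_over_np
    t_sec = 1.5 * (ne + n_p) * kT * KEV_TO_ERG / (ne * n_p * lam)
    return t_sec / GYR_IN_S

# e.g., kT = 5 keV, ne = 0.01 cm^-3, Lambda ~ 2e-23 erg cm^3 s^-1 -> ~4 Gyr
```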
### ICM temperature profiles
Figure 2 shows the observed temperature profile of each galaxy cluster. In good agreement with previous studies, such as Bohringer et al. (2005) for RXCJ1504.1-0248, Takizawa et al. (2003) for A3112, Reynolds et al. (2008) for A4059, and Sanderson et al. (2005) for A478, all galaxy clusters in the sample exhibit a significant drop in the observed ICM temperature profiles toward the cluster center.
Several empirical models have been proposed to represent the ICM temperature profile from the center to the outskirts (e.g., Allen et al., 2001; Voigt et al., 2002; Kaastra et al., 2004; Vikhlinin et al., 2006; O'Sullivan et al., 2017). For instance, Kaastra et al. (2004) modeled the ICM temperature profile taking into account the temperature at the center, the peak temperature, and the cooling radius that represents a characteristic radius in the temperature profile. However, their model leaves room for improvement in accounting for the temperature decline toward the outskirts.
Motivated by Kaastra et al. (2004) and O'Sullivan et al. (2017), we extend their models to account for the temperature profile from the center to the outer region. Therefore, our model can be described as
\[T(r)=T_{\rm center}+2\times(T_{\rm peak}-T_{\rm center})(\frac{x(r)}{1+x(r)}) \tag{4}\]
\[x(r)=\left\{\begin{array}{ll}(r/r_{\rm cool})^{\alpha_{1}}&:r<r_{\rm cool }\\ (r/r_{\rm cool})^{\alpha_{2}}&:r\geq r_{\rm cool},\end{array}\right. \tag{5}\]
where \(T_{\rm center}\) is the ICM temperature at the center, \(T_{\rm peak}\) is the peak temperature, \(r_{\rm cool}\) is the turnover radius in the temperature profile (i.e., the peak position; \(T(r_{\rm cool})=T_{\rm peak}\)), and \(\alpha_{1}\) and \(\alpha_{2}\) represent the slopes of the temperature profile within \(r_{\rm cool}\), and beyond \(r_{\rm cool}\), respectively. In this paper, we define the cool-core radius as \(r_{\rm cool}\).
We fit the observed temperature profile shown in Figure 2 with our model using affine-invariant Markov Chain Monte Carlo (MCMC) sampling (Goodman & Weare, 2010) implemented by the emcee python package (Foreman-Mackey et al., 2013). The log-likelihood function for the fitting is written as
\[-2\ln\mathcal{L}=\sum_{i}\frac{[y_{i}-T(r_{i})]^{2}}{\sigma_{y_{i}}^{2}}, \tag{6}\]
where \(i\) runs over all radial bins in the radial profile, \(y_{i}\) and \(\sigma_{y_{i}}\) are the best-fit value and its uncertainty of the temperature profile in each radial bin, respectively, and \(T(r_{i})\) is the model prediction in each radial bin. We use uninformative uniform priors on \(T_{\rm center}\), \(T_{\rm peak}\), \(r_{\rm cool}\), \(\alpha_{1}\), and \(\alpha_{2}\) with the following ranges: \(T_{\rm center}\in(0,30)\) keV, \(T_{\rm peak}\in(0,30)\) keV, \(r_{\rm cool}\in(0,500)\) kpc, \(\alpha_{1}\in(0,2)\), and \(\alpha_{2}\in(-2,0)\). We sample the posterior probability distributions of the parameters (\(T_{\rm center}\), \(T_{\rm peak}\), \(r_{\rm cool}\), \(\alpha_{1}\), and \(\alpha_{2}\)) over the full parameter space allowed by the
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Cluster & R.A. & Decl. & PAa & AR \\ & & & (deg) & \\ \hline RXCJ1504.1-0248 & 15:04:07.48 & -02:48:17.25 & 156 & 0.78 \\ A3112 & 03:17:57.67 & -44:14:17.44 & 100 & 0.75 \\ A4059 & 23:57:00.79 & -34:45:33.44 & 70 & 0.88 \\ A478 & 04:13:25.15 & 10:27:54.94 & 132 & 0.73 \\ \hline \end{tabular}
\end{table}
Table 2: Sky coordinates (J2000.0) of the center, the position angle (PA), and the axis ratio (AR) of the ellipse model for the X-ray surface brightness distribution of the sample extracted from Ueda et al. (2021).
Figure 2: ICM temperature profiles of the sample: RXCJ1504.1-0248 (top left), A3112 (top right), A4059 (bottom left), and A478 (bottom right). In each panel, the black error bars show the observed values. The red dashed line and shaded region display the best-fit profile and \(1\sigma\) uncertainty, respectively. The blue solid line and shaded region correspond to the best-fit cool-core radius \(r_{\rm cool}\) and \(1\sigma\) uncertainty, respectively. In the middle row of each panel, the residuals between the observed and the best-fit profiles are shown. The mean absolute relative residuals are also displayed in the bottom row of each panel.
priors. The mean value and standard deviation of the marginalized distributions are represented as the best-fit value and its uncertainty, respectively. For each cluster, the best-fit parameters of the temperature profile are summarized in Table 3. The best-fit model of the temperature profile of each cluster is also shown in Figure 2.
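A compact sketch of this fit, using the broken power-law model of Equations (4)-(5) and an emcee sampler with the stated uniform priors, is given below; the data arrays are placeholders, not the measured profiles.

```python
import numpy as np
import emcee

def temp_model(r, T_c, T_p, r_cool, a1, a2):
    """Eqs. (4)-(5): T(r) = T_c + 2 (T_p - T_c) x / (1 + x)."""
    x = np.where(r < r_cool, (r / r_cool) ** a1, (r / r_cool) ** a2)
    return T_c + 2.0 * (T_p - T_c) * x / (1.0 + x)

def log_prob(theta, r, y, yerr):
    T_c, T_p, r_cool, a1, a2 = theta
    # Uniform priors quoted in the text
    if not (0 < T_c < 30 and 0 < T_p < 30 and 0 < r_cool < 500
            and 0 < a1 < 2 and -2 < a2 < 0):
        return -np.inf
    return -0.5 * np.sum(((y - temp_model(r, *theta)) / yerr) ** 2)  # Eq. (6)

# Placeholder data: radii [kpc], kT [keV], 1-sigma errors
r = np.array([10.0, 30.0, 60.0, 100.0, 200.0, 400.0])
y = np.array([2.0, 3.0, 4.2, 4.7, 4.5, 4.3])
yerr = np.full_like(y, 0.2)

nwalkers, ndim = 32, 5
p0 = [1.5, 4.5, 90.0, 0.9, -0.3] + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(r, y, yerr))
sampler.run_mcmc(p0, 5000)
chain = sampler.get_chain(discard=1000, flat=True)
best, unc = chain.mean(axis=0), chain.std(axis=0)
```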
We calculate residuals between the observed and the best-fit profiles in each radial bin. The residuals are computed as \((y_{i}-T_{\rm b}(r_{i}))\), where \(T_{\rm b}(r_{i})\) is the best-fit profile in each radial bin. Based on these residuals, we also compute the mean absolute relative residual within each set of five radial bins within \(r_{\rm cool}\) as well as for all the data points for the outer region (i.e., \(r>r_{\rm cool}\)) using \(|(y-T_{\rm b})/T_{\rm b}|\). The profiles of the residuals and the mean absolute relative residuals are shown in the middle and bottom row of each panel in Figure 2, respectively. The calculation of residuals will also be applied to the radial profiles of the other components.
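Continuing the sketch above, the residual quantities described here can be computed as follows (reusing `temp_model`, `r`, `y`, and `best` from the previous block):

```python
T_b = temp_model(r, *best)                  # best-fit profile T_b(r_i)
resid = y - T_b                             # residuals (y_i - T_b(r_i))
frac = np.abs(resid / T_b)                  # |(y - T_b)/T_b|

inside = frac[r < best[2]]                  # bins inside r_cool
n5 = 5 * (inside.size // 5)
mean_inside = inside[:n5].reshape(-1, 5).mean(axis=1)  # per five-bin sets
mean_outside = frac[r >= best[2]].mean()    # all bins outside r_cool
```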
We also calculate the ratios of the parameters of the temperature profile for RXCJ1504.1-0248, A3112, and A478 to those for A4059, since A4059 has the smallest values for each parameter. These ratios are summarized in Table 4.
### ICM electron number density profiles
In the same manner as the analysis of the ICM temperature profiles, we also analyze the radial profiles of the electron number density for the sample. A \(\beta\)-model is frequently used to model the density profile of the ICM (e.g., Cavaliere & Fusco-Femiano, 1976, 1978; Ettori, 2000). For cool-core clusters, a double \(\beta\)-model is preferred to account for the observed profile, because of the strong excess at the cluster center (e.g., Pointecouteau et al., 2004; Santos et al., 2008; Henning et al., 2009; Santos et al., 2010; Ota et al., 2013). Therefore, to fit the observed number density profile for each cluster, we adopt a double \(\beta\)-model expressed as
\[n(r) =n_{1}(r)+n_{2}(r) \tag{7}\] \[=n_{0,\,1}\left[1+\left(\frac{r}{r_{c,\,1}}\right)^{2}\right]^{- \alpha_{1}}+n_{0,\,2}\left[1+\left(\frac{r}{r_{c,\,2}}\right)^{2}\right]^{- \alpha_{2}},\]
where \(n_{0,\,1}\) and \(n_{0,\,2}\) represent the values at the center, respectively, \(r_{c,\,1}\) and \(r_{c,\,2}\) are the core radii of each \(\beta\)-model, respectively, and \(\alpha_{1}\) and \(\alpha_{2}\) are the slopes of each \(\beta\) model, respectively.
Following the procedures of the MCMC analysis in Section 4.3, we use uninformative uniform priors \(n_{0,\,1},\,n_{0,\,2}\in(0,1)\,{\rm cm}^{-3}\), \(r_{c,\,1},\,r_{c,\,2}\in(0,1000)\,{\rm kpc}\), \(\alpha_{1},\,\alpha_{2}\in(0.5,3.5]\), and ensure \(n_{0,\,1}>n_{0,\,2}\). Thus, we fit the observed profile and sample the posterior probability distributions of these six parameters over the full parameter space allowed by the priors. The best-fit parameters obtained from the MCMC analysis are summarized in Table 5. The best-fit profile of each cluster along with its uncertainty is shown in Figure 3.
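The double \(\beta\)-model of Equation (7) and the prior constraints just described translate directly into code; a minimal sketch:

```python
import numpy as np

def double_beta(r, n01, rc1, a1, n02, rc2, a2):
    """Eq. (7): sum of two beta-model components; r [kpc], n [cm^-3]."""
    return (n01 * (1.0 + (r / rc1) ** 2) ** (-a1)
            + n02 * (1.0 + (r / rc2) ** 2) ** (-a2))

def log_prior(theta):
    n01, rc1, a1, n02, rc2, a2 = theta
    ok = (0.0 < n02 < n01 < 1.0            # n_{0,1} > n_{0,2}, both in (0,1)
          and 0.0 < rc1 < 1000.0 and 0.0 < rc2 < 1000.0
          and 0.5 < a1 <= 3.5 and 0.5 < a2 <= 3.5)
    return 0.0 if ok else -np.inf
```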
### ICM pressure and entropy profiles
Based on the best-fit profiles for the ICM temperature and electron number density, we compute the predicted profiles for the ICM pressure and entropy profiles using Equations 1 and 2, respectively. Here, we do not perform direct fitting of the ICM pressure and entropy profiles. The observed ICM pressure and entropy profiles and their predicted profiles are shown in Figures 4 and 5, respectively.
As mentioned in Section 2.3, Reynolds et al. (2008) observed a slight dip in the pressure profile of A4059 at \(r\sim 15\,{\rm kpc}\). We also observe this dip in our pressure profile, which is in agreement with that reported by Reynolds et al. (2008). The X-ray cavity found by Reynolds et al. (2008) may be associated with this pressure dip and give non-thermal pressure support for this region.
## 5 Discussion
We have conducted an analysis of the radial profiles of the ICM thermodynamic properties, including temperature, electron number density, pressure, entropy, and radiative cooling time for our sample: RXCJ1504.1-0248, A3112, A4059, and A478. We have also determined the cool-core radius for each cluster by analyzing their observed ICM temperature profiles.
In this section, we discuss the characteristics of the cool-core systems defined by the cool-core radius in the sample. We also explore possible relations between the cool-core radius and the properties of the host galaxy clusters, as well as investigate potential universal forms in the radial profiles scaled by the cool-core radius. Furthermore, we study perturbations in the ICM thermodynamic properties within the cool cores.
### Cool-core radius
Cool cores are characterized by a significant drop in the ICM temperature toward the cluster center (Molendi & Pizzolato, 2001). Since the cool-core radius has been defined as the turnover radius in the ICM temperature profile, the cool-core radius corresponds to a boundary region where the cooling of the ICM becomes dominant. Therefore, the cool-core radius is an important aspect for understanding the underlying physics of cool cores.
We have determined the cool-core radius for the sample as summarized in Table 3. Among our sample, RXCJ1504.1-0248 has the largest cool-core radius
(\(r_{\rm cool}=325\pm 55\,\)kpc), while A4059 has the smallest cool-core radius (\(r_{\rm cool}=77\pm 8\,\)kpc). A3112 and A478 exhibit relatively small and large cool-core radii, respectively.
We find that the radiative cooling time of the ICM at the cool-core radius exceeds 10 Gyr for all galaxy clusters in our sample (see Figure 6). In particular, RXCJ1504.1-0248 exhibits a radiative cooling time of \(32^{+5}_{-11}\,\)Gyr at its cool-core radius. These time scales are significantly longer than the inferred age of low-\(z\) galaxy clusters. Our results indicate that the ICM temperature starts dropping toward the cluster center from a region exhibiting such a long radiative cooling time. Although it is difficult to predict the past thermodynamic properties of the ICM at the same region as the present cool-core radius, it is expected that the effect of radiative cooling in such regions is minimal or negligible. Furthermore, our results show that the radiative cooling time of the ICM gradually increases toward the outskirts, indicating no apparent feature or discontinuity at the cool-core radius. Therefore, our findings indicate that the size of cool cores is determined not only by radiative cooling but also by additional mechanisms.
Gas sloshing can induce displacement of cool gas originally in a cool core toward the outskirts, leading to mixing of cool gas with ambient hot gas (e.g., Ascasibar & Markevitch, 2006; ZuHone et al., 2010; Keshet, 2012; Naor & Keshet, 2020; Keshet et al., 2023). Since the cool-core systems in the sample have developed well as indicated by the observed temperature drop, radiative cooling is expected to play a dominant role in generating cool gas in the inner regions of the cool cores. Such cool gas is likely to be displaced toward the outer regions by gas sloshing. In fact, evidence of gas sloshing has been observed in the cool cores in our sample (Ueda et al., 2021), supporting this hypothesis.
### Relation between the cool-core radius and the cluster mass
To explore possible relations between the cool-core radius and the cluster mass, we extract the \(M_{500}\) and \(r_{500}\) values for our sample from the MCXC catalog (Piffaretti et al., 2011). These values are summarized in Table 6. Since \(r_{500}\) is commonly used as a scaling factor for the radial profiles of the ICM thermodynamic properties, this study provides insights into the connection between the cool-core radius and \(r_{500}\).
To investigate the relations of the cool-core radius to \(M_{500}\) and to \(r_{500}\), we conduct a regression analysis using a model expressed as
\[\frac{r_{\rm cool}}{\hat{r}_{\rm cool}}=N_{i}\left(\frac{x_{i}}{\hat{x}_{i}} \right)^{\alpha_{i}}, \tag{8}\]
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Cluster & \(T_{\rm center}/T_{\rm center,A4059}\) & \(T_{\rm peak}/T_{\rm peak,A4059}\) & \(r_{\rm cool}/r_{\rm cool,A4059}\) \\ \hline RXCJ1504.1-0248 & \(3.42\pm 0.42\) & \(2.55\pm 0.13\) & \(4.21\pm 0.82\) \\ A3112 & \(1.83\pm 0.24\) & \(1.15\pm 0.03\) & \(1.16\pm 0.16\) \\ A478 & \(1.27\pm 0.55\) & \(1.62\pm 0.07\) & \(3.18\pm 0.92\) \\ \hline \end{tabular}
\end{table}
Table 4: Ratios of the parameters of the temperature profile for RXCJ1504.1-0248, A3112, and A478 to those of A4059.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Cluster & \(n_{0,1}\) & \(r_{\rm e,1}\) & \(\alpha_{1}\) & \(n_{0,2}\) & \(r_{\rm e,2}\) & \(\alpha_{2}\) \\ & (\(10^{-2}\,\)cm\({}^{-3}\)) & (kpc) & & (\(10^{-2}\,\)cm\({}^{-3}\)) & (kpc) & \\ \hline RXCJ1504.1-0248 & \(4.10\pm 0.15\) & \(30.0\pm 2.0\) & \(0.72\pm 0.08\) & \(0.24\pm 0.12\) & \(329\pm 111\) & \(1.71\pm 0.83\) \\ A3112 & \(0.80\pm 0.02\) & \(53.9\pm 2.6\) & \(0.62\pm 0.01\) & \(0.78\pm 0.03\) & \(19.1\pm 2.9\) & \(2.16\pm 0.39\) \\ A4059 & \(0.40\pm 0.01\) & \(72.5\pm 4.6\) & \(0.60\pm 0.03\) & \(0.38\pm 0.04\) & \(14.7\pm 8.8\) & \(2.86\pm 0.51\) \\ A478 & \(1.54\pm 0.09\) & \(19.8\pm 2.1\) & \(0.64\pm 0.10\) & \(0.59\pm 0.08\) & \(161\pm 17\) & \(0.85\pm 0.08\) \\ \hline \end{tabular}
\end{table}
Table 5: Parameter constraints of the double \(\beta\)-model derived from the MCMC analysis.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Cluster & \(T_{\rm center}\) (keV) & \(T_{\rm peak}\) (keV) & \(r_{\rm cool}\) (kpc) & \(\alpha_{1}\) & \(\alpha_{2}\) \\ \hline RXCJ1504.1-0248 & \(4.13\pm 0.24\) & \(12.00\pm 0.56\) & \(325\pm 55\) & \(0.97\pm 0.11\) & \(-0.66\pm 0.48\) \\ A3112 & \(2.21\pm 0.16\) & \(5.41\pm 0.11\) & \(89.4\pm 8.0\) & \(0.88\pm 0.10\) & \(-0.35\pm 0.14\) \\ A4059 & \(1.21\pm 0.13\) & \(4.70\pm 0.08\) & \(77.1\pm 8.7\) & \(0.90\pm 0.10\) & \(-0.05\pm 0.05\) \\ A478 & \(1.54\pm 0.65\) & \(7.62\pm 0.32\) & \(245\pm 66\) & \(0.55\pm 0.13\) & \(-0.45\pm 0.42\) \\ \hline \end{tabular}
\end{table}
Table 3: Parameter constraints of the temperature profile model derived from the MCMC analysis.
where \(N_{i}\) is a normalization of the model defined as \(N_{i}=10^{A_{i}}\), \(x_{i}\) represents either \(M_{500}\) or \(r_{500}\), and \(\alpha_{i}\) is the slope of the model. Here, \(i\) denotes the values for \(M_{500}\) or \(r_{500}\). We center the relation on the pivot values \(\hat{r}_{\rm cool}=167\,{\rm kpc}\), \(\hat{M}_{500}=5.41\times 10^{14}\,M_{\odot}\), and \(\hat{r}_{500}=1.20\,{\rm Mpc}\) set at the median of the distributions of \(r_{\rm cool}\), \(M_{500}\), and \(r_{500}\), respectively.
Following the procedures of the MCMC analysis in Section 4.3, we use uninformative uniform priors \(A_{r}\), \(A_{M}\in(-5,5)\) and \(\alpha_{r}\), \(\alpha_{M}\in(0,10)\). We fit the data in the log-log space and sample the posterior probability distributions of the parameters over the full parameter space allowed by the priors. The best-fit parameters for the relations of \(r_{\rm cool}\) to \(M_{500}\) and to \(r_{500}\) are summarized
Figure 3: Same as Figure 2 but for the ICM electron number density profiles of the sample: RXCJ1504.1-0248 (top left), A3112 (top right), A4059 (bottom left), and A478 (bottom right).
Figure 4: Same as Figure 3 but for the ICM pressure profiles of the sample: RXCJ1504.1-0248 (top left), A3112 (top right), A4059 (bottom left), and A478 (bottom right).
Figure 5: Same as Figure 3 but for the ICM Entropy profiles of the sample: RXCJ1504.1-0248 (top left), A3112 (top right), A4059 (bottom left), and A478 (bottom right).
in Table 7. The best-fit relations for \(M_{500}\) and \(r_{500}\) with \(1\sigma\) uncertainty are shown in Figure 7.
We find that the best-fit relation of the cool-core radius to \(M_{500}\) is consistent with a linear relation (i.e., \(\alpha_{M}=0.94\pm 0.18\)). Since \(M_{500}\) is proportional to \(r_{500}^{3}\), the best-fit slope for the \(r_{\rm cool}\)-\(r_{500}\) relation is also consistent with this dependency (i.e., \(\alpha_{r}=3.12\pm 0.59\)).
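For orientation, an unweighted log-log fit of Equation (8) to the values in Table 6 reproduces a roughly linear slope; unlike the paper's MCMC treatment, this sketch ignores the measurement uncertainties.

```python
import numpy as np

M500 = np.array([12.47, 4.39, 2.67, 6.42])   # 1e14 Msun (Table 6)
rcool = np.array([325.0, 89.4, 77.1, 245.0]) # kpc (Table 6)
M_hat, rc_hat = 5.41, 167.0                  # pivot values from the text

logx = np.log10(M500 / M_hat)
logy = np.log10(rcool / rc_hat)
alpha_M, A_M = np.polyfit(logx, logy, 1)     # slope, intercept in log space
# alpha_M ~ 1.0, close to the MCMC result 0.94 +/- 0.18
```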
If we assume \(r_{\rm cool}\propto L_{\rm X}\), meaning that the size of cool cores depends on the X-ray luminosity of their host galaxy clusters, what slope would we infer for the \(r_{\rm cool}\)-\(M_{500}\) relation? Since the cluster mass \(M_{500}\) was estimated using the luminosity-mass scaling relation of Arnaud et al. (2010) (\(L_{\rm X}\propto M^{1.64}\)), with the X-ray luminosity in the \(0.1-2.4\) keV band within \(r_{500}\) used in the calculation (see Piffaretti et al., 2011), a slope of \(\sim 1.6\)
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Cluster & \(r_{\rm cool}\) & \(r_{\rm cool}/r_{\rm cool,A4059}\) & \(r_{500}\) & \(r_{\rm cool}/r_{500}\) & \(r_{500}/r_{500,A4059}\) & \(M_{500}\) & \(M_{500}/M_{500,A4059}\) \\ & (kpc) & & (Mpc) & & & \((10^{14}\,M_{\odot})\) & \\ \hline RXCJ1504.1-0248 & \(325\pm 55\) & \(4.21\pm 0.82\) & 1.52 & \(0.21\pm 0.04\) & 1.58 & 12.47 & 4.68 \\ A3112 & \(89.4\pm 8.0\) & \(1.16\pm 0.16\) & 1.13 & \(0.080\pm 0.007\) & 1.17 & 4.39 & 1.65 \\ A4059 & \(77.1\pm 7.7\) & 1.00 & 0.96 & \(0.080\pm 0.008\) & 1.00 & 2.67 & 1.00 \\ A478 & \(245\pm 66\) & \(3.18\pm 0.92\) & 1.28 & \(0.19\pm 0.05\) & 1.32 & 6.42 & 2.41 \\ \hline \end{tabular}
\end{table}
Table 6: Cool-core radius, cluster mass \(M_{500}\), fiducial radius \(r_{500}\), and those scaled by the values of A4059.
Figure 6: Radial profiles of the ICM radiative cooling time: RXCJ1504.1-0248 (top left), A3112 (top right), A4059 (bottom left), and A478 (bottom right). The blue, vertical solid line and shaded region correspond to the cool-core radius and \(1\sigma\) uncertainty for each cluster.
may be expected for the \(r_{\rm cool}\)-\(M_{500}\) relation. Previous studies found that the observed slope for the luminosity-mass scaling relation falls within the range of \(1.3-2\) (see Giodini et al., 2013, for a review, and references therein). Furthermore, the self-similar model predicts a slope of \(4/3\) for the luminosity-mass relation. On the other hand, if \(r_{\rm cool}\propto t_{\rm cool}\), \(t_{\rm cool}\propto n^{-1}\sqrt{T}\) may yield a slope of \(1/3\) for the \(r_{\rm cool}\)-\(M_{500}\) relation. However, the best-fit slope (\(\alpha_{M}=0.94\pm 0.18\)) is inconsistent with both of these expectations.
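To make the slope bookkeeping explicit (assuming the self-similar scaling \(T\propto M_{500}^{2/3}\) and roughly constant central density at fixed overdensity), the two cases quoted above read

\[r_{\rm cool}\propto L_{\rm X}\propto M_{500}^{1.64}\;\Rightarrow\;\alpha_{M}\simeq 1.6,\qquad r_{\rm cool}\propto t_{\rm cool}\propto n^{-1}\sqrt{T}\propto M_{500}^{1/3}\;\Rightarrow\;\alpha_{M}\simeq 1/3,\]

neither of which matches the measured \(\alpha_{M}=0.94\pm 0.18\).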
The observed linear relation between \(r_{\rm cool}\) and \(M_{500}\) indicates that not only baryon physics such as cooling, heating, and heat transport is crucial for the formation of cool cores, but also the size of cool cores is linked to the evolution of their host galaxy clusters. Once the mass of a galaxy cluster increases owing to a cluster merger, the ICM density at the center is expected to increase after the relaxation of merger events. Then, radiative cooling becomes more efficient, leading to the accumulation of a large amount of cool gas in the central region. Since galaxy clusters grow through continuous accretion of material, including cluster mergers, from their surrounding large-scale environments, gas sloshing is expected to take place in cool cores continuously. In fact, this hypothesis is supported by recent comprehensive studies of cool cores (Ueda et al., 2020, 2021). Therefore, the continuous occurrence of gas sloshing may play a crucial role not only in determining the size of cool cores but also in suppressing runaway cooling of the ICM. Cool cores may coevolve with their host galaxy clusters.
For the relation between the cool-core radius and a fiducial radius, Vikhlinin et al. (2005) reported that the projected temperature as a function of radius reaches a maximum at \(r\sim 0.1-0.2\,r_{180}\). Rasmussen & Ponman (2007) studied the relation between the peak position in the temperature profile (i.e., \(r_{\rm cool}\)) and \(r_{500}\) using a sample of 15 nearby galaxy groups observed with _Chandra_. Assuming a linear relation, they fitted the data and obtained \(r_{\rm cool}=(0.20\pm 0.02)\,r_{500}-(46.7\pm 15.1)\,{\rm kpc}\). O'Sullivan et al. (2017) also studied the relation between the turnover radius and \(r_{500}\) using a sample of high-richness local galaxy groups, finding that the observed relation in their sample is mostly consistent with that presented by Rasmussen & Ponman (2007). In addition, Rasmussen & Ponman (2007) and O'Sullivan et al. (2017) pointed out that the relation for galaxy groups differs from that observed in a sample of local galaxy clusters (Vikhlinin et al., 2005), suggesting that this discrepancy may be attributed to differences in the physical properties between galaxy clusters and groups. In
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multicolumn{3}{c|}{\(r_{\rm cool}\) vs. \(M_{500}\)} & \multicolumn{3}{c}{\(r_{\rm cool}\) vs. \(r_{500}\)} \\ \hline \(A_{M}\) & \(N_{M}=10^{A_{M}}\) & \(\alpha_{M}\) & \(A_{r}\) & \(N_{r}=10^{A_{r}}\) & \(\alpha_{r}\) \\ \hline \(-0.10\pm 0.04\) & \(0.79^{+0.08}_{-0.07}\) & \(0.94\pm 0.18\) & \(-0.10\pm 0.04\) & \(0.80^{+0.08}_{-0.07}\) & \(3.12\pm 0.59\) \\ \hline \end{tabular}
\end{table}
Table 7: Best-fit parameters of the relation of \(r_{\rm cool}\) to \(M_{500}\) and to \(r_{500}\).
Figure 7: Relations of the cool-core radius (\(r_{\rm cool}\)) to \(M_{500}\) and to \(r_{500}\). Left: the relation between \(r_{\rm cool}\) and \(M_{500}\). The red solid line and shaded region shows the best-fit relation obtained from the fitting in the log-log space and its \(1\,\sigma\) uncertainty, respectively. Right: Same as the left panel but for the relation between \(r_{\rm cool}\) and \(r_{500}\).
fact, our best-fit relation between \(r_{\rm cool}\) and \(r_{500}\) seems to deviate from that observed for galaxy groups. Since galaxy groups contain lower-temperature gas than galaxy clusters, radiative cooling is more efficient owing to emission lines from the metals in the ICM. Additionally, mergers may have a more significant impact on the cool core of the primary galaxy group. The \(r_{\rm cool}\)-\(r_{500}\) relation may therefore have a break. Further studies are required to arrive at a firm conclusion regarding the similarities and differences in the \(r_{\rm cool}\)-\(r_{500}\) relation between galaxy clusters and groups.
### Scaled radial profiles of the ICM thermodynamic properties
To look more deeply into the ICM thermodynamic properties within the cool cores, we here scale the observed radial profiles, normalizing the horizontal and vertical axes by the cool-core radius and by the value of each component at the cool-core radius, respectively. Figure 8 shows the scaled radial profile of the ICM temperature. Figure 9 presents the scaled profiles of the other components.
A possible universal form is found in the scaled temperature profiles. To clarify this point, we simultaneously fit the scaled temperature profiles using the same model as that used in Section 4.3 with the MCMC method. In this fitting, we fix \(r_{\rm cool}\) in Equation 4 at 1.0 since the temperature profile has already been scaled by \(r_{\rm cool}\). Instead, we add a new parameter, intrinsic scatter, into the log-likelihood function. Therefore, the log-likelihood function can be expressed as
\[-2\ln\mathcal{L}=\sum_{i}\ln{(2\pi\sigma_{i}^{2})}+\sum_{i}\frac{[y_{i}-T(r_{i })]^{2}}{\sigma_{i}^{2}}, \tag{9}\]
where \(i\) runs over all radial bins of all clusters in the sample, \(y_{i}\) and \(T(r_{i})\) are the scaled value and the model prediction of the scaled temperature profile in each radial bin, respectively, and \(\sigma_{i}\) includes the observational uncertainty \(\sigma_{y_{i}}\) and the lognormal intrinsic scatter \(\sigma_{\rm int}\),
\[\sigma_{i}^{2}=\sigma_{y_{i}}^{2}+\sigma_{\rm int}^{2}. \tag{10}\]
The \(\sigma_{\rm int}\) parameter accounts for the intrinsic scatter around the mean relation due to unaccounted errors and/or astrophysics. We continue to use uninformative uniform priors on \(T_{\rm center}\), \(T_{\rm peak}\), \(\alpha_{1}\), \(\alpha_{2}\) and \(\ln\sigma_{\rm int}\) as \(T_{\rm center}\in(0,2)\), \(T_{\rm peak}\in(0,2)\), \(\alpha_{1}\in(0,2)\), \(\alpha_{2}\in(-2,0)\) and \(\ln\sigma_{\rm int}\in(-20,1)\). We sample the posterior probability distributions of the parameters over the full parameter space allowed by the priors. The best-fit parameters are summarized in Table 8.
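In code, the only change relative to the earlier likelihood is the additional scatter term of Equation (10); a sketch reusing `temp_model` from the previous block:

```python
def log_like_scaled(theta, r, y, yerr):
    """Eqs. (9)-(10): Gaussian likelihood with intrinsic scatter.

    r_cool is fixed to 1 because the profiles are already scaled.
    """
    T_c, T_p, a1, a2, ln_sig = theta
    var = yerr ** 2 + np.exp(ln_sig) ** 2                  # Eq. (10)
    resid2 = (y - temp_model(r, T_c, T_p, 1.0, a1, a2)) ** 2
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + resid2 / var)  # Eq. (9)
```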
We find that the ratio of the temperature at the center to the peak value is \(0.34\pm 0.13\), which is in good agreement with the well-known observational trend in the temperature profiles of cool-core clusters (e.g., Sanderson et al., 2006; Hudson et al., 2010; Simionescu et al., 2011). The slopes of the model for the regions inside and outside the cool cores are measured at \(\alpha_{1}=0.78\pm 0.12\) and \(\alpha_{2}=-0.14\pm 0.10\), respectively. The intrinsic scatter \(\sigma_{\rm int}\) is obtained at \(3.25^{+1.04}_{-0.79}\,\%\).
A possible universal form in the temperature profiles scaled by a fiducial radius (\(r_{2500}\) or \(r_{500}\)) within cool cores has been proposed (e.g., Allen et al., 2001; Sanderson et al., 2006). On the other hand, Vikhlinin et al. (2005) and Hudson et al. (2010) reported no such universal form. Our results indicate a possible universal form in the scaled temperature profile. Since all cool cores in the sample are classified as strong cool cores (Hudson et al., 2010), such a universal form may be observed in strong cool cores only. Additionally, we have scaled the temperature profiles by the cool-core radius, rather than a fiducial radius like \(r_{500}\). It is possible that the cool-core radius is a more appropriate factor than a fiducial radius for scaling the radial profiles of the ICM thermodynamic properties, as indicated by the observed intrinsic scatter. The ratio of \(r_{\rm cool}\) to \(r_{500}\) varies within
Figure 8: Temperature profile of each cluster scaled by \(r_{\rm cool}\) and \(T_{\rm peak}\) for the horizontal and the vertical axes, respectively. The red dashed line and shaded region display the best-fit profile and \(1\,\sigma\) uncertainty, respectively. The blue, vertical solid line indicates \(r_{\rm cool}=1\). The residuals and mean absolute relative residuals are shown in the middle and bottom rows, respectively.
our sample (see Table 6), emphasizing the significance of the chosen radius for scaling in revealing a universal form in the temperature profile.
To investigate possible universal forms seen in the other components, we also scale the radial profiles in the same manner as for the scaled temperature profiles. Figure 9 shows the scaled radial profiles of the ICM electron number density, pressure, entropy, and radiative cooling time. In contrast to the scaled temperature profiles, we find that there is no universal form in the scaled profiles within the cool cores. The values of each profile at the center are highly scattered, suggesting that the ICM temperature is likely the most fundamental factor for characterizing cool cores. However, further studies are required to reveal possible universal forms in the radial profiles of the ICM thermodynamic properties. An approach involving forward model fitting would be useful for conducting a simultaneous analysis of the ICM temperature and density profiles (e.g., Umetsu et al., 2022).
In the region outside the cool cores, a universal form is found in the scaled entropy profile. Such a universal form is known as the universal entropy profile (Voit et al., 2005). However, in the region inside the cool cores, the scaled entropy profiles for our sample begin to diverge from one another toward the cluster center, indicating that the cool-core radius is influenced by the interplay between cooling and heating processes.
### Analysis of thermodynamic perturbations
Thermodynamic perturbations in the ICM have been shown to be a good proxy for examining gas motions, including turbulence (e.g., Gaspari et al., 2014; Churazov et al., 2016; Hofmann et al., 2016; Ueda et al., 2018; Zhuravleva et al., 2018; Kitayama et al., 2020; Ueda et al., 2021; Zhuravleva et al., 2023). Gaspari et al. (2014) showed that entropy perturbations in the ICM can be used to infer one-dimensional Mach numbers of turbulence (\(\mathcal{M}_{\rm 1D}\)), assuming that pressure perturbations in the ICM are negligible. Following Gaspari et al. (2014), Hofmann et al. (2016) measured pressure and entropy perturbations in the ICM, and estimated \(\mathcal{M}_{\rm 1D}\) within the cool cores in a sample of 33 galaxy clusters.
Motivated by Gaspari et al. (2014) and Hofmann et al. (2016), we investigate thermodynamic perturbations in the ICM to constrain gas motions in the cool cores in our sample. We have measured the residuals and mean absolute relative residuals between the observed and the best-fit profiles (see the middle and bottom panels of Figures 2, 3, 4, and 5), which can be converted into average fractional perturbations. We calculate the average fractional perturbations in the ICM thermodynamic properties within the cool cores, as summarized in Table 9. Our results are consistent with those measured by Hofmann et al. (2016) for their sample. In particular, the average fractional perturbations in the ICM pressure in our sample are lower than 0.15, which agrees with their measurements (\(0.09\pm 0.06\)). Thus, the observed perturbations are nearly isobaric. Therefore, similar to Hofmann et al. (2016), the entropy perturbations in the ICM can be used to infer \(\mathcal{M}_{\rm 1D}\).
Assuming that the perturbations are isobaric, the observed entropy perturbations can directly be converted into values of \(\mathcal{M}_{\rm 1D}\). A478 exhibits the largest \(\mathcal{M}_{\rm 1D}\), \(0.077\pm 0.069\), although the inferred values of \(\mathcal{M}_{\rm 1D}\) are comparable across the sample. Thus, the corresponding three-dimensional Mach numbers (\(\mathcal{M}_{\rm 3D}\)) of the sample are lower than \(0.25\), significantly below unity and consistent with those measured in previous studies (Hofmann et al., 2016; Hitomi Collaboration et al., 2018; Ueda et al., 2021). We expect that _XRISM_ will be able to achieve direct measurements of the turbulent velocity for our sample (Tashiro et al., 2018). Additionally, the Athena X-ray Observatory will provide us with a great opportunity to measure turbulent velocities in cool cores (Nandra et al., 2013; Barcons et al., 2017).
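The conversion from the Table 9 entropy perturbations to Mach numbers is a one-liner under the isobaric assumption; the \(\sqrt{3}\) factor assumes isotropic turbulence.

```python
import numpy as np

dK_K = np.array([0.051, 0.030, 0.060, 0.077])  # <|dK|/K> (Table 9)
m_1d = dK_K                    # isobaric regime: M_1D ~ dK/K (Gaspari+ 2014)
m_3d = np.sqrt(3.0) * m_1d     # isotropy: M_3D = sqrt(3) M_1D
print(m_3d.max())              # ~0.13, i.e. M_3D < 0.25 as quoted
```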
## 6 Summary and conclusions
In this paper, we have conducted a detailed study of the radial profiles of the ICM thermodynamic properties in cool-core systems in a sample of four galaxy clusters (RXCJ1504.1-0248, A3112, A4059, and A478) using archival X-ray data from the _Chandra_ X-ray Observatory. The goal of this study was to characterize the cool cores in the sample and to explore the mechanisms that generate the observed characteristics. To this end, we have measured the turnover radius in the radial profile of the ICM temperature and defined the cool-core radius as the turnover radius. We have also studied the thermodynamic properties of the ICM within the cool-core radius and the relation between the cool-core radius and the cluster mass. The main conclusions of this paper are summarized as follows:
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(T_{\rm center}\) & \(T_{\rm peak}\) & \(r_{\rm cool}\) & \(\alpha_{1}\) & \(\alpha_{2}\) & \(\sigma_{\rm int}(\%)\) \\ \hline \(0.34\pm 0.13\) & \(0.99\pm 0.03\) & 1 (fixed) & \(0.78\pm 0.12\) & \(-0.14\pm 0.10\) & \(3.25^{+1.04}_{-0.79}\) \\ \hline \end{tabular}
\end{table}
Table 8: Best-fit parameters derived from the simultaneous fitting of the scaled temperature profiles.
1. Since cool cores are characterized by a significant drop in the ICM temperature toward the cluster center, we defined the cool-core radius as the turnover radius in the ICM temperature profile, allowing us to study the ICM thermodynamic properties in the regions inside and outside the cool cores. We found no apparent feature at the cool-core radius in the radial profiles of the ICM electron density, pressure, entropy, and radiative cooling time. These results indicate that the boundary between the inside and outside of cool cores is primarily identified in the temperature profile, suggesting that the ICM temperature is the most fundamental factor for characterizing cool cores.
2. In our sample, the radiative cooling time of the ICM at the cool-core radius exceeds 10 Gyr, with
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Cluster & \(\langle|\mathrm{d}T|/T\rangle\) & \(\langle|\mathrm{d}n|/n\rangle\) & \(\langle|\mathrm{d}P|/P\rangle\) & \(\langle|\mathrm{d}K|/K\rangle\) \\ \hline RXCJ1504.1-0248 & \(0.047\pm 0.041\) & \(0.019\pm 0.015\) & \(0.046\pm 0.044\) & \(0.051\pm 0.044\) \\ A3112 & \(0.021\pm 0.017\) & \(0.029\pm 0.051\) & \(0.032\pm 0.060\) & \(0.030\pm 0.030\) \\ A4059 & \(0.046\pm 0.035\) & \(0.028\pm 0.019\) & \(0.031\pm 0.027\) & \(0.060\pm 0.043\) \\ A478 & \(0.075\pm 0.066\) & \(0.015\pm 0.010\) & \(0.073\pm 0.060\) & \(0.077\pm 0.069\) \\ \hline \end{tabular}
\end{table}
Table 9: Average fractional perturbations in the temperature, electron number density, pressure, and entropy profiles within the cool cores.
Figure 9: Scaled radial profiles of the ICM electron number density (top left), pressure (top right), entropy (bottom left), and radiative cooling time (bottom right) in the sample. The radial profiles are normalized in the same manner as in Figure 8.
RXCJ1504.1-0248 exhibiting a radiative cooling time of \(32^{+5}_{-11}\) Gyr at its cool-core radius. Such a long time scale indicates that mechanisms in addition to radiative cooling may be required to explain the observed properties. Gas sloshing can displace cool gas generated by radiative cooling away from the center and induce mixing between this cool gas and the ambient hot gas in the outer region, leading to a temperature drop.
3. We found that the best-fit relation between the cool-core radius and the cluster mass \(M_{500}\) in our sample is consistent with a linear relation. Our findings suggest that cool cores are linked to the evolution of galaxy clusters. Cool cores may co-evolve with their host galaxy clusters. Since the cluster mass increases owing to mergers, which in turn induce gas sloshing, it is plausible that gas sloshing plays a significant role in the evolution of cool cores.
4. A possible universal form is found in the temperature profiles scaled by the cool-core radius for the cool cores in our sample. However, the scaled profiles of the other thermodynamic quantities within the cool cores are highly scattered, indicating that there is no universal form for them.
5. The one-dimensional Mach numbers of turbulence (\(\mathcal{M}_{\rm 1D}\)) in the cool cores in the sample were constrained by analyzing the entropy perturbations in the ICM. The inferred \(\mathcal{M}_{\rm 3D}\) values are significantly lower than unity, suggesting that subsonic gas motions dominate in the cool cores.
We thank Keiichi Umetsu for fruitful discussions. We also thank H.-Y. Karen Yang for helpful comments. The scientific results of this paper are based in part on data obtained from the Chandra Data Archive: ObsID 4935, 5793, 17197, 17669, 17670, 2216, 2516, 6972, 7323, 7324, 13135, 897, 5785, 1669, and 6102. S.U. acknowledges the support from the National Science and Technology Council of Taiwan (NSTC 111-2811-M-007-008 and 111-2112-M-001-026-MY3). We thank the ASIAA Summer Student Program 2019 for hospitality and providing us with an opportunity to launch the project. _Facility:_ CXO. _Software:_ astropy (Astropy Collaboration et al., 2013, 2018), CIAO (Fruscione et al., 2006), XSPEC (Arnaud, 1996).
## Appendix A Region selection for X-ray spectral analysis
Here, we summarize detailed information regarding the region selection and radial bin sizes of the regions for the X-ray spectral analysis.
## Appendix B Radial profiles of the ICM metal abundance
Here, we show the radial profiles of the ICM metal abundance for the sample. We find that the ICM metal abundance in A3112 starts decreasing slightly at \(\sim 15\) kpc toward the center. A4059 also exhibits a decrease in the ICM metal abundance from \(\sim 20\) kpc to the center, which is consistent with that measured by Choi et al. (2004).
|
2309.17189 | RTFS-Net: Recurrent Time-Frequency Modelling for Efficient Audio-Visual
Speech Separation | Audio-visual speech separation methods aim to integrate different modalities
to generate high-quality separated speech, thereby enhancing the performance of
downstream tasks such as speech recognition. Most existing state-of-the-art
(SOTA) models operate in the time domain. However, their overly simplistic
approach to modeling acoustic features often necessitates larger and more
computationally intensive models in order to achieve SOTA performance. In this
paper, we present a novel time-frequency domain audio-visual speech separation
method: Recurrent Time-Frequency Separation Network (RTFS-Net), which applies
its algorithms on the complex time-frequency bins yielded by the Short-Time
Fourier Transform. We model and capture the time and frequency dimensions of
the audio independently using a multi-layered RNN along each dimension.
Furthermore, we introduce a unique attention-based fusion technique for the
efficient integration of audio and visual information, and a new mask
separation approach that takes advantage of the intrinsic spectral nature of
the acoustic features for a clearer separation. RTFS-Net outperforms the prior
SOTA method in both inference speed and separation quality while reducing the
number of parameters by 90% and MACs by 83%. This is the first time-frequency
domain audio-visual speech separation method to outperform all contemporary
time-domain counterparts. | Samuel Pegg, Kai Li, Xiaolin Hu | 2023-09-29T12:38:00Z | http://arxiv.org/abs/2309.17189v4 | # RTFS-Net: Recurrent time-frequency modelling for efficient audio-visual speech separation
###### Abstract
Audio-visual speech separation methods aim to integrate different modalities to generate high-quality separated speech, thereby enhancing the performance of downstream tasks such as speech recognition. Most existing state-of-the-art (SOTA) models operate in the time domain. However, their overly simplistic approach to modeling acoustic features often necessitates larger and more computationally intensive models in order to achieve SOTA performance. In this paper, we present a novel time-frequency domain audio-visual speech separation method: Recurrent Time-Frequency Separation Network (RTFS-Net), which applies its algorithms on the complex time-frequency bins yielded by the Short-Time Fourier Transform. We model and capture the time and frequency dimensions of the audio independently using a multi-layered RNN along each dimension. Furthermore, we introduce a unique attention-based fusion technique for the efficient integration of audio and visual information, and a new mask separation approach that takes advantage of the intrinsic spectral nature of the acoustic features for a clearer separation. RTFS-Net outperforms the previous SOTA method using only 10% of the parameters and 18% of the MACs. This is the first time-frequency domain audio-visual speech separation method to outperform all contemporary time-domain counterparts.
## 1 Introduction
The 'cocktail party problem' (Bronkhorst, 2000; Haykin & Chen, 2005; Cherry, 2005) describes our ability to focus on a single speaker's voice amidst numerous overlapping voices in a noisy environment. While humans effortlessly tackle this problem, replicating this ability in machines remains a longstanding challenge in the field of signal processing. Audio-only Speech Separation (AOSS) methods (Luo & Mesgarani, 2019; Luo et al., 2020; Subakan et al., 2021), solely utilizing the mixed-speaker audio signal, face limitations in scenarios with strong background noise, reverberation, or heavy voice overlap. To overcome these issues, researchers turned to a multi-modal approach: Audio-visual Speech Separation (AVSS) (Gao & Grauman, 2021; Lee et al., 2021; Li et al., 2022). AVSS methods integrate additional visual cues into the paradigm and can be generally split into two main classifications: Time-domain (T-domain) and Time-Frequency domain (TF-domain) methods, each with their own benefits and challenges.
T-domain methods (Wu et al., 2019; Li et al., 2022; Martel et al., 2023; Lin et al., 2023) work on the long, uncompressed and high-dimensional audio features returned by the simple 1D convolutional encoder design proposed by Luo & Mesgarani (2018). This approach facilitates fine-grained, high-quality audio separation, but the resulting high parameter count and computational complexity introduce extended training periods, intensive GPU usage and slow inference speeds. On the other hand, TF-domain methods (Afouras et al., 2018a; Afouras et al., 2018b; Gao & Grauman, 2021) apply their algorithms on the complex 2D representation yielded by the Short-Time Fourier Transform (STFT). Typically, the STFT uses large windows and hop lengths to compress the data, resulting in more computationally efficient separation methods. However, from a historical perspective, all TF-domain methods have been substantially outperformed by T-domain methods. Based on our research, this gap in performance between the two domains mainly stems from three critical factors, which form the central focus of the ensuing discussion in this paper.
Firstly, while some attempts have been made (Afouras et al., 2020; Lee et al., 2021) to model amplitude and phase separately, no TF-domain AVSS methods explore the independent and tailored modelling of the two acoustic dimensions (time and frequency) in order to exploit this domain's advantage over T-domain methods. Recent research by Luo & Yu (2023) and Wang et al. (2023) in the AOSS field has capitalized on this advantage and outperformed their T-domain counterparts by large margins. However, their usage of large LSTM (Hochreiter & Schmidhuber, 1997) and transformer (Vaswani et al., 2017) architectures leads to a huge computational burden, making them an unattractive solution in the AVSS space. Secondly, while existing AVSS studies (Li et al., 2022; Lin et al., 2023) have explored various fusion strategies and improved separation performance by integrating visual information, they ignore how visual features from multiple receptive fields can increase model performance. This type of visual information provides important a priori knowledge, crucial for accurately extracting the target speaker's voice. Thirdly, TF-domain AVSS studies (Afouras et al., 2020; Lee et al., 2021; Gao & Grauman, 2021) do not pay adequate attention to the underlying complex nature of the features, and hence lose critical amplitude and phase information when separating the target speaker's voice from the audio mixture. This greatly degrades the reconstruction performance of the inverse STFT (iSTFT) and hence leads to poor model performance.
In this work, we propose a novel TF-domain AVSS method named Recurrent Time-Frequency Separation Network (RTFS-Net, Figure 1). Using an STFT-based audio encoder and a high-fidelity pretrained video encoder, RTFS-Net utilizes a series of stacked multilayer recursive RTFS Blocks to accurately extract salient features and perform target speaker extraction. Our contributions are threefold:
1. Our innovative RTFS Blocks tackle the first issue by effectively compressing features. At this compressed subspace, we explicitly model both acoustic dimensions individually, then in tandem before subsequently applying an attentional mechanism (TF-AR) to restore the dimensions with minimal information loss, allowing us to reap the benefits of independent time-frequency processing without bearing a substantial computational cost.
2. Our Cross-dimensional Attention Fusion (CAF) Block provides a low parameter and computationally efficient solution to the second issue by aggregating the multi-modal information through a multi-head attention strategy in order to optimally fuse the visual cues of the target speaker into the audio features, facilitating high quality separation.
3. The introduction of our Spectral Source Separation (\(S^{3}\)) Block addresses the third issue by explicitly reconstructing the plural features of the target speaker, thus achieving a higher quality separation without increasing computational cost.
We conducted comprehensive experimental evaluations on three widely used datasets: LRS2 (Afouras et al., 2018), LRS3 (Afouras et al., 2018) and VoxCeleb2 (Chung et al., 2018), to demonstrate the value of each contribution. To the best of our knowledge, RTFS-Net is the first TF-domain AVSS method to outperform all contemporary T-domain methods, achieving this while also exhibiting a clear advantage in terms of computational performance and a reduced parameter count. We provide a Web page where sample results can be listened to alongside this submission, and our code will be open sourced after publication for reproducibility purposes.
## 2 Related work
**Audio-only speech separation**. Modern deep learning advances introduced neural networks for speaker-agnostic speech separation, with Conv-TasNet (Luo & Mesgarani, 2019) and DualPathRNN (Luo et al., 2020) making significant T-domain breakthroughs. However, T-domain methods (Luo
& Mesgarani, 2019; Luo et al., 2020) often exhibit a marked performance degradation in reverberant conditions, attributed to their neglect of explicit frequency-domain modeling. Consequently, recent research focuses on high-performance speech separation models in the TF-domain. For instance, TFPSNet (Yang et al., 2022) incorporates the transformer from DPTNet (Chen et al., 2020) to assimilate spectral-temporal information. TF-GridNet (Wang et al., 2023) extends this by introducing a cross-frame self-attention module in order to achieve SOTA performance on the WSJ0-2mix dataset (Hershey et al., 2016). However, the inclusion of many LSTM and Transformer layers results in extremely high computational complexity, leading to increased training times and GPU memory requirements. Lightweight models like A-FRCNN (Hu et al., 2021) and TDANet (Li et al., 2023) strike a balance between performance and efficiency by using an encoder-decoder paradigm with recurrent connections and top-down attention. However, their separation prowess in noisy scenarios remains suboptimal compared to multi-modal approaches.
**Audio-visual speech separation**. Conventional AVSS statistical methods relied on knowledge-driven modeling (Loizou, 2013; Wang & Brown, 2006). Early deep-learning explorations into AVSS mainly occurred in the TF-domain, focusing on tasks like amplitude and phase reconstruction for target speakers (Afouras et al., 2018a; Afouras et al., 2018b). With the advent of AV-ConvTasNet (Wu et al., 2019), it became a prevailing belief that T-domain AVSS methods consistently outperformed TF-domain methods. Notably, CTCNet (Li et al., 2022), inspired by thalamic brain structures, introduced a unique multiple-fusion approach. Leveraging multiscale contexts from visual and auditory cues, this module greatly enhanced the spatial scale of fused features, thus improving the model's separation capacity. However, as mentioned previously, T-domain methods come with a higher computational load. TF-domain AVSS techniques often use larger windows and hop sizes, which curtails computational complexity. Nevertheless, their full potential is yet to be realized. The recent VisualVoice model (Gao & Grauman, 2021) combined a multi-task learning framework for both AVSS and cross-modal speaker embedding, incorporating facial expressions, lip movements, and audio cues. Despite its touted efficiency, VisualVoice lacks robust modeling, and it lags behind many modern T-domain methods.
## 3 Methods
Expanding on prior SOTA methods (Wu et al., 2019; Li et al., 2022), we present our AVSS pipeline, illustrated in Figure 1. The mono-aural mixed-speech audio signal, \(\mathbf{x}\in\mathbb{R}^{1\times L_{\mathrm{a}}}\), in conjunction with the video frames capturing the target speaker's lip movements, \(\mathbf{y}\in\mathbb{R}^{1\times L_{\mathrm{v}}\times H\times W}\), are used as inputs to RTFS-Net in order to derive the target speaker's estimated audio signal, \(\hat{\mathbf{s}}\in\mathbb{R}^{1\times L_{\mathrm{a}}}\). In this context, \(L_{\mathrm{a}}\) and \(L_{\mathrm{v}}\) signify the durations of the audio and video inputs, while \(H\) and \(W\) correspond to the dimensions of the single-channel (grey-scale) video frames.
Firstly, the audio and video encoders extract auditory \(\mathbf{a}_{0}\) and visual \(\mathbf{v}_{0}\) features. These serve as the inputs for our separation network, which fuses these features and extracts the salient multimodal features, \(\mathbf{a}_{R}\). Next, our Spectral Source Separation (\(S^{3}\)) method is applied to separate the target speaker's audio features \(\mathbf{z}\) from the encoded audio signal \(\mathbf{a}_{0}\) using \(\mathbf{a}_{R}\). Finally, the target speaker's estimated audio feature map \(\mathbf{z}\) is decoded into the estimated audio stream \(\mathbf{\hat{s}}\) and compared with the ground-truth signal \(\mathbf{s}\) for training.
Figure 1: The overall pipeline of RTFS-Net. The red and blue solid lines signify the flow directions of auditory and visual features respectively. The snowflake indicates the component is not involved in training, i.e. the weights are frozen.
### Encoders
Our encoders distill relevant features from both audio and visual inputs. For the video encoder \(E_{\mathrm{v}}(\cdot)\), we employ the CTCNet-Lip (Li et al., 2022) pretrained network to extract the visual features \(\mathbf{v}_{0}\) of the target speaker,
\[\mathbf{v}_{0}=E_{\mathrm{v}}(\mathbf{y}),\quad\mathbf{v}_{0}\in\mathbb{R}^{C_{\mathrm{v}} \times T_{\mathrm{v}}}. \tag{1}\]
For the audio encoder, we firstly define \(\mathbf{\alpha}\) as the complex-valued hybrid TF-domain bins obtained using the STFT on \(\mathbf{x}\). For a mixed audio of \(n_{\mathrm{spk}}\) speakers, we define \(\mathbf{s}_{i}\) as the speech of speaker \(i\) and \(\mathbf{\epsilon}\) as the presence of some background noise, music, or other extraneous audio sources. Then,
\[\mathbf{\alpha}(t,f)=\mathbf{\epsilon}(t,f)+\sum_{i=1}^{n_{\mathrm{spk}}}\mathbf{s}_{i}(t, f),\quad\mathbf{\alpha}(t,f)\in\mathbb{C}\;\forall\;(t,f), \tag{2}\]
where \(t\in[0,T_{\mathrm{a}}]\) is the time dimension and \(f\in[0,F]\) is the frequency dimension. We concatenate the real (Re) and imaginary (Im) parts of \(\mathbf{\alpha}\) along a new 'channels' axis, then apply a 2D convolution \(E_{\mathrm{a}}(\cdot)\) with a \(3\times 3\) kernel and \(C_{\mathrm{a}}\) output channels across the time and frequency dimensions to obtain the auditory embedding \(\mathbf{a}_{0}\) of \(\mathbf{x}\). Using the symbol \(||\) for concatenation, we write,
\[\mathbf{a}_{0}=E_{\mathrm{a}}\left(\mathrm{Re}(\mathbf{\alpha})||\mathrm{Im}(\mathbf{ \alpha})\right),\quad\mathbf{a}_{0}\in\mathbb{R}^{C_{\mathrm{a}}\times T_{\mathrm{ a}}\times F}. \tag{3}\]
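A minimal PyTorch sketch of this encoder is given below. The STFT window, hop length and \(C_{\rm a}\) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AudioEncoder(nn.Module):
    def __init__(self, c_a=256, n_fft=256, hop=128):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        # real and imaginary parts enter as 2 input channels, Eq. (3)
        self.conv = nn.Conv2d(2, c_a, kernel_size=3, padding=1)

    def forward(self, x):                                    # x: (B, L_a)
        alpha = torch.stft(x, self.n_fft, hop_length=self.hop,
                           window=torch.hann_window(self.n_fft),
                           return_complex=True)              # (B, F, T_a)
        spec = torch.stack([alpha.real, alpha.imag], dim=1)  # (B, 2, F, T_a)
        return self.conv(spec.transpose(2, 3))               # (B, C_a, T_a, F)

a0 = AudioEncoder()(torch.randn(1, 32000))  # 2 s of 16 kHz audio
print(a0.shape)  # torch.Size([1, 256, 251, 129])
```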
### Separation network
The core of RTFS-Net is a separation network that uses recursive units to facilitate information interaction in the two acoustic dimensions, and efficiently aggregates multimodal features using an attention-based fusion mechanism. The initial step is to preprocess the auditory \(\mathbf{a}_{0}\) and visual \(\mathbf{v}_{0}\) features separately in preparation for fusion. For the Visual Preprocessing (VP) Block we adopt the TDANet Block, as described in detail in Li et al. (2023). For the Audio Preprocessing (AP) Block we use a single RTFS Block, whose structure will be defined in Section 3.2.2. The outputs of the two preprocessing blocks are fed to our proposed CAF Block to fuse the multimedia features into a single enriched feature map (see Figures 1 and 2). This audio-visual fusion is subsequently processed with an additional \(R\) stacked RTFS Blocks. Following CTCNet (Li et al., 2022), these \(R\) sequential blocks share parameters (including the AP Block), which has been shown to reduce model size and increase performance, since it turns the series of blocks into a recurrent neural architecture.
#### 3.2.1 Cross-dimensional Attention Fusion Block
The CAF Block (Figure 2) is a depth-wise and group-convolution based architecture designed to consume as little resources as possible while fusing the 2D visual data into the 3D audio data. It involves two separate fusion operations that we call the _attention_ fusion (\(\mathbf{f}_{1}\)) and the _gated_ fusion (\(\mathbf{f}_{2}\)). The attention fusion considers multiple visual sub-representation spaces to aggregate information from a wide receptive field and apply attention to the audio features. The gated fusion up-samples the visual information's time dimension, then expands the visual features into the TF-domain using \(F\) gates produced from the preprocessed audio features. We use Vaswani et al. (2017)'s 'keys' and 'values' nomenclature.
Figure 2: Structure of our CAF Block. As in Figure 1, the red and blue solid lines signify the flow directions of auditory and visual features respectively.
Firstly, let both \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\) denote a depth-wise convolution with a \(1\times 1\) kernel and following global layer normalization (gLN) (Luo and Mesgarani, 2019). We generate the audio 'value' embeddings and the aforementioned 'gate' from the preprocessed audio signal \(\mathbf{a}_{1}\).
\[\mathbf{a}_{\rm val}=\mathcal{P}_{1}(\mathbf{a}_{1}),\quad\mathbf{a}_{\rm gate}={\rm ReLU }\left(\mathcal{P}_{2}(\mathbf{a}_{1})\right),\qquad\mathbf{a}_{\rm val},\,\mathbf{a}_{ \rm gate}\in\mathbb{R}^{C_{\rm a}\times T_{\rm a}\times F}. \tag{4}\]
**Attention Fusion**. We apply a 1D group convolution \(\mathcal{F}_{1}\) with \(C_{\rm a}\) groups and \(C_{\rm a}\times h\) output channels to \(\mathbf{v}_{1}\), followed by a gLN layer. By chunking across the channels, we can decompose the visual features into \(h\) distinct sub-feature representations, or attention 'heads', \(\mathbf{v}_{\rm h}\). Next, we take the 'mean' of the \(h\) heads to aggregate the information from the different sub-feature representations into \(\mathbf{v}_{\rm m}\), then subsequently apply the Softmax operation in order to create a multi-head attention style set of features \(\mathbf{v}_{\rm attn}\) with values between 0 and 1. To align the video frame length \(T_{\rm v}\) with the audio's time dimension \(T_{\rm a}\), we use nearest neighbor interpolation, \(\phi\). This is written,
\[\mathbf{v}_{h}=\mathcal{F}_{1}(\mathbf{v}_{1}),\qquad\mathbf{v}_{h}\in\mathbb{R}^{(C_{\rm a}\times h)\times T_{\rm v}},\tag{5}\] \[\mathbf{v}_{m}=\mathrm{mean}(\mathbf{v}_{h}[1],\dots,\mathbf{v}_{h}[h]),\qquad\mathbf{v}_{m}\in\mathbb{R}^{C_{\rm a}\times T_{\rm v}},\tag{6}\] \[\mathbf{v}_{\rm attn}=\phi({\rm Softmax}(\mathbf{v}_{m})),\qquad\mathbf{v}_{\rm attn}\in\mathbb{R}^{C_{\rm a}\times T_{\rm a}}.\tag{7}\]
Our attention mechanism is applied to each of the \(F\) 'value' slices of \(\mathbf{a}_{\rm val}\) with length \(T_{\rm a}\),
\[\mathbf{f}_{1}[i]=\mathbf{v}_{\rm attn}\odot\mathbf{a}_{\rm val}[i],\quad\forall i\in\{1,\dots,F\}, \tag{8}\]
where \(\mathbf{f}_{1}[i]\in\mathbb{R}^{C_{\rm a}\times T_{\rm a}}\ \forall i\implies\mathbf{f}_{1}\in \mathbb{R}^{C_{\rm a}\times T_{\rm a}\times F}\).
**Gated Fusion**. We use a 1D convolutional layer \(\mathcal{F}_{2}\) with kernel size 1, \(C_{\rm a}\) output channels and \(C_{\rm a}\) groups (since \(C_{\rm a}<C_{\rm v}\)), followed by a gLN layer to align \(C_{\rm v}\) with \(C_{\rm a}\). Next, we again use interpolation \(\phi\) to align \(T_{\rm v}\) with \(T_{\rm a}\) and generate the visual 'key' embeddings.
\[\mathbf{v}_{\rm key}=\phi\left(\mathcal{F}_{2}(\mathbf{v}_{1})\right), \quad\mathbf{v}_{\rm key}\in\mathbb{R}^{C_{\rm a}\times T_{\rm a}}. \tag{9}\]
Next, we utilize all \(F\) of the \(T_{\rm a}\)-dimensional slices of \(\mathbf{a}_{\rm gate}\) as unique gates to comprehensively expand the visual information into the TF-domain,
\[\mathbf{f}_{2}[i]=\mathbf{a}_{\rm gate}[i]\odot\mathbf{v}_{\rm key},\quad\forall i\in\{ 1,\dots,F\}, \tag{10}\]
where \(\mathbf{f}_{2}[i]\in\mathbb{R}^{C_{\rm a}\times T_{\rm a}}\ \forall i\implies\mathbf{f}_{2}\in \mathbb{R}^{C_{\rm a}\times T_{\rm a}\times F}\).
**CAF Block**. Finally, we sum the two fused features together. We can denote our CAF Block, \(\Phi\), as:
\[\mathbf{a}_{2}=\Phi(\mathbf{a}_{1},\mathbf{v}_{1})=\mathbf{f}_{1}+\mathbf{f}_{2}, \quad\mathbf{a}_{2}\in\mathbb{R}^{C_{\rm a}\times T_{\rm a}\times F}. \tag{11}\]
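The sketch below assembles Eqs. (4)-(11) into a runnable PyTorch module. `nn.GroupNorm(1, C)` stands in for gLN, the softmax axis is taken to be time (an assumption, as the axis is not stated above), and \(C_{\rm v}\) is assumed divisible by \(C_{\rm a}\) so that the group convolutions are valid.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CAF(nn.Module):
    def __init__(self, c_a=256, c_v=512, h=4):
        super().__init__()
        self.h, self.c_a = h, c_a
        gln = lambda c: nn.GroupNorm(1, c)  # stand-in for gLN
        self.p1 = nn.Sequential(nn.Conv2d(c_a, c_a, 1, groups=c_a), gln(c_a))
        self.p2 = nn.Sequential(nn.Conv2d(c_a, c_a, 1, groups=c_a), gln(c_a))
        self.f1 = nn.Sequential(nn.Conv1d(c_v, c_a * h, 1, groups=c_a), gln(c_a * h))
        self.f2 = nn.Sequential(nn.Conv1d(c_v, c_a, 1, groups=c_a), gln(c_a))

    def forward(self, a1, v1):          # a1: (B, C_a, T_a, F), v1: (B, C_v, T_v)
        t_a = a1.shape[2]
        a_val = self.p1(a1)                                   # Eq. (4)
        a_gate = torch.relu(self.p2(a1))
        # attention fusion, Eqs. (5)-(8): h heads -> mean -> softmax -> upsample
        vh = self.f1(v1)                                      # (B, C_a*h, T_v)
        vm = vh.view(vh.shape[0], self.c_a, self.h, -1).mean(2)
        v_attn = F.interpolate(vm.softmax(-1), size=t_a, mode="nearest")
        f1 = v_attn.unsqueeze(-1) * a_val                     # broadcast over F
        # gated fusion, Eqs. (9)-(10): expand visual keys over F via the gates
        v_key = F.interpolate(self.f2(v1), size=t_a, mode="nearest")
        f2 = a_gate * v_key.unsqueeze(-1)
        return f1 + f2                                        # Eq. (11)

caf = CAF()
a2 = caf(torch.randn(1, 256, 50, 65), torch.randn(1, 512, 25))
print(a2.shape)  # torch.Size([1, 256, 50, 65])
```

Even at these sizes, the fusion itself stays in the few-thousand-parameter range, of the same order as the fusion cost reported in Table 2.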
#### 3.2.2 RTFS Blocks
Compared to previous TF-domain AVSS methods (Afouras et al., 2020; Gao and Grauman, 2021; Alfouras et al., 2018; Lee et al., 2021), our RTFS Blocks use a dual-path architecture to explicitly model audio in both acoustic dimensions to improve the separation quality, as shown in Figure 3. We denote the auditory features input into the RTFS Block as \(\mathbf{A}\). Given our recurrent structure, \(\mathbf{A}\) represents either \(\mathbf{a}_{0}\) (input to AP Block) or the output from the previous RTFS Block with a skip connection: \(\mathbf{a}_{j}+\mathbf{a}_{0}\) for \(j\in\{1,...,R\}\). Note that in Figure 1 the residual connection is not shown for simplicity. Our RTFS Block processes auditory features in the four steps discussed below.
**Compression of time and frequency resolution**. We use a 2D convolution with a \(1\times 1\) kernel to convert \(\mathbf{A}\) to a smaller channel dimension \(D<C_{\rm a}\). This means we can effectively employ a larger \(C_{\rm a}\) value for detailed fusion (CAF) and separation (\(S^{3}\)), while maintaining a lightweight and efficient block design. Similar to Li et al. (2023), in the compression phase we employ \(q\) stacked 2D depth-wise convolutional layers with \(4\times 4\) kernels and stride 2; see Appendix A for the effects of different \(q\) values. The resultant multi-scale set with varying temporal and frequency resolutions can be denoted \(\{\mathbf{A}_{i}\,|\,i\in\{0,\dots,q-1\}\}\), where \(\mathbf{A}_{i}\in\mathbb{R}^{D\times\frac{T_{\rm a}}{2^{i}}\times\frac{F}{2^{i}}}\). We use adaptive average pooling \(p\) to compress each member of the set to the dimensions of the smallest member, \(\mathbf{A}_{q-1}\), then sum to obtain the compressed global features \(\mathbf{A}_{G}\). This is written:
\[\mathbf{A}_{G}=\sum_{i=0}^{q-1}p(\mathbf{A}_{i}),\quad\mathbf{A}_{G}\in\mathbb{R}^{D\times \frac{T_{\rm a}}{2^{q-1}}\times\frac{F}{2^{q-1}}}. \tag{12}\]
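A sketch of this compression stage, under the reading that the multi-scale set consists of the full-resolution features plus \(q-1\) successive down-samplings:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Compress(nn.Module):
    def __init__(self, d=64, q=2):
        super().__init__()
        # q-1 depth-wise 4x4, stride-2 down-sampling stages
        self.down = nn.ModuleList(
            nn.Conv2d(d, d, 4, stride=2, padding=1, groups=d) for _ in range(q - 1))

    def forward(self, a):                       # a: (B, D, T_a, F)
        feats = [a]                             # multi-scale set {A_i}
        for conv in self.down:
            feats.append(conv(feats[-1]))
        target = feats[-1].shape[-2:]           # size of A_{q-1}
        # adaptive average pooling p aligns every scale before summing, Eq. (12)
        a_g = sum(F.adaptive_avg_pool2d(x, target) for x in feats)
        return feats, a_g

feats, a_g = Compress(q=2)(torch.randn(1, 64, 50, 64))
print(a_g.shape)  # torch.Size([1, 64, 25, 32])
```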
**Dual-path architecture**. The Dual-path RNN architecture has been extensively deployed in AOSS tasks (Luo et al., 2020; Chen et al., 2020; Wang et al., 2023). However, its usage of large LSTMs leads to high parameter counts and an elevated computational complexity. In natural language processing, Simple Recurrent Units (SRU (Lei et al., 2018)) were introduced to replace LSTMs by processing most of the operations in parallel, speeding up model training and inference time. Inspired by this, we adopt SRUs for the AVSS task; see Appendix B for a detailed comparison of different recurrent architectures.
As seen in Figure 3, we first process the _frequency_ dimension, then the _time_ dimension. As with all dual-path methods, the SRUs are applied across each slice in the _time_ dimension to process the _frequency_ dimension. Similar to (Wang et al., 2023), we unfold1 the features by zero-padding the frequency dimension of \(\mathbf{A}_{G}\), then unfolding with kernel size \(8\) and stride \(1\),
Footnote 1: [https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html](https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html)
\[\hat{\mathbf{R}}_{f}=\left[\mathrm{Unfold}(\mathbf{A}_{G}[:,t]),\;t\in\left\{0,\dots, T_{\text{a}}/2^{q-1}\right\}\right]\in\mathbb{R}^{8D\times\frac{T_{\text{a}}}{2^{q-1} }\times F^{\prime}}, \tag{13}\]
where \(F^{\prime}\) is the resulting unfolded and padded frequency dimension. Note: this operation leads to a large input dimension for the RNN, creating a major source of computational complexity. However, it is crucial for exemplary separation performance. This is why it is so important that we first compress the time-frequency resolution. After unfolding, layer normalization is applied in the channel dimension, and then a bidirectional, 4 layer \(\mathrm{SRU}\) with hidden size \(h_{\text{a}}\) is applied,
\[\tilde{\mathbf{R}}_{f}=\left[\mathrm{SRU}(\hat{\mathbf{R}}_{f}[:,t]),\;t\in\left\{0, \dots,T_{\text{a}}/2^{q-1}\right\}\right]\in\mathbb{R}^{2h_{\text{a}}\times \frac{T_{\text{a}}}{2^{q-1}}\times F^{\prime}}. \tag{14}\]
A transposed convolution \(\mathcal{T}\) with kernel 8 and stride 1 is used to restore the unfolded dimensions,
\[\bar{\mathbf{R}}_{f}=\mathcal{T}\left(\tilde{\mathbf{R}}_{f}\right)+\mathbf{A}_{G},\quad\bar{\mathbf{R}}_{f}\in\mathbb{R}^{D\times\frac{T_{\text{a}}}{2^{q-1}}\times\frac{F}{2^{q-1}}}. \tag{15}\]
We next process the _time_ dimension using the same method, and then finally apply Wang et al. (2023)'s TF-domain self-attention network, denoted \(\mathrm{Attn}\). These two steps are expressed below as:
\[\bar{\mathbf{R}}_{t}=\mathcal{T}\left(\tilde{\mathbf{R}}_{t}\right)+\bar{\mathbf{R}}_{f},\quad\bar{\mathbf{A}}_{G}=\mathrm{Attn}(\bar{\mathbf{R}}_{t})+\bar{\mathbf{R}}_{t},\qquad\bar{\mathbf{R}}_{t},\;\bar{\mathbf{A}}_{G}\in\mathbb{R}^{D\times\frac{T_{\text{a}}}{2^{q-1}}\times\frac{F}{2^{q-1}}}. \tag{16}\]
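The frequency path of Eqs. (13)-(15) can be sketched as follows; `nn.GRU` is used here as a stand-in for the SRU, and all sizes are illustrative. The time path is identical with the roles of \(T_{\rm a}\) and \(F\) swapped.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FreqPath(nn.Module):
    def __init__(self, d=64, h_a=32, k=8):
        super().__init__()
        self.k = k
        self.norm = nn.LayerNorm(d)  # normalization over the channel dim
        self.rnn = nn.GRU(k * d, h_a, num_layers=4,
                          bidirectional=True, batch_first=True)
        self.deconv = nn.ConvTranspose1d(2 * h_a, d, k, stride=1)

    def forward(self, a_g):                              # a_g: (B, D, T, F)
        B, D, T, Fr = a_g.shape
        x = self.norm(a_g.permute(0, 2, 3, 1)).reshape(B * T, Fr, D)
        # zero-pad and unfold the frequency axis, kernel 8, stride 1 (Eq. 13)
        x = F.pad(x, (0, 0, 0, self.k - 1))
        x = x.unfold(1, self.k, 1).reshape(B * T, Fr, self.k * D)
        y, _ = self.rnn(x)                               # Eq. (14)
        y = self.deconv(y.transpose(1, 2))[..., :Fr]     # restore F (Eq. 15)
        return y.reshape(B, T, D, Fr).permute(0, 2, 1, 3) + a_g

out = FreqPath()(torch.randn(1, 64, 25, 32))
print(out.shape)  # torch.Size([1, 64, 25, 32])
```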
**Reconstruction of time and frequency resolution**. Reconstruction of high-quality temporal and frequency features presents a formidable challenge. Delving into the underlying causes, the reconstruction process often relies on interpolation or transposed convolutions for up-sampling, resulting in the emergence of checkerboard artifacts in the reconstructed outputs. To solve this problem, we propose the Temporal-Frequency Attention Reconstruction (TF-AR) unit, denoted \(I(\cdot,\cdot)\). This unit prioritizes the reconstruction of key features by exploiting an attention mechanism, thus reducing information loss. For two tensors \(\mathbf{m}\) and \(\mathbf{n}\), we define the TF-AR unit as:
\[I(\mathbf{m},\mathbf{n})=\phi\left(\sigma\left(W_{1}\left(\mathbf{n}\right)\right)\right) \odot W_{2}\left(\mathbf{m}\right)+\phi\left(W_{3}\left(\mathbf{n}\right)\right), \tag{17}\]
where \(W_{1}(\cdot)\), \(W_{2}(\cdot)\) and \(W_{3}(\cdot)\) denote 2D depth-wise convolutions with \(4\times 4\) kernels followed by a gLN layer. We use the notation \(\sigma\) for the sigmoid function, \(\odot\) for element-wise multiplication and
Figure 3: RTFS Block design. After compressing the data to a more efficient size, we process first the frequency dimension, then the time dimension, then both dimensions in tandem using TF-domain self-attention to capture inter-dependencies. We then carefully restore the data to its original dimensions using our Temporal-Frequency Attention Reconstruction units.
\(\phi\) for nearest neighbour interpolation (up-sampling). To conduct the reconstruction, we firstly use \(q\) TF-AR units to fuse \(\mathbf{\bar{A}}_{G}\) with every element of the multi-scale set \(\{\mathbf{A}_{i}\}\),
\[\mathbf{A}_{i}^{\prime}=I(\mathbf{A}_{i},\mathbf{\bar{A}}_{G}),\quad\mathbf{A}_{i}^{\prime}\in\mathbb{R}^{D\times\frac{T_{\rm a}}{2^{i}}\times\frac{F}{2^{i}}}\ \forall i\in\{0,\dots,q-1\}. \tag{18}\]
Next, the multi-scale features are continuously up-sampled and aggregated using \(q-1\) additional TF-AR units to obtain the finest-grained auditory features, \(\mathbf{A}_{0}^{\prime\prime}\in\mathbb{R}^{D\times T_{\rm a}\times F}\). A residual connection to \(\{\mathbf{A}_{i}\,|\,i\in\{0,\dots,q-2\}\}\) is crucial, and creates a U-Net (Ronneberger et al., 2015) style structure.
\[\mathbf{A}_{q-1-i}^{\prime\prime}=I(\mathbf{A}_{q-1-i}^{\prime},\mathbf{A}_{q-i}^{\prime} )+\mathbf{A}_{q-1-i},\quad\forall i\in\{0,\dots,q-2\}. \tag{19}\]
Finally, \(\mathbf{A}_{0}^{\prime\prime}\) is converted back to \(C_{\mathrm{a}}\) channels using a 2D convolution with a \(1\times 1\) kernel and a residual connection to the input of the RTFS Block is added. The output features are used as the audio input for the CAF block after the AP Block, and as the input to the next RTFS Block during the repeated \(R\) stacked RTFS Blocks stage of the separation network.
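A sketch of the TF-AR unit \(I(\cdot,\cdot)\) from Eq. (17); using `padding="same"` for the even \(4\times 4\) kernels is an assumption made to keep spatial sizes unchanged.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TFAR(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        def w():  # depth-wise 4x4 convolution followed by a gLN stand-in
            return nn.Sequential(nn.Conv2d(d, d, 4, padding="same", groups=d),
                                 nn.GroupNorm(1, d))
        self.w1, self.w2, self.w3 = w(), w(), w()

    def forward(self, m, n):
        # phi: nearest-neighbour up-sampling of n to the (T, F) size of m
        up = lambda x: F.interpolate(x, size=m.shape[-2:], mode="nearest")
        return up(torch.sigmoid(self.w1(n))) * self.w2(m) + up(self.w3(n))

tf_ar = TFAR()
out = tf_ar(torch.randn(1, 64, 50, 64), torch.randn(1, 64, 25, 32))
print(out.shape)  # torch.Size([1, 64, 50, 64])
```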
### Spectral Source Separation
The majority of existing T-domain AVSS methods (Wu et al., 2019; Li et al., 2022; Martel et al., 2023) generate a mask \(\mathbf{m}\) from the refined features \(\mathbf{a}_{R}\), then use element-wise multiplication \(\odot\) between the encoded audio mixture \(\mathbf{a}_{0}\) and the mask in order to obtain the target speaker's separated speech \(\mathbf{z}\). This is written,
\[\mathbf{z}=\mathbf{m}\odot\mathbf{a}_{0}. \tag{20}\]
Some TF-domain AVSS methods (Afouras et al., 2018; Gao and Grauman, 2021; Lee et al., 2021) directly apply this approach to the TF-domain setting without modification, while other methods (Afouras et al., 2018; Owens and Efros, 2018) choose not to use masks at all and directly pass the output of the separation network \(\mathbf{a}_{R}\) to the decoder. However, we found both of these TF-domain methods for target speaker extraction to be suboptimal. We need to pay attention to the underlying complex nature of the audio features produced by the STFT in order to obtain a clearer distinction. This leads us to introduce our \(S^{3}\) Block, which utilizes a high-dimensional application of the multiplication of complex numbers, Equation 21, to better preserve important acoustic properties during the speaker extraction process, see Section 5.2 and Appendix C.
\[(a+bi)(c+di)=ac-bd+i(ad+bc). \tag{21}\]
Firstly, a mask \(\mathbf{m}\) is generated from \(\mathbf{a}_{R}\) using a 2D convolution \(\mathcal{M}\) with a \(1\times 1\) kernel,
\[\mathbf{m}=\mathrm{ReLU}\left(\mathcal{M}\left(\mathrm{PReLU}(\mathbf{a}_{R})\right) \right),\quad\mathbf{m}\in\mathbb{R}^{C_{\mathrm{a}}\times T_{\mathrm{a}}\times F}. \tag{22}\]
Without loss of generality, we choose the top half of the channels as the real part, and the bottom half of the channels as the imaginary part. We hence define,
\[\mathbf{m}^{\mathrm{r}} =\mathbf{m}\left[0:C_{\mathrm{a}}/2-1\right], \mathbf{E}^{\mathrm{r}} =\mathbf{a}_{0}\left[0:C_{\mathrm{a}}/2-1\right], \tag{23}\] \[\mathbf{m}^{\mathrm{i}} =\mathbf{m}\left[C_{\mathrm{a}}/2:C_{\mathrm{a}}\right], \mathbf{E}^{\mathrm{i}} =\mathbf{a}_{0}\left[C_{\mathrm{a}}/2:C_{\mathrm{a}}\right]. \tag{24}\]
Next, with \(||\) denoting concatenation along the channel axis, we apply Equation 21 and calculate:
\[\mathbf{z}^{\mathrm{r}} =\mathbf{m}^{\mathrm{r}}\odot\mathbf{E}^{\mathrm{r}}-\mathbf{m}^{\mathrm{i}} \odot\mathbf{E}^{\mathrm{i}}, \tag{25}\] \[\mathbf{z}^{\mathrm{i}} =\mathbf{m}^{\mathrm{r}}\odot\mathbf{E}^{\mathrm{i}}+\mathbf{m}^{\mathrm{i}} \odot\mathbf{E}^{\mathrm{r}},\] (26) \[\mathbf{z} =(\mathbf{z}^{\mathrm{r}}\ ||\ \mathbf{z}^{\mathrm{i}}),\qquad\mathbf{z}\in \mathbb{R}^{C_{\mathrm{a}}\times T_{\mathrm{a}}\times F}, \tag{27}\]
to obtain \(\mathbf{z}\), the target speaker's separated encoded audio features.
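The whole \(S^{3}\) step reduces to a handful of tensor operations, sketched below with illustrative shapes (a nonnegative random tensor plays the role of the mask from Eq. 22):

```python
import torch

def s3_separate(m, a0):
    """Sketch of the S^3 step (Eqs. 23-27): treat the top/bottom channel
    halves of the mask and mixture features as real/imaginary parts and
    apply complex multiplication element-wise."""
    c = m.shape[1] // 2
    mr, mi = m[:, :c], m[:, c:]
    er, ei = a0[:, :c], a0[:, c:]
    zr = mr * er - mi * ei          # real part, Eq. (25)
    zi = mr * ei + mi * er          # imaginary part, Eq. (26)
    return torch.cat([zr, zi], dim=1)

z = s3_separate(torch.rand(1, 256, 251, 129), torch.randn(1, 256, 251, 129))
print(z.shape)  # torch.Size([1, 256, 251, 129])
```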
### Decoder
The decoder \(D(\cdot)\) takes the separated target speaker's audio features \(\mathbf{z}\) and reconstructs the estimated waveform \(\mathbf{\hat{s}}=D(\mathbf{z})\), where \(\mathbf{\hat{s}}\in\mathbb{R}^{1\times L_{\mathrm{a}}}\). Specifically, \(\mathbf{z}\) is passed through a transposed 2D convolution with a \(3\times 3\) kernel and 2 output channels. Mirroring the encoder, we take the first channel as the real part and the second channel as the imaginary part and form a complex tensor. This tensor is passed to the iSTFT to recover the estimated target speaker audio.
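A matching decoder sketch, mirroring the illustrative STFT settings assumed in the encoder sketch above:

```python
import torch
import torch.nn as nn

class AudioDecoder(nn.Module):
    def __init__(self, c_a=256, n_fft=256, hop=128):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        # back to 2 channels: real and imaginary parts
        self.deconv = nn.ConvTranspose2d(c_a, 2, kernel_size=3, padding=1)

    def forward(self, z, length):                    # z: (B, C_a, T_a, F)
        spec = self.deconv(z)                        # (B, 2, T_a, F)
        alpha = torch.complex(spec[:, 0], spec[:, 1]).transpose(1, 2)
        return torch.istft(alpha, self.n_fft, hop_length=self.hop,
                           window=torch.hann_window(self.n_fft),
                           length=length)            # (B, L_a)

s_hat = AudioDecoder()(torch.randn(1, 256, 251, 129), length=32000)
print(s_hat.shape)  # torch.Size([1, 32000])
```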
## 4 Experimental Setup
**Datasets**. We utilized the same AVSS datasets as previous works (Gao and Grauman, 2021; Li et al., 2022) in the field in order to create a fair comparison of performance: LRS2-2Mix (Afouras et al., 2018a), LRS3-2Mix (Afouras et al., 2018b) and VoxCeleb2-2Mix (Chung et al., 2018). The models were trained and tested on two-second, 25 fps video clips with an audio sampling rate of 16 kHz. This equates to 32,000 audio samples and 50 video frames; see Appendix D for more details.
**Evaluation**. Following recent literature (Li et al., 2022), SI-SNRi and SDRi were used to evaluate the quality of the separated speeches, see Appendix E. For these metrics, a higher value indicates better performance. The parameter counts displayed in the results tables are the number of trainable parameters, excluding the pretrained video model. Likewise, the number of Multiply-ACcumulate (MAC) operations indicates the MACs used while processing two seconds of audio at 16 kHz, excluding the pretrained video network. In our main results table, we also include inference time: the time taken to process 2 seconds of audio on a NVIDIA 2080 GPU. For parameters, MACs and inference speed, a lower value is preferable and is the main focus of this work. Model hyperparameter configurations are available in Appendix F. All training settings are available in Appendix G.
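For reference, the standard SI-SNR computation (SI-SNRi is this value minus the SI-SNR of the unprocessed mixture) can be sketched as:

```python
import torch

def si_snr(est, ref, eps=1e-8):
    """Scale-invariant SNR in dB for (..., L) waveforms; higher is better."""
    est = est - est.mean(dim=-1, keepdim=True)
    ref = ref - ref.mean(dim=-1, keepdim=True)
    # project the estimate onto the reference signal
    proj = (est * ref).sum(-1, keepdim=True) / (ref.pow(2).sum(-1, keepdim=True) + eps) * ref
    noise = est - proj
    return 10 * torch.log10(proj.pow(2).sum(-1) / (noise.pow(2).sum(-1) + eps))

mix, clean, est = torch.randn(3, 32000)
print((si_snr(est, clean) - si_snr(mix, clean)).item())  # SI-SNRi in dB
```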
## 5 Results
### Comparisons with state-of-the-art methods
In Table 1, we directly compared RTFS-Net with a range of SOTA AVSS methods. We investigated three different model configurations corresponding to \(R=4\), \(R=6\) and \(R=12\) RTFS Blocks, including the AP Block. On the LRS2-2Mix dataset, RTFS-Net-4 achieved an SI-SNRi of 14.1 dB, only slightly lower than CTCNet's 14.3 dB, but it realized a 10-fold and 8-fold reduction in model parameters and computational cost, respectively. Furthermore, RTFS-Net-6 outperformed the previous SOTA technique, CTCNet, on LRS2-2Mix and achieved comparable performance on VoxCeleb2-2Mix, striking a good balance between performance and efficiency. Our method not only demonstrates superiority in complex environments and the robustness of our TF-domain-based approach, but also comes with a significantly smaller model footprint. Even RTFS-Net-12, which outperformed all other techniques on all datasets, presents a three-fold reduction in computational cost, while utilizing a mere 1/10\({}^{\text{th}}\) of the parameters. It is also worth noting that the CTCNet results used \(R=16\) repeats. To the best of our knowledge, RTFS-Net is the first AVSS method to use under 1M parameters, and the first TF-domain model to outperform all T-domain counterparts.
To demonstrate the separation quality of our method, we randomly selected two long uninterrupted audio samples from the VoxCeleb2 test set and mixed them together. The target speaker's estimated separated audio for several AVSS methods can be listened to for a selection of T-domain and TF-domain AVSS methods at the Web page found in Appendix H. We showed samples for AV-ConvTasNet (Wu et al., 2019), VisualVoice (Gao and Grauman, 2021), AVLIT (Martel et al., 2023) and the previous SOTA method, CTCNet (Li et al., 2022).
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{LRS2-2Mix} & \multicolumn{2}{c|}{LRS3-2Mix} & \multicolumn{2}{c|}{VoxCeleb2-2Mix} & \multicolumn{1}{c}{Params} & MACs & Time \\ & SI-SNRi & SDRi & SI-SNRi & SDRi & SI-SNRi & SDRi & (M) & (G) & (ms) \\ \hline \multicolumn{1}{c|}{CaffNet-C*} & - & 12.5 & - & 12.3 & - & - & - & - & - \\ \multicolumn{1}{c|}{Thann-Dat} & - & 11.6 & - & - & - & - & - & - & - \\ \multicolumn{1}{c|}{AV-ConvTasnet} & 12.5 & 12.8 & 11.2 & 11.7 & 9.2 & 9.8 & 16.5 & - & 60.3 \\ \multicolumn{1}{c|}{VisualVoice} & 11.5 & 11.8 & 9.9 & 10.3 & 9.3 & 10.2 & 77.8 & - & 130.2 \\ \multicolumn{1}{c|}{AVLIT} & 12.8 & 13.1 & 13.5 & 13.6 & 9.4 & 9.9 & 5.8 & 36.4 & **53.4** \\ \multicolumn{1}{c|}{CTCNet} & 14.3 & 14.6 & 17.4 & 17.5 & 11.9 & 13.1 & 7.1 & 167.2 & 122.7 \\ \hline \multicolumn{1}{c|}{RTFS-Net-4} & 14.1 & 14.3 & 15.5 & 15.6 & 11.5 & 12.4 & **0.7** & **21.9** & 57.8 \\ \multicolumn{1}{c|}{RTFS-Net-6} & 14.6 & 14.8 & 16.9 & 17.1 & 11.8 & 12.8 & **0.7** & 30.5 & 64.7 \\ \multicolumn{1}{c|}{RTFS-Net-12} & **14.9** & **15.1** & **17.5** & **17.6** & **12.4** & **13.6** & **0.7** & 56.4 & 109.9 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of RTFS-Net with existing AVSS methods on the LRS2-2Mix, LRS3-2Mix and VoxCeleb2-2Mix datasets. These metrics are averaged across all speakers for each test set, larger SI-SNRi and SDRi values indicate better performance. ‘-’ indicates the results are not reported in the original paper. ‘*’ indicates that the audio is reconstructed using the ground-truth phase.
### Ablation study
**Cross-dimensional Attention Fusion (CAF)**. To empirically show the value of our CAF Block, we used a reduced version of RTFS-Net-4 that limited the power of the separation network and emphasised the fusion network, see Appendix F.1. As a baseline, we substituted the CAF block with a TF-domain adaptation of the fusion strategy employed by the previous SOTA method, CTCNet. CTCNet uses a concatenation approach: it uses interpolation to upscale the video dimensions (\(C_{\mathrm{v}}\times T_{\mathrm{v}}\)) to the dimensions of the audio (\(C_{\mathrm{a}}\times T_{\mathrm{a}}\)), concatenates along the channels and then uses a convolution to restore the audio channel dimension, \(C_{\mathrm{a}}\). Similar to Equation 8, we adapted this to a 3D TF-domain audio setting. In Table 2 we observe that despite our CAF Block using only 3.6% of the parameters and 1.3% of the MACs, it outperformed the baseline by a large margin. This suggests that multimodal fusion can be effectively performed with a well-designed, small model.
**Temporal-Frequency Attention Reconstruction (TF-AR)**. To test the efficacy of our RTFS Block's reconstruction method, i.e. the TF-AR units, we compared RTFS-Net-4 with a baseline that used interpolation and addition to perform the work of the TF-AR units, similar to the upsampling process in U-Net (Ronneberger et al., 2015). We observe in Table 3 that our TF-AR units boosted performance by a substantial 1 dB in both performance metrics with an almost negligible effect on computational complexity due to the depth-wise convolutional nature of the TF-AR units.
**Spectral Source Separation (\(S^{3}\))**. In these experiments we further reduced the model configuration seen in Appendix F.1 by setting \(C_{\mathrm{a}}=128\). We tested our \(S^{3}\) Block against four alternative methods. _Regression_ gives \(\mathbf{a}_{R}\) directly to the decoder to decode into the target speaker's audio. _Mask_ is the approach used by many SOTA AOSS methods such as Luo et al. (2020) and discussed in Section 3.3. _Mask + Gate_ and _Mask + DW-Gate_ both apply a convolutional gate, as seen in Luo et al. (2019), after the mask. The mask is fed into two convolutions with respective Tanh and Sigmoid activations. The outputs are multiplied together to form the final mask. _DW_ here indicates the usage of depth-wise convolutions.
Table 4 shows the _Regression_ approach was the least effective, with significantly lower metrics than the other methods. However, it also utilized the fewest MACs and parameters. For a very small increase in parameters, the _Mask_ and _Mask+DW-Gate_ approaches yielded far better results. A
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline Target Speaker & \multicolumn{2}{c|}{LRS2-2Mix} & RTFS-Net & RTFS-Net & Extraction & Extraction \\ Extraction Method & SI-SNRi & SDRi & Params (K) & MACs (G) & Params (K) & MACs (M) \\ \hline Regression & 10.0 & 9.9 & **208** & **3.0** & **0** & **0** \\ Mask & 10.8 & 11.2 & 224 & 3.6 & 16 & 534 \\ Mask + DW-Gate & 10.8 & 11.3 & 225 & 3.6 & 17 & 542 \\ Mask + Gate & 11.1 & 11.6 & 257 & 4.6 & 49 & 1595 \\ \(S^{3}\) (ours) & **11.3** & **11.7** & 224 & 3.6 & 16 & 534 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of source separation methods on the LRS2-2Mix dataset. The _Extraction Parameters_ and _MACs_ represent those used only in the _Target Speaker Extraction Method_.
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline \multirow{2}{*}{AV Fusion Strategies} & \multicolumn{2}{c|}{LRS2-2Mix} & RTFS-Net & RTFS-Net & Fusion & Fusion \\ & SI-SNRi & SDRi & Params (K) & MACs (G) & Params (K) & MACs (M) \\ \hline CTCNet Fusion (adapted) & 11.3 & 11.7 & 528 & 14.3 & 197 & 6365 \\ CAF Block (ours) & **11.7** & **12.1** & **339** & **8.0** & **7** & **83** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of audio-visual fusion strategies on the LRS2-2Mix dataset. _RTFS-Net Parameters_ and _MACs_ indicate the entire network structure. The _Fusion Parameters_ and _MACs_ represent those used only during the fusion process.
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline \hline \multirow{2}{*}{TF-AR} & \multicolumn{2}{c|}{LRS2-2Mix} & RTFS-Net & RTFS-Net & TF-AR & TF-AR \\ & SI-SNRi & SDRi & Params (K) & MACs (G) & Params (k) & MACs (M) \\ \hline Without & 13.0 & 13.3 & **729** & **21.4** & **0** & **0** \\ With (ours) & **14.1** & **14.3** & 739 & 21.9 & 10 & 494 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Validation for the effectiveness of the TF-AR units on the LRS2-2Mix dataset. The _TF-AR Parameters_ and _MACs_ represent those used only in the TF-AR block.
non-depth-wise gate increased the performance further, but the number of MACs and parameters is significantly higher. However, all methods were eclipsed by the performance of our \(S^{3}\) Block. This mathematics-based approach does not significantly increase the parameter or MAC count, thereby providing a performance enhancement to all TF-domain AVSS methods at no additional cost, likely benefiting AOSS methods as well. A visualization is available in Appendix C.
## 6 Conclusion
In this study we introduced RTFS-Net, a novel approach to AVSS that explicitly models time and frequency dimensions at a compressed subspace to improve performance and increase efficiency. Empirical evaluations across multiple datasets demonstrate the superiority of our approach. Notably, our method achieves remarkable performance improvements while maintaining a significantly reduced computational complexity and parameter count. This indicates that enhancing AVSS performance does not necessarily require larger models, but rather innovative and efficient architectures that better capture the intricate interplay between audio and visual modalities.
## 7 Reproducibility Statement
The code for RTFS-Net was written in Python 3.10 using standard Python deep learning libraries, specifically PyTorch and PyTorch Lightning. In order to accommodate full reproducibility, we will open-source the code for RTFS-Net under the MIT licence on GitHub once this paper has been accepted into the conference. The code shall include all files used to reproduce the experiments seen in Table 1, including the conda environment we used to run the code, the full config files for RTFS-Net-4, RTFS-Net-6 and RTFS-Net-12, the weights for the pretrained video model and the RTFS-Net model code itself, including TDANet and all layers, blocks and networks mentioned in this paper. Datasets must be obtained separately from the references provided, see Appendix D, as they are the property of the respective publishers, but we provide the data-preprocessing scripts alongside this code-base. The GPU optimized PyTorch implementation of the SRU is already open-source and available on PyPi2, supplied by the original author. Experimentation and training was accomplished using a single server with 8 NVIDIA 3080 GPUs, see Appendix G for additional details. For those who wish to recreate the code themselves, all hyperparameters are listed in Appendix F. Evaluation metrics and loss functions are described and mathematically defined in Appendix G.1 and E respectively, but we will additionally provide these in the code-base.
Footnote 2: [https://pypi.org/project/sru/](https://pypi.org/project/sru/)
|
2309.14480 | The influence of antiferromagnetic spin cantings on the magnetic helix
pitch in cubic helimagnets | In cubic helimagnets MnSi and Cu2OSeO3 with their nearly isotropic magnetic
properties, the magnetic structure undergoes helical deformation, which is
almost completely determined by the helicoid wavenumber k = D / J, where
magnetization field stiffness J is associated with isotropic spin exchange, and
D is a pseudoscalar value characterizing the antisymmetric
Dzyaloshinskii-Moriya (DM) interaction. While the wavenumber can be measured
directly in a diffraction experiment, the values of J and D can be calculated
from the constants of pair spin interactions, which enter as parameters into
the Heisenberg energy. However, the available analytical expression for D,
which is of the first order in the spin-orbit coupling (SOC), has significant
problems with accuracy. Here we show that hardly observable distortions of the
magnetic structure, namely the antiferromagnetic spin cantings, can
significantly change the constant D in the next approximation in SOC, thus
affecting the wavenumber of magnetic helicoids. The obtained analytical
expressions agree with the results of numerical simulation of magnetic helices
in Cu2OSeO3 to within a few percent. | Viacheslav A. Chizhikov, Vladimir E. Dmitrienko | 2023-09-25T19:24:28Z | http://arxiv.org/abs/2309.14480v1 | # The influence of antiferromagnetic spin cantings on the magnetic helix pitch in cubic helimagnets
###### Abstract
In cubic helimagnets MnSi and Cu\({}_{2}\)OSeO\({}_{3}\) with their nearly isotropic magnetic properties, the magnetic structure undergoes helical deformation, which is almost completely determined by the helicoid wavenumber \(k=\mathcal{D}/\mathcal{J}\), where magnetization field stiffness \(\mathcal{J}\) is associated with isotropic spin exchange, and \(\mathcal{D}\) is a pseudoscalar value characterizing the antisymmetric Dzyaloshinskii-Moriya (DM) interaction. While the wavenumber can be measured directly in a diffraction experiment, the values of \(\mathcal{J}\) and \(\mathcal{D}\) can be calculated from the constants of pair spin interactions, which enter as parameters into the Heisenberg energy. However, the available analytical expression for \(\mathcal{D}\), which is of the first order in the spin-orbit coupling (SOC), has significant problems with accuracy. Here we show that hardly observable distortions of the magnetic structure, namely the antiferromagnetic spin cantings, can significantly change the constant \(\mathcal{D}\) in the next approximation in SOC, thus affecting the wavenumber of magnetic helicoids. The obtained analytical expressions agree with the results of numerical simulation of magnetic helices in Cu\({}_{2}\)OSeO\({}_{3}\) to within a few percent.
## I Introduction
The helical order in Nature is found in a variety of systems, from condensed matter physics to biology [1], and serves as an important example of primitive self-organization. This self-organization can create rather complex structures, such as blue phases in cholesteric liquid crystals [2; 3] or skyrmion textures in chiral cubic magnets MnSi, MnGe, Fe\({}_{1-x}\)Co\({}_{x}\)Si, Cu\({}_{2}\)OSeO\({}_{3}\), etc. [4; 5; 6; 7; 8; 9]. At the same time, there is a remarkable similarity in the physics of such different systems [10; 11; 12]. Nevertheless, in each individual case, its own microscopic mechanisms are responsible for the emergence of the helical order, and significant efforts are required to find them. In particular, it is necessary to understand how local interactions in a system affect its final structure. For example, the first question is: what determines the period and direction of spirals? On the other hand, the description of the microscopic particularities is important both for applications such as spintronics and multiferroics [13; 14; 15] and for fundamental problems such as the topological Hall effect [16].
Since the discovery of the chiral magnetic properties of MnSi in 1976 [17; 18], the phenomenological theory based on the Ginzburg-Landau (GL) free energy has been used to describe and predict twisted magnetic structures [19; 20]. However, this approach, which uses our knowledge of the symmetry of a physical system, cannot tell how large the values of the free energy coefficients are and how they relate to the spin coupling parameters. Microscopic theories, such as the Heisenberg model of a classical ferromagnet with a spin-orbit term proposed by Dzyaloshinskii and Moriya [21; 22; 23; 24], in turn, have the disadvantage that they are difficult to use in analytical calculations. Nevertheless, despite some doubts about the correctness of the Heisenberg model for itinerant magnets, it is often used for numerical simulations [25; 26; 27; 28; 12]. For this model, it is necessary to know the parameters of spin interactions: constants \(J_{ij}\) of the isotropic exchange between the \(i\)th and \(j\)th atoms, and pseudovectors \(\mathbf{D}_{ij}\) of the antisymmetric Dzyaloshinskii-Moriya (DM) exchange. These parameters can be obtained in two ways: from a comparison of theoretical predictions with experimental data and from _ab initio_ calculations [29; 30]. Using the Ruderman-Kittel-Kasuya-Yosida theory, which is more relevant for itinerant magnets, allows one to calculate only the magnetization stiffness \(\mathcal{J}\), but not the DM interaction parameter \(\mathcal{D}\) [31; 32].
These two approaches, microscopic and phenomenological, have been used for many years to describe the same magnetic structures, complementing each other well. However, for a correct transition from one model to another, even for such a simple case as MnSi, it took a long time. The transition consists in expressing the constants entering the GL energy of the continuum model in terms of the parameters \(J_{ij}\), \(\mathbf{D}_{ij}\) of the Heisenberg model. The reverse transition is obviously ambiguous due to the large number of microscopic parameters. In Refs. [33; 34], such a transition was carried out for the first time for MnSi crystals in the nearest neighbor approximation. An important side result of this work was the prediction of antiferromagnetic spin cantings, which are an inherent feature of the magnetic structure of helimagnets. Further, the approach was first extended by taking into account interactions with next-nearest neighbors in MnSi [32] and, finally, generalized to the case of other cubic helimagnets, including Cu\({}_{2}\)OSeO\({}_{3}\)[35]. For the multiferroic Cu\({}_{2}\)OSeO\({}_{3}\), the spin cantings play an additional important role, connecting its magnetic and ferroelectric properties [36]. An alternative mathematical approach describing the transition between the microscopic and phenomenological models for crystals with the \(B20\) structure (FeGe) is developed in Refs. [37; 38].
Previously, when passing from the microscopic model to the continuum one for cubic helimagnets, only contributions to the energy up to the second order in spin-orbit coupling (SOC), inclusive, were taken into account.
However, the accuracy of the transition remains unexplored, and there is reason to believe that this approximation is insufficient for a correct calculation of the wavenumber of spin helicoids. In this paper, we will calculate the third-order SOC contributions to the energy and show that they provide a good approximation for calculating wavenumbers. In Sec. II, we describe the current state of the problem of transition from a microscopic model to a continuum one and show that the Keffer rule [39] for the DM interaction requires an increase in the calculation accuracy. In Sec. III, the GL energy for cubic helimagnets is derived in the third approximation in SOC. In Sec. IV, the obtained expressions are used to calculate the wavenumber of helicoids for the ferrimagnet Cu\({}_{2}\)OSeO\({}_{3}\), and the comparison with the results of numerical simulation turns out to be quite satisfactory. Section V discusses the effect of a strong magnetic field on the wavenumber, and the relationship between the DM interaction and magnetic anisotropy.
## II Discrete-continuum transition: a preview
Before proceeding to a consistent description of the transition from the microscopic model of a cubic helimagnet to its phenomenological model, we briefly describe the current state of affairs in this area. The problem of correspondence between discrete (atomistic) and continuum descriptions of matter has always been present in the branches of physics that study fluids and solids, whether it is the theory of elasticity, fluid mechanics, or the electrodynamics of continuous media. The transition from a discrete model to a continuum one is often ambiguous, and the reverse transition has no physical meaning. In particular, the transition from discrete spins ordered in the magnetic lattice of a crystal to a continuous magnetization field is likewise ambiguous and depends on the method of spatial averaging of the magnetic moments when the magnetization varies across the crystal.
Neglecting the crystal field associated with the local anisotropy of the positions of magnetic atoms, the energy of a Heisenberg magnet with antisymmetric DM exchange can be written in the following form:
\[\begin{split} E=\frac{1}{2}\sum_{i,j}\left(-J_{ij}\mathbf{s}_{i }\cdot\mathbf{s}_{j}+\mathbf{D}_{ij}[\mathbf{s}_{i}\times\mathbf{s}_{j}] \right)\\ -\mathbf{H}\cdot\sum_{i}g_{i}\mu_{\text{B}}\mathbf{s}_{i},\end{split} \tag{1}\]
where \(\mathbf{s}_{i}\) is the classical spin (\(|\mathbf{s}_{i}|=1\)) related to the magnetic moment of the \(i\)th atom as \(\mathbf{m}_{i}=g_{i}\mu_{\text{B}}\mathbf{s}_{i}\), \(g_{i}\) is the spin \(g\)-factor, \(\mathbf{H}\) is an external magnetic field, \(J_{ij}\) and \(\mathbf{D}_{ij}\) are the parameters of the isotropic spin exchange and the DM interaction, respectively. The first sum is taken over all pairs of interacting magnetic atoms, and each pair is included in the sum twice (\(i,j\equiv j,i\)), which requires the coefficient \(\frac{1}{2}\).
The permutation relations \(J_{ji}=J_{ij}\), \(\mathbf{D}_{ji}=-\mathbf{D}_{ij}\) are obvious from Eq. (1). In addition, for equivalent bonds, the isotropic exchange constants are the same, and the DM vectors are related by the symmetry operations of the crystal point group. Note that for crystals without an inversion center, which we are interested in, all these symmetry elements are rotations, while in the case of point groups with mirror symmetry, \(\mathbf{D}_{ij}\) should behave like pseudovectors.
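As a concrete illustration, energy (1) can be evaluated directly for a finite set of classical spins. The bond list below is hypothetical; each interacting pair is listed once, which absorbs the factor \(\frac{1}{2}\) of the double sum, and the antisymmetry \(\mathbf{D}_{ji}=-\mathbf{D}_{ij}\) is then respected automatically. A single common \(g\)-factor is assumed for simplicity.

```python
import numpy as np

def heisenberg_energy(spins, bonds, H, g=2.0, mu_B=1.0):
    """spins: (N, 3) unit vectors; bonds: (i, j, J_ij, D_ij), each pair once;
    H: external magnetic field vector. Units are illustrative."""
    E = 0.0
    for i, j, J, D in bonds:
        si, sj = spins[i], spins[j]
        # each pair enters the double sum in Eq. (1) twice, cancelling the 1/2
        E += -J * si @ sj + np.asarray(D) @ np.cross(si, sj)
    # Zeeman term with a common g-factor for all magnetic atoms
    E -= np.asarray(H) @ (g * mu_B * spins.sum(axis=0))
    return E

spins = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
bonds = [(0, 1, 1.0, (0.1, 0.0, 0.0))]
print(heisenberg_energy(spins, bonds, H=(0.0, 0.0, 0.5)))
```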
The DM interaction has a relativistic spin-orbit nature, so the small parameter \((v/c)^{2}\) arises naturally, where \(v\) is a characteristic velocity of electrons in the crystal. Note that the quadratic part of the energy (1) should also include the anisotropic term \(\mathbf{s}_{i}\cdot\hat{\mathcal{P}}_{ij}\cdot\mathbf{s}_{j}\), where \(\hat{\mathcal{P}}_{ij}\) is a symmetric traceless tensor. This tensor is also related to the DM interaction, e.g., from Refs. [25; 40; 41], \(\hat{\mathcal{P}}_{ij}=-(\mathbf{D}_{ij}\otimes\mathbf{D}_{ij}-\frac{1}{3}\mathbf{D}_{ij}^{2}\hat{I})/(2J_{ij})\). However, since \(\hat{\mathcal{P}}_{ij}\) is of the second order in SOC and contributes to the energy starting from the fourth-order terms, we will neglect it here. Note that the spin-orbit interaction also corrects the isotropic exchange, \(\Delta J_{ij}=\mathbf{D}_{ij}^{2}/(12J_{ij})\). Without loss of generality, we can assume that this correction is already included in \(J_{ij}\).
The Heisenberg energy (1) allows one to simulate magnetic structures in terms of lattice models. However, it is often convenient to use the continuum approximation based on the expression for the energy density as a function of the magnetization field, which varies smoothly in the crystal. A naive transition from one model to another is carried out by introducing a slowly varying unimodular field \(\boldsymbol{\mu}(\mathbf{r})\) (\(|\boldsymbol{\mu}|=1\)), which coincides with classical spins in all atomic positions: \(\mathbf{s}_{i}=\boldsymbol{\mu}(\mathbf{r}_{i})\), \(\mathbf{s}_{j}=\boldsymbol{\mu}(\mathbf{r}_{j})\). For small interatomic distances
\[\boldsymbol{\mu}(\mathbf{r}_{j})\approx\boldsymbol{\mu}(\mathbf{r}_{i})+( \mathbf{r}_{ij}\cdot\boldsymbol{\nabla})\boldsymbol{\mu}(\mathbf{r}_{i})+ \frac{1}{2}(\mathbf{r}_{ij}\cdot\boldsymbol{\nabla})^{2}\boldsymbol{\mu}( \mathbf{r}_{i}), \tag{2}\]
where \(\mathbf{r}_{ij}\equiv\mathbf{r}_{j}-\mathbf{r}_{i}\), and thus we can pass from Eq. (1) to a GL-like energy, which is a functional of the field \(\boldsymbol{\mu}(\mathbf{r})\).
For cubic helimagnets of the MnSi and Cu\({}_{2}\)OSeO\({}_{3}\) type, the energy density containing terms of the second order in SOC has the form
\[\mathcal{E}=\frac{1}{2}\mathcal{J}\frac{\partial\boldsymbol{\mu}}{\partial r_ {\alpha}}\cdot\frac{\partial\boldsymbol{\mu}}{\partial r_{\alpha}}+\mathcal{D }\boldsymbol{\mu}\cdot\text{curl}\boldsymbol{\mu}-\mathbf{H}\cdot M_{0} \boldsymbol{\mu}, \tag{3}\]
where \(M_{0}=\sum_{i}^{\prime}g_{i}\mu_{\text{B}}\) is the saturation magnetization, and the magnetic structure stiffness and DM constant are calculated as follows [35]:
\[\mathcal{J}=\frac{1}{6}\,\sum_{i,j}^{\prime}J_{ij}r_{ij}^{2},\,\,\,\mathcal{D} =-\frac{1}{6}\,\sum_{i,j}^{\prime}\mathbf{D}_{ij}\cdot\mathbf{r}_{ij}, \tag{4}\]
where the prime means that the summation is carried out over a unit cell. Note that, due to the high symmetry of cubic crystals, Eq. (3) does not contain terms related to crystal anisotropy. The first anisotropic terms in the energy of a cubic helimagnet are of the fourth order in SOC.
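As a concrete illustration of how the primed sums in Eq. (4) are evaluated once a unit-cell bond list is available, here is a minimal Python sketch. The tuple layout and the two-bond toy cell are assumptions made for this example only; they are not Cu\({}_{2}\)OSeO\({}_{3}\) data.

```python
import numpy as np

def continuum_constants(bonds):
    """Evaluate Eq. (4): J_cont = (1/6) sum' J_ij |r_ij|^2 and
    D_cont = -(1/6) sum' D_ij . r_ij, where the primed sum runs over all
    directed bonds of one unit cell (each pair counted twice, as in Eq. (1))."""
    J_cont = sum(J * (r @ r) for J, D, r in bonds) / 6.0
    D_cont = -sum(D @ r for J, D, r in bonds) / 6.0
    return J_cont, D_cont

# Toy two-bond cell; note D_ji = -D_ij and r_ji = -r_ij for the reversed bond.
bonds = [
    (1.0, np.array([0.1, -0.2, 0.0]), np.array([0.25, 0.25, 0.0])),
    (1.0, np.array([-0.1, 0.2, 0.0]), np.array([-0.25, -0.25, 0.0])),
]
print(continuum_constants(bonds))  # (J, D) in the paper's units (lattice parameter = 1)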
Eqs. (4) reveal the link between the parameters of two models, discrete and continuum, describing the same
magnetic crystal. Besides, it is obvious that these expressions cannot be correct. Indeed, the formulas for \(\mathcal{J}\) and \(\mathcal{D}\) contain interatomic distances \(\mathbf{r}_{ij}\), which are not present in the initial model based on the Heisenberg energy (1). A complete analysis of this problem was done in Ref. [32]. It has been shown that, at the microscopic level, the spin structure looks more complicated than the naive transition from Eq. (1) to Eq. (3) suggests. Two types of microscopic features have been predicted to break the smoothness of the field \(\mathbf{\mu}(\mathbf{r})\). First, instead of one magnetization field, one should consider several fields corresponding to the magnetic sublattices of the crystal. It is the neglect of phase shifts between the magnetic sublattices that leads to the incorrect expressions (4). Second, there are antiferromagnetic cantings of neighboring spins, which do not disappear even in a structure polarized by a strong magnetic field [33]. In Ref. [32], a formal technique was suggested that takes into account the contribution of the phase shifts. For this, we proposed to use ideal positions ("exchange coordinates") \(\tilde{\mathbf{r}}_{i}\) of the magnetic atoms in the unit cell, for which the phase shifts vanish. The exchange coordinates are defined as functions of the isotropic exchange constants \(J_{ij}\) and have the same symmetry (Wyckoff position) as the real coordinates of the atoms. The technique makes it possible to retain Eqs. (4) for the parameters of the continuum model, with the exchange distances \(\tilde{\mathbf{r}}_{ij}\equiv\tilde{\mathbf{r}}_{j}-\tilde{\mathbf{r}}_{i}\) substituted for the real interatomic distances \(\mathbf{r}_{ij}\). As for the antiferromagnetic cantings, they give a constant contribution to the energy in the second approximation in SOC and do not affect the twist.
However, there is another problem with the parameter \(\mathcal{D}\) in Eq. (4), which concerns the Keffer rule [39]. It is known that the DM interaction is of superexchange origin, i.e., the spin interaction of magnetic atoms occurs through the electrons of nonmagnetic atoms, e.g., silicon in MnSi or oxygen in Cu\({}_{2}\)OSeO\({}_{3}\) [29; 30]. According to this rule, the DM vector \(\mathbf{D}_{ij}\sim[\mathbf{r}_{\text{O}i}\times\mathbf{r}_{\text{O}j}]\), where \(\mathbf{r}_{\text{O}i}\) and \(\mathbf{r}_{\text{O}j}\) are the distances from the nonmagnetic atom (O) to the interacting magnetic ones (\(i\), \(j\)). Since \(\mathbf{r}_{ij}=\mathbf{r}_{\text{O}j}-\mathbf{r}_{\text{O}i}\), the vector \(\mathbf{D}_{ij}\) must be perpendicular to the bond \(\mathbf{r}_{ij}\), which according to Eq. (4) makes \(\mathcal{D}\) equal to zero. The contradiction is partially removed by the approximate nature of the Keffer rule. A much greater effect is due to the replacement in Eq. (4) of the real distances \(\mathbf{r}_{ij}\) with the exchange distances \(\tilde{\mathbf{r}}_{ij}\), which do not have to be perpendicular to \(\mathbf{D}_{ij}\). However, the analysis carried out in Ref. [35] for the Cu\({}_{2}\)OSeO\({}_{3}\) crystal showed that the angles between the vectors \(\mathbf{D}_{ij}\) and \(\tilde{\mathbf{r}}_{ij}\) remain close to the right angle, and thus the Keffer rule still keeps \(\mathcal{D}\) small. We therefore expect that the next-order contributions to \(\mathcal{D}\) can strongly affect the degree of twist. In this paper, we show that this is indeed the case, and that the key role here is played by the antiferromagnetic cantings, which do not affect the twist in the second spin-orbit approximation.
## III Discrete-Continuum Transition: Third Order Approximation
Let us carry out the transition from the discrete model of a cubic magnet based on the Heisenberg energy (1) to the continuum approximation, following the technique developed in Ref. [35]. In the latter model, the energy of a magnetic crystal is a volume integral of the GL energy density:
\[E=\int\mathcal{E}(\mathbf{r})d\mathbf{r}, \tag{5}\]
with, from Eq. (1),
\[\begin{split}\mathcal{E}(\mathbf{r})=\frac{1}{2}\sum_{i,j}^{ \prime}\left(-J_{ij}\mathbf{s}_{i}(\mathbf{r})\cdot\mathbf{s}_{j}(\mathbf{r} ^{\prime})+\mathbf{D}_{ij}\cdot[\mathbf{s}_{i}(\mathbf{r})\times\mathbf{s}_{ j}(\mathbf{r}^{\prime})]\right)\\ -\mathbf{H}\cdot\sum_{i}^{\prime}g_{i}\mu_{\text{B}}\mathbf{s}_ {i}(\mathbf{r}).\end{split} \tag{6}\]
Here, the summation is carried out over the magnetic atoms and bonds of the unit cell, and the classical spins are replaced by unimodular functions of coordinates. The number of functions coincides with the number of magnetic sublattices of the crystal. For convenience, we use the system in which the unit cell volume is equal to unity, \(V_{\text{u.c.}}=1\).
Note that, in Eq. (6), the pair interactions link the function \(\mathbf{s}_{i}\) at the point \(\mathbf{r}\) with the function \(\mathbf{s}_{j}\) at the point \(\mathbf{r}^{\prime}=\mathbf{r}+\mathbf{r}_{ij}\), where \(\mathbf{r}_{ij}\equiv\mathbf{r}_{j}-\mathbf{r}_{i}\) is the distance between the corresponding atoms in the crystal. In order to reduce all spin functions to the same argument, we use the Taylor series expansion
\[\begin{split}\mathbf{s}_{j}(\mathbf{r}^{\prime})=[1+\mathbf{r}_{ ij}\cdot\mathbf{\nabla}+\frac{1}{2}&(\mathbf{r}_{ij}\cdot\mathbf{\nabla})^{2}+ \ldots]\mathbf{s}_{j}(\mathbf{r})\\ &\equiv\exp(\mathbf{r}_{ij}\cdot\mathbf{\nabla})\mathbf{s}_{j}( \mathbf{r}).\end{split} \tag{7}\]
Thus, spatial derivatives appear in the continuum model energy.
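To see how accurate the truncated shift operator is for a slowly varying field, here is a small numerical check of the expansion (7) kept to second order, as in Eq. (2). The one-dimensional test function and step size are illustrative choices of this example.

```python
import numpy as np

# Truncated shift operator: f(x + dr) ~ [1 + dr d/dx + (dr^2/2) d^2/dx^2] f(x).
x = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)
f = np.cos(0.1 * x)                    # slowly varying: period >> dr
dr = 0.25                              # "interatomic" step

d1 = np.gradient(f, x)
d2 = np.gradient(d1, x)
taylor = f + dr * d1 + 0.5 * dr**2 * d2
exact = np.cos(0.1 * (x + dr))

# Residual is set by the dropped third-order term, ~ (0.1 * dr)^3 / 6 ~ 1e-6.
print(np.max(np.abs(taylor - exact)))
```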
All terms of the GL energy can be assigned an order of smallness in terms of the spin-orbit interaction. Thus, isotropic exchange is of the zeroth order, and the DM interaction is of the first order in SOC. Since spatial modulations of magnetization in twisted magnets are due to the DM interaction, each spatial derivative can also be assigned the first order in SOC. We confine ourselves to weak magnetic fields \(H<H_{c2}\), where \(H_{c2}=\mathcal{D}^{2}/(g\mu_{\text{B}}\mathcal{J})\) is the field of the full unwinding of the helical magnetic structure. Thus, the magnetic field can be considered as the second order in SOC. Summarizing the above,
\[D\sim\nabla\sim(v/c)^{2},\quad g\mu_{\text{B}}H\sim(v/c)^{4}. \tag{8}\]
The presence of the small parameter allows one to calculate the GL energy in successive approximations. For example, in the ferromagnetic case in the zeroth approximation, all functions \(\mathbf{s}_{i}(\mathbf{r})\) corresponding to different magnetic sublattices can be replaced by a single unimodular
vector function \(\mathbf{\mu}(\mathbf{r})\equiv\mathbf{M}(\mathbf{r})/|\mathbf{M}(\mathbf{r})|\) directed along the magnetization:
\[\mathbf{s}_{i}^{(0)}(\mathbf{r})=\mathbf{s}_{j}^{(0)}(\mathbf{r}^{\prime})=\bm {\mu}(\mathbf{r}), \tag{9}\]
with \(\mathbf{M}(\mathbf{r})=\sum_{i}^{\prime}g_{i}\mu_{\mathrm{B}}\mathbf{s}_{i}(\mathbf{r})\). Then it is easy to obtain the energy density in the zeroth approximation:
\[\mathcal{E}^{(0)}=-\frac{1}{2}\sum_{i,j}^{\prime}J_{ij}, \tag{10}\]
which is due to the isotropic exchange of collinear spins.
Now let's expand \(\mathbf{s}_{i}(\mathbf{r})\) into components parallel and perpendicular to the magnetization:
\[\mathbf{s}_{i}=\mathbf{\mu}\sqrt{1-u_{i}^{2}}+\mathbf{u}_{i}, \tag{11}\]
where \(\mathbf{u}_{i}\perp\mathbf{\mu}\) is a small spin canting, which can be represented as follows:
\[\mathbf{u}_{i}=\mathbf{u}_{i}^{\prime}+\mathbf{u}_{i}^{\prime\prime}+\mathbf{ u}_{i}^{\prime\prime\prime}+\dots, \tag{12}\]
with the number of primes corresponding to the approximation order. Taking into account Eqs. (11), (12), successive terms in the expansion of \(\mathbf{s}_{i}(\mathbf{r})\) can be written as
\[\begin{cases}\mathbf{s}_{i}^{(0)}=\mathbf{\mu},\\ \mathbf{s}_{i}^{\prime}=\mathbf{u}_{i}^{\prime},\\ \mathbf{s}_{i}^{\prime\prime}=\mathbf{u}_{i}^{\prime\prime}-\frac{1}{2} \mathbf{u}_{i}^{\prime 2}\mathbf{\mu},\\ \mathbf{s}_{i}^{\prime\prime\prime}=\mathbf{u}_{i}^{\prime\prime\prime}-( \mathbf{u}_{i}^{\prime}\cdot\mathbf{u}_{i}^{\prime\prime})\mathbf{\mu},\\ \dots\end{cases} \tag{13}\]
A similar expansion of the function \(\mathbf{s}_{j}(\mathbf{r}^{\prime})\), taking into account Eq. (7), contains spatial derivatives:
\[\begin{cases}\mathbf{s}_{j}^{(0)}=\mathbf{\mu},\\ \mathbf{s}_{j}^{\prime}=\mathbf{u}_{j}^{\prime}+(\mathbf{r}_{ij}\cdot\mathbf{ \nabla})\mathbf{\mu},\\ \mathbf{s}_{j}^{\prime\prime}=\mathbf{u}_{j}^{\prime\prime}-\frac{1}{2} \mathbf{u}_{j}^{\prime 2}\mathbf{\mu}+(\mathbf{r}_{ij}\cdot\mathbf{\nabla})\mathbf{u}_{j}^{ \prime}+\frac{1}{2}(\mathbf{r}_{ij}\cdot\mathbf{\nabla})^{2}\mathbf{\mu},\\ \dots\end{cases} \tag{14}\]
Here the functions of \(\mathbf{r}^{\prime}\) on the left-hand sides of the equalities are related to the functions of \(\mathbf{r}\) on the right-hand sides.
To find the canting \(\mathbf{u}_{i}\), let's write out the part of the magnetic energy associated with the \(i\)th spin:
\[E_{i}=-\mathbf{h}_{i}\cdot\mathbf{s}_{i}, \tag{15}\]
where
\[\mathbf{h}_{i}=\sum_{j}\left(J_{ij}\mathbf{s}_{j}+[\mathbf{D}_{ij}\times \mathbf{s}_{j}]\right)+g_{i}\mu_{\mathrm{B}}\mathbf{H} \tag{16}\]
is the local effective field affecting the spin. The equilibrium condition is that the directions of the spin and the local field coincide:
\[\mathbf{s}_{i}=\mathbf{h}_{i}/|\mathbf{h}_{i}|. \tag{17}\]
Comparison of Eqs. (11) and (17) gives
\[\mathbf{u}_{i}=(\mathbf{h}_{i}-\mathbf{\mu}(\mathbf{\mu}\cdot\mathbf{h}_{i}))/| \mathbf{h}_{i}|. \tag{18}\]
The perturbation theory is also applicable to the effective field:
\[\mathbf{h}_{i}=\mathbf{h}_{i}^{(0)}+\mathbf{h}_{i}^{\prime}+\mathbf{h}_{i}^{ \prime\prime}+\dots \tag{19}\]
For example, the effective fields in the zeroth and first approximations in SOC are
\[\mathbf{h}_{i}^{(0)}=\sum_{j}J_{ij}\mathbf{\mu} \tag{20}\]
and
\[\mathbf{h}_{i}^{\prime}=\sum_{j}\left(J_{ij}(\mathbf{r}_{ij}\cdot\mathbf{\nabla}) \mathbf{\mu}+J_{ij}\mathbf{u}_{j}^{\prime}+[\mathbf{D}_{ij}\times\mathbf{\mu}]\right), \tag{21}\]
respectively. Note that, due to the definition of the cantings and the unimodularity of \(\mathbf{\mu}(\mathbf{r})\), \(\mathbf{h}_{i}^{\prime}\cdot\mathbf{\mu}=0\). Then the first-order canting is
\[\mathbf{u}_{i}^{\prime}=\frac{\mathbf{h}_{i}^{\prime}}{|\mathbf{h}_{i}^{(0)}|}= \frac{\sum_{j}\left(J_{ij}(\mathbf{r}_{ij}\cdot\mathbf{\nabla})\mathbf{\mu}+J_{ij} \mathbf{u}_{j}^{\prime}+[\mathbf{D}_{ij}\times\mathbf{\mu}]\right)}{\sum_{j}J_{ ij}}. \tag{22}\]
As mentioned above, since the Heisenberg energy (1) does not explicitly contain atomic coordinates, they cannot affect the coefficients of the GL free energy (3). This means that the cantings \(\mathbf{u}^{\prime}\) must include a contribution that compensates for the phase shifts between the magnetic sublattices. We can take this into account by introducing fictitious "exchange" coordinates defined by the system of linear equations
\[\sum_{j}J_{ij}(\mathbf{r}_{j,\mathrm{ex}}-\mathbf{r}_{i,\mathrm{ex}})=0. \tag{23}\]
When solving the system (23), it is necessary to take into account the symmetry of the crystal. For example, in cubic magnets with the \(B20\) structure (MnSi, etc.), all magnetic atoms occupy the same Wyckoff position \(4a\) (\(x,x,x\)) of the space group \(P2_{1}3\) [42], and the system reduces to a single linear equation that determines the exchange coordinate \(x_{\mathrm{ex}}\) [32]. In what follows, the Taylor expansions will include exchange coordinates instead of real ones, i.e., \(\mathbf{s}_{j}(\mathbf{r}^{\prime})=\exp(\tilde{\mathbf{r}}_{ij}\cdot\mathbf{\nabla})\mathbf{s}_{j}(\mathbf{r})\), where \(\tilde{\mathbf{r}}_{ij}\equiv\mathbf{r}_{ij,\mathrm{ex}}=\mathbf{r}_{j,\mathrm{ex}}-\mathbf{r}_{i,\mathrm{ex}}\). With the exchange coordinates defined in this way, the first term in the numerator on the right-hand side of Eq. (22) is equal to zero, and this equation can be rewritten as
\[\sum_{j}J_{ij}(\mathbf{u}_{i}^{\prime}-\mathbf{u}_{j}^{\prime})=\sum_{j}[ \mathbf{D}_{ij}\times\mathbf{\mu}]. \tag{24}\]
A solution can be found in the form
\[\mathbf{u}_{i}^{\prime}=[\mathbf{\rho}_{i}\times\mathbf{\mu}], \tag{25}\]
where the vectors \(\mathbf{\rho}_{i}\) associated with the magnetic atoms can be calculated from the system
\[\sum_{j}J_{ij}(\mathbf{\rho}_{i}-\mathbf{\rho}_{j})=\sum_{j}\mathbf{D}_{ij}, \tag{26}\]
provided that they satisfy the symmetry of the crystal.
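In practice, Eq. (26) is a Laplacian-like linear system over the unit-cell bond graph; it is singular because a uniform shift of all \(\mathbf{\rho}_{i}\) drops out of the left-hand side. Below is a minimal sketch, assuming bonds are stored as a dictionary over directed atom pairs (an assumption of this example). In the paper, the system is instead reduced by crystal symmetry to a handful of scalar unknowns; here `lstsq` simply returns the minimum-norm representative, while the physical gauge is fixed by the definition of \(\mathbf{\mu}\) in Eq. (9).

```python
import numpy as np

def rotation_vectors(n_atoms, bonds):
    """Solve Eq. (26): sum_j J_ij (rho_i - rho_j) = sum_j D_ij for every atom i.
    bonds: {(i, j): (J_ij, D_ij)} over directed pairs within one unit cell."""
    L = np.zeros((n_atoms, n_atoms))      # Laplacian-like exchange matrix
    rhs = np.zeros((n_atoms, 3))
    for (i, j), (J, D) in bonds.items():
        L[i, i] += J
        L[i, j] -= J
        rhs[i] += np.asarray(D, dtype=float)
    # L is singular (uniform shifts of rho lie in its kernel), but the
    # right-hand side is consistent since D_ji = -D_ij, so least squares works.
    rho, *_ = np.linalg.lstsq(L, rhs, rcond=None)
    return rho
```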
The first-order cantings \(\mathbf{u}^{\prime}_{i}\) are sufficient to calculate the GL free energy in the third approximation in SOC (see Appendix).
Thus, the first-order contribution \(\mathcal{E}^{(1)}\) to the energy density is zero, and the second-order contribution is
\[\mathcal{E}^{(2)}=\frac{1}{2}\mathcal{J}\frac{\partial\mathbf{\mu}}{\partial r_{ \alpha}}\!\cdot\!\frac{\partial\mathbf{\mu}}{\partial r_{\alpha}}\!+\!\mathcal{D}_ {1}\mathbf{\mu}\!\cdot\!\mathrm{curl}\mathbf{\mu}\!-\!\mathbf{H}\!\cdot\!(M_{0}\mathbf{\mu} )\!+\!C_{2}, \tag{27}\]
with coefficients \(\mathcal{J}\) and \(\mathcal{D}_{1}\) being calculated by Eq. (4), where the physical distances \(\mathbf{r}_{ij}\) are replaced with the exchange distances \(\tilde{\mathbf{r}}_{ij}\):
\[\mathcal{J}=\frac{1}{6}\,\sum_{i,j}^{\prime}J_{ij}\tilde{r}_{ij}^{2}, \tag{28}\]
\[\mathcal{D}_{1}=-\frac{1}{6}\,\sum_{i,j}^{\prime}\mathbf{D}_{ij}\cdot\tilde{ \mathbf{r}}_{ij}, \tag{29}\]
and the constant term is
\[C_{2}=-\frac{1}{6}\,\sum_{i,j}^{\prime}J_{ij}(\mathbf{\rho}_{i}-\mathbf{\rho}_{j})^{2}. \tag{30}\]
Finally, the third-order energy density is
\[\mathcal{E}^{(3)}=\mathcal{D}_{2}\mathbf{\mu}\cdot\mathrm{curl}\mathbf{\mu}+C_{3}, \tag{31}\]
with
\[\mathcal{D}_{2}= \frac{1}{12}\,\sum_{i,j}^{\prime}J_{ij}\tilde{\mathbf{r}}_{ij} \cdot[\mathbf{\rho}_{i}\times\mathbf{\rho}_{j}] \tag{32}\] \[+\frac{1}{12}\,\sum_{i,j}^{\prime}\mathbf{D}_{ij}\cdot[\tilde{ \mathbf{r}}_{ij}\times(\mathbf{\rho}_{i}+\mathbf{\rho}_{j})],\]
\[C_{3}=\frac{1}{6}\,\sum_{i,j}^{\prime}\mathbf{D}_{ij}\cdot[\mathbf{\rho}_{i}\times \mathbf{\rho}_{j}]. \tag{33}\]
As we can see, the second-order correction to the DM parameter \(\mathcal{D}\) is fully determined by the antiferromagnetic cantings \(\mathbf{u}^{\prime}_{i}\) related to the rotation vectors \(\mathbf{\rho}_{i}\). In the next section, using the cubic ferrimagnet Cu\({}_{2}\)OSeO\({}_{3}\) as an example, we will show how this correction affects the pitch of the magnetic helices.
## IV Helimagnetic Cu\({}_{2}\)OSeO\({}_{3}\)
The formulas of the previous section refer to the case of a ferromagnetic crystal, in which neighboring spins are almost codirectional, and the magnetic structure varies slowly on scales much larger than the unit cell. The Cu\({}_{2}\)OSeO\({}_{3}\) crystal belongs to a different type of magnets, namely, it is a collinear ferrimagnet. Like the \(B20\)-type structures, the crystal is described by the space group \(P2_{1}3\), and its unit cell contains sixteen magnetic copper atoms in two different Wyckoff positions [43]:
\[\begin{array}{ll}4a&(x_{1},x_{1},x_{1})=(0.8860,0.8860,0.8860),\\ 12b&(x_{2},y_{2},z_{2})=(0.1335,0.1211,0.8719).\end{array} \tag{34}\]
The magnetic moments of all copper atoms are the same in magnitude, but not in direction. Four spins in the special position \(4a\), lying on the three-fold axes of the cubic crystal, are approximately opposite to twelve spins in the general position \(12b\). Thus, the total magnetic moment of the unit cell is two times less compared to the ferromagnetic ordering and is codirectional with the spins in the \(12b\) position. Due to the absence of an inversion center, the local ferrimagnetic structure rotates on scales much larger than the unit cell, which leads to the appearance of either a helical phase or a rather unusual skyrmionic phase with double twist of the magnetization field [44; 45; 46; 47; 48; 49]. The possibility of observing even more intriguing magnetic structures, such as coupled magnetic monopoles [27] and tilted skyrmion and spiral states [50], is also discussed.
Using a simple mathematical trick, the formulas derived for a ferromagnetic crystal can be used without modification for the case of a collinear ferrimagnet. First, we change the sign of the constant \(g_{i}\) for all magnetic moments in the \(4a\) position. Second, for all bonds connecting atoms in different Wyckoff positions, we change the signs of the isotropic exchange constants \(J_{ij}\) and the DM vectors \(\mathbf{D}_{ij}\). This allows us to assume that all classical spins in the unit cell are directed along the magnetization, while the real magnetic moments can be opposite.
In Refs. [29; 30], the DFT calculations of the exchange parameters of bonds in Cu\({}_{2}\)OSeO\({}_{3}\) were carried out. Using these data, we calculated the continuum model constants \(\mathcal{J}\) and \(\mathcal{D}\) in the second approximation in SOC [35]. Let us briefly describe the results obtained, referring to Ref. [35] for details.
The initial data are the isotropic exchange constants \(J_{1}\)-\(J_{5}\) and the DM vectors \(\mathbf{D}_{1}\)-\(\mathbf{D}_{5}\) calculated in Ref. [29] for five nonequivalent bonds (Table 1). Two nonequivalent ferromagnetic bonds (\(J>0\)) connect atoms in the position \(12b\). The three other bonds are antiferromagnetic (\(J<0\)) and connect atoms in different Wyckoff positions. Note that an atom can have more than one bond of each type. Thus, each atom in the Wyckoff position \(12b\) has four neighbors in the same position and three neighbors in the \(4a\) position. Each atom in the position \(4a\) has nine neighbors in the position \(12b\). The DM vectors approximately satisfy the Keffer rule: the angle between the DM vector \(\mathbf{D}_{ij}\) and the bond \(\mathbf{r}_{ij}\) varies in the range \(85.3\)-\(99.5^{\circ}\), and the average deviation from the right angle is \(5.3^{\circ}\).
The first step is to calculate the exchange coordinates
of the magnetic atoms. Using the symmetry of the crystal, Eq. (23) reduces to a system of four linear equations for the unknowns \(x_{1,\rm{ex}}\), \(x_{2,\rm{ex}}\), \(y_{2,\rm{ex}}\), \(z_{2,\rm{ex}}\). All the coefficients and constant terms in the equations are linear combinations of the isotropic exchange constants \(J_{1}\)-\(J_{5}\). Consequently, the exchange coordinates have the form \(P_{4}(J_{1},\ldots J_{5})/Q_{4}(J_{1},\ldots J_{5})\), where \(P_{4}\) and \(Q_{4}\) are fourth-degree polynomials in the exchange constants. The result of routine calculations with the data from Table 1 is
\[\begin{array}{ll}x_{1,\rm{ex}}=0.942,\\ x_{2,\rm{ex}}=-0.004,\;\;y_{2,\rm{ex}}=0.020,\;\;z_{2,\rm{ex}}=0.897.\end{array} \tag{35}\]
Then, using Eqs. (28), (29), we calculate the exchange parameters of the continuum model in the second approximation in SOC: \({\cal J}=5.130\) meV, \({\cal D}_{1}=0.970\) meV. Without taking into account corrections of the next orders, the wavenumber of magnetic helicoids is \(k={\cal D}_{1}/{\cal J}=0.189\), and the pitch is \(p=2\pi/|k|=33.2\) in units of the cubic lattice parameter. The experimental values known from small-angle neutron scattering data are \(k_{\rm{exp}}=0.088\) and \(p_{\rm{exp}}=71.4\), respectively. Note that the angles between the DM vectors \({\bf D}_{ij}\) and the exchange distances \(\tilde{\bf r}_{ij}\) are less consistent with the Keffer rule: they vary in the range 71.3-110.2\({}^{\circ}\), with an average deviation from the right angle of 11.5\({}^{\circ}\).
In order to find the correction \({\cal D}_{2}\) according to Eq. (32), it is necessary to calculate the vectors \(\mathbf{\rho}_{1}\) and \(\mathbf{\rho}_{2}\), which determine the spin cantings of the atoms \((x_{1},x_{1},x_{1})\) and \((x_{2},y_{2},z_{2})\), respectively. As in the case of the exchange coordinates, the symmetry reduces Eq. (26) to a system of four linear equations with the unknowns \(\rho_{1x}\), \(\rho_{2x}\), \(\rho_{2y}\), \(\rho_{2z}\). In addition, all the coefficients of the equations are linear combinations of the exchange constants \(J_{1}\)-\(J_{5}\), and the constant terms are combinations of components of the vectors \({\bf D}_{1}\)-\({\bf D}_{5}\). This means that the components of \(\mathbf{\rho}_{1}\), \(\mathbf{\rho}_{2}\) have the form \(P_{3,1}(J_{1},\ldots J_{5};{\bf D}_{1},\ldots{\bf D}_{5})/Q_{4}(J_{1},\ldots J_{5})\), where \(P_{3,1}\) are polynomials linear in the components of the vectors \({\bf D}_{1}\)-\({\bf D}_{5}\) and of the third degree in \(J_{1}\)-\(J_{5}\). A routine calculation with the data from Table 1 gives
\[\begin{array}{ll}\mathbf{\rho}_{1}=(-0.049,-0.049,-0.049),\\ \mathbf{\rho}_{2}=(-0.123,-0.034,-0.159).\end{array} \tag{36}\]
Note that the vector \(\mathbf{\rho}_{1}\) corresponds to the canting of the classical spin, which according to our trick is opposite to the real spin of the atom. The cantings of the real magnetic moments in the \(4a\) position have the opposite sign.
The calculation of the second-order spin-orbit correction to the DM constant by Eq. (32) gives \({\cal D}_{2}=-0.369\) meV. In total, \({\cal D}={\cal D}_{1}+{\cal D}_{2}=0.600\) meV, the helicoid wavenumber is \(k={\cal D}/{\cal J}=0.117\), and the helix pitch is \(p=53.7\) in lattice parameters. The positive value of \(k\) means that in the considered enantiomorph of the Cu\({}_{2}\)OSeO\({}_{3}\) crystal only right-handed helicoids can appear. It can be seen that taking into account the third-order terms in the GL energy leads to a change in the wavenumber of helices by more than one and a half times. This means that the antiferromagnetic cantings, whose influence on twist was previously neglected, can significantly affect the mesoscopic magnetic structure of helimagnets. For example, in the case of Cu\({}_{2}\)OSeO\({}_{3}\), a significant weakening of helicity occurs.
We should note that the calculated value of the wavenumber, \(k_{\rm{calc}}=0.117\), still differs significantly from the experimental value \(k_{\rm{exp}}=0.088\). In order to understand whether the discrepancy is due to the limitation to the third order in SOC or to insufficient accuracy of the _ab initio_ calculation of the exchange constants of the bonds, a spin helicoid was simulated using the same constants from Table 1. The simulation was carried out for periodic spin helicoids with pitches \((n,0,0)\), \((n,n,0)\), and \((n,n,n)\) directed along the crystallographic directions [100], [110], and [111], respectively. The integer parameter \(n\) for each of the directions ran through all values within certain limits, so that the wavenumber of the helicoids varied within 0.06-0.17. For every pitch, the elongated crystal cell was filled with one turn of a roughly constructed helicoid, after which the spin structure was relaxed using Eqs. (16), (17).
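The relaxation step of such a simulation is simple to express in code. Below is a Gauss-Seidel-style sketch of iterating Eqs. (16), (17); the data layout and the fixed sweep count are assumptions of this example, while details such as the periodic supercell and the initial rough helicoid are as described in the text.

```python
import numpy as np

def relax_spins(s, neighbors, zeeman, n_sweeps=500):
    """Iterate Eqs. (16)-(17): align each spin with its local effective field.
    s: (N, 3) array of unit spins (updated in place);
    neighbors: {i: [(j, J_ij, D_ij), ...]} for all bonds of the supercell;
    zeeman: (N, 3) array holding g_i * mu_B * H per site (zeros for H = 0)."""
    for _ in range(n_sweeps):
        for i, nbrs in neighbors.items():
            h = zeeman[i].copy()
            for j, J, D in nbrs:
                h += J * s[j] + np.cross(D, s[j])   # local field, Eq. (16)
            s[i] = h / np.linalg.norm(h)            # equilibrium condition, Eq. (17)
    return s
```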
Fig. 2 shows the calculated dependences of the helicoid energy per unit cell of the crystal on the wavenumber \(k\). The energy is measured from the level \({\cal E}^{(0)}=-158.916\) meV calculated for collinear spins by Eq. (10) with the exchange constants \(J_{1}\)-\(J_{5}\) from Table 1. The results of modeling helicoids for various crystallographic directions are well approximated by parabolas. For comparison, the analytical curve
\[{\cal E}(k)=\frac{1}{2}{\cal J}k^{2}-{\cal D}k+C_{2}+C_{3} \tag{37}\]
is shown, which corresponds to the GL energy density calculated using Eqs. (27), (31) for the helicoidally twisted
\begin{table}
\begin{tabular}{l l l l l l} n & neighboring Cu atom & Wyckoff pos. & \(J\), meV & \({\bf D}\), meV & \(\angle({\bf D}_{ij},{\bf r}_{ij})\) \\ \hline
1 & \((z_{2}-\frac{1}{2},\frac{1}{2}-x_{2},1-y_{2})\) & \(12b\) & 1.132 & \((0.289,-0.325,-0.051)\) & 94.3\({}^{\circ}\) \\
2 & \((x_{1}-1,x_{1}-1,x_{1})\) & \(4a\) & \(-6.534\) & \((1.120,-1.376,0.300)\) & 85.3\({}^{\circ}\) \\
3 & \((z_{2}-1,x_{2},1+y_{2})\) & \(12b\) & 3.693 & \((-0.263,0.167,-0.407)\) & 99.5\({}^{\circ}\) \\
4 & \((1-x_{1},x_{1}-\frac{1}{2},\frac{3}{2}-x_{1})\) & \(4a\) & \(-0.900\) & \((-0.499,1.238,1.144)\) & 86.3\({}^{\circ}\) \\
5 & \((\frac{1}{2}-x_{1},1-x_{1},x_{1}-\frac{1}{2})\) & \(4a\) & \(-0.984\) & \((0.045,-0.087,-0.059)\) & 85.8\({}^{\circ}\) \\ \end{tabular}
\end{table}
Table 1: Nonequivalent bonds of the copper atom in the position \((x_{2},y_{2},z_{2})\) with its magnetic neighbors and the exchange parameters [29].
field \(\mathbf{\mu}=(\cos kz,\sin kz,0)\). The constants calculated by Eqs. (30), (33) are \(C_{2}=-1.573\) meV, \(C_{3}=-0.016\) meV.
It is evident from Fig. 2 that the plot points for every nonequivalent crystallographic direction fit their own smooth curve. This is a consequence of the cubic anisotropy of the crystal, which manifests itself starting from the fourth-order contributions to the energy density. Taking into account the anisotropic terms, the dependences \(\mathcal{E}(k)\) for helicoids should actually be described by the fourth-degree polynomials in \(k\). Finding the coefficients of these polynomials is, in fact, an ill-posed problem, the solution of which strongly depends both on the accuracy of calculating the discrete points of the graphs and on the fitting method. However, it is easy to show that in the fourth approximation in SOC, the anisotropy is introduced by the combination \(n_{x}^{4}+n_{y}^{4}+n_{z}^{4}\), where \(\mathbf{n}\) is the helix axis direction. For example, for the crystallographic directions \(\langle 100\rangle\), \(\langle 110\rangle\), and \(\langle 111\rangle\) this combination is equal to \(1\), \(1/2\), and \(1/3\), respectively. Then we can restrict ourselves to the parabolic dependence of the helicoid energy on the wavenumber:
\[\begin{split}\mathcal{E}(k)=&\alpha_{2}k^{2}+ \alpha_{1}k+\alpha_{0}\\ &+(\beta_{2}k^{2}+\beta_{1}k+\beta_{0})(n_{x}^{4}+n_{y}^{4}+n_{z }^{4}).\end{split} \tag{38}\]
Fitting the discrete plots with Eq. (38) using the least squares method gives the following values of the coefficients: \(\alpha_{0}=-1.579\) meV, \(\alpha_{1}=-0.566\) meV, \(\alpha_{2}=2.457\) meV, \(\beta_{0}=-0.003\) meV, \(\beta_{1}=-0.012\) meV, \(\beta_{2}=0.113\) meV. The stiffness of the magnetic structure and the DM constant now depend on the direction \(\mathbf{n}\):
\[\begin{split}\mathcal{J}&=2\alpha_{2}+2\beta_{2}(n _{x}^{4}+n_{y}^{4}+n_{z}^{4}),\\ \mathcal{D}&=-\alpha_{1}-\beta_{1}(n_{x}^{4}+n_{y}^ {4}+n_{z}^{4}).\end{split} \tag{39}\]
The parameters calculated by Eq. (39) are given in Table 2, and the approximation accuracy is obvious from the fitting parabolas in Fig. 2.
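The fit behind Table 2 is an ordinary linear least-squares problem, since Eq. (38) is linear in the six coefficients. A sketch is given below; the function and argument names are this example's own.

```python
import numpy as np

def fit_eq38(k, axes, energy):
    """Least-squares fit of Eq. (38) to simulated helicoid energies.
    k: (M,) wavenumbers; axes: (M, 3) unit helix axes n; energy: (M,) in meV."""
    a = (axes**4).sum(axis=1)                 # cubic invariant nx^4 + ny^4 + nz^4
    A = np.column_stack([k**2, k, np.ones_like(k), a * k**2, a * k, a])
    (a2, a1, a0, b2, b1, b0), *_ = np.linalg.lstsq(A, energy, rcond=None)
    # Direction-dependent parameters then follow from Eq. (39):
    #   J(n) = 2*a2 + 2*b2*a,  D(n) = -a1 - b1*a.
    return (a0, a1, a2), (b0, b1, b2)
```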
Figure 1: (Color online) The unit cell of the cubic ferrimagnet Cu\({}_{2}\)OSeO\({}_{3}\), containing eight formula units, and its magnetic sublattice. The crystal symmetry is described by the space group \(P2_{1}3\). Below the Curie point \(T_{\mathrm{C}}=58\) K, the spins of copper atoms are almost collinear and have the opposite directions in the Wyckoff positions \(4a\) and \(12b\).

Figure 2: (Color online) Calculated dependence of the average magnetic energy density of the Cu\({}_{2}\)OSeO\({}_{3}\) ferrimagnet on the helicoid wavenumber \(k=2\pi/p\): comparison of the analytical expression (37) and the numerical simulation results. In both cases, the exchange constants from Table 1 are used, and the energy is measured from the level \(\mathcal{E}^{(0)}=-158.916\) meV corresponding to exactly collinear spins. The helicoid pitch is measured in the unit cell parameters.

Comparison of the simulation results with the values of the exchange constants calculated by the analytical formulas (28), (29), and (32) shows a minor difference, within a few percent; therefore, the difference from the experimental data is due to the initial parameters used in the calculations (Table 1). The seemingly significant difference between the minima of the parabola (37) and the simulation curves is due to unaccounted-for constant terms of the fourth order in SOC, some of which are comparable to \(C_{3}\). For example,
\[\frac{1}{16}\,\sum_{i,j}{}^{\prime}J_{ij}(\mathbf{u}_{i}^{\prime 2}-\mathbf{u}_{ j}^{\prime 2})^{2}=0.0103\pm 0.0003\;\;\mathrm{meV}, \tag{40}\]
\[\begin{split}-\frac{1}{4}\,\sum_{i,j}^{\prime}\mathbf{D}_{ij}\cdot(\mathbf{u}_{i}^{\prime 2}[\boldsymbol{\mu}\times\mathbf{u}_{j}^{\prime}]+\mathbf{u}_{j}^{\prime 2}[\mathbf{u}_{i}^{\prime}\times\boldsymbol{\mu}])\\ =0.0127\pm 0.0008\;\;\mathrm{meV},\end{split} \tag{41}\]
where a small uncertainty is caused by the cubic anisotropy. In addition, the energy \(\mathcal{E}^{(4)}\) must include terms related to the second-order cantings \(\mathbf{u}_{i}^{\prime\prime}\), which are not considered in the paper.
## V Discussion
Summarizing the above, we found that antiferromagnetic spin cantings, which are usually imperceptible due to their smallness, have a significant effect on the mesoscopic structure of helimagnets, changing the wavenumber \(k\) of magnetic helicoids. In the analyzed example of the Cu\({}_{2}\)OSeO\({}_{3}\) ferrimagnet, the influence of the cantings leads to an increase in the helix pitch, but the opposite is also possible. Indeed, suppose that all DM vectors \(\mathbf{D}_{ij}\) in Table 1 change their signs. According to Eq. (26), this would mean that the rotation vectors \(\boldsymbol{\rho}_{i}\) also change signs. It is evident from Eqs. (29), (32) that the first-order DM parameter \(\mathcal{D}_{1}\) changes its sign, while the second-order correction \(\mathcal{D}_{2}\) does not. Consequently, the modulus of the wavenumber \(|k|\) decreases, and the helix pitch \(p=2\pi/|k|\) increases. Note that in this case the wavenumber is negative, which corresponds to a left-handed helicoid. However, this case should be distinguished from spatial inversion, which also changes the handedness. Indeed, upon inversion, the crystal transforms into its mirror enantiomorph, so the signs of the interatomic distances \(\tilde{\mathbf{r}}_{ij}\) change, but the pseudovectors \(\mathbf{D}_{ij}\) and \(\boldsymbol{\rho}_{i}\) retain their signs. Then, according to Eqs. (29), (32), the sign of the wavenumber changes, but its modulus is preserved.
The correction \(\mathcal{D}_{2}\) to the DM parameter depends on the magnitude of spin cantings. In turn, the cantings can vary under the action of a strong external field \(H\gg H_{c2}\), where \(H_{c2}\) is the field of complete unwinding of spin helices. When the field becomes comparable in magnitude to the exchange interaction, its contribution to the effective field \(\mathbf{h}_{i}^{(0)}\) can no longer be neglected. Since in the polarized phase \(\boldsymbol{\mu}\parallel\mathbf{H}\), Eq. (20) should be rewritten as
\[\mathbf{h}_{i}^{(0)}=\left(\sum_{j}J_{ij}+g_{i}\mu_{\mathrm{B}}H\right) \boldsymbol{\mu}, \tag{42}\]
and the system (26) for rotation vectors now looks like
\[g_{i}\mu_{\mathrm{B}}H\boldsymbol{\rho}_{i}+\sum_{j}J_{ij}(\boldsymbol{\rho} _{i}-\boldsymbol{\rho}_{j})=\sum_{j}\mathbf{D}_{ij}. \tag{43}\]
Fig. 3 shows the dependences of the components of the \(\boldsymbol{\rho}_{i}\) vectors and the wavenumber \(k\) on the external magnetic field up to 450 kOe, calculated for Cu\({}_{2}\)OSeO\({}_{3}\) with the exchange constants from Table 1. Note that for this crystal \(H_{c2}\sim 1\) kOe [45]. Although there is no helical order for \(H>H_{c2}\), the wavenumber \(k\) can still be measured with magnons. In the polarized phase, the DM interaction still affects the dispersion relation of spin waves, leading to a shift of the magnon spectrum by the value of \(k\) for magnons propagating along the field [51]. It is worth noting that although the magnon spectrum of the Cu\({}_{2}\)OSeO\({}_{3}\) crystal has been studied repeatedly [52; 53; 54; 55], the change in the wavenumber in a strong magnetic field has not been studied deliberately.
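Computationally, Eq. (43) only modifies the earlier sketch for Eq. (26): the Zeeman term adds \(g_{i}\mu_{\mathrm{B}}H\) to the diagonal, which regularizes the singular Laplacian, so \(\boldsymbol{\rho}_{i}(H)\) is unique and decays as \(H\to\infty\). Assuming the same hypothetical bond dictionary as in that sketch:

```python
import numpy as np

def rotation_vectors_in_field(n_atoms, bonds, g, muB_H):
    """Solve Eq. (43): (g_i muB H) rho_i + sum_j J_ij (rho_i - rho_j) = sum_j D_ij.
    g: (N,) site g-factors; muB_H: scalar mu_B * H in the energy units of J_ij."""
    A = np.diag(np.asarray(g, dtype=float) * muB_H)  # Zeeman regularization
    rhs = np.zeros((n_atoms, 3))
    for (i, j), (J, D) in bonds.items():
        A[i, i] += J
        A[i, j] -= J
        rhs[i] += np.asarray(D, dtype=float)
    return np.linalg.solve(A, rhs)   # nonsingular for H > 0

# Feeding rho(H) back into Eq. (32) gives the field-dependent correction
# D2(H), and hence k(H) = (D1 + D2(H)) / J, as in curves like those of Fig. 3.
```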
The numerical simulation results shown in Fig. 2 demonstrate anisotropy well described by the cubic invariant \(n_{x}^{4}+n_{y}^{4}+n_{z}^{4}\), where \(\mathbf{n}\) is the helicoid axis. Note that this anisotropy arises exclusively due to the DM interaction. Moreover, the antiferromagnetic cantings again play a significant role here. For example, the coefficient \(\beta_{0}\) in Eq. (38), which makes a constant anisotropic contribution to the energy \(\mathcal{E}(k)\), is related to the fourth-order terms (40) and (41), which explicitly contain the cantings \(\mathbf{u}_{i}^{\prime}\) (the second-order cantings \(\mathbf{u}_{i}^{\prime\prime}\) are not considered here). Another possible mechanism for the appearance of anisotropy is the interaction of spins with a local crystal field. In the latter case, the energy does not contain anisotropic terms with spatial derivatives and, accordingly, \(\beta_{1}=\beta_{2}=0\). It was shown in Ref. [56] that this mechanism contradicts the experimental data on the anisotropy of the \(A\) phase stability in MnSi and Cu\({}_{2}\)OSeO\({}_{3}\).

\begin{table}
\begin{tabular}{c c c c} & \(\mathcal{J}\), meV & \(\mathcal{D}\), meV & \(k=\mathcal{D}/\mathcal{J}\) \\ \hline modeling: & & & \\ \(\langle 100\rangle\) & 5.1405 & 0.5788 & 0.1126 \\ \(\langle 110\rangle\) & 5.0274 & 0.5727 & 0.1139 \\ \(\langle 111\rangle\) & 4.9887 & 0.5705 & 0.1143 \\ analytics & & 0.600 & 0.117 \\ experiment & & & 0.088 \\ \end{tabular}
\end{table}
Table 2: Results of modeling spin helicoids with the axes parallel to the crystallographic directions \(\langle 100\rangle\), \(\langle 110\rangle\), \(\langle 111\rangle\). The simulation was performed with the exchange constants from Table 1.

Figure 3: (Color online) Calculated dependencies of the wavenumber \(k=\mathcal{D}/\mathcal{J}\) of magnetic helicoids and the components of the \(\boldsymbol{\rho}_{i}\) vectors on the external magnetic field \(H\) for the Cu\({}_{2}\)OSeO\({}_{3}\) crystal. The spin exchange constants from Table 1 are used. The dependencies for \(\rho_{i\alpha}\) start from the values specified in Eq. (36) at \(H=0\).
## Acknowledgements
We are grateful to S. V. Grigoriev, S. Grytsiuk, I. V. Kashin, S. N. Andreev, and V. V. Mazurenko for fruitful discussions. This work was supported by the Ministry of Science and Higher Education within the State assignment of Federal Scientific Research Center "Crystallography and Photonics" of Russian Academy of Sciences.
## Appendix A
Let us pass from the energy density (6), which contains spin fields related to different magnetic sublattices, to the GL energy density (3), which depends on the single vector field \(\mathbf{\mu}(\mathbf{r})\). To do this, we will sequentially collect together the terms of the same order in SOC, taking into account the expansions (13), (14). For example, the first-order contribution to the energy is
\[\begin{split}\mathcal{E}^{(1)}=\frac{1}{2}\sum_{i,j}^{\prime}\left\{-J_{ij}\boldsymbol{\mu}\cdot(\tilde{\mathbf{r}}_{ij}\cdot\boldsymbol{\nabla})\boldsymbol{\mu}-J_{ij}(\mathbf{u}_{i}^{\prime}\cdot\boldsymbol{\mu}+\boldsymbol{\mu}\cdot\mathbf{u}_{j}^{\prime})\right.\\ \left.+\mathbf{D}_{ij}\cdot[\boldsymbol{\mu}\times\boldsymbol{\mu}]\right\}=0,\end{split} \tag{A1}\]
where the first term in curly brackets is equal to zero due to the unimodularity of the field \(\mathbf{\mu}(\mathbf{r})\) (\(\mathbf{\nabla}\mathbf{\mu}^{2}=0\)), whereas the second one is zero due to the definition of the spin cantings (\(\mathbf{u}_{i}\perp\mathbf{\mu}\)).
Similarly, it is easy to show that the cantings \(\mathbf{u}_{i}^{\prime\prime}\) and \(\mathbf{u}_{i}^{\prime\prime\prime}\) do not contribute to \(\mathcal{E}^{(2)}\) and \(\mathcal{E}^{(3)}\), respectively:
\[\begin{split}-\frac{1}{2}\sum_{i,j}^{\prime}J_{ij}(\mathbf{u}_{i}^{\prime\prime}\cdot\boldsymbol{\mu}+\boldsymbol{\mu}\cdot\mathbf{u}_{j}^{\prime\prime})&=0,\\ -\frac{1}{2}\sum_{i,j}^{\prime}J_{ij}(\mathbf{u}_{i}^{\prime\prime\prime}\cdot\boldsymbol{\mu}+\boldsymbol{\mu}\cdot\mathbf{u}_{j}^{\prime\prime\prime})&=0.\end{split} \tag{A2}\]
Even more surprising is that the second-order cantings \(\mathbf{u}_{i}^{\prime\prime}\) also do not contribute to the third-order energy density \(\mathcal{E}^{(3)}\). For example, the contribution
\[\begin{split}-\frac{1}{2}\sum_{i,j}^{\prime}J_{ij}\mathbf{u}_{i}^{\prime\prime}\cdot(\tilde{\mathbf{r}}_{ij}\cdot\boldsymbol{\nabla})\boldsymbol{\mu}\\ =-\frac{1}{2}\sum_{i}^{\prime}\mathbf{u}_{i}^{\prime\prime}\cdot\left(\sum_{j}J_{ij}\tilde{\mathbf{r}}_{ij}\cdot\boldsymbol{\nabla}\right)\boldsymbol{\mu}=0\end{split} \tag{A3}\]
is zero due to the definition of the exchange coordinates, see Eq. (23). Let's combine the rest of the third-order terms with \(\mathbf{u}_{i}^{\prime\prime}\):
\[\begin{split}\frac{1}{2}\sum_{i,j}^{\prime}\left\{-J_{ij}\mathbf{u}_{i}^{\prime\prime}\cdot\mathbf{u}_{j}^{\prime}+J_{ij}(\mathbf{u}_{i}^{\prime}\cdot\mathbf{u}_{i}^{\prime\prime})\boldsymbol{\mu}\cdot\boldsymbol{\mu}+\mathbf{D}_{ij}\cdot[\mathbf{u}_{i}^{\prime\prime}\times\boldsymbol{\mu}]\right\}\\ =\frac{1}{2}\sum_{i}^{\prime}\mathbf{u}_{i}^{\prime\prime}\cdot\sum_{j}\left\{J_{ij}(\mathbf{u}_{i}^{\prime}-\mathbf{u}_{j}^{\prime})-[\mathbf{D}_{ij}\times\boldsymbol{\mu}]\right\}=0.\end{split} \tag{A4}\]
Here the equality to zero is due to Eq. (24) for the first-order cantings. The terms containing \(\mathbf{u}_{i}^{\prime\prime\prime}\) disappear in the same way. Here we use the fact that the calculated sums are taken over all Cu-Cu bonds of the unit cell and are symmetric under the permutation of the summation indices \(i\leftrightarrow j\), subject to the obvious relations \(J_{ji}=J_{ij}\), \(\mathbf{D}_{ji}=-\mathbf{D}_{ij}\), \(\tilde{\mathbf{r}}_{ji}=-\tilde{\mathbf{r}}_{ij}\). Thus, the cantings of the second and third orders, \(\mathbf{u}_{i}^{\prime\prime}\) and \(\mathbf{u}_{i}^{\prime\prime\prime}\), do not contribute to the energy up to the third approximation in SOC inclusive.
Above, to calculate some (zero) contributions to the energy, we used (i) the unimodularity of the vector field \(\mathbf{\mu}(\mathbf{r})\), (ii) the perpendicularity of the antiferromagnetic cantings \(\mathbf{u}_{i}\) to the magnetization, (iii) the symmetry under permutation of summation indices \(i,j\). In addition, one can also use the fact that the summation over all bonds is similar to averaging over the point group of the crystal. Indeed, the vectors \(\tilde{\mathbf{r}}_{ij}\), \(\mathbf{D}_{ij}\) associated with equivalent interatomic bonds are related to each other by symmetry elements of the point group. The same can be said about the vectors \(\mathbf{\rho}_{i}\) associated with the equivalent atomic positions. For the crystals of the tetrahedral class 23 (\(T\)) studied in this work, the averaging of the products of vector components associated with the interatomic bonds or atomic positions can be performed as follows
\[\langle a_{\alpha}\rangle=0, \tag{A5}\]
\[\langle a_{\alpha}b_{\beta}\rangle=\frac{1}{3}(\mathbf{a}\cdot\mathbf{b})\delta_{\alpha\beta}, \tag{A6}\]
\[\begin{split}\langle a_{\alpha}b_{\beta}c_{\gamma}\rangle=&\frac{1}{6}\mathbf{a}\cdot[\mathbf{b}\times\mathbf{c}]\varepsilon_{\alpha\beta\gamma}\\ &+\frac{1}{6}(|\varepsilon_{\lambda\mu\nu}|a_{\lambda}b_{\mu}c_{\nu})|\varepsilon_{\alpha\beta\gamma}|,\end{split} \tag{A7}\]
with \(\varepsilon_{\alpha\beta\gamma}\) being the antisymmetric Levi-Civita symbol. Using the methods listed above, one can calculate all
contributions to the energies \(\mathcal{E}^{(2)}\) and \(\mathcal{E}^{(3)}\). Some of them are equal to zero or can be reduced to the surface part of the magnetic energy, which is not considered here. In what follows, we list only non-zero contributions, starting from the second order in SOC.
\[\begin{split}-\frac{1}{4}\sum_{i,j}^{\prime}&J_{ij}\,\boldsymbol{\mu}\cdot(\tilde{\mathbf{r}}_{ij}\cdot\boldsymbol{\nabla})^{2}\boldsymbol{\mu}\\ &=\frac{1}{2}\left(\frac{1}{6}\sum_{i,j}^{\prime}J_{ij}\tilde{r}_{ij}^{2}\right)\frac{\partial\boldsymbol{\mu}}{\partial r_{\alpha}}\cdot\frac{\partial\boldsymbol{\mu}}{\partial r_{\alpha}}.\end{split} \tag{A8}\]
Here we use the averaging \(\langle\tilde{r}_{ij,\alpha}\tilde{r}_{ij,\beta}\rangle=\frac{1}{3}\tilde{ \mathbf{r}}_{ij}^{2}\delta_{\alpha\beta}\) and the equality \(\mathbf{\mu}\cdot\Delta\mathbf{\mu}=-(\partial\mathbf{\mu}/\partial r_{\alpha})\cdot( \partial\mathbf{\mu}/\partial r_{\alpha})\), which is correct up to a total derivative reduceable to a surface integral.
\[\begin{split}\frac{1}{2}\sum_{i,j}^{\prime}&\mathbf{D}_{ij}\cdot[\boldsymbol{\mu}\times(\tilde{\mathbf{r}}_{ij}\cdot\boldsymbol{\nabla})\boldsymbol{\mu}]\\ &=\left(-\frac{1}{6}\sum_{i,j}^{\prime}\mathbf{D}_{ij}\cdot\tilde{\mathbf{r}}_{ij}\right)\boldsymbol{\mu}\cdot\mathrm{curl}\,\boldsymbol{\mu}.\end{split} \tag{A9}\]
\[-\mathbf{H}\cdot\sum_{i}^{\prime}g_{i}\mu_{\mathrm{B}}\,\boldsymbol{\mu}=-\mathbf{H}\cdot M_{0}\boldsymbol{\mu}, \tag{A10}\]
where \(M_{0}\equiv\sum_{i}^{\prime}g_{i}\mu_{\mathrm{B}}\). The contributions (A8)-(A10) describe a GL-like energy associated with the continuous magnetization field \(\boldsymbol{\mu}(\mathbf{r})\). As a rule, the continuum model of a cubic helimagnet is limited to these three terms. In addition to them, there are also two constant contributions associated with the antiferromagnetic spin cantings:
\[\frac{1}{4}\sum_{i,j}^{\prime}J_{ij}\left(\mathbf{u}_{i}^{\prime 2}-2\mathbf{u}_{i}^{\prime}\cdot\mathbf{u}_{j}^{\prime}+\mathbf{u}_{j}^{\prime 2}\right)=\frac{1}{6}\sum_{i,j}^{\prime}J_{ij}\left(\boldsymbol{\rho}_{i}-\boldsymbol{\rho}_{j}\right)^{2}, \tag{A11}\]
\[\begin{split}\frac{1}{2}\sum_{i,j}^{\prime}&\mathbf{D}_{ij}\cdot\left([\mathbf{u}_{i}^{\prime}\times\boldsymbol{\mu}]+[\boldsymbol{\mu}\times\mathbf{u}_{j}^{\prime}]\right)\\ &=-\frac{1}{3}\sum_{i,j}^{\prime}\mathbf{D}_{ij}\cdot(\boldsymbol{\rho}_{i}-\boldsymbol{\rho}_{j})\\ &=-\frac{2}{3}\sum_{i}^{\prime}\boldsymbol{\rho}_{i}\cdot\sum_{j}\mathbf{D}_{ij}\\ &=-\frac{2}{3}\sum_{i}^{\prime}\boldsymbol{\rho}_{i}\cdot\sum_{j}J_{ij}\left(\boldsymbol{\rho}_{i}-\boldsymbol{\rho}_{j}\right)\\ &=-\frac{1}{3}\sum_{i,j}^{\prime}J_{ij}\left(\boldsymbol{\rho}_{i}-\boldsymbol{\rho}_{j}\right)^{2}.\end{split} \tag{A12}\]
In Eq. (A12), we use the symmetry under permutation of the summation indices \(i\leftrightarrow j\) and Eq. (26), which serves as the definition of the rotation vectors \(\boldsymbol{\rho}_{i}\).
Let us also list the nonzero contributions to the energy in the third approximation in SOC.
\[\begin{split}-\frac{1}{2}\sum_{i,j}^{\prime}&J_{ij}\mathbf{u}_{i}^{\prime}\cdot(\tilde{\mathbf{r}}_{ij}\cdot\boldsymbol{\nabla})\mathbf{u}_{j}^{\prime}\\ &=\frac{1}{2}\sum_{i,j}^{\prime}J_{ij}(\boldsymbol{\rho}_{i}\cdot(\tilde{\mathbf{r}}_{ij}\cdot\boldsymbol{\nabla})\boldsymbol{\mu})(\boldsymbol{\rho}_{j}\cdot\boldsymbol{\mu})\\ &=\left(\frac{1}{12}\sum_{i,j}^{\prime}J_{ij}\tilde{\mathbf{r}}_{ij}\cdot[\boldsymbol{\rho}_{i}\times\boldsymbol{\rho}_{j}]\right)\boldsymbol{\mu}\cdot\mathrm{curl}\,\boldsymbol{\mu},\end{split} \tag{A13}\]
\[\begin{split}\frac{1}{2}\sum_{i,j}^{\prime}&\mathbf{D}_{ij}\cdot([\mathbf{u}_{i}^{\prime}\times(\tilde{\mathbf{r}}_{ij}\cdot\boldsymbol{\nabla})\boldsymbol{\mu}]+[\boldsymbol{\mu}\times(\tilde{\mathbf{r}}_{ij}\cdot\boldsymbol{\nabla})\mathbf{u}_{j}^{\prime}])\\ &=\frac{1}{2}\sum_{i,j}^{\prime}(\mathbf{D}_{ij}\cdot\boldsymbol{\mu})(\boldsymbol{\rho}_{i}+\boldsymbol{\rho}_{j})\cdot(\tilde{\mathbf{r}}_{ij}\cdot\boldsymbol{\nabla})\boldsymbol{\mu}\\ &=\left(\frac{1}{12}\sum_{i,j}^{\prime}\mathbf{D}_{ij}\cdot[\tilde{\mathbf{r}}_{ij}\times(\boldsymbol{\rho}_{i}+\boldsymbol{\rho}_{j})]\right)\boldsymbol{\mu}\cdot\mathrm{curl}\,\boldsymbol{\mu}.\end{split} \tag{A14}\]
When deriving Eqs. (A13), (A14), the averaging (A7) was used for the triples of vectors \(\{\tilde{\mathbf{r}}_{ij},\boldsymbol{\rho}_{i},\boldsymbol{\rho}_{j}\}\) and \(\{\mathbf{D}_{ij},\tilde{\mathbf{r}}_{ij},\boldsymbol{\rho}_{i}+\boldsymbol{\rho}_{j}\}\), respectively. We neglected the surface terms proportional to \(|\varepsilon_{\alpha\beta\gamma}|\mu_{\alpha}(\partial\mu_{\gamma}/\partial r_{\beta})\). These two terms add up to a correction to the DM parameter of the continuum model. In addition, there is also a constant contribution to the energy
\[\begin{split}\frac{1}{2}\sum_{i,j}^{\prime}&\mathbf{D}_{ij}\cdot[\mathbf{u}_{i}^{\prime}\times\mathbf{u}_{j}^{\prime}]\\ &=\frac{1}{2}\sum_{i,j}^{\prime}(\mathbf{D}_{ij}\cdot\boldsymbol{\mu})([\boldsymbol{\rho}_{i}\times\boldsymbol{\rho}_{j}]\cdot\boldsymbol{\mu})\\ &=\frac{1}{6}\sum_{i,j}^{\prime}\mathbf{D}_{ij}\cdot[\boldsymbol{\rho}_{i}\times\boldsymbol{\rho}_{j}].\end{split} \tag{A15}\]
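The point-group averaging rules (A5)-(A7) can be checked numerically. The sketch below builds the 12 proper rotations of the tetrahedral group 23 (\(T\)) as cyclic axis permutations combined with even numbers of sign flips, and verifies the second-rank rule (A6); the third-rank rule (A7) can be verified in the same way.

```python
import numpy as np
from itertools import product

# The 12 rotations of group T: cyclic permutations of (x, y, z) times
# diagonal sign matrices with determinant +1 (an even number of minus signs).
perms = [np.roll(np.eye(3), shift, axis=0) for shift in range(3)]
signs = [np.diag(s) for s in product([1, -1], repeat=3) if np.prod(s) == 1]
group_T = [P @ S for P in perms for S in signs]
assert len(group_T) == 12

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)

# Eq. (A6): <(Ra)_alpha (Rb)_beta> = (1/3) (a . b) delta_{alpha beta}
avg = sum(np.outer(R @ a, R @ b) for R in group_T) / len(group_T)
assert np.allclose(avg, (a @ b) / 3.0 * np.eye(3))
```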
It is worth noting that, by the definition (9) of vector \(\mathbf{\mu}\), \(\sum_{i}^{\prime}g_{i}\mathbf{u}_{i}=0\). This means that the energy of interaction with an external magnetic field is zero in the third order in SOC: \(-\mathbf{H}\cdot\sum_{i}^{\prime}g_{i}\mu_{\mathrm{B}}\mathbf{u}_{i}^{\prime}=0\). |
2309.10816 | Multisource Holography | Holographic displays promise several benefits including high quality 3D
imagery, accurate accommodation cues, and compact form-factors. However,
holography relies on coherent illumination which can create undesirable speckle
noise in the final image. Although smooth phase holograms can be speckle-free,
their non-uniform eyebox makes them impractical, and speckle mitigation with
partially coherent sources also reduces resolution. Averaging sequential frames
for speckle reduction requires high speed modulators and consumes temporal
bandwidth that may be needed elsewhere in the system.
In this work, we propose multisource holography, a novel architecture that
uses an array of sources to suppress speckle in a single frame without
sacrificing resolution. By using two spatial light modulators, arranged
sequentially, each source in the array can be controlled almost independently
to create a version of the target content with different speckle. Speckle is
then suppressed when the contributions from the multiple sources are averaged
at the image plane. We introduce an algorithm to calculate multisource
holograms, analyze the design space, and demonstrate up to a 10 dB increase in
peak signal-to-noise ratio compared to an equivalent single source system.
Finally, we validate the concept with a benchtop experimental prototype by
producing both 2D images and focal stacks with natural defocus cues. | Grace Kuo, Florian Schiffers, Douglas Lanman, Oliver Cossairt, Nathan Matsuda | 2023-09-19T17:55:07Z | http://arxiv.org/abs/2309.10816v1 | # Multisource Holography
###### Abstract.
Holographic displays promise several benefits including high quality 3D imagery, accurate accommodation cues, and compact form-factors. However, holography relies on coherent illumination which can create undesirable speckle noise in the final image. Although smooth phase holograms can be speckle-free, their non-uniform eyebox makes them impractical, and speckle mitigation with partially coherent sources also reduces resolution. Averaging sequential frames for speckle reduction requires high speed modulators and consumes temporal bandwidth that may be needed elsewhere in the system.
In this work, we propose multisource holography, a novel architecture that uses an array of sources to suppress speckle in a single frame without sacrificing resolution. By using two spatial light modulators, arranged sequentially, each source in the array can be controlled almost independently to create a version of the target content with different speckle. Speckle is then suppressed when the contributions from the multiple sources are averaged at the image plane. We introduce an algorithm to calculate multisource holograms, analyze the design space, and demonstrate up to a 10 dB increase in peak signal-to-noise ratio compared to an equivalent single source system. Finally, we validate the concept with a benchtop experimental prototype by producing both 2D images and focal stacks with natural defocus cues.
## 1. Introduction
Computer generated holography uses a spatial light modulator (SLM) to mimic the wavefront coming from a three-dimensional (3D) object. This enables high resolution displays with accurate per-pixel focal cues, and recent user studies demonstrated that holographic displays have the potential to drive the human accommodation response [12], offering a solution to the vergence-accommodation conflict of stereoscopic displays [15]. Holography is a particularly promising technology for head-mounted displays (HMDs) since it also enables compact form-factors, can compensate for optical aberrations, and can correct for eyeglass prescriptions entirely in software [17].
However, holographic displays rely on spatially coherent illumination to achieve 3D cues [14], which can create speckle in the displayed content. Speckle is a phenomenon that occurs with coherent light when random path length differences interfere at the image plane, creating a noisy pattern of dark and bright spots due to random constructive and destructive interference [18]. This effect is undesirable since it hides details in the hologram and creates noisy, visually unappealing images. Although reducing the illumination coherence can suppress speckle, it also reduces resolution and depth of field [6].
Smooth phase holograms offer a different option to control speckle: by removing randomness in the image plane phase, all interference is constructive and speckle is eliminated [17]. However, these holograms have highly non-uniform energy distribution in the eyebox, greatly reducing practicality [20]. In addition, the focal cues generated are limited and often exhibit
unnatural ringing. In user studies, these focal cues were not effective at driving accommodation [14].
Another option for speckle reduction is temporal multiplexing, when several frames with unique speckle patterns are shown in rapid sequence such that they are visually averaged by the eye. However, this requires high speed SLMs with frame rates in the kilohertz range, and the amount of despeckling increases sub-linearly with the number of averaged frames. Reducing the number of frames needed for speckle control could increase the amount of speckle reduction with the same number of frames, could allow for more flexibility choosing modulators, or could free temporal bandwidth to address other challenges, for example, increasing eyebox size or field of view (FoV) through scanning.
We propose a novel architecture for speckle reduction in holographic displays that can create natural defocus cues with a uniform eyebox at the full SLM resolution, all in a single frame. To do this, we modify the illumination setup of a traditional holographic display, which typically consists of a single light source that generates a coherent plane wave at the SLM. In our architecture, we replace this single source with a grid of multiple sources, which each generate a plane wave at a different angle of incidence. By using sources that are incoherent with each other, the speckle patterns from each source average at the image plane, reducing speckle contrast.
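The despeckling mechanism can be illustrated with a toy model of fully developed speckle: mutually incoherent sources add in intensity, so averaging \(N\) independent speckle patterns reduces the contrast (standard deviation over mean) by roughly \(1/\sqrt{N}\). The sketch below uses independent complex Gaussian fields as a stand-in for the per-source speckle; it is not a simulation of the proposed optical system.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sources, n_px = 8, 256

# One fully developed speckle realization per (mutually incoherent) source:
# the intensity of a circular complex Gaussian field has unit contrast.
field = rng.normal(size=(n_sources, n_px, n_px)) \
      + 1j * rng.normal(size=(n_sources, n_px, n_px))
intensity = np.abs(field) ** 2

contrast = lambda im: im.std() / im.mean()
print(contrast(intensity[0]))            # ~1.0 for a single source
print(contrast(intensity.mean(axis=0)))  # ~1/sqrt(8) = 0.35 after averaging
```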
However, with a single SLM, each source creates a shifted copy of the same hologram, creating haze and doubling in the displayed content. To address this, we propose using two SLMs spaced a few millimeters apart axially. This arrangement creates a modulation response that varies with the angle of incidence, similar to how volume holograms use their thickness to create angular selectivity [12]. With the two SLMs, we can independently control the output image from each source, removing the doubling artifacts while continuing to get the speckle-reduction benefits of the multiple sources. We refer to this architecture, including both the array of sources and the two SLMs, as _multisource holography_.
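A toy wave-optics sketch of the two-SLM idea: plane waves from different source angles pick up different lateral shears while crossing the gap between the modulators, so the combined modulation seen by each source differs. This uses a standard angular-spectrum propagator; the wavelength, pixel pitch, gap, and random phase patterns are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def angular_spectrum(u, d, wavelength, dx):
    """Propagate field u over distance d via the angular-spectrum method."""
    fx = np.fft.fftfreq(u.shape[0], dx)
    FX, FY = np.meshgrid(fx, fx)
    kz2 = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)  # sampling keeps this >= 0 here
    H = np.exp(2j * np.pi * d * np.sqrt(kz2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

wl, dx, gap, n = 532e-9, 8e-6, 2e-3, 512
rng = np.random.default_rng(2)
slm1 = np.exp(1j * 2 * np.pi * rng.random((n, n)))   # phase-only SLM 1
slm2 = np.exp(1j * 2 * np.pi * rng.random((n, n)))   # phase-only SLM 2
x = (np.arange(n) - n // 2) * dx
X, _ = np.meshgrid(x, x)

outputs = []
for theta in (0.0, np.deg2rad(0.5)):                 # two source angles
    src = np.exp(2j * np.pi * np.sin(theta) * X / wl)  # tilted plane wave
    outputs.append(angular_spectrum(src * slm1, gap, wl, dx) * slm2)
# The two output fields differ, so each source's image can be shaped
# (almost) independently by jointly optimizing the two phase patterns.
```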
In summary, we make the following contributions:
* We introduce the multisource holography architecture and a corresponding hologram generation algorithm. We demonstrate full resolution holograms with natural defocus cues and uniform eyebox in a single frame.
* In simulation, we demonstrate improvements of 10 dB in peak signal-to-noise ratio (PSNR) compared to an equivalent single source system with the same degrees of freedom. We further show that multisource holography with no temporal averaging outperforms temporal multiplexing with 6 jointly optimized frames.
* We analyze how the source spacing and number of sources impact hologram quality and provide guidance on the minimum spacing needed to achieve full resolution.
* We validate the multisource holography concept with a full-color benchtop prototype. We introduce a customized calibration procedure and experimentally demonstrate low speckle holograms for both planar images and focal stacks with natural blur.
## 2. Related Work
_Smooth Phase Holograms._ As described above, smooth phase holograms eliminate speckle by enforcing near-constant phase at the image plane, which removes randomness and ensures interference between neighboring points is always constructive. Enforcing a specific phase at the image plane requires complex modulation at the SLM, so a practical option is the double phase amplitude coding (DPAC) method, which can almost entirely remove speckle [15, 16]. Even without complex modulation, one can achieve low phase variation at the image plane through gradient descent with uniform phase initialization at the SLM [17] or by explicitly enforcing a piecewise constant phase in the loss function [15]. Although these smooth phase approaches can create high quality and speckle-free two-dimensional (2D) images, defocus blur is limited and contains unnatural ringing. To address this issue, holograms can be optimized to target natural-appearing blur while encouraging image phase to remain smooth [16, 17]. However, the amount of blur is still limited, and all smooth phase holograms concentrate energy into a small region of the eyebox, making these systems very sensitive to eye movement and imperfections in the user's eye. Furthermore, in a recent user study, smooth phase holograms were not effective at driving accommodation [14].
_Random Phase Holograms._ Random phase at the image plane, which generates scattering similar to a diffuse object, enables natural defocus blur and uniform energy distribution in the eyebox [18]. However, this same randomness reintroduces speckle due to interference from different random path lengths. For 2D images, by letting the phase at the image plane be a free variable, one can use iterative approaches to shape the phase such that speckle is minimized from a particular viewing angle [13, 14]. Adding spatial "don't care" regions can further enhance image quality in the regions of interest [18]. However, for 3D content, the number of degrees of freedom on the SLM is insufficient to suppress speckle everywhere at once. One option is to let out-of-focus content be unconstrained, which enables better in-focus imagery but creates additional speckle in defocused regions [15, 16]. Generating natural defocus blur with low speckle over the whole volume requires additional despeckling approaches; it cannot be achieved through the algorithm alone.
_Partial Coherence._ Decreasing the coherence of the illumination can reduce speckle by imposing an incoherent blur on the output image through wavelength diversity (temporal partial coherence) or angular diversity (spatial partial coherence) [17, 18]. Spatial partial coherence in holographic displays has been demonstrated using an echelon stair [19], and temporal partial coherence has been demonstrated with different light sources, such as light emitting diodes (LEDs) [16, 17] and superluminescent LEDs (SLEDs) [18, 19]. However, partially coherent sources result in a direct trade-off between resolution and speckle reduction, which is incompatible with a high resolution, low speckle display. Lee et al. (2020) designed a partially coherent light source specifically to balance resolution, depth of field, and speckle, but the trade-off still exists so despeckling is limited without further resolution reduction. Similar coherence properties have been explored in the context of interferometric 3D sensing (Kotwal et al., 2023) and transmission matrix characterization (Gkioulekas et al., 2015; Kotwal et al., 2020), and we refer interested readers to these sources for an in-depth analysis.
_Temporal Multiplexing._ To achieve despeckling without sacrificing resolution, one can display many holograms in sequence, each with a unique speckle pattern. Due to human persistence of vision, the user sees an average of the displayed images, effectively suppressing speckle. Systems with 8 to 24 frames of temporal multiplexing per color have been demonstrated with high speed modulators such as digital micromirror devices (DMDs) (Curtis et al., 2021; Lee et al., 2020), ferroelectric liquid crystal on silicon (FLCoS) (Lee et al., 2022), and micro-electromechanical systems (MEMS) (Choi et al., 2022). These prior works achieve state-of-the-art image quality for temporal multiplexing by jointly optimizing all frames and accounting for the limited bit depth of these high speed SLMs. However, to create a fully life-like HMD, one needs to refresh content at a rate of at least 1.8 kHz (Cuervo et al., 2018); achieving these refresh speeds with temporal multiplexing requires updating content between sub-frames, which current algorithms do not support.
In addition, reducing the number of frames needed for speckle control could free the temporal bandwidth for other uses. For example, Lee et al. (2020) demonstrated increased viewing angle (in other words, increased etendue) by scanning illumination angle over time. Approaches like these could help overcome the fundamental etendue limits (Park and Lee, 2022) of holographic displays, but they reduce the amount of temporal bandwidth available for despeckling.
_Multiple Modulators._ Our system is capable of reducing speckle in a single frame through the use of two cascaded SLMs, taking advantage of the compression that layered displays can provide. In conventional optics, layered modulators can be used to break the trade-off between spatial and angular resolution in light field displays (Wetzstein et al., 2012). Similarly, in diffractive optics, Ye et al. (2014) showed that static layered diffractive elements can control the 4D bidirectional reflectance distribution function (BRDF) under incoherent illumination, and Peng et al. (2017) used pairs of static diffractive optical elements (DOEs) to create different holograms based on the relative translations between the DOEs. Although these prior publications have different application spaces, they demonstrate that two layered modulators can create several (more than two) different images, highlighting the compressive nature of the layered displays.
In our system, we take advantage of compression in layered displays to achieve more despeckling than in non-compressive systems (such as temporal multiplexing) with the same number of degrees of freedom. We note that interferometer-inspired setups (Choi et al., 2021; Wang et al., 2019), which also use multiple SLMs for image quality enhancement, are not designed to take advantage of potential compression, and therefore have limited speckle reduction based on the degrees of freedom in the two modulators.
_Multiple Incoherent Sources._ Despeckling in our system is achieved through multiple discrete sources of illumination that are incoherent with each other. To our knowledge, the only prior work with similar illumination is that of Jo et al. (2022) in which multiple sources are used for etendue expansion while simultaneously providing some despeckling. Like our work, they show that multiple sources with a single modulation plane create uncontrollable replicas in the final image. However, they use a binary amplitude mask in the Fourier plane to break correlations between replicas, whereas we use a second SLM with a small air gap, which is more amenable to a compact system and provides additional degrees of freedom for better image quality. Finally, since Jo et al. (2022) target etendue expansion as their application, they fix the number and locations of the sources such that, at any position in the image, a maximum of 9 different sources are averaged for speckle reduction. We demonstrate that speckle reduction can be dramatically increased with more sources, and we analyze the effect of number of sources and source spacing on image quality.
_Camera-Based Calibration._ Even if speckle is theoretically reduced, any non-idealities in the optical system can cause additional speckle in practice due to mismatch between the model used in optimization and the true system. To account for imperfections, one can design a model of the optical system with learnable parameters, then fit the unknowns in an offline calibration process using experimentally captured data (Chakravarthula et al., 2020; Choi et al., 2021; Kavakli et al., 2022; Peng et al., 2020). A special case of camera-based calibration is the "active" approach proposed by Peng et al. (2020), in which the SLM pattern is fine-tuned online to a particular image based on camera feedback. Although these holograms do not generalize to new content, they highlight what is feasible with a given experimental system. To best demonstrate the potential of multisource holography, we use both offline calibration with a physically-based model and online active camera-in-the-loop.
## 3. System Overview
A traditional holographic display uses an SLM to shape an incoming coherent beam to form an image. Denoting the complex modulation function of the SLM as \(\mathbf{s}(\vec{x})\), we can write the image formation model as
\[\mathbf{g}_{z}(\vec{x}) =\mathcal{P}_{z}\left\{\mathbf{p}(\vec{x})\odot\mathbf{s}(\vec{x})\right\},\] \[\mathbf{I}_{z}(\vec{x}) =|\mathbf{g}_{z}(\vec{x})|^{2}, \tag{1}\]
where \(\vec{x}\) is the 2D spatial coordinate at the SLM, \(\mathbf{g}_{z}(\cdot)\) and \(\mathbf{I}_{z}(\cdot)\) are, respectively, the electric field and intensity a distance \(z\) from the SLM, and \(\mathbf{p}(\cdot)\) is the complex field illuminating the SLM, which is most commonly a plane wave of unit energy, \(\mathbf{p}(\vec{x})=1\). Finally, \(\odot\) denotes pointwise multiplication, and \(\mathcal{P}_{z}\{\cdot\}\) is the angular spectrum method (ASM) propagation operator defined as
\[\mathcal{P}_{z}\left\{\mathbf{s}(\vec{x})\right\} =\mathcal{F}^{-1}\big\{\mathcal{F}\left\{\mathbf{s}(\vec{x})\right\}\odot\mathcal{H}_{z}(\vec{u})\big\}, \tag{2}\] \[\mathcal{H}_{z}(\vec{u}) =\begin{cases}\exp\left(\frac{2\pi jz}{\lambda}\sqrt{1-\|\lambda\vec{u}\|^{2}}\right),&\text{if }\|\vec{u}\|<\frac{1}{\lambda},\\ 0,&\text{otherwise},\end{cases} \tag{3}\]
where \(\mathcal{F}\{\cdot\}\) is the 2D Fourier transform operator and \(\vec{u}\) is the 2D coordinate in frequency space (Matsushima and Shimobaba, 2009).
Here, we assume monochromatic illumination with wavelength \(\lambda\); see Supplement for an extension to broadband sources.
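For reference, the band-limited ASM propagator of Eqs. 2 and 3 takes only a few lines of NumPy. The sketch below is a minimal monochromatic illustration of this standard operator; the function and argument names are ours, not code from our implementation.

```python
import numpy as np

def asm_propagate(s, z, wavelength, pitch):
    """Band-limited angular spectrum propagation (Eqs. 2-3).

    s: complex 2D field sampled at pitch (meters); z and wavelength in meters.
    """
    ny, nx = s.shape
    # Frequency-space coordinates u of Eq. 3, in cycles per meter
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    u2 = fx[None, :] ** 2 + fy[:, None] ** 2
    # Keep only the propagating band, ||u|| < 1 / wavelength
    band = u2 < (1.0 / wavelength) ** 2
    root = np.sqrt(np.clip(1.0 - wavelength**2 * u2, 0.0, None))
    H = np.where(band, np.exp(2j * np.pi * z / wavelength * root), 0.0)
    return np.fft.ifft2(np.fft.fft2(s) * H)
```

With this operator, Eq. 1 is simply `I_z = np.abs(asm_propagate(p * s, z, wavelength, pitch)) ** 2`.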
To generate a hologram, one can use first order methods like gradient descent to optimize for an SLM pattern that creates a given target image:
\[\mathbf{s}=\underset{\mathbf{s}}{\mathrm{argmin}}\sum_{z}\|\mathbf{I}_{z}(\vec{x })-\hat{\mathbf{I}}_{z}(\vec{x})\|_{2}^{2}, \tag{4}\]
where \(\hat{\mathbf{I}}_{z}(\vec{x})\) is the target intensity at a given plane, and optimization is performed over a dense range of propagation distances in the volume of interest.
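A minimal PyTorch sketch of this first-order optimization, written for a single source and a complex SLM, is shown below. It assumes a differentiable propagator `propagate(field, z)` (e.g., a torch port of the NumPy sketch above); all names are illustrative rather than our exact implementation.

```python
import torch

def optimize_complex_slm(targets, propagate, iters=500, lr=0.05):
    """Gradient descent on Eq. 4. targets maps distance z -> target intensity."""
    ny, nx = next(iter(targets.values())).shape
    # Parameterize the complex SLM pattern by amplitude and phase
    amp = torch.ones(ny, nx, requires_grad=True)
    phase = (2 * torch.pi * torch.rand(ny, nx)).requires_grad_()
    opt = torch.optim.Adam([amp, phase], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        s = amp * torch.exp(1j * phase)
        loss = sum(((propagate(s, z).abs() ** 2 - t) ** 2).mean()
                   for z, t in targets.items())
        loss.backward()
        opt.step()
    return (amp * torch.exp(1j * phase)).detach()
```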
To encourage natural defocus cues, we render the target images with realistic blur based on incoherent illumination where the blur kernel size is determined by the maximum diffraction angle of the SLM (see Supplement for details). However, the holographic display aims to control a 3D volume of light using only a single 2D SLM pattern, making the optimization problem overdetermined. As a result, the 3D volume cannot be matched exactly and uncontrollable speckle noise is visible in the image, particularly as the image volume grows.
### Despeckling with Multiple Illumination Sources
Our goal is to reduce speckle in holographic displays. Our basic strategy is common in the despeckling literature: produce several versions of the image that each have a unique speckle pattern, and when these copies of the image are superimposed, the speckle is reduced through averaging (Goodman, 2007). To create different versions of the image, we propose using multiple sources of illumination. When sources are placed at different locations behind a collimating lens, as shown in Fig. 2a, each source illuminates the SLM from a different angle:
\[\mathbf{p}(\vec{x};\vec{m}_{i})=e^{j(\vec{x}\cdot\vec{m}_{i})}, \tag{5}\]
where \(\cdot\) denotes inner product and \(\vec{m}_{i}\) is the phase slope (in radians per meter) of the \(i\)-th source at the SLM plane, which is related to the illumination angle of incidence by
\[\vec{\theta}_{i}=\frac{\lambda\vec{m}_{i}}{2\pi}. \tag{6}\]
Note that, unlike the work of Jo et al. (2022), we are not using the sources to expand etendue. Therefore, we choose small slopes for \(\vec{m}_{i}\), within the range of angles that the SLM is able to create natively.
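Before analyzing the combined image, the averaging argument behind this strategy can be verified numerically: speckle contrast (standard deviation over mean) falls roughly as \(1/\sqrt{N}\) when \(N\) independent intensity patterns are averaged (Goodman, 2007). The sketch below is a self-contained toy demonstration, not part of our pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_intensity(shape):
    # Emulate fully developed speckle: random-phase field, crudely band-limited
    # so that each output pixel sums many random phasors
    F = np.fft.fft2(np.exp(2j * np.pi * rng.random(shape)))
    F[shape[0] // 4:, :] = 0
    F[:, shape[1] // 4:] = 0
    I = np.abs(np.fft.ifft2(F)) ** 2
    return I / I.mean()

for N in [1, 4, 16, 64]:
    I = np.mean([speckle_intensity((256, 256)) for _ in range(N)], axis=0)
    print(N, I.std() / I.mean())  # contrast drops roughly as 1 / sqrt(N)
```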
If the different sources are all incoherent with each other, they will not exhibit interference effects at the image plane when they are combined. Instead, the multisource image is the sum of the individual source intensities as follows:
\[\mathbf{g}_{z,m_{i}}(\vec{x}) =\mathcal{P}_{z}\left\{\mathbf{p}(\vec{x};\vec{m}_{i})\odot\mathbf{s}(\vec{x})\right\}, \tag{7}\] \[\mathbf{I}_{z}(\vec{x}) =\sum_{i}\left|\mathbf{g}_{z,m_{i}}(\vec{x})\right|^{2}. \tag{8}\]
This achieves part of our goal: each source creates a unique speckle pattern, so speckle contrast is reduced when the individual source intensities are combined. However, for a useful display, the final output intensity \(\mathbf{I}_{z}\) should have the potential to be shaped into arbitrary target images, which is not the case in this configuration. To demonstrate the problem, we make the small angle approximation to the ASM kernel, and derive the following relationship (see Supplement for derivation):
\[\mathbf{g}_{z,m_{i}}(\vec{x})=\mathbf{g}_{z,0}\left(\vec{x}+\frac{\lambda z}{2\pi}\vec{m}_{i}\right)\odot e^{j(\vec{x}\cdot\vec{m}_{i})} \tag{9}\]
where \(\mathbf{g}_{z,0}(\cdot)\) is the electric field from an on-axis plane wave (\(\vec{m}=\vec{0}\)). This means that the output electric field from the \(i\)-th source is a translated copy of the on-axis electric field, up to a carrier wave. In other words, a single ideal SLM has infinite memory effect (Freund et al., 1988).
Based on Eqs. 8 and 9, the total intensity with all the sources can be written as
\[\mathbf{I}_{z}(\vec{x})=\left|\mathbf{g}_{z,0}(\vec{x})\right|^{2}*\sum_{i}\delta\left(\vec{x}+\frac{\lambda z}{2\pi}\vec{m}_{i}\right), \tag{10}\]
where \(*\) denotes a 2D convolution. Therefore, producing a given output image \(\mathbf{I}_{z}\) requires deconvolving the source locations. This is a very poorly posed problem, regardless of the \(\vec{m}_{i}\) used, since the
Figure 2. **System Architecture:** Multisource holography uses an array of mutually incoherent sources that each generate a plane wave at a different angle. (a) With a single SLM, all sources are modulated with the same pattern but propagate in different directions, creating replicas of the content. Generating an image with this configuration is a poorly posed problem. (b) We propose adding a second SLM a small distance \(\Delta z\) in front of the first. This enables a different modulation function for different angles of incidence, enabling unique content for each source. By jointly optimizing the two SLM patterns, the holograms from each source line up correctly, removing replica artifacts. Since the sources are incoherent with each other, their intensities add at the image plane which suppresses speckle through averaging.
result of the deconvolution, \(\left|\mathbf{g}_{z,0}(\vec{x})\right|^{2}\), is a physical quantity that must be nonnegative. As a result, multiple sources with a single SLM is not a viable solution for holographic displays.
### Multisource Holography with Two Modulators
In order to display arbitrary content with multiple incoherent sources, there cannot be strong correlations between the different source outputs. We want each angle of illumination to generate a unique pattern, and this requires that the modulator have an angularly selective response. We achieve this requirement by adding a second SLM a distance \(\Delta z\) from the first, as shown in Fig. 2b, which yields the following image formation model:
\[\mathbf{I}_{z}(\vec{x})=\sum_{i}\left|\mathcal{P}_{z}\left\{\mathcal{P}_{ \Delta z}\left\{\mathbf{p}(\vec{x};\vec{m}_{i})\odot\mathbf{s}_{1}(\vec{x}) \right\}\odot\mathbf{s}_{2}(\vec{x})\right\}\right|^{2}, \tag{11}\]
where \(\mathbf{s}_{1}(\cdot)\) and \(\mathbf{s}_{2}(\cdot)\) are the modulation functions of the two SLMs. To see how the second SLM breaks the correlation between sources, let \(\mathbf{g}_{\Delta z,0}(\vec{x})\) be the electric field just before the second SLM given on-axis illumination. Applying Eq. 9, we can describe the electric field after the second SLM as
\[\mathbf{g}_{\Delta z,0}\left(\vec{x}+\frac{\lambda\Delta z}{2\pi}\vec{m}_{i}\right)\odot\mathbf{s}_{2}(\vec{x})\odot e^{j(\vec{x}\cdot\vec{m}_{i})}. \tag{12}\]
Here, the electric field is translated based on the source angle, then pointwise multiplied by the modulation function of the second SLM. As long as the relative translation between any two sources is at least one SLM pixel, then the output fields (and therefore the final intensities) will be substantially decorrelated, breaking the memory effect (Freund et al., 1988). This gives the following condition on the source spacing:
\[\Delta m\geq\frac{2\pi p}{\lambda\Delta z}, \tag{13}\]
where \(\Delta m\) is the spacing between neighboring sources (\(\Delta m=\|\vec{m}_{i}-\vec{m}_{j}\|\)), and \(p\) is the SLM pixel pitch, assumed to be the same for both SLMs.
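As a quick numeric check of Eq. 13 with the parameter values used later in our simulations (\(p=8\) um, \(\lambda=520\) nm, \(\Delta z=2\) mm, see Sec. 4.3):

```python
import numpy as np

p, wavelength, dz = 8e-6, 520e-9, 2e-3  # meters
dm_min = 2 * np.pi * p / (wavelength * dz)
print(dm_min * 1e-3, "rad/mm")  # ~48.3 rad/mm
```

This is consistent with the 50 rad/mm grid spacing used in Sec. 4.3, which places all sources just outside the memory effect region.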
As long as Eq. 13 is met, our multisource holography setup can create different content for each source. Conceptually, each source "sees" a different relative translation between the two SLM patterns. Therefore, we would like to design the SLMs so each of these translations creates the desired target image for each source. This is similar to the work of Peng et al. (2017), where pairs of static DOEs are combined with different translations to create unique images. Based on their results, where several unique holograms were created from a single DOE pair, we expect our system can also create the desired output for more than two sources simultaneously, even though there are only two SLMs. In other words, we expect the system to be compressive, which allows our system to generate more incoherent copies of the image, resulting in more despeckling, than other systems (for example, temporal multiplexing) with the same number of degrees of freedom.
In practice, we jointly solve for both SLM patterns using the model in Eq. 11 and solving the optimization problem,
\[\mathbf{s}_{1},\mathbf{s}_{2}=\operatorname*{argmin}_{\mathbf{s}_{1},\mathbf{ s}_{2}}\sum_{z}\|\mathbf{I}_{z}(\vec{x})-\mathbf{\hat{I}}_{z}(\vec{x})\|_{2}^{2}, \tag{14}\]
using ADAM (Kingma and Ba, 2014).
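A condensed PyTorch sketch of this joint optimization (Eqs. 11 and 14) follows. Here `propagate` is again a differentiable ASM operator, both SLMs are taken as phase-only for brevity, and all names are illustrative; our actual implementation additionally upsamples the fields and supervises many planes.

```python
import torch

def multisource_loss(phi1, phi2, slopes, targets, propagate, dz, X, Y):
    """Data term of Eq. 14 under the two-SLM model of Eq. 11.

    phi1, phi2: SLM phase patterns; slopes: list of (mx, my) in rad/m;
    X, Y: SLM-plane coordinates in meters; targets: {z: target intensity}.
    """
    s1, s2 = torch.exp(1j * phi1), torch.exp(1j * phi2)
    loss = 0.0
    for z, t in targets.items():
        I = 0.0
        for mx, my in slopes:
            p_i = torch.exp(1j * (mx * X + my * Y))  # plane wave of Eq. 5
            mid = propagate(p_i * s1, dz) * s2       # field after second SLM
            I = I + propagate(mid, z).abs() ** 2     # incoherent sum (Eq. 11)
        loss = loss + ((I - t) ** 2).mean()
    return loss

# Joint update of both SLM patterns, e.g.:
#   opt = torch.optim.Adam([phi1, phi2], lr=0.05)
#   multisource_loss(...).backward(); opt.step()
```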
## 4. Simulation
To demonstrate the improvements that multisource holography provides, we optimize holograms in simulation to generate a focal stack with rendered incoherent blur, similar to what one would see in a natural scene. Our focal stack, shown in Fig. 3a, covers a 10 mm range in SLM space (from \(z=15\) mm to \(z=25\) mm) with a blur radius of 4 pixels per millimeter of defocus, matched to the maximum diffraction angle of an SLM with 8 um pixels (see Supplement for an explanation of these parameters). Simulations are conducted in red-green-blue (\(\lambda=640\) nm, \(520\) nm, \(450\) nm) assuming monochromatic illumination, and we supervise the loss at 15 evenly spaced planes. Optimization is implemented in PyTorch on an Nvidia A6000 GPU at 2\(\times\) the SLM resolution in each direction to avoid aliasing.
### Single Source Holograms
Using the traditional configuration with a single source and single SLM, it's difficult to create practical, high quality holograms with natural defocus. To demonstrate the challenges, we solve Eq. 4 using the model in Eq. 1, where we assume a complex SLM. Although most off-the-shelf modulators control either phase or amplitude, but not both, we choose a complex SLM for this simulation as it has the same number of degrees of freedom as our 2-SLM multisource approach.
For single source holograms, the SLM initialization has a big impact on the result. We consider two different initializations: constant (in both phase and amplitude) versus uniform random. In both cases, after initialization we iteratively optimize the SLM pattern using ADAM in PyTorch based on the loss function in Sec. 3; however, even after optimization the final SLM pattern is influenced by the starting point. For example, with constant initialization, the phase of \(\mathbf{g}_{z}(\vec{x})\) tends to be low variance (Yoo et al., 2021), resulting in a smooth phase hologram (Fig. 3b). Similarly, with random initialization, \(\mathbf{g}_{z}(\vec{x})\) tends to be high variance, resulting in a random phase hologram (Fig. 3c). As shown in Fig. 3b, the smooth phase simulation has low speckle noise, but exhibits unnatural ringing in defocused regions. In contrast, the random phase hologram has more natural defocus effects but contains substantial speckle noise.
Although ringing may seem like an acceptable trade-off for speckle removal, smooth phase holograms are impractical for near-eye displays due to their eyebox energy distribution, shown in the bottom row for the green channel. The eyebox, which is created by an eyepiece of focal length \(f\), is the area where the user's pupil is located (see Fig. 2). The electric field at the eyebox, \(\mathbf{e}(\cdot)\), is described by
\[\mathbf{e}(\vec{u})=\mathcal{F}\left\{\mathbf{g}_{z_{0}}(\vec{x})\right\}, \tag{15}\]
where \(z_{0}\) is the propagation distance from the SLM to the focal plane of the eyepiece, and \(\vec{u}\) is the spatial coordinate at the eyebox (\(\vec{u}=\vec{x}/\lambda f\)) (Goodman, 2005).
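In code, Eq. 15 amounts to a centered 2D FFT of the field at the eyepiece focal plane; a one-function sketch with illustrative names:

```python
import numpy as np

def eyebox_intensity(g_z0):
    # Eq. 15: Fourier transform of the field at the eyepiece focal plane;
    # fftshift centers the zero-frequency (on-axis) component
    return np.abs(np.fft.fftshift(np.fft.fft2(g_z0))) ** 2
```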
The eyebox of the smooth phase hologram (Fig. 3b, bottom) has a very strong peak in the center, with almost 5 orders of magnitude more energy in the peak than in the eyebox periphery. This presents several challenges for a practical display since the eyebox energy is mostly concentrated into an area only a handful of microns across. First, this means that small eye movements, even those contained
within the theoretical eyebox (Kuo et al., 2020), cause the eye to miss the peak, and then the user will not see the image. See Fig. 9 for an example of this effect. Second, since the light is concentrated into a small point on the user's pupil, "floaters" (debris in the vitreous humor) or other imperfections in the eye can cause substantial artifacts in the hologram. These imperfections are barely noticeable in daily life since the image on the retina is typically an integral over the full pupil; however, in a smooth phase hologram, only a small part of the pupil is sampled. Computationally removing the effects of floating debris is unrealistic as it would require detailed, real-time mapping of every user's eyes. Even if eye imperfections could be overcome, user studies suggest that the small eyebox of smooth phase holograms cannot effectively drive accommodation: Kim et al. (2022b) found much lower accommodative gain for smooth phase holograms compared to random phase holograms. As a result of all of these restrictions, we believe smooth phase holograms cannot achieve compelling 3D content with good image quality for all users.
Random phase holograms, on the other hand, simulate a diffuse surface at the object, which scatters light to cover the full theoretical eyebox (Fig. 3c, bottom), but this comes at the cost of speckle. Although random phase holograms can be low speckle for a 2D scene, for a 3D focal stack the degrees of freedom on the SLM are insufficient to control speckle at all planes, even with a complex modulator. Not only does this speckle hide detail and make images visually unappealing, the high frequency speckle can also interfere with the human accommodation response which expects low spatial frequencies in defocused regions (Kim et al., 2022b). As a result, with a single source, neither smooth nor random phase holograms can produce high quality images that drive accommodation without additional speckle reduction.
### Multisource Holograms
Our multisource holography approach achieves the benefits of random phase holograms while adding substantial despeckling to reduce noise and produce more natural defocus cues. However, as described in Sec. 3.1, adding more sources with a single SLM results in a poorly posed optimization problem that is not able to display arbitrary content. Figure 3d shows an example with a \(4\times 4\) grid of sources and a single complex SLM. Although there is substantial noise reduction compared to random phase with a single source, the resulting image contains low frequency artifacts, as expected, due to the strong correlations between the outputs of each source.
Figure 3. **Single Source vs. Multisource (Simulation):** In simulation, we compare four methods for generating a target focal stack (a) with natural defocus cues. (b) A traditional single source hologram optimized with smooth phase has no speckle but there are ringing artifacts in the defocused regions. More importantly, the energy distribution in the eyebox (bottom row) is extremely non-uniform (note that plots are logarithmic) with a large peak in the center, which makes the display sensitive to eye imperfections and requires precise, low latency eye tracking and 3D pupil steering for a usable display. (c) A single source hologram with random phase achieves an approximately uniform eyebox distribution, but the image is corrupted by severe speckle. (d) Multiple sources reduce speckle, but with a single SLM, correlations between the outputs of each source create haze and doubling in the displayed hologram. (e) Our multisource holography approach uses two SLMs (here, one phase SLM and one amplitude SLM) to break correlations between the individual source outputs. This removes the low frequency artifacts in (d) while preserving the speckle reduction. Although (e) uses two SLMs, all simulations have the same degrees of freedom since (b)-(d) are simulated with a single complex SLM. Out of these approaches, only multisource holography is capable of creating high quality focal stacks with a practical energy distribution in the eyebox.
However, even with a single SLM, the multisource hologram is able to create an approximately uniform eyebox when initialized with a random pattern, albeit with some periodic structure due to the sources.
Our final design uses an array of sources with two SLMs, as described in Sec. 3.2, where the gap between the SLMs creates an angularly selective response that breaks the correlations between sources. Figure 3e shows a simulation of this configuration with a \(4\times 4\) grid of sources such that all sources are outside the memory effect region (Eq. 13) for all wavelengths of interest. Of our two SLMs, spaced \(\Delta z=2\) mm apart, the first SLM modulates phase only, and the second SLM modulates amplitude only, creating the exact same degrees of freedom as in the prior simulations. We initialize the SLMs with uniform random phase and amplitude, respectively. Figure 3e demonstrates that the second SLM successfully breaks the correlations between the sources, removing the low frequency artifacts of Fig. 3d while substantially suppressing speckle compared to Fig. 3c. This simulation shows that multisource holography can create natural defocus cues with low speckle, no ringing artifacts, and uniform energy distribution in the eyebox.
### Source Configuration Analysis
The number of sources and their arrangement are key design choices in multisource holography, so next we analyze the impact of these parameters. Figure 4a illustrates the effect of source spacing. Sources were arranged in a \(4\times 4\) grid and the distance between neighboring sources, \(\Delta m\), was varied. We simulate a \(\Delta z=2\) mm gap between the SLMs, and as before, we use a phase SLM as the first modulator and an amplitude SLM as the second, each with an 8 um pixel pitch. Simulations were done for a single wavelength of 520 nm, and the number of sources within the memory effect region, defined in Eq. 13, are indicated by the background color in the plots.
Figure 4a (top plot) shows PSNR as a function of \(\Delta m\) for a natural scene. When the sources are within the memory effect region of the two SLMs, they create correlated patterns. Similar to the scenario with only one SLM (Sec. 3.1), the resulting output image is described by a convolution (Eq. 10). In this case, since the sources are close together, this creates a small blur instead of the dramatic ghost artifacts in Fig. 3d. Since this blur reduces noise effectively, and PSNR is not a metric that's sensitive to high resolution features, the PSNR is highest at small source spacing. However, this blur is not desirable for a high resolution holographic display.
To quantify the system's ability to display high frequency features, we simulate a binary grating with a period of two SLM pixels, the highest spatial frequency the SLM can produce. We optimize for a focal stack and measure the Michelson contrast, \((I_{\text{max}}-I_{\text{min}})/(I_{\text{max}}+I_{\text{min}})\) in focus, averaged over a \(100\times 100\) pixel area. Assuming an 8 um SLM pixel and an eyepiece with focal length \(f=27.5\) mm, this corresponds to the contrast at 30 cycles per degree or 1 arcmin resolution, on par with the human visual system. We test with the focal plane at three different locations in the volume (\(z=15.7\) mm, 20 mm, 24.3 mm) and report the average contrast.
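One plausible implementation of this contrast estimator for a column-aligned, two-pixel-period grating is sketched below; the exact crop and averaging used in our evaluation may differ.

```python
import numpy as np

def michelson_contrast(patch):
    # Average the bright and dark stripes of a vertical 2-pixel-period grating,
    # then apply (I_max - I_min) / (I_max + I_min)
    a = patch[:, 0::2].mean()
    b = patch[:, 1::2].mean()
    i_max, i_min = max(a, b), min(a, b)
    return (i_max - i_min) / (i_max + i_min)
```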
Figure 4a (bottom plot) shows this contrast as a function of source spacing. When \(\Delta m=0\), the sources are on top of each other. This is equivalent to a single source, which, although noisy, can display high resolution features. Once the sources move slightly apart, they are fully within the memory effect region of the two SLMs, so the output is blurred and contrast drops. As the spacing between the sources increases, progressively more sources leave the memory effect region and the contrast at 1 arcmin increases, demonstrating
Figure 4. **Source Configuration Analysis:** We assess the impact of source spacing (a) and number of sources (b) on peak signal-to-noise ratio (PSNR) for a natural scene (top) and contrast at 1 arcmin (bottom). Here, we assume the system is scaled so 1 arcmin corresponds to the maximum SLM resolution. (a) When sources are close together, all sources are within the memory effect region (i.e. do not meet Eq. 13), so each source generates a similar output, creating blur in the final image. Although a small blur increases PSNR, it decreases resolution creating a dip in the contrast metric at small spacings. As the source spacing increases, more sources leave the memory effect region and contrast at 1 arcmin increases, demonstrating that full resolution is possible when the sources are spaced sufficiently far apart. Example images at two different spacings (indicated by the orange dots) are shown on the right. (b) As the number of sources grows, PSNR increases due to better speckle suppression. However, for large numbers of sources the SLMs cannot fully control the outputs of all sources, creating haze in the final image (see 100 source example). This effect is captured by the contrast metric, which decreases after about 36 sources.
that multisource holography can create high resolution features when sources are spaced sufficiently far apart. Since the memory effect cutoff (Eq. 13) also depends on the gap between the SLMs, a similar trend holds when \(\Delta z\) is varied; see Supplement for an analysis of \(\Delta z\).
Next, we consider how the number of sources impacts hologram quality. Figure 4b shows PSNR (top) and contrast at 1 arcmin (bottom) as a function of the number of sources. Sources are arranged in an evenly spaced square grid, with \(\Delta m=50\) radians/mm spacing, which is outside the memory effect region. As the number of sources increases, there is additional despeckling due to more incoherent averaging, and this results in an increase in both PSNR and contrast at 1 arcmin (note that contrast is also negatively affected by speckle). Although there are only 2 SLMs, the image quality continues to improve far beyond two sources. This demonstrates the compressive nature of the system since it implies that each source is still able to create the correct pattern at full resolution using a limited number of degrees of freedom.
However, compressive systems still have limits and eventually there are not sufficient degrees of freedom to uniquely create the correct content for each source. Looking at the simulated image with 100 sources, one can see haze caused by some sources creating incorrect content. Once again, PSNR does not reflect this trend, since the additional haze (which is not well captured by PSNR) is balanced by further speckle reduction. However, our contrast metric is a better proxy: around 36 sources, the contrast at 1 arcmin starts to decrease, reflecting this performance limit. This suggests that the best image quality is with a \(6\times 6\) grid, which achieves 29.4 dB PSNR, over 10 dB higher than the single source baseline.
We'd like to point out that there is a substantial design space for multisource holography. Future work includes analyzing sources that are not confined to a grid, exploring extended sources, and varying the source intensities. In addition, the source parameters could be optimized specifically for a dataset of natural images, analogous to the work of Baek et al. (2021). However, these explorations are out of scope for this paper.
### Time Multiplexing Comparison
So far we have restricted our comparisons to single source holograms made with a single frame, but a common approach to speckle reduction is time multiplexing. In this approach, several holograms are displayed in rapid succession, and due to persistence of vision, the user sees an average of the displayed frames. High speed modulators have made this method increasingly practical, and prior work (Choi et al., 2022; Lee et al., 2022) has demonstrated that temporal multiplexing can create natural defocus blur with a uniform eyebox.
Our method is not meant to be a replacement to temporal multiplexing; the two approaches are orthogonal and can be combined for even more speckle reduction. Since noise reduction goes with the square root of the number of uncorrelated images, temporal multiplexing provides diminishing returns with increasing frame rate. Additional despeckling may be necessary to reduce noise to an imperceptible level, even with high speed modulators.
In addition, reducing the necessary temporal bandwidth could help with another fundamental challenge in holography: limited etendue, which results in a trade-off between FoV and eyebox size. One practical option to overcome this limitation is to scan the location of either the FoV or eyebox (Lee et al., 2020), enabling expanded etendue without eye tracking. However, scanning also requires temporal bandwidth, which is no longer available for despeckling. By providing substantial despeckling in a single frame, multisource holography could open new paths for increasing etendue.
Figure 5 compares our multisource holography approach to temporal multiplexing with 6 jointly optimized frames per color. Similar to recent work using temporal multiplexing (Choi et al., 2021), our holograms are computed using iterative optimization where all multiplexed frames are summed together before computing the loss function. Then, all the frames are simultaneously updated by the optimizer. As in prior simulations, we target a focal stack with 15 planes and natural defocus blur. Our multisource simulation uses one phase and one amplitude SLM, 25 sources with \(\Delta m=75\) rad/mm, and only a single frame per color.
Qualitatively, the two approaches have similar noise levels and image quality, with multisource visually outperforming temporal
Figure 5. **Comparison with Temporal Multiplexing (Simulation): Our multisource approach with no temporal multiplexing (one frame per color) outperforms a traditional single source hologram with a phase only SLM and 6 jointly optimized frames per color when generating a focal stack with natural blur. Our multisource simulation uses \(5\times 5\) sources and \(\Delta m=75\) rad/mm.**
multiplexing in white regions. Quantitatively, multisource holography exceeds 6 frame temporal multiplexing in PSNR and structural similarity index measure (SSIM) over the focal stack. In the temporal multiplexing example, we simulated a phase only SLM, which differs from the simulations in Sec. 4.1 and Sec. 4.3, since this is the most realistic choice given currently available hardware. In fact, most high speed SLMs are even more restricted; the SLMs capable of this much multiplexing are typically binary or have limited bit depth, although in this simulation we assume no quantization. As the number of temporally multiplexed frames increases, the quality eventually exceeds that of multisource holography (see Supplement for an example), but it comes at the cost of temporal bandwidth.
## 5. Experimental System Calibration
We have shown in simulation that multisource holography is a promising technique, but in practice, achieving high quality experimental results requires accurate knowledge of system parameters such as the source locations and positions of the two SLMs. To calibrate our experimental system, we adapt the approaches of Peng et al. (2020) and Chakravarthula et al. (2020) to multisource holography by designing a physics-inspired forward model where unknown parameters are learned from a dataset of experimentally captured SLM-image pairs. Next, we go into the details of this model and the calibration procedure.
### System Model with Learnable Parameters
_SLM Model._ Our model starts with the digital values sent to the SLMs. For each SLM, these values are passed through a learned lookup table (LUT) which describes the mapping from digital input to phase. The LUT is parameterized by 256 coefficients (one for each possible input value), and the LUT is made differentiable using 1D interpolation.
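Such a LUT can be made differentiable with linear interpolation between the 256 learned coefficients; the minimal PyTorch sketch below shows one way to parameterize it (the initialization and names are our illustration).

```python
import torch

class PhaseLUT(torch.nn.Module):
    """Learnable mapping from 8-bit SLM values to phase."""

    def __init__(self):
        super().__init__()
        # One coefficient per digital value; start from a linear response
        self.table = torch.nn.Parameter(torch.linspace(0, 2 * torch.pi, 256))

    def forward(self, v):
        # v: float tensor of digital values in [0, 255]
        lo = v.floor().long().clamp(0, 254)
        frac = (v - lo.float()).clamp(0.0, 1.0)
        return torch.lerp(self.table[lo], self.table[lo + 1], frac)
```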
Next, the phase is convolved with a small learnable kernel that represents cross-talk between pixels due to field fringing (Aper et al., 2004; Moser et al., 2019; Persson et al., 2012). Field fringing is a phenomenon of liquid-crystal-on-silicon (LCoS) SLMs in which the output phase is blurred by the gradual transition at pixel boundaries of the electric field that modulates the liquid crystal layer. Since this effect is sub-pixel, we upsample the phase values by 2\(\times\) in each direction before applying the convolution kernel (5 pixels in the upsampled space).
For each SLM, the phase with field fringing is converted to an electric field (assuming uniform amplitude), yielding the complex modulation functions \(\mathbf{s}_{1}(\vec{x})\) and \(\mathbf{s}_{2}(\vec{x})\).
_Source Model._ Each source is assumed to be a plane wave with learnable angle of incidence and learnable relative intensity. For each source, we parameterize the angle of incidence as a 2D location in Fourier space; simulating a delta function at that location and then taking the Fourier transform and multiplying by the relative intensity yields the input field for a given source, \(\mathbf{p}(\vec{x};\vec{m}_{i})\).
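Since the Fourier transform of a shifted delta is a plane wave, an equivalent and directly differentiable parameterization is to generate the plane wave analytically from a learnable phase slope and relative intensity; the sketch below shows this simplified form (names are illustrative, not our exact implementation).

```python
import torch

class PlaneWaveSource(torch.nn.Module):
    """Source with learnable angle (via phase slope m_i) and intensity."""

    def __init__(self, m_init):
        super().__init__()
        self.m = torch.nn.Parameter(torch.tensor(m_init, dtype=torch.float32))
        self.log_intensity = torch.nn.Parameter(torch.zeros(()))

    def forward(self, X, Y):
        # X, Y: SLM-plane coordinates (meters); returns p(x; m_i) of Eq. 5
        amp = torch.exp(0.5 * self.log_intensity)  # sqrt of relative intensity
        return amp * torch.exp(1j * (self.m[0] * X + self.m[1] * Y))
```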
_Propagation Model._ We adapt the ideal ASM propagation model (Eq. 2) to include aberrations by multiplying the ASM kernel (Eq. 3) by a complex learnable pupil function. To further enable modeling of spatially varying aberrations, different locations of the input field should have variable pupil functions. Therefore, we learn a \(9\times 16\) grid of pupil functions, and we perform bilinear interpolation to get the intermediate values.
However, applying a fully-spatially varying model is very computationally intensive. To avoid computing a different pupil function for each point of the input field, we instead take a stochastic, patch-based approach: during optimization, we randomly choose a patch of the input field (about \(1200\times 1200\) pixels in the upsampled coordinates) and use the pupil function that corresponds to the center of that patch. Over the course of optimization, this approximates the smoothly varying aberrations, with the added advantage of reducing the memory requirements of the model by only simulating a fraction of the FoV in each iteration. See Supplement for more details on how aberrations are parametrized.
_SLM Alignment._ If the two SLMs are not perfectly aligned with sub-pixel accuracy, we need to account for their relative positions in the model. After propagating the field from the first SLM, we apply a learned warping function that transforms the field into the coordinate space of the second SLM. Our warping function, based on the thin-plate spline model (TPS) of Duchon (1977), can account for non-radial distortion between the two SLMs, enabling accurate alignment even when there are non-ideal optics between the modulators. The warping is implemented in a differentiable manner in Kornia (Riba et al., 2020) using bilinear interpolation separately on the real and imaginary parts of the complex field.
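Since bilinear sampling operates on real tensors, the complex field is warped by interpolating its real and imaginary parts separately. A minimal PyTorch sketch, assuming the TPS model has already been fit and converted into a normalized sampling grid:

```python
import torch
import torch.nn.functional as F

def warp_complex_field(field, grid):
    """Warp a complex (H, W) field into the second SLM's coordinates.

    grid: (1, H, W, 2) sampling locations in [-1, 1], e.g., from a fitted TPS.
    """
    stacked = torch.stack([field.real, field.imag]).unsqueeze(0)  # (1, 2, H, W)
    warped = F.grid_sample(stacked, grid, mode="bilinear", align_corners=True)
    return torch.complex(warped[0, 0], warped[0, 1])
```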
_Model Summary._ We put together all the components of the model as follows: starting with the first SLM, we use our SLM model to convert the digital input values into a complex modulation function. This is multiplied by the source field, then propagated a distance \(\Delta z\)
Figure 6. **Schematic of Experimental Setup: Our benchtop prototype uses two SLMs with a 4\(f\) system in between. A second 4\(f\) relays both SLMs to the correct positions in front of a bare sensor, which is mounted on a linear motion stage. Irises in the Fourier planes remove higher orders from the SLMs. To create the multiple sources, we use a superluminescent light emitting diode (SLED) passed through a fiber splitter. Due to the low coherence of the SLED, the fiber outputs are mutually incoherent, as required by our method. A beamsplitter allows for switching between single source and multisource illumination for comparisons.**
using our modified ASM propagator with spatially varying pupil functions. The field is then warped to match the coordinate space of the second SLM and multiplied by the complex modulation function \(\mathbf{s}_{2}(\vec{x})\), which, once again, is computed with the SLM model described above. Finally, we apply the spatially varying ASM propagator a second time to propagate a distance \(z\), then take the absolute value squared to simulate intensity on the sensor. This process is repeated for each source in the system while summing the contributions.
### Calibration Procedure
To fit the learnable parameters of our model, we collect an experimental dataset of SLM-image pairs and optimize for the unknown parameters using gradient descent in PyTorch. We use random patterns on the SLM, which have similar statistics to the random phase holograms we aim to display. To further facilitate the optimization process, we apply a Gaussian filter on the input phase with a standard deviation varying from 4 pixels to zero pixels (no blur). This creates training data with larger features that are especially helpful when optimizing the TPS and the source position parameters, which do not converge correctly with high-frequency content alone. We capture datasets with both single source illumination and multisource illumination.
Since the low frequency SLM inputs are less sensitive to field fringing and aberrations, we use the single source blurred patterns to optimize the TPS warping function before fitting the rest of the model. We also optimize a second similar warping function to align the final intensity to the camera capture. Once the alignment functions are close to accurate, we use the remaining single source dataset with all spatial frequencies to fit the other parameters.
After the single source model is optimized, we use the multisource dataset to fit the source locations and intensities. Finally, we fine-tune the other parameters using the multisource data to get the complete model. We repeat this process for each color separately.
Note that unlike many learned models in prior work [14, 15], our model does not contain any black-box neural networks; all parameters are physically meaningful. This limits the number of learnable parameters, which in turn means less training data is required, the model optimizes quickly, and the chance of over-fitting is low. For example, our training dataset contains only about 300 captures per color per source configuration, and training takes approximately 10 minutes on an Nvidia GV100. Although we only capture training data at a single propagation distance \(z\), we find that the model extends well to other planes without retraining.
### Active Camera-in-the-Loop
To highlight the potential of multisource holography, we additionally use the "active" camera-in-the-loop (CiTL) method proposed by Peng et al. [16], where feedback from a camera in the system is used to fine-tune the SLM pattern(s) for a specific image or focal stack. We pre-optimize the SLM patterns using our learned model, display the patterns on the experimental system, and continue optimization while replacing the model output with the captured image before back-propagation. For focal stacks, we capture the experimental images at a different location in the volume at each iteration, and we fine-tune the alignment between the capture to the model output using cross-correlation on a patch-wise basis. Final results are captured after updates are complete, with one static pair of SLM patterns for all depths.
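One way to splice the capture into the computation graph is a straight-through substitution: the loss is evaluated on the real measurement while gradients flow through the simulated model. The sketch below is our paraphrase of one such iteration with illustrative callables, not our exact implementation.

```python
import torch

def citl_step(opt, simulate, capture, target):
    """One active camera-in-the-loop update of the SLM patterns.

    simulate(): differentiable model prediction of the displayed intensity.
    capture(): the corresponding experimental capture (no gradient).
    """
    opt.zero_grad()
    I_sim = simulate()
    with torch.no_grad():
        I_cap = capture()
    # Straight-through: value of the capture, gradient of the simulation
    I = I_cap + I_sim - I_sim.detach()
    loss = ((I - target) ** 2).mean()
    loss.backward()
    opt.step()
    return loss.item()
```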
## 6. Experimental Results
We demonstrate multisource holography on a benchtop experimental system, depicted in Fig. 6. To create the multiple sources, we split the output of a fiber-coupled light source using cascaded 1:4 fiber splitters (Thorlabs TWQ560HA) to create 16 different sources, which are arranged in a \(4\times 4\) grid. By choosing a superluminescent light emitting diode (SLED, Exalos EXC250011-00), which has a very short coherence length, we find that the outputs of the 16 different fibers are mutually incoherent without explicitly adding path length differences. However, the SLED has a spectral bandwidth of about 10 nm, which is not accounted for in our model, and we discuss this limitation more in Sec. 7. Although the spectral bandwidth of a laser would match our model better and result in improved resolution [14], we found that the longer coherence length of a laser made it challenging to consistently break the coherence between fiber outputs, even with added path length differences. This is not a fundamental challenge as one could use an array of laser diodes instead of splitting a single laser output.
The multiple sources are spaced 4 mm apart, held in a 3D printed housing. Combined with the \(f_{c}=500\) mm collimating lens, this yields \(\Delta m=79\) rad/mm, 99 rad/mm, and 110 rad/mm for red, green,
Figure 7. **2D Results (Experiment):** Although a single source random phase hologram can theoretically control speckle well for a 2D image, the experimental 2D capture (top) has visible speckle when one zooms in. Our multisource configuration with \(4\times 4\) sources (middle) has noticeably reduced speckle while maintaining high frequency features. PSNR is shown in the bottom left.
and blue respectively, which are outside the memory effect region of Eq. 13. The angles of incidence at the SLM are within \(\pm 0.69^{\circ}\) (see Eq. 5), which is well within the paraxial approximation as assumed in Sec. 3. A beamsplitter in front of the sources lets the illumination be toggled between the multisource configuration and a traditional single source, and a linear polarizer ensures the beam is correctly polarized for the SLMs.
The system uses two phase only LCoS SLMs (Holoeye Pluto-2.1-VIS-016), and a \(4f\) system with 1:1 magnification (\(f_{1}=f_{2}=200\) mm) relays the first SLM to a distance \(\Delta z=2\) mm behind the second SLM. A second \(4f\) system (\(f_{3}=200\) mm, \(f_{4}=150\) mm) relays the SLMs to the camera sensor. Irises in the Fourier planes of both \(4f\) systems filter higher orders from the SLMs.
SLM patterns are optimized using the calibrated model outlined in Sec. 5. SLMs are initialized with uniform random phase, and we jointly optimize both SLMs, even for the single source case. Different SLM patterns are optimized for each color (central wavelengths at \(638\) nm, \(510\) nm, and \(455\) nm, for red, green, and blue respectively) and displayed in sequence.
Images are captured on a monochrome camera sensor (XIMEA MC089MG-SY), which is mounted on a brushless translation stage (Thorlabs DDS050) to enable focal-stack capture from \(z=15\) mm to \(z=25\) mm, defined in SLM space. Note that the actual distances at the camera are slightly less due to the demagnification of the second \(4f\) system. Since the sensor is monochrome, color results are captured sequentially and combined in post-processing. After capture, images are rectified into the SLM coordinate space using bilinear interpolation, un-modulated areas of the image are cropped out, and the relative intensities of the color channels are adjusted.
### 2D Results
Figure 7 shows a 2D experimental capture on our system at \(z=20\) mm comparing single source and multisource holograms. Although a traditional single source holographic display can theoretically create very high quality 2D holograms, in practice there is still speckle noise visible in a random phase hologram. Even in the 2D scenario where single source performs quite well, multisource holography still provides noticeable despeckling, improving PSNR by 4.7 dB in this example. Both results are optimized with active CiTL as described in Sec. 5.3; see Supplement for results without this fine tuning.
### Focal Stack Results
However, the true benefits of multisource holography become most apparent when displaying 3D content. Using our calibrated model, we optimize the SLM patterns while targeting a focal stack with natural blur. We use the same blur parameters and propagation distances as the simulations (Sec. 4).
Figure 8 shows the experimentally captured results. As expected from our simulations, the hologram made with a single source is severely corrupted by speckle. In comparison, multisource holography can generate low speckle images over the whole volume, complete with natural defocus cues, resulting in a 7.4 dB PSNR increase calculated on the full focal stack. As a reminder, our multisource holograms are random phase, which creates an approximately uniform energy distribution in the eyebox (see Supplement for a visualization), and are produced with only one frame per color. Similar to the 2D images, these results are all captured with active CiTL; versions without active CiTL are included in the Supplement.
Figure 8. **Focal Stack Results (Experiment): Focal stacks created by a single source hologram with random phase (top) suffer from severe speckle noise since there are insufficient degrees of freedom on the SLM to control speckle throughout a 3D volume. Our multisource approach with \(4\times 4\) sources (middle) greatly reduces speckle, enabling experimental focal stacks with natural defocus cues. PSNR calculated over the full focal stack is shown in the bottom left.**
## 7. Discussion
We have demonstrated both in simulation and experiment that multisource holography can provide significant despeckling without resolution loss, enabling focal stacks with realistic blur. However, there are several directions for further investigation.
_Pupil-Aware Holography._ Our holograms (like most in the literature) are simulated assuming the entire eyebox is fully contained within the user's pupil. This is atypical for conventional (non-holographic) near-eye displays where the eyebox is usually larger than the pupil size to give users freedom to move their eyes without leaving the eyebox. However, in a holographic display, substantial artifacts can occur when only a fraction of the eyebox enters the user's eye. Our initial simulations suggest that multisource holography could improve image quality given unknown pupil locations.
To demonstrate, we simulate 2D holograms optimized using the pupil-aware loss proposed by Chakravarthula et al. (2022), in which random pupil locations are sampled during optimization (Fig. 9). Smooth phase holograms (Fig. 9b) have excellent image quality when the pupil is centered, but a pupil at the edge of the eyebox sees a low intensity, completely incorrect version of the image. Random phase holograms (Fig. 9c) have approximately uniform intensity but are corrupted by speckle regardless of pupil position, even for 2D images with no focal cues. In comparison, multisource holography (Fig. 9d) can produce a clean image for pupil locations over the whole eyebox. Extending this concept to light fields (Choi et al., 2022; Padmanaban et al., 2019) is another direction of future work.
_Source Design and Etendue Expansion._ Although we analyzed several important parameters of the source design in Sec. 4.3, we restricted our analysis to a grid of uniform intensity sources within the etendue of the native SLM. There may be additional performance gains from different source configurations such as extended sources, optimized source locations, or variable source intensities. In addition, by increasing the spacing between sources, multisource holography may be able to expand system etendue, similar to the work of Jo et al. (2022), helping with another fundamental problem in holographic displays.
_Multisource Holography with 1 SLM._ Using two SLMs may not be feasible for all applications. An alternative is to replace one of the SLMs with a static DOE. This creates an angularly-selective response similar to the two SLMs, breaking correlations between sources and enabling many of the benefits of multisource holography. However, the reduced degrees of freedom mean that fewer sources can simultaneously generate the correct pattern, so the amount of despeckling will be reduced. To improve performance with only a single active modulator, co-optimization of the DOE with the other system parameters could be investigated, similar to the work of Baek et al. (2021).
_Compact Architecture._ Our experimental system is a large bench-top setup containing multiple 4\(f\) systems, but we envision multisource holography could be built into a compact architecture. Starting with the design of Kim et al. (2022), which uses a waveguide to illuminate a reflective phase only SLM, we propose coupling the multiple sources into the waveguide to generate the multisource illumination. For the second SLM, we suggest using a transmissive amplitude modulator, placed just before the eyepiece. However, without a 4\(f\) relay system, SLM higher orders must be taken into account in the model (Gopakumar et al., 2021) or filtered using compact volume holograms (Bang et al., 2019).
_SLED Bandwidth._ We took advantage of the short coherence length of the SLED to create the multiple sources used in our experimental setup. However, the SLED has a bandwidth of about 10 nm while our model and analysis in Sec. 4.3 assumes monochromatic light. We include in the Supplement a complete model that accounts for the spectral bandwidth of the source and a practical optimization strategy based on Peng et al. (2021) for this scenario. However, modeling a larger bandwidth source has higher computational cost, so we chose to assume monochromatic illumination during optimization. We expect our results would improve with more accurate modeling of the SLED, but we found that our monochromatic model was sufficient to show the benefits of multisource holography. See Supplement for a visualization of the effect of the SLED.
_Computation Speed._ Computational cost is a limitation of our method, since the image formation model requires separately simulating the contributions from each source. Furthermore, all our simulations were conducted at 2\(\times\) resolution in each dimension, resulting in computation times of about half an hour to generate a focal stack. For example, for a \(1080\times 1920\) modulator with 16 sources, we run 2000 iterations, each of which takes about 0.8 sec. Upsampling may not be necessary in all scenarios, and in these cases computation time drops to about 0.2 sec per iteration, but
Figure 9. **Pupil-Invariance (Simulation):** When the user’s pupil does not cover the full eyebox, holograms can have significant artifacts, even for 2D images. To demonstrate, we optimize holograms with the pupil aware loss of Chakravarthula et al. (2022). We show examples at two different pupil positions in the eyebox, visualized in (a), where the eyebox extent for each color is depicted with dotted lines. The total intensity of the simulated image relative to a centered pupil is shown in the top right of each simulation. Smooth phase holograms (b) can create high quality images when the pupil is centered, but when the pupil is translated, image content is highly corrupted and has very low intensity. Random phase holograms (c) have approximately uniform intensity when the pupil moves but the image is very noisy due to speckle. Multisource holography (d) can create a low noise image that’s invariant to the pupil position, which is desirable for a practical display.
compute is still a limitation. To address this, neural networks offer a promising path towards real-time computation, as they have already been demonstrated for single source holography (Peng et al., 2020; Shi et al., 2021; Yang et al., 2022), albeit only for smooth phase so far. Adapting these approaches to multisource holography will be necessary for a practical display.
## 8. Conclusion
We introduced a new architecture for holographic displays that uses an array of mutually incoherent sources and two SLMs to reduce speckle. To our knowledge, our design is the first single-frame method that can generate low speckle holograms at full resolution with realistic focal cues and a uniform eyebox. We analyzed the concept in simulation, explored the design space, and validated with a benchtop experimental setup capable of producing high quality focal stacks. In conclusion, we believe multisource holography is a promising path to address some of the key open problems in holographic displays.
|
2307.09633 | Co-Simulation Framework For Network Attack Generation and Monitoring | Resilience assessment is a critical requirement of a power grid to maintain
high availability, security, and quality of service. Most grid research work
that is currently pursued does not have access to hardware
testbeds. Additionally, with the integration of distributed energy resources,
the attack surface of the grid is increasing. This increases the need for
reliable and realistic modeling techniques that are usable by the wider
research community. Therefore, simulation testbeds have been used to model a
real-world power grid topology and measure the impact of various perturbations.
Existing co-simulation platforms for the power grid focus on a limited set of components
of the overall system, such as focusing only on the dynamics of the physical
layer. Additionally, a significant number of existing platforms need specialized
hardware that may be too expensive for most researchers. Finally, not many
platforms support realistic modeling of the communication layer, which requires
use of Supervisory Control and Data Acquisition (SCADA) communication protocols such as
DNP3 while modeling cybersecurity scenarios.
We present Network Attack Testbed in [Power] Grid (NATI[P]G), (pronounced
natig), a standalone, containerized, and reusable environment to enable cyber
analysts and researchers to run different cybersecurity and performance
scenarios on the power grid. Our tool combines GridLAB-D, a grid simulator, HELICS,
a co-simulation framework, and NS-3, a network simulator, to create an
end-to-end simulation environment for the power grid. We demonstrate use cases
by generating a library of datasets for several scenarios. These datasets can
be used to detect cyberattacks at the cyber layer, and to develop countermeasures
to these adverse scenarios. | Oceane Bel, Joonseok Kim, William J Hofer, Manisha Maharjan, Sumit Purohit, Shwetha Niddodi | 2023-06-30T22:24:11Z | http://arxiv.org/abs/2307.09633v1 | # Co-Simulation Framework For Network Attack Generation and Monitoring
###### Abstract
Resilience assessment is a critical requirement of a power grid to maintain high availability, security, and quality of service. Most grid research work that is currently pursued does not have access to hardware testbeds. Additionally, with the integration of distributed energy resources, the attack surface of the grid is increasing. This increases the need for reliable and realistic modeling techniques that are usable by the wider research community. Therefore, simulation testbeds have been used to model a real-world power grid topology and measure the impact of various perturbations.
Existing co-simulation platforms for the power grid focus on a limited set of components of the overall system, such as focusing only on the dynamics of the physical layer. Additionally, a significant number of existing platforms need specialized hardware that may be too expensive for most researchers. Finally, not many platforms support realistic modeling of the communication layer, which requires use of Supervisory Control and Data Acquisition (SCADA) communication protocols such as DNP3 while modeling cybersecurity scenarios.
We present Network Attack Testbed in [Power] Grid (NATI[P]G), (pronounced _natig_), a standalone, containerized, and reusable environment to enable cyber analysts and researchers to run different cybersecurity and performance scenarios on the power grid. Our tool combines GridLAB-D, a grid simulator, HELICS, a co-simulation framework, and NS-3, a network simulator, to create an end-to-end simulation environment for the power grid. We demonstrate use cases by generating a library of datasets for several scenarios. These datasets can be used to detect cyberattacks at the cyber layer, and to develop countermeasures to these adverse scenarios.
## I Introduction
Cyber-physical systems (CPS), such as microgrids, are key infrastructure components that impact social, financial, and national security on a daily basis. Thus, with cyberattacks increasing in sophistication by the day, there is a need to understand various failure scenarios and plan for mitigation strategies for a reliable and resilient power grid operation [1, 2]. Simulation environments have been extensively used to model CPS components, topologies, adversaries, attack sequences, and their impact on the systems [3, 4]. CPSs are complex and interdependent systems, with both discrete and continuous measurements. However, existing attack detection models, such as data-driven intrusion detection and prevention systems, require a large amount of data to train models before they are deployed on the grid. This stymies efforts to keep pace with the rate at which cyberattackers improve their attacks on cyber-physical systems.
To develop adequate defenses, we must efficiently generate end-to-end models of systems and adversaries. Current simulators, such as NetSim [5] and Mininet [6], provide a partial view of the system, but fail to exhibit physical constraints and conditional operations across different components. Co-simulation environments have been developed to address these limitations, but struggle to address usability and flexibility challenges. Additionally, the co-simulators provide little or no support for _perturb-and-observe_ to model adversarial scenarios and generate benchmark datasets for downstream applications such as risk assessment, attack detection, and risk mitigation.
We present Network Attack Testbed in [Power] Grid (NATI[P]G), a co-simulation environment for the distribution power grid network using state-of-the-art simulators. This co-simulator is used to generate attack scenarios that can enable researchers to understand how attackers could behave in a network given a set of goals. Our work builds upon past work where researchers modeled attacks using network simulators to understand the behavior of attackers [7, 8]. By modeling potential adversary behaviors, researchers can develop faster ways to identify them on the network during attacks.
We focus on man-in-the-middle attacks on a power grid as our primary attack scenario. We implement these attacks in the NS3 network simulator and measure their impact in the GridLAB-D simulator. We do not limit our scenarios to the transport and session layers, but rather demonstrate application layer perturbations in the communication layer. The goal is to provide an example of how our tool can be used by other researchers without specialized hardware. Using our testbed, we identified different behaviors between grid-following and grid-forming inverters during cyberattacks. We also use our testbed to find settings for capacitors and generators that minimize frequency deviation when switches are tripped and microgrids are islanded.
The contributions of the paper are as follows:
* Our co-simulation tackles the entire stack, creating a simulation close to what is expected of a real test bed.
* We produce a containerized framework for a wide range of cyber resilience assessment applications, improving upon existing testbeds.
* We demonstrate the feasibility of modeling, simulating, and validating CPS environments without using specialized hardware.
* We leverage application layer commands using the DNP3 protocol in NS3 to simulate realistic grid behavior.
* We simulate man-in-the-middle scenarios using DNP3 protocols.
This paper is organized as follows. In Section 2, we survey existing approaches and address our motivation. Section 3 delineates the design of our framework and use cases of cyberattacks. Section 4 describes the details of attack scenarios to demonstrate the usability of our framework. We report our experimental results in Section 5. Section 6 concludes our work and discusses future work.
## II Background and related work
The power grid is a critical infrastructure due to its effect on daily life [9]. Energy providers must balance supply with demand and handle unforeseen events, such as extreme weather patterns. These events can affect the functionality of the grid and impact a large portion of the population. Any threats to the grid should be identified early enough so that providers can deal with the threats before they impact the functions of the grid [10].
In recent years, traditional power systems have become more integrated with information and communication technology, which has given way to Cyber-Physical Power Systems [11]. Alongside the growth of network simulators, more researchers are turning to simulators to develop new technology. This means that simulators must evolve to get closer to realistic networks without losing simulation efficiency.
### _Ns3_
Network Simulator 3 (NS3) [12] is a network simulator commonly used in network architecture and system development. It can simulate almost all aspects of a network, including the physical properties of signal transmission. This allows users greater control over the different aspects of the network, allowing them to simulate various attack scenarios with different network topologies, inject data flows at a particular node, and replace normal data with false data. An example of this control is setting whether or not a packet has reached its final destination. This simulates how an attacker can stop a packet at a node they control before sending out a new packet with different information to a victim node. Another example is updating a source IP to the original source IP that was used by the intercepted packet, leaving the victim unaware of an IP change.
Using the simulator, we can create datasets and a system that can be used by other researchers to model potential attacks. Our system has a configuration file that can be used to set different topologies and tune attack parameters. The simulator can also collect performance and routing information using NS3 monitors. The monitored information can be used to create attacker models that can be used as controllers in a network to adapt the network settings in response to attackers.
### _GridLAB-D_
GridLAB-D is a simulation tool that enables power distribution system simulation and analysis [13]. It is extensively used to represent the system behavior of different power system components and the complex interactions between these components and modern grid technologies like distributed energy resources (DERs). It can perform power flow analysis and dynamic studies, and it generates detailed load and market models. With the recent interest in studies regarding the interaction of power systems and communication networks, GridLAB-D has been used for benchmarking IEEE feeder models, for modeling detailed and dynamic power system operations, and for time-series power flow simulations [14, 15]. It facilitates larger simulations of power system models and simplified implementation of system architectures for illustrating a wide range of scenarios that reflect the impacts of communication layers on power system operations.
### _Co-simulation Environment_
There are existing platforms that simulate power systems. One of these platforms [16] includes a mechanism that uses NS3 as a networking interface in conjunction with the Framework for Network Co-Simulation (FNCS) and GridLAB-D as tools to keep track of value changes over time of power system components. In line with this work, we utilize a co-simulation to simulate cyberattacks on power networks. Additionally, our tool allows for larger simulations by using HELICS instead of FNCS as the interconnect between GridLAB-D and NS3. Bhattarai et al. [14] present a HELICS-based co-simulation environment, but the work does not focus on scenarios involving application layer perturbations. We introduce the DNP3 protocol as part of NS3 to send measurement data and control commands between the NS3 nodes representing components in SCADA systems.
### _Reconfigurable networks_
Current networks have the ability to reconfigure themselves as workloads and needs change. Several things can happen during reconfiguration, such as user equipment changing which antenna it connects to, or topology changes. Slicing, in 5G networks, is another way to implement reconfiguration, where the network can be partitioned to isolate network attacks. This prevents an attacker from harming the entire network and spreading its influence. Slices can be updated as needed over time to isolate network sections or to distribute resources in response to a metric such as performance or monetary spending. Therefore, live reconfiguration can be a powerful tool for security and for enhancing the performance of networks. A current research area revolves around how to make use of 5G network slicing and reconfigurability for power grids.
### _DNP3 protocol_
DNP3 is a protocol used to send packets between nodes, commonly in a utility distribution network [17]. According to surveys, more than 75% of North American electric utilities use or have used DNP3 as a communication protocol [18]. Using this protocol in conjunction with the NS3 simulator allowed us to develop an end-to-end power grid traffic simulator.
Using this setup, we can simulate different situations, such as downed nodes or attacks on the network. We can also monitor the traffic flow between the nodes to model any changes in the traffic between when the attack is happening and when the traffic is normal.
### _Cybersecurity simulations platforms_
Previous work has been done with the goal of generating cyber attacks, such as man-in-the-middle attacks on wireless LAN networks [19], WAN networks [20], hard-connected networks [21], and VANET networks [22]. Additionally, researchers have looked at the effect of cyberattacks, like man-in-the-middle and denial of service, on power systems, and have found that denial of service attacks have significant impacts on the run time of devices on the network [23]. We focus on creating a lightweight simulation tool that can be used by other researchers. The tool can be used to parameterize the attack that they want to simulate and to get traffic information characterizing the effect of the attack on the grid. Another example of an attack generation platform is GridSTAGE [24], which can be used to simulate false data injection attacks. It provides a framework where the user can input the parameters of an attack and measure its effects on network traffic.
## III Co-simulation Design
This section briefly describes the co-simulation platform developed for simulating the cyber-physical dynamics of the distribution grid and simulating attack scenarios at various parts of the grid. The platform combines various industry-grade and open-source tools such as GridLAB-D, NS3, and HELICS to emulate the different layers of the grid infrastructure. The platform uses DNP3 as the communication protocol for grid communications. This paper focuses on the generation of man-in-the-middle attack behavior in the distribution grid. We use a Docker container to distribute our tool to other researchers. Docker was chosen since it allows us to create a lightweight environment that can easily be used by other researchers. The container enables researchers who do not have access to a real power network to conduct research on a realistic environment.
### _Our co-simulation platform_
Our current platform is divided into three layers: the control layer, the network or communication layer, and the physical layer. GridLAB-D is used for the physical layer, while NS3 is used to construct the connecting network. Finally, the control layer is represented as the control node and is controlled through the main simulation program. The control layer consists of one utility control center that receives measurements from, and sends control commands to, the respective microgrids in the physical layer. The network layer represents the communication medium that carries information between the control and the physical layer. This layer consists of various network topologies and uses DNP3 as the grid communication protocol between the control layer and the physical layer.
The physical layer consists of the physical distribution grid using the IEEE 123-bus test feeder model with various DERs. The IEEE 123-bus feeder model is further logically split into three microgrids which can be configured in grid-connected or islanded mode. Each microgrid has a substation with remote terminal units (RTUs) that aggregate data from, and disseminate control signals coming from the control center to, various DERs in the microgrid. The control center issues periodic poll requests every 4 seconds. Once the poll request is received, the microgrid/substation responds with collected measurements. A man-in-the-middle attack node sits between the control center and each of the substations. The man-in-the-middle attacker changes the data that are sent between the substations and the control centers. The attacker's goal is to trick users into thinking the substations sent good data to the control center while sending a different command to the substation. The different command can be used to collect additional information from the substation or to modify parameters, such as tripping a relay in the microgrid.
#### Iii-A1 GridLAB-D and IEEE 123 node test feeder
The GridLAB-D simulation tool is used to model the IEEE 123 node test feeder [15]. This test feeder is used as the base power system model for the developed tool, and multiple distributed energy resources (DERs) are integrated at different locations to construct a distribution system architecture with three different microgrids. The microgrids can be connected to the grid or islanded in different combinations to create different feeder structures. Each microgrid consists of three DERs: one grid-following inverter-based photovoltaic (PV) system, one grid-forming inverter-based PV system, and one diesel generator. The generator is modeled using a synchronous machine with a simple excitation system enabling a droop curve for the voltage/reactive power output, and a GGOV1 governor model with primary power and frequency droop controls. Similarly, inverters are equipped with current and voltage control loops to adjust various droop characteristics, and with functions to change active and reactive power and voltage set-points. There are physical and virtual relays with over-current, over-frequency, and under-frequency protection functions integrated at different locations in the test feeder.
#### Iii-A2 HELICS-NS3
For our platform, as seen in Figure 1, we use HELICS as an interconnecting bridge between GridLAB-D and NS3. At the start of a simulation run, the HELICS-NS3 node and GridLAB-D connect to the HELICS broker as federates and are ready to send/receive data to/from each other. Each NS3 node is assigned a HELICS endpoint that is used to communicate with the HELICS broker. The HELICS broker serves as the timekeeper for the simulation. When the NS3 node receives a data request, it pings the GridLAB-D tool for the updated values of the registered points. Similarly, when the NS3 node receives a control command signal, it converts the command signal into a GridLAB-D setpoint change request sent to the GridLAB-D tool via the HELICS broker. Once the request is fulfilled, the broker advances the simulation to the next timestep. This continues until the end of the simulation.
#### Iii-A3 DNP3-NS3
We added the DNP3 protocol into NS3 to simulate realistic grid communication scenarios between the utility control center and the distribution grid. The DNP3-NS3 module requires a configuration file consisting of all the measurement data expected from the GridLAB-D simulation. The data points are GridLAB-D-specific names of measurement data or set-points, such as active power, voltage settings, and ON/OFF states of switches, configured as analog or binary points. There are other types of points, but we focus on analog and binary points for the scope of this paper. To generate distribution grid scenarios, an NS3 node with the HELICS-NS3 and DNP3-NS3 modules enabled is needed. At the start of a simulation run, GridLAB-D measurement points get registered with GridLAB-D through the HELICS broker. HELICS-NS3 is responsible for retrieving measurements from the GridLAB-D tool and sending set-points to modify the DER settings. The DNP3-NS3 module is responsible for converting raw GridLAB-D measurement values into the DNP3 protocol format and also for translating DNP3 control commands into GridLAB-D-specific set-point instructions.
### _Configurations_
#### Iii-B1 Network Topologies
Our tool takes a JSON file containing node-to-node connections, gives it to NS3, and uses it to build a topology. Normally, to make sure that the topology is valid, a user can use a topology generator such as the NS3 topology generator [25], but doing so requires prior knowledge of operating NS3. We simplify this process by reducing the knowledge needed to operate NS3 to a configuration file, thereby removing the need for additional programs outside the ones already installed in our Docker container.
The configuration file allows control over the jitter of a node, the connection type between two nodes, and the topology of the network. For example, a user can create a configuration that generates a topology with two groups of nodes: the first group is connected using Carrier Sense Multiple Access (CSMA) following a mesh topology structure, and the second group is connected using point-to-point links in a star topology. Then both of the groups can be connected over Wi-Fi so that data can be sent between the clusters. This example is one of many configurations that can be created.
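To make the shape of such a file concrete, the snippet below sketches a configuration describing the example above. This is a hypothetical illustration: the field names and values are ours, not necessarily the exact schema the tool expects.

```json
{
  "groups": [
    {"name": "cluster_a", "nodes": 4, "link": "csma", "topology": "mesh"},
    {"name": "cluster_b", "nodes": 4, "link": "p2p",  "topology": "star"}
  ],
  "inter_group_links": [
    {"from": "cluster_a", "to": "cluster_b", "link": "wifi"}
  ],
  "node_settings": [
    {"node": "cluster_a_0", "jitter_ms": 2}
  ]
}
```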
#### Iii-B2 Cyber-Attack
We use JSON files to set up the attack in our framework. The configuration file takes in the start and end time of the attack, the number of attackers in the network, and attack-specific values. The attack-specific values include the parameters of the victim device that are being attacked, such as active or reactive power settings. It also includes the attack value that the attacker uses to update the value of the device parameter. Finally, the user can also select the attack scenario that they want to simulate on the network.
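As an illustration, an attack configuration in this spirit might look like the following. The field names are hypothetical; the values mirror the second attack scenario described later (Pref of Inverter 42 toggled from 450 kW to 350 kW, and Pref of Inverter 51 from 210 kW to 110 kW, starting about two minutes into the simulation), and the end time is an arbitrary placeholder.

```json
{
  "scenario": "mitm_setpoint_modification",
  "start_s": 120,
  "end_s": 600,
  "num_attackers": 1,
  "targets": [
    {"victim": "Inverter_42", "parameter": "Pref", "default_kW": 450, "attack_kW": 350},
    {"victim": "Inverter_51", "parameter": "Pref", "default_kW": 210, "attack_kW": 110}
  ]
}
```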
### _Attack generation method_
To generate attacks, we use the internet module and the DNP3 module to intercept a packet and update it to contain new data. If the destination address is found to be the victim address, then the attacker simulates the packet arriving at its original destination. Then, through the DNP3 protocol, the attacker simulates sending a new packet containing the updated information or command. The \(CapturePacket\) function is set in the ipv4 l3 protocol module. The rest of the functions are developed through the DNP3 module. In Algorithm 1, the packet can be intercepted using the internet stack before it is sent to the target node(s).
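The following C++ fragment sketches the core of this interception logic in the style of an NS3 hook. It is a minimal illustration of the technique, not the paper's actual \(CapturePacket\) implementation: the function name, the victim/source arguments, and the elided payload-rewriting step are our own simplifications, and only standard NS3 packet and IPv4 header calls are used.

```cpp
#include "ns3/packet.h"
#include "ns3/ipv4-header.h"
#include "ns3/ipv4-address.h"

using namespace ns3;

// Hypothetical man-in-the-middle hook: returns true if the packet was
// destined for the victim and has been rewritten for re-injection.
bool
MaybeIntercept (Ptr<Packet> packet, Ipv4Address victim, Ipv4Address originalSource)
{
  Ipv4Header header;
  packet->RemoveHeader (header); // peel off the IPv4 header to inspect it

  if (header.GetDestination () != victim)
    {
      packet->AddHeader (header); // not the target: restore and pass through
      return false;
    }

  // Here the attacker would rewrite the DNP3 payload (e.g., inject a new
  // setpoint command). Payload editing is elided in this sketch.

  // Spoof the source so the victim sees no address change, then restore
  // the header; the caller re-injects the forged packet.
  header.SetSource (originalSource);
  packet->AddHeader (header);
  return true;
}
```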
#### Iii-B1 Our attacker
Our attacker is a man-in-the-middle attacker who has access to a network node running a DNP3 application within NS3. The attacker uses a modified ipv4 l3 protocol to identify whether an intercepted message is heading to the IPv4 address of the victim. Once the victim address is identified, the attacker captures the message and sends an updated message with new data in place of the old. Before the message is sent, the source address is updated to match the address of the original sender. Thus, once the receiver gets the message, they will act upon it as if it came from a legitimate source. The attack can be used as a standalone attack where the goal is to spread misinformation, or it can be used as part of a larger attack where the goal would be not only to spread misinformation but also to take control over a node or section of the network.
Fig. 1: Overview of the co-simulation environment with interactions between HELICS, GridLAB-D, and NS3. Each node uses the DNP3 protocol to communicate. The control center, where the Open Platform Communications (OPC) server is located, is responsible for controlling a region of the grid network. The substation is responsible for power distribution and for aggregating the information of the microgrid to be sent to the control center. The control center issues periodic poll requests every 4 seconds. Once the poll request is received, the microgrid/substation responds with collected measurements.
For this attack scenario, the attacker intercepts traffic going from the control center to the substation or the microgrid, or from the substation to the control center. When intercepting a packet going from the control center to the substation, the attacker modifies the packet to not only conduct the normal action requested by the control center but also to conduct an action chosen by the attacker. For example, if the control center sends out a poll request to the individual nodes in the network, the attacker can intercept that message, send an action to the recipient to trip a relay, and modify the resulting data returned by the substation to hide the changed values. By intercepting the data going from the substation to the control center, the attacker can send fake data to the control center in an attempt to trick the control center into thinking that everything is fine at the substation level.
## IV Experimental Setup
This section describes the generation of three types of attack scenarios on a distribution grid. We also demonstrate how to use our tool to modify where the attacker is, and how the attack impact varies depending on the location of the attacker in the network. For all of the attack scenarios described below, the attacker is located between the control center and the substation.
* In the first attack, the attacker intercepts the measurement data flowing from the substation to the control center and modifies the reactive power setpoint/reference (Qref) of Inverter 42 (a grid-following inverter) situated in Microgrid 1.
* In the second attack, the attacker intercepts the DNP3 command flowing from the control center to the substation and modifies the active power setpoint (Pref) values for both grid-following and grid-forming inverters, Inverter 42 and Inverter 51 of Microgrid 1. Additionally, during both of these attacks the switches connecting the microgrids are tripped to island the microgrids from each other and the grid.
* In the third attack, the attacker intercepts the DNP3 command flowing from the control center to the substation and islands the microgrids from each other and the grid. We conduct this attack to examine the impact of an islanding attack on the frequency measured on Microgrid 3. Islanding happens when microgrids are disconnected from one another, making the microgrids dependent on
their individual power sources. Finally, once we find a set of parameters that can be used to counter the effect of islanding on the frequency, we trip an internal virtual relay to cause extra load shedding.
* In the fourth attack, the attacker is in the same location as in attacks 2 and 3. We conduct the two previous attacks on a ring topology. The attacks in the previous scenarios were conducted on a star topology. The goal of this attack is to compare the effect of both attacks on the analog and binary points that are sent from the substation to the control center on a different topology.
### _Data format and setup_
The experiment setup has a control center that is responsible for monitoring the distribution grid of an area. The grid consists of the IEEE 123 test feeder with three microgrids. The grid dynamics are simulated using the GridLAB-D tool. The microgrids can be islanded by tripping the relays/switches connecting them to each other and the grid. Each microgrid has a substation which acts as a remote terminal unit (RTU) that aggregates data from, and disseminates control signals coming from the control center to, various DERs in the microgrid. The control center and substations are set up as NS3 nodes and communicate over a virtual NS3 network. The substation nodes have HELICS-NS3 to integrate with GridLAB-D. The control center and substations have the DNP3-NS3 module installed to provision DNP3-based grid communication. The control center makes a periodic DNP3 polling request (every 4 seconds) to each of the substations for the latest measurement values from the microgrid. Each of the substations responds with data collected from the GridLAB-D simulation for the respective microgrid. The control center can also send a DNP3 control command to a substation to control a particular set-point of a DER in the microgrid. The substation acts on the control command by sending a write instruction to the GridLAB-D simulation.
During the configuration setup, all the GridLAB-D-specific measurement data, such as active power, reactive power, and ON/OFF states of switches, are pre-configured as DNP3 analog and binary points in a configuration file. During simulation start-up, these data points are ingested by the HELICS-NS3 module at the substation node and registered with GridLAB-D. The raw measurements are collected and converted into the DNP3 protocol format. The packaged data is then sent to the control center for processing over the network.
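A hypothetical fragment of such a points configuration is sketched below. The field names are illustrative rather than the tool's exact schema, but the objects (Inverter 42, the sw60to160 switch) and the 4-second poll interval follow the setup described above.

```json
{
  "poll_interval_s": 4,
  "points": [
    {"name": "inv42_Pref",      "gridlabd_property": "Inverter_42.Pref", "type": "analog"},
    {"name": "inv42_Qref",      "gridlabd_property": "Inverter_42.Qref", "type": "analog"},
    {"name": "sw60to160_state", "gridlabd_property": "sw60to160.status", "type": "binary"}
  ]
}
```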
Figure 2 shows the system architecture with the design of our power grid; the grid consists of the IEEE 123 test feeder with three microgrids. The microgrids can be islanded by tripping the relays/switches connecting them to each other and the grid. The control center and the substation are also connected to the router where NS3 is loaded to allow different topologies during experimentation.
### _Topologies tested_
We run our attacks on two topologies: a default star topology that has the control center at the center of the network and a ring topology built using our topology configuration file. Figure 3 illustrates both of the topologies.
### _Attack scenarios_
We place an attacker between the control center and the microgrid/substation. The attacker must stay unnoticed since it does not have authority to be on the network. Using a modified Internet Stack, the attacker reads the bytes in the packet and, if it finds a packet that matches the requirements for the attack, executes an interception. The attacker then modifies the intercepted packet data to communicate false information to the control center. This attack can also be used in conjunction with a command injection. Using both, an attacker can perform changes to the settings of a node while hiding them from the control center.
#### Iii-C1 Attack 1: Data Modification
In this scenario, the attacker intercepts a poll request response going from the substation to the control center. The attacker then modifies the reactive power setpoint/reference (Qref) of Inverter 42 (a grid-following inverter) situated in Microgrid 1 to trick the control center into believing that the inverter is in a different state than its current state.
#### Iii-C2 Attack 2: Inverter active/reactive power setpoints modifications
In this scenario, the microgrids are islanded from other microgrids and the grid. Then, a man-in-the-middle (MITM) attack changes the setpoint of an inverter to introduce voltage issues. The attack happens at approximately two minutes into the simulation. As seen in Table I, Inverter 42 has its Qref setpoint changed from 0 VAr (default) to \(-50\) kVAr (attack value). In the second scenario, the microgrids are also islanded. Then, inverters are attacked continuously to cause voltage stability issues: Inverter 42 has its Pref value randomly toggled between 450 kW (default) and 350 kW (attack value), and Inverter 51 has its Pref value randomly toggled between 210 kW (default) and 110 kW (attack value). The attack starts at approximately two minutes into the simulation.
This scenario uses packet capture (PCAP) files that are generated by NS3 to identify the attack and quantify its impact on the resulting network. PCAP files are used as a way to visualize network traffic. NS3 can generate them using the point-to-point helper class in the point-to-point module, and they can be read using Wireshark.
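For reference, enabling such captures in NS3 takes a single call on the point-to-point helper; the trace-file prefix below is illustrative.

```cpp
#include "ns3/point-to-point-helper.h"

using namespace ns3;

void
EnableTraces (PointToPointHelper &p2p)
{
  // Writes one .pcap file per point-to-point device under the given
  // prefix; the resulting traces can be opened in Wireshark.
  p2p.EnablePcapAll ("natig-mitm");
}
```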
#### Iii-C3 Attack 3: Tripping relays using command injection
In this scenario, a command injection attack causes islanding of microgrids, as shown in Table II. In some cases, power generation is enough to sustain the loads running on the microgrid, while in others (i.e., Microgrid 3) limited power generation increases Under-Frequency Load Shedding (UFLS). UFLS occurs due to insufficient generation on the microgrids to supply the load. The sw60to160 relay was tripped via command injection at approximately two minutes into the data capture. In another version of this attack, power dispatch has been adjusted to minimize UFLS. Finally, in a third version of this attack, the sw60to160 relay is tripped and the config file of the sw76to86 virtual relay was modified so that additional UFLS would occur.
For this experiment, we use frequency measurements to compare the three parts. Frequency has been used to identify any load mismatches that may be happening in a power system [26, 27]. When a cyberattack occurs, its impact on the point values of nodes in a power system causes the frequency to shift away from its normal, pre-attack value.
### _Attacks 2 and 3 on a Ring topology_
For this scenario, we run attack scenarios 2 and 3 on a ring topology. This scenario serves as an example of how to use the dynamic topology configuration function of our tool. We compare the resulting effect of the attacks on the analog and binary points collected over the grid.
## V Results
During experimentation, we collect the information described in the previous section to demonstrate the operation of our tool. We identified how changing certain parameters, such as the active power reference (Pref) of an inverter, can cause grid-following and grid-forming inverters to exhibit different output voltage behavior. Additionally, when looking at the microgrid's frequency change during an attack, we found that by changing the generator's nominal and output power, the frequency was able to return to its pre-attack value.
### _Attack 1: Data Modification_
We start the experiment by examining the response that is sent from the substation to the control center in response to a poll request. The substation's response is sent in three segments of 274 bytes. This response message contains points, each point representing a value associated with a node. These node points, and their values, represent the behavior of that node at the time the packet was sent, for example, the output voltage of generator 1. Each of these points is separated by a flag, and the flag represents whether or not that point was set correctly.
Fig. 2: Microgrid setup for experimentation, using the IEEE feeder model as described by Ashok _et al._[8]. We use this setup to run the cyber attacks and collect data on how the attacks impact the performance of the power grid. The attacker conducts a man-in-the-middle attack on two inverters in Microgrid 1, the switches connecting the microgrids, and the relay in Microgrid 3. Substations can be responsible for power distribution over multiple microgrids as well as act as aggregation points.
Fig. 3: A simplified presentation of the topologies used in our experiments. The ring topology was defined using our topology configuration file, while the star topology was defined directly in our code as the default topology in case the user does not have a specific topology to use.
An attacker who has access to these response packets can use a history of these packets to build an understanding of what nodes constitute the microgrid. For this experiment, we intercept some of these response packets and change several points inside them, as seen in Figure 4. After the bytes are updated by the attacker, the attacker sends the updated packet with the new values back into the network. Our tool can generate PCAP files through the NS3 simulator, which can be used to visualize this attack.
### _Attack 2: Inverter active/reactive power setpoints modifications_
The attacker starts by reducing the Qref value of Inverter 42 two minutes into the simulation, as seen in Figure 6. This causes a slight decrease in current, as seen in Figure 5, where the current normally averages between 7346 W and 7359 W. These very slight increases in the maximum and decreases in the minimum of the current, resulting in larger waves, may go unnoticed if there is no existing knowledge that a change like this signifies an adversary on the network. Reactive power (Qref) maintains the voltage levels needed for system stability. Therefore, it makes sense that reducing the Qref value changes the current value over time.
Our tool can visualize these changes so that users can make informed decisions on how to tackle such attacks. A user can use the datasets generated by our tool to train models to identify and counter attackers based on the impact observed at the node level. In this case, if an inverter's current suddenly drops, the traffic can be rerouted away from that inverter and microgrid to isolate that section of the network. Additionally, the network can send a signal that a section of it is under attack so that some countermeasure can be applied to eliminate the threat.
Our tool can also be used to simulate a multi-node attack, as demonstrated in the second part of this attack. Here, the attacker randomly varies the Pref value of two inverters on Microgrid 1 as seen in Figure 7.
Figure 8 shows a drastic decrease in the output voltage of Inverter 42 during the attack, where the attacker randomly increases and decreases the Pref value. As seen in Figure 7, the variation in the Pref value matches the variation in the output voltage of Inverter 42 (inverter-2). The output voltage drops when the attacker sets the Pref value to the attack value of 350 kW. By identifying these drops in voltage, a user of our tool can create countermeasures to such attacks, for example by resetting the inverter if the voltage fluctuates erratically.

| Inverter ID | Location | Type | Rated power | Pref | Qref | Attacked values |
|---|---|---|---|---|---|---|
| Inverter 51 | MG1 | grid forming | 400 kW | 210 kW | 0 VAr | Pref (110 kW) |
| Inverter 42 | MG1 | grid following | 600 kW | 450 kW | 0 VAr | Pref (350 kW), Qref (\(-50\) kVAr) |
| Inverter 101 | MG2 | grid following | 180 kW | 126 kW | 0 VAr | NA |
| Inverter 105 | MG2 | grid forming | 600 kW | 300 kW | 0 VAr | NA |
| Inverter 76 | MG3 | grid following | 120 kW | 84 kW | 0 VAr | NA |
| Inverter 80 | MG3 | grid forming | 100 kW | 70 kW | 0 VAr | NA |

TABLE I: Available inverters in our current simulation with default and attack values.

Fig. 4: The attacker modifies the response from the substation to the control center to make the control center believe that the Pref value of Inverter 42 is set to 350 kW instead of 450 kW, its default value.
Figure 9 shows how the attack on Microgrid 1's inverters can affect the output voltage of inverters in a separate microgrid. Since Microgrid 1 is islanded from the rest of the microgrids and the grid when the attack starts, we can observe an increase in power fluctuation when looking at the output voltage of the inverters.
Figure 10 shows the impact of the attack on a grid-forming inverter. In this case, we can see a significant drop in the inverter's output voltage, but it is quickly brought back to its normal value after a few timesteps. The fluctuations observed in Figures 8 and 10 show that a grid-forming inverter is less susceptible to attacks on the Pref value compared to a grid-following inverter.
We can also visualize the attacks with PCAP files. Figure 11 shows how the Pref value changes over time. Using PCAP, we can see the cycle of commands sent by the attacker to the inverter. We can also see that the only points that get modified are the Pref values of both inverters. Interestingly, the size of the packet received as a response from the microgrids varies between 230 bytes and 304 bytes as the Pref fluctuates. This is another indication that can be used to identify an attacker on the system.
### _Attack 3: Tripping relays using command injection_
In this scenario, we trip the relays connecting the microgrids to one another, as seen in Table II. We look at the frequency measurement of the microgrids to identify how resilient they are to attacks. Tripping the relays can cause the microgrids to become islanded from one another and the grid. Notably, in this attack, Microgrid 3, as shown in Table III, is the only microgrid that has capacitors; it contains one large capacitor connected to multiple phases and three smaller capacitors connected to individual phases. In Figure 12, we can see that the frequency of the attacked microgrid increases to around 71 Hz during scenario A, the scenario where the generator and capacitor use the default values of our tool, as seen in Table IV. Out of all the microgrids, Microgrid 3 has the lowest power generation, causing it to struggle if it is islanded, as happens when the relays are tripped in this attack.
Fig. 5: Current change once the attack starts at load 42, which is the load connected to Inverter 42. The attack starts 2 minutes into the simulation and causes a shift in the current fluctuation of the load, where the current waves become bigger compared to before the attack.
Fig. 6: The Qref of Inverter 42 (inverter-2) is dropped to \(-50\) kVAr from 0 kVAr (the default value) after 2 minutes in the simulation. The rest of the inverters are not attacked.
Fig. 7: The Pref of Inverter 42 (inverter-2) is dropped to 350 kW from 450 kW (the default value), and the Pref value of Inverter 51 (inverter-3) is dropped from 210 kW to 110 kW after two minutes in the simulation. This is an example of the Pref values randomly fluctuating between the default and attack values.
Fig. 8: Effect of the attack on the output voltage of Inverter 42, a grid-following inverter. The attacker dynamically fluctuates the Pref value of two distinct inverters to affect the resulting output voltage and current of the microgrid. In this scenario the microgrids are islanded from both each other and the grid.
Capacitors are a useful tool for regulating the frequency of a microgrid during an attack. When the simulation is run with a larger capacitor and a generator with a lower nominal power rating and a lower amount of power delivered to interconnected nodes, as seen in Table V, the frequency returns to the normal pre-attack value after approximately two minutes.
Finally, when we trip the virtual relay between nodes 76 and 86, as seen in Figure 2, while keeping the changes made in scenario B, we can see that the frequency does not return to its value from before the attack started. Additionally, the frequency does not drop as much as during the scenario A attack. We also observe that after an initial climb, the frequency slightly drops before starting to climb at a slower rate. The frequency finally stabilizes at around 140 seconds.
### _Attacks 2 and 3 on a Ring topology_
Using our topology configuration file, we change the resulting topology from a star topology to a ring topology, as seen in Figure 3. Each substation/microgrid/control center node is connected to a ring of intermediate nodes. The nodes on the ring are then used to conduct the man-in-the-middle attack. We conduct the same attack as in the previous section, where we modify the Pref and Qref values of two different inverters located on the same microgrid. For this topology, similar to the previous topology, the nominal power was only lowered to 300 kW. Using our tool, we found that the same parameter values for the generator and the capacitor mitigate frequency changes when Microgrid 3 is islanded, for both the ring and star topologies.
Fig. 9: Impact of the attack on Inverters 42 and 51 on the output voltage of an inverter in a different microgrid.
Fig. 10: Impact of the attack on the output voltage of Inverter 51, a grid-forming inverter.
Fig. 11: Visualization of the attack through a PCAP file.
With regard to attack 2, where the Qref and Pref values are modified, we observe that on this topology, when the Qref or Pref values are lowered, the resulting current measured at the inverter fluctuates more, similar to what was observed in the star topology. Figure 13 shows the resulting current measured at the load connected to Inverter 42 in Microgrid 1 when both Inverter 42 and Inverter 51 have their Pref values fluctuating over time. We can see that the waves have both a smaller minimum and a higher maximum, similar to the current graph for the star topology.
| ID | Location | Rated/Nominal power | Delivered power / capacitor size |
|---|---|---|---|
| cap83 | A, B, C | 2401.7771 V | A=600 kVAr, B=600 kVAr, C=600 kVAr |
| Gen3 | MG3 | 300 kW | 20 kW+16667 j |

TABLE V: Updated generator and capacitor values. All three phases are increased to 600 kVAr. Both the rated and delivered power for Generator 3 are reduced for this part of the experiment.
| ID | Phases | Nominal power | Capacitor size information |
|---|---|---|---|
| cap83 | A, B, C | 2401.7771 V | A=200 kVAr, B=200 kVAr, C=200 kVAr |
| cap88 | A | 2401.7771 V | A=50 kVAr |
| cap90 | B | 2401.7771 V | B=50 kVAr |
| cap92 | C | 2401.7771 V | C=50 kVAr |

TABLE III: Microgrid 3 capacitor default values. Microgrid 3 is the only microgrid containing capacitors. The size information refers to the capacitor size connected to a specific phase. For example, in this table, A=200 kVAr represents the size of the capacitor connected to phase A.
Fig. 12: Frequency changes depending on when the attack starts and what changes, if any, are done to reduce the frequency deviation.
| ID | Location | Rated power output | Power delivered to interconnected nodes |
|---|---|---|---|
| Gen1 | MG1 | 10 MW | 30 kW+3000 j |
| Gen2 | MG1 | 1 MW | 25 kW+8333 j |
| Gen3 | MG3 | 450 kW | 50 kW+16667 j |
| Gen4 | MG2 | 600 kW | 50 kW+16667 j |

TABLE IV: Synchronous generator default values. GridLAB-D simulates 2 types of generators (synchronous vs induction). The microgrids only use synchronous generators. Microgrid 3's generator is rated with the lowest power level out of all the generators. It also delivers the most power (tied with Gen4) to the interconnected nodes.
Fig. 13: Current change once the attack starts at load 42, which is the load connected to Inverter 42, when the Pref values of both Inverter 42 and Inverter 51 are fluctuating. The attack starts 2 minutes into the simulation and causes a shift in the current values of the load.
## VI Conclusion and future work
We demonstrate the benefit of using a lightweight tool to model different security scenarios and their effects on grid nodes. We also described the design of our tool and how it uses the IEEE model from GridLAB-D to model grid components. Using our tool, we identified differing behavior between grid-forming and grid-following inverters when the Pref and Qref values are changed by an attacker. Grid-following inverters are more affected by changes to the Pref value, while grid-forming inverters can restabilize their output voltage back to its original value. We also identified parameter settings that enable the microgrid to recover its original frequency value once the microgrid is islanded.
In future work, we will implement insider attacks as part of our simulation tool. This attacker takes over a trusted and authorized node in the network and intercepts traffic going both directions between the substation and the control center. The attacker can inform the control center that the substation is working properly while conducting a denial of service attack, for example. This attack effectively renders the attacker's action invisible to the control center. By the time the attack is detected, the attacker could have stolen private information or destabilized part(s) of the system.
In addition, we will expand the capability of the topology configuration files. We will also add the ability to enable neural-network-controlled routing decisions to optimize different performance metrics of the network. We will add the ability to set the protocol used for communication as well as the types of connections, such as point-to-point, CSMA connections, and/or LTE/DNP3 protocols. Finally, we will make our tool publicly available as a Docker container.
## Acknowledgments
We would like to thank Md Touhiduzzaman and Burhan Hyder for their valuable feedback on the paper.
|
2307.00171 | The Integer Linear Programming Inference Cookbook | Over the years, integer linear programs have been employed to model inference
in many natural language processing problems. This survey is meant to guide the
reader through the process of framing a new inference problem as an instance of
an integer linear program and is structured as a collection of recipes. At the
end, we will see two worked examples to illustrate the use of these recipes. | Vivek Srikumar, Dan Roth | 2023-06-30T23:33:11Z | http://arxiv.org/abs/2307.00171v1 | # The Integer Linear Programming Inference Cookbook
###### Abstract
Over the years, integer linear programs have been employed to model inference in many natural language processing problems. This survey is meant to guide the reader through the process of framing a new inference problem as an instance of an integer linear program and is structured as a collection of recipes. At the end, we will see two worked examples to illustrate the use of these recipes.
###### Contents
* 1 Introduction
* 2 Notation and Preliminaries
* 3 Basic Operators: Logical Functions
* 3.1 Variables and their Negations
* 3.2 Disjunctions and their Variants
* 3.3 A recipe for Boolean expressions
* 4 Simple and Complex Logical Implications
* 4.1 Simple Conditional Forms
* 4.2 Complex conditional forms
* 4.3 The Case for Special Cases: Empirical Evidence
* 5 Complex Building Blocks
* 5.1 Spanning Trees
* 5.2 Graph Connectivity
* 5.3 Other Graph Problems
* 5.4 Soft Constraints
* 6 Worked Examples
* 6.1 Sequence Labeling
* 6.2 Recognizing Event-Event Relations
* 7 Final Words
## 1 Introduction
Effective decision-making requires the use of knowledge. This has been a clear, and long-standing principle in AI research, as reflected, for example, in the seminal early work on knowledge and AI--summarized by Brachman and Levesque (1985)--and the thriving _Knowledge Representation and Reasoning_ and the _Uncertainty in AI_ communities. However, the message has been somewhat diluted as data-driven statistical learning has become increasingly pervasive across AI. Nevertheless, the idea that reasoning and learning need to work together (Khardon and Roth, 1996; Roth, 1996) and that knowledge representation is a crucial bridge between them has not been lost.
One area where the link between learning, representation, and reasoning has been shown to be essential and has been studied extensively is Natural Language Processing (NLP), and in particular, the area of Structured Output Prediction within NLP. In structured problems, there is a need to assign values to multiple random variables that are interrelated. Examples include extracting multiple relations among entities in a document, where the two arguments of a relation such as born-in cannot both refer to people, or co-reference resolution, where gender agreement must be maintained when determining that a specific pronoun refers to a given entity. In these, and many other such problems, it is natural to represent knowledge as Boolean functions over propositional variables. These functions would express knowledge, for example, of the form "if the relation between two entities is born-in, then its arguments must be a person and a location" (formalized as functions such as \(x_{i}\to x_{j}\lor x_{k}\), or exactly one of \(x_{1},x_{2},\ldots x_{k}\) can be true). These functions serve to _constrain_ the feasible solutions to the inference problem and open the possibility to model the global decision problem as a constrained optimization problem.
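To preview the recipes to come, both of the example functions above admit standard encodings as linear constraints over 0-1 variables:

\[x_{i}\to x_{j}\lor x_{k}\qquad\text{becomes}\qquad x_{j}+x_{k}\geq x_{i},\]

\[\text{``exactly one of }x_{1},\ldots,x_{k}\text{''}\qquad\text{becomes}\qquad\sum_{m=1}^{k}x_{m}=1.\]

In the first case, if \(x_{i}=1\) the constraint forces at least one of \(x_{j}\), \(x_{k}\) to be \(1\), and if \(x_{i}=0\) it is vacuous.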
An influential, and as we will see, also natural formalism for the decision problem is to frame it as an Integer Linear Program (ILP). This approach was first employed in NLP in the context of information extraction and machine translation (Roth and Yih, 2004; Germann et al., 2004; Roth and Yih, 2005). The objective function for the integer program in question is typically learned, and could be viewed as proposing, for each variable of interest, a distribution over the values it can take. The final assignment to these variables is then determined by maximizing the objective, subject to knowledge constraints, such as the ones described above. The ability to decouple the modeling of a problem, and the knowledge needed to support inference, from learning the models is one reason that has made the ILP formulation a popular one in NLP. Over the years, ILPs have been employed to model inference in many natural language processing (NLP) problems--information extraction (Roth and Yih, 2004; Choi et al., 2006; Denis and Muller, 2011; Berant et al., 2014), decoding in machine translation (Germann et al., 2004), semantic role labeling (Punyakanok et al., 2008; Srikumar and Roth, 2011), dependency parsing (Riedel and Clarke, 2006; Martins et al., 2009), coreference resolution (Denis and Baldridge, 2009), sentence compression (Clarke and Lapata, 2008; Thadani and McKeown, 2013), inferring alignments (Goldwasser and Roth, 2008; Chang et al., 2010; Li and Srikumar, 2016), summarization (Woodsend and Lapata, 2012), supertagging (Ravi et al., 2010), common sense reasoning (Roy and Roth, 2015; Goldwasser and Zhang, 2016), and many others. It is important to point out that these examples include both cases where the computational aspects of inference were handled by powerful off-the-shelf solvers such as Xpress-MP or Gurobi, and those where approximate methods were designed for inference.1
Footnote 1: See, for example, [https://ilpingference.github.io/eacl2017/](https://ilpingference.github.io/eacl2017/) for details.
The integer linear programming formalism is both expressive and easy to use for representing and reasoning with knowledge for two reasons. First, every MAP inference problem with discrete variables can be represented as a linear objective (Roth and Yih, 2007), making ILP a natural formalism for such problems. Second, all Boolean functions can be compiled into a set of linear
inequalities, to be used as constraints in the ILP formulation.
This tutorial-style survey paper focuses on this second point, and is meant to guide the reader through the process of framing a new inference problem as an instance of an integer linear program. It is structured as a collection of commonly used recipes, and at the end, we will see two worked examples to illustrate the use of these recipes.
To simplify discourse, we will make two assumptions. First, we will assume that we have all the scoring functions needed to write the objective function. Second, we will primarily focus on the process of writing down the inference problems, not solving them. It is important to separate the declaration of a problem from its solution; this article concerns the former. We could solve inference problems using off-the-shelf black box solvers, general heuristics, or specially crafted algorithms tailored to the problem at hand.
A final note before we get started: While the motivating examples used in this paper are drawn from natural language processing, the techniques for converting Boolean expressions into linear inequalities that are discussed here are applicable more broadly. As a result, the next few sections are written without a specific domain in mind, but the worked examples that follow are grounded in NLP tasks.
## 2 Notation and Preliminaries
To start off, let us first see the notation that will be used throughout this survey.
Decision variables. Our goal is to collectively make a set of possibly interacting decisions. We will refer to individual Boolean decisions using the symbol \(y\) with subscripts. Usually, the decisions in the subscripts deal with assigning labels to inputs. For example, the decision that the \(i^{th}\) label is A will be represented as \(y_{i:\texttt{A}}\). For brevity, if the label \(A\) is the constant true, we will write \(y_{i}\) to denote \(y_{i:\texttt{true}}\).
We can map from the space of Boolean decisions (i.e., predicates) to integers using the Iverson bracket (Iverson, 1962). The Iverson bracket for a predicate \(y\), denoted by \([y]\), is defined as
\[[y]=\begin{cases}1&\text{if $y$ is true}\\ 0&\text{if $y$ is false}.\end{cases} \tag{1}\]
In other words, it maps true to 1 and false to 0. As Knuth (1992) points out, the Iverson bracket is a notational convenience that vastly simplifies mathematical exposition. Here, we will assume the implicit existence of the Iverson bracket to translate false and true to 0 and 1 respectively. This implicit notational device will allow us to reason about Boolean variables like \(y\) as if they were integers.
Each decision \(y_{i}\) is associated with a score \(c_{i}\). We will assume the convention that we prefer decisions whose scores are larger. Importantly, in this survey, we will not concern ourselves with where the scores originate; the scoring function could have been learned in the past, or the inference could be situated within the context of a learning algorithm that estimates the scoring function, or perhaps the scores were manually set using domain knowledge. Furthermore, we do not make any assumptions about the nature of the scores--while they could represent log probabilities that the corresponding variable is true, we do not assume that they are probabilities in the formal sense; we merely require that variable assignments that are associated with a higher total score are preferable.
Finally, we will use the boldface symbol \(\mathbf{y}\) to denote a vector of decision variables and the boldface \(\mathbf{c}\) to denote the vector of coefficients that score the decision variables in \(\mathbf{y}\).
Integer linear programs. The goal of inference is to assign values to the decision variables such that their total score is maximized. We will formalize this task as an integer linear program (ILP). To define the integer linear program, we need to specify a linear objective function and a collection of linear constraints that characterize the set of valid decisions. In general, we can write the inference problem as
\[\max_{\mathbf{y}}\quad\sum_{i}c_{i}y_{i} \tag{2}\]
\[\text{s.t.}\quad\mathbf{y}\in\mathcal{Y}, \tag{3}\]
\[y_{i}\in\{0,1\}. \tag{4}\]
Here, \(\mathcal{Y}\) denotes a set of legal assignments to the inference variables. The actual definition of this set in the form of linear inequalities is dependent on the problem and the subsequent sections are devoted to recipes for constructing this set.
Of course, even the definition of the inference variables is a problem-specific design choice. The inference variables in the objective function are constrained to be zero or one; thus, our problem is an instance of a 0-1 integer linear program. The linear objective (2) ensures that only the coefficients of variables that are assigned to true (or equivalently, to 1 via the Iverson bracket) count towards the total score. While not explicitly stated in the formulation above, we can also add auxiliary discrete or real valued inference variables, either to state problems more conveniently or to facilitate solving them.
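To make this concrete, the following is a minimal sketch of the program in equations (2)-(4) written with the open-source PuLP library; the three variables, their scores, and the single inequality standing in for \(\mathcal{Y}\) are all hypothetical choices made for illustration.

```python
# A minimal 0-1 ILP in the form of equations (2)-(4), using the open-source
# PuLP library. The scores c_i and the one inequality standing in for the
# feasible set Y are illustrative.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

scores = {"y1": 2.0, "y2": -1.0, "y3": 0.5}                    # hypothetical c_i

prob = LpProblem("toy_inference", LpMaximize)
y = {name: LpVariable(name, cat=LpBinary) for name in scores}  # (4): y_i in {0,1}

prob += lpSum(scores[n] * y[n] for n in y)                     # objective (2)
prob += y["y1"] + y["y2"] + y["y3"] >= 1                       # one inequality defining Y (3)

prob.solve()
print({n: int(value(v)) for n, v in y.items()})                # {'y1': 1, 'y2': 0, 'y3': 1}
```

Under the hood, `prob.solve()` hands the program to whichever backend solver is installed (PuLP ships with CBC by default).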
Integer and mixed-integer programming is well studied in the combinatorial optimization literature. An overview of their computational properties is beyond the scope of this survey and the reader should refer to textbooks that cover this topic (Papadimitriou and Steiglitz, 1982; Schrijver, 1998, for example). For our purposes, we should bear in mind that, in general, integer programming is an NP-hard problem. Indeed, 0-1 integer programming was one of Karp's 21 NP-complete problems (Karp, 1972). Thus, while the techniques described in this tutorial provide the tools to encode our problem as an integer program, we should be aware that we may end up with a problem formulation that is intractable. For certain NLP problems such as semantic role labeling (Punyakanok et al., 2008, for example), we can show that certain ways to model the problem lead to inference formulations that are intractable in the worst case. Yet, curiously, in practice, off-the-shelf solvers seem to solve them quite fast! Indeed, the same problem could be encoded in different ways, one of which can be solved efficiently while another is not. One example of this situation is the task of graph-based dependency parsing. The ILP encoding of Riedel and Clarke (2006) required a specialized cutting-plane method, while the flow-inspired encoding of Martins et al. (2009) was more efficiently solvable.
## 3 Basic Operators: Logical Functions
In this section, we will introduce the basic building blocks needed to convert Boolean expressions into a set of linear inequalities. For now, we will only use 0-1 decision variables as described in §2 without any auxiliary real-valued variables. Using _only_ the techniques described in this section, we should be able to write any Boolean expression as a set of linear inequalities.
### Variables and their Negations
Recall that each variable \(y\) in the 0-1 ILP corresponds to a Boolean decision. A natural first constraint may seek to enforce a certain set of decisions, or equivalently, enforce their logical conjunction. This gives us our first recipe.
**Constraint 1:** Forcing the conjunction of decisions \(y_{1},y_{2},\ldots,y_{n}\) to be true. That is, \(y_{1}\wedge y_{2}\wedge\cdots\wedge y_{n}\).
\[\sum_{i=1}^{n}y_{i}=n.\]
Since the decision variables can only be 0 or 1, the sum in the constraint counts the number of decisions enforced. With \(n\) variables, this sum can be \(n\) if, and only if, each one of them takes the value 1.
Handling negations. Setting a variable \(y\) to false is equivalent to setting \(1-y\) to true. This observation gives us a general strategy to deal with negations: Suppose a variable \(y\) is negated in a Boolean expression. While converting this expression into a linear inequality (using one of the recipes in this survey), we will replace occurrences of \(y\) in the inequality with \(1-y\). For example, the constraint \(\neg y\) would become \(1-y=1\) (or \(y=0\)). Applying this strategy to the above constraint gives us a second constraint that forbids a collection of \(n\) decisions from being true.
**Constraint 2:** Forbidding all the decisions \(y_{1},y_{2},\ldots,y_{n}\) from being true. That is, \(\neg y_{1}\wedge\neg y_{2}\wedge\cdots\wedge\neg y_{n}\).
\[\sum_{i=1}^{n}y_{i}=0.\]
The need to force decision variables to be either true or false arises when we wish to unconditionally enforce some external knowledge about the prediction.
**Example 1**.: Suppose we know the ground truth assignments for a subset of our decision variables and we wish to ascertain the best assignment to the other variables according to our model. We could do so by forcing the known variables to their values. Such an approach could be useful for training models with partial supervision over structures.
**Example 2** (Testing inference formulations).: Another use case for the above constraint recipes is that it offers a way to check if our inference formulation for a problem is correct. Suppose we have a labeled data set that maps inputs \(\mathbf{x}\) (e.g., sentences) to outputs \(\mathbf{y}\) (e.g., labeled graphs) and we have framed the problem of predicting these outputs as an ILP.
One way to test whether our problem formulation (as defined by our constraints) is meaningful is to add additional constraints that clamp the decision variables to their ground truth labels in a training set. If the resulting ILP is infeasible for any example, we know that the _rest_ of our constraints do not accurately reflect the training data. Of course, we may choose not to correct this inconsistency with the data, but that is a modeling choice.
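The sketch below shows this feasibility test in PuLP; the modeling constraint and the clamped gold assignments are invented for illustration.

```python
# Sketch of the feasibility test from Example 2: clamp decision variables to
# gold labels and ask whether any feasible assignment remains. The modeling
# constraint and the gold labels here are hypothetical.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, LpStatus

prob = LpProblem("feasibility_check", LpMaximize)
y = {i: LpVariable(f"y{i}", cat=LpBinary) for i in range(3)}
prob += lpSum(y.values())             # any objective will do for a feasibility test
prob += y[0] + y[1] + y[2] <= 1       # a (hypothetical) modeling constraint

for i, gold in {0: 1, 1: 1}.items():  # Constraints 1/2: clamp y0 and y1 to true
    prob += y[i] == gold

prob.solve()
print(LpStatus[prob.status])          # "Infeasible": the constraint contradicts the data
```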
### Disjunctions and their Variants
An important building block in our endeavor is the disjunction. Suppose we have a collection of decision variables and we require that at least one of them should hold. Using the Iverson notation naturally gives us the constraint formulation below.
**Constraint 3:** Disjunction of \(y_{1},y_{2},\ldots,y_{n}\). That is, \(y_{1}\lor y_{2}\vee\cdots\lor y_{n}\).
\[\sum_{i=1}^{n}y_{i}\geq 1\]
Note that this constraint can incorporate negations using the construction from §3.1, as in the following example.
**Example 3**.: If we want to impose the constraint \(\neg y_{1}\vee\neg y_{2}\vee\neg y_{3}\), we need to use \(1-y_{1}\), \(1-y_{2}\) and \(1-y_{3}\) in the recipe above. This gives us
\[1-y_{1}+1-y_{2}+1-y_{3}\geq 1,\] \[\text{that is, }y_{1}+y_{2}+y_{3}\leq 2.\]
There are several variations on this theme. Sometimes, we may require that the number of true assignments should be at least, or at most, or exactly equal to some number \(k\). These _counting quantifiers_ or _cardinality quantifiers_ generalize both conjunctions and disjunctions of decisions. A conjunction of \(n\) variables demands that the number of true assignments should be equal to \(n\); their disjunction demands that at least one of the variables involved should be true.
**Constraint 4:** At least, at most or exactly \(k\)**true** assignments among \(y_{1},y_{2},\ldots,y_{n}\).
\[\text{At least }k\text{:}\quad\sum_{i=1}^{n}y_{i}\geq k\] \[\text{At most }k\text{:}\quad\sum_{i=1}^{n}y_{i}\leq k\] \[\text{Exactly }k\text{:}\quad\sum_{i=1}^{n}y_{i}=k.\]
The use of counting quantifiers does not increase the expressive power over logical expressions. They merely serve as a syntactic shorthand for much larger Boolean expressions. For example, if we wish to state that exactly two of the three variables \(y_{1}\), \(y_{2}\) and \(y_{3}\) are true, we can encode it using the following expression:
\[(y_{1}\wedge y_{2}\wedge\neg y_{3})\vee(y_{1}\wedge\neg y_{2}\wedge y_{3}) \vee(\neg y_{1}\wedge y_{2}\wedge y_{3})\]
An important (that is, frequently applicable) special case of counting quantifiers is _uniqueness quantification_, where we require exactly one of a collection of decisions to hold. While the corresponding linear constraint is clearly easy to write using what we have seen above, uniqueness constraints are important enough to merit stating explicitly.
**Constraint 5:** Unique assignment among \(y_{1},y_{2},\ldots,y_{n}\). That is, \(\exists!\ y_{i}\).
\[\sum_{i=1}^{n}y_{i}=1.\]
As an aside, this constraint is identical to the logical XOR if we have exactly two variables (i.e., their parity is one when the constraint holds), but not when there are more variables. For example, with three variables, if all of them are assigned true, their parity is one, but the above constraint is not satisfied.
**Example 4** (Multiclass classification).: The linear constraint templates described in this section find wide applicability. The simplest (albeit unwieldy) application uses the unique label constraint to formally define multiclass classification. Suppose we have inputs that are to be assigned one of \(n\) labels \(\{\texttt{l}_{\texttt{1}},\texttt{l}_{\texttt{2}},\ldots,\texttt{l}_{ \texttt{n}}\}\). We can write this prediction problem as an integer linear program as follows:
\[\max_{\mathbf{y}}\quad\sum_{i=1}^{n}c_{\text{label}:\mathtt{l}_{i}}\cdot y_{\text{label}:\mathtt{l}_{i}}\]
\[\text{such that}\quad\sum_{i=1}^{n}y_{\text{label}:\mathtt{l}_{i}}=1,\]
\[y_{\text{label}:\mathtt{l}_{i}}\in\{0,1\}.\]
We have \(n\) decision variables, each corresponding to one of the possible label assignments. The decision of choosing the label \(\mathtt{l}_{i}\) is scored in the objective function by a score \(c_{\text{label}:\mathtt{l}_{i}}\). The goal of inference is to find the score maximizing assignment of these decision variables. The constraint mandates that exactly one of the inference outcomes is allowed, thus ensuring that the label that maximizes the score is chosen.
The above example merely illustrates the use of the unique label constraint. While inference for multiclass classification can be written in this form, it is important to note that it is unwise to use a black box ILP solver to solve it; simply enumerating the labels and picking the highest scoring one suffices. This example highlights the difference between _framing_ a problem as an integer linear program and _solving_ it as one. While multiclass classification can clearly be framed as an ILP, solving it as one is not a good idea.
However, the multiclass as an ILP construction is a key building block for defining larger structured outputs. A commonly seen inference situation requires us to assign a unique label to each of a collection of categorical random variables, subject to other constraints that define the interactions between them. In such a situation, each categorical random variable will invoke the multiclass as an ILP construction.
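To close this subsection, here is the multiclass construction written out in PuLP, purely to illustrate the encoding (as noted above, one would not actually solve multiclass classification this way); the labels and scores are hypothetical.

```python
# Multiclass classification as a 0-1 ILP: one variable per label, one
# unique-label constraint (Constraint 5). Labels and scores are hypothetical.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

scores = {"l1": 0.2, "l2": 1.3, "l3": -0.4}          # c_{label:l} for each label
prob = LpProblem("multiclass", LpMaximize)
y = {l: LpVariable(f"y_label_{l}", cat=LpBinary) for l in scores}

prob += lpSum(scores[l] * y[l] for l in scores)      # objective
prob += lpSum(y.values()) == 1                       # exactly one label is chosen

prob.solve()
print([l for l in y if value(y[l]) == 1])            # ['l2'], the argmax label
```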
### A Recipe for Boolean Expressions
In §3.1 and §3.2, we saw recipes for writing Boolean variables, their negations, conjunctions and disjunctions as linear inequalities. With the full complement of operators, we can convert any constraint represented as a Boolean expression into a collection of linear inequalities using the following procedure:
1. Convert the Boolean expression into its conjunctive normal form (CNF) using De Morgan's laws and the distributive property, or by introducing new variables and using the Tseitin transformation (Tseitin, 1983).
2. Recall that a CNF is a conjunction of disjunctive clauses. Express each clause in the CNF (a disjunction) as a linear inequality.
Let us work through this procedure with two examples. In both examples, we will not worry about the objective function of the ILP and only deal with converting Boolean expressions into linear constraints.
**Example 5**.: Suppose we have three Boolean variables \(y_{1}\), \(y_{2}\) and \(y_{3}\) and our goal is to convert the following Boolean expression into linear inequalities:
\[(y_{1}\wedge\neg y_{2})\vee(y_{1}\wedge\neg y_{3})\]
The first step, according to the recipe above, is to convert this into its equivalent conjunctive normal form:
\[y_{1}\wedge\left(\neg y_{2}\vee\neg y_{3}\right).\]
Now, we have two clauses, each of which will become a linear constraint. Using the templates we have seen so far and simplifying, we get the following linear constraints:
\[y_{1} =1,\] \[y_{2}+y_{3} \leq 1.\]
**Example 6**.: Suppose we have three decision variables \(y_{1},y_{2}\) and \(y_{3}\) and we wish to enforce the constraint that either all of them should be true or all of them should be false. The constraint can be naturally stated as:
\[(y_{1}\wedge y_{2}\wedge y_{3})\vee\left(\neg y_{1}\wedge\neg y_{2}\wedge\neg y _{3}\right).\]
To express the constraint as a set of linear inequalities, let us first write down its conjunctive normal form:
\[(y_{1}\vee\neg y_{2})\wedge(y_{1}\vee\neg y_{3})\wedge(y_{2}\vee\neg y_{1}) \wedge(y_{2}\vee\neg y_{3})\wedge(y_{3}\vee\neg y_{1})\wedge(y_{3}\vee\neg y_ {2})\,.\]
Now, we can convert each disjunctive clause in the CNF form to a different linear constraint following the templates we have seen before. After simplification, we get the following linear system that defines the feasible set of assignments:
\[y_{1}-y_{2} \geq 0,\] \[y_{1}-y_{3} \geq 0,\] \[y_{2}-y_{1} \geq 0,\] \[y_{2}-y_{3} \geq 0,\] \[y_{3}-y_{1} \geq 0,\] \[y_{3}-y_{2} \geq 0.\]
The procedure provides a systematic approach for converting Boolean constraints (which are easier to state) to linear inequalities (allowing us to use industrial strength solvers for probabilistic inference). Indeed, the recipe is the approach suggested by Rizzolo and Roth (2007) and Rizzolo (2012) for learning based programming. However, if applied naively, this methodical approach can present us with difficulties with respect to the number of linear constraints generated.
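Step 2 of the recipe is entirely mechanical, as the sketch below shows; it assumes that a clause is represented as a list of (variable name, polarity) pairs, a representation chosen purely for illustration.

```python
# A sketch of step 2: turn one CNF clause into one linear inequality.
# A clause is (by assumption) a list of (variable, polarity) pairs; a
# negated variable y is replaced by 1 - y, per the negation rule above.
def clause_to_inequality(clause):
    """Return (coeffs, bound) encoding sum_v coeffs[v] * y_v >= bound."""
    coeffs, bound = {}, 1
    for name, positive in clause:
        if positive:
            coeffs[name] = coeffs.get(name, 0) + 1
        else:
            coeffs[name] = coeffs.get(name, 0) - 1
            bound -= 1        # the constant from 1 - y moves to the right-hand side
    return coeffs, bound

# The clause (y1 OR NOT y2) from Example 6 becomes y1 - y2 >= 0:
print(clause_to_inequality([("y1", True), ("y2", False)]))
# -> ({'y1': 1, 'y2': -1}, 0)
```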
Consider the final set of inequalities obtained in Example 6 above. While we could leave the linear system as it is, the system of equations implies that \(y_{1}=y_{2}=y_{3}\), as does the logical expression that we started with. This example illustrates an important deficiency of the systematic approach for converting logical formulae into linear inequalities. While the method is sound and complete, it can lead to a much larger set of constraints than necessary. We will see in §4 that such "improperly" encoded constraints can slow down inference.
One way to address such a blowup in the number of constraints is to identify special cases that represent frequently seen inference situations and lead to a large number of constraints, and try to find more efficient conversion techniques for them. The following sections enumerate such special cases, starting with implications (§4) and moving on to combinatorial structures (§5).
## 4 Simple and Complex Logical Implications
The first special case of constraints we will encounter is the conditional form. At first, we will simply convert the implications into disjunctions and use the disjunction templates from §3. Then, in §4.2, we will exploit the fact that our inference variables can only be 0 or 1 to reduce the number of constraints.
### Simple Conditional Forms
First, let us consider the simplest implication constraint: \(y_{1}\to y_{2}\). Clearly, this is equivalent to the disjunction \(\neg y_{1}\lor y_{2}\) and we can convert it to the constraint \(-y_{1}+y_{2}\geq 0\). We can generalize this to a conditional form with a conjunctive antecedent and a disjunctive consequent:
\[\bigwedge_{i=1}^{m}y_{l_{i}}\rightarrow\bigvee_{i=1}^{n}y_{r_{i}}.\]
The implication is equivalent to the disjunction:
\[\left(\bigvee_{i=1}^{m}\neg y_{l_{i}}\right)\bigvee\left(\bigvee_{i=1}^{n}y_{ r_{i}}\right).\]
Now, we can use the disjunction and negation rules that we have seen before. We get
\[\sum_{i=1}^{m}\left(1-y_{l_{i}}\right)+\sum_{i=1}^{n}y_{r_{i}}\geq 1.\]
Simplifying the expression and moving constants to the right hand side gives us our next recipe:
**Constraint 6:** Implications of the form \(\bigwedge_{i=1}^{m}y_{l_{i}}\rightarrow\bigvee_{i=1}^{n}y_{r_{i}}\)
\[-\sum_{i=1}^{m}y_{l_{i}}+\sum_{i=1}^{n}y_{r_{i}}\geq 1-m.\]
One special case merits explicit mention--the Horn clause, which is well studied in logic programming (Chandra and Harel, 1985).
**Constraint 7:** Horn clauses of the form \(y_{l_{1}}\wedge y_{l_{2}}\wedge\cdots\wedge y_{l_{m}}\to y_{r}\)
\[-\sum_{i=1}^{m}y_{l_{i}}+y_{r}\geq 1-m.\]
### Complex Conditional Forms
Suppose we have three decisions \(y_{1}\), \(y_{2}\) and \(y_{3}\) and we require that the decision \(y_{3}\) holds if, and only if, both \(y_{1}\) and \(y_{2}\) hold. We can write this requirement as
\[y_{1}\wedge y_{2}\leftrightarrow y_{3}. \tag{5}\]
The constraint can be written as two implications:
\[y_{1}\wedge y_{2}\to y_{3} \tag{6}\] \[y_{3}\to y_{1}\wedge y_{2}. \tag{7}\]
The first implication matches the template we saw in SS4.1 and we can write it as \(-y_{1}-y_{2}+y_{3}\geq-1\). The second one can be broken down into two conditions \(y_{3}\to y_{1}\) and \(y_{3}\to y_{2}\). These correspond to the inequalities \(y_{1}-y_{3}\geq 0\) and \(y_{2}-y_{3}\geq 0\) respectively. In other words, the single biconditional form, following the methodical approach, gets translated into three linear inequalities. In general, if there are \(n\) elements in the conjunction on the left hand side of the implication, we will have \(n+1\) linear inequalities. Can we do better?2
Footnote 2: We should point out that we are working under the assumption that fewer, more dense inequalities are better for solvers. Indeed, the experiments in §4.3 corroborate this assumption. However, while this seems to hold empirically for solvers today, the inner workings of a solver may render such optimization unnecessary.
In this section, we will see several commonly seen design patterns concerning conditional expressions. It summarizes and generalizes techniques for converting conditional forms into linear inequalities from various sources (Gueret et al., 2002; Punyakanok et al., 2008; Noessner et al., 2013, inter alia).
Equivalence of decisions. Suppose we wish to enforce that two decision variables should take the same value. If this condition were written as a logical expression, we would have \(y_{1}\leftrightarrow y_{2}\). We saw in the example in §3.3 that naively converting the implication into a CNF and proceeding with the conversion leads to two constraints per equivalence. Instead, we can use the facts that the decisions map to numbers, and that we have the ability to use linear equations, and not just inequalities, to get the following natural constraint:
**Constraint 8:** Equivalence of two variables: \(y_{1}\leftrightarrow y_{2}\).
\[y_{1}-y_{2}=0.\]
Disjunctive Implication. Suppose we have two collections of inference variables \(y_{l_{1}},y_{l_{2}},\cdots,y_{l_{n}}\) and \(y_{r_{1}},y_{r_{2}},\cdots,y_{r_{m}}\). We wish to enforce the constraint that if _any_ of the \(y_{l_{i}}\) decisions are true, then at least one of the \(y_{r_{i}}\)'s should be true. It is easy to verify that if written naively, this will lead to \(n\) linear inequalities. However, only one suffices.
**Constraint 9:** Disjunctive Implication: \(\bigvee\limits_{i=1}^{n}y_{l_{i}}\to\bigvee\limits_{i=1}^{m}y_{r_{i}}\)
\[-\sum\limits_{i=1}^{n}y_{l_{i}}+n\sum\limits_{i=1}^{m}y_{r_{i}}\geq 0.\]
To show that this is correct, let us consider two cases.
1. First, if the left hand side of the implication is false (i.e., none of the \(y_{l_{i}}\)'s are true), then the implication holds. In this case, we see that the inequality is satisfied, as no negative terms remain on its left hand side.
2. Second, if the left hand side of the implication is true, then at least one, and as many as \(n\) of the \(y_{l_{i}}\)'s are true. Consequently, the sum of the negative terms in the inequality can be as low as \(-n\). For the implication to hold, at least one of the \(y_{r_{i}}\)'s should be true. But if so, we have \(n\sum y_{r_{i}}\geq n\). In other words, the left hand side of the inequality becomes non-negative.
We see that the inequality is satisfied whenever the implication holds. Conversely, if the implication is violated, that is, if some \(y_{l_{i}}\) is true but no \(y_{r_{i}}\) is, then the left hand side of the inequality is negative and the inequality fails as well.
Conjunctive Implication. This setting is similar to the previous one. We have two collections of inference variables \(y_{l_{1}},y_{l_{2}},\cdots,y_{l_{n}}\) and \(y_{r_{1}},y_{r_{2}},\cdots,y_{r_{m}}\). We wish to enforce the constraint that if _all_ the \(y_{l_{i}}\)'s are true, then _all_ the \(y_{r_{i}}\)'s should be true. As with the case of disjunctive implications, if written naively, this will lead to \(m\) linear inequalities. Once again, we can compactly encode the requirement with one inequality.
**Constraint 10:** Conjunctive implication: \(\bigwedge\limits_{i=1}^{n}y_{l_{i}}\to\bigwedge\limits_{i=1}^{m}y_{r_{i}}\)
\[-m\sum\limits_{i=1}^{n}y_{l_{i}}+\sum\limits_{i=1}^{m}y_{r_{i}}\geq m(1-n).\]
Intuitively, if even one of the \(y_{l_{i}}\)'s is false, the inequality holds irrespective of the number of \(y_{r_{i}}\)'s that are true. However, if all the \(y_{l_{i}}\)'s are true, then every \(y_{r_{i}}\) needs to be true for the inequality to hold. To show the correctness of the above recipe, consider the contrapositive of the conjunctive implication: \(\bigvee\limits_{i=1}^{m}\neg y_{r_{i}}\to\bigvee\limits_{i=1}^{n}\neg y_{l_{i}}\). We have a disjunctive implication where all variables are negated. We can use the recipe for disjunctive implications from above, but replace all variables \(y_{l_{i}}\) and \(y_{r_{i}}\) with \(1-y_{l_{i}}\) and \(1-y_{r_{i}}\) to account for the fact that they are negated. Cleaning up the resulting inequality gives us the recipe for conjunctive implications.
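Because these compact recipes are easy to misstate, it is worth checking them mechanically. The brute-force test below (plain Python, no solver required) verifies, for small \(n\) and \(m\), that the inequalities of Constraints 9 and 10 are satisfied by exactly the assignments that satisfy the corresponding implications.

```python
# Brute-force verification of Constraints 9 and 10 for small n, m: each
# inequality holds on exactly the assignments where its implication holds.
from itertools import product

n, m = 3, 2
for bits in product([0, 1], repeat=n + m):
    left, right = bits[:n], bits[n:]
    disj_impl = (not any(left)) or any(right)               # OR(left) -> OR(right)
    disj_ineq = -sum(left) + n * sum(right) >= 0            # Constraint 9
    conj_impl = (not all(left)) or all(right)               # AND(left) -> AND(right)
    conj_ineq = -m * sum(left) + sum(right) >= m * (1 - n)  # Constraint 10
    assert disj_impl == disj_ineq and conj_impl == conj_ineq
print("Constraints 9 and 10 agree with their implications on all assignments.")
```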
**Example 7**.: Using the conjunctive implication, we can now revisit the constraint (5) we saw at the beginning of this section, and see that it can be written using only two inequalities instead of three. As earlier, we will write this biconditional form as the two conditional forms (6) and (7). The first one, being a simple conditional form, corresponds to one constraint. The second one, \(y_{3}\to y_{1}\wedge y_{2}\), is a conjunctive implication and can be written as the single inequality \(-2y_{3}+y_{1}+y_{2}\geq 0\).
Clearly other conditional forms that are not discussed here are possible. However, not all of them are amenable to being reduced to a single inequality. The usual strategy to handle such complex conditional forms is to symbolically transform a constraint into the forms described here and convert the resulting constraints into a system of linear inequalities.
Complex implications are useful to write down many-to-many correspondences between inference assignments. The need to write down many-to-many correspondences arises naturally when we are predicting labels for nodes and edges of a graph and we wish to restrict values of edge labels based on the labels assigned to nodes to which the edge is incident.
**Example 8**.: To illustrate an application of complex implications, consider a problem where we have a collection of slots, denoted by the set \(\mathcal{S}=\{S_{1},S_{2},S_{3},\cdots\}\). Suppose our goal is to assign a unique label from \(\mathcal{L}=\{\mathtt{l}_{\mathtt{1}},\mathtt{l}_{\mathtt{2}},\mathtt{l}_{ \mathtt{3}},\mathtt{l}_{\mathtt{4}}\}\) to each slot.
The problem definition naturally gives us inference variables of the form \(y_{S_{i}:\mathtt{l}_{j}}\), denoting that the slot \(S_{i}\) is assigned the label \(\mathtt{l}_{j}\). The uniqueness constraint can be written as a Boolean expression demanding that, for every slot, there is a unique label.
\[\forall s\in\mathcal{S},\quad\exists!\ \mathtt{l}\in\mathcal{L},\ y_{s:\mathtt{l}}.\]
We can write this constraint as a collection of linear inequalities, using the multiclass as an ILP construction:
\[\forall s\in\mathcal{S},\quad\sum_{l\in\mathcal{L}}y_{s:\mathtt{l}}=1.\]
In addition, suppose our knowledge of the task informs us that the slots \(S_{1}\), \(S_{2}\), and \(S_{4}\) constrain one another:
The slot \(S_{1}\) can assume one of the labels \(\mathtt{l}_{\mathtt{1}}\) or \(\mathtt{l}_{\mathtt{2}}\) if, and only if, the slot \(S_{2}\) is assigned either the label \(\mathtt{l}_{\mathtt{3}}\) or \(\mathtt{l}_{\mathtt{4}}\).
Likewise, \(S_{1}\) can assume one of \(\mathtt{l}_{\mathtt{3}}\) or \(\mathtt{l}_{\mathtt{4}}\) if, and only if, the slot \(S_{4}\) is assigned either the label \(\mathtt{l}_{\mathtt{1}}\) or \(\mathtt{l}_{\mathtt{2}}\).
This domain knowledge can be formally written as
\[y_{S_{1}:\mathtt{l}_{\mathtt{1}}}\lor y_{S_{1}:\mathtt{l}_{ \mathtt{2}}} \leftrightarrow y_{S_{2}:\mathtt{l}_{\mathtt{3}}}\lor y_{S_{2}: \mathtt{l}_{\mathtt{4}}},\] \[y_{S_{4}:\mathtt{l}_{\mathtt{1}}}\lor y_{S_{4}:\mathtt{l}_{ \mathtt{2}}} \leftrightarrow y_{S_{1}:\mathtt{l}_{\mathtt{3}}}\lor y_{S_{1}: \mathtt{l}_{\mathtt{4}}}.\]
Each constraint here is a biconditional form, which can be written as two disjunctive implications and subsequently converted into linear inequalities using the recipe we have seen earlier in this section:
\[-y_{S_{1}:\mathtt{l}_{1}}-y_{S_{1}:\mathtt{l}_{2}}+2y_{S_{2}:\mathtt{l}_{3}}+2y_{S_{2}:\mathtt{l}_{4}}\geq 0,\]
\[-y_{S_{2}:\mathtt{l}_{3}}-y_{S_{2}:\mathtt{l}_{4}}+2y_{S_{1}:\mathtt{l}_{1}}+2y_{S_{1}:\mathtt{l}_{2}}\geq 0,\]
\[-y_{S_{4}:\mathtt{l}_{1}}-y_{S_{4}:\mathtt{l}_{2}}+2y_{S_{1}:\mathtt{l}_{3}}+2y_{S_{1}:\mathtt{l}_{4}}\geq 0,\]
\[-y_{S_{1}:\mathtt{l}_{3}}-y_{S_{1}:\mathtt{l}_{4}}+2y_{S_{4}:\mathtt{l}_{1}}+2y_{S_{4}:\mathtt{l}_{2}}\geq 0.\]
It should be easy to verify that if we had used Boolean operations to convert each of the biconditional forms into a conjunctive normal form and then applied the recipes from SS3, we would end up with eight inequalities instead of the four listed above.
### The Case for Special Cases: Empirical Evidence
The above discussion assumes that fewer inequalities are better handled by solvers. To see that this is indeed the case, let us look at the results of experiments where we compare the naive conversion of conjunctive and disjunctive implications (i.e., via their conjunctive normal form, as in §3.3) with their more compact counterparts defined in this section.
We considered synthetic problems with 100 categorical variables, each of which can take 50 values. As in Example 8, this gives us 5000 Boolean variables, with the unique label constraint within each block. We constructed random implications of the form seen above using these categorical variables, and their Boolean counterparts. To do so, we sampled two equally sized random sets of categorical variables to define the left- and right- hand sides of the implication respectively, and assigned a random label to each. Note that each label assignment gives us a Boolean inference variable. We randomly negated half of these sampled inference variables and constructed a conjunctive or disjunctive implication as per the experimental condition.
Given the above setup, the question we seek to resolve is: _Is it more efficient to create a smaller number of compact inequalities than to employ the naive conversion approach via conjunctive normal forms?_ We considered two independent factors in our experiments: the number of implications, and the fraction of categorical variables participating in one constraint, i.e., the _constraint density_. For different values of these factors, we constructed 100 integer linear programs using both the naive and compact conversion strategies, and measured the average wall-clock time for finding a solution.3
Footnote 3: All experiments were conducted on a 2.6 GHz Intel Core i5 laptop using the Gurobi solver ([http://www.gurobi.com](http://www.gurobi.com)), version 8.1. To control for any confounding effects caused by multi-core execution of the solver, we restricted the solver to use one of the machine’s cores for all experiments.
Figures 1 and 2 show the results of these experiments. We see that for both kinds of implications, not only does the more compact encoding lead to a solution faster, but the time improvements also grow as the number of Boolean constraints increases. Across all settings, we found that when the number of Boolean constraints is over seven, the improvements in clock time are statistically significant with \(p<0.001\) using the paired t-test. These results show the impact of using fewer inequalities for encoding constraints. For example, for conjunctive implications, with 100 constraints, we get over a \(2\times\) speedup in inference time. The results also suggest a potential strategy for making a solver faster: if a solver could automatically detect the inherent structure in naively generated constraints, it may be able to rewrite them into the more efficient forms.
## 5 Complex Building Blocks
So far we have seen basic building blocks that can help us declaratively construct output spaces for ILP inference. While any Boolean expression can be expressed as linear inequalities using only the tools introduced in §3, we saw in §4 that certain Boolean predicates (conditional forms) can be more compactly encoded as linear inequalities than the naive expansion would suggest. In this section, we will look at more complex building blocks that abstract away larger predicates efficiently. We will use the fact that graph problems can be framed as linear programs to make these abstractions. We demonstrate two inference situations that frequently show up in NLP: spanning tree constraints and graph connectivity. We should note that other examples exist in the literature, for example, Germann et al. (2004) studied the use of ILPs to define the decoding problem for machine translation as a traveling salesman problem. We refer the reader to Trick (2005) for a discussion on using higher-order constructs for constrained inference.
Figure 1: Comparing encodings for conjunctive implications. The dashed brown lines show the average solver time (in milliseconds) across 100 different runs for the naïve conversion to linear inequalities (§3), while the solid blue lines correspond to the compact conversion (§4). The shaded regions show one standard deviation. The two subfigures show different constraint densities, which control how many categorical variables are involved in the implications. Across both conditions, the compact encoding is more efficient.
Figure 2: Comparing encodings for disjunctive implications. See Figure 1 for details about the figure elements. As with conjunctions, compact encoding is more efficient.
Notation. Since we will be dealing with constraints on graph structures, let us introduce the notation we will use for the rest of this section. We will denote vertices of a graph by integers \(1,2,\ldots,n\) and edges by pairs \((i,j)\). Thus, for any vertex \(i\), its outgoing edges are pairs of the form \((i,j)\) and incoming edges are pairs of the form \((j,i)\).
### Spanning Trees
Our first example concerns spanning trees. Suppose each edge in the graph is associated with a score. Our goal is to identify the highest scoring collection of edges that form a spanning tree. Of course, efficient algorithms such as those of Boruvka, Prim or Kruskal solve the problem of finding maximum spanning trees for undirected graphs. If we are dealing with directed graphs, then the equivalent problem of finding the maximum spanning arborescence can be solved by the Chu-Liu-Edmonds algorithm. However, we might want to enforce additional task- or domain-specific constraints on the tree, rendering these efficient maximum spanning tree (or arborescence) methods unsuitable.
To simplify discourse, we will assume that we have a fully connected, undirected graph at hand. Our goal is to identify a subset of edges that form a tree over the vertices. The construction outlined in this section should be appropriately modified to suit variations.
Let us introduce a set of inference variables of the form \(y_{ij}\) corresponding to an edge \((i,j)\) connecting vertices \(i\) and \(j\). Since we are considering an undirected graph, and will not allow self-edges in the spanning tree, we can assume that \(i<j\) for all our inference variables. If the variable \(y_{ij}\) is set to true, then the corresponding edge \((i,j)\) is selected in the final sub-graph. One method for enforcing a tree structure is to enumerate every possible cycle and add a constraint prohibiting it. However, doing so can lead to an exponential number of constraints, necessitating specialized solution strategies such as the cutting plane method (Riedel and Clarke, 2006).
Alternatively, we can exploit the connection between network flow problems and optimal trees to construct a more concise set of linear inequalities (Magnanti and Wolsey, 1995; Martins et al., 2009). In particular, we will use the well-studied relationship between the spanning tree problem and the single commodity flow problem. In the latter, we are given a directed graph, and we seek to maximize the total amount of a commodity (also called the flow) transported from a source node to one or more target nodes in the graph. Each edge in the graph has capacity constraints that limit how much flow it can carry.
Without loss of generality, suppose we choose vertex \(1\) to be the root of the tree. Then, we can write the requirement that the chosen edges should form a tree using the single commodity flow model as follows:
1. Vertex \(1\) sends a flow of \(n-1\) units to the rest of the graph.
2. Each other vertex consumes one unit of flow. The amount of flow consumed by the node is simply the difference between its incoming and outgoing flows.
3. Only edges that are chosen to be in the tree can carry flow.
To realize these three conditions, we will need to introduce auxiliary non-negative integer (or real) valued variables \(\phi_{ij}\) and \(\phi_{ji}\) that denote the flow associated with edge \((i,j)\) in either direction. Note that the flow variables are directed even though the underlying graph is undirected. These auxiliary variables do not feature in the ILP objective, or equivalently they are associated with zero costs in the objective.
Using these auxiliary variables, we get the following recipe:
**Constraint 11:** Select a spanning tree among vertices \(1,2,\cdots,n\) of an undirected graph using edge variables \(y_{ij}\), where \(i<j\). Introduce new integer variables \(\phi_{ij}\) and \(\phi_{ji}\) for every such pair \(i,j\).
\[\sum\limits_{j}\phi_{1j}-\sum\limits_{j}\phi_{j1}=n-1,\]
\[\text{for every vertex }i\in\{2,3,\cdots,n\},\qquad\sum\limits_{j}\phi_{ji}-\sum\limits_{j}\phi_{ij}=1,\]
\[\text{for every edge }(i,j),\qquad\phi_{ij}\leq(n-1)y_{ij},\]
\[\text{for every edge }(i,j),\qquad\phi_{ji}\leq(n-1)y_{ij},\]
\[\text{for every edge }(i,j),\qquad\phi_{ij}\geq 0,\]
\[\text{for every edge }(i,j),\qquad\phi_{ji}\geq 0,\]
\[\sum\limits_{i,j}y_{ij}=n-1.\]
The first constraint here enforces that the chosen root sends a flow of \(n-1\) units to the rest of the vertices. The second one says that every other vertex consumes exactly one unit of flow by mandating that the difference between the total incoming flow and the total outgoing flow for any such vertex is \(1\). The third and fourth inequalities connect the inference variables \(y_{ij}\) to the flow variables by ensuring that only edges that are selected (i.e., where \(y_{ij}\) is **true**) can carry flow. The next two constraints ensure that all the flows are non-negative. Finally, to ensure that the final sub-graph is a tree, the last constraint mandates that exactly \(n-1\) edges are chosen. We will refer to these constraints collectively as the Spanning Tree constraints over the variables \(y_{ij}\).
There are other ways to efficiently formulate spanning tree constraints using linear inequalities. We refer the reader to Magnanti and Wolsey (1995) for an extensive discussion involving tree optimization problems and their connections to integer linear programming.
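To make Constraint 11 concrete, here is a sketch of the construction in PuLP on a small graph with hypothetical edge weights; the flow variables are continuous and carry no cost in the objective.

```python
# A sketch of the single-commodity-flow spanning tree constraints
# (Constraint 11) in PuLP, on a small undirected graph with made-up weights.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

weights = {(1, 2): 10, (1, 3): 50, (2, 3): 11, (3, 4): -9}  # edges with i < j
nodes = {v for e in weights for v in e}
n = len(nodes)

prob = LpProblem("max_spanning_tree", LpMaximize)
y = {(a, b): LpVariable(f"y_{a}_{b}", cat=LpBinary) for (a, b) in weights}
# Directed, continuous flow variables for both orientations of each edge.
phi = {(i, j): LpVariable(f"phi_{i}_{j}", lowBound=0)
       for (a, b) in weights for (i, j) in ((a, b), (b, a))}

prob += lpSum(weights[e] * y[e] for e in weights)           # objective

def flow_out(v):
    return lpSum(f for (i, j), f in phi.items() if i == v)

def flow_in(v):
    return lpSum(f for (i, j), f in phi.items() if j == v)

prob += flow_out(1) - flow_in(1) == n - 1        # the root sends n-1 units
for v in nodes - {1}:
    prob += flow_in(v) - flow_out(v) == 1        # every other vertex consumes 1
for (a, b) in weights:                           # only chosen edges carry flow
    prob += phi[(a, b)] <= (n - 1) * y[(a, b)]
    prob += phi[(b, a)] <= (n - 1) * y[(a, b)]
prob += lpSum(y.values()) == n - 1               # a tree has n - 1 edges

prob.solve()
print([e for e in weights if value(y[e]) == 1])  # [(1, 3), (2, 3), (3, 4)]
```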
To illustrate the Spanning Tree construction, and how it can be used in conjunction with other constraints, let us look at an example.
**Example 9**.: Consider the graph in Figure 3(a). Suppose our goal is to find a tree that spans all the nodes in the graph, and has the highest cumulative weight. To this end, we can instantiate the recipe detailed above.
Each edge in the graph corresponds to one inference variable that determines whether the corresponding edge is in the tree or not. The variables are weighted in the objective as per the edge weights. (We do not need to add variables for any edge not shown in the figure; they are weighted \(-\infty\), and will never get selected.) Collectively, all the edge variables, scaled by their corresponding weights, give us the ILP objective to maximize, namely:
\[10y_{12}+50y_{13}+5y_{15}+11y_{23}+15y_{25}-9y_{34}-7y_{35}-50y_{45}\]
Next, we can instantiate the spanning tree constraints using flow variables \(\{\phi_{12},\phi_{21},\cdots\}\). To avoid repetition, we will not rewrite the constraints here. Solving the (mixed) integer linear program with the flow constraints gives us an assignment to the \(y_{ij}\) variables that corresponds to the tree in Figure 3(b). Of course, if our goal was merely to find the maximum spanning tree in the graph, we need not (and perhaps, should not) seek to do so via an ILP, and instead use one of the named greedy algorithms mentioned earlier that is specialized for this purpose.
Now, suppose we wanted to find the second highest scoring tree. Such a situation may arise, for example, when finding the top-\(k\) solutions of an inference problem. To do so, we can add a single extra constraint, in addition to the flow constraints, that prohibits the tree from Figure 3(b). In other words, the solution we seek should satisfy the following constraint:
\[\neg\left(y_{13}\wedge y_{23}\wedge y_{25}\wedge y_{34}\right)\]
We can convert this constraint into linear inequalities using the recipes we have seen previously in this survey. Adding the inequality to the ILP from above will give us the tree in Figure 3(c).
### Graph Connectivity
Our second complex building block involves distilling a _connected_ sub-graph from a given graph. Suppose our graph at hand is directed and we seek to select a sub-graph that spans all the nodes and is connected. We can reduce this to the spanning tree constraint by observing that any connected graph should contain a spanning tree. This observation gives us the following solution strategy: construct an auxiliary problem (i.e., finding a spanning tree) whose solution will ensure the connectivity constraints we need.
Let inference variables \(y_{ij}\) denote the decision that the edge \((i,j)\) is selected. To enforce the connectivity constraints, we will introduce auxiliary Boolean inference variables \(z_{ij}\) (with zero objective coefficients) for every edge \((i,j)\) or \((j,i)\) that is in the original graph. In other words, the auxiliary variables we introduce are undirected.
Using these auxiliary variables, we can state the connectivity requirement as follows:
Figure 3: An undirected graph to illustrate the spanning tree constraints. The goal is to find the two highest scoring trees spanning the nodes in subfigure (a). Example 9 shows how to generate the two trees in subfigures (b) and (c) incrementally. The directed edges in the tree show the direction of commodity flow in the solutions to the mixed integer programs.
1. The inference variables \(z_{ij}\) form a spanning tree over the nodes.
2. If \(z_{ij}\) is true, then either the edge \((i,j)\) or the edge \((j,i)\) should get selected.
We can write these two requirements using the building blocks we have already seen.
**Constraint 12:** Find a connected spanning sub-graph of the nodes \(1,2,\cdots,n\)
Spanning Tree constraints over variables \(z_{ij}\),
for every \((i,j)\) such that \(i<j\), \(z_{ij}\to y_{ij}\lor y_{ji}\).
Each of these constraints can be reduced to a collection of linear inequalities using the tools we have seen so far. We will see an example of how a variant of this recipe can be used in §6.2. In the construction above, the \(z\)'s help set up the auxiliary spanning tree problem. Their optimal values are typically disregarded, and it is the assignment to the \(y\)'s that constitutes the solution to the original problem.
### Other Graph Problems
In general, if the problem at hand can be written as a known and tractable graph problem, then there are various efficient ways to instantiate linear inequalities that encode the structure of the output graph. We refer the reader to resources such as Papadimitriou and Steiglitz (1982), Magnanti and Wolsey (1995) and Schrijver (1998) for further reference. We also refer the reader to the AD\({}^{3}\) algorithm (Martins et al., 2015) that supports the coarse decomposition of inference problems to take advantage of graph algorithms directly.
### Soft Constraints
The constraints discussed so far in this survey are hard constraints. That is, they prohibit certain assignments of the decision variables. In contrast, a _soft constraint_ merely penalizes assignments that violate it rather than disallowing them. Soft constraints can be integrated into the integer linear programming framework in a methodical fashion. Srikumar (2013) explains the process of adding soft constraints into ILP inference. Here we will see a brief summary.
As before, suppose we have an inference problem expressed as an integer linear program:
\[\max_{\mathbf{y}}\quad\sum_{i}c_{i}y_{i}\]
\[\text{s.t.}\quad\mathbf{y}\in\mathcal{Y},\]
\[y_{i}\in\{0,1\}.\]
Here, the requirement that \(\mathbf{y}\in\mathcal{Y}\) is assumed to be stated as linear inequalities. However, as we have seen in the previous sections, they could be equivalently stated as Boolean expressions.
Suppose that, in addition to the existing constraints, we have an additional Boolean constraint \(C(\mathbf{y})\) written in terms of the inference variables \(\mathbf{y}\). Instead of treating this as a hard constraint, we only wish to penalize assignments \(\mathbf{y}\) that violate it by a penalty term \(\rho_{C}\). We will consider the case where \(\rho_{C}\) is independent of \(\mathbf{y}\). To address inference in such a scenario, we can introduce a new Boolean variable \(z\) that tracks whether the constraint is not satisfied. That is,
\[z\leftrightarrow\neg C(\mathbf{y}). \tag{8}\]
If the constraint is not satisfied, then the corresponding assignment to the decision variables should be penalized by \(\rho_{C}\). We can do so by adding a term \(-z\rho_{C}\) to the objective of the original ILP. Since the constraint (8) that defines the new variable \(z\) is also a Boolean expression, it can be converted into a set of linear inequalities.
This procedure gives us the following new ILP that incorporates the soft constraint:
\[\max_{\mathbf{y},z}\quad\sum_{i}c_{i}y_{i}-z\rho_{C}\]
\[\text{s.t.}\quad\mathbf{y}\in\mathcal{Y},\]
\[z\leftrightarrow\neg C(\mathbf{y}),\]
\[y_{i},z\in\{0,1\}.\]
We can summarize the recipe for converting soft constraints into larger ILPs below:
**Constraint 13:**: Soft constraint \(C(\mathbf{y})\) with a penalty \(\rho_{C}\)
Add a Boolean variable \(z\) to the objective with coefficient \(-\rho_{C}\)
Add constraint \(z\leftrightarrow\neg C(\mathbf{y})\)
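As a sketch, consider the soft constraint \(C(\mathbf{y})=y_{1}\lor y_{2}\): its negation is \(\neg y_{1}\wedge\neg y_{2}\), so the biconditional defining \(z\) compiles into three linear inequalities using the recipes from §4.2. The scores and penalty below are hypothetical.

```python
# Soft constraint C(y) = y1 OR y2 with penalty rho (Constraint 13): z tracks
# the violation of C and is penalized in the objective. Scores are made up.
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, value

rho = 5.0
prob = LpProblem("soft_constraint", LpMaximize)
y1 = LpVariable("y1", cat=LpBinary)
y2 = LpVariable("y2", cat=LpBinary)
z = LpVariable("z", cat=LpBinary)

prob += -1.0 * y1 - 2.0 * y2 - rho * z  # both scores prefer false; violating C costs rho
prob += z + y1 <= 1                     # z -> NOT y1
prob += z + y2 <= 1                     # z -> NOT y2
prob += y1 + y2 + z >= 1                # NOT y1 AND NOT y2 -> z

prob.solve()
print(int(value(y1)), int(value(y2)), int(value(z)))  # 1 0 0
```

Here the solver sets \(y_{1}\) to true despite its negative score: paying \(1.0\) to satisfy \(C\) is cheaper than paying the penalty \(\rho_{C}=5\).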
## 6 Worked Examples
In this section, we will work through two example NLP tasks that use the framework that we have seen thus far. First, we will look at the problem of predicting sequences, where efficient inference algorithms exist. Then, we will see the task of predicting relationships between events in text, where we need the full ILP framework even for a simple setting.
### Sequence Labeling
Our first example is the problem of sequence labeling. Using the tools we have seen so far, we will write down prediction in a first order sequence model as an integer linear program.
**Example 10** (Sequence Labeling).: Suppose we have a collection of \(n\) categorical decisions, each of which can take one of three values \(\mathcal{L}=\{\mathtt{a},\mathtt{b},\mathtt{c}\}\). We can think of these \(n\) decisions as slots that are waiting to be assigned one of the three labels. Each slot has an intrinsic preference for one of the three labels. Additionally, the label at each slot is influenced by the label of the previous slot. The goal of inference is to find a sequence of labels that best accommodates both the intrinsic preferences of each slot and the influence of the neighbors.
Let us formalize this problem. There are two kinds of scoring functions. The decision that the \(i^{th}\) slot is filled with a label \(\mathsf{L}\) is associated with an _emission_ score \(c_{i:\mathsf{L}}\) that indicates the intrinsic preference of the slot getting the label. Additionally, pairs of adjacent decisions in the sequence are scored using _transition scores_. That is, the outcome that the \(i^{th}\) label is \(\mathsf{L}_{1}\) and the \((i+1)^{th}\) label is \(\mathsf{L}_{2}\) is jointly scored using \(c_{\mathsf{L}_{1},\mathsf{L}_{2}}\). (Notice that the transition score is independent of \(i\) in this formulation.) Now, our goal is to find a label assignment to all \(n\) slots that achieves the maximum total score.
Figure 4 gives the usual pictorial representation of this predictive problem. A first-order sequence labeling problem of this form is ubiquitous across NLP for tasks such as part-of-speech tagging, text chunking and various information extraction problems. There are different ways to
frame this problem as an ILP. We will employ one that best illustrates the use of the techniques we have developed so far.
First, let us start with the decision variables. There are two kinds of decisions--emissions and transitions--that contribute to the total score. Let \(y_{i:\mathsf{L}}\), scored by \(c_{i:\mathsf{L}}\), denote the decision that the \(i^{th}\) label is \(\mathsf{L}\). Let \(y_{i:\mathsf{L}_{1},\mathsf{L}_{2}}\) denote the decision that the \(i^{th}\) label is \(\mathsf{L}_{1}\) and the next one is \(\mathsf{L}_{2}\). This transition is scored by \(c_{\mathsf{L}_{1},\mathsf{L}_{2}}\). These variables and their associated scores give us the following objective function for the inference:
\[\max_{\mathbf{y}}\sum_{i=1}^{n}\sum_{\mathsf{L}\in\mathcal{L}}c_{i:\mathsf{L}} \cdot y_{i:\mathsf{L}}+\sum_{i=1}^{n-1}\sum_{\mathsf{L}_{1},\mathsf{L}_{2}\in \mathcal{L}}c_{\mathsf{L}_{1},\mathsf{L}_{2}}\cdot y_{i:\mathsf{L}_{1},\mathsf{ L}_{2}}. \tag{9}\]
Note that the objective simply accumulates scores from _every_ possible decision that can be made during inference. For the sake of simplicity, we are ignoring initial states in this discussion, but they can be easily folded into the objective.
Now that the inference variables are defined, we need to constrain them. We have two kinds of constraints:
1. Each slot can take exactly one label in \(\mathcal{L}=\{\mathsf{a},\mathsf{b},\mathsf{c}\}\). Once again, we instantiate the Multiclass Classification as an ILP construction (§3.2) to get \[\forall i\in\{1,2,\cdots,n\};\sum_{L\in\mathcal{L}}y_{i:\mathsf{L}}=1.\] (10) These equations give us \(n\) linear constraints in all.
2. The transition decisions and the emission decisions should agree with each other. Written down in logic, this condition can be stated as: \[\forall\ i\in\{1,2,\cdots,n-1\};\quad\forall\ \mathsf{L}_{1},\mathsf{L}_{2}\in\mathcal{L};\quad y_{i:\mathsf{L}_{1},\mathsf{L}_{2}}\leftrightarrow y_{i:\mathsf{L}_{1}}\wedge y_{i+1:\mathsf{L}_{2}}\] Together, these \((n-1)|\mathcal{L}|^{2}\) constraints ensure that the output is a valid sequence. Since each of them is a conjunctive biconditional form (§4.2), we get the following linear inequalities representing the constraints: \[\forall i,\mathsf{L}_{1},\mathsf{L}_{2};\quad-2y_{i:\mathsf{L}_{1},\mathsf{L}_{2}}+y_{i:\mathsf{L}_{1}}+y_{i+1:\mathsf{L}_{2}}\geq 0\] (11) \[y_{i:\mathsf{L}_{1},\mathsf{L}_{2}}-y_{i:\mathsf{L}_{1}}-y_{i+1:\mathsf{L}_{2}}\geq-1\] (12) In all, we get \(2(n-1)|\mathcal{L}|^{2}\) linear inequalities to represent these consistency constraints.
Figure 4: An example factor graph for a sequence model. This figure illustrates the case of five decisions in a sequence. Circles denote random variables whose assignment we seek and the squares represent factors or scoring functions as described in the text.
The objective (9) and the constraints (10), (11) and (12) together form the integer linear program for sequence labeling.
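A compact sketch of this program in PuLP follows; the sequence length, label set, and the randomly drawn emission and transition scores are all illustrative.

```python
# Sequence labeling as an ILP (Example 10): objective (9) with constraints
# (10)-(12). The length, labels, and scores are illustrative.
import random
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

random.seed(0)
n, labels = 3, ["a", "b", "c"]
emit = {(i, l): random.uniform(-1, 1) for i in range(n) for l in labels}
trans = {(l1, l2): random.uniform(-1, 1) for l1 in labels for l2 in labels}

prob = LpProblem("sequence_labeling", LpMaximize)
y = {(i, l): LpVariable(f"y_{i}_{l}", cat=LpBinary)
     for i in range(n) for l in labels}
t = {(i, l1, l2): LpVariable(f"t_{i}_{l1}_{l2}", cat=LpBinary)
     for i in range(n - 1) for l1 in labels for l2 in labels}

prob += (lpSum(emit[i, l] * y[i, l] for (i, l) in y)          # objective (9)
         + lpSum(trans[l1, l2] * t[i, l1, l2] for (i, l1, l2) in t))

for i in range(n):                                            # constraint (10)
    prob += lpSum(y[i, l] for l in labels) == 1
for (i, l1, l2) in t:                                         # constraints (11), (12)
    prob += -2 * t[i, l1, l2] + y[i, l1] + y[i + 1, l2] >= 0
    prob += t[i, l1, l2] - y[i, l1] - y[i + 1, l2] >= -1

prob.solve()
print([l for i in range(n) for l in labels if value(y[i, l]) == 1])
```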
It is important to note once again that here, we are only using integer linear programs as a declarative language to state inference problems, not necessarily for solving them. Specifically, for the sequence labeling problem framed as a first-order Markov model, the Viterbi algorithm offers a computationally efficient solution to the inference problem. However, we may wish to enforce constraints that render the Viterbi algorithm unusable.
The strength of the ILP formulation comes from the flexibility it gives us. For example, consider the well-studied problem of part-of-speech tagging. Suppose we wanted to consider only sequences where there is at least one verb in the final output. It is easy to state this using the following constraint:
\[\sum_{i=1}^{n}y_{i:\texttt{verb}}\geq 1. \tag{13}\]
With this constraint, we can no longer use the vanilla Viterbi algorithm for inference. But, by separating the declaration of the problem from the computational strategies for solving them, we can at least write down the problem formally, perhaps allowing us to use a different algorithm, say Lagrangian relaxation (Everett III, 1963; Lemarechal, 2001), or a call to a black box ILP solver for solving the new inference problem.
### Recognizing Event-Event Relations
Our second example involves identifying relationships between events in text. While the example below is not grounded directly in any specific instantiation of the task, it represents a simplified version of the inference problem addressed by Berant et al. (2014); Ning et al. (2018a,b); Wang et al. (2020).
**Example 11** (Event-Event Relations).: Suppose we have a collection of events denoted by \(E=\{e_{1},e_{2},\cdots,e_{n}\}\) that are attested in some text. Our goal is to identify causal relationships between these events. That is, for any pair of events \(e_{i}\) and \(e_{j}\), we seek a directed edge that can be labeled with one of a set of labels \(R=\{\textsc{Cause},\textsc{Prevent},\textsc{None}\}\) respectively indicating that the event \(e_{i}\) causes, prevents or is unrelated to event \(e_{j}\).
For every pair of events \(e_{i}\) and \(e_{j}\), we will introduce decision variables \(y_{ij:r}\) for each relation \(r\in R\) denoting that the edge \((i,j)\) is labeled with the relation \(r\). Each decision may be assigned a score \(c_{ij:r}\) by a learned scoring function. Thus, the goal of inference is to find a score maximizing set of assignments to these variables. This gives us the following objective:
\[\sum_{e_{i},e_{j}\in E}\sum_{r\in R}c_{ij:r}\cdot y_{ij:r}. \tag{14}\]
Suppose we have three sets of constraints that restrict the set of possible assignments to the inference variables. These constraints are a subset of the constraints used to describe biological processes by Berant et al. (2014).
1. Each edge should be assigned exactly one label in \(R\). This is the Multiclass Classification as an ILP construction, giving us \[\forall e_{i},e_{j}\in E,\sum_{r\in R}y_{ij:r}=1.\] (15)
2. If an event \(e_{i}\) causes or prevents \(e_{j}\), then \(e_{j}\) can neither cause nor prevent \(e_{i}\). In other words, if a Cause or a Prevent relation is selected for the \((i,j)\) edge, then the None relation should be chosen for the \((j,i)\) edge. We can write this as a logical expression: \[\forall e_{i},e_{j}\in E,y_{ij:\textsc{Cause}}\lor y_{ij:\textsc{Prevent}}\to y_{ji:\textsc{None}}.\] This is an example of a disjunctive implication (§4.2), which we can write using linear inequalities as: \[\forall e_{i},e_{j}\in E,-y_{ij:\textsc{Cause}}-y_{ij:\textsc{Prevent}}+2y_{ji:\textsc{None}}\geq 0.\] (16)
3. The events should form a connected component using the non-None edges. This constraint invokes the graph connectivity construction from §5.2. To instantiate the construction, let us introduce auxiliary Boolean variables \(z_{ij}\) that indicate that the events \(e_{i}\) and \(e_{j}\) are connected with an edge that is not labeled None in at least one direction, i.e., the edge from \(e_{i}\) to \(e_{j}\) or the one in the other direction has a non-None label. As before, let \(\phi_{ij}\) denote the non-negative real valued flow variables along a directed edge \((i,j)\). Following §5.2, we will require that the \(z_{ij}\)'s form a spanning tree. First, the auxiliary variables \(z_{ij}\) should correspond to events \(e_{i}\) and \(e_{j}\) that are connected by a non-None edge in either direction. That is, \[\forall e_{i},e_{j}\in E\text{ where }i<j,\ z_{ij}\rightarrow(\exists\ r\neq\textsc{None},\text{ s.t. }y_{ij:r}\lor y_{ji:r})\,,\] (17) The existential form on the right hand side of the implication can be written as a disjunction, thus giving us a disjunctive implication. For brevity, we will not expand these Boolean expressions into linear inequalities. Second, an arbitrarily chosen event \(e_{1}\) sends out \(n-1\) units of flow, and every other event consumes one unit of flow. \[\sum_{j}\phi_{1j}-\sum_{j}\phi_{j1}=n-1,\] (18) \[\forall e_{i}\in E\setminus\{e_{1}\},\ \ \ \ \sum_{j}\phi_{ji}-\sum_{j}\phi_{ij}=1.\] (19) Third, the commodity flow should only happen along the edges that are selected by the auxiliary variables. \[\forall e_{i},e_{j}\in E\text{ where }i<j,\ \phi_{ij}\leq(n-1)z_{ij}\ \text{ and }\ \phi_{ji}\leq(n-1)z_{ij}.\] (20) Finally, the auxiliary variables should form a tree. That is, exactly \(n-1\) of them should be selected. \[\sum_{i,j}z_{ij}=n-1.\] (21)
We can write the final inference problem as the problem of maximizing the objective (14) with respect to the inference variables \(\mathbf{y}\), the auxiliary variables \(z_{ij}\) and the flow variables \(\phi_{ij}\) subject to the constraints listed in Equations (15) to (21). Of course, the decision variables \(\mathbf{y}\) and the auxiliary variables \(z_{ij}\) are 0-1 variables, while the flow variables are non-negative real valued ones.
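As a partial sketch, the fragment below encodes the objective (14) and constraints (15) and (16) for a single pair of events in PuLP; the scores are hypothetical and the connectivity constraints (17)-(21) are omitted for brevity.

```python
# Event-event relations (Example 11), restricted to one pair of events:
# unique-label constraints (15) and antisymmetry constraints (16).
# Scores are hypothetical; connectivity constraints (17)-(21) are omitted.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

R = ["Cause", "Prevent", "None"]
c = {("ij", r): s for r, s in zip(R, [0.9, 0.3, 0.1])}        # scores for edge (i, j)
c.update({("ji", r): s for r, s in zip(R, [0.8, 0.2, 0.1])})  # and for edge (j, i)

prob = LpProblem("event_relations", LpMaximize)
y = {(e, r): LpVariable(f"y_{e}_{r}", cat=LpBinary) for (e, r) in c}
prob += lpSum(c[k] * y[k] for k in c)                    # objective (14)

for e in ("ij", "ji"):                                   # constraint (15)
    prob += lpSum(y[e, r] for r in R) == 1
# Constraint (16): Cause/Prevent in one direction forces None in the other.
prob += -y["ij", "Cause"] - y["ij", "Prevent"] + 2 * y["ji", "None"] >= 0
prob += -y["ji", "Cause"] - y["ji", "Prevent"] + 2 * y["ij", "None"] >= 0

prob.solve()
print(sorted(k for k in y if value(y[k]) == 1))          # [('ij', 'Cause'), ('ji', 'None')]
```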
## 7 Final Words
We have seen a collection of recipes that can help to encode inference problems as instances of integer linear programs. Each recipe focuses on converting a specific kind of predicate into one or more linear inequalities that constitute the constraints for the discrete optimization problem. The conversion of predicates to linear inequalities is deterministic and, in fact, can be seen as a compilation step, where the user merely specifies constraints in first-order logic and an inference compiler produces efficient ILP formulations. Some programs that allow declarative specification of inference include Learning Based Java (Rizzolo, 2012), Saul (Kordjamshidi et al., 2016) and DRaiL (Pacheco and Goldwasser, 2021).
It should be clear from this tutorial-style survey that there may be multiple ways to encode the same inference problem as integer programs. The best encoding may depend on how the integer program is solved. Current solvers (circa 2022) seem to favor integer programs with fewer constraints that are dense in terms of the number of variables each one involves. To this end, we saw two strategies: we either collapsed multiple logical constraints that would lead to sparse inequalities into fewer dense ones, or formulated the problem in terms of known graph problems.
While it is easy to write down inference problems, it is important to keep the computational properties of the inference problem in mind. The simplicity of design can make it easy to end up with large and intractable inference problems. For example, for the event relations example from §6.2, if we had tried to identify both the events and their relations using a single integer program (by additionally specifying event decision variables), the approach suggested here could lead to ILP instances that are difficult to solve with current solvers.
A survey on using integer programming for modeling inference would be remiss without mentioning techniques for solving the integer programs. The easiest approach is to use an off-the-shelf solver. Currently, the fastest ILP solver is the Gurobi solver;4 other solvers include the CPLEX Optimizer,5 the FICO Xpress-Optimizer,6 lp_solve,7 and GLPK.8 The advantage of using off-the-shelf solvers is that we can focus on the problem at hand. However, using such solvers prevents us from using task-driven specialized strategies for inference, if they exist. Sometimes, even though we can write the inference problem as an ILP, we may be able to design an efficient algorithm for solving it by taking advantage of the structure of the problem. Alternatively, we can relax the problem by simply dropping the \(\{0,1\}\) constraints over the inference variables and instead restricting them to be real valued in the range \([0,1]\). We could also employ more sophisticated relaxation methods such as Lagrangian relaxation (Everett III, 1963; Lemarechal, 2001; Geoffrion, 2010; Chang and Collins, 2011), dual decomposition (Rush and Collins, 2012; Rush et al., 2010; Koo et al., 2010), or the augmented Lagrangian method (Martins et al., 2011a,b; Meshi and Globerson, 2011; Martins et al., 2015).
Footnote 4: http://www.gurobi.com
Footnote 5: https://www.ibm.com/products/ilog-cplex-optimization-studio
Footnote 6: http://www.fico.com/en/products/fico-xpress-optimization-suite
Footnote 7: https://sourceforge.net/projects/lpsolve
Footnote 8: https://www.gnu.org/software/glpk/
The ability to write down prediction problems in a declarative fashion (using predicate logic or equivalently as ILPs) has several advantages. First, we can focus on the definition of the task we want to solve rather than the algorithmic details of how to solve it. Second, because we have a unifying language for reasoning about disparate kinds of tasks, we can start reasoning about properties of inference in a task-independent fashion. For example, using such an abstraction, we can amortize inference costs over the lifetime of the predictor (Srikumar et al., 2012; Kundu et al., 2013; Chang et al., 2015; Pan and Srikumar, 2018).
Finally, recent successes in NLP have used neural models with pre-trained representations such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019) and others. The unification of such neural networks and declarative modeling with logical constraints is an active area of research today (Xu et al., 2018; Li and Srikumar, 2019; Li et al., 2019; Fischer et al., 2019; Nandwani et al., 2019; Li et al., 2020; Wang et al., 2020; Asai and Hajishirzi, 2020; Giunchiglia and Lukasiewicz, 2021; Grespan et al., 2021; Pacheco and Goldwasser, 2021; Ahmed et al., 2022, inter alia). This area is intimately connected with neuro-symbolic modeling, which seeks to connect neural models with symbolic reasoning. We refer the reader to Garcez and Lamb (2020); Kautz (2022); Pacheco et al. (2022) for recent perspectives on the topic. The declarative modeling strategy supported by the kind of inference outlined in this tutorial may help integrate complex symbolic reasoning, which remains difficult for current state-of-the-art models, with expressive neural models.
|
2309.04863 | Design of a Low-Power High-Gain Bio-Medical Operational Amplifier in
65nm Technology using gm/ID Methodology | Operational Amplifiers (Op-Amps) play a crucial role in the field of
biomedical engineering, as they enable signal amplification and processing in
various medical devices. With the increasing demand for portable and low-power
biomedical devices, designing Op-Amps specifically tailored for such
applications is essential. In response to this need, a low-power high-gain
Op-Amp designed for biomedical applications using TSMC 65nm technology has been
proposed. This Op-Amp incorporates a two-stage miller compensated topology,
which is well-known for its superior performance in gain, gain bandwidth
product and power consumption. The proposed Op-Amp contributes to the field of
biomedical engineering by offering a tailored solution that enhances signal
processing capabilities, enables accurate data acquisition, and improves
overall efficiency in healthcare systems. The design methodology and simulation
results presented in this paper provide insights into the performance and
potential impact of the Op-Amp in advancing biomedical devices and systems. | Ayan Biswas, Supriya Dhabal, Palaniandavar Venkateswaran | 2023-09-09T19:02:22Z | http://arxiv.org/abs/2309.04863v1 | Design of a Low-Power High-Gain Bio-Medical Operational Amplifier in 65nm Technology using gm/ID Methodology
###### Abstract
Operational Amplifiers (Op-Amps) play a crucial role in the field of biomedical engineering, as they enable signal amplification and processing in various medical devices. With the increasing demand for portable and low-power biomedical devices, designing Op-Amps specifically tailored for such applications is essential. In response to this need, a low-power high-gain Op-Amp designed for biomedical applications using TSMC 65nm technology has been proposed. This Op-Amp incorporates a two-stage miller compensated topology, which is well-known for its superior performance in gain, gain bandwidth product and power consumption. The proposed Op-Amp contributes to the field of biomedical engineering by offering a tailored solution that enhances signal processing capabilities, enables accurate data acquisition, and improves overall efficiency in healthcare systems. The design methodology and simulation results presented in this paper provide insights into the performance and potential impact of the Op-Amp in advancing biomedical devices and systems.
Operational-Amplifier, Miller Compensation, Biomedical Applications
## I Introduction
The miniaturization of VLSI technologies demands low-voltage and low-power analog circuit designs [1]. However, reduced supply voltage presents challenges for analog circuit designers due to degraded transistor characteristics and limitations of traditional techniques. These constraints are especially critical for bio-measurement devices, such as portable and battery-powered instruments. Bio-potential amplifiers are commonly used to amplify ECG and EEG signals, and require high input impedance, safety protections, low output impedance, minimal distortion, high gain, and a high common mode rejection ratio. Precise measurement and amplification of small and noisy bio-signals therefore pose significant challenges [2].
This work aims to develop an operational amplifier for improved biomedical signal analysis (e.g. EEG and ECG signals). The objective is to design a low-noise, low-power amplifier to enhance measurement accuracy and reliability. By addressing challenges in signal analysis, our innovative design aims to improve monitoring quality and efficiency for medical professionals and patients.
## II Design Methodology
This section outlines a step-by-step process for creating a two-stage operational amplifier (also known as a Miller amplifier), including the dimensions of the MOSFETs (device width (W) and channel length (L)) and the value of the compensation capacitor (\(C_{c}\)) required to meet the desired performance of the amplifier.
### _Choice of Devices_
The chosen devices for the design are the **nch_lvt_mac** and **pch_lvt_mac** from the **tsmcN65** library. The **nch_lvt_mac** is an n-type MOSFET with low threshold voltage and a metal-gate structure, suitable for low-power and high-performance applications. The **pch_lvt_mac** is a complementary p-type MOSFET with similar characteristics, making it suitable for amplifier circuits.

Fig. 1: Design of a common EEG signal monitoring system
### _The gm/ID Methodology_
Our Op-Amp design utilizes the gm/ID methodology, simplifying the process with pre-generated sizing charts (shown in Fig. 4 and Fig. 5) instead of complex equations. This approach achieves desired specifications in a single iteration, capturing MOSFET behavior across various inversion regions. Inspired by Hesham et al. [3], who applied similar techniques to FinFET devices, our methodology enables specification-driven design without relying on compact models.
### _Bias Circuitry for generation of design variable curves_
Fig. 2 depicts the PMOS bias circuitry for generating sizing charts, such as \(V_{ov}\) vs. gm/ID and gm/ID vs. gm/gds. The NMOS bias circuitry is illustrated in Fig. 3.
First, the gm/ID curve for each device is plotted against \(V_{gs}\); the remaining design charts are then generated from the data extracted from this plot.
### _Sizing of pMOS and nMOS Transistors_
As shown in Table I, the variables are swept over the desired ranges and the corresponding data points are collected to generate the sizing charts shown in Fig. 4 and Fig. 5.
### _Design Equations and Methodology_
The schematic of the two-stage operational amplifier is shown in Fig. 6 and the step-by-step design procedure is described below.
_Input Referred Noise Voltage:_ We start the design from noise considerations. At higher operating frequencies, the input-referred noise voltage of the amplifier is given by Eq. 1 [4].
\[S_{n}(f)=2\cdot 4kT\frac{2}{3}\frac{1}{gm_{1,2}}\left[1+\frac{gm_{3,4}}{gm_{1,2}}\right] \tag{1}\]
For a lower noise level, we can assume \(gm_{3,4}\ll gm_{1,2}\), so Eq. 1 reduces to Eq. 2:
\[gm_{1,2}=\frac{16}{3}\frac{kT}{S_{n}(f)} \tag{2}\]
The value of \(gm_{1,2}\) can thus be calculated directly from Eq. 2.
_Miller Capacitance (\(C_{c}\)):_ For a given value of the gain-bandwidth product (GBW), the compensation capacitance \(C_{c}\) can be calculated from Eq. 3 [5].
\[\mathrm{C_{c}}=\frac{gm_{1,2}}{2\pi GBW_{Hz}} \tag{3}\]
The compensation capacitance \(C_{c}\) is an essential parameter for tuning the phase margin of the overall design. A phase margin below 60 degrees can lead to ringing or an unstable signal response at the output.
_Slew Rate:_ Slew rate (as given by Eq. 4 [4]) is defined as the maximum rate at which an operational amplifier can alter its output voltage in response to abrupt changes in the input voltage.
\[SR_{I}=\frac{2I_{D1}}{C_{c}}=\frac{I_{D5}}{C_{c}} \tag{4}\]
From the given slew-rate specification, the currents \(I_{D5}\) and \(I_{D1}\) can be calculated directly.
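As a quick sanity check of this part of the flow, the short script below evaluates Eqs. 2-4 in sequence. It is a minimal sketch: all numeric specification values are assumptions chosen for illustration, not the targets used in this design.

```python
# Back-of-envelope sizing from the specs, following Eqs. 2-4.
# All spec values below are assumed placeholders.
import math

k = 1.380649e-23   # Boltzmann constant [J/K]
T = 300.0          # temperature [K]

Sn = 1e-16         # input-referred noise PSD target [V^2/Hz] (assumed)
GBW = 5e6          # gain-bandwidth product target [Hz] (assumed)
SR = 5e6           # slew-rate target [V/s] (assumed)

gm12 = (16.0 / 3.0) * k * T / Sn        # Eq. 2
Cc = gm12 / (2.0 * math.pi * GBW)       # Eq. 3
ID5 = SR * Cc                           # Eq. 4: SR = I_D5 / Cc
ID1 = ID5 / 2.0                         # Eq. 4: I_D5 = 2 I_D1

print(f"gm1,2 = {gm12 * 1e6:.1f} uS")
print(f"Cc    = {Cc * 1e12:.2f} pF")
print(f"I_D5  = {ID5 * 1e6:.1f} uA, I_D1 = {ID1 * 1e6:.1f} uA")
```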
_Gain of 1st Stage:_ Gain is the product of transconductance and output resistance. In the first stage (shown in Fig. 6), the signal flows from the input (M2) to the output (M4), so the transconductance is \(gm_{1,2}\), the overall output resistance is \(\frac{1}{gds_{1,2}+gds_{3,4}}\), and the overall gain of this stage is given by Eq. 5 [5].
\[A_{V1}=\frac{gm_{1,2}}{gds_{1,2}+gds_{3,4}} \tag{5}\]
After selecting a suitable value for the DC gain (\(A_{V1}\)) of the first stage and assuming equal output conductances (\(gds_{1,2}=gds_{3,4}\)), we determine the output conductance \(gds_{1,2}\) using Fig. 4a. Then, the effective width (W) is determined by finding the intersection point between \(gm_{2}/I_{D2}\) and the selected length curve obtained from the previous step on the second sizing chart (Fig. 4c) to determine \(V_{GS1}\), which is used in the calculation of \((W/L)_{3,4}\).
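For illustration, the Eq. 5 budget can be evaluated numerically in one step; the gain target and gm value below are assumed, not taken from this design.

```python
# Sketch: first-stage output-conductance budget from Eq. 5,
# assuming gds_{1,2} = gds_{3,4}. Values are illustrative.
gm12 = 221e-6       # [S], e.g., from the noise step above
Av1_target = 40.0   # first-stage DC gain target (assumed)

gds12 = gm12 / (2.0 * Av1_target)   # Eq. 5: Av1 = gm12 / (2 * gds12)
print(f"gds1,2 = gds3,4 <= {gds12 * 1e6:.2f} uS")
```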
_Gain of 2nd Stage:_ Similarly, the gain of the second stage is given by Eq. 6.
\[A_{V2}=\frac{gm_{6}}{gds_{6}+gds_{7}} \tag{6}\]
We design the active load for the first stage (M3, M4) using the specified values of \(gds_{3,4}\) and \(I_{D3,4}\). Referring to Fig. 5c, we compute the correct \((gm/ID)_{3,4}\) using \(V_{GS3,4}\) and \(L_{3,4}\). It is crucial to verify this calculated value before proceeding. If the computed current efficiency exceeds our assumption, indicating a lower gain, we need to reassess \((gm/ID)_{3,4}\) and recalculate \(L_{3,4}\). Conversely, if the computed current efficiency falls below the assumed value, indicating a higher gain, our assumption of \((gm/ID)_{3,4}\) is appropriate. Finally, using the second sizing chart, Fig. 5b, we determine the effective width (\(W_{3,4}\)).
\begin{table}
\begin{tabular}{|c|l|l|l|} \hline
**MOS Type** & **Parameter** & **Sweep Range** & **Points Taken** \\ \hline
\multirow{2}{*}{pMOS} & \(V_{sg}\) & 0.1 V - 0.9 V & 10 \\ \cline{2-4}
 & L & 65 nm - 180 nm & 10 \\ \hline
\multirow{2}{*}{nMOS} & \(V_{gs}\) & 0.1 V - 0.9 V & 10 \\ \cline{2-4}
 & L & 65 nm - 180 nm & 10 \\ \hline
\end{tabular}
\end{table} TABLE I: Sweep of Variables for Sizing Chart Generation
#### Common Mode Rejection Ratio (CMRR)
It is a parameter that measures an operational amplifier's ability to reject signals common to both input terminals. Mathematically, CMRR is the ratio of the differential-mode gain to the common-mode gain, as given in Eq. 7 [5].
\[CMRR=\frac{A_{vd}}{A_{CM}}=\frac{gm_{1,2}}{gds_{1,2}+gds_{3,4}}.2gm_{3,4}.R_{ss} \tag{7}\]
\[gds_{5}=\frac{1}{R_{ss}} \tag{8}\]
To determine the aspect ratio \((W/L)_{5}\) of M5, we compute the output conductance \(gds_{5}\) using Eq. 8, and assume \(I_{D5}=2I_{D4}\). Employing a similar approach as before, we calculate \((W/L)_{5}\).
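A minimal numeric sketch of Eqs. 7-8 shows how a CMRR target translates into the tail-source requirement; all small-signal values below are assumptions for illustration.

```python
# Sketch: tail-source requirement from a CMRR target (Eqs. 7-8).
# All small-signal values are assumed placeholders.
CMRR_dB = 80.0
CMRR = 10.0 ** (CMRR_dB / 20.0)

gm12, gm34 = 221e-6, 50e-6      # [S] (assumed)
gds12 = gds34 = 2.8e-6          # [S] (assumed)

Rss = CMRR * (gds12 + gds34) / (gm12 * 2.0 * gm34)   # from Eq. 7
gds5 = 1.0 / Rss                                     # Eq. 8
print(f"Rss = {Rss / 1e6:.2f} MOhm, gds5 = {gds5 * 1e9:.0f} nS")
```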
#### Current Ratio of M1 and M6
From the slew-rate limits imposed by the compensation and load capacitances, the current ratio can be derived as per Eq. 9 [4].
\[\frac{I_{D1}}{I_{D6}}\leq\frac{C_{C}}{2(C_{L}+C_{C})} \tag{9}\]
#### Phase Margin
Phase margin measures the amount of additional phase shift that can be introduced into the system before instability occurs. For design purposes, we use the dominant-pole approximation and derive the phase-margin expression in Eq. 10.
\[PM^{\circ}=90^{\circ}-\tan^{-1}\left[\frac{GBW}{p_{2}}\right]-\tan^{-1}\left[\frac{GBW}{z_{1}}\right] \tag{10}\]
We have also defined a **Phase Margin control parameter** (\(\alpha\)), given in Eq. 11. This parameter \(\alpha\) acts as a control knob for the phase margin of the overall system.
\[\alpha=\frac{gm_{1}/I_{D1}}{gm_{6}/I_{D6}} \tag{11}\]
\[PM^{\circ}=90^{\circ}-\tan^{-1}\left[\alpha\frac{I_{D1}}{I_{D6}}\frac{C_{L}}{ C_{c}}\right]-\tan^{-1}\left[\alpha\frac{I_{D1}}{I_{D6}}\right] \tag{12}\]
We choose \(\alpha\) to attain the desired phase margin as specified in Eq. 12. By following a methodology similar to that employed to determine \((W/L)_{1}\) and \((W/L)_{2}\), we can determine the aspect ratio \((W/L)_{6}\) for M6.
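The phase-margin expression of Eq. 12 is straightforward to explore numerically, as in the sketch below; the operating-point values are assumptions for illustration only.

```python
# Sketch: evaluating the phase margin of Eq. 12 at a candidate design point.
# All numeric values are assumed placeholders.
import math

alpha = 0.25          # (gm1/I_D1) / (gm6/I_D6), the control knob (assumed)
ID1_over_ID6 = 0.1    # current ratio (assumed; must satisfy Eq. 9)
CL_over_Cc = 2.0      # load-to-compensation capacitance ratio (assumed)

pm_deg = (90.0
          - math.degrees(math.atan(alpha * ID1_over_ID6 * CL_over_Cc))
          - math.degrees(math.atan(alpha * ID1_over_ID6)))
print(f"Phase margin = {pm_deg:.1f} deg")   # aim for > 60 deg
```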
_Design of 2nd Stage and Current Mirror (M8):_ In a current mirror configuration, such as that formed by M5 and M8, the device width ratio is equal to the current ratio. Therefore, the width of M8 (\(W_{8}\)) can be determined using Eq. 13 [6].
\[W_{8}\approx\frac{2}{3}W_{5}\frac{L_{D8}}{L_{D5}} \tag{13}\]
To support the suggested process, a two-stage operational amplifier (as depicted in Fig. 6) is designed using the \(gm/ID\) methodology and simulated utilizing the 65 nm Low Voltage Threshold (lvt) MOSFET (tsmcN65 technology).
### _Calculated Dimensions of the MOSFETs_
The derived dimensions of the MOSFETs (\(M_{1}\) to \(M_{8}\)) are listed in Table II.
## III Results and Analysis
### _AC Analysis_
The design parameters for the two-stage (Miller) Op-Amp listed in Table II are determined in the first iteration of the proposed design procedure. Table III displays the calculated design parameters for the Op-Amp, and Fig. 7 shows the Bode plot of the designed Op-Amp.
### _Bandwidth, Phase Margin and Power Considerations_
The performance of the designed operational amplifier (Op-Amp) is evaluated in terms of bandwidth, phase margin, and power consumption. Table III lists the obtained values for these parameters.
The proposed design achieves higher gain within a similar power range, surpassing the results reported in the work by Oreggion et al. [8].
|
2308.16828 | Machine learning assisted analysis of visible spectroscopy in
pulsed-power-driven plasmas | We use machine learning models to predict ion density and electron
temperature from visible emission spectra, in a high energy density
pulsed-power-driven aluminum plasma, generated by an exploding wire array.
Radiation transport simulations, which use spectral emissivity and opacity
values generated using the collisional-radiative code PrismSPECT, are used to
determine the spectral intensity generated by the plasma along the
spectrometer's line of sight. The spectra exhibit Al-II and Al-III lines, whose
line ratios and line widths vary with the density and temperature of the
plasma. These calculations provide a 2500-size synthetic dataset of
400-dimensional intensity spectra, which is used to train and compare the
performance of multiple machine learning models on a 3-variable regression
task. The AutoGluon model performs best, with an R2-score of roughly 98% for
density and temperature predictions. Simpler models (random forest, k-nearest
neighbor, and deep neural network) also exhibit high R2-scores (>90%) for
density and temperature predictions. These results demonstrate the potential of
machine learning in providing rapid or real-time analysis of emission
spectroscopy data in pulsed-power-driven plasmas. | Rishabh Datta, Faez Ahmed, Jack D Hare | 2023-08-31T16:02:27Z | http://arxiv.org/abs/2308.16828v1 | # Machine learning assisted analysis of visible spectroscopy in pulsed-power-driven plasmas
###### Abstract
We use machine learning models to predict ion density and electron temperature from visible emission spectra, in a high energy density pulsed-power-driven aluminum plasma, generated by an exploding wire array. Radiation transport simulations, which use spectral emissivity and opacity values generated using the collisional-radiative code PrismSPECT, are used to determine the spectral intensity generated by the plasma along the spectrometer's line of sight. The spectra exhibit Al-II and Al-III lines, whose line ratios and line widths vary with the density and temperature of the plasma. These calculations provide a 2500-size synthetic dataset of 400-dimensional intensity spectra, which is used to train and compare the performance of multiple machine learning models on a 3-variable regression task. The AutoGluon model performs best, with an R2-score of roughly \(98\%\) for density and temperature predictions. Simpler models (random forest, k-nearest neighbor, and deep neural network) also exhibit high R2-scores (\(>90\%\)) for density and temperature predictions. These results demonstrate the potential of machine learning in providing rapid or real-time analysis of emission spectroscopy data in pulsed-power-driven plasmas.
## I Introduction
Spectroscopy is a powerful technique for inferring plasma parameters from emitted electromagnetic radiation. For instance, line widths and line ratios can be used to determine electron density and temperature [1, 2, 3], velocity can be determined from the Doppler shift of spectral lines [1, 4], and magnetic field strength can be inferred from the Zeeman splitting of line radiation [5, 6]. The wide applicability of spectroscopy makes it an attractive tool for implementation in a variety of laboratory plasmas [1, 4, 7, 8, 9].
In emission spectroscopy, a typical intensity spectrum can contain several peaks (called emission lines) overlaid on a continuum [1]. The lines correspond to bound-bound electron transitions in the ions of the plasma, while the continuum emission results from free-free (Bremsstrahlung emission) and free-bound electron transitions (recombination radiation) [1, 4]. Line radiation generated by the plasma arises due to either collisional or radiative processes [1]. Collisional processes, such as electron impact excitation/de-excitation and three-body recombination, change the energy levels of bound electrons via collisions with other electrons [1, 4]. Similarly, radiative processes, such as photoexcitation/de-excitation, induce energy transitions due to the interaction of bound electrons with photons [1, 4]. Collisional-radiative models balance the rates of excitation (and ionization) against that of de-excitation (and deionization), to determine the spectral emissivity and opacity of radiation emitted from the plasma [2].
The typical approach to determining the ion density \(n_{i}\) from emission spectra is to identify lines dominated by Stark (collisional) broadening, and then to compare the line widths with tabulated data [1, 9, 10], or with the predictions of CR codes, such as PrismSPECT [11, 12]. Similarly, for the characterization of electron temperature \(T_{e}\), we typically compare the intensity ratios of two or more lines (typically, inter-stage lines for which density changes have a small effect) with the predictions of CR models [1, 3, 10].
When the plasma is not optically thin, radiation transport, which describes how the energy distribution of radiation changes as it propagates through an absorbing, emitting, and/or scattering medium, must be adequately modeled for accurate interpretation of emission spectra. The optical thickness of a material to radiation of frequency \(\omega\) is characterized by \(\tau\equiv\int\alpha(\omega,s)ds\), which is the line-integral of the spectral opacity \(\alpha(\omega)\) along the path \(s\)[4, 13]. When \(\tau\ll 1\), the plasma is optically thin, and the output spectrum is simply the line-integrated emissivity \(\epsilon(\omega)\) of the plasma along the path \(s\). Similarly, for \(\tau\gg 1\), the plasma is optically thick, and the output spectrum (for a plasma in local thermodynamic equilibrium) is Planckian [4, 13]. In plasmas that are not optically thin, the radiation spectrum recorded by the spectrometer is significantly altered by radiation transport. Furthermore, if the plasma exhibits spatial inhomogeneity along the line-of-sight (LOS), the resulting spectrum may be dominated by strongly-emitting or absorbing regions.
In high-energy-density pulsed-power-driven systems, the condition for optical thinness may not be satisfied [14, 15]. In this paper, our focus is to diagnose pulsed-power-driven plasmas for laboratory astrophysics applications, which typically exhibit ion densities of \(n_{i}\approx 1\times 10^{17}-1\times 10^{19}\,\mathrm{cm^{-3}}\) and electron temperatures of \(T_{e}\approx 1-50\) eV, respectively [16, 17, 18, 14]. Pulsed-power devices generate plasma flows by driving large currents (1-30 MA) through thin \(\sim 10-100\,\mu\mathrm{m}\) diameter wires. These plasmas are not optically thin to visible radiation, so radiation transport modeling becomes important for spectral analysis.
A key drawback of the aforementioned emission spectroscopy analysis approach is that it requires significant CR and radiation transport modeling for the analysis of a given spectrum. The use of machine learning (ML) models can reduce the computational time required for spectral analysis, especially for large batches of spectral data, and provide rapid real-time results during experimentation. Spectroscopy has previously been combined with supervised ML techniques, primarily in lower-density plasmas. Visible emission spectroscopy combined with regression methods and neural networks has been used to predict density and temperature in
low-temperature low-density (\(n_{e}\sim 1\times 10^{10}\mathrm{cm}^{-3}\), \(T_{e}\sim 1\) eV) laboratory plasmas [19]. Similarly, neural networks have been used to predict electron energy distribution in low-temperature non-thermal plasmas [20], and classifiers have been used for trace element and impurity detection in RF-generated plasmas [21]. Neural network regressors and classifiers have also been shown to accurately predict electron temperature and divertor detachment from UV/XUV spectroscopy measurements in magnetic confinement fusion devices [22, 23, 24]. In the examples above, the labeled dataset for training is generated by simultaneous spectroscopy and independent density and temperature measurements with other diagnostics, like Thompson scattering or Langmuir probes [22, 23, 24]. This eliminates the need for complicated theoretical or computational modeling. However, simultaneous independent measurements using secondary diagnostics are not always possible; for example, in experiments with poor diagnostic access or diagnostic unavailability. Moreover, alternative diagnostics such as Langmuir probes also perturb the plasma. Lastly, in experiments with low repetition rates, such as in pulsed-power plasmas, it is challenging to generate a purely experimental dataset for the data-hungry supervised ML task. In such situations, we must therefore rely on synthetic data for training.
In this paper, we use a machine learning approach to predict electron density and temperature profiles along the spectrometer line of sight (LOS) from visible emission spectroscopy data in a pulsed-power-driven aluminum plasma. In contrast to previous work [19, 20, 23, 24], which focuses on lower-density and/or optically thin plasmas, here we aim to characterize high-density non-optically thin spatially-inhomogeneous plasmas, characteristic of pulsed-power-driven plasmas generated using exploding wire arrays [16, 17]. We approach the problem in two parallel ways. In the first approach, we frame the prediction problem as a single-objective optimization problem, where we minimize the deviation between a simulated spectrum and a target spectrum. We generate the simulated spectra using CR calculations performed using PrismSPECT, which are then used to solve radiation transport in a spatially-inhomogeneous plasma. In the second approach, we solve a multi-variable regression problem, where we predict ion density \(n_{i}(s)\) and electron temperature \(T_{e}(s)\) as a function of position \(s\) along the spectrometer LOS from a given spectrum. We train multiple supervised ML models -- linear regressor, k-nearest neighbor, decision trees, random forest, deep and convolution neural networks, and AutoGluon -- using synthetic data, generated from radiation transport simulations. The AutoGluon model performs best, with an R2-score of roughly \(98\%\) for density and temperature predictions, and significantly reduces the computation time over optimization-based curve fitting methods. Our results demonstrate the potential of machine learning methods in providing rapid or real-time analysis of emission spectroscopy data in pulsed-power-driven plasmas.
## II Dataset generation
### _Radiation Transport Modeling_
We use PrismSPECT [12] to compute emissivity \(\epsilon(\omega)\) and opacity \(\alpha(\omega)\) values for an aluminum plasma, in the visible range of the electromagnetic spectrum (\(400\,\mathrm{nm}<\lambda<700\,\mathrm{nm}\)). We use a steady-state non-local-thermodynamic-equilibrium (non-LTE) model with Maxwellian free electrons. We run 10,000 PrismSPECT simulations, for electron temperatures linearly distributed in the range \(T_{e}\subseteq[0.5,25]\) eV, and ion density logarithmically distributed between \(n_{i}\subseteq[1\times 10^{16},1\times 10^{19}]\,\mathrm{cm}^{-3}\).
Our in-house radiation transport solver computes the output intensity spectrum \(I_{\omega}(s)\) given spatially-varying emissivity \(\epsilon_{\omega}(s)\) and opacity \(\alpha_{\omega}(s)\) values, by solving the steady-state radiation transport equation along the 1D path \(s\)[13]:
\[\partial_{s}I_{\omega}(s)=\epsilon_{\omega}(s)-\alpha_{\omega}(s)I_{\omega}(s) \tag{1}\]
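A minimal sketch of such a solver is shown below. It steps Equation 1 cell by cell using the exact solution for a locally homogeneous slab; the array shapes, step size, and boundary intensity are assumptions for illustration.

```python
# Sketch: 1D radiation transport along the LOS, dI/ds = eps - alpha * I.
# eps, alpha: arrays of shape (n_steps, n_freq) built from the
# PrismSPECT-derived emissivity/opacity at each position (assumed inputs).
import numpy as np

def transport(eps, alpha, ds, I0=0.0):
    I = np.full(eps.shape[1], float(I0))
    for e, a in zip(eps, alpha):
        att = np.exp(-a * ds)
        # source function S = eps / alpha where alpha > 0
        S = np.divide(e, a, out=np.zeros_like(e), where=a > 0)
        # exact update across one homogeneous cell; pure emission if a == 0
        I = np.where(a > 0, I * att + S * (1.0 - att), I + e * ds)
    return I
```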
Emissivity and opacity values vary along the LOS due to spatial variations in density and temperature. PrismSPECT computes \(\epsilon_{\omega}\) and \(\alpha_{\omega}\) for spatially-homogeneous plasmas. To construct the spatially-varying emissivity and opacity, we first assume some m-dimensional density and temperature distributions \(n_{i}(s)\), \(T_{e}(s)\in\mathcal{R}^{m}\) along the LOS. Here, \(m\) is the number of points required to capture the spatial variation along the LOS. We then calculate the emissivity and opacity values at each position \(s\) along the LOS from the values of \(n_{i}(s)\) and \(T_{e}(s)\) at that location. The radiation transport solver then determines the output intensity distribution by solving Equation 1. We use a spectral resolution of \(3.75\times 10^{-3}\) eV for our radiation transport calculations, which results in a \([400\times 1]\) dimensional intensity spectrum for each data point.

Fig. 1: Comparison of the output intensity calculated by the radiation transport solver with that from a planar plasma simulation in PrismSPECT (\(n_{i}=5\times 10^{17}\,\mathrm{cm}^{-3}\), \(T_{e}=2.5\) eV, and \(L=10\,\mathrm{mm}\)), and with a zero opacity \(\tau=0\) case.

Fig. 2: Comparison of emissivity predictions made by a KNN model trained on PrismSPECT simulation results. We compare results for randomly-chosen members of the test set. Emissivity is scaled to [0,1].
In Figure 1, we compare the output intensity generated by the radiation transport solver with that from a planar plasma simulation in PrismSPECT. Here, the ion density and electron temperature of the plasma are \(n_{i}=5\times 10^{17}\,\mathrm{cm}^{-3}\) and \(T_{e}=2.5\) eV respectively, while the size of the plasma is \(10\,\mathrm{mm}\). The planar plasma simulation includes the effect of radiation transport, for the case of a homogenous (i.e. constant density and temperature) slab of a specified size. The output of the radiation transport solver agrees with that from PrismSPECT. Figure 1 also shows the case for which the optical thickness is set to zero. For the \(\tau=0\) case, lines with high opacities are no longer damped by absorption from the plasma, and thus, the line ratios are significantly modified when compared to the case with radiation transport. This illustrates that in our pulsed-power-driven plasma of interest, optical thickness is important, and must be included in the spectroscopy analysis.
The radiation transport solver requires emissivity and opacity calculated at each density and temperature value along the LOS as inputs to generate the intensity spectrum. We interpolate the emissivities and opacities for the intermediate values not simulated with PrismSPECT, using a k-nearest neighbor (KNN) regressor, trained on the output of the 10,000 PrismSPECT simulations. To evaluate the performance of the regressor, we compare the predicted spectra with the previously unseen emissivity spectra in the test set. As observed in Figure 2, where we compare the predicted emissivity with the actual emissivity for randomly-chosen members of the test set, the predictions agree well with the actual spectra. The coefficient of determination (also called the R2-score) is a commonly-used metric to characterize the performance of regression, and is defined as:
\[R^{2}=1-\frac{\sum{(y_{i}-\hat{y}_{i})^{2}}}{\sum{\left({y_{i}-\bar{y}}\right) ^{2}}} \tag{2}\]
Here, \(y_{i}\) is the actual value, \(\hat{y}_{i}\) is the predicted value, and \(\bar{y}\equiv 1/n\sum_{i=0}^{n}y_{i}\) is the mean of the actual values. For the problem above, the KNN regressor exhibits an R2-score of 99.62%, showing that the model accurately reproduces the emissivity and opacity spectra for density and temperature values not included on the simulation grid in PrismSPECT.
### _Density and Temperature in Exploding Wire Arrays_
Exploding wire arrays, which consist of a cylindrical cage of wires around a central cathode, are commonly used sources of pulsed-power-driven plasma for laboratory astrophysics experiments [14, 16, 17]. The magnetic field is oriented in the azimuthal direction inside the wires, and results in a \(\mathbf{j}\times\mathbf{B}\) force that accelerates the ablating plasma radially outwards from the wires. Due to radially-diverging flows, the density decays rapidly with distance from the wires. This can be observed in Figure 3(a), which shows the simulated ion density distribution generated by a \(40\,\mathrm{mm}\) diameter exploding wire array with 150 aluminum wires, driven by a 10 MA current pulse with a \(300\,\mathrm{ns}\) rise time. This simulation was performed using GORGON -- a two-temperature Eulerian resistive magnetohydrodynamic (MHD) code [25].
As discussed before, we require \(m\)-dimensional arrays to fully capture the spatial variation in density and temperature along the LOS. However, we can use simplifying assumptions to make the problem more computationally tractable. If the emission is recorded along a chordal LOS, as shown in Figure 3(a), the density variation can be approximated as Gaussian (see Figure 3(b)), i.e. \(n_{i}(s)=n_{0}\exp[-(s-s_{0})^{2}/2\sigma^{2}]\). Here, \(n_{0}\) is the peak density, \(\sigma\) is the standard deviation of the Gaussian function, and we set the mean \(s_{0}\) to half the total path length. The density distribution shown in Figure 3 is consistent with that measured experimentally with laser imaging interferometry in previous pulsed-power experiments [16]. Our MHD simulations (Figure 3(b)) and previous experimental measurements also show little spatial variation in the temperature, due to the short thermal diffusion time in pulsed-power plasmas [17]. Therefore, we approximate the temperature to be constant along the LOS, i.e. \(T_{e}(s)=T_{0}\). This allows us to reduce our \(2\times m\) dimensional problem to just 3 variables -- \(n_{0}\), \(T_{0}\), and \(\sigma\). Both the density and temperature in Figure 3(b) also exhibit small amplitude modulations, which arise due to oblique shocks resulting from the azimuthal expansion of plasma from the discrete wires [26]. Our radiation transport calculations, however, show that the effect of these modulations on the recorded intensity spectrum is small.

Fig. 3: (a) Simulated ion density at peak current, generated by a \(40\,\mathrm{mm}\) diameter exploding wire array with 150 aluminum wires, driven by a 10 MA current pulse (\(300\,\mathrm{ns}\) rise time). This simulation was performed using GORGON, a two-temperature resistive MHD code. (b) Variation of density and temperature along a chordal line-of-sight (LOS) as shown in (a).
Figure 4 shows a synthetic intensity spectrum, generated by the radiation transport solver, with values \(n_{0}=5\times 10^{17}\,\mathrm{cm}^{-3}\), \(T_{e}=2\) eV, and \(\sigma=5\,\mathrm{mm}\). Here, we normalize the spectrum between \([0,1]\) by dividing by the maximum intensity. The spectrum exhibits Al-II and Al-III lines, which correspond to transitions in singly- (Mg-like) and doubly-ionized (Na-like) aluminum respectively. When the temperature is increased (Figure 4(a)), the relative intensity of the Al-III lines compared to the Al-II lines increases. This is expected because the ionization is higher at a higher temperature, and thus, the relative population of the higher-\(\bar{Z}\) Al-III ions increases relative to the singly-ionized Al-II ions. The Al-II and Al-III lines only appear simultaneously between 1.5-3.5 eV. In Figure 4(a), at 4 eV, the Al-II lines are completely suppressed. When we increase the density (Figure 4(b)), the lines not only become broader (due to Stark broadening) but the line ratios change as well. This is because increasing the density also increases the optical thickness, and the optically-thick lines are damped more strongly. This can be observed in Figure 4(b) when we compare the relative intensity of the Al-II \(466.4\,\mathrm{nm}\) line (which is relatively optically thin) with that of the higher-opacity Al-II \(624.0\,\mathrm{nm}\) line. The more optically-thick Al-II \(624.0\,\mathrm{nm}\) line is strongly damped at higher densities. Finally, changing the value of \(\sigma\) (Figure 4(c)) also changes the line ratios, because the optical thickness increases with the size of the plasma; however, the sensitivity of the spectrum to changes in \(\sigma\) is relatively smaller than that to changes in density and temperature.
To generate our training dataset for the machine learning task, we randomly sample values of \(n_{0}\subseteq[0.5\times 10^{17},1\times 10^{19}]\,\mathrm{cm}^{-3}\), \(\sigma\subseteq[5,40]\) mm, and \(T_{0}\subseteq[0.5,25]\) eV from a uniform distribution. Our radiation transport solver then uses these sampled values to calculate intensity spectra for each \(n_{0}\), \(\sigma\), and \(T_{0}\). We generate a total of 2500 \([400\times 1]\) intensity spectra to use as training data for our 3-variable regression problem. Lastly, we scale and normalize the values of \(n_{0}\), \(\sigma\), and \(T_{0}\) so that they lie within the interval \([0,1]\). We use linear scaling for the temperature and \(\sigma\), and logarithmic scaling for the density. We also scale the intensity output of the radiation transport solver to the range \([0,1]\). This means we only use the shape of the intensity spectrum for our prediction, which obviates the need for absolute intensity calibration.
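The sampling and scaling steps above can be sketched in a few lines. This is a minimal sketch: the random seed is arbitrary, and the uniform (linear) draw for \(n_{0}\) follows the wording above, though a log-uniform draw would be a natural alternative.

```python
# Sketch of the label sampling and [0, 1] scaling described above.
import numpy as np

rng = np.random.default_rng(seed=1)
N = 2500
n0 = rng.uniform(0.5e17, 1e19, N)     # peak ion density [cm^-3]
sigma = rng.uniform(5.0, 40.0, N)     # Gaussian width [mm]
T0 = rng.uniform(0.5, 25.0, N)        # electron temperature [eV]

# logarithmic scaling for density, linear scaling for T0 and sigma
lo, hi = np.log10(0.5e17), np.log10(1e19)
n0_scaled = (np.log10(n0) - lo) / (hi - lo)
T0_scaled = (T0 - 0.5) / (25.0 - 0.5)
sigma_scaled = (sigma - 5.0) / (40.0 - 5.0)

labels = np.column_stack([n0_scaled, T0_scaled, sigma_scaled])  # (N, 3)
# each label row pairs with a [0, 1]-normalized 400-point spectrum
```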
## III Methodology
### _Single-Objective Optimization_
Our goal is to predict density and temperature profiles given a measured intensity spectrum \(I_{\text{target}}\). One way to frame this problem is as a single-objective optimization problem:
\[\begin{split}\min_{\mathbf{x}}:f(\mathbf{x})&= \text{MSD}\left(I_{\text{target}}\ -I(\mathbf{x})\right)\\ \text{where: }\mathbf{x}&=[n_{0},\sigma,T_{0}]\\ \text{s.t.}:0.5\times 10^{17}&\leq n_{0}\,[ \mathrm{cm}^{-3}]\leq 1\times 10^{19}\\ & 5\leq\sigma\,[\mathrm{mm}]\leq 40\\ & 0.5\leq T_{0}\,[\text{eV}]\leq 25\end{split} \tag{3}\]
We minimize the mean squared deviation (MSD) between the target and predicted intensities. The objective function can be represented as:
\[\text{MSD}=\frac{1}{N}\sum_{i}^{N}[I_{pred}(\omega_{i})-I_{target}(\omega_{i} )]^{2} \tag{4}\]
Here, \(I_{\text{pred}}(\omega_{i})\) and \(I_{\text{target}}(\omega_{i})\) are the simulated and target intensities at frequency \(\omega_{i}\), and \(N=400\) is the size of the intensity spectrum. We use the radiation transport solver described in subsection II-A to generate the predicted intensities.
We perform the optimization using a \((\mu+\lambda)\) genetic algorithm (GA) implemented using the pymoo package in Python. The GA optimization algorithm iteratively searches for solutions that minimize the objective function over multiple generations [27]. In each generation, the best-performing solutions are selected and included in the population for the next iteration. Solutions are combined in each iteration in a process called crossover to create offspring solutions. The solutions are also subject to random changes in the values of the variables \(\mathbf{x}\) to increase the diversity of the solutions, in a process called mutation. For the optimization here, we use a randomly generated initial population of 150, with simulated binary crossover and polynomial mutation (probability = 0.5, distribution index \(\eta_{c}=1\)). We terminate the optimization when the MSD of the solutions becomes lower than a specified threshold. We repeat the optimization 50 times with different starting seeds to construct a family of optimal solutions. We exclude solutions from runs in which the GA gets stuck in a local minimum, where the objective function does not converge to a value below the required threshold.

Fig. 4: Normalized spectral intensity simulated by the radiation transport solver. In (a)-(c), the blue curves correspond to output spectra generated for a Gaussian density variation (\(n_{0}=5\times 10^{17}\,\mathrm{cm}^{-3}\), \(\sigma=5\,\mathrm{mm}\)), and constant temperature (\(T_{e}=2\) eV) along the spectroscopy LOS. (a) Change in the intensity spectrum with increasing temperature. (b) Change in the intensity spectrum with increasing density \(n_{0}\). (c) Change in the intensity spectrum with increasing \(\sigma\).
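A minimal sketch of this optimization setup using pymoo is shown below (import paths as in pymoo 0.6, whose GA defaults to simulated binary crossover and polynomial mutation for real-valued variables). The radiation transport call `rt_solver`, the choice to search in log-density, and the generation budget are illustrative assumptions.

```python
# Sketch of the spectrum-fitting problem in pymoo.
# `rt_solver` stands in for the radiation transport solver of Sec. II-A.
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.soo.nonconvex.ga import GA
from pymoo.optimize import minimize

class SpectrumFit(ElementwiseProblem):
    def __init__(self, I_target, rt_solver):
        # x = [log10(n0 / cm^-3), sigma / mm, T0 / eV]
        super().__init__(n_var=3, n_obj=1,
                         xl=np.array([np.log10(0.5e17), 5.0, 0.5]),
                         xu=np.array([np.log10(1.0e19), 40.0, 25.0]))
        self.I_target = I_target
        self.rt_solver = rt_solver  # callable: (n0, sigma, T0) -> spectrum

    def _evaluate(self, x, out, *args, **kwargs):
        I_pred = self.rt_solver(10.0 ** x[0], x[1], x[2])
        out["F"] = np.mean((I_pred - self.I_target) ** 2)  # MSD, Eq. 4

# res = minimize(SpectrumFit(I_target, rt_solver), GA(pop_size=150),
#                ("n_gen", 200), seed=1, verbose=False)
```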
### _Multi-Variable Regression_
An alternative approach is to formulate the problem as a multi-variable regression problem. Given an input of an unseen intensity spectrum \(I_{\text{target}}\), we predict the corresponding (normalized) values of \(n_{0}^{*}\), \(T_{0}^{*}\), and \(\sigma^{*}\) using machine learning-based regression models. Here, we compare the performance of multiple regressors for our three-variable regression task -- linear regression (LR), k-nearest neighbor regressor (KNN), decision trees (DT), random forest (RF), deep neural network (DNN), and a 1D convolution neural network (1D-CNN). The choice of regression algorithm often represents a trade-off between model precision and interpretability. Simpler models such as linear regression and KNN models are relatively easier to understand and interpret, whereas deep neural networks and AutoML, which often provide high performance, require more training time, and are challenging to interpret [28].
The linear regression, KNN, decision tree, Gaussian process regressor, and random forest models are implemented using the scikit-learn [29] package in Python. The KNN algorithm predicts values based on the distance from the \(k\) nearest data points in the training set [28]. Here, we use \(k=8\) and Euclidean distance for our KNN regressor. Decision trees follow a flowchart-like structure, and make predictions by asking questions at each level [28]. We use a DT regressor with a depth of 5 and a minimum sample split of 5. Random forests are ensembles of many decision trees [28]. Our random forest regressor uses 140 estimators; the minimum samples required for a split is 4, and the minimum samples per leaf is 5.
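For reference, these baselines can be instantiated with the hyperparameters quoted above in a few lines; the array names and the commented evaluation loop are illustrative assumptions.

```python
# Sketch of the scikit-learn baselines with the hyperparameters given above.
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

models = {
    "LR": LinearRegression(),
    "KNN": KNeighborsRegressor(n_neighbors=8, metric="euclidean"),
    "DT": DecisionTreeRegressor(max_depth=5, min_samples_split=5),
    "RF": RandomForestRegressor(n_estimators=140, min_samples_split=4,
                                min_samples_leaf=5),
}

# X_train: (2000, 400) normalized spectra; Y_train: (2000, 3) scaled labels
# for name, model in models.items():
#     model.fit(X_train, Y_train)
#     print(name, model.score(X_test, Y_test))  # multi-output R2-score
```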
We use the TensorFlow [30] package to construct a 3-layer deep neural network (DNN). Neural networks consist of multiple 'hidden' layers, consisting of several neurons, sandwiched between an input and an output layer [31]. In our DNN architecture, each layer is a fully-connected dense layer with 100 neurons (i.e. each neuron is connected to every other neuron in the previous and next layers), with ReLu activation functions, and padded with a batch normalization layer and a dropout layer (\(p=0.3\)) to help prevent overfitting. The final layer consists of a 3-dimensional dense layer. Similarly, the 1D-CNN consists of two 1D convolution layers (kernel size = 3) with filter sizes of 8 and 16 respectively, and leaky ReLU activation layers. The convolution layer performs a convolution operation using the specified kernel on the input from the previous layer [31]. Each convolution layer is padded with a batch normalization layer, and a \(p=0.1\) dropout layer. The convolution layers are followed by a 200-neuron dense fully-connected layer, and a 3-dimensional output layer. Both neural networks use the Adam optimizer with a \(0.5\times 10^{-4}\) learning rate, and a mean squared error (MSE) loss function. The key difference between the DNN and CNN architectures is that the DNN treats the input vector as a 400-dimensional vector of parameters, whereas the CNN treats it as a 1D image, and therefore, has information about the relative spectral location of each element in the input vector.
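A compact Keras sketch of the DNN branch described above follows; the input dimension of 400 comes from the spectra, while any training settings beyond those quoted in the text are assumptions.

```python
# Keras sketch of the 3-layer DNN described above (input dim 400).
import tensorflow as tf
from tensorflow.keras import layers

def build_dnn(input_dim=400):
    model = tf.keras.Sequential()
    model.add(layers.Dense(100, activation="relu", input_shape=(input_dim,)))
    model.add(layers.BatchNormalization())
    model.add(layers.Dropout(0.3))
    for _ in range(2):
        model.add(layers.Dense(100, activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(layers.Dropout(0.3))
    model.add(layers.Dense(3))  # outputs: n0*, T0*, sigma*
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.5e-4),
                  loss="mse")
    return model
```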
Finally, we also implement a tabular AutoGluon model using Python's AutoGluon package [32]. AutoGluon provides an automated approach to machine learning, by automatically comparing and combining the performance of many different models. The performance of AutoGluon has previously been shown to exceed that of more traditional ML models [32].
Each regressor is trained on the \(2500\times 400\) synthetic spectra with a 2000:500 split between the training and test sets, and \(k=3\) stratified k-fold cross-validation for the training set. Members of the test set are not shown to the ML model during the regression task, and are used to evaluate the regression performance after training. The hyperparameters described in this section were determined using hyperparameter optimization implemented using the Python package Optuna [33].
We use two metrics to characterize the performance of the different regression models. These are the coefficient of determination (also called the R2-score), and the mean squared deviation (MSD) from the simulated curve. The R2-score is defined in Equation 2, and is calculated from the predicted and actual values of \(n_{0}^{*},T_{0}^{*}\) and \(\sigma^{*}\). Similarly, the MSD (Equation 4) measures the deviation of the predicted intensity spectrum from the actual spectrum. Here, the predicted spectrum is determined by feeding the predicted values of \(n_{0}^{*}\), \(T_{0}^{*}\) and \(\sigma^{*}\) from the ML model into the radiation transport solver. The R2-score and MSD are calculated for the \(N=500\) test set, which contains spectra not previously seen by our ML models. For good performance, we aim to maximize the R2-scores and minimize the MSD. The R2-score may provide spurious performance metrics in the case of non-unique solutions, whereas the MSD characterizes how close the predicted spectra are to the actual spectra, allowing us to overcome this issue.

Fig. 5: (a) Comparison of target intensity spectra with spectra determined via optimization for the test case \(n_{0}=1\times 10^{18}\,\mathrm{cm}^{-3}\), \(T_{0}=2\,\mathrm{eV}\), \(\sigma=10\,\mathrm{mm}\). The target spectrum is the solid blue curve, while the predicted spectrum and the predicted values are in orange. (b) Robustness of the fitting to noise and stray lines.
## IV Results and Discussion
### _Optimization results_
Using GA-based optimization, we predict values of \(n_{0},\,T_{0}\) and \(\sigma\) for several test cases. Figure 5(a) compares the target intensity spectra with spectra determined via optimization for a randomly-selected test case (\(n_{0}=1\times 10^{18}\,\mathrm{cm}^{-3},\,T_{0}=2\,\mathrm{eV},\,\sigma=10\,\mathrm{mm}\)). The predicted values of \(n_{0},\,T_{0}\) and \(\sigma\) reproduce the target spectrum well. For the test case shown in Figure 5(a), the MSD from the target spectrum is roughly \(6\times 10^{-5}\), and the predicted values are \(n_{0}=1.0\times 10^{18}\,\mathrm{cm}^{-3}\pm 7\%\), \(T_{0}=2.0\,\mathrm{eV}\pm 1\%\), and \(\sigma=9.9\,\mathrm{mm}\pm 20\%\). The range in the value of \(\sigma\) for the family of optimal solutions is comparatively larger than that in the density and temperature. This is consistent with our observation in Figure 4(c), which shows that changes in \(\sigma\) generate linear changes in the optical depth, whereas those in \(n_{0}\) and \(T_{0}\) result in larger non-linear changes in opacity and relative intensities.
In many real situations, the experimental data that we want to fit may be noisier than the synthetic spectrum shown in Figure 5(a). Furthermore, the spectrum may also be contaminated by stray lines, generated by impurities in the plasma, or by radiation emitted from other photoionized surfaces. In order to test the robustness of the prediction, we add noise and stray lines to the synthetic target spectrum (see Figure 5(b)). The predicted solution reproduces the target spectrum well, despite the added noise and stray radiation. In this case, the MSD of the optimum solutions is about \(3\times 10^{-3}\), which, as expected, is larger than that for the smooth case. The predicted solutions are \(n_{0}=1.2\times 10^{18}\,\mathrm{cm}^{-3}\pm 20\%\), \(T_{0}=2.0\,\mathrm{eV}\pm 2\%\), and \(\sigma=9.9\,\mathrm{mm}\pm 30\%\). There is a larger uncertainty in the solutions for the noisy target spectrum when compared to the smooth target spectrum (Figure 5(a)); however, the predicted solutions still include the values used to generate the test case.
Although optimization using GA provides good results, the computational time is high (\(>10\) min per prediction). This is primarily because the GA has to simulate a large number of potential candidates iteratively using our radiation transport model over several generations before convergence is achieved. Although we use GA as an example here, other optimization-based curve-fitting algorithms can also exhibit high computation times. This method can be highly effective with small data sets; however, it can be less attractive in cases where we must analyze large datasets of spectra, or when we require quick or real-time analysis of spectral information. The use of ML-based regression models, discussed in the next section, can be more useful in such situations.
### _Multi-variable regression using machine learning_
The training of ML models can be computationally time-intensive; however, once the training is complete, large datasets can be evaluated rapidly. The training time of models typically depends on the complexity of the model. Table I compares the performance of the ML regressors described in subsection III-B. Here, the R2-score is computed using the predicted and actual values of all three variables, while we compute the two-variable R2-score using only the value of density \(n_{0}^{*}\) and temperature \(T_{0}^{*}\). AutoGluon was the best-performing model, with an R2-score of \(74.20\%\pm 2.1\%\). Since AutoGluon trains and compares the performance of multiple different models simultaneously, the training time was high compared to the other models. The next best-performing models (deep neural network, random forest, and k-nearest neighbor) exhibited R2-scores between \(67-71\%\) and shorter training times. The two-variable R2-scores, calculated for density \(n_{0}^{*}\) and temperature \(T_{0}^{*}\) only, are roughly \(30\%\) higher than the three-variable R2-scores, which shows that the models predict \(n_{0}^{*}\) and \(T_{0}^{*}\) with better accuracy than the third variable \(\sigma^{*}\). There is a larger uncertainty in the prediction of \(\sigma^{*}\), because, as observed in Figure 4(c) and in subsection IV-A, changes in \(\sigma^{*}\) generate linear changes in the optical depth, whereas those in \(n_{0}^{*}\) result in larger non-linear changes.
\begin{table}
\begin{tabular}{l c c c c} \hline
Model & Training Time (s) & R2-Score (\(\%\)) & 2-Var. R2-Score (\(\%\)) & Median MSD [\(\times 10^{-4}\)] \\ \hline
Linear Regression (LR) & 0.3 & \(-41.54\pm 80.76\) & \(64.00\pm 10.2\) & 1.1 \\
Decision Tree (DT) & 0.35 & \(53.22\pm 4.5\) & \(89.9\pm 2.57\) & 5.3 \\
k-Nearest Neighbor (KNN) & 0.02 & \(67.89\pm 2.8\) & \(96.25\pm 1.8\) & 0.4 \\
Random Forest (RF) & 6.41 & \(67.06\pm 1.93\) & \(96.58\pm 0.5\) & 0.4 \\
Deep Neural Net. (DNN) & 240.12 & \(71.54\pm 1.2\) & \(94.52\pm 1.1\) & 4.2 \\
1D Conv. Neural Net. (1D-CNN) & 260.17 & \(65.34\pm 3.2\) & \(91.69\pm 1.4\) & 5.7 \\
**AutoGluon** & **4760.11** & \(\mathbf{74.20\pm 2.1}\) & \(\mathbf{98.98\pm 1.4}\) & \(\mathbf{0.3}\) \\ \hline
\end{tabular}
\end{table} TABLE I: Comparison of the performance of different ML models on the regression task
Fig. 6: Violin plots of the MSD of the predicted intensity spectra from the actual intensity spectra in the test set. The red lines represent the medians of the distribution, the blue rectangle represents the inter-quartile range, and the end-caps represent \(1.5\times\) the inter-quartile range. The blue-shaded regions show the shape of the full distribution.
We also find that in cases where \(\sigma^{*}\) is under-predicted, the density is slightly over-predicted and vice versa, indicating that the model relies on small changes in density rather than on \(\sigma^{*}\) to incorporate the effects of changing optical depth. The majority of models, with the exception of linear regression and decision trees, exhibit two-variable R2-scores \(>90\%\), showing the effectiveness of these models in predicting \(n_{0}^{*}\) and \(T_{0}^{*}\), despite the relatively poorer prediction of \(\sigma^{*}\). AutoGluon, once again, exhibits the highest two-variable R2-score (\(98.98\%\pm 1.4\%\)).
In these models, we only use the shape of the intensity spectrum to make predictions. This can be advantageous as it obviates the need for absolute intensity calibration of the intensity spectra. However, adding information about the absolute intensity can potentially improve the predictions, as the absolute intensity also depends on the temperature, density, and size of the plasma. This would, however, require accurate modeling of intensity attenuation and losses in the optics used for the experiment. When we include the absolute intensity in the training data (that is, we do not normalize the intensity between \([0,1]\)), the models show improved prediction of \(\sigma\), and the R2-scores increase by roughly 3-7%.
Figure 6 compares the MSD of the predicted intensity spectra from the actual intensity spectra in the test set for the different ML models. Here, we use the predictions of \(n_{0},\,T_{0}\,\text{and}\,\,\sigma\) from the ML regressors as inputs into the radiation transport solver to determine the predicted spectra. The red lines represent the medians of the distributions, the blue rectangles represent the interquartile range, and the shaded regions show the shape of the full distribution. As expected, the predictions of the AutoGluon model, which exhibit the highest R2-score, also exhibit the lowest MSD, i.e. the predictions match the original intensity spectra well. The median MSD is relatively small (\(<0.1\%\)) for all the models, however, for models with lower R2-scores, the distribution of the MSD exhibits a larger spread and extends to higher values. Interestingly, although the DNN exhibits R2-scores similar to the RF and KNN models, its MSD distribution is significantly wider. For the DNN, the distribution is also wider than for the 1D-CNN model, which exhibited lower R2-scores. This may indicate that although the predictions of \(n_{0},\,T_{0}\,\text{and}\,\,\sigma\) made by the DNN are close to the actual values, the values are consistently off, resulting in a relatively larger deviation between the predicted intensity calculated from these values and the actual intensity used to predict the values.
To gain further insight into the performance of the models, we plot the distribution of predicted values against the actual values. For the best-performing AutoGluon model (Figure 7), we find that predicted values of \(n_{0}^{*}\) and \(T_{0}^{*}\) from the test set exhibit the least deviation from the diagonal, whereas predicted values of \(\sigma^{*}\) exhibit a relatively larger deviation from the diagonal, as expected. However, AutoGluon is approximately able to capture the relative trend in the values of \(\sigma^{*}\), as seen in the positive slope of the distribution in Figure 7b. This is in contrast to other models, which exhibit larger deviations in the predictions of \(\sigma^{*}\). Figure 7a shows that the predicted values of density deviate from the actual values more significantly at very low (\(n_{0}<1\times 10^{17}\,\text{cm}^{-3}\)) and very high densities (\(n_{0}>5\times 10^{18}\,\text{cm}^{-3}\)). At low density, the effects of optical depth and Stark broadening are smaller, which leads to larger uncertainties in the prediction of the density. Our radiation transport simulations also show that at higher densities, line radiation is strongly damped due to the opacity effects, and continuum emission begins to dominate. This may explain why the predicted density deviates more significantly at these higher densities. For the temperature predictions (Figure 7c), the deviations are relatively small for \(T_{0}<3\) eV, and become more significant at higher temperatures. Below 3 eV, as seen in Figure 4a, the spectrum contains both Al-II and Al-III lines, and the relative intensities of the inter-stage lines are a strong function of temperature. However, at higher temperatures, only the Al-III lines are present in the spectrum, and continuum emission also begins to dominate, which makes it relatively harder to predict \(T_{0}\).
A key challenge with more complicated ML models, such as neural networks and AutoGluon, is the lack of interpretability. To gain insight into features that inform the prediction, we investigate the relative importance of different parts of the intensity spectrum for the prediction of density and temperature. In order to do so, we calculate the sensitivity of the
Fig. 7: Distribution of predicted values of \(n_{0}^{*}\), \(T_{0}^{*}\) and \(\sigma^{*}\) against the actual values for the AutoGluon Model. Here, the values of peak density, temperature, and the spread of the density function \(\sigma\) have been normalized. Blue points compare the values for the training set, while orange points do so for the test set.
density and temperature predictions to perturbations in the spectral intensity value at different wavelengths. Figure 8 shows the sensitivity map computed for the AutoGluon model for a randomly-selected member of the test set. For this intensity spectrum, the actual values of density, temperature, and \(\sigma\) are \(1.4\times 10^{18}\,\mathrm{cm}^{-3}\), \(7.3\) eV, and \(25\,\mathrm{mm}\), and the predicted values are \(1.5\times 10^{18}\,\mathrm{cm}^{-3}\), \(6.9\) eV, and \(22\,\mathrm{mm}\). We approximate the gradients in density and temperature (\(\partial n_{0}^{*}/\partial\epsilon\) and \(\partial T_{0}^{*}/\partial\epsilon\)) by perturbing the input intensity spectrum at a given wavelength by a small value \(\epsilon\), and dividing the change in the predicted value (\(|\delta n_{0}^{*}|\) or \(|\delta T_{0}^{*}|\)) by the perturbation \(\epsilon\). Here, we pick the perturbation \(\epsilon\) from a Gaussian distribution of amplitude 0.1, and the mean gradients are calculated over multiple iterations to determine the final value. Such sensitivity maps are commonly used in image classification problems to identify parts of an image that may contribute to the final classification [34]. Here, we use it to identify features of the spectra that contribute to the final predictions in our regression problem. In Figure 8, as expected, the lines (Al-III 415.2 nm, Al-III 448.1 nm, Al-III 452.5 nm, and Al-III 570.7 nm) contribute significantly to the prediction of density and temperature, whereas parts of the spectrum that correspond to the continuum are less important. This can be observed from the large gradients \(\partial n_{0}^{*}/\partial\epsilon\) and \(\partial T_{0}^{*}/\partial\epsilon\) at the positions of these lines. The Al-III 570.7 nm line appears to be particularly important for the temperature prediction, while the Al-III 448.1 nm and Al-III 452.5 nm lines (which have merged at this higher density) are relatively more important to the density prediction. In addition to the locations of the Al-III lines, smaller peaks in the gradients, particularly in temperature, also appear at parts of the spectrum devoid of lines. The locations of these peaks correspond to Al-II emission lines that appear predominantly at lower temperatures (\(0.5-1.5\) eV), indicating that the absence of these lines contributes, although less significantly, to the temperature prediction.
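A minimal sketch of this finite-difference sensitivity calculation is given below. Here, `predict` is a hypothetical stand-in for the trained regressor's prediction call (returning, e.g., a normalized \(n_{0}^{*}\) or \(T_{0}^{*}\)), and the number of averaging iterations is an assumption.

```python
import numpy as np

def sensitivity_map(predict, I, amp=0.1, n_iter=50, seed=0):
    """Approximate |dy*/d(eps)| at each wavelength: perturb the spectrum I
    at one wavelength with a Gaussian-distributed perturbation of
    amplitude `amp`, divide the change in the prediction by the
    perturbation, and average over `n_iter` iterations."""
    rng = np.random.default_rng(seed)
    y0 = predict(I)
    grad = np.zeros(len(I))
    for _ in range(n_iter):
        for k in range(len(I)):
            eps = 0.0
            while abs(eps) < 1e-3:   # redraw to avoid near-zero divisors
                eps = rng.normal(scale=amp)
            Ip = np.array(I, dtype=float)
            Ip[k] += eps
            grad[k] += abs(predict(Ip) - y0) / abs(eps)
    return grad / n_iter
```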
Another limitation of these models is the reliance on synthetic data, which, in turn, is affected by the uncertainty and assumptions in the theoretical modeling used to generate the synthetic dataset. Benchmarking the results using independent diagnostics can be one way to probe the applicability of these ML models to real experimental data. However, as mentioned earlier, independent measurements are not always possible, and the uncertainty in collisional-radiative (CR) modeling is inherent in the analysis of most spectroscopic data. As discussed in subsection IV-A, the experimental data can also be noisy and include background radiation and contamination by impurities. When we use the trained ML models to make predictions from noisy spectra contaminated with stray radiation, the R2-scores typically fall by \(10-20\%\). For a given experiment, the instrument response, the bit depth of the spectrometer, and attenuation by the optics may also need to be properly included in the synthetic dataset. The analysis of experimental spectra using these methods will be pursued in a future publication.
## V Conclusions
We explore the use of machine learning (ML) methods for rapid spectroscopic analysis of emission spectra in the visible regime. Our goal is to predict density and temperature in a pulsed-power-driven aluminum plasma generated by an exploding wire array. In contrast to previous work, which has typically focused on low-density homogeneous plasmas, we aim to diagnose a high energy density plasma that is not optically thin, which necessitates the use of radiation transport calculations to accurately model the recorded intensity spectrum. These radiation transport calculations use spectral emissivity and opacity values computed using the collisional-radiative code PrismSPECT. Consistent with previous observations in exploding wire arrays, we assume a constant temperature and a Gaussian density variation (peak density \(n_{0}\) and standard deviation \(\sigma\)) along the diagnostic line of sight.
The radiation transport solver is first used to directly solve a single-objective optimization problem, which varies the values of \(n_{0}\), \(T_{0}\), and \(\sigma\) to minimize the mean squared deviation between a generated spectrum and the target spectrum. This approach provides reliable fits to the target spectrum, robust to noise and stray line radiation; however, it can be time-intensive, which limits its usefulness for rapid or real-time analysis of large datasets.
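Schematically, this fitting loop can be realized with a population-based global optimizer, as in the sketch below; `transport_spectrum` is a deliberately crude toy stand-in for the actual radiation transport solver (included only so the sketch runs end-to-end), and the parameter bounds are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

wl = np.linspace(400, 600, 400)                      # wavelength grid [nm]

def transport_spectrum(n0, T0, sig):
    """Toy stand-in for the radiation transport solver; NOT the physics
    model used in the paper, just a placeholder with a line + continuum."""
    width = 0.5 * (1.0 + n0 / 1e18)                  # mock broadening trend
    line = np.exp(-((wl - 452.5) / width) ** 2)
    return line + 0.05 * T0 / 10.0 + 1e-3 * sig      # mock continuum terms

def objective(x, I_target):
    I_model = transport_spectrum(*x)
    return np.mean((I_model / I_model.max() - I_target / I_target.max()) ** 2)

I_target = transport_spectrum(1.4e18, 7.3, 25.0)     # synthetic target
bounds = [(1e17, 5e18),    # n0 [cm^-3]
          (0.5, 10.0),     # T0 [eV]
          (1.0, 50.0)]     # sigma [mm] -- illustrative ranges
res = differential_evolution(objective, bounds, args=(I_target,), seed=0)
print(res.x)
```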
We then use the radiation transport solver to generate a \([2500\times 400]\) dataset of synthetic emission spectra, and we compare the performance of different ML models on their ability to predict density, temperature, and \(\sigma\) from the given intensity spectra. The AutoGluon model performs best, with an R2-score of roughly \(98\%\) for density and temperature predictions. Simpler models (random forest, k-nearest neighbor, and deep neural network) also exhibit high R2-scores (\(>90\%\)) for density and temperature predictions, showing the potential of ML models in providing rapid and accurate analysis of spectral data. However, the prediction of \(\sigma\) is relatively poor, which is typical of radiation transport problems, and is related to the relatively smaller sensitivity of the optical depth to this parameter compared to the peak density \(n_{0}\).
The mean squared deviation of the ML-predicted spectra from the actual spectra is typically larger than that achieved by the more time-intensive optimization approach. One way to improve the accuracy of these models could be to perform further optimization by including the ML predictions in the initial population; this can provide faster convergence of the optimization and better fitting to the target spectra.
Fig. 8: Sensitivity of the predictions of density and temperature to variations in the spectral intensity value at different wavelengths for a randomly-chosen intensity spectrum from the test set.
## Acknowledgment
This work was funded by NSF and NNSA under grant no. PHY2108050, by the EAGER grant no. PHY2213898, and supported by the MathWorks fellowship.
|
2306.17730 | Scale-free avalanches in arrays of FitzHugh-Nagumo oscillators | The activity in the brain cortex remarkably shows a simultaneous presence of
robust collective oscillations and neuronal avalanches, where intermittent
bursts of pseudo-synchronous spiking are interspersed with long periods of
quiescence. The mechanisms allowing for such a coexistence are still a matter
of an intensive debate. Here, we demonstrate that avalanche activity patterns
can emerge in a rather simple model of an array of diffusively coupled neural
oscillators with multiple timescale local dynamics in vicinity of a canard
transition. The avalanches coexist with the fully synchronous state where the
units perform relaxation oscillations. We show that the mechanism behind the
avalanches is based on an inhibitory effect of interactions, which may quench
the spiking of units due to an interplay with the maximal canard. The avalanche
activity bears certain heralds of criticality, including scale-invariant
distributions of event sizes. Furthermore, the system shows an increased
sensitivity to perturbations, manifested as critical slowing down and a reduced
resilience. | Max Contreras, Everton S. Medeiros, Anna Zakharova, Philipp Hövel, Igor Franović | 2023-06-30T15:18:48Z | http://arxiv.org/abs/2306.17730v2 | # Scale-free avalanches in arrays of FitzHugh-Nagumo oscillators
###### Abstract
The activity in the brain cortex remarkably shows a simultaneous presence of robust collective oscillations and neuronal avalanches, where intermittent bursts of pseudo-synchronous spiking are interspersed with long periods of quiescence. The mechanisms allowing for such a coexistence are still a matter of an intensive debate. Here, we demonstrate that avalanche activity patterns can emerge in a rather simple model of an array of diffusively coupled neural oscillators with multiple timescale local dynamics in vicinity of a canard transition. The avalanches coexist with the fully synchronous state where the units perform relaxation oscillations. We show that the mechanism behind the avalanches is based on an inhibitory effect of interactions, which may quench the spiking of units due to an interplay with the maximal canard. The avalanche activity bears certain heralds of criticality, including scale-invariant distributions of event sizes. Furthermore, the system shows an increased sensitivity to perturbations, manifested as critical slowing down and a reduced resilience.
**Cascading dynamics are a prominent feature of many complex systems, from information or disease spreading in social interactions to propagation of neuronal activity. Since the discovery of neuronal avalanches, it has been suggested that the brain cortex operates at criticality, leveraging this feature to maximize its dynamic range, information capacity, and dynamical repertoire. Nevertheless, in neuronal systems, the patterns of transient synchrony, such as avalanches, typically coexist and/or interact with robust collective rhythms, and the problem of generic mechanisms that give rise to avalanches and simultaneously allow for their coexistence with collective oscillations still remains unresolved. Here, we demonstrate that the avalanche activity can emerge and coexist with synchronous oscillations in a simple model of diffusively coupled neural oscillators with multiple timescale local dynamics in the vicinity of a canard transition. The avalanches are characterized by scale-invariant distributions of event sizes and an analysis of laminar, that is, inter-event, times. The latter quantifies both cascading and non-successive avalanches. At the critical transition between the states of lower and higher spiking rates that facilitates the onset of avalanches, the system exhibits an increased sensitivity to perturbations, manifested as critical slowing down and a reduced resilience. The disclosed scenario for coexistence of a well-defined oscillation rhythm and patterns with scale-invariant features may open a new avenue of research concerning multistability (and metastability) in neuronal systems.**
## I Introduction
The notions of criticality and phase transitions have gained a revived interest following the formulation of the concept of critical transitions and tipping [1; 2; 3], which essentially translate the ideas of bifurcation theory to the realm of complex systems. Naturally, the latter is not a straightforward process due to the high-dimensional dynamics of complex systems. Moreover, in many applications, understanding the details of the states involved in a critical transition, as well as finding appropriate indicators of tipping, proves to be a difficult problem. Many complex systems exhibit multistability and metastability, an ample example being brain activity. On the one hand, the functionality of the brain relies on generating robust collective rhythms based on synchronization at different levels of self-organization within the cortex [4; 5]. On the other hand, various types of experiments, both under in vivo and in vitro conditions, have revealed the presence of neuronal avalanches [6; 7; 8; 9], that is, cascades of quasi-synchronous bursts of activity, whose main feature is scale invariance, where the spatial and temporal distributions of events follow power-law behaviors. The discovery of neuronal avalanches has led to the brain criticality hypothesis [10; 11; 12; 13; 14], suggesting that the emergent cortical dynamics derive from being poised at the boundary of instability or at the edge of chaos. However, the precise character of the underlying continuous phase transition remains elusive [15; 16; 17; 18]. Moreover, a question that naturally arises is how such different types of activity, in particular those with a well-defined characteristic timescale (regular synchronous activity) and others where such timescales are absent (irregular transiently synchronous activity), can coexist. Furthermore, what are the mechanisms that facilitate such a coexistence?
Recalling the classical theory of phase transitions, power-law behaviors should naturally be expected in scenarios where critical transitions can be associated with supercritical bifurcations. For instance, it is typically stated that neuronal avalanches emerge in the vicinity of a critical transition between silent (absorbing) and active states from a critical branching process [11; 19] or at the synchronization transition [16; 17; 18; 20; 21]. Nevertheless, power laws and other heralds of criticality, such as critical slowing down, have also been observed in relation to first-order phase transitions [22; 23], where criticality involves multistable and metastable behavior. This also applies to certain models of neuronal avalanches, which have indicated their onset in the vicinity of a discontinuous transition showing hysteresis between the low-activity (down) and the high-activity (up) state [24]. Nevertheless, the general mechanisms that can reconcile the emergence of avalanche-like patterns with collective rhythms in neuronal systems, are still a subject of ongoing research [17; 19; 25; 26].
Motivated by the latter problem, we show in this paper that avalanche-like bursting patterns can emerge in a rather simple model of an array of non-locally coupled FitzHugh-Nagumo (FHN) units with attractive diffusive interactions, whereby such intermittent, recurrent collective bursting activity coexists with a completely synchronous state. An important ingredient of local dynamics is that it conforms to relaxation oscillations close to a canard transition [27; 28; 29] between subthreshold and relaxation oscillations. Blending a recently introduced concept of phase-sensitive excitability of a periodic orbit [30; 31; 32] and the interaction-induced trapping of orbits [33; 34; 35], we explain the mechanism by which the interplay of interactions and the vicinity of canard transition results in quenching of relaxation oscillations. This gives rise to patterns of rare spiking which under variation of coupling strength may self-organize into avalanche-like activity with scale-invariant features. We further show that avalanche patterns emerge in vicinity of a transition between two collective regimes with a lower and higher spiking rates, exhibiting classical indicators of criticality, such as a decreased resilience to perturbations and critical slowing down [36; 37; 38; 13].
The paper is organized as follows: Section II provides the details of the model and outlines the aspects of singular perturbation theory relevant to the explanation given in Sec. III on how the interplay of interactions and structures associated with local multiple timescale dynamics may quench the spiking activity. In Sec. IV, we investigate the statistical features of avalanche patterns, and show that these patterns emerge at the transition where the system displays classical criticality features in response to external stimulation. Section V contains our concluding remarks and outlook.
## II Array of non-locally coupled FitzHugh-Nagumo units
Our model is an array of \(N\) identical FHN units [39] with a simple non-local interaction scheme where each unit is coupled to \(P\) of its neighbors to its left and to its right on a 1-dimensional ring:
\[\begin{aligned}\varepsilon\dot{u}_{i}&=u_{i}-\frac{u_{i}^{3}}{3}-v_{i}+\frac{\sigma}{2P}\sum_{j=i-P}^{i+P}(u_{j}-u_{i}),\\ \dot{v}_{i}&=u_{i}+\alpha+\frac{\sigma}{2P}\sum_{j=i-P}^{i+P}(v_{j}-v_{i}).\end{aligned}\tag{1}\]
All the indices are periodic modulo \(N\). Due to the smallness of the parameter \(\varepsilon\ll 1\), here set to \(\varepsilon=0.05\), the local dynamics feature a slow-fast structure with the fast (activator) variables \(u_{i}\) representing neuronal membrane potentials and the slow (recovery) variables \(v_{i}\) reproducing the coarse-grained behavior of ion-gating channels. The non-local interactions are assumed to be linear (diffusive) and act between the activator/recovery variables in the units' fast/slow subsystems [40; 41; 42], see the coupling scheme in Fig. 1. Apart from the coupling radius \(p=P/N\), the interactions are characterized by the coupling strength \(\sigma\), and are considered to be attractive (\(\sigma>0\)) and homogeneous over the array.
Local dynamics are controlled by the bifurcation parameter \(\alpha>0\), such that the singular Hopf bifurcation at \(\alpha=1\) mediates the transition between the excitable regime (\(\alpha\gtrapprox 1\)), featuring a stable equilibrium \((u^{*},v^{*})=(-\alpha,-\alpha+\alpha^{3}/3)\), and the oscillatory regime (\(\alpha<1\)) [39]. Within the framework of singular perturbation theory [29], which treats the limit \(\varepsilon\to 0\), an isolated FHN system has been shown to exhibit a special type of trajectories, called _canards_, that closely follow the repelling branch of the slow manifold for an appreciable time [27; 28; 29] instead of rapidly departing from it. For small but finite \(\varepsilon\), such trajectories form an exponentially thin layer, whereby there exists a so-called _maximal canard_ [43] that follows the entire repelling branch of the slow manifold. The presence of such trajectories strongly impacts the behavior of the bifurcating limit cycle when decreasing \(\alpha\) further below the bifurcation threshold \(\alpha=1\). In particular, the incipient limit cycle undergoes a _canard transition_ [44], where its amplitude sharply increases within a narrow interval of \(\alpha\) values exponentially small
in \(\varepsilon\). The canard transition mediates between small-amplitude harmonic oscillations of period \(\mathcal{O}(\sqrt{\varepsilon})\) and large-amplitude relaxation oscillations of period \(\mathcal{O}(1)\). In the language of neuroscience, this corresponds to a transition from subthreshold oscillations to the regime of tonic spiking. A classical result from asymptotic expansion theory is that the canard transition occurs at \(\alpha=\alpha_{c}=1-\varepsilon/8\), cf. Ref. [45].
Throughout this paper, the local bifurcation parameter is set to \(\alpha=0.99\), a value below \(\alpha_{c}\), such that it supports relaxation oscillations. Nonetheless, the vicinity of the canard transition still influences the way the system responds to perturbations, be it due to interactions and/or noise. In particular, while the subthreshold oscillations below the canard transition manifest excitability in the classical sense [46], it has recently been reported that the relaxation oscillations show a specific type of excitable behavior called phase-sensitive excitability of a limit cycle [30]. The latter comprises a non-uniform response to perturbations along the orbit of relaxation oscillations, such that the FHN system provides a nonlinear threshold-like response to perturbations during the passage close to the unstable equilibrium \((u^{*},v^{*})\). Then, perturbations of sufficient amplitude and acting in the appropriate direction are capable of inducing one or more subthreshold oscillations around the unstable equilibrium, whereby the maximal canard acts as the threshold manifold. The emergence of such subthreshold oscillations in response to interactions will later prove important for understanding the mechanism giving rise to non-trivial collective dynamics behind the activity avalanches.
Our primary interest concerns the impact of coupling strength \(\sigma\) on the system's dynamics, focusing on the case of weak interactions \(\sigma\in[0,0.1]\). All the numerical experiments have been performed for the system size \(N=50\) and coupling range \(P=10\) unless stated otherwise. The numerical integration has been performed using the Cash-Karp (4, 5) method with adaptive stepsize control implemented via GNU Scientific Library (GSL) [47]. The time series in the remainder of the paper illustrate the asymptotic system behavior after discarding a sufficiently long transient of, e.g., \(5\times 10^{3}\) time units. When illustrating the dependence on \(\sigma\), the coupling strength increment is \(\Delta\sigma=10^{-3}\). For each value of \(\sigma\), we consider a set of \(10\) different random initial conditions \((\vec{u}_{0},\vec{v}_{0})\in[-2,2]^{N}\times[-2,2]^{N}\).
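For illustration, a self-contained Python sketch of this setup is given below; it integrates Eq. (1) with SciPy's adaptive Dormand-Prince scheme as a stand-in for the Cash-Karp/GSL implementation, using the parameter values stated above.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, P = 50, 10                          # array size and coupling range
eps, alpha, sigma = 0.05, 0.99, 0.04   # Eq. (1) parameters

offsets = np.arange(-P, P + 1)
idx = (np.arange(N)[:, None] + offsets[None, :]) % N   # periodic neighborhoods

def rhs(t, y):
    """Right-hand side of Eq. (1); y = (u_1..u_N, v_1..v_N)."""
    u, v = y[:N], y[N:]
    # (1/2P) * sum_{j=i-P..i+P} (u_j - u_i); the j = i term vanishes.
    cu = (u[idx].sum(axis=1) - (2 * P + 1) * u) / (2 * P)
    cv = (v[idx].sum(axis=1) - (2 * P + 1) * v) / (2 * P)
    du = (u - u**3 / 3.0 - v + sigma * cu) / eps
    dv = u + alpha + sigma * cv
    return np.concatenate([du, dv])

rng = np.random.default_rng(1)
y0 = rng.uniform(-2.0, 2.0, 2 * N)     # random initial conditions in [-2, 2]
sol = solve_ivp(rhs, (0.0, 2000.0), y0, method="RK45",
                rtol=1e-8, atol=1e-8,
                t_eval=np.arange(0.0, 2000.0, 0.05))
U = sol.y[:N].T                        # sampled u_i(t), shape (n_t, N)
```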
We find that a range of coupling strengths supports the onset of a regime where irregular asynchronous rare spiking activity is interspersed with brief intervals of cascading pseudo-synchronous bursting activity, called avalanches. The described regime is bistable with the regime of synchronous regular spiking activity, as we will demonstrate in Sec. III.
## III Array dynamics in dependence on the coupling strength
Given that the units are identical and interact by attractive diffusive couplings, the system (1) possesses an invariant synchronization manifold \(u_{1}(t)=u_{2}(t)=...=u_{N}(t),v_{1}(t)=v_{2}(t)=...=v_{N}(t)\). Since the isolated dynamics of neurons comprises relaxation oscillations, this manifold contains a limit cycle attractor where all the units perform identical relaxation oscillations. In the following, we will show that under variation of the coupling strength \(\sigma\), the system (1) may exhibit non-trivial emergent dynamics that unfolds off the invariant synchronization manifold. In other words, we find a range of \(\sigma\) values where due to non-local interactions, not all of the initial conditions converge to the invariant manifold, and the completely synchronized relaxation oscillations coexist with another type of collective dynamics.
To observe such emergent dynamics, we introduce a global order parameter \(\mu\) that characterizes the synchronization of units' _average spiking frequencies_. Unlike the more classical synchronization parameters, involving synchronization error or average local variances from the mean variables, \(\mu\) is not intended to quantify both frequency and phase synchronization of units, but rather to describe the quenching of units' average spiking frequencies due to non-local interactions. By construction, \(\mu\) is introduced to indicate the relative persistence of units' trajectories in the neighborhood of the limit cycle \(S\) corresponding to relaxation oscillations of an _uncoupled_ (isolated) unit. To define \(\mu\), we first denote by \(K\) the spike count of an uncoupled unit within a sufficiently long time interval \(\Delta T\). Then, for the system of coupled units (1), we consider \(J_{i}\) as the spike count of a unit \(i\) within the time interval \(\Delta T\). Using these two quantities, the global order parameter \(\mu\) is given by
\[\mu=\frac{1}{NK}\sum_{i=1}^{N}J_{i}. \tag{2}\]
Qualitatively, \(\mu\) compares the ensemble-averaged spiking frequency of coupled units to the spiking frequency of an uncoupled unit. Naturally, these two frequencies are equal, resulting in \(\mu=1\), when the system's state lies
Figure 1: An array of FHN neurons for \(N=8\) and \(P=2\). Fast variables \(u\) are represented in blue, while slow variables \(v\) are shown in yellow.
on the large-amplitude limit cycle on the invariant synchronization manifold. Nevertheless, note that \(\mu=1\) also corresponds to such states where the units are not on the synchronization manifold, but perform relaxation oscillations mutually shifted in phase. One expects the emergent dynamics with quenched spiking of individual units to be characterized by \(\mu<1\).
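Given sampled trajectories, \(\mu\) can be evaluated as in the following sketch; counting spikes as upward crossings of \(u=1\) (the same cross-section used for the return-time analysis in Sec. IV) is our assumption.

```python
import numpy as np

def spike_count(u_t, thresh=1.0):
    """Number of upward crossings of u = thresh in a sampled series."""
    above = u_t > thresh
    return int(np.sum(~above[:-1] & above[1:]))

def order_parameter(U, K):
    """Eq. (2): U has shape (n_t, N), with columns u_i(t) sampled over the
    window Delta T; K is the spike count of an uncoupled unit over the
    same window."""
    N = U.shape[1]
    J = np.array([spike_count(U[:, i]) for i in range(N)])
    return J.sum() / (N * K)
```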
Figure 2 shows the order parameter \(\mu\) in terms of the coupling strength \(\sigma\) for asymptotic dynamics obtained from three different sets of initial conditions (green up-triangles, red down-triangles, and blue circles) in the weak coupling regime (\(\sigma\ll 1\)). For sufficiently small \(\sigma\), all the initial conditions lead to frequency-synchronized relaxation oscillations of individual units; see the region \(R_{1}\). Nevertheless, when increasing \(\sigma\), one observes an interval \(\sigma\sim[0.02,0.07]\) that supports asynchronous states characterized by the global order parameter \(\mu<1\). Such states emerge only for certain sets of initial conditions, and the synchronous state coexists throughout the entire \(\sigma\) interval. By the corresponding values of \(\mu\), one may distinguish between two types of asynchronous states: (i) the region \(R_{2}\) where the global order parameter attains very small values \(\mu\approx 0\); (ii) an interval \(T_{1}\) where \(\mu\) values of asynchronous states are enhanced but still lie notably below the \(\mu=1\) level. For stronger coupling strengths \(\sigma\), one finds region \(R_{3}\) where the synchronous state is regained for all sets of initial conditions. The same physical picture qualitatively holds for a range of coupling radii \(p\). Nevertheless, the width of the interval of intermediate \(\sigma\) values supporting asynchronous states reduces with \(p\), eventually vanishing for interactions of sufficiently long range.
To gain a deeper insight into the emergent dynamics typical for different \(\sigma\) intervals, we consider the corresponding state-space projections \((u_{i},v_{i})\) for two representative units, indicated in blue solid and red dashed lines in Fig. 3. For \(\sigma=0.001\), which lies in the region \(R_{1}\), the neurons already perform relaxation oscillations along the same orbit but are shifted in phase, cf. the orbits and the time traces in Fig. 3(a). For this small coupling strength, the phases remain free along the limit cycles in the state spaces of the different FHN units [48]. Within the region \(R_{2}\), represented by \(\sigma=0.02\) in Fig. 3(b), the units mostly perform small-amplitude oscillations around the unstable equilibrium \((u^{*},v^{*})\), and only a few or none of the units occasionally escape the trapping region, generating rare spikes. Trapping of the trajectories in the vicinity of the unstable equilibrium derives from the impact of local mean-fields, whose fluctuations are reflected in the amplitude variability of subthreshold oscillations around \((u^{*},v^{*})\). The localized excitations (spikes) become more prevalent for a larger \(\sigma=0.041\) that belongs to the interval \(T_{1}\), see Fig. 3(c). Increasing \(\sigma\) within \(T_{1}\), one observes patterns comprised of local mixed-mode oscillations [44] where units fire more frequently and in a more correlated fashion. The statistical properties of such solutions are a key aspect of this study and will be elucidated in the next sections. Finally, for \(\sigma=0.1\) from the region \(R_{3}\), the system dynamics are characterized by completely (both frequency and phase) synchronized relaxation oscillations of individual units, cf. Fig. 3(d).
Now let us focus on the mechanism causing the trapping of units' orbits in the vicinity of the unstable fixed point \((u^{*},v^{*})\). First, we recall the notion of phase-sensitive excitability of a periodic orbit invoked in Sec. II. At variance with [30], which introduced this notion while analysing the non-uniform response of relaxation oscillations to noise, a similar type of effect emerges due to nonlocal interactions. Namely, the units whose isolated dynamics comprise relaxation oscillations become trapped and perform subthreshold oscillations around the unstable equilibrium. Then, the maximal canard establishes a state space threshold separating the transient small-amplitude oscillation from the limit cycle of relaxation oscillations. For deterministic networked systems, the trapping of trajectories has previously been observed in the vicinity of more complex invariant sets. In particular, in Ref. [33; 34; 35], it has been demonstrated that the couplings can trap units' trajectories in the vicinity of unstable chaotic sets. Then, the trapping mechanism is based on an interplay between interactions and the dynamics in the chaotic set, which creates random perturbations that prevent the trajectories from escaping the vicinity of the invariant set via its unstable manifold. The chaotic sets involved in the trapping occur in the state space of each unit. The latter is similar to the scenario here, but instead of chaotic sets, we consider the trapping mediated by unstable equilibria encircled by the maximal canards.
In the following, we propose a mechanism to explain the dynamics in the intervals \(R_{2}\) and \(T_{1}\) that contains
Figure 2: Order parameter \(\mu\) in dependence of coupling strength \(\sigma\) for three different sets of initial conditions (green up-triangles, red down-triangles, and blue circles). Intervals of \(\sigma\) denoted by \(R_{1}\) and \(R_{3}\) are characterized by the prevalence of relaxation oscillations (\(\mu\approx 1\)). Intervals \(R_{2}\) and \(T_{1}\) respectively support coexistence of asynchronous states \(\mu\approx 0\) and \(0<\mu<1\) with the completely synchronous state. The dashed line \(\mu=1\) corresponds to the case where all the units perform completely synchronous relaxation oscillations.
the main ingredients of both phase-sensitive excitability of periodic orbits and the above-described trapping phenomenon. We begin by revisiting the dynamics of an isolated FHN neuron where the maximal canard provides a threshold between different types of orbits, distinguished by the motion around the unstable fixed point \((u^{*},v^{*})\). The differences between the associated transients become apparent if one determines the corresponding _escape time_ \(t_{e}\) from the region enclosed by the maximal canard. This quantity expresses the dimensionless time required for trajectories starting from different initial conditions to reach the limit cycle of relaxation oscillations \(S\). In Fig. 4(a), color coding indicates the escape times \(t_{e}\) for a large set of initial conditions \((u_{0},v_{0})\). Note the thin boundaries between the regions with different values of \(t_{e}\) that reflect the spiraling of the maximal canard around \((u^{*},v^{*})\), and the white line just below indicating a segment of the orbit of the limit cycle corresponding to relaxation oscillations. The subtlety of such boundaries makes the system highly sensitive to perturbations. For instance, a trajectory in the maximal canard region with a certain prescribed escape time, if perturbed, may change its current escape route and perform extra loops around \((u^{*},v^{*})\). The same applies to the orbit of relaxation oscillations, which, under the effect of an appropriate perturbation, may be injected into the maximal canard region when passing close to it so as to perform loops around the unstable fixed point.
Let us now focus on the case of FHN neurons embedded in an array. There, it is the non-local interactions that provide perturbations to local dynamics, sensitively affecting the units' orbits around the maximal canard. Depending on the character of perturbations, the trajectories of only a subset of neurons may undergo subthreshold oscillations due to trapping by the maximal canard, while the remaining neurons continue to perform relaxation oscillations. Such a scenario gives rise to an emergent asynchronous behavior. Since the coupling function is diffusive, its amplitude increases in a desynchronized network, contributing to larger perturbations to neuronal dynamics. Consequently, the interaction between the perturbation-sensitive dynamics around the maximal canard and the couplings, i.e., local mean-fields, constitutes a positive feedback loop. One may numerically assess the range of coupling strengths \(\sigma\) where such an impact of interactions is the strongest. Appreciating that the interactions introduce a parametric perturbation of local neuronal dynamics, we introduce an effective bifurcation parameter \(\alpha_{i}^{\text{eff}}\) for each neuron as
\[\alpha_{i}^{\text{eff}}(t)=\alpha+\frac{\sigma}{2P}\sum_{j=i-P}^{i+P}(v_{j}(t)- v_{i}(t)), \tag{3}\]
where \(\alpha=0.99\) is the unperturbed value defined in Sec. II.
Figure 4(b) depicts the time averages \(\hat{\alpha}_{i}^{\text{eff}}\) of the effective parameter \(\alpha_{i}^{\text{eff}}(t)\) as a function of \(\sigma\). One observes that for \(\sigma\lesssim 0.018\), the value of \(\hat{\alpha}_{i}^{\text{eff}}\approx 0.99\) approximately equals that of an isolated unit. Here, the amplitude of perturbations from the local mean-fields is subthreshold and cannot induce small-amplitude oscillations around \((u^{*},v^{*})\). Consequently, all the units perform relaxation oscillations, cf. region \(R_{1}\) in Fig. 2. However, for \(\sigma\approx 0.018\), the couplings become capable of trapping the units within the canard region to generate small-amplitude oscillations. In parallel, one observes that the values of \(\hat{\alpha}_{i}^{\text{eff}}\) begin to depart substantially from the unperturbed value \(\alpha=0.99\), cf. Fig. 4(b). Such increasing deviations are associated with the feedback from non-local interactions, whose impact on system dynamics grows as the desynchronization sets in. Enhancing \(\sigma\) further, the contribution from non-local interactions to \(\hat{\alpha}_{i}^{\text{eff}}\) peaks around \(\sigma\approx 0.04\). There, the parametric perturbation to units shows a high variability over the array, cf. the increase in the corresponding variances \(V_{\alpha,i}\) of effective bifurcation parameters in Fig. 4(c). The given value of \(\sigma\) approximately corresponds to the transition between regions \(R_{2}\) and \(T_{1}\) from Fig. 2. As \(\sigma\) is further increased, the attractive nature of the couplings begins to dominate the dynamics, contributing to units' synchronization. This is accompanied by the decrease of the amplitudes of parametric perturbations affecting the units up to the point where they become subthreshold, such that the units again perform relaxation oscillations, cf. region \(R_{3}\) in Fig. 2.
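The effective parameter of Eq. (3) and its statistics can be computed directly from sampled slow variables, as in this sketch; the placeholder array `V` stands for \(v_{i}(t)\) sampled from a simulation such as the one sketched in Sec. II.

```python
import numpy as np

def alpha_eff(V, sigma, P, alpha=0.99):
    """Eq. (3) evaluated along sampled trajectories; V has shape (n_t, N)."""
    N = V.shape[1]
    offsets = np.arange(-P, P + 1)
    idx = (np.arange(N)[:, None] + offsets[None, :]) % N
    coup = (V[:, idx].sum(axis=2) - (2 * P + 1) * V) / (2 * P)
    return alpha + sigma * coup                     # shape (n_t, N)

V = np.random.default_rng(0).uniform(-2, 2, (1000, 50))  # placeholder data
A = alpha_eff(V, sigma=0.04, P=10)
alpha_hat = A.mean(axis=0)    # time averages, cf. Fig. 4(b)
V_alpha = A.var(axis=0)       # local variances, cf. Fig. 4(c)
```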
Figure 3: Main frames: orbits \((u_{i}(t),v_{i}(t))\) of two units \(i=20\) (blue solid lines) and \(i=50\) (red dashed lines); insets: time traces \(u_{i}(t)\) of the same two units for different system states. (a) \(\sigma=0.001\): phase-shifted synchronization of relaxation oscillations. (b) \(\sigma=0.02\): the feedback from local mean-fields causes trapping of orbits around the unstable equilibrium \((u^{*},v^{*})\). (c) \(\sigma=0.041\): the orbits eventually escape from the trapping region, generating rare spikes. (d) \(\sigma=0.1\): coupling strength is sufficient to induce complete synchronization of relaxation oscillations. The unstable equilibrium \((u^{*},v^{*})\) of isolated dynamics lies at the intersection of null-clines (black dashed and dotted curves).
In Sec. IV, we will explore the statistical properties of the network solutions. We pay special attention to the transition between \(R_{2}\) and \(T_{1}\), where the non-local coupling and sensitive response to perturbations of relaxation oscillations in the vicinity of the maximal canard make the largest impact.
## IV Avalanche activity
As elaborated in Sec. III, for a range of intermediate \(\sigma\) in Fig. 2, one finds activity patterns where the units spend much time trapped by the maximal canard in the vicinity of the unstable fixed point \((u^{*},v^{*})\), while being rarely released to perform spikes. In the following, we resolve the spatio-temporal structure of such emergent states, showing that they conform to avalanche-like activity, where intermittent pseudo-synchronous spiking, localized to various degrees, is separated by long periods of quiescence over the array. Note that the observed avalanches are not intended to model classical neuronal avalanches [6; 7; 8; 9], though a partial analogy may be drawn, as discussed in Sec. IV.1.
Let us first consider the spatio-temporal evolution of local membrane potentials \(u_{i}(t)\) described by Eq. (1), see Figs. 5(a) and 5(b). Indeed, the latter indicates that the typical activity patterns are self-organized into episodes of pseudo-synchronous spiking separated by silent episodes. Nevertheless, in terms of temporal organization, two types of avalanches may be distinguished, namely _cascading_ events, cf. the example of an avalanche beginning around \(t\approx 30\) in Fig. 5(b), where the (spatially localized) spiking activity propagates, forming temporal sequences; and _temporally localized_ (isolated) events, where the (spatially localized) spiking occurs within a narrow time window. Note that the duration of the time window used to identify pseudo-synchronous spiking is specified in Sec. IV.1. Qualitatively, the episodes of spiking activity resemble self-localized excitations in excitable media [49; 50]. The cascading events have a step-pyramid-like space-time structure. This reflects the fact that at each successive level, only the units closer to the center of the previous level perform a spike. The latter units remain active because they receive most of the input from the spiking rather than the silent units. Naturally, the units at the top level, e.g., unit \(i=68\) in Fig. 5(a) and Fig. 5(b), fire more spikes during a cascading event than the units whose spiking terminates at some of the lower levels. In contrast to cascading avalanches, each unit participating in a temporally localized event spikes only once. In terms of spatial organization, the units spiking within a given narrow time window can appear as connected clusters, or may display a multi-cluster structure forming spatially disconnected clusters.
The intrinsic structure and self-organization of avalanche patterns can be described in more detail by looking into the spatio-temporal evolution of the quantity \(\alpha_{i}^{*}(t)=\alpha_{i}^{\rm eff}(t)-\alpha_{c}\) as shown in Fig. 5(c). In particular, one observes that units spend most of the time in the vicinity of the canard transition \(\alpha_{i}^{*}\approx 0\), which underlines the important role of the canard transition in the self-organization of avalanches. Moreover, for the cascading events, one observes that the units around the excited region are furthest above the bifurcation threshold, i.e., have the largest values of \(\alpha_{i}^{*}(t)\). This effectively facilitates the pattern confinement, making the avalanche events spatially localized.
The local dynamics comprise irregular mixed-mode oscillations, involving fast subthreshold oscillations interspersed with random rare spikes, cf. Fig. 5(a) which illustrates the time traces \(u_{i}(t)\) of three units highlighted in Fig. 5(b). The irregularity of single units' interspike intervals is corroborated by Fig. 6(a) showing the temporal evolution of the return times \(\Delta t_{n}(t)\) to the Poincaré cross-section \(u_{k}(t)=1,\dot{u}_{k}(t)>0\) for an arbitrary unit. Together with the corresponding first return map of successive return times \(\Delta t_{n}(\Delta t_{n-1})\) in Fig. 6(b), it evinces that the units may sometimes fire spikes in close succession, but that there may also be long periods of quiescence. The spatial profile of average spiking frequencies
Figure 4: (a) For an isolated FHN unit, i.e., Eq. (1) with \(\sigma=0\), the color scheme indicates escape times \(t_{e}\) from the maximal canard region for different initial conditions \((u_{0},v_{0})\); white curve: segment of the limit-cycle \(S\) close to the maximal canard. (b) Time-averaged effective bifurcation parameters \(\hat{\alpha}_{i}^{\rm eff}\) as a function of \(\sigma\). (c) Blue dots: local variances \(V_{\alpha,i}\) of effective bifurcation parameters \(\alpha_{i}^{\rm eff}(t)\); red dash-dotted curves in (b) and (c): population-averaged values for different \(\sigma\).
\(\omega_{k}=2\pi M_{k}/T\), where \(M_{k}\) is the spike count within a macroscopic time interval \(T\), is shown in Fig. 6(c). Expectedly, as the averaging time interval is increased, the \(\omega_{k}\) profile becomes more uniform, indicating that it should appear flat for very long \(T\) as the spiking excitations occur randomly in space. Qualitatively, our scenario involving rare and irregular recurrent spiking bears a certain resemblance to the onset of extreme events in systems of diffusively coupled nonidentical FHN units with excitable local dynamics [51; 52; 53; 54], as well as identical FHN oscillators with delayed diffusive couplings [55; 56]. However, in contrast to [51], we typically find spatially localized events, rather than the bursting events spanning the entire network.
### Statistical features of avalanches
In this Section, our goal is to address in detail how the statistical features of activity patterns like the one in Fig. 5(a) depend on the coupling strength \(\sigma\). Let us first precisely define the avalanche events and the associated properties we are interested in. Starting from a set of random initial conditions \((\vec{u}_{0},\vec{v}_{0})\in[-2,2]^{N}\times[-2,2]^{N}\), we consider the evolution of an array Eq. (1) over the interval \(\Delta T=5\times 10^{4}\). An individual avalanche event comprises a joint spiking activity of a _cluster_ of a certain number of units \(k\) within the narrow time window \(\Delta t=100\delta t\), where \(\delta t\) is the integration step. The avalanche size, denoted by \(S_{k}\), then refers to the number of units that have fired at least once during this small interval, and is not related to the total number of spikes emitted by the units forming the cluster. In other words, \(S_{1}\) denotes an event where only a single unit has fired within the given time window \(\Delta t\), whereas \(S_{N}\) corresponds to an avalanche spanning the whole array. To elucidate how the avalanche properties depend on \(\sigma\) without a potential bias due to initial conditions, for each value of \(\sigma\) we perform numerical experiments with 10 different sets of random initial conditions.
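A possible implementation of this event definition is sketched below; it assumes that spikes have already been extracted as (time, unit) pairs, and it approximates the laminar times by the gaps between successive active windows.

```python
import numpy as np

def avalanche_events(spike_times, spike_units, dt_win):
    """Group spikes into consecutive windows of width dt_win (here,
    100*delta_t in the paper's convention). The size S_k of an event is
    the number of *distinct* units firing within one window; laminar
    times tau are the gaps between successive active windows."""
    spike_times = np.asarray(spike_times, dtype=float)
    spike_units = np.asarray(spike_units)
    win = (spike_times / dt_win).astype(int)
    sizes, active = [], []
    for w in np.unique(win):
        sizes.append(len(np.unique(spike_units[win == w])))
        active.append(w)
    laminar = np.diff(np.asarray(active)) * dt_win
    return np.asarray(sizes), laminar
```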
Focusing on the \(\sigma\) interval associated with regimes \(R_{2}\) and \(T_{1}\), Fig. 7(a) illustrates the \(\sigma\) dependence of the _maximal_ avalanche sizes \(\max(s_{k})\) normalized over the array size \(N\), i.e., \(s_{k}=S_{k}/N\). Multiple symbols for a given value of \(\sigma\) denote the results obtained for the different sets of initial conditions, and the red curve indicates the values averaged over the ensemble of initial conditions. For smaller \(\sigma\), even the maximal avalanche sizes do not exceed the normalized coupling range \(2p=2P/N\), indicated by the horizontal green line. This implies that avalanches remain localized events focused around the initial excitation, or put differently, that the correlation
Figure 5: Self-organization of avalanches. (a) Time traces \(u_{i}(t)\) for three units \(i=2\), \(i=45\), and \(i=68\), indicated by green, blue, and red rectangles in panel (b), respectively. (b) Spatio-temporal evolution of fast variables \(u_{i}(t)\). (c) Spatio-temporal evolution of the quantity \(\alpha_{i}^{*}(t)=\alpha_{i}^{\text{eff}}(t)-\alpha_{c}\), which shows that the units spend most of the time in the vicinity of the canard transition. System parameters: \(\sigma=0.04\), \(N=100\) and \(p=0.2\).
length of spontaneous activity fluctuations remains short. However, for larger coupling strengths \(\sigma\gtrapprox 0.025\), the average values over different initial conditions exceed the coupling range, suggesting that the synchronous spiking activity typically propagates over the array, indicating an increase in the system's correlation length. Enhancing the coupling strength further into the \(T_{1}\) regime (\(\sigma>0.04\)), we observe that maximal avalanches indeed span the entire array.
To gain insight into the variability of avalanche cluster sizes, in Fig. 7(b) we show how the maximal number of different cluster sizes \(\mathcal{C}(s_{k})\) depends on \(\sigma\). Multiple symbols for any given \(\sigma\) again correspond to results for different initial conditions. One observes that the variability of cluster sizes, reflected in the number of different recorded cluster sizes, reaches a maximum around \(\sigma\approx 0.04\), a value near the transition between the regimes \(R_{2}\) and \(T_{1}\) from Fig. 2. Nonetheless, within the \(T_{1}\) regime, another form of variability increases. Namely, the diversity of cluster sizes recorded in simulations starting from different initial conditions becomes much more pronounced than in the \(R_{2}\) regime.
Both the onset of avalanches that span the entire array in Fig. 7(a), and the highest variability of avalanche sizes observed in Fig. 7(b) for \(\sigma\approx 0.04\), suggest that the change of regimes from \(R_{2}\) to \(T_{1}\) under increasing \(\sigma\) bears signatures of criticality. One may draw a partial analogy to observations on resting state (spontaneous) activity in neuronal systems. There, the neuronal avalanches [6; 7; 8; 9], found in electrophysiological recordings, both under in vitro and in vivo conditions, as well as by electroencephalography and functional magnetic resonance imaging, are known to show criticality features. Manifestations of criticality classically involve scale invariance in the distributions of relevant quantities, e.g. the size and duration of neuronal avalanches, which is reflected in the power-law behaviors of the form \(F(x)\propto x^{-\gamma}\), where \(\gamma\) is a critical exponent [57; 58]. Criticality features are generally associated with proximity to critical/phase transitions between ordered and disordered phases [11; 14; 19; 59], or in case of neuronal avalanches, between an absorbing state with a quickly decaying spiking activity and an active state with a runaway (exploding) activity propagation [60]. Nevertheless, the concept of phase transitions applies to systems in the thermodynamic limit \(N\to\infty\), so an observation of genuine power-laws cannot be expected for finite-size systems. To resolve this, one often invokes the point that the phase
Figure 7: Statistical properties of avalanches. (a) Largest, relative avalanche sizes \(\max(s_{k})\) in terms of \(\sigma\). For each \(\sigma\), dots indicate the results for 10 different initial conditions. The average values (red curve) exceed the connectivity \(2p=2P/N\) (green dashed line) for \(\sigma>0.03\). (b) Number of different cluster sizes \(\mathcal{C}(s_{k})\) as a function of \(\sigma\). The average (red curve) shows a peak in the vicinity of the transition between regions \(R_{2}\) and \(T_{1}\), cf. Figs. 2 and 4.
Figure 6: (a) Temporal evolution of the return times \(\Delta t_{n}(t)\) to the Poincaré cross-section \(u_{k}(t)=1,\dot{u}_{k}(t)>0\) of a single unit. (b) First return map \(\Delta t_{n}(\Delta t_{n-1})\) of successive return times to the Poincaré cross-section. (c) Spatial distribution of average spiking frequencies \(\omega_{k}\) over time periods \(T=5\times 10^{4}\) (empty squares), \(T=2\times 10^{5}\) (empty diamonds) and \(T=4\times 10^{5}\) (solid circles). System parameters: \(\sigma=0.04\), \(N=100\), \(p=0.2\).
transitions in finite systems extend over a critical region called Griffiths phase [61; 62; 63]. There, the system is quasi-critical and maintains certain aspects of criticality, including the truncated power-law behaviors (power laws with exponential cut-offs) of relevant quantities [64]. This also applies to neuronal avalanches, where the classically reported exponents for the avalanche size and duration are \(3/2\) (with some exceptions) and \(2\), respectively, while the cut-off typically matches the system size [6; 65], but may also deviate from it [66; 67]. One should further note that the power-law distributions of event sizes per se may not necessarily imply that the system is poised close to criticality [68; 69]. Conversely, there are instances, such as certain models of neuronal avalanches, where a critical system shows a scale-free distribution of event sizes that does not conform to a power-law [70]. Such results may partly derive from a potentially fuzzy relationship between the definition of observed events and the local dynamics behind them.
Given the arguments above, we focus on the properties of avalanches in the narrow range \(\sigma\in[0.037,0.043]\) around the transition between the regimes \(R_{2}\) and \(T_{1}\) from Fig. 2. In particular, fixing \(\sigma\), we consider the probability distribution \(\mathcal{P}(s)\) of relative avalanche cluster sizes \(s=S/N\) and the probability distribution \(\mathcal{P}(\tau)\) of time intervals \(\tau\) between the successive avalanches, see the left and the right column in Fig. 8, respectively. Both \(\mathcal{P}(s)\) and \(\mathcal{P}(\tau)\) are sampled for three different array sizes (\(N=50\), \(N=100\) and \(N=200\)) maintaining the fixed coupling radius \(p=P/N=0.2\). For all three \(N\) values, the distributions \(\mathcal{P}(s)\) show an approximate scaling regime for small and intermediate relative cluster sizes \(s\leq p\), cf. the vertical dashed lines in Fig. 8(a), (c) and (e), followed by a cut-off due to finite system size. For the largest array size \(N=200\) in Fig. 8(e), we have included as a guideline the power-law scaling \(\beta=3/2\) (black dash-dotted line) classically obtained for the distribution of neuronal avalanches.
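Numerically, \(\mathcal{P}(s)\) can be estimated by histogramming the sizes on logarithmic bins, as in the following sketch; the bin count and density normalization are our choices.

```python
import numpy as np

def log_binned_density(samples, n_bins=20):
    """Empirical probability density on logarithmically spaced bins;
    returns geometric bin centers and densities (samples must be > 0)."""
    x = np.asarray(samples, dtype=float)
    edges = np.logspace(np.log10(x.min()), np.log10(x.max()), n_bins + 1)
    dens, _ = np.histogram(x, bins=edges, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])
    return centers, dens

# Comparison with the beta = 3/2 guideline of Fig. 8(e) on log-log axes:
# c, d = log_binned_density(s_values)
# plt.loglog(c, d, 'o'); plt.loglog(c, c**-1.5, '--')
```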
The distributions \(\mathcal{P}(\tau)\) of intervals between the successive avalanche events, also called the _laminar times_ [71; 72], indicate two different regimes that guide the avalanche recurrence processes, cf. Fig. 8(b), (d) and (f). In particular, very short laminar times describe the intrinsic dynamics of cascading avalanches, i.e., correspond to cascades' intra-event intervals between the successive bursts. For intermediate \(\tau\), one observes a peak that indicates the presence of a characteristic timescale in the avalanche recurrence process rather than scale-invariant behavior. Such traces of pseudo-regularity in avalanche recurrence times reflect an occasional degradation of the trapping mechanism associated with the maximal canard, which allows the system to intermittently evolve in the vicinity (not on) of the synchronization manifold, having the units generate spikes mutually shifted in phase.
The results in this section suggest that our system in the vicinity of the transition between the regimes \(R_{2}\) and \(T_{1}\) from Fig. 2 shows certain aspects of critical behavior, like the increase of correlation length compared to coupling radius (indirectly observed by the growth of maximal cluster sizes) and the enhanced variability of cluster sizes. To further this point, in the next section we investigate the system's response to perturbations, demonstrating evidence of _critical slowing down_ and a _decreased resilience_ of the system's dynamics in the vicinity of this transition.
### Indicators of criticality
Approaching the critical transition, complex systems tend to show progressively less resilience to perturbations, taking increasingly longer times to recover [2]. Such slower recovery rates are classically described as a herald of a critical slowing down phenomenon [36; 37; 38; 39]. The latter also influences the relaxation processes and hence the statistics of fluctuations underlying the spontaneous activity of systems near criticality. Qualitatively, this increases their short-term memory and variability, and
Figure 8: Distributions of relative avalanche sizes \(s=S/N\) and laminar times \(\tau\) for (a)-(b) \(N=50\), (c)-(d) \(N=100\) and (e)-(f) \(N=200\). Different symbols indicate the results obtained for different sets of initial conditions. \(\sigma\) is chosen in the vicinity of the transition between the regions \(R_{2}\) and \(T_{1}\). Coupling radius \(p=P/N=0.2\) (vertical dashed black lines in the left column) is kept fixed in all the simulations. Distributions \(\mathcal{P}(s)\) in (a), (c) and (e) show a power-law behavior for small and intermediate avalanches followed by a cut-off. The comparison with the power-law \(\beta=3/2\) (black dotted line) in panel (e) is provided as a guideline. Distributions of laminar times \(\mathcal{P}(\tau)\) in (b), (d) and (f) show a peak indicating the presence of a characteristic timescale.
is reflected in enhanced autocorrelation and variance of systems' observables. In terms of induced activity, systems at criticality are known to maximize their dynamic range [73; 74; 10].
In the following, our goal is to demonstrate that at the onset of the \(T_{1}\) region, or rather for \(\sigma\) values close to the transition between regions \(R_{2}\) and \(T_{1}\) from Fig. 2, an array of FHN units exhibits two signature effects of criticality, namely increased recovery times to small perturbations and a reduced resilience. To do so, we introduce two types of stimulation protocols: one, called an _LC-shift_, where a small fraction \(M\) of units is triggered to spike, i.e. their orbits are kicked toward the orbit of a relaxation oscillation limit cycle; and the other, called _FP-shift_, where the same fraction of units is injected into the vicinity of the unstable fixed point \((u^{*},v^{*})\). The described perturbations are applied at time \(t=T_{p}\), after which the array spontaneously evolves until the moment \(t=T\). To quantify the effect of perturbations, we compare the orbit of the system after introducing the stimulus to that of the unperturbed system and numerically determine the deviations \(\zeta(t)\). As a measure of the impact of the stimulus, we take the variance \(\mathrm{Var}(\zeta(t))\) of the deviations calculated over the interval \(T-T_{p}\).
Figure 9(a) shows the time series of variances \(\mathrm{Var}(\zeta(t))\) for three different values of \(\sigma\) following an _FP-shift_ at \(T_{p}=150\). The horizontal red dashed lines indicate the levels of the corresponding initial _FP-shifts_. We first point out that the post-stimulus amplitude variance (shown in green) is much higher than the initial amplitude of the _FP-shift_ for \(\sigma=0.04\) (middle panel), whereas it is lower for \(\sigma=0.024\) (top panel) and \(\sigma=0.043\) (bottom panel). This reflects the array's reduced resilience, i.e. the decreased recovery capability for \(\sigma=0.04\), and also shows that the perturbations from external stimuli are amplified for this value of \(\sigma\). Moreover, one observes that the post-stimulus interval of nonzero variance is much longer for \(\sigma=0.04\) than for the other two \(\sigma\) values. This evinces that the array's recovery times \(T_{R}\) from a perturbation (see the blue dashed line with arrows) are much longer for \(\sigma=0.04\). Note that the \(\sigma\) values in the top and bottom panels are selected from regions \(R_{2}\) and \(T_{1}\) from Fig. 2, while the longest recovery time and the largest variance amplitude are found approximately at the transition boundary between these regions. In other words, in the vicinity of the latter transition, the system shows two prominent features of criticality, with the recovery time and signal variance following a perturbation differing substantially from the system's behavior below and above the transition.
To better characterize the described behavior, let us investigate the array's recovery times and variances over the continuous interval of \(\sigma\) spanning between the regions \(R_{2}\) and \(T_{1}\). Our aim is to show that the variability of the system's response to perturbations is indeed the largest in the vicinity of the transition between these two regions. Hence, for each considered value of \(\sigma\), we perform simulations of the array dynamics for 10 different initial conditions and implement either the _FP-shift_ or the _LC-shift_ stimulation protocol. Then, we numerically estimate the cumulative variance per unit time \(\phi^{2}\) for each set of initial conditions:
\[\phi^{2}=\frac{1}{T-T_{p}}\;\int_{T_{p}}^{T}\mathrm{Var}(\zeta)dt. \tag{4}\]
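Eq. (4) and the normalized recovery time \(\rho=T_{R}/(T-T_{p})\) used below can be evaluated from the sampled variance trace as in this sketch; the tolerance used to declare recovery is an assumption, since the precise criterion for \(T_{R}\) is not spelled out.

```python
import numpy as np
from scipy.integrate import trapezoid

def phi_squared(t, var_zeta, T_p):
    """Eq. (4): cumulative variance per unit time over [T_p, T]."""
    mask = t >= T_p
    return trapezoid(var_zeta[mask], t[mask]) / (t[-1] - T_p)

def normalized_recovery_time(t, var_zeta, T_p, tol=1e-6):
    """rho = T_R/(T - T_p), taking T_R as the last time at which the
    post-stimulus variance still exceeds a small tolerance (an assumed
    recovery criterion)."""
    mask = t >= T_p
    above = np.nonzero(var_zeta[mask] > tol)[0]
    T_R = 0.0 if above.size == 0 else t[mask][above[-1]] - T_p
    return T_R / (t[-1] - T_p)
```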
The dependence of the quantity \(\phi^{2}\) on \(\sigma\) is illustrated in Fig. 9(b). Note that for a given value of \(\sigma\), each symbol describes the system's response for a different set of initial conditions, whereas the responses to different types of
Figure 9: Indicators of criticality at the transition between regions \(R_{2}\) and \(T_{1}\) from Fig. 2. (a) Time traces of variance \(\mathrm{Var}(\zeta)\) after an _FP-shift_ introduced to a fraction of \(M=0.05\) units at \(T_{p}=150\) for \(\sigma=0.024\) (top panel), \(\sigma=0.04\) (middle panel) and \(\sigma=0.043\) (bottom panel); red dashed line: level of initial _FP-shift_. (b) Cumulative variance \(\phi^{2}\) and (c) normalized recovery time \(\rho\) as a function of \(\sigma\). Each symbol stands for a different set of initial conditions, and the color code refers to _LC-shift_ (red) and _FP-shift_ (blue) stimulation protocols. The dash-dotted and dotted curves in panel (b) indicate the values of \(\phi^{2}\) averaged over an ensemble of initial conditions for _LC-shift_ (red) and _FP-shift_ (blue), respectively. System parameters: \(N=50\), \(p=0.2\).
stimulation protocols are indicated by red (_LC-shift_) and blue (_FP-shift_). The two dotted lines indicate the system's responses averaged over the ensemble of different initial conditions for the two types of stimulus. One finds that such averaged \(\phi^{2}\) quantities show peaks around the coupling strength \(\sigma\approx 0.04\), indicating that the system is most sensitive to perturbations near the transition between the regions \(R_{2}\) and \(T_{1}\). Nonetheless, for the same interval of \(\sigma\), we examine the array's recovery times after implementing both types of stimulation protocols. In particular, we collect the recovery times \(T_{R}\) (indicated in Fig. 9(a)) for 10 different sets of initial conditions. To make the observed values of \(T_{R}\) comparable, we normalize them by the total observation time after the stimulus \(T-T_{p}\), thus obtaining the normalized recovery time \(\rho=T_{R}/(T-T_{p})\). Figure 9(c) shows the observed values of \(\rho\) as a function of \(\sigma\). One readily notes that indeed the larger values of \(\rho\) occur near the transition between the regions \(R_{2}\) and \(T_{1}\).
## V Discussion
We have introduced a simple model of an array of diffusively coupled neural oscillators whose local dynamics are poised in the vicinity of a canard transition. This facilitates the coexistence of completely synchronous oscillations and avalanche-like patterns of pseudo-synchronous bursting activity. The onset of avalanches is shown to be associated with an inhibitory effect of interactions. This effect is manifested at a range of small coupling strengths, where interactions quench local relaxation oscillations due to an interplay with a maximal canard, a structure that stems from local multiple timescale dynamics. The observed long-term trapping of orbits in the vicinity of an unstable fixed point derives from a combination of a recently introduced concept of phase-sensitive excitability of a periodic orbit [30] and the trapping mechanism from [33; 34; 35]. Essentially, each unit, as an oscillating system driven by a fluctuating local mean-field, provides a non-uniform response to perturbations along the orbit of a limit cycle, which leads to persistent strong deviations from the unperturbed orbit. Compared to [33; 34; 35], the trapping phenomenon is here extended to a confinement of orbits to a region of maximal canard instead of the original confinement by a chaotic saddle. Conceptually, one should note that, distinct from the classical notion of excitability, phase-sensitive excitability is not immediately related to the system being close to a bifurcation between stable stationary and oscillatory states, but is instead connected to a canard transition between subthreshold and relaxation oscillations. In a broader context, the important role of canard transitions in pattern formation has already been shown in the cases of alternating (leap-frog) dynamics in small motifs of units [31] or the different types of coherence-incoherence patterns (solitary states and patched patterns) in non-locally coupled arrays with repulsive and attractive interactions [32; 50], involving either coupled excitable units or self-oscillating units close to the bifurcation toward the excitable state. Complementing this, here we have shown the impact of the canard transition on the self-organization and intrinsic structure of avalanche patterns.
We have further demonstrated that avalanches can emerge at the transition between two collective regimes featuring lower and higher spiking activity rates. The avalanches have been shown to satisfy power-law behaviors regarding avalanche cluster sizes and laminar times. Moreover, the system generating avalanches has been found to bear classical indicators of criticality under external perturbations, including a reduced resilience and critical slowing down. So far, neuronal avalanches have primarily been suggested to arise in the vicinity of two very different types of continuous transitions, namely the transition between absorbing and active phases and the onset of synchronization. Also, it has been indicated that, by implementing various adaptation rules such as synaptic plasticity or excitability adaptation, models of neuronal networks may self-organize to a critical state facilitating avalanches, which has linked the onset of avalanches to self-organized criticality [75; 76; 23; 77]. On the other hand, it has been found that avalanches may emerge from critical dynamics in balanced excitatory-inhibitory networks, where they can be combined with different types of collective oscillation rhythms [78; 79]. The latter can involve two types of scenarios: one with collective rhythms and avalanches coexisting (either independently or with rhythms modifying the features of avalanches), and the other having the rhythms embedded in avalanche activity [20; 80]. Finally, it has been reported that scale-invariant avalanches may also emerge without the neural network operating at criticality, but just due to a balanced input or its interaction with noise [78; 79; 81; 82].
In light of the above studies, our findings point to the possibility of an independent coexistence between a synchronous oscillation rhythm and transiently synchronous avalanche activity, whereby the mechanism facilitating such coexistence requires two ingredients: the non-local diffusive interactions and local dynamics in the vicinity of a canard transition between subthreshold and relaxation oscillations. In terms of the states involved, the character of the critical transition supporting avalanches is most similar to the one in [24], in the sense that it also mediates between states with lower and higher spiking rates. Nevertheless, in contrast to our study, the model in [24] has a more complex structure combining stochastic local dynamics with a quenched disorder in network topology, and criticality occurs in the vicinity of a spinodal line of a discontinuous transition. For future research, it would be important to gain insight into the switching dynamics between the coexisting regimes in our model, both under the impact of noise and when applying different types of external stimulation.
## Acknowledgments
M.C. thanks Javiera Contreras for the support and help with the design of the figures.
E.S.M. acknowledges the support by the Deutsche Forschungsgemeinschaft (DFG) via the project number 454054251. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer - 163436311 - SFB 910.
A.Z. acknowledges support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer - 163436311 - SFB 910.
P.H. acknowledges support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 434434223 - SFB 1461.
I.F. acknowledges the funding from the Institute of Physics Belgrade through grant by the Ministry of Science, Technological Development and Innovation of the Republic of Serbia and the partial support by the ANSO - Alliance of International Science Organizations Collaborative Research Projects and Training Projects, grant number ANSO-CR-PP-2022-05.
## Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2309.06127 | Accelerating Edge AI with Morpher: An Integrated Design, Compilation and
Simulation Framework for CGRAs | Coarse-Grained Reconfigurable Arrays (CGRAs) hold great promise as
power-efficient edge accelerator, offering versatility beyond AI applications.
Morpher, an open-source, architecture-adaptive CGRA design framework, is
specifically designed to explore the vast design space of CGRAs. The
comprehensive ecosystem of Morpher includes a tailored compiler, simulator,
accelerator synthesis, and validation framework. This study provides an
overview of Morpher, highlighting its capabilities in automatically compiling
AI application kernels onto user-defined CGRA architectures and verifying their
functionality. Through the Morpher framework, the versatility of CGRAs is
harnessed to facilitate efficient compilation and verification of edge AI
applications, covering important kernels representative of a wide range of
embedded AI workloads. Morpher is available online at
https://github.com/ecolab-nus/morpher-v2. | Dhananjaya Wijerathne, Zhaoying Li, Tulika Mitra | 2023-09-12T11:06:01Z | http://arxiv.org/abs/2309.06127v1 | Accelerating Edge AI with Morpher: An Integrated Design, Compilation and Simulation Framework for CGRAs
###### Abstract
Coarse-Grained Reconfigurable Arrays (CGRAs) hold great promise as power-efficient edge accelerators, offering versatility beyond AI applications. Morpher, an open-source, architecture-adaptive CGRA design framework, is specifically designed to explore the vast design space of CGRAs. The comprehensive ecosystem of Morpher includes a tailored compiler, simulator, accelerator synthesis, and validation framework. This study provides an overview of Morpher, highlighting its capabilities in automatically compiling AI application kernels onto user-defined CGRA architectures and verifying their functionality. Through the Morpher framework, the versatility of CGRAs is harnessed to facilitate efficient compilation and verification of edge AI applications, covering important kernels representative of a wide range of embedded AI workloads. Morpher is available online at [https://github.com/ecolab-nus/morpher-v2](https://github.com/ecolab-nus/morpher-v2).
## I Introduction
In the ever-evolving era of artificial intelligence, the need for edge devices to adeptly manage advanced machine learning (ML) workloads is growing rapidly. These devices, operating under severe power and computational performance constraints, must not only efficiently execute a wide range of ML algorithms, but also cater to an array of diverse workloads such as signal and image processing. Even within the ML sphere, the advent of new kernel types presents a continuous challenge, outpacing the capabilities of current ML accelerators. Despite their efficiency in traditional ML workloads, these accelerators fall short in adaptability for non-ML tasks, highlighting the urgent need for more flexible solutions.
Emerging as a solution to this, Coarse-Grained Reconfigurable Arrays (CGRAs) represent an innovative class of hardware accelerators, combining the flexibility of FPGAs with energy efficiency comparable to ASIC-based ML accelerators. Their inherent word-level reconfigurability and high energy efficiency make them ideal for power- and area-constrained edge devices. Additionally, the use of dataflow computing in CGRAs naturally aligns with the computational patterns of many AI workloads. Their unique blend of adaptability and efficiency has led to the commercial adoption of CGRAs in the Samsung Exynos 7420 SoC [9], Intel Configurable Spatial Accelerator [10], Sambanova RDU [17], Renesas Configurable Processor [11], and academic prototypes like HyCUBE [2, 3] among others [20, 26, 27, 23].
A CGRA architecture, as depicted in Figure 1, is characterized by a grid of interconnected Processing Elements (PEs) and multi-banked memories accessible to a subset of PEs, rendering it simple yet robust. The PEs consist of configurable switches, a register file, an ALU, and control memory, enabling the time-multiplexed execution of instructions. The use of static scheduling negates the need for hardware structures for conflict resolution and synchronization, leading to a lightweight footprint for CGRAs. However, the effectiveness of CGRAs relies heavily on high-quality compiler mapping of application kernels onto the architecture, making the CGRA compilation problem a substantial research area [12, 13, 14, 15, 7, 22].
Application kernels are typically statically scheduled on CGRAs, a process that exposes all architectural features to the compiler for spatio-temporal mapping of dataflow graph (DFG) nodes, as illustrated in Figure 2. This mapping includes assigning operations to CGRA PEs and routing data dependencies through configurable switches and registers. A common strategy, loop pipelining, allows concurrent scheduling of operations from different iterations, enhancing the kernel's throughput as shown in Figure 2(b). This process, also known as modulo scheduling, can be achieved by mapping the DFG onto a modulo spatio-temporal resource graph, known as the Modulo Routing Resource Graph (MRRG) (Figure 2(c)) [8].
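As a plain illustration of the MRRG idea (our own sketch, not Morpher's data structures), the resource graph can be built by replicating every PE across the \(II\) modulo time steps and adding one-cycle routing edges that wrap around modulo \(II\):

```
import itertools

def build_mrrg(n_rows, n_cols, II):
    # Hardware resources: one node per PE per modulo time step.
    pes = list(itertools.product(range(n_rows), range(n_cols)))
    nodes = [(pe, t) for pe in pes for t in range(II)]
    edges = []
    for (r, c), t in nodes:
        # Mesh neighbors (and the PE itself) reachable in one cycle;
        # time advances modulo II.
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n_rows and 0 <= nc < n_cols:
                edges.append((((r, c), t), ((nr, nc), (t + 1) % II)))
    return nodes, edges

# A 2x2 array with II = 2 yields 8 modulo-time nodes.
nodes, edges = build_mrrg(2, 2, II=2)
```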
In this study, we direct our attention to the optimization of ML workloads on user-defined CGRAs, employing the Morpher toolchain, a comprehensive design tool developed for CGRA modeling and compilation. Morpher is an open-source framework that provides comprehensive support for modeling diverse CGRA architectures, supporting complex kernels, and verifying functionality. It enables users to describe architecture characteristics through its architecture description language (ADL) and proficiently maps complex compute kernels. Morpher also auto-generates Verilog RTL for custom CGRAs, and validates functionality through Verilator-based simulations [13, 28]. Hosted on GitHub, it integrates workflows for compilation, RTL generation, simulation, and verification, while incorporating Continuous Integration (CI) workflows to ensure error-free code and tested use cases.

Fig. 1: A 4x4 CGRA Architecture
The organization of this paper is as follows: Section II provides an overview of the Morpher framework. In Section III, we detail how the target CGRA is modeled. Section IV not only outlines our approach to accelerate ML kernels on CGRAs, but also presents evaluations of the applied optimizations and the verification process of their execution.
## II Morpher Framework Overview
Fig. 3 illustrates the overall Morpher framework. The pieces of the framework are numbered for easy reference. Yellow pieces represent user-provided inputs, blue pieces represent the functional components, and grey ones represent intermediate results generated by the functional components. The framework has three inputs: the application source code with an annotated kernel (1), the abstract architecture model (2), and a library of hardware descriptions of basic CGRA modules (3). The main components of the framework are Data-Flow Graph (DFG) and data layout generation (4), the CGRA mapper (5), hardware (RTL) generation (6), test data generation (7), and simulation and emulation (8).
CGRAs target loop kernels where the application spends a significant fraction of the execution time. The DFG generator (4) is an LLVM-based pass that extracts the DFG of the target loop annotated in the application source code. Additionally, it constructs the multi-bank data layout by allocating the variables in the loop kernel to the on-chip memories of the target CGRA. The CGRA mapper (5) maps the extracted DFG onto the CGRA fabric to maximize parallelism by exploiting intra- and inter-iteration parallelism with software pipelining (i.e., modulo scheduling) [8]. Morpher ADL supports a rich set of primitive constructs that model functional units, register files, complex software-defined routers, and multi-banked memories accessible via shared bus interfaces. The mapper models the CGRA as a time-extended resource graph called MRRG [5], where the nodes of the DFG are mapped to the time-space resource instances to maximize throughput and minimize data routing cost. The resultant mapping configuration file describes the configuration for each resource cycle-by-cycle.
The architecture generator (6) generates the Verilog RTL of the target CGRA design based on the user-provided abstract architecture model and the library of basic CGRA modules written in Chisel [13]. The test data generator (7) for an application creates the data required for simulation and verification of the application execution. Finally, the simulator and emulator (8) use the mapping configurations, the test data, and the Verilog RTL to simulate and emulate the execution of the application on the specified architecture.
## III Modeling the target CGRA architecture
For this study, the target CGRA is designed with an \(8\times 8\) PE array, comprising 8 data memories connected to the boundary PEs located on the left and right sides. The CGRA is logically structured into four clusters, with each cluster accommodating a 4x4 PE array and two 8kB memory banks, in line with a 16-bit data path, as shown in Figure 1. This arrangement is described using the abstract ADL of Morpher, a flexible .json-based format adept at capturing a range of CGRA architectures.
Morpher's ADL balances high abstraction for user-friendliness with the need to handle intricate architectural specifics vital for Verilog RTL generation. It does this by incorporating a library of crucial CGRA hardware modules like ALUs, LSUs, register files, multiplexers, and memory units, all developed in the Chisel language. This lets users tailor optimized architectures. Consequently, Morpher streamlines the design process, translating the ADL into a Scala-based Intermediate Representation (IR) that forms the Chisel top design and Verilog RTL.
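For orientation, the kind of information such an ADL captures for the target CGRA might look as follows; this is an illustrative sketch only, and the field names are our assumptions rather than Morpher's actual .json schema:

```
# Illustrative only: the kind of information a CGRA ADL description
# captures. Field names are our assumptions, not Morpher's schema.
cgra_spec = {
    "name": "cgra_8x8",
    "data_width": 16,             # 16-bit data path
    "clusters": 4,                # four logical clusters
    "cluster": {
        "pe_array": [4, 4],       # 4x4 PEs per cluster
        "memory_banks": [{"size_kB": 8}, {"size_kB": 8}],
    },
    "pe": {
        "alu_ops": ["add", "sub", "mul", "ls"],
        "register_file": {"size": 4},
        "crossbar_inputs": ["N", "S", "E", "W", "ALU", "RF"],
    },
}
```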
## IV Accelerating ML kernels on CGRA
In this section, we illustrate our strategy for accelerating diverse ML workloads, focusing primarily on the General Matrix Multiply (GEMM) and Convolution (CONV) kernels, using the Morpher toolchain. These kernels, despite being merely two examples among many ML kernels, act as crucial components in a plethora of ML models, contributing significantly to layers such as Fully Connected (FC), Convolution, Transformer, LSTM, GRU, Bilinear, Self-Attention, and Graph Neural Network (GNN) layers. While our methodology is demonstrated using GEMM and CONV kernels, it maintains broad applicability to numerous ML kernels on user-defined CGRAs. Our attention is centered on diverse optimization strategies, including loop tiling, unrolling, and loop coalescing, which, when combined, facilitate improved utilization of the CGRA resources and substantially boost performance.

Fig. 2: (a) a loop DFG (b) a loop schedule (c) a loop scheduled on MRRG (Modulo Routing Resource Graph).

Fig. 3: Overview of Morpher Framework
### _Tiling Strategy for ML Kernels_
The GEMM and CONV (single-input multiple-output channels) kernels are implemented on the CGRA in a tiled manner using an output-stationary dataflow, as shown in Listings 1 and 2. This is just one instance of many tiling techniques widely explored for spatial accelerators and effectively applicable to CGRAs [24, 25].
At the single CGRA level (lines 9-12 in Listing 1 and lines 9-16 in Listing 2), a specific-sized kernel, called "TILE," is mapped to an individual CGRA cluster. For GEMM, a matrix multiplication of size \(TI\times TK\times TJ\) is mapped to a CGRA cluster. For CONV, a convolution of a tile of size \(TO1\times TO2\times TCo\) with a filter of size \(K\times K\) is carried out. Each cluster computes an output matrix (O) with weights (W) and an input matrix (I) as inputs, the sizes of which are governed by the capacity of on-chip memory banks within each CGRA cluster. This mapping process is facilitated by the Morpher tool chain, detailed further in the next section.
The sequential loops manage data transfer from off-chip to on-chip memory, while computation is handled by parallel loops mapped onto CGRA clusters (lines 2-7 in Listings 1 and 2). At the CGRA cluster level, data parallelism among different output tiles is leveraged. This allows multiple tiles to be spatially mapped on the CGRA cluster array, with each CGRA cluster computing a single output tile. At the off-chip to on-chip level, any data exceeding the capacity of the on-chip memory banks is stored in off-chip memory, ensuring efficient data management throughout the system.
```
1  // Sequential loop: from off-chip to on-chip
2  for m in range(M/(TI*X)):
3      for n in range(N/(TJ*Y)):
4          for k0 in range(K/TK):
5              // Parallel loop: CGRA clusters
6              for x in range(X):
7                  for y in range(Y):
8                      // Single CGRA level
9                      for i in range(TI):
10                         for j in range(TJ):
11                             for k in range(TK):  // map this
12                                 O[i][j] += W[i][k] * I[k][j];   // tile-local indices
```
Listing 1: GEMM loop tiling and dataflow
```
1  // Sequential loop: from off-chip to on-chip
2  for i0 in range(O1/(X*TO1)):
3      for j0 in range(O2/(Y*TO2)):
4          for c0 in range(Co/TCo):
5              // Parallel loop: CGRA clusters
6              for x in range(X):
7                  for y in range(Y):
8                      // Single CGRA level
9                      for i in range(TO1):
10                         for j in range(TO2):
11                             for c in range(TCo):
12                                 temp = 0;
13                                 for k1 in range(K):
14                                     for k2 in range(K):  // map this
15                                         temp += I[i+k1][j+k2] * W[c][k1][k2];
16                                 O[c][i][j] = temp;
```
Listing 2: CONV loop tiling and dataflow
### _Micro Kernel Mapping on CGRA_
In this section, we explore the process of optimizing and mapping GEMM and CONV kernels onto a single CGRA cluster. We further analyze the performance implications of these optimizations. The evaluated GEMM and CONV kernels have dimensions of \(64^{3}\) and \(64^{3}\times 3^{2}\), respectively. Their corresponding tile sizes fitting into the on-chip memory banks of a single CGRA cluster are \(64\times 16\times 64\) for GEMM and \(64^{2}\times 1\times 3^{2}\) for CONV.
Both GEMM and CONV kernels consist of nested loops. The user only needs to provide the application C source code to the Morpher toolchain and annotate the innermost loop that should be mapped onto the CGRA, here annotated as "map this" (line 11 in Listing 1 and line 14 in Listing 2). The toolchain then generates a dataflow graph representing the innermost loop body and maps it onto the CGRA cluster. This mapping generates the necessary configurations to exploit the parallel computational capacity of the CGRA for executing the kernel. The toolchain also manages data layout in memory banks, mapping data arrays onto them to synchronize computation and data mapping.
Table I provides a performance evaluation. The Initiation Interval (II) represents the cycle count between the starts of two consecutive iterations, while the Minimum II (MII) is the smallest possible II dictated by the CGRA's resource constraints and the loop's recurrence constraints [21]. For the base GEMM kernel (26 DFG nodes) and the base CONV kernel (27 DFG nodes), the total execution times are 2.69 ms and 314.70 ms, respectively. In both cases, Morpher succeeds in achieving the theoretical MII of 4. Notably, the lower performance of the CONV kernel arises due to an increased kernel invocation overhead, which includes transferring outer loop iteration variables from the host processor to the CGRA, as well as extended pipeline draining time. The latter refers to the period during which the pipeline completes executing instructions after the final loop iteration has commenced. These factors are amplified due to the CONV kernel's higher number of nested loop levels (5 compared to GEMM's 3). This overhead, combined with less-than-optimal resource utilization (40% for GEMM and 42.19% for CONV), spotlights the opportunities for optimizing CGRA performance for complex ML kernels.
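The MII referenced here is conventionally the maximum of a resource-constrained bound and a recurrence-constrained bound; a minimal sketch of this standard modulo-scheduling calculation (not Morpher's implementation) is:

```
import math

def minimum_ii(n_ops, n_pes, recurrences):
    """Standard modulo-scheduling bound: MII = max(ResMII, RecMII).
    ResMII = ceil(#DFG ops / #PEs); RecMII = max over recurrence
    cycles of ceil(cycle latency / cycle distance)."""
    res_mii = math.ceil(n_ops / n_pes)
    rec_mii = max((math.ceil(lat / dist) for lat, dist in recurrences), default=1)
    return max(res_mii, rec_mii)

# e.g. the base GEMM kernel: 26 DFG nodes on a 4x4 cluster (16 PEs)
# gives ResMII = ceil(26/16) = 2; the reported MII of 4 would then
# stem from additional recurrence or memory-port constraints.
```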
Incorporating loop unrolling optimization into the GEMM kernel, as shown in Listing 3, significantly elevates performance. This optimization inflates the number of operations within the loop body, hence increasing the number of DFG nodes and amplifying parallelism, which optimizes the utilization of the CGRA PEs. As a result, the unrolled version (GEMM-U) demonstrates an increased DFG node count from 26 to 58 and an enhancement in utilization from 40% to 60%. This culminates in a decrease in computation time from 0.56 ms to 0.25 ms. These reductions confirm that loop unrolling efficiently enhances the compute utilization, resulting in an improved performance, which is 1.13\(\times\) better than the base kernel.
Loop coalescing significantly enhances the efficiency of the CGRA implementation by reducing invocation overheads and pipeline draining time. This is clearly demonstrated in the results for the GEMM-U-C (Listing 4) and the CONV-U-C (Listing 5) kernels. The GEMM-U-C kernel coalesces all three loops, resulting in a DFG with 79 nodes and an II of 8. This kernel requires only a single loop invocation per CGRA cluster to complete the \(64^{3}\) kernel. The data transfer time is reduced to 0.49 ms, culminating in a total execution time of 0.76 ms. This significantly enhances the overall performance, as evidenced by a performance boost of 3.54\(\times\) compared to the base kernel. Similarly, the CONV kernel also shows marked improvement when optimized. The CONV-U-C-1 kernel, which coalesces the innermost two loops and fully unrolls them when K=3, results in a DFG with 100 nodes and an II of 12. The compute time is significantly reduced to 1.53 ms, as is the data transfer time to 12.75 ms, yielding a total execution time of 14.28 ms. This optimization leads to an impressive performance increase of 22.03\(\times\) compared to the base kernel.
Finally, the CONV-U-C-2 kernel, which coalesces all five loop levels, demonstrates a further improvement. This kernel necessitates 16 invocations per CGRA cluster to complete \(64^{3}\times 3^{2}\) kernel size. It results in a DFG with 153 nodes and an II of 11 with 86% utilization. This optimization results in a performance boost of 25.28\(\times\) compared to the base CONV implementation. These findings underscore the vital role and efficacy of loop coalescing in achieving significant performance gains in CGRA implementations.
```
1 for (i=0, j=0, k=0, n=0; n < (TI*TJ*TK)/4; n++) {  // map this
2     O[i][j] += W[i][k]*I[k][j] + W[i][k+1]*I[k+1][j]
3              + W[i][k+2]*I[k+2][j] + W[i][k+3]*I[k+3][j];  k += 4;
4     if (k >= TK) { k = 0; ++j; }
5     if (j == TJ) { j = 0; ++i; } }
```
Listing 4: Unrolled & coalesced GEMM kernel (GEMM-U-C)
### _Functional Verification_
Morpher simplifies the task of generating application test data for the simulation of loop kernels, an indispensable component of CGRA functional verification. It instruments the application source code by inserting data recording functions to capture the live-in (I, W, O arrays, iteration variables transferred from outermost loops) and live-out (output O array) variables of the target loop kernel. This instrumented program is then run on a general-purpose processor, and the variables are recorded as test data for use by the simulator. In the ensuing simulation and verification phase, the Chisel top design of the target CGRA is simulated using Verilator and Chisel I/O testers, with the CGRA model functioning as a memory-mapped slave device to a host processor. The live-in variables from the recorded test data are loaded into each memory unit, and the mapping configurations from the mapper are uploaded into the automatically generated control modules. The simulator then carries out the operations, routing data through multiplexers, operating on the functional units, and recording the results to registers and memories, all as per the mapping configurations. The post-simulation memory content is finally compared with the expected results, validating the CGRA functionality with Morpher generated configurations.
## V Conclusion and future works
CGRAs, backed by the efficient kernel mapping of the Morpher toolchain, offer a promising route for ML application acceleration. In our future work, we aim to merge the Morpher toolchain with MLIR's high-level compilation front-end. This integration will automate optimization techniques, further explore the CGRA design space, and enhance performance. This effort continues to strive towards unlocking the full potential of CGRA technology.
| **Kernel** | **Nodes** | **II (MII)** | **Utilization** | **Compute time (ms)\*** | **Data transfer time (ms)\*** | **Total execution time (ms)\*** | **Speedup** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GEMM | 26 | 4 (4) | 40.63% | 0.56 | 2.13 | 2.69 | 1× |
| GEMM-U | 58 | 6 (4) | 60.42% | 0.25 | 2.13 | 2.38 | 1.1× |
| GEMM-U-C | 79 | 8 (8) | 61.72% | 0.27 | 0.49 | 0.76 | 3.5× |
| CONV | 27 | 4 (4) | 42.19% | 8.32 | 306.38 | 314.70 | 1× |
| CONV-U-C-1 | 100 | 12 (7) | 52.08% | 1.53 | 12.75 | 14.28 | 22× |
| CONV-U-C-2 | 153 | 11 (10) | 86.93% | 1.26 | 11.19 | 12.45 | 25.2× |

TABLE I: Performance comparison of different kernels on the target CGRA with speedup compared to base kernels.

\* The evaluation is conducted at a 100 MHz CGRA frequency and a 50 MBps host-to-CGRA data transfer rate. The primary focus of this study is to evaluate performance enhancement through compilation, not striving for the maximum performance achievable through efficient RTL silicon implementation, which is currently in the development phase.
```
1 for (i=0, j=0, c=0, n=0; n < TO1*TO2*TCo; n++) {  // map this
2     O[c][i][j] = I[i][j]*W[c][0][0] + I[i][j+1]*W[c][0][1] + I[i][j+2]*W[c][0][2]
3                + I[i+1][j]*W[c][1][0] + I[i+1][j+1]*W[c][1][1] + I[i+1][j+2]*W[c][1][2]
4                + I[i+2][j]*W[c][2][0] + I[i+2][j+1]*W[c][2][1] + I[i+2][j+2]*W[c][2][2];
5     if (++j >= TO2) { j = 0; ++i; }
6     if (i >= TO1) { i = 0; ++c; } }
```
Listing 5: Unrolled & coalesced CONV kernel (CONV-U-C-2)
```
1 for (i = 0; i < TI; i++)
2     for (j = 0; j < TJ; j++)
3         for (k = 0; k < TK; k += 4)  // map this
4             O[i][j] += W[i][k]*I[k][j] + W[i][k+1]*I[k+1][j]
5                      + W[i][k+2]*I[k+2][j] + W[i][k+3]*I[k+3][j];
```
Listing 3: Unrolled GEMM kernel (GEMM-U)
## VI Acknowledgment
This work was partially supported by the National Research Foundation, Singapore under its Competitive Research Programme Award NRF-CRP23-2019-0003 and Singapore Ministry of Education Academic Research Fund T1 251RES1905.
|
2309.12152 | Unveiling Challenges in Mendelian Randomization for Gene-Environment
Interaction | Many diseases and traits involve a complex interplay between genes and
environment, generating significant interest in studying gene-environment
interaction through observational data. However, for lifestyle and
environmental risk factors, they are often susceptible to unmeasured
confounding factors and as a result, may bias the assessment of the joint
effect of gene and environment. Recently, Mendelian randomization (MR) has
evolved into a versatile method for assessing causal relationships based on
observational data to account for unmeasured confounders. This approach
utilizes genetic variants as instrumental variables (IVs) and aims to offer a
reliable statistical test and estimation of causal effects. MR has gained
substantial popularity in recent years largely due to the success of
large-scale genome-wide association studies in identifying genetic variants
associated with lifestyle and environmental factors. Many methods have been
developed for MR; however, little work has been done for evaluating
gene-environment interaction. In this paper, we focus on two primary IV
approaches: the 2-stage predictor substitution (2SPS) and the 2-stage residual
inclusion (2SRI), and extend them to accommodate gene-environment interaction
under both the linear and logistic regression models for the continuous and
binary outcomes, respectively. Extensive simulation and analytical derivations
show that finding solutions in the linear regression model setting is
relatively straightforward; however, the logistic regression model is
significantly more complex and demands additional effort. | Malka Gorfine, Conghui Qu, Ulrike Peters, Li Hsu | 2023-09-21T15:06:49Z | http://arxiv.org/abs/2309.12152v1 | # Unveiling Challenges in Mendelian Randomization for Gene-Environment Interaction
###### Abstract
Many diseases and traits involve a complex interplay between genes and environment, generating significant interest in studying gene-environment interaction through observational data. However, for lifestyle and environmental risk factors, they are often susceptible to unmeasured confounding factors and as a result, may bias the assessment of the joint effect of gene and environment. Recently, Mendelian randomization (MR) has evolved into a versatile method for assessing causal relationships based on observational data to account for unmeasured confounders. This approach utilizes genetic variants as instrumental variables (IVs) and aims to offer a reliable statistical test and estimation of causal effects. MR has gained substantial popularity in recent years largely due to the success of large-scale genome-wide association studies in identifying genetic variants associated with lifestyle and environmental factors. Many methods have been developed for MR; however, little work has been done for evaluating gene-environment interaction. In this paper, we focus on two primary IV approaches: the 2-stage predictor substitution (2SPS) and the 2-stage residual inclusion (2SRI), and extend them to accommodate gene-environment interaction under both the linear and logistic regression models for the continuous and binary outcomes, respectively. Extensive simulation and analytical derivations show that finding solutions in the linear regression model setting is relatively straightforward; however, the logistic regression model is significantly more complex and demands additional effort.
**Key words:** linear regression; logistic regression; measurement error; interaction effect; instrumental variable; colorectal cancer; polygenic risk score
Introduction
There is a great interest to study the interaction between genes and environmental risk factors in complex diseases (Virolainen et al., 2023). However, environmental risk factors are often susceptible to unmeasured confounding, which, if not properly accounted, can yield under- or over-estimation of the effect. To study gene-environment interaction, it is inevitable to study the joint effect of gene and environment. If these factors, especially environmental factors, are subject to confounding, then proper inference of gene-environment interaction may be compromised.
In epidemiology, the approach of Mendelian randomization (MR) has evolved into a versatile method for evaluating causal associations based on observational data, when unmeasured confounding variables are abundant. The approach uses genetic variants as instrumental variables (IVs), and the goal is to provide a reliable statistical test and estimation of causal effects when unmeasured confounding factors are present. MR capitalizes on a natural experiment in which the genotypes are randomly assigned during meiosis, given the parents' genes, and it is assumed that genotypes are indirectly impact the disease status, independently from any potential confounders. A partial list of econometrics, causal and epidemiological works presenting the basic ideas and important discussions includes Bowden and Turkington (1984); Angrist et al. (1996); Davey Smith and Ebrahim (2003); Katan (2004); Pearl (2000); Greenland (2000); Martens et al. (2006); Hernan and Robins (2006) and Didelez and Sheehan (2007).
Consider the variables \(X\) and \(Y\), where \(X\) is the non-randomized exposure or treatment and \(Y\) is the response. Intervening on \(X\) refers to the act of setting \(X\) to a specific value of our choice, which does not alter the distributions of the other variables in the system except through the effects induced by the changes made to \(X\). Also, consider a variable denoted as \(G_{IV}\), which we aim to utilize as the IV. In our context \(G_{IV}\) is the genotype or a polygenic risk score (PRS). An unobservable variable \(U\) represents the potential confounding that may exist between \(X\) and \(Y\). The following are the required assumptions related to \(G_{IV}\) which define an IV:
**A.1**: IV is associated with the exposure of interest, i.e., \(G_{IV}\not\perp X\).
**A.2**: IV is independent of unmeasured confounders of the exposure and outcome, i.e., \(G_{IV}\perp\!\!\!\perp U\).
**A.3**: IV is independent of the outcome outside of the mediating effects of the exposure, i.e.,
\(Y\perp\!\!\!\perp G_{IV}|(X,U)\).
A wealth of literature is available reviewing various aspects of MR and IVs, such as the underlying MR assumptions (Hernan and Robins, 2006; Didelez and Sheehan, 2007; Glymour et al., 2012; Didelez et al., 2010), the available methods (Angrist et al., 1996; Baiocchi et al., 2014), multiple IVs (Palmer et al., 2012; Clarke and Windmeijer, 2012), and non-linear regression models such as logistic, Poisson, Cox proportional hazards and additive hazards (Palmer et al., 2011; Burgess et al., 2017; Wan et al., 2018). In short, two main IV approaches have been extensively developed within the context of linear models. The first approach, known as 2-stage predictor substitution (2SPS) or 2-stage least squares, starts with a linear regression model of the exposure on the IV. The fitted value obtained from this first-stage regression replaces the exposure in the second-stage linear regression of the outcome \(Y\). The second approach, equally popular, is called 2-stage residual inclusion (2SRI). In this approach, the residuals from the first-stage regression are included as an additional covariate, along with the exposure \(X\), in the second-stage linear regression of the outcome \(Y\). Both estimators, 2SPS and 2SRI, are consistent when applied to linear models (Terza et al., 2008; Cai et al., 2011). Extending 2SPS and 2SRI to address non-linear models simply involves replacing the second-stage linear regression model with alternative models such as Poisson, logistic, or Cox proportional hazard models.
As nicely summarized by Wan et al. (2018), the literature has conflicting views on the consistency of two-stage IV methods for nonlinear models. Focusing on the conditional treatment effect based on observational data, Terza et al. (2008) showed the consistency of 2SRI in a wide range of nonlinear models and under restrictive assumptions, while this consistency has not been established for 2SPS. As a consequence, 2SRI has gained widespread acceptance as the preferred method in studies that deal with discrete and survival outcomes (Wan et al., 2018). Conversely, within the context of MR, both the 2SRI and 2SPS approaches have demonstrated bias when estimating the log odds ratio, where a dichotomous genetic marker was utilized as IV and the exposure \(X\) was a continuous phenotype. Studies have revealed that this bias amplifies with a greater magnitude of unmeasured confounding (Palmer et al., 2008; Burgess and Collaboration, 2013; Wan et al., 2018). Methods leveraging gene-environment interactions within MR analyses (Spiller et al., 2019; Tchetgen Tchetgen et al., 2021; Spiller et al., 2022) estimate causal associations while correcting for instrumental invalidity, in particular when assumptions A.2 or A.3 are
violated.
Because of conflicting recommendations in the existing literature and because of the increasing popularity of applying 2SRI to control for unmeasured confounding in clinical studies, Wan et al. (2018) revisited the conditions under which one can establish the consistency of 2-stage IV nonlinear models. Their main conclusions include:
1. Previous findings on 2SRI (Terza et al., 2008) rely on an (unrealistic) underlying assumption that only one unmeasured confounder exists.
2. When estimating non-null conditional treatment effect, 2SRI suffers from the same problem as 2SPS does and is not consistent in non-collapsible models even given a perfect IV.
3. 2SRI and 2SPS estimators are consistent when estimating collapsible effect measures (such as risk rates or risk differences) using Poisson or additive hazards models.
4. In practice, one may consider using collapsible models instead of non-collapsible ones. For example, in case of rare binary events, one may consider log-linear models instead of logistic models, if possible.
Terza et al. (2008) and Burgess et al. (2017) also considered an adjusted 2SPS method where the fitted value of exposure \(X\) and the residual from the first-stage regression are included additively in the second-stage regression of \(Y\). Some investigators have therefore recommended this adjusted two-stage method when the second-stage regression is logistic on the premise that it is less biased than the unadjusted two-stage method (Burgess et al., 2017). The discussion so far assumed no interaction between the exposure \(X\) and any other observed variables, here, genes, in the regression model of \(Y\).
The present study focuses on testing and estimating the effect of interaction between \(G\) and \(X\) on \(Y\), as well as the main effects of \(G\) and \(X\), which are necessary to facilitate the inference of the joint effect size of \(G\) and \(X\). Here, \(Y\) may either be a continuous or a dichotomous variable, and linear and logistic regression models are adopted, respectively. For each model we explore the naive estimators that ignore the existence of unmeasured confounders, and also straightforward modified versions of 2SPS and 2SRI to accommodate effect modification. Through extensive simulations, the biases in the causal effects and the type-I errors of these estimators are demonstrated.
Our work offers three key contributions: (i) We demonstrate the validity of straightforward extensions of MR methods designed for linear outcomes \(Y\) when dealing with gene-environment interaction. These extensions are valid in terms of bias and type-I error rates. (ii) In the context of binary \(Y\) and logistic regression, we elucidate the significant distinctions between the 2SPS and 2SRI methods. Furthermore, we reveal that none of these methods exhibit consistency; however, they generally maintain the type-I error for gene-environment interaction, or exhibit only limited inflation, except in some scenarios. The quest for a valid approach in the realm of non-linear regression models remains an unresolved challenge. (iii) We apply the MR methods to a large-scale gene-environment interaction analysis of BMI in association with colorectal cancer risk, encompassing 44,500 cases and 52,235 controls, and compare the performance of these methods in a real data setting. While there is no evidence of type-I error inflation for gene-BMI interaction, it remains unclear what the true effect sizes of gene-BMI interaction should be from the MR results. Interestingly, we demonstrate that the true causal effect size of BMI is likely greater than the effect size based on measured BMI.
## 2 Linear Regression
We start with a continuous outcome \(Y\) such that the true model is given by
\[Y=\beta_{0}+\beta_{1}X+\beta_{2}G+\beta_{3}XG+\beta_{Z}Z+\beta_{U}U+\epsilon_{ Y}, \tag{1}\]
where \(U\) represents unmeasured confounding variable(s) that is also affecting \(X\), \(Z\) is an observed covariate affecting \(Y\) and \(X\), \(\beta_{j},j=1,\ldots,3\), and \(\beta_{Z}\) are regression coefficients, and \(\epsilon_{Y}\) is the residual error with mean 0 and variance \(\sigma_{Y}^{2}\), and is independent of \((X,G,Z,U)\). For simplicity of presentation, a univariate \(Z\) is considered. Without loss of generality, it is assumed that \(U\) is a mean 0 and variance \(\sigma_{U}^{2}\) random variable and is independent of \(Z\). By the MR assumption, \(U\) is independent of \(G\) as well as the genetic instruments defined below for the model of the exposure \(X\). The coefficient \(\beta_{3}\) represents the effect-modification parameter. Our goal is estimating \(\beta=(\beta_{1},\beta_{2},\beta_{3})^{T}\) and testing the null hypothesis \(H_{0}:\beta_{3}=0\) against a two-sided alternative.
In this work we focus on the setting in which there are an observed confounder \(Z\) and an
unobserved confounder \(U\). In addition, it is assumed that a genetic instrumental variable \(G_{IV}\) is available and has a relationship with \(X\) as described in the following
\[X=\gamma_{0}+\gamma_{IV}G_{IV}+\gamma_{Z}Z+\gamma_{U}U+\epsilon_{X}, \tag{2}\]
where \(\gamma_{0},\gamma_{IV}\), \(\gamma_{Z}\), and \(\gamma_{U}\) are the regression coefficients, \(\epsilon_{X}\) is the residual error with mean 0 and variance 1 and is independent of \((G_{IV},Z,U)\) and \(Y\).
As \(U\) is unobserved, modeling \(X\) relies on the observable data by fitting a linear regression model, namely,
\[E(X|G_{IV},Z)=\alpha_{0}+\alpha_{1}G_{IV}+\alpha_{Z}Z\]
and getting \((\widehat{\alpha}_{0},\widehat{\alpha}_{1},\widehat{\alpha}_{Z})\) by using e.g., ordinary least squares. Given that \(U\) and \(\epsilon_{X}\) are independent, the estimator \((\widehat{\alpha}_{0},\widehat{\alpha}_{1},\widehat{\alpha}_{Z})\) is consistent to the true value of \((\gamma_{0},\gamma_{IV},\gamma_{Z})\) in model (2) and has the usual asymptotic normality property following the Central Limit Theorem. Then, \(X\) can be predicted by
\[\widehat{X}=\widehat{\alpha}_{0}+\widehat{\alpha}_{1}G_{IV}+\widehat{\alpha} _{Z}Z\]
and the residuals are given by
\[\widehat{\delta}=X-\widehat{X}\,.\]
Alternatively, we can consider \(X\) to be predicted only by \(G_{IV}\), i.e.,
\[\widehat{X}_{a}=\widehat{\alpha}_{1}G_{IV}\,,\]
so the residual is given by
\[\widehat{\delta}_{a}=X-\widehat{X}_{a}\,.\]
In the subsequent discussion, we consider six methods to estimate \(\beta\), denoted as \(\widehat{\theta}=(\widehat{\theta}_{1},\widehat{\theta}_{2},\widehat{\theta}_{3})^{T}\); a schematic sketch of the two-stage constructions is given after the list. It is important to note that each approach computes \(\widehat{\theta}\) differently, although we use a unified notation for ease of presentation:
**1. Naive:**: The naive estimator ignores the fact that there could be unobserved confounders and uses the following model \(E(Y|X,G,Z)=\theta_{0}+\theta_{1}X+\theta_{2}G+\theta_{3}XG+\theta_{Z}Z\) to get \(\widehat{\theta}\).
**2. 2SPS:**: The two-stage prediction substitution (2SPS) under Eq. (1) starts with the first-stage regression, where the exposure \(X\) is regressed on the IVs, \(G_{IV}\), and known confounders \(Z\) to obtain fitted values of the exposure, \(\widehat{X}\). In the second-stage regression, the outcome \(Y\) is regressed on the fitted values for the exposure \(\widehat{X}\) from the first-stage regression, i.e., \(E(Y|\widehat{X},G,Z)=\theta_{0}+\theta_{1}\widehat{X}+\theta_{2}G+\theta_{3} \widehat{X}G+\theta_{Z}Z\).
**3. 2SPSadj:**: Based on the regression model of \(X\), the residual, \(\widehat{\delta}=X-\widehat{X}\), is obtained, which comprises of both the unobserved confounders \(U\) and the error \(\epsilon_{X}\). In the second-stage regression, the outcome \(Y\) is regressed not only on the fitted values for the exposure \(\widehat{X}\) and known confounders \(Z\), but also \(\widehat{\delta}\) in order to capture some of the unobserved confounders \(U\). That is, the second-stage model is given by \(E(Y|\widehat{X},G,Z,\widehat{\delta})=\theta_{0}+\theta_{1}\widehat{X}+\theta _{2}G+\theta_{3}\widehat{X}G+\theta_{Z}Z+\theta_{4}\widehat{\delta}\), with further adjustment of \(\widehat{\delta}\).
**4. 2SPSa:**: Sometimes only the regression coefficient associated with \(G_{IV}\) is available. An alternative to the 2SPS method is to obtain the fitted value for \(X\) based only on \(G_{IV}\), i.e., \(\widehat{X}_{a}=\widehat{\alpha}_{1}G_{IV}\). In the second-stage regression for \(Y\), the incorporation of the interaction terms between \(Z\) and \(G\) is required in order to absorb the effect of \(Z\) on \(X\). Hence, the outcome regression model is given by \(E(Y|\widehat{X}_{a},G,Z)=\theta_{0}+\theta_{1}\widehat{X}_{a}+\theta_{2}G+ \theta_{3}\widehat{X}_{a}G+\theta_{Z}Z+\theta_{GZ}ZG\).
**5. 2SPSadj-a:**: The second stage regression will additionally include the residuals \(\delta_{a}=X-\widehat{X}_{a}\) similarly to the adjusted 2SPS estimation method, namely, \(E(Y|\widehat{X}_{a},G,Z,\widehat{\delta}_{a})=\theta_{0}+\theta_{1}\widehat{X }_{a}+\theta_{2}G+\theta_{3}\widehat{X}_{a}G+\theta_{Z}Z+\theta_{GZ}ZG+\theta_ {4}\widehat{\delta}_{a}\).
**6. 2SRI:**: The two-stage residual inclusion (2SRI) under Eq. (1) is an adjusted two-stage method, where the residual \(\widehat{\delta}\) from the first-stage regression (2) is included in the second-stage regression (i.e., Eq. (1)) (Terza et al., 2008, and references therein). The second-stage model is given by \(E(Y|X,G,Z,\widehat{\delta})=\theta_{0}+\theta_{1}X+\theta_{2}G+\theta_{3}XG+ \theta_{Z}Z+\theta_{4}\widehat{\delta}\).
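To make the two-stage constructions concrete, the following minimal sketch (our own illustration, assuming the data are held in NumPy arrays) implements the first-stage regression and the 2SPS and 2SRI second stages for the linear model; the remaining variants differ only in which columns enter the second-stage design matrix:

```
import numpy as np

def ols(design, y):
    """Least-squares fit; returns the coefficient vector."""
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef

def two_stage_linear(y, x, g, g_iv, z):
    """Minimal sketch of 2SPS and 2SRI for the linear outcome model."""
    one = np.ones(len(y))
    # First stage: regress X on (1, G_IV, Z)
    A = np.column_stack([one, g_iv, z])
    alpha = ols(A, x)
    x_hat = A @ alpha
    delta_hat = x - x_hat
    # 2SPS second stage: Y on (1, X_hat, G, X_hat*G, Z)
    theta_2sps = ols(np.column_stack([one, x_hat, g, x_hat * g, z]), y)
    # 2SRI second stage: Y on (1, X, G, X*G, Z, residual)
    theta_2sri = ols(np.column_stack([one, x, g, x * g, z, delta_hat]), y)
    return theta_2sps, theta_2sri
```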
### Justification of 2SPS and 2SRI approaches
We begin by taking a closer look at the 2SPS approach. Plugging in Eq. (2) into Eq. (1), we obtain
\[Y = \beta_{0}+\beta_{1}(\gamma_{0}+\gamma_{IV}G_{IV}+\gamma_{Z}Z+ \gamma_{U}U+\epsilon_{X})+\beta_{2}G\] \[+\beta_{3}(\gamma_{0}+\gamma_{IV}G_{IV}+\gamma_{Z}Z+\gamma_{U}U+ \epsilon_{X})G+\beta_{Z}Z+\beta_{U}U+\epsilon_{Y},\] \[= \beta_{0}+\beta_{1}\widetilde{X}+(\beta_{2}+\beta_{3}\gamma_{U}U+ \beta_{3}\epsilon_{X})G+\beta_{3}\widetilde{X}G\] \[+\beta_{Z}Z+(\beta_{1}\gamma_{U}U+\beta_{U}U)+\beta_{1}\epsilon _{X}+\epsilon_{Y}\]
where
\[\widetilde{X}=\gamma_{0}+\gamma_{IV}G_{IV}+\gamma_{Z}Z\,.\]
Then, taking the conditional expectation of \(Y\) given \(G\), \(G_{IV}\), and \(Z\), we have
\[E(Y|G,G_{IV},Z) = \beta_{0}+\beta_{1}\widetilde{X}+\beta_{2}G+\beta_{3}\widetilde{ X}G+\beta_{Z}Z\,.\]
This supports the use of the 2SPS approach, wherein fitting a model of \(Y\) with \((\widehat{X},G,\widehat{X}G,Z)\) results in a consistent estimator for \(\beta=(\beta_{1},\beta_{2},\beta_{3})\). This justification stems from the collapsibility property of the linear regression model, allowing for the substitution of \(\widetilde{X}\) with \(\widehat{X}\). Furthermore, including \(\widehat{\delta}\) in the model for the adjusted 2SPS approach does not compromise the consistency of the estimators for \(\beta\); however, it has the potential to decrease the standard errors. This is due to the assumption that \(\widehat{\delta}\) is independent of \(\widehat{X}\) and encompasses the effects of \(U\). If the influence of \(U\) is large, substantial gain in efficiency can be achieved without jeopardizing the consistency of the estimators.
Next, when we contemplate an alternative predictor \(\widetilde{X}_{a}=\gamma_{IV}G_{IV}\), it yields
\[Y = \beta_{0}+\beta_{1}(\gamma_{0}+\widetilde{X}_{a}+\gamma_{Z}Z+ \gamma_{U}U+\epsilon_{X})+\beta_{2}G\] \[+\beta_{3}(\gamma_{0}+\widetilde{X}_{a}+\gamma_{Z}Z+\gamma_{U}U+ \epsilon_{X})G+\beta_{Z}Z+\beta_{U}U+\epsilon_{Y},\] \[= \widetilde{\beta}_{0}+\beta_{1}\widetilde{X}_{a}+(\beta_{2}+ \beta_{3}\gamma_{0}+\beta_{3}\gamma_{Z}Z+\beta_{3}\gamma_{U}U+\beta_{3} \epsilon_{X})G+\beta_{3}\widetilde{X}_{a}G\] \[+\widetilde{\beta}_{Z}Z+(\beta_{1}\gamma_{U}U+\beta_{U}U)+\beta_ {1}\epsilon_{X}+\epsilon_{Y},\]
where \(\widetilde{\beta}_{0}\) and \(\widetilde{\beta}_{Z}\) are generic symbols for the intercept and the regression coefficient associated with \(Z\). The expectation of \(Y\) conditional on \(G\), \(G_{IV}\) and \(Z\), gives
\[E(Y|G,G_{IV},Z) = \widetilde{\beta}_{0}+\beta_{1}\widetilde{X}_{a}+(\beta_{2}+\beta _{3}\gamma_{0})G+\beta_{3}\widetilde{X}_{a}G+\widetilde{\beta}_{Z}Z+\beta_{GZ} GZ\,,\]
where \(\beta_{GZ}\) is the regression coefficient for the interaction term between \(G\) and \(Z\). Drawing from the expectation outlined above, modeling \(Y\) with \((G,\widehat{X}_{a},G\widehat{X}_{a},Z,GZ)\) produces estimates that are consistent for \(\beta_{1}\) and the interaction term \(\beta_{3}\). However, it does not yield consistent estimates for the main effect of \(G\), except when \(\gamma_{0}=0\). Notably, in contrast to the adjusted 2SPS approach, this method necessitates the inclusion of an additional interaction term \(GZ\).
Moving on, let us delve into the 2SRI approach. We make an assumption that the joint distribution of \((X,U)\), given \(G_{IV}\) and \(Z\), conforms to a bivariate normal distribution with
\[\left(\begin{array}{c}U\\ X\end{array}\right)|\,G_{IV},Z \sim N\left(\left(\begin{array}{c}0\\ \mu_{1}\end{array}\right),\left(\begin{array}{cc}\sigma_{U}^{2}&\gamma_{U} \sigma_{U}^{2}\\ \gamma_{U}\sigma_{U}^{2}&\sigma_{X}^{2}\end{array}\right)\right)\]
where \(\mu_{1}=\gamma_{0}+\gamma_{IV}G_{IV}+\gamma_{Z}Z\) and \(\sigma_{X}^{2}=\gamma_{U}^{2}\sigma_{U}^{2}+\sigma_{\epsilon_{X}}^{2}\). Hence, the conditional distribution of \(U\) given \(X\) is
\[U|X\sim N\left(\gamma_{U}\sigma_{U}^{2}/\sigma_{X}^{2}(X-\mu_{1}),\sigma_{U}^{ 2}(1-\gamma_{U}^{2}\sigma_{U}^{2}/\sigma_{X}^{2})\right)\,.\]
Lastly, in the context of Eq. (1), we can calculate the expected value of \(Y\) given \((X,G,Z)\), resulting in
\[E(Y|X,G,Z) = \beta_{0}+\beta_{1}X+\beta_{2}G+\beta_{3}XG+\beta_{U}E(U|X,G)+ \beta_{Z}Z\] \[= \beta_{0}+\beta_{1}X+\beta_{2}G+\beta_{3}XG+\beta_{U}\frac{\gamma _{U}\sigma_{U}^{2}}{\sigma_{X}^{2}}(X-\mu_{1})+\beta_{Z}Z\]
and \(\delta=X-\mu_{1}\). This computation provides a rationale for utilizing the 2SRI estimator in the linear model. This also suggests that the naive method provides a consistent estimator for the main effect of \(G\), \(\beta_{2}\), and the interaction effect \(\beta_{3}\), while it does not yield a consistent estimator for the main effect of \(X\), \(\beta_{1}\), the latter of which has also been noted by VanderWeele
et al. (2012). It is worth noting that even if we substitute \(\mu_{1}\) with \(\gamma_{IV}G_{IV}\), the 2SRI estimator remains consistent for \(\beta\), as the influence of \(Z\) is absorbed into the main effect of \(Z\).
### Simulation Study - Linear Regression
A total of \(10,000\) observations were considered for Model (2) for the exposure \(X\) with \(\gamma_{0}=0\), \(\gamma_{Z}=\gamma_{IV}=0.5\), \(G_{IV}\sim\mathrm{N}(0,1)\), \(U\sim\mathrm{N}(0,1)\), \(Z\sim\mathrm{N}(0,1)\) and \(\epsilon_{X}\sim N(0,1)\). The instrumental variable \(G_{IV}\) represents a polygenic risk score. Multiple values of \(\gamma_{U}\) were explored, \(\gamma_{U}\in\{0,0.5,1,2,4\}\). The values of \(X\) underwent a transformation to standardize their variance to 1 before being used in the primary \(Y\) outcome model to facilitate ease of comparison. For the linear regression model of Eq. (1) for the outcome \(Y\), \(10,000\) observations were generated with \(G\sim\mathrm{Bin}(2,0.3)\) representing the number of copies (0, 1 or 2) of a single bi-allelic SNP. Also, \((\beta_{0},\beta_{1},\beta_{2},\beta_{Z})=(0,1,0.5,0.5)\), multiple values of \(\beta_{U}\), \(\beta_{U}\in\{0,1.5,3\}\) and \(\epsilon_{Y}\sim N(0,1)\). Each configuration was examined twice: once with \(\beta_{3}=0.5\) and once under the null hypothesis, \(H_{0}:\beta_{3}=0\), to explore whether the presence of the unmeasured variable \(U\) has an impact on the type-I error. Unless stated otherwise, the simulation results are derived from 500 repetitions of each configuration. We also included a scenario in which \(X\) and \(G\) are dependent through a dependence between \(G_{IV}\) and \(G\). In particular, the following conditional distribution is adopted
\[G|G_{IV}\sim\mathrm{Bin}(2\,,\,0.3+0.3I\{G_{IV}>0\})\,.\]
Table 1 provides a summary of the configurations studied.
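For illustration, the dependent-\(G\) scenario above can be generated in a couple of lines (a self-contained sketch; the seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
G_iv = rng.normal(size=10_000)
G_dep = rng.binomial(2, 0.3 + 0.3 * (G_iv > 0))  # Bin(2, 0.3 + 0.3 I{G_IV > 0})
print(np.corrcoef(G_dep, G_iv)[0, 1])            # induced positive dependence
```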
Figures 1-2, Tables 2-3 and Tables S1-S4 of the Supporting Information (SI) summarize the results. Figures 1-2 display results specifically for \(\gamma_{U}=1\), as the overall findings remain consistent across various values of \(\gamma_{U}\). The complete results are in Tables S1-S4 of the SI. Biased results are marked in bold.
As expected, when \(\beta_{U}=0\), all the methods yield unbiased estimators for \(\beta=(\beta_{1},\beta_{2},\beta_{3})\) across all scenarios. Notably, in many cases, the naive approach demonstrates significantly higher efficiency compared to the other five estimators. In cases where \(\beta_{U}>0\) and there is independence between \(G\) and \(G_{IV}\), the naive approach exhibits bias in estimating \(\beta_{1}\) while maintaining its unbiasedness in estimating \((\beta_{2},\beta_{3})\). All MR-based estimators are unbiased for \(\beta\).
When \(G\) and \(G_{IV}\) are dependent, the naive estimator exhibits bias in estimating \((\beta_{1},\beta_{2})\), while the MR estimators provide unbiased estimations for \(\beta\). Among the five MR-based estimators, 2SRI demonstrates the highest level of efficiency, comparable to the naive estimator. For the 2SPS estimators, adjusting for \(\widehat{\delta}\) improves efficiency, and 2SPS estimators that include both the genetic instrument and non-genetic predictors are more efficient. The type-I error rates of the naive and the MR-based approaches for testing \(H_{0}:\beta_{3}=0\) against a two-sided alternative are reasonably close to the nominal level, 0.05.
In conclusion, all methods, including the naive approach, are suitable for conducting an interaction test. However, the naive approach yields biased estimates for the main effects, especially the main effect of the exposure, which makes it difficult to gauge the joint effect of the exposure and \(G\) accurately. Each of the five MR-based techniques demonstrates consistency in estimating both the main effects and the interaction effect. However, among them, 2SRI emerges as the favored option owing to its superior efficiency.
## 3 Logistic Regression
We are interested in assessing the association of exposure \(X\), genotype \(G\) and their interaction with binary outcome \(Y\). Define \(\mbox{logit}(p)=\log\{p/(1-p)\}\). Suppose the true model is
\[\mbox{logit}\{\mbox{Pr}(Y=1|X,G,Z,U)\} = \beta_{0}+\beta_{1}X+\beta_{2}G+\beta_{3}XG+\beta_{Z}Z+\beta_{U}U, \tag{3}\]
where \((X,G,Z,U)\) are defined in Section 2, and we have a genetic instrumental variable \(G_{IV}\) for exposure \(X\) and the model of Eq. (2) holds. Similar to the linear setting, we adapted the naive and the five MR-based approaches to fit the logistic regression model. Specifically, when performing the second-stage regression of \(Y\) for each of the five estimators discussed in Section 2, the linear model should be substituted with a logistic regression model incorporating the relevant components.
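Concretely, the substitution amounts to replacing the OLS call in the second stage with a logistic fit. A minimal sketch (assuming a DataFrame with a binary outcome column `Y01` and the first-stage quantities `Xhat` and `delta` built as in the linear sketches; column names are our own):

```python
import statsmodels.formula.api as smf

def second_stage_logistic(df):
    """df: DataFrame with binary Y01, exposure X, first-stage fit Xhat,
    first-stage residual delta, genotype G and confounder Z."""
    fit_2sps = smf.logit("Y01 ~ Xhat + G + G:Xhat + Z + G:Z", data=df).fit(disp=0)
    fit_2sri = smf.logit("Y01 ~ X + G + X:G + Z + delta", data=df).fit(disp=0)
    return fit_2sps, fit_2sri
```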
While modifying the MR-based estimators to fit the logistic regression model is simple, justifying these MR-based estimators is challenging due to the non-linear structure of the logit link. Here we examine the analytical form for the 2SRI estimator. As \(U\) is not directly observable,
we focus on the expected value of \(Y\) given \((X,G,Z)\) using an approximation method inspired by Carroll et al. (2006). This approximation leverages the relationship between probit and logit models, particularly in the context of measurement error. Namely,
\[\mathrm{logit}\{E(Y|X,G,Z)\} \approx \left\{\beta_{0}+\beta_{1}X+\beta_{2}G+\beta_{3}GX+\beta_{U}E(U|X,G,Z)+\beta_{Z}Z\right\}/\phi\] \[= \left\{\beta_{0}+\beta_{1}X+\beta_{2}G+\beta_{3}GX+\beta_{U} \frac{\gamma_{U}\sigma_{U}^{2}}{\sigma_{X}^{2}}(X-\mu_{1})+\beta_{Z}Z\right\}/ \phi\,,\]
where
\[\phi=\left(1+\beta_{U}^{2}\,\mathrm{var}(U|X)/1.7^{2}\right)^{1/2}\,,\]
and \(\mathrm{var}(U|X)=\sigma_{U}^{2}(1-\gamma_{U}^{2}\sigma_{U}^{2}/\sigma_{X}^{2})\). Testing, for instance, the hypothesis that \(\beta_{3}=0\) is approximately equivalent to testing \(\beta_{3}/\phi=0\). This may also explain why, for the naive approach where \(\delta=X-\mu_{1}\) is not included, the bias only impacts the main effect of \(X\), but not the main effect of \(G\) and the interaction coefficient \(\beta_{3}\). Adding \(\widehat{\delta}\), as in the 2SRI approach, corrects the bias for the main effect of \(X\) to some extent. Even in this situation, all parameters are attenuated by the factor \(\left(1+\beta_{U}^{2}\,\mathrm{var}(U|X)/1.7^{2}\right)^{1/2}\) (i.e., \(\phi\)), and when \(\beta_{U}=0\), the 2SRI estimator is consistent. We attempted to derive a similar analytical approximation for the 2SPS estimators; however, the approximation is rather complex and does not have a closed form.
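As a quick numerical illustration of the attenuation, the following sketch evaluates \(\phi\) under settings in the spirit of Section 2.2 (\(\sigma_{U}=\sigma_{\epsilon_{X}}=1\), \(\gamma_{U}=1\), \(\beta_{U}=3\)); the values are illustrative.

```python
import numpy as np

sigma_U, sigma_epsX, gamma_U, beta_U = 1.0, 1.0, 1.0, 3.0
sigma_X2 = gamma_U**2 * sigma_U**2 + sigma_epsX**2
var_U_given_X = sigma_U**2 * (1 - gamma_U**2 * sigma_U**2 / sigma_X2)
phi = np.sqrt(1 + beta_U**2 * var_U_given_X / 1.7**2)
print(phi)  # ~1.6: each coefficient, e.g. beta_3, is attenuated to beta_3 / phi
```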
### Simulation Study - Logistic Regression
A case-control study design with a logistic regression model was employed, involving 5,000 cases and 5,000 controls, unless otherwise specified. The exposure \(X\) was generated based on the regression model of Eq. (2) as described in Section 2.2. For the logistic regression outcome model of Eq. (3), \((\beta_{0},\beta_{1},\beta_{2},\beta_{Z})=(\log(0.01/0.99),1,0.5,0.5)\) and \(\beta_{U}\in\{0,1.5,3\}\). The remaining elements adhere to the description provided in Section 2.2.
The results are summarized in Figures 3-8. The naive estimator for \((\beta_{1},\beta_{2},\beta_{3})\) exhibits bias if \(\beta_{U}\neq 0\), even when \(\gamma_{U}=0\). Notably, unlike the linear model, this bias pertains to each parameter, including \(\beta_{3}\) for the effect of GxE, and is not limited to just \(\beta_{1}\) for the effect of exposure \(X\). This bias arises from the non-collapsibility property inherent to logistic regression. While the naive estimator for \(\beta_{2}\) and \(\beta_{3}\) is generally biased downward, when \(\gamma_{U}>0\) and \(\beta_{U}=1.5\) or \(3\), the
naive estimator overestimates \(\beta_{1}\) (Figures 3-5). When \(\beta_{U}=-3.0,-1.5\), or \(-0.5\), i.e., in the opposite direction of \(\gamma_{U}>0\), the naive estimator underestimates \(\beta_{1}\) (Figure 7). A similar pattern is observed when \(\beta_{U}\) is positive and \(\gamma_{U}\) is negative (Figure 8).
A concerning aspect of the 2SPS estimators is their notable bias, even when the naive approach exhibits only a minor empirical bias for the interaction parameter \(\beta_{3}\) (Figures 3 and 5). The adjusted 2SPS estimators, 2SPSadj and 2SPSadj-a, reduce the bias to some extent, especially when \(\gamma_{U}\) is large. It appears that the only potentially valuable MR-based estimator is the 2SRI, which occasionally improves bias for \(\beta_{3}\). For the main effects, \(\beta_{1}\) and \(\beta_{2}\), the 2SPS estimators again exhibit notable bias. In contrast, the 2SRI estimator that includes the additional residual term \(\widehat{\delta}\) frequently demonstrates lower bias for \(\beta_{1}\) than the naive estimator, whose bias can be substantial, and occasionally improves the bias for \(\beta_{2}\). The naive and 2SRI estimators display a high degree of correlation. As in the linear model of \(Y\), among all the MR-based estimators, 2SRI has the smallest standard error.
Tables 4-5 show the results of type-I error of testing the null hypothesis of \(\beta_{3}=0\). The type-I error rates of the naive and MR-based approaches exhibit occasional inflation when \(\beta_{U}>0\), particularly noticeable in Setting IV. Consequently, we increased the overall sample size from 10,000 to 30,000, expanded the number of replications from 500 to 1000, and explored additional scenarios, summarized in Table 6. Our primary observation is that, under small values of \(\beta_{U}\) (e.g., \(\beta_{U}=0.5\)), the naive and 2SRI estimators tend to closely approximate the nominal type-I error. However, under large values of \(\beta_{U}\), the statistical tests of the naive and 2SRI approaches may become invalid. The 2SPS estimator sometimes has noticeable type-I error inflation when the effect for the known confounder \(Z\) is nonzero (e.g., \(\beta_{U}=0.5,\beta_{Z}=\gamma_{Z}=0.5,\beta_{2}=0.5\)), but the adjusted 2SPS estimator reduces the type-I error inflation somewhat, except for when \(\gamma_{U}=0\). The 2SPS alternative estimator based on only the genetic instrument generally approximates the nominal type-I error well, with occasional limited inflation. When the effect of \(U\) on \(X\), i.e., \(\gamma_{U}\), is large, all estimators appear to have a reasonable type-I error. Interestingly, when the main effect of \(G\) is null, all estimators, including the naive estimator, maintain the correct type-I error.
## 4 An Application to Colorectal Cancer
We applied the naive (or conventional) and the five MR-based methods to assess the genetic interaction with the body mass index (BMI) in association with colorectal cancer (CRC) risk using the pooled data of studies from the Colon Cancer Family Registry, the Colorectal Transdisciplinary Study, the Genetics and Epidemiology of Colorectal Cancer Consortium, and the United Kingdom Biobank. The details of these studies have been previously published (Huyghe et al., 2019; Schmit et al., 2019; Schumacher et al., 2015). Briefly, the studies were either population-based case-control studies or nested case-control studies assembled from cohort studies via risk-set sampling. Cases and controls were matched on age, sex, race, and enrollment date or trial group, when applicable. Colorectal adenocarcinoma cases were confirmed by medical records, pathological reports, or death certificate information. All participants gave written informed consent and studies were approved by their respective Institutional Review Boards.
Analyses were limited to individuals of European ancestry based on self-reported race and clustering of principal components with the 1000 Genomes European population. Individuals were further excluded based on cryptic relatedness or duplicates (prioritizing cases and/or individuals genotyped on the better platform), genotyping or imputation errors, non-CRC outcomes, and age outliers. The final pooled sample size was 52,235 controls and 44,500 cases. Demographics and environmental exposures, here, BMI, were self-reported either at in-person interviews or via structured questionnaires and were harmonized across studies through a multi-step procedure (Hutter et al., 2012). All individuals were genotyped by microarrays and the quality control steps had been previously published (Huyghe et al., 2019). Briefly, genotyped SNPs were excluded based on call-rate (\(<95-98\%\)), lack of Hardy-Weinberg equilibrium (p-value \(<10^{-4}\)), discrepancies between reported and genotypic sex, and discordant calls between duplicates. All autosomal SNPs were imputed to the Haplotype Reference Consortium r1.1 (2016) reference panel via the Michigan Imputation Server (Das et al., 2016). Imputed common SNPs were restricted based on a pooled MAF \(\geq 5\%\) and imputation accuracy (\(\text{R}^{2}>0.9\)). After imputation and quality control analyses, a total of about 5.5 million common SNPs were included.
To predict BMI, we fit a linear regression model including three PRSs previously developed by Yengo et al. (2018) and Kichaev et al. (2019) (1,368 SNPs), Bull et al. (2020) (312 SNPs), and
Pulit et al. (2019) (1,092 SNPs) using the 52,235 controls from the pooled dataset. The model also includes study, age, sex, and three principal components (PCs) for genetic ancestry. The R\({}^{2}\) explained by all predictors is 9.3%. All three PRSs were significantly associated with BMI (p-values of 1.6e-4, 1.4e-4, and \(<2\)e-16, respectively), and the three PRSs alone explained 6.2% of the variation of BMI, which is considerably greater than typical biomarkers and lifestyle factors, see e.g., Burgess and Thompson (2013) and references therein. Based on this model, we constructed two scores for predicting BMI. The first, overall score was the weighted sum of the three PRSs as well as other non-genetic predictors including age, sex, PCs, and study, with weights being the regression coefficient estimates from the linear model. The second score was based on the three PRSs only. For the 2SPS and adjusted 2SPS estimators, we used the first overall score, and for the 2SPSa and adjusted 2SPSa estimators we used the second PRS-only score. For the 2SRI, we calculated the residual by subtracting the first overall score from BMI. All models (naive and MR) were adjusted for study, age, sex, and three PCs.
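Schematically, the score construction can be written as follows (a sketch only; the DataFrames, column names, and fitting call are our own assumptions, not the analysis code):

```python
import statsmodels.formula.api as smf

def build_bmi_scores(controls, everyone):
    """controls/everyone: hypothetical DataFrames with BMI, PRS1-PRS3, age,
    sex, PC1-PC3 and study columns."""
    fs = smf.ols(
        "BMI ~ PRS1 + PRS2 + PRS3 + age + sex + PC1 + PC2 + PC3 + C(study)",
        data=controls,
    ).fit()
    overall = fs.predict(everyone)                       # PRS + non-genetic score
    prs_only = sum(fs.params[f"PRS{i}"] * everyone[f"PRS{i}"] for i in (1, 2, 3))
    residual = everyone["BMI"] - overall                 # input to the 2SRI fit
    return overall, prs_only, residual
```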
Figure 9 shows the boxplots of regression coefficient estimates and standard errors for the main effects of BMI (i.e., \(X\)) and \(G\) and the interaction effect between BMI and \(G\) for the naive and the five MR-based estimators. Here we focus on the results for Chromosome 1 (in each run \(G\) is one of the 414,107 SNPs of Chromosome 1) to compare the relative performance of all estimators (the results for other chromosomes are similar). As can be seen from Figure 9, the estimates of the interaction effect of GxBMI centered around 0 for all estimators, suggesting that there was no overall GxBMI interaction. The QQplot that compared the observed \(-\log_{10}\) p-values with those expected under the uniform distribution also showed no evidence of inflation for GxBMI interaction for all estimators (Figure 10). As the majority of SNPs are not associated with CRC risk, this result is consistent with our simulation results under the main effect for \(G\) being null, where the type-I error for all estimators is generally maintained (Table 6). The QQplot also shows that there was no evidence for GxBMI interaction under the overall null. The naive and 2SRI estimators had the smallest standard errors, with mean \(\sim 0.015\) for both estimators. The four variations of 2SPS estimators all have greater standard errors, with the 2SPSa and adjusted 2SPSa estimators having the largest standard errors, \(\sim 0.063\) vs \(\sim 0.050\), which may be due to the fact that the BMI predictor for the 2SPSa estimators based on only the PRSs was less predictive than that for the 2SPS estimators that included both the PRSs and other non-genetic predictors. For
the 2SPS and 2SPSa, adjusting for the residuals did not improve the efficiency. The pairwise plots also showed that the 2SPS and adjusted 2SPS were highly correlated in both the estimates and p-values, as were the 2SPSa and adjusted 2SPSa estimators (Figures 11 and 12). 2SPS and 2SPSa were also correlated, but not as strongly. The naive and 2SRI estimators were highly correlated in both the estimates and p-values. However, the 2SPS and 2SPSa estimators had little correlation with the naive and 2SRI estimators, with the 2SPS and 2SPSa estimators having wider ranges, possibly because of their greater variances. The patterns for the main effects of \(G\) and BMI were similar (Figure 9, Figures S1-S4 of SI). The notable difference is that when the main effect is non-zero, here, BMI, the p-values for various estimators were more correlated than when the main effect is null, here, \(G\). We conducted a simulation study mimicking the real data and found the patterns observed for the real data were consistent with the simulated data (Figure S5 of SI).
It is worth noting that the main effect estimates of BMI based on all the methods had a non-zero median. While it is generally not meaningful to examine the main effects in the presence of an interaction effect, since there was no evidence for GxBMI interaction, we could evaluate the effect of BMI on CRC risk. All estimators suggested that higher BMI is associated with increased CRC risk. Interestingly, the log-odds ratio estimates were 0.228, 0.234, 0.228, 0.235, and 0.234 per 5-unit increase in BMI for the 2SPS, adjusted 2SPS, 2SPSa, adjusted 2SPSa, and 2SRI estimators, respectively, all of which were greater than the naive estimate, 0.139. This difference, with MR estimates exceeding naive estimates from conventional observational studies, has been noted before (Suzuki et al., 2021). Possible explanations include inherent limitations of conventional observational studies such as reverse causation (Mandic et al., 2023) and measurement error (i.e., residual confounding) (Davey Smith and Ebrahim, 2003). For example, if a particular behavior is associated with increased risk but this behavior is also associated with under-reporting BMI, failing to adjust for this behavior will result in attenuation of the effect of BMI. This is consistent with our simulation result when the effect of the unobserved confounder \(U\) has opposite direction for the exposure and outcome (Figure 7). Other possible explanations may be that genetically determined BMI is likely to reflect lifelong exposure, and there is empirical evidence showing that the magnitude of the association for lifelong exposure for the risk of disease is larger than that for short-term exposure beginning
later in life (Ference et al., 2012). Another possibility is that the MR estimates might have been inflated, see for example, Figure 7; however, such a possibility seems unlikely, in part because the inflation, if it exists, is limited to the 2SPS estimators and not the 2SRI estimator, and in our real data analysis the 2SPS and 2SRI estimates were consistent and all greater than the naive estimate.
## 5 Discussion
This research focuses on practical MR models designed for both testing and estimating the effect of GxE on \(Y\). Extensions of the 2SPS and 2SRI methods tailored for linear and logistic regression models of the outcome variable \(Y\) have been introduced and extensively evaluated through a simulation study. In the case of the linear model, all the MR-based approaches, as well as the naive approach, provide a valid test for GxE. However, the naive estimators often exhibit bias. Among the MR-based methods, 2SRI is recommended for mitigating the bias associated with the naive approach, as it is shown to be the most efficient approach. Regarding the logistic regression model, the MR-based methods generally maintain the type-I error with limited inflation when the effect of the unobserved confounder \(U\) on \(Y\) is weak or when there is no measured confounder \(Z\). All estimators were biased, though the 2SRI estimator has the smallest bias, especially for the main effect of the exposure, which can be substantial under the naive method. This was demonstrated by our application to the colorectal cancer study, where, in fact, the effect of observed BMI was substantially attenuated compared to the MR estimators.
Our research can be expanded in several directions. First, we have considered perhaps the simplest model for the unmeasured confounder. In practice, it is likely that the unmeasured confounder interacts with the exposure or with \(G\). Then, some of the current observations may no longer hold. Second, in the real data analysis, three PRSs were used, each involving hundreds to over a thousand genetic variants, as genetic instruments for BMI in order to increase the power of the MR analysis (Burgess and Thompson, 2013). However, it is likely that some of the genetic variants in the PRSs may not be valid IVs, in the sense that they may have direct effects on the outcome besides mediating through BMI. Including invalid IVs may bias the results. Many methods have been developed to relax this assumption by incorporating a horizontal
pleiotropy effect (Burgess and Thompson, 2017), allowing for some direct effects under the InSIDE assumption (Bowden and Turkington, 1984), or identifying invalid IVs (Verbanck et al., 2018), but not in the context of gene-environment interaction. Third, the popular logistic regression model for binary outcomes clearly has limitations when it comes to the MR analysis due to its non-collapsibility. Due to the lack of a closed form, or an approximation of a closed form, for the commonly used MR estimators, especially the 2SPS-type estimators, it is not easy to understand the magnitude and direction of the biases based entirely on empirical results, the complexity of which has been shown in our extensive simulation study. Future research towards better understanding of the MR methods for binary outcomes is needed.
As put by VanderWeele (2009), it is important to distinguish between interaction and effect modification. In particular, "interaction is defined in terms of the effects of two interventions whereas effect modification is defined in terms of the effect of one intervention varying across strata of a second variable". Under the above effect-modification definition, there is asymmetry between \(G\) and \(X\). The role of \(G\) concerns whether the effect of primary interest varies across strata defined by \(G\). Effect modification can have important implications for public health as the impact of the exposure \(X\) may differ across sub-populations defined by \(G\). This may indicate the necessity for different interventions among these sub-populations. Based on causal DAGs and Rule 2 of Pearl's do-calculus (Pearl, 1995), VanderWeele (2009) provided the required condition for interaction and effect modification to coincide. Hence, some of the results presented in this work are also applicable in effect modification formulation, although, methods directly targeting estimands of effect modification could be more efficient, and are beyond the scope of this work.
## 6 Acknowledgements
The work is supported in part by grants from the National Institutes of Health (R01CA189532, R01HL145806, U01 CA164930, R01 CA20140), the Israel Science Foundation (ISF) grant number 767/21 and by a grant from the Tel-Aviv University Center for AI and Data Science (TAD). The authors thank the Genetics and Epidemiology of Colorectal Cancer Consortium (GECCO) and the participating studies, investigators, staff, and study participants for their dedication and contributions. A full list of these studies and funding is provided in the Supporting Information. |
2309.12634 | Learning Actions and Control of Focus of Attention with a Log-Polar-like
Sensor | With the long-term goal of reducing the image processing time on an
autonomous mobile robot in mind we explore in this paper the use of log-polar
like image data with gaze control. The gaze control is not done on the
Cartesian image but on the log-polar like image data. For this we start out
from the classic deep reinforcement learning approach for Atari games. We
extend an A3C deep RL approach with an LSTM network, and we learn the policy
for playing three Atari games and a policy for gaze control. While the Atari
games already use low-resolution images of 80 by 80 pixels, we are able to
further reduce the amount of image pixels by a factor of 5 without losing any
gaming performance. | Robin Göransson, Volker Krueger | 2023-09-22T06:02:58Z | http://arxiv.org/abs/2309.12634v1 | # Learning Actions and Control of Focus of Attention with a Log-Polar-like Sensor
###### Abstract
With the long-term goal of reducing the image processing time on an autonomous mobile robot in mind we explore in this paper the use of log-polar like image data with gaze control. The gaze control is not done on the Cartesian image but on the log-polar like image data. For this we start out from the classic deep reinforcement learning approach for Atari games. We extend an A3C deep RL approach with an LSTM network, and we learn the policy for playing three Atari games and a policy for gaze control. While the Atari games already use low-resolution images of \(80\times 80\) pixels, we are able to further reduce the amount of image pixels by a factor of 5 without losing any gaming performance.
## I Introduction
One goal of autonomous robots is to let them act autonomously over an extended period of time without human intervention. The autonomous robots would use processed video input to decide which action they should take. Small scale autonomous robots such as drones have limited battery power and the necessary on-line processing of the video data has an impact on their runtime. One classic idea to reduce the image processing resources is to take inspiration from the human visual system (HVS): the human eyes famously have a non-homogeneous resolution, with a high-resolution fovea in the center and with a linearly decreasing resolution towards the periphery [6]. This resolution comes with the challenge of deciding where to focus the attention, i.e., what part of the field of view should be foveated. The HVS employs a process that continuously controls the gaze. Many computer vision systems exploit the idea of controlling the focus of attention by computing salient feature points. This usually comes at a computational cost because the processing is done on the Cartesian image. The gaze control of the HVS, however, is based on the low resolution retinal image, which implies lower computational needs [6, 11].
In this paper we explore first steps towards the idea of a) controlling a robot based on a retinal image and b) controlling the gaze based on the retinal image as well, i.e., we want to use only retinal image data at every point in the artificial visual pathway.
In detail, the starting point of our exploration is the classic deep RL approach for playing Atari games [18]. We revised the original approach by using the state-of-the-art deep RL approach A3C [19] for learning two policies: one for controlling the movements of the robot player and one for controlling the gaze. The inputs to the deep RL process are log-polar-like images that are computed directly from the Cartesian Atari images. As the smaller log-polar-like images made state-estimation harder, we extended the A3C approach with an LSTM [12] and Generalized Advantage Estimation (GAE) [23]. We have explored different log-polar-like sensor sizes and resolutions. We have evaluated the performance on three different Atari games: Pong, Breakout and Beam Rider, with Pong being the simplest and Beam Rider the most challenging one of the three. The experiments for Pong and Breakout show that we can reduce the image data by a factor of 5 while still achieving similar game performance. The learning of the two policies took approx. 5 times longer, but since this is done off-line we do not consider this to be a problem. As the results for Pong and Breakout were similar, we will report only the Breakout results in detail. In case of Beam Rider we were not able to complete policy training as the learning took considerably longer. However, intermediate results were similar to Pong and Breakout.
In summary, our contributions are:
* We explored the use of log-polar like image data and gaze control to reduce image computation time.
* We used an A3C deep RL approach, extended it with an LSTM network and GAE, and we used this to train two policies, one for playing the game and one for controlling the gaze.
* We only used the log-polar like image data throughout the entire artificial visual pathway.
* We evaluated this on three Atari games and the results hint that we can reduce the image data by a factor of approx. 5 without losing performance.
## II Background and Related Work
The ability to control gaze direction allows to efficiently sample the visual field of view [8, 17]. This should reduce computation time. In computer vision, salient features are often used to focus in on relevant image areas [26, 21, 16, 13, 5, 20]. The idea of log-polar sensors is frequently used for a variety of applications including navigation [25, 24], object recognition [10, 4, 22, 1, 9] and motion computation [7, 15, 22]. See [25] for a good overview. Most publications that use (log-)polar transforms aim to exploit the rotational invariance and the simplified use of scale that comes with that transform. Recent work here includes [9, 4, 10, 14, 1].
For better readability of our paper we provide below a brief overview of the employed deep RL techniques.
**A3C**: Combining reinforcement learning, where updates usually are heavily correlated, and deep learning, where the input samples generally are assumed to be independent, can be problematic. The Asynchronous Advantage Actor-Critic (A3C) algorithm [19] is a deep reinforcement learning algorithm that decorrelates updates by asynchronously executing multiple agents in parallel. Each agent uses its own set of network parameters and local copy of the environment which means that at every given time-step the agents experience a variety of different states and consecutive updates will no longer be heavily correlated [19].
The A3C has two outputs: a softmax output for the policy \(\pi_{\theta}(s)\) and a linear output for the value function \(V_{\theta}(s)\). The value function can be used to calculate the advantage function as \(A_{\theta}(s_{t})=(\sum_{i=0}^{k-1}\gamma^{i}R_{t+1+i}+\gamma^{k}V_{\theta}(s_ {t+k}))-V_{\theta}(s_{t})\) where \(s_{t}\) is the state for time-step \(t\), \(R_{t+1}\) is the reward received when taking action \(a_{t}\) in state \(s_{t}\) and \(\gamma\) is the discount factor [19]. With this advantage function the value loss is defined as \(L_{value}=\frac{1}{2}\sum_{i=0}^{k-1}A_{\theta}(s_{t+i})^{2}\) and the policy loss as \(L_{policy}=\sum_{i=0}^{k-1}(-\log\pi_{\theta}(a_{t+i}|s_{t+i})A_{\theta}(s_{t+i })-\beta H(\pi_{\theta}(s_{t+i})))\) where \(H(\pi_{\theta})=-\sum\pi_{\theta}\log\pi_{\theta}\) is an entropy term and \(\beta\) controls the strength of this entropy term [19]. When an agent reaches a terminal state, or has performed \(t_{max}\) steps, the local gradient is computed using the total loss. This local gradient is then applied to the global network in an asynchronous fashion and the local network is reinitialized using the now updated global parameters.
Exploration vs. exploitation is an important concept for a reinforcement learning algorithm. The softmax policy output of the A3C can be seen as a probability distribution over the actions in each state and the next action can thus be chosen probabilistically [19]. Due to the nature of the softmax function every action will always have a non-zero probability of being chosen, making it possible for the algorithm to explore the full state-space.
**A3C-LSTM**: In the Atari environments, and in many robotics environments as well, consecutive states are correlated and more than one state is needed to determine which way something is moving. In order to give the agent access to information from more than the current frame we introduce recurrent elements to the network. More specifically, a Long Short-Term Memory (LSTM) [12] module is added to the network after the convolutional layers. The LSTM is fed with the output from the convolutional layers and the output from the LSTM from the previous time-step.
**GAE**: Generalized Advantage Estimation (GAE) is used to make a more robust estimation of the advantage function [23]. Different TD-errors (1-step, 2-step,..., \(k\)-step) are used to form advantage estimators \(\hat{A}_{t}^{(k)}\). The generalized advantage estimator \(\hat{A}_{t}^{GAE}\) is then defined as the exponentially-weighted average of the different estimators \(\hat{A}_{t}^{(k)}\). When forming the average a parameter \(0\leq\lambda\leq 1\) is used. This parameter works as a trade-off between bias and variance: for \(\lambda=1\) the variance is usually high while a lower \(\lambda\) reduces the variance but introduces bias [23].
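A minimal Python sketch of this estimator (the standard backward recursion \(\hat{A}_{t}=\delta_{t}+\gamma\lambda\hat{A}_{t+1}\); not code from this paper):

```python
import numpy as np

def gae(rewards, values, last_value, gamma=0.99, lam=0.95):
    """rewards: R_{t+1},...,R_{t+k}; values: V(s_t),...,V(s_{t+k-1});
    last_value: bootstrap V(s_{t+k})."""
    values = np.append(values, last_value)
    deltas = rewards + gamma * values[1:] - values[:-1]  # TD(0) errors
    adv = np.zeros_like(deltas)
    running = 0.0
    for i in reversed(range(len(deltas))):               # backward recursion
        running = deltas[i] + gamma * lam * running
        adv[i] = running
    return adv
```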
## III Focus-of-Attention
When an agent is trained on an Atari environment, full pre-processed screens are used as input. In this paper the input is of size 80 \(\times\) 80 pixels, making the state-space very large. The state-space is, in fact, unnecessarily large, as many input pixels are unimportant. This is not Atari-specific but a common, general problem. The _focus-of-attention_ (FoA) mechanism can reduce the amount of irrelevant pixels in the input and allow the agent to focus on certain parts of the screen while giving less attention to other parts [11].
The addition of the FoA mechanism modifies the screens in a way that focuses the view of the agent on the relevant parts. For this, we introduce _visual actions_ that are used to move the center of attention at every time-step. These actions enable the agent to be trained to focus its attention on interesting parts of the screen while simultaneously learning to play the Atari game. By introducing FoA we make sure that input pixels with no or little importance will, with enough training, be disregarded by the model. This will effectively reduce the size of the state-space.
The introduction of FoA makes the true game state only partially observable, which to some degree violates the Markov property. However, by moving the center of attention through the use of the visual actions the agent can access the relevant part of the game state. This dynamic is meant to be similar to the human visual system, where the human eyes are moving around and locating interesting parts in order to build up a mental map of a scene.
### _Dual-head Architecture_
The model with the added FoA mechanism was created by duplicating the head in the A3C-LSTM model described earlier, see Fig. 1. One of the heads, the _natural head_, controls the player movements or robotic behavior within the game through _natural actions_ while the other head, the _vision head_, controls the movements of the center of attention through _visual actions_. The duplication of the head results in a model with four outputs: two softmax outputs for policies \(\pi_{\theta^{nat}}^{nat}\), \(\pi_{\theta^{vis}}^{vis}\) and two linear outputs for value functions \(V_{\nu^{nat}}^{nat}\), \(V_{\nu^{vis}}^{vis}\). Note that the four sets of parameters \(\theta^{nat}\), \(\theta^{vis}\), \(\nu^{nat}\) and \(\nu^{vis}\) share the parameters for the convolutional torso and the LSTM. This means that most parameters are shared and because of this the expressions are simplified as \(\theta=\theta^{nat}=\theta^{vis}=\nu^{nat}=\nu^{vis}\) in this paper.
A transition from state \(S_{t}\) to \(S_{t+1}\) for this model is made through a natural action \(A_{t}^{nat}\) and a simultaneous visual
action \(A^{vis}_{t}\). The reward \(R_{t+1}\) from this transition depends only on the natural action \(A^{nat}_{t}\), however. The visual action \(A^{vis}_{t}\) doesn't directly affect the state of the game, but instead affects what part of the true game state that is accessible to the agent. This means that while \(R_{t+1}\) is not affected by \(A^{vis}_{t}\), the state \(S_{t+1}\) is affected and thus also the following natural action \(A^{nat}_{t+1}\) and the next reward \(R_{t+2}\). To account for this behavior, the loss functions have to be modified. The loss functions for the natural head require no modifications but in the loss functions for the vision head the reward \(R_{t+1}\) has to be replaced with the next reward \(R_{t+2}\). Both \(R_{t+1}\) and \(R_{t+2}\) are thus needed and a transition can be represented by \((S_{t},A^{nat}_{t},A^{vis}_{t},R_{t+1},R_{t+2},S_{t+1})\). A transition is illustrated in Fig. 2.
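A transition record therefore needs to carry both rewards. A small sketch of such a record (names are our own, not from the paper's implementation):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Transition:
    state: Any        # S_t (RoI-processed input)
    a_nat: int        # natural action A_t^nat (player movement)
    a_vis: int        # visual action A_t^vis (focal-point movement)
    r_next: float     # R_{t+1}, used by the natural head
    r_next2: float    # R_{t+2}, used by the vision head
    next_state: Any   # S_{t+1}
```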
### _Dual-head Loss Function_
With the dual-head architecture the total loss \(L_{total}\) needs to be divided in four parts: \(L^{nat}_{value}\), \(L^{nat}_{policy}\), \(L^{vis}_{value}\) and \(L^{vis}_{policy}\). The value loss for the natural head can be defined as:
\[L^{nat}_{value}=\frac{1}{2}\sum_{i=0}^{k-1}A^{nat}_{\theta}(s_{t+i})^{2}\enspace,\]
\[A^{nat}_{\theta}(s_{t})=(\sum_{i=0}^{k-1}\gamma^{i}R_{t+1+i}+\gamma^{k}V^{nat} _{\theta}(s_{t+k}))-V^{nat}_{\theta}(s_{t})\]
where \(A^{nat}_{\theta}(s_{t})\) is the non-generalized advantage estimation for the natural head. The next loss, the policy loss for the natural head, can be defined as:
\[L^{nat}_{policy}=\sum_{i=0}^{k-1}(-\mathrm{log}\pi^{nat}_{\theta}(a_{t+i}|s_{ t+i})\hat{A}^{GAE,nat}_{t+i}-H^{nat}_{t+i})\enspace,\]
\[H^{nat}_{t}=\beta H(\pi^{nat}_{\theta}(s_{t}))\enspace,\]
\[\hat{A}^{GAE,nat}_{t}=\sum_{i=0}^{k-1}(\gamma\lambda)^{i}\delta^{nat}_{t+i}\enspace,\]
\[\delta^{nat}_{t}=R_{t+1}+\gamma V^{nat}_{\theta}(s_{t+1})-V^{nat}_{\theta}(s_ {t})\]
where \(H^{nat}_{t}\) is the entropy term, \(\hat{A}^{GAE,nat}_{t}\) is the generalized advantage estimation and \(\delta^{nat}_{t}\) is the TD(0)-error. The losses for the vision head can be defined in similar manners, starting with the value loss:
\[L^{vis}_{value}=\frac{1}{2}\sum_{i=0}^{k-2}A^{vis}_{\theta}(s_{t+i})^{2}\enspace,\]
\[A^{vis}_{\theta}(s_{t})=(\sum_{i=0}^{k-2}\gamma^{i}R_{t+2+i}+\gamma^{k}V^{vis} _{\theta}(s_{t+k}))-V^{vis}_{\theta}(s_{t})\enspace.\]
Note how the reward has been replaced in the loss for the vision head. The last loss, the policy loss of the vision head, can be defined as:
\[L^{vis}_{policy}=\sum_{i=0}^{k-2}(-\mathrm{log}\pi^{vis}_{\theta}(a_{t+i}|s_{ t+i})\hat{A}^{GAE,vis}_{t+i}-H^{vis}_{t+i})\enspace,\]
\[H^{vis}_{t}=\beta H(\pi^{vis}_{\theta}(s_{t}))\enspace,\]
\[\hat{A}^{GAE,vis}_{t}=\sum_{i=0}^{k-2}(\gamma\lambda)^{i}\delta^{vis}_{t+i}\enspace,\]
\[\delta^{vis}_{t}=R_{t+2}+\gamma V^{vis}_{\theta}(s_{t+1})-V^{vis}_{\theta}(s_ {t})\enspace.\]
The definitions above are slightly simplified as only one set of hyperparameters \(\gamma,\beta,\lambda\) is considered instead of using one set \(\gamma^{nat},\beta^{nat},\lambda^{nat}\) for the natural head and one set \(\gamma^{vis},\beta^{vis},\lambda^{vis}\) for the vision head. Throughout this project \(\gamma=\gamma^{nat}=\gamma^{vis}\), \(\beta=\beta^{nat}=\beta^{vis}\) and \(\lambda=\lambda^{nat}=\lambda^{vis}\) are used.
Pseudo-code for the A3C-LSTM with FoA can be found in Algorithm 1.
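To complement Algorithm 1, the following numpy sketch shows how the four loss terms can be combined over one \(k\)-step rollout. It reuses the `gae` helper from the sketch in Section II; the vision-head arrays are one step shorter (they pair with \(R_{t+2}\)), and the rollout-boundary handling is simplified relative to the exact indices above.

```python
import numpy as np

def dual_head_loss(r_nat, v_nat, boot_nat, logp_nat, H_nat,
                   r_vis, v_vis, boot_vis, logp_vis, H_vis,
                   gamma=0.99, lam=0.95, beta=0.01):
    """logp_*: log pi(a_t|s_t) of the chosen actions; H_*: policy entropies."""
    # Value losses: plain k-step advantages (GAE with lambda = 1).
    A_nat = gae(r_nat, v_nat, boot_nat, gamma, 1.0)
    A_vis = gae(r_vis, v_vis, boot_vis, gamma, 1.0)
    L_value = 0.5 * np.sum(A_nat**2) + 0.5 * np.sum(A_vis**2)
    # Policy losses: generalized advantage estimates plus entropy bonus.
    gA_nat = gae(r_nat, v_nat, boot_nat, gamma, lam)
    gA_vis = gae(r_vis, v_vis, boot_vis, gamma, lam)
    L_policy = (np.sum(-logp_nat * gA_nat - beta * H_nat)
                + np.sum(-logp_vis * gA_vis - beta * H_vis))
    return L_value + L_policy
```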
### _Region of Interest Layer_
In order to use the game screens as input to the model without the focus-of-attention algorithm, some pre-processing is needed. This pre-processing includes standard operations like cropping, resizing, gray-scale conversion and normalization. When the model with the FoA algorithm is used for an Atari environment an extra pre-processing layer is needed. This layer modifies the game screens in a way that focuses the attention of the agent at a specific region. This extra layer will be referred to as a _Region of Interest_ (RoI) layer in this paper. The different RoI layers investigated in this paper are presented below.
The simplest RoI layer defines the _focal area_, the area which holds all the pixels that are visible to the agent, using a rectangle. To define said rectangle a focal point \((f_{x},f_{y})\), a focal width and a focal height are needed. The focal point marks the center of attention and can be moved through the
Fig. 1: Overview of the A3C-LSTM with FoA.
visual actions of the agent. The resolution of the focal area also needs to be decided and is defined using a _subsampling factor_. Due to the fact that all visible pixels are displayed in the same resolution, this RoI layer will be referred to as a _Constant_ resolution RoI layer.
Through a small change the focal area can be made more complex. By introducing more than one resolution within the focal area the RoI layer will better mimic the human visual system. This new layer still needs a focal point \((f_{x},f_{y})\) and will need three sets of focal widths, focal heights and subsampling factors. The first set will form a small rectangular area with high pixel resolution, the second set will form a medium-sized rectangular area with medium pixel resolution and the last set will form a large rectangular area with low pixel resolution. This RoI layer will be referred to as a _Decreasing_ resolution RoI layer. A comparison between the _Constant_ and the _Decreasing_ RoI layers can be found in Fig. 3.
To further mimic the human visual system it is of interest to introduce peripheral vision to the agent. The peripheral vision of a human is wide but blurry, making it useful for noticing movements but ill-suited for making out details. Both the _Constant_ and the _Decreasing_ RoI layers are extended with something that simulates peripheral vision by making changes to the pixels outside the focal area. These pixels were previously set to zero, but are now replaced with a very low resolution representation of the game state. With this low resolution representation it is not possible to make
Fig. 3: Comparison of _Constant_ and _Decreasing_ RoI layers.
Fig. 2: Transition of A3C-LSTM with FoA.
out any detail, but it should be possible for the agent to detect changes. The resulting RoI layers will be referred to as _Constant(P)_ and _Decreasing(P)_, respectively. Input states with corresponding visualizations for the _Constant_ RoI layer can be seen in Fig. 4 while input states and visualizations for the _Decreasing(P)_ RoI layer can be seen in Fig. 5.
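As an illustration of how such a layer can be realized, the following numpy sketch implements a _Constant(P)_-style RoI: a subsampled focal window around the focal point pasted over a coarse \(5\times 5\) peripheral rendering of the frame. Details (block-averaging, clipping) are our own simplifications, not the paper's exact layer.

```python
import numpy as np

def constant_p_roi(frame, fx, fy, size=70, sub=2, periph=5):
    """frame: 2-D grayscale array (e.g. 80x80); (fx, fy): focal point.
    Assumes frame dimensions are divisible by `periph` and `size` by `sub`."""
    h, w = frame.shape
    # Peripheral background: block-average down to periph x periph, upsample back.
    coarse = frame.reshape(periph, h // periph, periph, w // periph).mean(axis=(1, 3))
    out = np.kron(coarse, np.ones((h // periph, w // periph)))
    # Focal window around the focal point, clipped to stay inside the frame.
    half = size // 2
    y0 = int(np.clip(fy - half, 0, h - size))
    x0 = int(np.clip(fx - half, 0, w - size))
    win = frame[y0:y0 + size, x0:x0 + size]
    # Subsample the window by averaging sub x sub blocks, then upsample in place.
    blocks = win.reshape(size // sub, sub, size // sub, sub).mean(axis=(1, 3))
    out[y0:y0 + size, x0:x0 + size] = np.kron(blocks, np.ones((sub, sub)))
    return out
```

For a _Decreasing_-style layer, the same window-and-paste step would simply be repeated for the three nested focal areas with their respective subsampling factors.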
## IV Experiments
The experiments were carried out on different Atari 2600 games from the Arcade Learning Environment (ALE) suite [2]. The Atari games were implemented using OpenAI Gym [3], and experiments were run on Pong, Breakout and Beam Rider. Due to limited space we will discuss only the Breakout results in this paper in detail, but the other games performed similarly1. A total of seven models were trained. The first one was a non-FoA model which would serve as a baseline. The other six models use different RoI layers. A summary of the models can be seen in Table I and the hyperparameters used can be seen in Appendix A.
Footnote 1: The complete results for all three games can be found here: https://lup.lulu.lsu.se/student-papers/search/publication/9095697
### _Constant Resolution Models_
First, the models using a _Constant_ RoI layer are compared with each other and with the non-FoA model. The performances of these different models can be seen in Fig. 6. From the figure it can be seen that the addition of the focus-of-attention algorithm generally deteriorates performance, particularly during the initial episodes where the agents don't seem to learn anything at all. This slow start is probably due to the fact that the size of the state-space is actually increased for untrained models when focus-of-attention is introduced. When applying the RoI layer the amount of actual game states remain unchanged and for each of these states there now exists many different input states based on the location of the focal point. It is not until the agent learns to control its focus-of-attention that the size of the state-space is effectively reduced. The reinforcement learning task also becomes more complex as the action-space grows. The effect of this can to some extent be seen when looking at the performance of the _Constant 70x70, sub2_ model in Fig. 6. This model barely learns anything during the initial episodes but after approximately 40000 episodes the performance improves at almost the same pace as the _Non-FoA, sub1_ model.
Fig. 6 also shows that lowering the resolution for a \(50\times 50\) focal area deteriorates performance. Using a small focal area along with a lower resolution seems to remove too much information about the game state from the agent. Because of this the _Constant 70x70, sub2_ model is introduced. This model uses a larger focal area than the _Constant 50x50, sub1_ model but with a larger subsampling factor. Making these changes the model performance is improved, both when it comes to training speed and maximum score. Thus, it seems like the total size of the focal area is more important than the resolution for model performance. The amount of pixels visible to the agents can also be compared. The _Constant 50x50, sub1_ model uses a focal area of 2500 pixels with full resolution while the _Constant 70x70, sub2_ model uses a focal area of 4900 pixels where only a fourth of all pixels is kept due to the lowered resolution. This means that an agent of the _Constant 50x50, sub1_ model has access to 2500 pixels at
Fig. 4: Input states (top) and corresponding visualizations (bottom) for the _Constant_ RoI layer.
Fig. 5: Input states (top) and corresponding visualizations (bottom) for the _Decreasing(P)_ RoI layer.
Fig. 6: Performance in Breakout for _Constant_ resolution models.
any time while an agent of the _Constant 70x70, sub2_ model only has access to 1225 pixels. In other words, the _Constant 70x70, sub2_ model outperforms the _Constant 50x50, sub1_ model even though only approximately half the number of pixels is visible to the agent.
### _Decreasing Resolution Models_
Next, the _Decreasing_ RoI layer is introduced with the _Decreasing 30-50-70_ model and compared with the best performing model using the _Constant_ RoI layer, as can be seen in Fig. 7. The _Decreasing 30-50-70_ model uses an inner focal area of size \(30\times 30\) with a subsampling factor of 1, a middle focal area of size \(50\times 50\) with a subsampling factor of 2 and an outer focal area of size \(70\times 70\) with a higher subsampling factor of 4. This means that the two models are using the same total focal area. When computing how many pixels are visible to the agent, it can be shown that an agent of the _Decreasing 30-50-70_ model has access to 1450 pixels while an agent of the _Constant 70x70, sub2_ model has access to 1225 pixels. The amount of pixels visible to agents of the two models are thus almost the same.
The performances of the two models are quite similar even though increasing the resolution in a small focal area should improve performance, as seen when comparing the models using the _Constant_ RoI layer in Fig. 6. However, as no boost in performance can be seen by increasing the resolution of the innermost focal area, this effect seems to be negated by the lowered resolution in the outermost focal area. It should nevertheless be noted that the low resolution outer focal area does add valuable information to the model. This can be seen by comparing the performance of the _Decreasing 30-50-70_ model in Fig. 7 with the performance of the _Constant 50x50, sub1_ model in Fig. 6. The _Decreasing 30-50-70_ model can be seen to perform better than the _Constant 50x50, sub1_ model, and the only thing added to the RoI layer for this model is an extra low resolution padding (the outer focal area).
The different resolutions of the _Decreasing 30-50-70_ model can possibly make the training process more challenging for the agent. When only using one subsampling factor a certain game object always looks the same, but when using three different subsampling factors the same game object can take on multiple shapes. A ball can look like a sharp dot if located in the center of the focal area but it can also look like a blurry blob if located further from the focal point. Also, at some point in time a certain area of the screen can be given in high resolution and at another point in time this same area can be given in a lower resolution. In other words, the multiple resolutions of the _Decreasing_ RoI layer introduce a new challenge for the agent.
### _Peripheral Vision_
In the final models, peripheral vision is introduced. The models with peripheral vision, _Constant(P) 70x70, sub2_ and _Decreasing(P) 30-50-70_, are compared with their non-peripheral counterparts _Constant 70x70, sub2_ and _Decreasing 30-50-70_ in Fig. 8. In these new models the zero-valued background is replaced with a \(5\times 5\) resolution version of the game state. The peripheral vision makes a maximum of 16 extra pixels visible to the agent, as a few of the 25 background pixels will always be hidden by the focal area. The few extra pixels do, however, improve performance significantly as both the _Constant(P) 70x70, sub2_ model and the _Decreasing(P) 30-50-70_ model are faster to train than their counterparts. This improvement can be explained with the help of the heat maps in Fig. 9. In the heat maps it can be seen that the attention of the agent usually is focused on the lower half of the screen. This means that without the peripheral vision the agent will not know where the remaining bricks are located. However, when adding the low resolution background the agent can make out in which area of the screen the remaining bricks are located and thus,
| Model name | Focal area(s) | Subsampling factor(s) | Peripheral | Pixel-values |
| --- | --- | --- | --- | --- |
| Non-FoA, sub1 | – | 1 | No | 6400 |
| Constant 50x50, sub1 | \(50\times 50\) | 1 | No | 2500 |
| Constant 50x50, sub2 | \(50\times 50\) | 2 | No | 625 |
| Constant 70x70, sub2 | \(70\times 70\) | 2 | No | 1225 |
| Decreasing 30-50-70 | \(30\times 30\), \(50\times 50\), \(70\times 70\) | 1, 2, 4 | No | 1450 |
| Constant(P) 70x70, sub2 | \(70\times 70\) | 2 | Yes | 1241 |
| Decreasing(P) 30-50-70 | \(30\times 30\), \(50\times 50\), \(70\times 70\) | 1, 2, 4 | Yes | 1466 |

TABLE I: Model overview listing the focal area(s) in pixels, the subsampling factor(s), whether a peripheral is used, and the total number of pixel-values the agent has access to.
Fig. 7: Performance in Breakout for _Decreasing_ resolution models.
more easily, progress in the game. The peripheral thus adds valuable information to the model which is in accordance with the previous observation that the total size of the focal area is more important than the resolution.
### _Agent Behavior_
The behavior of the agent can be seen in the heat maps in Fig. 9. The left heat map shows how a trained agent of the _Constant 70x70, sub2_ model uses its visual actions to move its focal point while the right heat map shows the corresponding behavior for the _Decreasing(P) 30-50-70_ model. From the heat maps it can be seen that the agent does not move its focal point very much. In fact, the focal point is mostly kept centered at the bottom half of the screen, which is the most important area of the screen when playing Breakout. The lack of focal point movement is likely due to the large size of the focal areas; even without movement the majority of the screen will be visible to the agent. It should however be noted that there is more focal point movement for the better performing _Decreasing(P) 30-50-70_ model. This indicates that even though a trained model will keep its focus-of-attention in a specific area when playing, the movement of the focal point is still of importance.
## V Conclusions
In this paper the A3C-LSTM network was extended with a focus-of-attention mechanism and trained on the Atari 2600 game Breakout. An agent was required to both learn to control its movements and its focus-of-attention. Region of Interest layers with constant resolution, decreasing resolution and with peripheral vision were used. In general, the introduction of the focus-of-attention task slowed down the training process. The drop in performance is probably due to the larger action-spaces and the larger state-spaces of the untrained FoA models.
First, a model using a _Constant_ resolution RoI layer with a focal area of size \(50\times 50\) was shown to achieve high scores on Breakout. By then introducing a RoI layer with a focal area of size \(70\times 70\) with lower resolution, it could be shown that in these games the size of the focal area is more important than the resolution of the input state. High resolution pixels do not seem to hold much more valuable information than lower resolution pixels, but the lower resolution pixels hold more valuable information than no pixels at all.
When changing the \(70\times 70\)_Constant_ RoI layer into a _Decreasing_ resolution RoI layer, still with a focal area of size 70 \(\times\) 70 but with varying resolution, the performance was not improved. The higher resolution closest to the focal point does not make up for the loss of information related to the lower resolution far from the focal point. The _Decreasing_ resolution RoI layer does, however, perform better than the \(50\times 50\)_Constant_ RoI layer. The addition of the extra low resolution area does, in other words, improve the performance significantly. Again, it would seem that in these games the size of the focal area is more important than the use of a high resolution.
The addition of the peripheral vision only gives an agent access to a few extra pixel-values but affects the performance quite a lot. When the peripheral is added to the two previously mentioned RoI layers the performance is improved. In both these cases the peripheral seems to provide important information to the agent.
In summary, we were able to demonstrate working gaze control based on log-polar-like images. Pong and Breakout did not seem to benefit much from the gaze control. On the other hand, a low-resolution broad field of view proved important. The heat maps in Fig. 9 reflect the most important areas of the screen when playing Breakout, and one can see that the gaze did not move much. Still, without the gaze control and only a low-resolution image the performance was much worse. We are now exploring our approach on object detection, and we expect that this task would profit more from the high-resolution center area.
## Acknowledgements
This work was supported by the Wallenberg Autonomous Systems Program (WASP), Sweden. The computations were enabled by the supercomputing resource Berzelius provided by the National Supercomputer Centre at Linköping University and the Knut and Alice Wallenberg foundation.
Fig. 8: Performance in Breakout for models with peripheral vision.
Fig. 9: Heat map for focal point movement for the _Constant 70x70, sub2_ and _Decreasing(P) 30-50-70_ models. |
2302.00142 | Leveraging Interactions in Microfluidic Droplets for Enhanced
Biotechnology Screens | Microfluidic droplet screens serve as an innovative platform for
high-throughput biotechnology, enabling significant advancements in discovery,
product optimization, and analysis. This review sheds light on the emerging
trend of interaction assays in microfluidic droplets, underscoring the unique
suitability of droplets for these applications. Encompassing a diverse range of
biological entities such as antibodies, enzymes, DNA, RNA, various microbial
and mammalian cell types, drugs, and other molecules, these assays demonstrate
their versatility and scope. Recent methodological breakthroughs have escalated
these screens to novel scales of bioanalysis and biotechnological product
design. Moreover, we highlight pioneering advancements that extend
droplet-based screens into new domains: cargo delivery within human bodies,
application of synthetic gene circuits in natural environments, 3D-printing,
and the development of droplet structures responsive to environmental signals.
The potential of this field is profound and only set to increase. | Carolus Vitalis, Tobias Wenzel | 2023-01-31T23:20:01Z | http://arxiv.org/abs/2302.00142v2 | # Emerging Trends in Droplet Microfluidic Screens for Biotechnology
###### Abstract
Droplet microfluidic screens are an important high-throughput technology in the discovery, product optimization, and analysis of biotechnology applications. This review discusses the current use of droplet-based screens, explains the emerging trend of interaction assays in microfluidic droplets, and explains why droplets are particularly suited for this application. The interactions of many different biological entities can be screened, including antibodies, enzymes, DNA, RNA, eukaryotic cells and microorganisms, drugs, and other molecules. Recent breakthroughs have enabled new scales of bioanalysis and biotechnological product design. Recent methodology advances that expand droplet-based screens to new environments are also highlighted, including cargo delivery inside the body, application of synthetic gene circuits in the natural environment, 3D-printing, and designing droplet structures that respond to environmental signals.
keywords: biotechnology, microfluidics, droplets, droplet screen, interactions, single cell analysis, biosensors, directed evolution. PACS: 87.80.-y, 87.80.Rb, 87.14.-g, 47.80.-v. MSC: 76T30, 78A70, 68U10
## 1 Current Applications for Droplet Screens in Biotechnology
Biotechnological product development often involves screening for specific biological features among many possible options, and in enterprises where know-how and instruments are available, such screens can be performed at a particularly high throughput inside microfluidic droplets. A droplet microfluidic screen typically involves encapsulating cells and reagents into thousands to tens of millions of microfluidic droplets (fL to nL volume), incubating, enriching target droplets by droplet sorting [1; 2] (e.g., based on a fluorescent marker), and performing follow-up analysis (e.g., cultivation [3], DNA sequencing [4], or imaging [5]). Whichever trait may be targeted in a given biotechnological screen, encapsulating the assay into droplets allows ultra-high throughput experiments and closes a methodological gap between flow cytometry and the use of liquid-handling robots [6]. Flow cytometry is the benchmark for reliable high-throughput analysis and can only screen single cells for intracellular or surface-bound fluorescence markers and light scattering. In a liquid-handling robot, almost any other biological assay can be performed, including the screening of secreted molecules, interactions, and using genetic circuits. Such robots are already widely used in industry and well-funded biotechnology research laboratories, but their purchase and operation material costs, such as well plates, pipette tips, and reagents, are high, and their throughput lags behind that of flow cytometry and droplet microfluidic screens by orders of magnitude. Microfluidic droplets fill this gap in terms of their throughput and versatility.
The high throughput of droplet-based screens and their reagent efficiency and precision have already been utilized in different biotechnological applications. As illustrated in Figure 1, droplet screens in biotechnology are widely deployed for the product discovery of antibody targets [4; 7; 8] for which commercial screening instruments exist, and they are also used for drug discovery, such as antibiotics [9; 10; 3]. Product discovery screens are designed such that candidates with the desired cellular response generate a quantitative signal inside their droplet that can be detected in the droplet sorter; for example, the absence of fluorescent target cells killed by antibiotic molecules. Enriched candidate droplets may be DNA sequenced to obtain a gene library encoding the metabolite or antibody phenotype, or they may be further cultivated and examined by mass spectrometry.
In addition to discovering new functional molecules, droplet sorting is a popular tool for optimization procedures in biotechnology. Optimization screens with microfluidic droplets are mainly applied to select the best candidates for pathways [11; 12; 13; 14], enzymes [15; 16], and genetic circuits [17; 18; 19]. Such workflows usually start with computational, transposon, or mutation-generated gene libraries that are expressed individually inside droplets by cells or a cell-free reaction master mix, which must then be evaluated for their best-performing members, in a manner similar to other droplet screens. If selected high-performance genes are used as the starting point for an additional round of variation and selection, the process is called directed evolution [15].
Finally, droplets are routinely used to characterize the behavior of single cells of production strains and target tissues in biotechnology [20; 21]. Reliable single-cell analysis can be performed by simply reducing the concentration of the encapsulated cells during droplet generation. Such assays include single-cell transcriptomic gene expression profiling of eukaryotic cells [22], including yeast [23], combining sequencing with imaging [5], cultivation of cells in droplets and optimizing culture conditions [24], including the cultivation of spheroids and organoids in droplets [25], single-genome sequencing of bacteria [26; 27], and the absolute quantification of molecules using droplet-based digital PCR [28].
Droplet-based screens are advantageous not only because of their high throughput but also because of their lower reagent use, cost, and contamination [29], while improving the cultivability of microorganisms [30; 29; 31]. At the same time, to translate methods into a droplet microfluidic screen, one needs experimental knowledge, equipment, and interdisciplinary communication skills [32]. Furthermore, the respective methods may have to be re-optimized, for example, in terms of reagent concentrations, and benchmarked, which requires an up-front time investment.
## 2 Interactions--Emerging Assays in Droplets
Droplets should be chosen when more traditional screens, such as flow cytometry and antibiotic selection, cannot be used for the biotechnologically desired process. The commonly highlighted selling point of droplets is their ability to perform extracellular screens; in other words, they detect molecules in solution and use non-cell-bound fluorescent dyes [6]. An emerging trend in droplet-based screens is to focus on an even more impactful aspect of droplets: their ability to analyze interactions between cells or molecules. In fact, microfluidic droplets are not only uniquely able to process interacting
Figure 1: **Major droplet-based applications in biotechnology.** Microfluidic droplets are widely used to discover new functional antibodies and drugs, as well as to optimize pathways, enzymes, and genetic circuits by selection or directed evolution. Droplets are furthermore often utilized to characterize or grow single cells, among other more specialized applications.
cells or molecules at ultra-high throughput but are also particularly suitable for facilitating strong interactions and fast interaction readouts [24; 33]. Here is why: Interactions in biology are often based on a limited number of molecules and cells, which can come into contact to initiate meaningful interactions in the small and confined spaces of microfluidic droplets. Many such interactions would not yet take effect in well-plates that contain microliters instead of picoliter volumes for the same number of molecules, alongside potential contaminants. Considering these advantages, it is worthwhile to re-frame droplet screens in terms of interactions, as this perspective allows for the discovery of new valuable screens that were previously unavailable and might have significant potential for scientific and commercial innovation.
Interaction screens cover many different biological assays, such as interactions between different cell types, proteins, DNA/RNA, drugs, and other molecules, as illustrated in Figure 2. Literature examples can already be found for all 15 interaction categories shown in the figure's central graph, and there are further examples combining more than two types of interacting entities. At the same time, most interaction screen types are still in the early stages of development, and we expect many more research studies to be published in the coming years, which will help mature biotechnological screening applications. In the following paragraphs, we highlight screening examples of different types of interactions to stimulate the research and industrial use of droplet-based interaction assays.
Microfluidic droplets allow the quantitative analysis of the interaction networks of different cells and even additional substances, such as drugs or minerals, at different concentrations. Figure 2 A shows an example of a microbial network analysis of three labeled auxotrophic bacterial cultures [34]. We are particularly interested in these types of emerging screens owing to their potential application in modular genetic engineering approaches [40], which promise a better and broader biochemical synthesis space in co-cultures [41]. There are also promising microfluidic co-culture methods not performed in droplets [42]. Scalable analysis of microbial interactions will further benefit the field of microbiome analysis and its bioengineering applications [43].
Protein-protein interactions are another important field for which many microfluidic tools are available; these have been reviewed elsewhere [44]. Figure 2 B illustrates a method that profiles the co-occurrence of cell-surface proteins on different cell types using a labeled antibody pool [35].
Encapsulating interaction partners such as single cells or molecules into droplets is usually achieved by controlling the input concentration of each
Figure 2: **Schematic representation of different kinds of droplet-based interaction screens.** The interactions of many biological entities can be screened in droplets, including proteins such as antibodies and enzymes, eukaryotic cells, microorganisms, drugs, and other molecules, and DNA/RNA. At the center, binary combinatorial interaction possibilities are shown by connecting edges in a graph, and screens can incorporate more than two categories. Droplet schematics **A**-**H** highlight some examples of noteworthy interaction screens: (**A**) Quantitative network analysis of interactions between microorganisms and antibiotic drugs [34]; (**B**) Protein-protein interaction (co-occurrence) screens on the surface of different cell types [35]; (**C**) Active merging of droplets for interaction screens where all droplets have the same content type combination of cells and beads [36]; (**D**) Co-localization screen for functional antibodies that bind antigens to magnetic beads in droplets. The functional screen is followed by transcriptomics analysis in barcoded droplets to sequence the functional antibody coding genes [4]; (**E**) Microbiota-algae co-cultivation screen in gel-microdroplets to select and sequence synergistic helper strains that aid the growth of algae culture [37]; (**F**) Yeast production cell-biosensor interaction in droplets to quantitatively indicate excreted product concentrations [38]; (**G**) DNA-protein interaction screen to detect all human viruses in multiple samples at once in a highly multiplexed diagnostic assay in a double-droplet trap array [39]; (**H**) Cell-cell interactions to form spheroids and organoids in gel-microdroplets [25].
sample, resulting in a stochastic encapsulation distribution following the Poisson distribution. Therefore, obtaining the desired combination of different cells and molecules can be inefficient for a larger number of interacting partners. A number of methods have attempted to address this issue. Figure 2 C highlights a recent method for actively selecting and merging encapsulated entities into desired combinations [36]. The authors applied this method to screen for cytokines on beads that are released during the interaction of two cell types.
Interaction screens may combine many types of interactions. For example, in the functional antibody screen shown in Figure 2 D, the droplet contains a cell expressing antibodies, which interact with an antigen and labeled antibodies to co-localize fluorescence onto magnetic beads inside droplets. The screen continues by recovering the cells and re-encapsulating them into droplets with reagents and DNA-barcoded gel-microdroplets that aid in recovering transcribed genes of the antibody heavy and light chains [4]. In addition to the bead-based assay, the authors showed the screen directly on target cells, similar to an earlier antibody-target cell screen performed by another research team [8].
Many interaction screens can help improve the production of biotechnological products. For example, a higher yield of algal culture can be achieved when cells are grown in a co-culture with certain microorganisms instead of monocultures. Figure 2 E illustrates a screen that selects synergistic bacterial strains using the autofluorescence of growing _Chlorella sp._ colonies in gel-microdroplets [37]. The authors also showed that co-cultures in droplets could be incubated for 60 days. A similar co-culture approach has been used to bring microorganisms to laboratory cultures that did not grow in traditional cultures [45]. Similarly, the method illustrated in Figure 2 F used droplets for yeast metabolic engineering to quantify the product (p-coumaric acid) excretion into the droplet with biosensor reporter cells. Biosensor cells carry an operon with a transcription factor that responds to p-coumaric acid and codes for a fluorescent protein signal [38]. More recently, a cellular biosensor was developed to screen for the secretion of the industrial chemical 3-hydroxypropionic acid in droplets [46].
Droplets do not always need to be sorted in order to be characterized. The diagnostic assay illustrated in Figure 2 G can screen several clinical samples for all 169 human-associated viruses (viruses with \(>\)10 published sequences) in a protein-DNA interaction screen. First, all DNA-amplified patient sample-derived droplets and all the Cas13-based virus detection mix
droplets were color-barcoded. Droplets were then trapped pairwise side-by-side in a large double-droplet trap array. Finally, the color barcode combinations were imaged before and after the droplet merger, which resulted in a fluorescent readout where the target viral sequences were present [39].
Cell-cell interactions are also essential for physiologically relevant tissue formation, a key field for improving the early stages of clinical trials. Droplets are suitable for both spheroid and organoid generation, often in gel-microdroplets, such as Matrigel, agarose, alginate, gelatin, and others (see Figure 2 H), or gel-shell capsules [25].
## 3 Expanding Applications of Droplet-Based Interaction Screens
As we have seen, high-throughput interaction screens based on microfluidic droplets can be performed with a broad spectrum of droplet ingredients, such as cells and molecules, but also magnetic beads, color or DNA barcodes, and gels. In particular, developments in gel-microdroplet materials have helped expand the applications of droplets to cell cultures, multi-step procedures, and simplified molecular processing protocols. In this section, we highlight a few emerging material innovations that allow the expansion of applications of droplet-based interaction screens to a broader audience or new environments, including interactions between droplets and the environment itself.
For example, alginate hydrogels with an alginate-polyacrylamide coating have been used to physically contain genetically modified microorganisms [47], allowing them to be incorporated into the environment in a controlled manner, as illustrated in Figure 3 A. After 72 h, no microorganism escape was detected, but nutrients and signal molecules were exchanged between the environment and neighboring capsules.
A widely used advantage of gel-microdroplets and double emulsion droplets is that they can be processed in an aqueous environment and therefore sorted in a regular flow cytometer [48; 53]. Figure 3 B illustrates the use of double emulsion for cell sorting in a commercial flow cytometer. Such machines are available in a wider range of research laboratories and can therefore increase access to droplet-based interaction screens. Unfortunately, the generation of double emulsions and gel microdroplets is more challenging than that of regular droplets.
Cells can be enclosed in shell droplets, which can be 3D-printed and cross-linked into cell-containing structures [49], as shown in Figure 3 C. This
Figure 3: **Illustration of emerging droplet-based interaction screens facilitated by using new materials.** **(A)** Biocontainment of microorganisms is enabled by an alginate-polyacrylamide coating that allows controlled incorporation of these organisms into the environment [47]. **(B)** Cell sorting using a flow cytometer and double emulsions [48]. **(C)** Bioprinting of cells encapsulated in shell-droplets that deliver more precise spatial cell-cell interactions [49]. **(D)** Hair follicles delivered by microfluidic droplets of gelatin methacryloyl and chitosan hydrogel present a new method of targeted delivery [50]. **(E)** DNA droplets contain a genetic circuit capable of sensing the presence of microRNAs by disrupting the homogeneous distribution of DNA, separating them into three distinct droplets [51]. **(F)** Wastewater treatment using micromotors whose generation was assisted by microfluidic droplets and which catalyze the degradation of organic waste [52].
process opens new possibilities to place interaction partners in space for interactions between different encapsulated organisms and their environments. This technology may also enable the co-design of organs on a chip with a spatially and compositionally engineered microbiome.
A key environment for the placement and release of drugs and organisms is the human body. Indeed, droplets can be used for injection and targeted delivery [54], such as injecting gel-microdroplets loaded with hair formation cells and plasma with platelets into hairless tissue for regeneration [50]. The method illustrated in Figure 3 D utilizes gelatin methacryloyl (GelMA) and chitosan hydrogel for targeted delivery. Delivery systems can be designed in several layers and with multiple functions, such as featuring a liposome-based self-renewable hydration layer for delivery into joints [55].
Finally, droplets can be designed to actively respond to or actuate their environment. While these examples still tend to be in the proof-of-concept stage, they allow us to think about new types of interaction screens. Figure 3 E shows an example of a responsive droplet used for a bio-based assay: a single droplet made of a purpose-designed DNA gel with different DNA motifs and linkers to bind them. The linkers feature base complementarity to four different target miRNAs. In the presence of target miRNAs, the linkers hybridize to them, which unbinds the motifs and compartmentalizes the droplet into three single-motif droplets [51]. Another example of a responsive microfluidic droplet is the stimulated drug-content ejection of a multiphase Janus microparticle [56]. Droplet-based micromotors are not responsive but are constantly actuated [57], which can include catalytic interaction properties used for organic waste degradation in wastewater treatment [52] (Figure 3 F).
## 4 Conclusion
Droplet microfluidics is a record-holding high-throughput methodology for screening in biotechnological applications. The importance of droplet-based screens is increasing with the availability of more methods and tools. A key emerging trend in this field is the rise in interaction assays. Because of their small volume, droplets are ideal for facilitating and measuring interactions, and many recent high-impact studies have exploited them. We highlighted different types of interaction studies, including protein-protein interactions, cell-cell interactions, DNA-protein interactions, drug-microorganism interactions, and many more studies that are emerging in this growing and
exciting field. In addition to droplet contents, materials, and processing protocols, there are also new approaches to placing droplets in new environments, such as using them in more widely available instruments, in the body for organism and drug delivery, or in the natural environment. These approaches open new opportunities for interaction studies in which the droplet content or even the responsive droplet material itself can interact with the biological environment. This is likely to lead to new biotechnological applications. Finally, we acknowledge that there are many other measurement methods whose importance grows in the single-cell analysis field, such as mass spectrometry and Raman spectroscopy, to name just two. We look forward to seeing these different branches of method development link up to further expand and mature interaction screens for biotechnology.
## 5 Acknowledgements
This work is part of a funded project granted to T.W. from ANID FONDECYT Iniciacion 11200666. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
## 6 Credit Author Statement
Tobias Wenzel: Funding acquisition, Supervision. Tobias Wenzel and Carlos Vidal-Cespedes: Conceptualization, Data curation, Visualization, Writing - original draft, Writing - review & editing.
|
2309.14245 | Do We Run How We Say We Run? Formalization and Practice of Governance in
OSS Communities | Open Source Software (OSS) communities often resist regulation typical of
traditional organizations. Yet formal governance systems are being increasingly
adopted among communities, particularly through non-profit mentor foundations.
Our study looks at the Apache Software Foundation Incubator program and 208
projects it supports. We assemble a scalable, semantic pipeline to discover and
analyze the governance behavior of projects from their mailing lists. We then
investigate the reception of formal policies among communities, through their
own governance priorities and internalization of the policies. Our findings
indicate that while communities observe formal requirements and policies as
extensively as they are defined, their day-to-day governance focus does not
dwell on topics that see most formal policy-making. Moreover formalization, be
it dedicating governance focus or adopting policy, has limited association with
project sustenance. | Mahasweta Chakraborti, Curtis Atkisson, Stefan Stanciulescu, Vladimir Filkov, Seth Frey | 2023-09-25T16:04:11Z | http://arxiv.org/abs/2309.14245v1 | # Do We Run How We Say We Run? Formalization and Practice of Governance in OSS Communities
###### Abstract
Open Source Software (OSS) communities often resist regulation typical of traditional organizations. Yet formal governance systems are being increasingly adopted among communities, particularly through non-profit mentor foundations. Our study looks at the Apache Software Foundation Incubator program and 208 projects it supports. We assemble a scalable, semantic pipeline to discover and analyze the governance behavior of projects from their mailing lists. We then investigate the reception of formal policies among communities, through their own governance priorities and internalization of the policies. Our findings indicate that while communities observe formal requirements and policies as extensively as they are defined, their day-to-day governance focus does not dwell on topics that see most formal policy-making. Moreover formalization, be it dedicating governance focus or adopting policy, has limited association with project sustenance.
Open Source Software, Peer Production, Online Communities, Ostrom, Collective Action, OSS Governance
## 1 Introduction
An exemplary instance of online peer production [3], Open Source Software (OSS) has emerged as a multi-billion dollar informal industry supporting major contemporary tech enterprises, academia, and scientific research and development. Over the past three decades, the increasing stakes of OSS have paved the way for several non-profit OSS foundations providing standardized project support and governance frameworks to hundreds of projects, notable among them, the Apache Software Foundation (ASF). These organizations serve OSS projects by providing mentoring, much-needed infrastructure (servers, centralized storage [90], etc.), legal aid around OSS licensing [57], and well-maintained technical support [22]. OSS foundations like the ASF have brought OSS into the mainstream, attracting large numbers of contributors and financial support [50].
OSS projects have often benefited from some degree of overarching coordination and governance [40, 66]. Several of these foundations implement their own governance to manage projects and the developers they mentor. Written, well-laid-out _formal_ policies steer and synchronize community operations, thus minimizing the costs of coordination and management [6, 27]. At the same time, communities have often observed their own _informal_ rules and normative codes to structure activities, assign responsibilities, utilize project resources, and ensure sustained development [45, 88, 14, 50, 52, 32, 40, 30]. Consequently, community governance within foundation-mentored projects is a product of the foundation's policies, the project's own specific practices, and any interactions between those two sources of institutional structure. Hence, even among OSS projects from the same foundation, their decisions, actions, and ensuing interactions may reflect varied degrees of involvement with the centralized governance, as they may prefer to manage their community in their own fashion.
Non-profit OSS foundations are steadily rising, with one survey finding 101 active organizations that host over 1,600 OSS projects as of 2018 [33; 34]. With mentored projects generally showing higher survival rates than independent communities [69; 89], they are being increasingly viewed as a model to raise thriving projects producing usable, compliant software. Yet, OSS governance is not without its quirks and challenges [40; 14; 72]. While foundations may bolster communities with resources and support, the implications of such formalization for OSS have recently drawn significant research interest. Indeed, there have been instances where formal governance has produced little impact or has actually limited community flexibility and autonomy [57; 72; 35]. Hence, to assess the contribution of foundations towards OSS sustainability, we need to examine how they structure the mentored communities. We particularly look at how foundation policies are received in communities, as reflected in their operations, and at how they shape a project's governance focus.
The Apache Software Foundation Incubator (ASFI) was founded by the Apache Software Foundation (ASF) in 2002, in part to propagate Apache's approach to OSS governance, and has mentored over 300 projects ('podlings') since. Several non-profits require interested projects to undergo initiation through an incubation program to learn the ways and requirements of the foundation. ASFI also evaluates projects for performance and overall organizational fit throughout their incubation, before accepting ('graduating') them for continued support, or'retiring' them from the foundation. Being cognizant of the importance of project self-governance, ASFI empowers every project [23] to oversee its own governance with Podling Project Management Committees (PPMC). These PPMCs act as the interface between project developers and the ASF. The foundation's commitment to project self-reliance raises a fundamental question: what is the relationship of each project's emergent governance structure to the formal policies representing governance across the foundation?
Our study focuses on community-level governance among mentored projects and how they relate to foundation-level policies. We leverage developer conversations from ASFI's public mailing lists. Compared to traditional approaches like surveys, interviews, or other forms of qualitative inference, retrieving behavioral measures from trace data is faster, convenient for replication across foundations, and less susceptible to reporting bias while offering more granular, real-time insight. We assess each project's governance efforts and resulting operational structuring through the routinized _governed activities_ they perform. Next, we evaluate their policy _internalization_, i.e. the extent to which ASFI formal policies structure their community governance and frame their governed activities. We analyze how the extent of community governance efforts and policy internalization relate to ASFI's extent of regulation (number of rules) across different governance topics. Finally, we empirically investigate how community governance and its extent of formal policy internalization together explain its Incubator outcomes. Our contributions and findings are as follows:
1. We demonstrate a scalable approach, based on semi-supervised learning, to understand governance across peer-production communities, both its formal specification and lived instantiation.
2. A foundation-level analysis of ASFI projects shows that the extent of policy regulation -- the number of rules structuring different governance topics -- is not mirrored in practice through the extent of governed activity. Yet governed activity tends to be framed by policies in topics where they are extensively defined, as indicated through policy internalization among projects. Therefore, while communities show greater acknowledgment of formal policies in the topics where they are extensively laid out by the ASFI, such topics do not necessarily elicit more governance efforts from communities.
3. When it comes to sustaining the community and efficient development towards graduation, dedicating governance focus or internalizing policies from topics highly regulated/prioritized in formal policies had little association with the odds of success. All in all, formalized policies in OSS communities may not accurately reflect their underlying patterns of governance.
## 2 Related Work
Open Source governance includes all organizational structures and coordination mechanisms that regulate community interactions as well as product development. Prior work has extensively explored OSS community governance in terms of decision-making [30; 89], assignment of tasks [14; 50], managing developer roles and access [52; 32], mentorship [1], code quality, review, and contribution [40; 77], etc.
Community governance has been treated as an expansive, multi-level system of mutually interactive socio-technical networks [35; 26]. Meanwhile, Schweik et al. studied OSS projects at scale on SourceForge and found governance structures to be generally informal and lean, with increased sophistication and formal rules as communities grew [67]. Similar findings are also echoed by O'Mahoney's work on the Debian Linux community's evolving governance [58]. Community-level analysis of Apache Incubator projects also found that more successful projects showed greater adoption and use of definitive rules and norms [89]. Heckmann et al.'s investigation of decision-making processes
further found that in well-performing projects developers and users participated more proactively in steering the course of the project [30].
Leadership is a crucial aspect of OSS governance, where developers with greater technical initiative, development prowess, and effective communication strategies generally emerge to fill administrative roles [31]. Analysis of decision episodes in communities found administrators to be critical drivers during the initial phases of a project [30]. Meanwhile, Atkisson specifically examined individual mentors of the Apache Incubator and found a significant correlation between who managed a project and its odds of graduation [2]. Investigation of communities on SourceForge found that while a sizeable fraction (around 15-20%) of successful projects comprised a stable community with dedicated users, the rest showed rapid growth and were often led by a 'benevolent dictator' [68; 69].
Prior work has explored the challenges of OSS moderation. Attempts towards greater inclusiveness by enforcing community codes of conduct (CoCs) have often received limited engagement or been perceived as distractions from core development priorities [42]. Several studies have focused on interactions within foundation-led communities. A qualitative cost-benefit analysis of Apache Incubator policies found that the implementation efforts and payoffs are evenly balanced between projects and the ASF [71]. The implications of congruence/dissonance become particularly salient when it concerns software licensing. The rigor of the licensing requirements, including ASF's rights over individual contributions, has often seen varied reception and interpretation among OSS developers [57]. Sun's introduction of changes in the NetBeans licensing scheme threatened the collapse of the project itself [35]. Stringent terms set by corporations supporting gated OSS communities often turned away sincere contributors or restricted usage of the product, thus hindering developer engagement and community health [72].
While prior work has focused on either foundations or community dynamics, only a limited number of studies have empirically treated their mutual interactions unfolding in real time [56; 89]. Moreover, they have generally focused on a particular aspect of governance, such as licensing, through case studies of a select number of projects. We attempt to capture the multifacetedness of OSS governance (including but not limited to licensing, trademarks, documentation, committees, voting, etc.) and study hundreds of mentored projects. Motivated by collective action theory and behavior in communities of practice, we proceed to investigate the governance behavior of OSS communities around formalization.
## 3 Theoretical Motivation
### 3.1 Institutional Theory
OSS communities, generally comprising transient volunteer developers centered around a core of long-term contributors, organize in a decentralized fashion to create software for open use and distribution. This phenomenon has been framed in terms of the peer production of public goods, making OSS communities an increasingly important locus of online collective action research [3].
Institutions are defined as "... prescriptions that humans use to organize all forms of repetitive and structured interactions..." [55]. For a collectively maintained resource such as an OSS community, governance includes all formal and informal rules for management and production, along with the mechanisms for such policy design, reform, and implementation. [69; 46].
OSS governance lies on the spectrum between purely self-interest-driven spontaneous governance ("the invisible hand") and intentional governance [16]. Polycentric governance refers to a condition where there are overlapping interests between multiple centers of authority [46; 47; 36]. This often implies varying degrees of interdependence and autonomy among concurrent governments. For example, while ASFI encourages projects to admit consistent contributors, the specific process of admission is left to each project community itself [71; 90]. The dynamic nature of organizational fit is especially evident in decentralized [16; 83], ideology-rich environments like OSS projects [75], notably as resource abundance varies [70]. It is our goal in this paper to study the extent of internalization of ASFI's governance in regular project operations, along with the different themes of its rules and policies, across graduated and retired projects.
### 3.2 Communities of Practice and Organizational Learning
OSS projects are essentially online communities of practice [39; 5; 54], where coordinated operations are studied in terms of routines. Routines stem from beliefs, cognitive scripts, habitual conventions as well as evolving norms as they translate into 'repeated patterns of actions' across appropriate settings [10; 41]. These include management, standard operating procedures (e.g., workflows), or experiential strategies encoded into everyday activities and associated interactions [41; 15; 86]. Community routines may not be only technical, and may also emerge to coordinate developers through informal norms and social control [45; 52; 32]. For example, developers use their particular routines for managing and deploying builds, incorporating patches, testing, prioritizing issues, et cetera. Similarly, communities also
perform a sequence of routines when it comes to more formal events like setting up committees, organizing conferences, and ratifying releases.
Routines are generally stable [9], until changes in organization, technology, development goals, or other events cause them to evolve [20; 60; 21]. OSS projects are dynamic and decentralized with fluid membership [82] and may thus be inventive and flexible in their norms [49; 28]. Consider the following email from Apache NetBeans dated 9/13/2017. ASFI does not cover code management. Yet projects themselves usually choose between two approaches: review-then-commit (RTC) and commit-then-review (CTR). The example shows deliberation among NetBeans developers on their appropriateness and scope:
different asf projects have different policies. the important part is that we should have a common understanding about our commit policy. there might e.g. be a branch for the next release where rtc (review then commit) is applied. that's useful when preparing a release or for maintenance releases we still actively maintain. and beside that we might have a 'future' branch (e.g. on master) or multiple feature branches where ctr (commit then review) is standard. most asf projects have the whole repo on ctr...
Incubator policies are set up through pragmatic planning. The observed influence of the foundation's policies on a mentored project's routine operations indicates how its governance has been internalized in the community. The more community members discuss and describe activity in a way that resembles the framed policy, the more we can argue that members have internalized the formal description. At the same time, through learning and discovery [43], communities may also prefer their own procedures and protocols when Incubator policies are deemed less effective or inadequately defined [49; 83], or when they fall short of their needs. This further motivates us to understand the impact of foundations on projects through policy internalization across sustained community practices.
## 4 Research Questions
Formal rules and policies are critical in shaping the basic structure and guiding activity in an organization [60; 28]. Foundation Incubators implement systematic policies to coordinate and promote community engagement and productivity. These establish baseline standards and rules for participation across all the diverse member projects, may define certain roles and offices for leadership, assign responsibilities, as well as lay out the scope of various activities. At the same time, routines also reflect the project community's own implicit governance, i.e. informal beliefs, norms, codes of conduct, and other practices. Therefore, polycentric governance in foundation projects stems from governance among individual communities (Project Management committees (PMCs), as well as all other informal rules and developer norms) alongside the ASFI itself. Situated in the backdrop of OSS-foundation polycentricity, this section presents our research questions which look at community governance and policy internalization across the different aspects of ASFI governance.
The formalization of governance in traditionally volunteer-driven communities has been a contentious theme. OSS pioneer Eric Raymond observed that the "number of hoops" or too many formalized procedures and rules may drive away potential skilled contributors [63; 67]. Extensive regulation may introduce additional requirements and necessitate the enactment of institutional obligations. Therefore, communities may be expected to show more governed activity in domains that are heavily regulated, given their presumed importance in the ASFI ecosystem. As a result, we may expect a positive relation between the number of policies and the frequency of observed routine activities in a particular area of governance.
While there are concerns about redundant routines and overheads, lack of regulation may cause individuals/communities to draw upon larger social and cultural constructs for predictability. Such "tyranny of structurelessness" may perpetuate broader social inequalities [25]. The idea of "green tape" encapsulates the potential of policy to provide clarity and certainty, focus organizational attention, and convey legitimacy [17]. Implications may also extend to OSS formalization, whereby extensive yet well-designed policies may streamline rather than divert developer efforts. However, in domains where regulatory clarity is limited, greater project activity may become necessary to sustain development.
RQ1 explores how the extent of policy-making relates to the governance priorities and operations among mentored projects. We identify governance concerns/topics actively shared between the ASFI and its projects, through policy documents and extensive mailing lists across 208 communities. Since structuration from the mutual interaction of foundation policies and community governance determines the routine behavior of projects, we aggregate all similar activities from email conversations and examine their correlation with the topical distribution of ASFI policies.
**RQ1:** How does Incubator regulation relate to community-level governed activities across different governance topics?
Institutions manifest through the practice of routines formalized by such established rules [38]. As mentored projects increasingly internalize foundation policies, their operations are expected to be generally constrained and enacted through routines prescribed by such rules. Yet, community governance also requires the dynamic selection and adaptation of various other routines (Section 3.2). Therefore, we may expect variation in the influence of ASFI policy on governed activity, along the different governance concerns.
Well-designed rules seek to reduce uncertainty and can act as formulaic precedents to replicate success across mentored projects [49], or at least help standardize the provision for Incubator resources. Therefore, extensive regulation in a certain area of policy-making (i.e., more rules outlining a wide range of organizational possibilities), may induce greater adoption if it facilitates project functioning and improves efficiency.
On the other hand, activities and related exchanges in a topic may engage with policy only to a limited extent, while actual operations may reflect a marked departure from formal structure [83, 19]. This may be especially true when certain institutional obligations are ceremonial or necessary to maintain affiliation with the ASFI but are less relevant in day-to-day development. If such is the case, the observable policy internalization among communities across different governance topics may not be correlated to the extent of policy overseeing the topic.
We might expect alignment between the amount of formal policy on a topic and how resulting policy prescriptions are internalized in practice. Organizations engage in many functions, some of which are more critical than others. More important functions may be marked by a greater amount of policy formalizing behavior and may elicit greater internalization, toward more compliant execution. On the other hand, if policy extent is driven more by the complexity than the criticality of a governance subject, then that complexity may paradoxically predict a greater quantity of policy, for its various cases, and also less internalization, as practitioners take license from that very complexity to exercise greater discretion in how they execute.
RQ2 explores how the extent of policy-making relates to the formal policy internalization among projects. For all topical governed activities we measure policy internalization in terms of how discourse about those activities in general semantically reflects the policies formalizing those activities. Finally, we examine how such internalization varies with the extent of regulation across topics.
**RQ2:** How do the levels of policy internalization in governed activities relate to ASFI policy extent across different topics?
For an Incubator program to realize its goals, it is important to assess the association between its governance and project outcomes. At the same time, it becomes equally important for aspiring communities to understand behavior associated with communities that succeed in Incubator programs, particularly the extent of community governance as well the impact of foundation governance on such operations.
ASFI lays down three primary criteria to determine if a project has potential and is capable of sustaining development: 1) there is community activity evidenced by at least two releases, 2) the releases are compliant with the Apache license, and 3) the committers of a project are drawn from at least three entities (companies, research groups, etc.) [24]. The remainder of the policies serve to help the project achieve those goals.
While RQ1 and RQ2 measure _if_ there is a relationship between formal policy and community governance, RQ3 uses an externally valid measure of project outcomes to determine whether there _should be_ a relationship i.e. whether communities align governance focus or internalize policies in topics with more formal rules, in order to successfully realize their objectives. In particular, it examines if community governance efforts or the adoption of policies around formalization correlates to their graduation odds in the ASFI.
We pursue RQ3 through a project-level regression of all governed activities (frequency of structured, routine operations) among individual projects alongside the policy internalization among such operations (semantic similarity of governed activities to policies) against a binary measure of project success (graduation/retirement from the Incubator).
**RQ3:** How do governed activities and the extent of policy internalization relate to the success of projects?
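As an illustrative sketch of the project-level analysis described above (the data frame `df` and every column name below are hypothetical stand-ins for the paper's measures, not the authors' exact specification): a logistic regression of the binary Incubator outcome on a project's governed activity, its policy internalization, and the activity covariates introduced in Section 5 could be fit as follows.

```python
# Hypothetical sketch of an RQ3-style logistic regression; `df` is an
# assumed project-level data frame with one row per project, where
# `graduated` is the 0/1 Incubator outcome and the remaining columns
# hold governance measures and covariates (all names illustrative).
import statsmodels.formula.api as smf

model = smf.logit(
    "graduated ~ governed_activity + internalization"
    " + committers + commits + loc + dev_emails",
    data=df,
).fit()
print(model.summary())
```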
## 5 Data and Methods
### 5.1 Variables of Interest
#### 5.1.1 Governance Measures
We pursue two discursive measures of community governance from developer conversations in mailing lists, namely governed activities and their internalization of Incubator policies. Traditionally public and open access, OSS mailing lists are key to collaboration as they promote transparent peer review [40] and solicit reciprocal contributions [57]. Unlike
issue tracking and version control logs, these also contain exchanges beyond technical development, such as product planning, community management, ratification of major decisions, licensing, etc. Further, due to explicit ASF policies, all project activity is comprehensively archived across public mailing lists ("If it didn't happen on the mailing list, it didn't happen" [90]).
Prior work has extensively used organizational communications for understanding participant behavior and performance, including in OSS [35, 89, 31]. Li et al. used a grounded theoretic approach to understand the adoption and reception of community codes of conduct from developer exchanges. Affective features in developer messages have been used to predict leadership qualities among OSS developers [31], while Srivastava et al. studied enculturation and employee exit, where they treated individuals' linguistic divergence as a measure of cultural fit [74].
We described in Sec. 3.2 how routines reflect all prevailing governing norms among projects. We first identify the different governance concerns shared between projects and the Incubator by means of topic modeling of policies and conversations, and represent the following two measures by project and governance topic:
_Governed Activity_: The total number of recurring or routine activities about a governance topic, as discussed in a project's mailing list. Higher presence of governed activity indicates greater governance efforts to structure and routinize community operations. For example, if a community establishes a norm for ratifying releases, future releases will likely follow the established schema. In ASFI projects, such governance is a culmination of the foundation's policies as well as the underlying codes and norms of the community developers. Recurring activities are aggregated over their textual similarity.
_Policy Internalization_: This measure represents the extent to which governed activities are structured by ASFI policies. Therefore, higher policy internalization in governed activities indicates greater integration of the foundation into the community's governance.
Methods explored to operationalize internalization included evaluating direct compliance/entailment between an observed activity and policies. Such binary measurements were found to be insufficient to account for the drift between formally articulated statements (framed policies) and informal, practical discourse (conversations), or, importantly, to reflect graded changes in the rate of institutional diffusion [76]. For example, observations from initial developer discussions over a release vote, an ASF-specific requirement, to an actual voting event are important to understand the gradual internalization of governing institutions.
For a topical governed activity in a project, we measure policy internalization through its semantic similarity against policies within the respective topic. Semantic similarity is an assessment of meaningful and conceptual relationships between texts [37]. Measured on a continuous [0,1] scale, semantic similarity assigns text pairs higher (lower) values for agreement (contradiction) [7, 85]. Moreover, semantic similarity can be used to quantify activities that are neutral but indicate institutional diffusion, through their degree of resemblance in how they invoke roles, designated responsibilities, and requirements outlined by a policy.
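To make this operational definition concrete, one plausible formalization (our notation; the max-aggregation and rescaling choices are illustrative assumptions rather than the paper's exact estimator) is the following: given a bi-encoder embedding \(e(\cdot)\) and the set \(R_{t}\) of ASFI rules under topic \(t\), a governed activity \(a\) receives the internalization score

\[\operatorname{int}(a,t)=\max_{r\in R_{t}}\,\tfrac{1}{2}\Big(1+\cos\big(e(a),e(r)\big)\Big)\in[0,1],\]

and a project's topic-level internalization is taken as the mean of \(\operatorname{int}(a,t)\) over its governed activities in \(t\).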
_Policy Extent:_ A foundation-level variable indicating the extent of ASFI's regulation across topics. It is represented as the frequency (count) of formal rules overseeing each governance topic, with higher values (number of rules) in a topic indicating greater ASFI regulation.
#### 5.1.2 Project membership and activity

Projects in ASFI are diverse, and their governance and Incubator outcome may also be subject to community structure, activity levels, etc. Since we are interested in analyzing how governance behavior correlates to project sustainability, our analysis has to simultaneously control for project attributes, such as community size and development intensity. We incorporate four suitable covariates in our analysis through _community size (committers), number of commits, code base size (lines of code; LOC)_, and finally the frequency of interaction among the project developers (_developer emails_) over project mailing lists.
### 5.2 Datasets
We center our analysis of ASFI governance on a set of 234 comprehensive policies coded from the key ASFI documents and guidelines [71]. These span multiple sources such as the official Apache Incubator policy manual, the community guide, the Podling Project Management Committee (PPMC) guide, the Apache cookbook, the mentorship guide, the graduation and retirement guides, and finally the release management guide.
In the ASFI, project incubation lasts up to several months followed by an assessment and a formal vote to decide on graduation into ASF for continued support or retirement. Yin et al. scraped all mailing lists across 269 Apache projects from when they joined the Incubator and up to their last day in the ASFI [90]. Since we solely focus
on norms and activities within communities, we only retain the 'dev' (community developers) subdirectory emails across all projects. We exclude redundant content, such as auto-generated emails for issues posted and resolved and other development-related notifications (JIRA, GitHub), through source address-based filtering. Periodic emails were also circulated by the Incubator Project Management Committees (IPMC) or project mentors, which were formal, administrative, and generally concerned progress reporting. All such emails have a fixed format and were identified and filtered through string matching. This mitigates potential bias in measurements due to superfluous policy content from the administration, as our subsequent analysis concerns governance-related behavior within and among community developers only.
For project-level covariates, we obtain commits, lines of code, and the number of active contributors. ASFI projects use GitHub, Subversion, or a combination of both for maintaining their codebase. Stanciulescu et al. [78] extracted monthly performance metrics for 218 ASFI projects during their incubation. However, the tooling infrastructure they developed only supported mining software metrics from Git repositories. Moreover, Yin et al. mined project mailing lists up to Jan 2021, including ones that were mostly SVN-based, while Stanciulescu et al. span projects from March 2003 up to May 2021. Given these differences, we based our study only on those projects that are common to both datasets. This yielded 214 projects for which both project measures and email data were available.
Moreover, there were some differences in the way these data were collected. Yin et al. collected data in time windows of 30 days, whereas the other dataset collected data on a monthly basis (calendar timestamps). To resolve this mismatch, we modified the collection timeline to a 30-day time window in the tool provided by Stanciulescu et al., matching the time window in the dataset from Yin et al., and repeated the measurements for our variables of interest for these 214 projects.
### 5.3 Measurements
#### 5.3.1 Extracting activities
Routines have been studied at multiple levels, from the most nuclear activities to complete processes. The most fundamental unit, the performance program [61; 44], is defined as a 'chunk' of scripted activity, generally a routine in itself or part of a larger process. To capture organizational routines from ASF email discourse, email texts and policies were first tokenized into sentences through StanfordNLP's Stanza library [62]. We next turn our attention to extracting different activities from within these sentences.
This serves several purposes. Firstly, most existing language models, including the ones used subsequently, encounter complexity overheads and truncate long sentence inputs beyond a maximum token length. Secondly, sentences can be compound,
Figure 1: Language modeling pipeline for extracting activities, aggregating routine governed behavior, and evaluating internalization.
conveying multiple activities with their specific context, and possibly spanning different topics (Table 2). Therefore, decomposing sentences into granular units of analysis like performance programs allows depth and insight in subsequent analysis.
We decompose sentences while preserving their context. Context is important in understanding different routines and their place in the development ecosystem (e.g., _'Projects **requesting** Apache infrastructure'_ vs. _'Project Management Committee **requesting** progress report'_ or _'Projects **issuing** press release'_ vs. _'Resolve **issues** that are release blockers'_). To attain fine-grained extraction of different activities and their context nested within sentences, we use semantic role labeling.
Semantic role labeling or SRL [37] is an NLP task that extracts roles (actors, direct or indirect objects, etc.) associated with an action (verb) along with other modifiers from a sentence. Additionally, SRL also extracts constituents with contextual information such as the time of act, manner, direction, goal, purpose, cause, etc [4].
**Original Policy:**
'After a vote has finished, the ipmc must send a notice email to the board and then wait for 72 hours before inviting the proposed member'
**Semantic Role Parsing:**
'ARG0': ['the ipmc'], 'ARGM-MOD': ['must'], 'V': ['send'], 'ARG1': ['a notice'], 'ARGM-DIR': ['email'], 'ARG2': ['to the board'], 'ARGM-TMP': ['after a vote has finished']
'ARG1': ['the ipmc'], 'ARGM-MOD': ['must'], 'V': ['wait'], 'ARGM-TMP': ['after a vote has finished', 'then', 'for 72 hours', 'before inviting the proposed member']
**Performance Programs (After reconstitution):**
'After a vote has finished the ipmc must send a notice email to the board'
'After a vote has finished the ipmc must then wait for 72 hours before inviting the proposed member'
**Original Sentence:**

'( 1 ) I'll be away from my computer starting Friday and through the New Year, so I won't be able to do much to help if folks want to release 2.1 during that time ( not even testing ).' (Apache Roller, 12/21/2005)

**After SRL and reconstitution:**

'I'll be away from my computer starting Friday and through the New Year' (Schedules/Events)

'I won't be able to do much to help if folks want to release 2.1 during that time ( not even testing )' (Release Management)

Table 2: Capturing granularity: Sentences spanning multiple, thematically distinct operations. In this example, a developer shares their vacation timeline with the community in general, while also discussing implications for a tentative release. Topics indicated for each activity are inferred as described in Section 5.3.3.
We chose a BERT [18] based implementation of SRL [73] developed by AllenNLP on the Propbank annotation scheme. The model achieves state-of-the-art performance on the English Propbank (Newswire) as well as a test F1 score of 0.864 on the Ontonotes 5.0 dataset. We identify all possible semantic roles associated with each distinct verb from compound sentences. These SRL frames were reconstituted into distinct activities, by reordering the semantic roles and all other contextual arguments for each verb, along with their relative positions from the original sentence. The 723,863 developer emails in our data generated 2,248,950 expressions of activities.
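To make the extraction step concrete, the following is a minimal sketch of the sentence-splitting and SRL-based reconstitution (not the authors' code): the checkpoint URL is the publicly distributed AllenNLP SRL-BERT model, and the helper name and length heuristic are our illustrative assumptions.

```python
# Minimal sketch of activity extraction: Stanza sentence splitting,
# AllenNLP SRL-BERT frames, and per-verb reconstitution.
# Requires: stanza, allennlp, allennlp-models.
import stanza
from allennlp.predictors.predictor import Predictor

stanza.download("en", processors="tokenize")  # one-time model download
splitter = stanza.Pipeline(lang="en", processors="tokenize")
srl = Predictor.from_path(  # publicly distributed SRL-BERT checkpoint
    "https://storage.googleapis.com/allennlp-public-models/"
    "structured-prediction-srl-bert.2020.12.15.tar.gz"
)

def extract_activities(email_body):
    """Return one reconstituted 'performance program' per verb frame."""
    activities = []
    for sent in splitter(email_body).sentences:
        parse = srl.predict(sentence=sent.text)
        words = parse["words"]
        for frame in parse["verbs"]:
            # Keep every token that fills a semantic role for this verb
            # ('O' tags mark tokens outside all roles), preserving order.
            kept = [w for w, t in zip(words, frame["tags"]) if t != "O"]
            if len(kept) > 2:  # heuristic: drop bare auxiliaries/fragments
                activities.append(" ".join(kept))
    return activities

print(extract_activities(
    "After a vote has finished, the IPMC must send a notice email to the "
    "board and then wait for 72 hours before inviting the proposed member."
))
```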
In governance research, rules are specified in terms of grammatical constituents representing the governing (committees, boards, etc.), the governed (e.g. committers), the activities they undertake, and the conditions they entail (e.g. voting before a release) [13]. Our policy reference data [71] comprised descriptive policies spanning multiple nested rules (Table 1). Therefore, SRL-based preprocessing was also extended to the policy documents, whereby the 234 policy descriptions from Sen et al. were parsed into 422 individual rules.
Finally, we conduct an additional pre-processing step. Developers often use mailing lists for technical discussions and clarifications. As a result, they often contain stack traces, logs, etc. which may be parsed as regular activities. We restrict our analysis to human-readable, standard English-language data, which can be compared and interpreted against governance policies such as those of ASFI. We detect and retain only English texts using a HuggingFace XLM-Roberta-base model [11] trained for language identification. This reduced the number of extracted activities to 2,029,691.
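A sketch of this filtering step follows; the checkpoint name (`papluca/xlm-roberta-base-language-detection`, a public XLM-RoBERTa model fine-tuned for language identification) and the confidence threshold are stand-ins we assume for illustration, since the exact configuration is not named here.

```python
# Sketch of the English-language filter; checkpoint and threshold are
# illustrative assumptions, not the paper's exact configuration.
from transformers import pipeline

lang_id = pipeline(
    "text-classification",
    model="papluca/xlm-roberta-base-language-detection",
)

def keep_english(activities, threshold=0.9):
    # The classifier emits ISO-639-1 labels such as 'en' with a score.
    preds = lang_id(activities, truncation=True)
    return [a for a, p in zip(activities, preds)
            if p["label"] == "en" and p["score"] >= threshold]

print(keep_english(["please vote on the release candidate",
                    "la lista de correo del proyecto"]))
```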
#### 5.3.2 Governed Activities: Aggregating routines
As described in Section 3.2, routines are activities carried out time and again, under specific circumstances [10]. Unlike well-documented formal policies, routines are more dynamic and span activities dictated by emerging norms and operational priorities. Hence, it is extremely challenging to comprehensively codify activities in a community and train models that can discriminate routine behavior from non-routine behavior.
Importantly, we are interested in a pipeline that supports governance analysis across diverse online communities. Since routines are influenced by technological trends, the nature of the product, the specific community, the utilities involved, etc., a supervised model built specifically on ASFI data may produce inaccuracies when extended to other communities and foundations. Based on the theoretical definition of our construct of interest (i.e., governed activities are routine or 'recurring' operations), we leverage alternative learning methods compatible with our goals. We hereby describe our approach to discovering routines as similar activities in email data, through semi-supervised clustering.
We find similar ('recurring') activities through semantic similarity-based aggregation [64]. Popular approaches to semantic representation include word-level [51, 59] and sentence-level [12, 8, 84] embeddings, and, more recently, language model-based approaches that allow for more advanced representation learning for different semantic tasks.
The bi-encoder architecture was developed for computationally efficient semantic encoding of texts [64]. It involves training a Siamese network of two identical transformers to generate contextual encodings for two distinct text inputs. The averaged output from each transformer is then subjected to a cosine similarity loss objective function. By the end of the joint fine-tuning, both transformers are capable of independently generating semantic embeddings for any given text input. HuggingFace [87] hosts multiple domain-specific bi-encoders. We use a general-purpose bi-encoder pre-trained on the domain-relevant corpus from Stack Overflow, a question-answer platform widely used by developers. All transformer-based experiments henceforth were conducted on a single Tesla T4 GPU.
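The snippet below sketches this encoding step with the sentence-transformers library; the Stack Overflow pre-trained checkpoint named here is a public HuggingFace model chosen for illustration, and the study's exact identifier may differ.

```python
# Bi-encoder embeddings for governed activities (illustrative model name).
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("flax-sentence-embeddings/stackoverflow_mpnet-base")
texts = ["the ipmc must send a notice email to the board",
         "once the vote passes I will notify the board"]
embs = encoder.encode(texts, normalize_embeddings=True)
cosine = float(embs[0] @ embs[1])  # unit vectors: dot product = cosine
```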
Next, for aggregating the encoded texts, we use BERTopic [29]. It supports hierarchical density-based clustering, or HDBSCAN [48], for most HuggingFace bi-encoders, followed by topic modeling of the inferred clusters. To train the clustering model, we uniformly sample 100,000 activities out of the 2,029,691 activities previously extracted. Modeling activities across projects together allows for identifying and grouping them under a set of shared governance topics.
To cluster community activities intersecting with ASFI concerns, the 422 rules from ASF policies are passed as initial seeds to BERTopic. For best clustering results, we conducted hyperparameter tuning for BERTopic's HDBSCAN through Density-Based Clustering Validity (DBCV) measures [53]. DBCV scores rate density-based models from -1 to +1, with higher values indicating better clustering quality. To find hyperparameters returning maximum DBCV, we tune over the following HDBSCAN arguments: minimum cluster size and minimum samples. Higher values of the cluster-size threshold can lead to the merging of clusters, while a higher minimum sample size promotes denser clustering and more outliers. Both parameters were varied in combinations from 10 (0.0001% of sample size) to 100 (0.001% of sample size). Prior to clustering, BERTopic also uses Uniform Manifold Approximation and Projection (UMAP) for dimension reduction of the embeddings. The number-of-neighbors parameter in UMAP decides the trade-off between preserving global and local structure and was also varied between 10 and 100. We retain the model with the best relative DBCV score.
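A condensed sketch of the seeded clustering and tuning loop follows; `rules` (the policy rules tokenized into keyword lists) and `sampled_activities` (the 100,000-activity sample) are assumed to exist, and the grids mirror the ranges quoted above.

```python
# Seeded BERTopic clustering with DBCV-based hyperparameter search.
from bertopic import BERTopic
from hdbscan import HDBSCAN
from umap import UMAP

best_model, best_dbcv = None, -1.0
for mcs in (10, 25, 50, 100):          # minimum cluster size
    for ms in (10, 25, 50, 100):       # minimum samples
        for nn in (10, 50, 100):       # UMAP number of neighbors
            hdb = HDBSCAN(min_cluster_size=mcs, min_samples=ms,
                          gen_min_span_tree=True)  # required for DBCV
            model = BERTopic(embedding_model=encoder,
                             umap_model=UMAP(n_neighbors=nn),
                             hdbscan_model=hdb,
                             seed_topic_list=rules)
            model.fit(sampled_activities)
            dbcv = hdb.relative_validity_   # DBCV-style score in [-1, 1]
            if dbcv > best_dbcv:
                best_model, best_dbcv = model, dbcv
```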
#### 5.3.3 Topic modeling of governed activities
BERTopic finally conducts TF-IDF across the dense clusters of governed activities to assign them topics. Words from the rules were used to suitably reweight the inverse document frequency of words. Topic coherence metrics [65] supported by Gensim [91] evaluate topic modeling performance on a scale of 0 to 1. Our final model shows a topic coherence \(C_{v}\) of 0.683, indicating strong topic correlation.
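A sketch of the coherence check is below, reusing the fitted `best_model` and tokenized activity sample from the previous step.

```python
# C_v topic coherence of the fitted BERTopic model via Gensim.
from gensim.corpora import Dictionary
from gensim.models.coherencemodel import CoherenceModel

texts = [doc.split() for doc in sampled_activities]
topics = [[word for word, _ in best_model.get_topic(t)]
          for t in best_model.get_topics() if t != -1]  # skip outlier topic
c_v = CoherenceModel(topics=topics, texts=texts,
                     dictionary=Dictionary(texts),
                     coherence="c_v").get_coherence()
```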
Policy documents often contain canonical descriptions of norms and processes that are dated and removed from practical operations [5, 54]. After clustering, 106 out of the 422 policy rules were disregarded as outliers, due to negligible mention over emails. A total of 42 distinct topics were identified between ASFI policies and email activities, and 211 topic clusters were discovered among all activities. Around 493,008 activities were found to belong under
these 42 governance topics from ASF. Final topic label assignments were deduced based on the assigned policies and top keywords from each topic, and overall domain knowledge of ASFI.
#### 5.3.4 Measuring institutional internalization
For governed activities under any ASFI governance topic, we measure the extent to which they reflect the ASFI policies overseeing the same topic. Cross-encoders and poly-encoders [64] are standard language-model architectures for semantic comparison. They treat the sentences or texts to be compared as simultaneous inputs and attend to them jointly for semantic scoring. Bi-encoders and cross-encoders are often used together for information retrieval and text ranking. While bi-encoders can encode individual sentences to support high-level clustering over large sets of text, cross-encoders are suitable for more precise, pairwise comparison between smaller sets of texts [80].
We use a Distil-RoBERTa-base cross-encoder from HuggingFace which rates text pairs on a continuous scale of 0 to 1, with higher scores indicating greater similarity. The model demonstrated a Spearman rank correlation of 0.87 with respect to the human-annotated scores from the STS text similarity benchmark [7]. Using this cross-encoder, we compare every governed activity against all the rules assigned to the same governance topic to find the ones it resembles most closely. The mutual semantic similarity score of the governed activity and the closest policy is used to represent the activity's extent of ASFI policy _internalization_. Consequently, we obtain internalization scores for all 493,008 governed activities under each of the 42 governance topics.
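The scoring step reduces to a few lines, sketched below with a public STS-trained cross-encoder (treat the model name as illustrative rather than the study's exact choice).

```python
# Internalization = similarity to the closest same-topic policy rule.
from sentence_transformers import CrossEncoder

scorer = CrossEncoder("cross-encoder/stsb-distilroberta-base")

def internalization(activity, topic_rules):
    scores = scorer.predict([(activity, rule) for rule in topic_rules])
    return float(max(scores))
```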
### Analysis
RQ1 and RQ2 pursue an ASFI-level exploratory analysis of our governance measures against policy extent. RQ1 compares the proportions of ASFI rules (level of regulation) and project-level governed activity across the topics, while RQ2 follows up by assessing the distribution of ASF policy internalization in activities.
Finally, for RQ3, we examine governance behavior among projects, against their graduation or retirement from incubation. We fit a binomial generalized linear model (GLM) with a logit link, regressing project-level measurements of governance as well as the covariates against their respective incubation outcome. We conduct our analysis through the GLM suite (regression, multicollinearity check, and validation of assumptions) supported by the _statsmodels_ package in Python. LASSO-based variable selection is conducted prior to regression and inference, for which we use the _group-lasso_ Python package. We set the significance level of our analysis at the standard \(p<0.05\).
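A minimal sketch of the regression call, assuming a standardized design matrix `X` (covariates plus per-topic governance measures) and a 0/1 outcome vector `graduated` per project:

```python
# Binomial (logit) GLM of incubation outcome on governance measures.
import statsmodels.api as sm

glm = sm.GLM(graduated, sm.add_constant(X), family=sm.families.Binomial())
result = glm.fit()
print(result.summary())
```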
RQ1: How does Incubator regulation relate to community-level governed activities across different governance topics?
As described in Section 3.1, we focus our analysis on governance topics shared between the ASFI and its mentored projects. We visualize ASF's policy extent against the distribution of governed activity across topics (Figure 2). The Pearson correlation between the distributions was 0.23 (_p = 0.13_), indicating that how communities perform governed activities across topics is uncorrelated with the amount of policy structuring those topics.
RQ2: How do the levels of policy internalization in governed activities relate to ASFI policy extent across different topics?
To explore RQ2, we additionally examine the distribution of internalization scores of governed activities conditioned on governance topics (Figure 3). Higher mean internalization scores indicate that in a particular topic, the projects' practiced routines are more framed by formalized Incubator policy. We observe a trend of generally greater internalization with increasing policy extent: a Pearson correlation test between the topic-wise policy extent and mean internalization scores was found to be 0.744 (_p < 0.001_). In other words, areas of governance that receive more attention in formal policy also tend to be enacted by participants in a way close to the policy descriptions.
RQ3: How do governed activities and extent of policy internalization relate to the success of projects?
ASFI strives to build meritocratic communities and assesses projects' performance throughout the incubation time frame. As membership and activity levels undergo constant changes in OSS, we average the monthly measures of active committers, developer emails, and commit activity to capture their sustained levels. The code base variable was represented as the net size of the project repository in terms of overall lines of code (LOC) written by the project while in ASFI. Prior work on ASFI has shown that successful projects tend to graduate early [89], so we incorporate the total number of months spent by the project in the Incubator as one of the covariates. To similarly adapt our governance measures, we represent governed activity through the total number of routine activities observed in a project during
incubation, across the mailing list. The overall policy internalization along a governance topic for every project was similarly evaluated, by averaging the scores across all the governed activities. The resulting number of predictors was 89, including five covariates and the two distinct governance measures across each of the 42 topics. Six projects were dropped as their commit history was unmeasurable through our metrics tool, leading to 208 observations.
Certain project mailing lists did not reflect governed activity under some of the topics, making the _governed activity_ of that topic equal to 0. There are 54 projects with 0 observed _governed activity_ in at least one topic. Rather than dropping those observations entirely, we retained them in a way that minimizes the information added to the system through the imputation procedure but allows us to retain the information in the non-missing variables: unmeasured internalization scores were filled through iterative round-robin imputation supported by the Python package scikit-learn. This method of imputation, a pythonic implementation of MICE [81], is unbiased relative to other choices we could have made, such as assigning 0.\({}^{1}\)
Footnote 1: The model was run with list-wise deletion for all projects with at least 1 missing value and the only difference was a change of the sign for the number of commits, an effect for which we have low confidence in all our models. We further repeated the analysis without 5 topics with more than 10% missing entries. We did not observe major changes in effects (size, direction) and significance.
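A sketch of this imputation step, where `scores` is an (n_projects × n_topics) array with NaNs wherever a topic never appeared on a project's list:

```python
# MICE-style round-robin imputation of missing internalization scores.
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

scores_imputed = IterativeImputer(random_state=0).fit_transform(scores)
```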
Figure 2: Left: Distribution of ASFI policy extent across governance topics. Right: Distribution of governed activity of projects across different governance topics. Governed activity was not found to be significantly correlated with policy extent.

Project-level covariates (committers, emails, codebase, and commit activity), as well as governed activity for every topic, were log-scaled to address skew and to facilitate comparison across projects of different scales. Subsequently, all variables were standardized through z-score standardization. We then addressed multicollinearity by removing all variables with Variance Inflation Factors \(>5\). We then performed logistic LASSO-based variable selection over 5-fold cross-validation with hyperparameter tuning over the log loss. After the multicollinearity tests and variable selection, we have a reduced set of 11 significant predictors.
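The multicollinearity screen can be sketched as an iterative VIF filter over the standardized design matrix (here a pandas DataFrame `X`):

```python
# Drop predictors until all Variance Inflation Factors are <= 5.
from statsmodels.stats.outliers_influence import variance_inflation_factor

def drop_high_vif(X, cutoff=5.0):
    X = X.copy()
    while True:
        vifs = [variance_inflation_factor(X.values, i)
                for i in range(X.shape[1])]
        worst = max(range(len(vifs)), key=vifs.__getitem__)
        if vifs[worst] <= cutoff:
            return X
        X = X.drop(columns=[X.columns[worst]])

X_reduced = drop_high_vif(X)
```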
We construct nested regressions, whereby we fit four models to assess the contribution from different groups of variables (Table 3). These are the "baseline" model with only covariates as predictors (M1), a second model adding topical governed activity variables to the baseline (M2), a third model adding only policy internalization variables to the baseline (M3), and the final full model including all three groups of variables: baseline covariates, governed activity, and policy internalization measures (M4). For every model, we additionally checked for outlier influence using Cook's distance and found no data points with extreme leverage (\(D>1\)). The assumption of log-odds linearity was validated using the Box-Tidwell test, whereby no interaction terms \(x*log(x)\) were found significant. We observe that the predictive efficiency and fit of the models improve with the step-wise addition of governance variables, a reassuring sign of valid model construction across the three types of variables. The full-variable model M4 was found to be the most parsimonious (\(\Delta\text{AIC}=23.05\) relative to the second-best model) with goodness of fit at 0.648 (Tjur's pseudo-R\({}^{2}\)). Further, it showed a weighted F1 score and accuracy of 93.6% and 93.7%, respectively. We hereby report our findings based on M4.
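For reference, Tjur's pseudo-R² is simply the gap in mean fitted probability between the two outcome groups; a quick sketch using the GLM fit from above:

```python
# Tjur's pseudo-R^2 for a fitted binomial GLM.
import numpy as np

p_hat = np.asarray(result.fittedvalues)   # fitted probabilities
y = np.asarray(graduated)
tjur_r2 = p_hat[y == 1].mean() - p_hat[y == 0].mean()
```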
Figure 3: Left: Distribution of ASFI policy extent across governance topics. Right: Distribution of internalization scores within topics. Red and green markers indicate the median and mean, respectively. Internalization is observed to be higher in governance topics that are more regulated.
Factors that correlate positively with a project's chance of graduating include greater internalization of policies related to "Project configuration", "Graduation requirements/Maturity Model", and "Voting protocol/Timeline." Moreover, projects that govern patch-handling activities i.e. more governed activity in "Patches", are associated with higher graduation odds. On the other hand, factors that correlate negatively with successful graduation include high internalization of "Project Wiki" and a higher volume of governed activity on Incubator reporting.
We observe that neither governed activity around nor internalization of the five most highly regulated topics (those on committees, licensing, email communications, and releases) predicts project success. In fact, project success seems to be correlated mostly with the internalization of policies that receive little attention in formal policy. This further complements our overall finding that projects do not run how they say they run, and suggests that formal policies may not present the full picture of how communities govern to sustain themselves.
Our primary analysis is correlational and not causal. This is important to emphasize because our findings for the "Graduation Requirements" topics are probably a spurious but encouraging validity check: it is likely that the act of a project graduating and conducting necessary protocols explains the positive effect of internalization of graduation policies. Similarly, "Project Wiki" is composed of a policy that is only activated once the Incubator has voted to retire a project. The most likely explanation for its negative effect is that project retirement is causing policy enactment, not the other way around.
To check robustness and probe some unidirectional interpretations, we perform a post-hoc analysis where we repeat all experiments with a modified policy dataset that excludes these confounding end-of-incubation policies, which apply only after a determination of graduation or retirement has been made. We focus this robustness analysis exclusively on policies that are relevant to the active incubation and growth phase of ASFI projects. Therefore, we removed 34 of the 234 policies that are generally applicable to projects post-graduation/retirement or only at the terminal stage of incubation (graduation vote, transferring trademarks, ceremonial protocols of graduation/retirement, etc.). For RQ1 and RQ2, we once again retain the previously observed trend, or lack thereof, between policy extent, governed activity, and internalization. For RQ3, we retain significant effects from three out of the six variables that stood out in our original analysis. These include "Patches" (governed activity), "Incubator Reporting" (governed activity), and "Voting Protocols/Timeline" (internalization). As expected, we no longer observe the significant effect associated with "Graduation Requirements", which comprised several policies (now removed) closely related to the graduation event, while "Project Wiki", which treated post-retirement project wrap-up, was not among the topics inferred from the reduced set of policies. Lastly, the topic "Project Configuration" does not exert a significant influence on project outcomes. Details are provided in Appendix A.
\begin{table}
\begin{tabular}{l r r r r r r r r}
\hline\hline
 & \multicolumn{2}{c}{**M1: Covariates**} & \multicolumn{2}{c}{**M2: Covariates and Governed Activity**} & \multicolumn{2}{c}{**M3: Covariates and Internalization**} & \multicolumn{2}{c}{**M4: All**} \\
Predictor & Coef. & p & Coef. & p & Coef. & p & Coef. & p \\
\hline
Intercept & 2.490 & 0.000 & 3.032 & 0.000 & 3.252 & 0.000 & 4.427 & 0.000 \\
Committers & 0.077 & 0.874 & -0.018 & 0.973 & -0.3074 & 0.637 & 0.127 & 0.875 \\
Commits & 0.705 & 0.140 & 0.615 & 0.243 & 0.772 & 0.195 & 0.197 & 0.793 \\
Developer Emails & 0.807 & **0.016** & 1.069 & **0.020** & 1.000 & **0.020** & 1.188 & 0.079 \\
Incubation time & -0.518 & **0.011** & -0.181 & 0.555 & -0.799 & **0.004** & -0.334 & 0.420 \\
\hline
Incubator Reporting (governed activity) & & & -1.210 & **0.011** & & & -1.827 & **0.002** \\
Patches (governed activity) & & & 0.688 & **0.011** & & & 1.009 & **0.009** \\
\hline
Project Configuration (internalization) & & & & & 0.765 & **0.002** & 0.623 & **0.043** \\
Task Handling (internalization) & & & & & -0.511 & 0.054 & -0.069 & 0.084 \\
Project Wiki (internalization) & & & & & -0.720 & **0.032** & -1.417 & **0.005** \\
Voting Protocol/Timeline (internalization) & & & & & 0.428 & 0.129 & 0.933 & **0.013** \\
Graduation Requirements/Maturity Model (internalization) & & & & & 0.898 & **0.001** & 1.058 & **0.002** \\
\hline
Observations: 208 & \multicolumn{2}{c}{R\({}^{2}\) (Tjur): 0.258} & \multicolumn{2}{c}{R\({}^{2}\) (Tjur): 0.360} & \multicolumn{2}{c}{R\({}^{2}\) (Tjur): 0.486} & \multicolumn{2}{c}{R\({}^{2}\) (Tjur): 0.648} \\
 & \multicolumn{2}{c}{AIC: 139.91} & \multicolumn{2}{c}{AIC: 124.96} & \multicolumn{2}{c}{AIC: 113.34} & \multicolumn{2}{c}{AIC: 90.29} \\
\hline\hline
\end{tabular}
\end{table}
Table 3: Summary RQ3: Binomial (logit) GLM regression of project governance against graduation/retirement. Bold p-values are significant at \(p<0.05\).
## 6 Findings
We find substantial differences between the policy-making attention of the ASFI and community governance across topics. Results from RQ1 (Figure 2) show that overall, policy extent has no significant correlation with the frequency of governed activities observed across topics. Yet through RQ2 (Figure 3), we observe that topics with higher policy extent see greater policy internalization. Therefore, while project governance efforts do not mirror the distribution of policy across governance topics, the internalization of policies is highly correlated with how much formal policy governs that topic.
In RQ3 where we test our governance constructs against project outcomes, we find that neither governed activity around nor policy internalization along the most highly regulated subjects predicts project outcomes. Also, most of the topics correlated with project success are relatively lightly regulated.
Domain knowledge of the ASF Incubator can help us further contextualize the results from RQ3 (Table 3). Rules from the 'Project configuration' topic oversee the steps and requirements for setting up ASF infrastructure. Higher internalization associated with more successful projects likely indicates that the development team is more experienced in navigating and utilizing ASF's resources.
Democratic communities and consensus building are encoded in ASF's functioning ('The Apache Way') and are a hallmark of the OSS movement generally. ASF requires project-level voting for approving releases, appointing members to the project PMC, admitting committers, etc. Observance of ASF's standard voting procedures likely indicates shared understanding and streamlined decision-making. Projects that have high internalization with ASF's policies regarding "Voting protocol/Timeline" are successfully hosting and running those votes according to ASF's policies, and mobilizing community participation.
We find a large negative relationship between the frequency of activities around "Incubator reporting" and the likelihood of graduation. We further investigate and find that projects generally discuss and work on reports only when they are due, except when they (1) miss a deadline and are assigned a new report date, (2) need to keep working to resolve issues in a submitted report, or (3) are struggling and asked to report more often.
Projects often lag in reporting when their development stalls and the community is struggling. In such a situation, the ASFI intervenes actively and necessitates more efforts to motivate the projects to meet standards and resume compliance with Incubator requirements. Therefore the effect is likely associated with struggling projects and how the Incubator interacts with them. If this interpretation holds, the mechanism for our correlative findings is that an outside factor ("struggling project") is driving more reporting and reduced graduation chances.
## 7 Discussion
Our goal was to investigate the relationship between formal policies overseeing OSS communities and their actual self-organizing tendencies. OSS-supporting foundations create policies to encode their concerns and priorities. ASFI introduces formal hierarchies through various offices and committees to organize traditionally free-form OSS communities. They also include requirements to ensure standards of development and conduct among projects.
Governed activities, or routine operations, indicate the extent of community governance. Structured activities along a governance topic indicate how developers coordinate and conduct the bulk of their activities, based on underlying beliefs and current needs. Therefore, more governed activities are expected as a community seeks to structure and routinize more of its operations.
As communities undergo formalization, their governance may be expected to reflect their overarching policy focus. The conventional perception of OSS formalization anticipates more institutional formalities and obligations (Section 3.2). This may be observed as increasing community attention on domains in which ASFI sets more rules, and ensuing routine activity from such structuring. RQ1 tests whether the attention of community governance aligns with that of formal policies across shared governance domains.
While governed activities reflect the extent of community governance across topics, we are also interested in how communities align formal rules and actual governance behaviors. In their efforts to structure activities, projects may adopt formal policies, implement their own norms, or combine both (Section 3.2). RQ2 further examines whether the extent of formal regulation is related to how community governance integrates formal rules, as observable through the policy internalization of governed activities.
Our results from RQ1(Figure 2) indicate that the extent of ASF's regulation does not, in general, seem to proportionally increase the intensity of "on-the-ground" governed operations. At the same time, our findings from RQ2 (Figure 3) suggest that through extensive policy-making along specific concerns, ASFI succeeds in using policy to orient
community governance, which shows up through policy internalization in governed activity along domains with more extensively defined policies.
We reconcile the implications of the two approaches to understanding formalization. RQ1 dwells on convergence/divergence in ASFI/community effective governance efforts, i.e., formulating, establishing, and implementing rules and norms to structure activities. Meanwhile, RQ2 examines to what extent community governance incorporates ASFI's formal policies: literally, how much communities internalize formal policy's framing of a governance issue. The positive correlation between internalization and policy extent likely indicates that governance topics that are extensively codified considerably structure governed activity. Yet results from RQ1 indicate that highly formalized governance topics elicit relatively less, or no more, governance effort from communities as compared to those where fewer formal rules exist. In fact, in several crucial topics with limited regulation, projects exercise substantial governance efforts to sustain themselves. The takeaway is that the effect of more formalization in policy seems to be reflected less in the volume of governance activity it spurs, and more in how closely that activity hews to prescribed standards.
The ASFI's policy coverage is largely administrative, and it outlines appropriate protocols for governance concerns it deems important. Consequently, when projects engage in highly regulated domains, they respect and internalize such specifications. Therefore, while the focus of policy-making may not be reflected in the regular governance concerns of developers, policies still act as a layer of fundamental governance that is seamlessly integrated into communities. Simply put, developers respect policies that are evidently important and extensively specified, but they are also faced with other concerns beyond those where ASF largely institutes policies.
The ASFI's policies show relatively less attention to the technical aspects that constitute communities' main governance activities (issues/patches, artifacts, etc.), suggesting that the foundation defers to the discretion and objectives of developers on these subjects. The generally lower policy internalization along core development concerns may also be explained by the fact that technical regulations in ASF are few, and are basic guidelines and expectations rather than specific conditions. We hence see considerable governed activity along some of these ('issues'/'patches'/'builds'), reflecting efforts to coordinate fluid communities, channel their contributions, adapt to emerging technology, and meet release targets.
RQ3 examines the association of self-governance and internalization of foundation policies with the objective success of projects (Table 3). It is based on the implicit assumption that projects will perform governance and adopt policies in a manner that helps them attain their objective, which is to graduate from the Incubator. The Incubator assesses projects based on the diversification of the community and the capability to produce compliant software and consistent releases. Interestingly, governance behavior around the more highly regulated governance topics does not stand out as a significant discriminant between graduated and retired projects.
Foundation policies may play a role in furthering development and facilitating coordination and consensus among communities, as the analyses showed positive associations between internalization of voting and infrastructure-use protocols and the odds of graduation. We also find some evidence that community initiative in less regulated governance areas supports project sustainability. Projects that coordinate submission and incorporation of patches more often are both building their community and improving their product, making them more likely to graduate. Such projects were likely able to compensate for the limited explicit technical governance by instituting their own routines to sustain development.
We have one significant finding around a highly regulated topic: Incubator reporting. We found a negative association between levels of governed activity around Incubator reporting and the odds of graduation. Reporting to the Apache Incubator is intended to motivate project performance as well as track their progress [90]. Therefore, it is interesting that more formalization is associated with a reduced likelihood of graduation for a highly regulated topic. We further explain that this effect from Incubator reporting likely does not imply a straightforward causal relation between formalization and success. It also presents a delicate situation for already struggling projects as they are necessitated to focus their governance more towards the priorities of formal policy. This has sometimes proven to be especially burdensome for small projects. Apache Gossip is such an example, where the small community struggled with the overhead of implementing the regular reporting protocols set by the ASFI and was eventually retired.
All in all, communities are bound by foundation requirements, especially in domains that elicit a greater volume of formalization. At the same time, their actual governance concentrates on aspects distinct from the ones that ASFI regulates the most. Importantly, we find limited support for the argument that projects should embrace formalization, be it in terms of aligning governance focus or internalizing policies in more regulated topics, in order to successfully realize their objectives. Therefore, written formal policies from OSS communities may not be a comprehensive account of how their actual governance unfolds.
### Recommendations
Our findings may carry certain implications for community members in the ASF, or the OSS ecosystem more generally. For example, since policy internalization around project configuration and voting seems to correlate with project graduation, more formal policy (or informal attention) to these topics may help projects succeed. However, we caution against too literal an interpretation of our findings for practice. Our results may be specific to ASF, and as we have seen, some of these effects are unlikely to have a straightforward causal interpretation.
Our most responsible recommendation from this research, for practitioners in technology policy in general and OSS in particular, is to be pragmatic about governance, be cognizant of organizational variability and uncertainty, and be watchful but permissive about letting projects drift in their interpretation of policy. This allows volunteer communities to focus on self-regulation, activity, and enforcement of issues that they identify as requiring more clarity or structure. By subsuming policy development processes to community will, foundations are poised to gain a policy design that is informed by the low-level daily experiences of contributors, and enjoys the legitimacy of its membership.
## 8 Threats and Validity
The findings presented in this study apply only to the ASF. Future replication across more organizations is hoped to enrich OSS governance research with more general insights. For the purposes of our study, we treat ASFI's standards for graduation as an evaluation of OSS success and viability. The ASFI's stated objectives and standards provide well-rounded criteria to assess the relation of governance behavior to viable and sustainable communities (Section 5.6). It should be noted, however, that projects sometimes have varied reasons for choosing to graduate or to discontinue incubation. Reasons include, but are not limited to, their sense of cultural fit, or the need for ASF's specific portfolio of support servers. Therefore ASFI graduation, while considered a respected and tested model of evaluation, may not generalize to a conclusive metric of OSS success.
Our work is based on large public mailing lists. While these are the central channels for ASFI projects, they also maintain private lists reserved for certain project businesses, including committer voting, etc. These are restricted from public access and are currently beyond our scope. ASFI leadership discourages the use of these lists as much as possible, and they are typically only used for "personnel" matters such as if a contributor is breaking a project's code of conduct or to vote in new committers.
Our study rests on information extracted by semi-supervised learning. The choice of semi-supervised learning was largely motivated by our constructs (Section 5.3), the limits of supervised learning, and, most importantly, the goal of scalable organizational insight. Unsupervised/semi-supervised methods have known limitations, and are particularly difficult to validate. We tuned the performance of our clustering models utilizing established measures such as clustering validity and NPMI-based topic coherence. However, the very high values of \(R^{2}\) that we report for our models are an encouraging sign that these constructs are credibly capturing important aspects of project governance activity.
We named the resulting topic clusters by examining the most frequent words used in them as well as the policies to which they were assigned. This qualitatively distills the essence of the clusters and makes it possible for us to interpret them for purposes of downstream analyses. Therefore, interpretations of topics and associated effects may vary across researchers and leave room for reification. Through further checks, we find that the topics found in the main and supplementary analyses are largely, even if not perfectly, aligned.
While we used domain-adapted language models wherever available, some tasks like semantic role labeling and semantic scoring were more specialized with limited models and datasets available. Annotating training data consistent with benchmark datasets is complicated for such tasks and limits the scope of the methodology for replicating results. We used models trained on standard benchmark datasets in such cases.
Certain project mailing lists did not reflect governed activity under all of the 42 different governance topics. This could be attributed to the extent of engagement or varied priorities across projects. For example, resource object management routines are likely exclusive to Java-based projects. Moreover, HDBSCAN sets a lower threshold on cluster size (0.001% of sample size). This leaves a possibility for minor routines to be merged into clusters representing more general themes, or to be classified as outliers.
In Section 5.3, we explain the computing overheads and limits on input size for transformer-based language models, which often truncate broader text context in social interactions [89]. Moreover, we conduct a granular, performance frame-level analysis of community operations. We encountered a few cases in our dataset where extensive policies with multiple nested or bulleted conditions were truncated during intermediate preprocessing or parsing stages. Ongoing efforts at supporting longer context windows [79] for representation learning should expand the scope of language models for discourse analysis.
## 9 Conclusion
Open source software projects join foundations like the Apache Software Foundation despite the "anti-regulatory" tendency of many OSS developers. They do so because the standardized, streamlined governance systems that foundations operate provide clarity, best practices, mentorship, economies of scale, and lower administrative overhead. Yet OSS projects may simultaneously find themselves benefiting from formal structure and/or constrained by it to varying degrees.
While it is a widely accepted truism that governance in practice often differs from governance in form, demonstrating this at scale, and determining the manner in which formal depictions and ground behavior diverge, has been a challenge. Articulating fundamental questions about governance practices through NLP methods, particularly language modeling, enables us to quantify the governance behavior of projects, including how they govern themselves and internalize formal policy.
We find that while OSS communities are generally framed by formal regulations, they focus their practical governance efforts in a manner distant from the thrust of formal policy-making. Further, their governance behavior around highly formalized concerns seems to have little bearing on their rates of success and sustainability. What stands out is the adaptability of their governance efforts as well as their internalization of policy around relatively less regulated topics. In conclusion, a comprehensive understanding of peer production, and likely other types of collective action, must account for the institution's formal structure while also measuring how such structuration is received in practice.
## Appendix A Supplementary Analysis
### Policies excluded
The supplementary analysis looks at community and foundation interaction over governance concerns and policies applicable during active incubation and mentoring. We remove certain categories of policy documents based on subsection headings in the original dataset [71]. These include "Steps to Retirement", "Deciding to Retire", "Graduation discussion", "Graduation Approval vote", "The Graduation Process", "Preparing a Charter", "The Recommendation Vote", "Submission of Resolution to the board", "Community Graduation Vote", "Press Releases for new Top Level Projects (TLP)", "Whether to graduate to subproject or to top level project", "Post-graduation tasks", "Transferring trademarks to the ASF" and "Subproject Acceptance vote". The listed sections span terminal formalities and procedures to initiate and garner community/ASF approval for graduation, as well as steps towards formal induction into Apache. ASFI projects may pursue two modes of post-graduation affiliation: functioning as a full-fledged independent top-level project (TLP) or as a subproject under a TLP. The sections also cover protocols to be observed when a project is being retired. All in all, 34 entries were removed out of the original 234. However, we retain policies which state the goals of ASFI, expected standards, evaluation criteria, and other requirements meant to guide and mentor projects towards success.
### Topic Modeling and Correspondence
We repeated all steps through Section 5.3. The modified policy set produced 328 rules, which were used to guide clustering and topic modeling. In order to draw parallels between the topical effects from the two analyses, we tested the extent of topic correspondence between this topic model and the topic model from our primary analysis. We discovered 24 topics among governed activity, of which 22 were a subset of the 42 topics from our primary analysis. A top-N word match between the topics produced by the two models found 82.5% overlap, while the topical assignment of rules showed a correlation of 0.76 with the primary topic model.
### Results
For RQ1, the correlation between the distributions of policy extent and governed activity was found to be 0.18 (\(p=0.41\)). For RQ2, the correlation between the distribution of policy extent and the mean internalization by topic was found to be 0.69 (\(p<0.001\)). These findings are nearly identical to those reported in the main text. The analysis for RQ3 is given below (see Table 4). The differences between these findings and those reported in the main text are discussed in the main text. |
2309.10023 | Searching for axion forces with spin precession in atoms and molecules | We propose to use atoms and molecules as quantum sensors of axion-mediated
monopole-dipole forces. We show that electron spin precession experiments using
atomic and molecular beams are well-suited for axion searches thanks to the
presence of co-magnetometer states and single-shot temporal resolution.
Experimental strategies to detect axion gradients from localised sources and
the earth are presented, taking ACME III as a prototype example. Other
possibilities including atomic beams, and laser-cooled atoms and molecules are
discussed. | Prateek Agrawal, Nicholas R. Hutzler, David E. Kaplan, Surjeet Rajendran, Mario Reig | 2023-09-18T18:00:00Z | http://arxiv.org/abs/2309.10023v2 | # Searching for axion forces with spin precession in atoms and molecules
###### Abstract
We propose to use atoms and molecules as quantum sensors of axion-mediated monopole-dipole forces. We show that electron spin precession experiments using atomic and molecular beams are well-suited for axion searches thanks to the presence of co-magnetometer states and single-shot temporal resolution. Experimental strategies to detect axion gradients from localised sources and the earth are presented, taking ACME III as a prototype example. Other possibilities including atomic beams, and laser-cooled atoms and molecules are discussed.
## I Introduction
Axions are well-motivated pseudo-scalar particles beyond the Standard Model (SM). Due to their appearance in the Peccei-Quinn solution to the strong CP problem [1; 2], their role as dark matter [3; 4; 5] and ubiquity in String Theory compactifications [6; 7] they have been receiving increased attention recently in both theory and experiment.
On the experimental side, this surge in interest has led to a variety of searches both for cosmological relic axions, which contribute to the dark matter (DM) abundance, and DM-independent searches in the lab. Due to their CP-conserving dipole coupling to fermions, \(c_{\psi}\frac{\partial_{\mu}\phi}{f_{\phi}}\bar{\psi}\gamma^{\mu}\gamma^{5}\psi\), spin precession experiments are particularly appealing to look for these particles. Indeed, in the non-relativistic limit a coherent axion field \(\phi\) gives rise to an energy shift for a spin \(\mathbf{S}\) given by the interaction Hamiltonian [8]:
\[H_{\phi}=-\frac{1}{f_{a}}\nabla\phi\cdot\mathbf{S}\,. \tag{1}\]
Analogously to the well-known electromagnetic (EM) effects, in the presence of an axion background, the spin will precess around the gradient \(\nabla\phi\). The origin of this gradient can be either a relic axion DM background, or a sourced coherent axion field.
The last possibility is particularly interesting in the case where the axion has a Yukawa-like scalar coupling to nucleons, \(g_{s}\phi\bar{N}N\). In this scenario the axion mediates a new kind of long-range interaction known as monopole-dipole force [9], usually given in terms of the potential:
\[V(r)=\frac{g_{s}g_{p}^{\psi}}{8\pi m_{\psi}}\left(\frac{1}{\lambda_{\phi}r}+ \frac{1}{r^{2}}\right)e^{-m_{\phi}r}\mathbf{S}\cdot\hat{r}\,, \tag{2}\]
with \(m_{\phi}\) the mass of the axion, and \(\lambda_{\phi}\sim m_{\phi}^{-1}\) the associated wavelength setting the effective reach of the force\({}^{1}\). The couplings \(g_{s}\) and \(g_{p}=c_{\psi}\frac{m_{\psi}}{f_{\phi}}\) are the so-called monopole (CP-violating) and dipole (CP-preserving) couplings, respectively. Despite the expectation that these couplings should be very small to satisfy existing bounds, the coherent effect of around an Avogadro's number of source particles builds up, leading to a potentially observable, macroscopic effect on a detector made of polarised spins.
Footnote 1: See [10] for a discussion about generalised potentials and their phenomenology in low-energy experiments.
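For orientation, Eq. (2) is easy to evaluate numerically. The sketch below works in natural units (\(\hbar=c=1\)) for an electron spin aligned with \(\hat{r}\); the coupling product is a placeholder value, not a measurement.

```python
# Monopole-dipole potential of Eq. (2), per source nucleon, in eV.
import numpy as np

HBARC = 1.97327e-5            # eV cm
M_E = 0.511e6 / HBARC         # electron mass in cm^-1

def V_eV(r_cm, lam_cm, gs_gp):
    geom = 1.0 / (lam_cm * r_cm) + 1.0 / r_cm**2   # cm^-2
    return gs_gp / (8 * np.pi * M_E) * geom * np.exp(-r_cm / lam_cm) * HBARC

# One source nucleon 10 cm away, for a 10 cm-range axion:
print(V_eV(10.0, 10.0, gs_gp=1e-30))
```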
The interaction of an electron with a combined magnetic \(\mathbf{B}\) and axion \(\phi\) field is given by:
\[H = -g_{e}\mu_{B}\mathbf{S}\cdot\mathbf{B}-\frac{1}{f_{a}}\mathbf{S} \cdot(\nabla\phi) \tag{3}\] \[= -\mathbf{S}\cdot(g_{e}\mu_{B}\mathbf{B}+\nabla\phi/f_{a})\,,\]
where \(g_{e}\) is the electron \(g\)-factor and \(\mu_{B}\) is the Bohr magneton. The gradient of the axion field therefore acts similarly to a magnetic field in that it causes electron spin precession. Axion mediated forces can therefore be searched for with precision electron spin precession experiments, similar to those used in electron electric dipole moment (EDM) searches or in precision magnetometry, though with some important differences which we will discuss.
The current experimental bounds on axion-mediated forces on electron spins are given in Fig.1. In this Letter we propose to use atomic and molecular beams and traps to look for axion-mediated forces, showing that these types of experiments have a promising potential for axion searches. To this end, we study the expected reach and describe qualitatively the main systematic effects and how to control them. As a specific example, we consider how ACME III could be adapted to search for these new macroscopic axion forces. We also propose a dedicated axion gradient-specific experiment using a beam of ytterbium \({}^{171}\)Yb, and discuss the possibility of using laser-cooled molecules for axion force searches. Note that using atomic and molecular EDM searches to put limits on axionlike particles has been previously considered in other contexts, for example via couplings
within the atoms or molecules [11; 12; 13], or via oscillating EDMs [14; 15].
### Geometry of an axion search with a beam experiment
Spin precession experiments using the Ramsey method of separated oscillating fields constitute one of the most efficient methods to measure magnetic and electric dipole moments. By creating a superposition state and measuring the relative phase of the eigenstates after some time, very small energy splittings can be measured. This phase builds up during the spin coherence time, \(\tau\), and the sensitivity of the experiment scales as \(\tau\sqrt{N}\), where \(N\) is the number of particles measured.
Experiments searching for the electron EDM (\(\mathbf{d}_{e}\)), for example, are designed to measure small energy shifts of the type \(H_{\mathrm{edm}}=-\mathbf{d}_{e}\cdot\mathbf{E}\), and can in principle also measure energy splittings from an axion field provided the geometry of the experiment is appropriate. As we have seen in Eq. (1), an axion field generates an energy splitting which depends on the relative orientation between the spin and the gradient. The axion contribution to the phase will only be constructive if the orientation of the gradient and the quantisation axis is maintained during the coherence time\({}^{2}\).
Footnote 2: For example, this requirement is not satisfied in some EDM searches with ion traps [16; 17], leading to an axion gradient effect which averages out. This kind of search will be sensitive only to spatial variations of the axion gradient and not to the gradient itself since a molecule in an ion trap has a rotating quantisation axis set by a rotating electric field. We thank Kia Boon Ng for pointing this out.
To achieve their full sensitivity, experiments using spin precession in atoms or molecules usually require that the configuration of the experiment, such as the direction of the relevant fields, can be switched to measure differences in the spin precession frequency. This makes the experiment much more robust against slow drifts and offsets, which can be challenging to overcome. In the case of a coherent axion field sourced by a test mass, in addition to aligning the gradient along the quantisation axis, the ability to oscillate or reverse the position of the mass within periods of \(\sim O(1)\) seconds, or faster, is very helpful. The effect induced by the axion gradient on the spins will be obtained from the change to the precession frequency that is correlated with the position of the mass. The distance from the axion field source to the spins sets the smallest (largest) wavelength (mass) that can be tested. This method has been used by QUAX [18] and SMILE [19], setting the strongest lab bounds in the short range regime. A similar scheme will be employed in ARIADNE to test the monopole-dipole interaction on nucleon spins [20].
We now turn to the question of measuring an axion gradient from the earth's nucleons. This radial field will induce a DC signal on spins that cannot be reversed and therefore it seems difficult to measure it reliably; in particular, eq. (3) suggests that the axion gradient is indistinguishable from an uncontrolled background magnetic field, which will always be present. However, as we will show later, experiments using molecular or atomic beams provide an opportunity to measure this DC signal thanks to the presence of co-magnetometer states and single-shot temporal resolution.
Schematically, a strategy to measure the earth's axion gradient is to set a weak magnetic field in the lab vertical direction, aligning the quantisation axis with the earth radial direction, and causing the electron spin to precess around it. We then measure the precession frequency in two different configurations: \(\mathbf{B}_{\mathrm{lab}}\) oriented vertically upwards, and \(\mathbf{B}_{\mathrm{lab}}\) oriented vertically downwards. Since the precession is dominated by the magnetic field, reversing the field orientation will induce spin precession in the opposite direction up to the earth gradient contribution, which remains fixed. Neglecting momentarily background B fields, the measured frequencies differ by the earth axion gradient contribution:
\[\Delta\omega=\omega_{\mathrm{up}}-\omega_{\mathrm{down}}=2\omega_{\mathrm{a} }^{\mathrm{earth}}\,. \tag{4}\]
By switching the B field in short periods of order seconds and measuring the frequency we are sensitive to the earth's axion gradient. Note that in principle we do not need the lab B field at all, but in practice since there will always be some sort of magnetic field in the apparatus, it is best to apply a larger, controlled field.
In a realistic situation there will always exist stray magnetic fields in the vertical direction which do not reverse with the applied magnetic field, therefore mimicking an axion gradient signal. Reaching shot noise-limited sensitivity therefore requires the use of co-magnetometer states to disentangle an axion gradient from DC magnetic field background. Additionally, being a DC signal, one has to worry about phase offsets and similar effects which in some sense arise from the fact that the earth gradient cannot be reversed; for example, if one sees a small, constant spin precession, it could be due to an axion field, or the fact that the preparation and readout stages have some small phase offset, which they always will. This issue is addressed by the single-shot temporal resolution of beam experiments, which enables measurement of the temporal dependence of spin precession and suppresses systematic effects associated with phase offsets. Such systematics will be considered later in more detail.
## II Axion experiments with atoms and molecules
Beam experiments using atoms and molecules have several features that make them particularly interesting to look for axion forces. These experiments usually have good sensitivity to quasi-DC signals, that is, they are
well-suited to observe differences in the frequency as the experiment conditions are changed within a \(\sim O(1)\) second scale. This is convenient, for example, when the source masses are relatively heavy, and cannot be moved to frequencies higher than \(O(10)\) Hz.
In this section we first discuss different possibilities for measuring axion gradients with molecule and atom beam experiments.
### The need for co-magnetometry
As mentioned earlier, and shown in Eq. (3), an axion gradient looks very similar to a magnetic field pointing in the direction of the axion gradient. In principle one could search for the axion field by performing a spin precession measurement in zero magnetic field, but this presents practical limitations. In particular, there will always be some magnetic field component along every direction; every real material is slightly magnetic, and real magnetic shields must have holes in them for experimental access. This problem is overcome in some EDM experiments by employing co-magnetometer schemes: the use of a species (or other internal state) with different relative magnetic and EDM sensitivity. The Tl EDM experiment [22] used a co-propagating beam of Na atoms, which are effectively insensitive to the eEDM, as an independent measurement of the magnetic field. The ACME [23] and JILA [17] experiments use pairs of internal molecular states where the relative orientation of the internuclear axis and electron spin are different, thereby giving similar magnetic but opposite EDM sensitivity. This "internal co-magnetometry" scheme is very powerful, but will unfortunately not work for the problem at hand.
To understand why, consider eq. (3); the electron spin interacts with the sum of the magnetic and axion terms, and therefore cannot distinguish between them. However, this difficulty can be circumvented if the species has orbital angular momenta which provides a Zeeman interaction but does not couple to the axion field (as it is not a spin). For example, consider an atom with electron spin \(\mathbf{S}\), electron orbital angular momentum \(\mathbf{L}\), and spin-orbit coupling \(\beta\), so that the Hamiltonian for an atom interacting with a magnetic and axion field is given by
\[H=-\mu_{B}(\mathbf{L}+2\mathbf{S})\cdot\mathbf{B}+\beta\mathbf{L}\cdot\mathbf{ S}-\mathbf{S}\cdot\nabla\phi/f_{a}, \tag{5}\]
where we have set the electron \(g-\)factor to be 2. In the physically relevant limit where \(\beta\) is much larger than any other energy scale in the problem, the good quantum number is \(\mathbf{J}=\mathbf{S}+\mathbf{L}\), and the energy shifts in a magnetic field are given by
\[\Delta E_{B}=-g_{J}M_{J}\mu_{B}|\mathbf{B}|, \tag{6}\]
where
\[g_{J}=1+\frac{J(J+1)+S(S+1)-L(L+1)}{2J(J+1)} \tag{7}\]
is the Lande \(g\)-factor and \(M_{J}\) is the projection of \(\mathbf{J}\) on the quantization axis. We can go through a similar argument used to derive this equation to find the energy shift from an axion gradient,
\[\Delta E_{\phi} = -\left\langle\mathbf{S}\cdot\nabla\phi/f_{a}\right\rangle \tag{8}\] \[= -\left\langle\mathbf{J}\cdot\nabla\phi/f_{a}\right\rangle\left\langle \frac{\mathbf{S}\cdot\mathbf{J}}{|\mathbf{J}|^{2}}\right\rangle\] (9) \[= -g_{a}M_{J}|\nabla\phi/f_{a}|, \tag{10}\]
where we have defined the axion Lande factor
\[g_{a}=\left\langle\frac{\mathbf{S}\cdot\mathbf{J}}{|\mathbf{J}|^{2}}\right\rangle =\frac{J(J+1)+S(S+1)-L(L+1)}{2J(J+1)}. \tag{11}\]
Note that \(g_{J}\neq g_{a}\). If we can find states in the atom or molecule where the values of \(g_{J}/g_{a}\) are different, then we can use these states as co-magnetometers. For example, the spin-orbit components \({}^{2}P_{1/2}\) and \({}^{2}P_{3/2}\) of a \({}^{2}P\) electronic state have \(g_{J,1/2}=2/3,g_{a,1/2}=-1/3\) and \(g_{J,3/2}=4/3,g_{a,3/2}=1/3\), respectively. Thus, the relative shift due to a magnetic or axion field between these states are not proportional, and they can be distinguished.
Note that not all spin-orbit states have this feature; the \({}^{3}P_{0,1,2}\) components of a \({}^{3}P\) electronic state all have \(g_{J}=3/2,g_{a}=1/2\) so comparing the shifts in these states cannot be used to disentangle a magnetic and axion field. Hyperfine structure, and the fact that \(g_{e}\neq 2\) exactly, make these conclusions not entirely valid, but it means that the \(g-\)factors differ by \(O(10^{-3})\) so their utility as co-magnetometers is suppressed.
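These statements follow directly from Eqs. (7) and (11); the quick check below reproduces the quoted values.

```python
# g_J and g_a for the spin-orbit examples in the text.
from fractions import Fraction as F

def g_a(J, S, L):
    return (J*(J+1) + S*(S+1) - L*(L+1)) / (2*J*(J+1))

def g_J(J, S, L):
    return 1 + g_a(J, S, L)

h = F(1, 2)
print(g_J(h, h, 1), g_a(h, h, 1))              # ^2P_1/2: 2/3, -1/3
print(g_J(3*h, h, 1), g_a(3*h, h, 1))          # ^2P_3/2: 4/3,  1/3
print(g_J(F(1), F(1), 1), g_a(F(1), F(1), 1))  # ^3P_1:   3/2,  1/2
print(g_J(F(2), F(1), 1), g_a(F(2), F(1), 1))  # ^3P_2:   3/2,  1/2
```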
Thus, a useful co-magnetometer scheme for the approach under discussion requires states with different relative contributions of electron spin and electron orbital angular momentum to the magnetic moment. This shows why the internal co-magnetometry scheme for ACME is not immediately useful - the pairs of states have, to good approximation, the same relative orientation of electron spin and orbital angular momenta. This is also the case with tuning magnetic interactions in polyatomics with parity doublets [24]; these work by changing the average spin projection on the laboratory field, within a single state where the magnetic interactions come almost entirely from the electron spin, and are therefore not immediately useful for axion co-magnetometry.
### Molecular probes
Polarized diatomic molecules have been used to search for the electron's EDM [16; 17; 23; 25]. One example is the ACME experiment, which sets a bound\({}^{3}\) on this parameter using the metastable \(H\) state in the ThO molecule,
\(|d_{e}|<1.1\times 10^{-29}\) e cm [23]. This state has \(J=1\) and enjoys a natural immunity to stray magnetic fields, due to a cancellation between the spin and orbital angular momentum of the valence electrons which leads to a small net magnetic moment \(\mu_{H}=g_{H}\mu_{B}\), with \(g_{H}=0.008\)[26]. Note, however, that since only the electron spin contributes to axion precession, and the stretched states in the \(H,J=1\) manifold have fully-aligned electron spins, this state can still be used to search for the axion gradient.
The value of \(d_{e}\) is extracted from the change in the precession frequency that is correlated with the molecular axis orientation, given by \(\Omega=\mathbf{J}_{e}\cdot\hat{\mathbf{n}}\), and the orientation of the effective electric field, \(\omega_{\rm edm}=d_{e}E_{\rm eff}\Omega\), which is reversed every few seconds. In a later section, we discuss how ACME III (or a similar experiment) could be modified to search for axion forces by searching for spin precession arising from the axion gradient as opposed to the electron EDM. As mentioned earlier, many of the challenges of electron EDM experiments, such as the need for large electric polarization and the need for heavy species to make use of relativistic enhancements, are not needed; however, since ACME III could make the proposed measurements with minimal modifications, we present the details. We also discuss simpler dedicated approaches which would not offer electron EDM sensitivity.
We now estimate the reach of the experiment. The condition for resolving the axion energy shift is \(\Delta E_{\phi}>\delta\omega\), with \(\delta\omega\) the smallest measurable frequency shift. Assume we use a cubic brick of a dense material with number density of nucleons \(n_{N}\) and size \(D\) at a distance \(d\) from the molecules. In the case \(D\sim d\), we have:
\[\Delta E_{\phi}\sim\frac{g_{s}g_{p}^{\psi}}{8\pi m_{\psi}}n_{N}D\left(\frac{D} {\lambda_{\phi}}+1\right)e^{-D/\lambda_{\phi}}\,. \tag{12}\]
As an example, axions with wavelength comparable to the other scales in the problem, \(2d\sim D\sim\lambda_{\phi}\), can be detected provided that
\[g_{s}g_{p}^{\psi}>\frac{\pi\delta\omega\,m_{\psi}}{n_{N}\lambda_{\phi}}\,, \tag{13}\]
which shows how we can gain sensitivity by increasing \(n_{N}\) (decreasing \(\delta\omega\)). The reach of ACME III, where a sensitivity at the level of \(\delta\omega^{III}\sim 15\)\(\mu\)rad/s is expected, is shown in Fig. 1 for axion gradients from the earth and from test masses. We assume lead or tungsten bricks of size \(D^{3}\sim(10\text{ cm})^{3}\) next to the beam, at a distance of order \(O(10)\) cm.
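As a rough cross-check of these numbers, Eq. 13 can be evaluated directly in natural units. The sketch below uses our own illustrative inputs (a tungsten brick, 10 cm scales, and the quoted ACME III frequency sensitivity) together with standard conversion constants; it is an order-of-magnitude estimate, not a reproduction of the analysis behind Fig. 1.

```python
import math

# Rough evaluation of Eq. (13) in natural units (hbar = c = 1). Material and
# geometry inputs are illustrative assumptions (tungsten, 10 cm scales).
HBARC_eV_cm = 1.973e-5        # hbar*c in eV*cm, so 1 cm <-> 1/HBARC_eV_cm eV^-1
HBAR_eV_s = 6.582e-16         # hbar in eV*s, so 1 rad/s <-> HBAR_eV_s eV

m_e = 5.11e5                          # electron mass [eV]
delta_omega = 1.5e-5 * HBAR_eV_s      # ~15 murad/s converted to eV

n_N_cm3 = 19.3 / 1.66e-24             # tungsten: nucleons per cm^3
n_N = n_N_cm3 * HBARC_eV_cm**3        # number density converted to eV^3

lam = 10.0 / HBARC_eV_cm              # lambda_phi ~ D ~ 10 cm, in eV^-1

g_s_g_p = math.pi * delta_omega * m_e / (n_N * lam)
print(f"reach: g_s * g_p > {g_s_g_p:.1e}")    # ~3e-31 for these inputs
```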
## III Experimental setup and background overview
Molecular beam experiments are well-suited to search for axionic forces on electrons. In this section we first consider using ACME III, and then discuss the experimental setup and the protocol to control systematic effects. Similar strategies are expected for a dedicated axion search using, for example, the Yb atom beam co-magnetometer described in section III.3.

Figure 1: Axion-mediated monopole-dipole forces on electrons at spin precession experiments. The QCD axion prediction is shown in light green, taking \(\theta_{\rm eff}\) to lie in the range \(10^{-20}<\theta_{\rm eff}<10^{-10}\). The green solid (dashed) line corresponds to the sensitivity at ACME with moving test masses (the earth) acting as the source. The blue lines stand for a dedicated spin precession experiment using Yb beams. Finally, in orange, we show the expected reach of an experiment using ultracold molecules, assuming the existence of co-magnetometer states. In either case, and especially thanks to the ability to test the earth gradient, new parameter space beyond astrophysical bounds will be covered. Bounds adapted from [21].
### ACME III
An axion gradient generates an additional term to the spin evolution (see Eq. 3), with the precessed angle due to the axion contribution given by:
\[\theta_{\rm axion}=\int_{0}^{L}\frac{\nabla\phi}{f_{\phi}}\frac{dx}{v_{mol}}\,, \tag{14}\]
where \(v_{mol}\) is the velocity of the molecules, \(x\) is the position, and \(L\) is the precession length. As in EDM searches, the precessed phase can be detected by measuring the population in the spin quadrature states, \(S_{x,y}\).
Unlike for EDM searches, the polarizing electric field \(E_{\rm lab}\) is not needed and one could in principle operate with only a weak applied B field. Assuming that the magnetic field is adjusted so that the phase is \(\theta_{B}+\theta_{\rm offset}\approx\pi/4\), the relevant measurable quantity is given by the asymmetry [27]:
\[\mathcal{A}= \frac{S_{x}-S_{y}}{S_{x}+S_{y}}=\mathcal{C}\cos(\theta_{B}+\theta _{\rm axion}+\theta_{\rm offset}) \tag{15}\] \[\approx\text{sgn}(B)\theta_{\rm axion}=\text{sgn}(B)\left(\frac{ \nabla\phi}{f_{\phi}}\right)\tau_{\rm coh}\,. \tag{16}\]
The constant \(0\leq\mathcal{C}\leq 1\) is the contrast, which is \(\sim 1\) for ACME, and indicates the efficiency in the preparation and detection of the states.
We discuss two scenarios: one to look for the axion field from a test mass, and one from the earth. The test mass case is in principle straightforward, as one merely needs to add a moving mass near the ACME beam line. Let the test mass be movable between positions 1 and 2, sourcing an averaged gradient over the beam path of \(-\nabla\phi_{1}/f_{\phi}\) and \(-\nabla\phi_{2}/f_{\phi}\), respectively. Considering the test mass position as a binary "switch" which can be in state \(\mathcal{M}=\pm 1\), analogous to other switches in ACME [27], we can write the spin precession angle due to the axion gradient as
\[\theta_{\rm axion} = \int_{0}^{L}\frac{\nabla\phi_{1}+\nabla\phi_{2}}{2f_{\phi}}\frac{ dx}{v_{mol}}+\mathcal{M}\int_{0}^{L}\frac{\nabla\phi_{1}-\nabla\phi_{2}}{2f_{ \phi}}\frac{dx}{v_{mol}} \tag{17}\] \[\equiv \theta_{0,\rm axion}+\mathcal{M}\theta_{\mathcal{M}}.\]
Note that we have defined a mean, offset spin precession \(\theta_{0,\rm axion}\) which does not depend on the position of the test mass, and a term \(\theta_{\mathcal{M}}\) which changes sign when the test mass is moved.
The experimental protocol is therefore to add the \(\mathcal{M}\) switch where we operate the experiment with the test mass in two positions. Moving the mass every few seconds should give sufficient robustness against drifts in other experimental quantities, comparable to other switches in ACME. This should give a spin precession signal which is proportional to the axion field gradient, but possibly other systematic effects. Note that the masses can be between the preparation and readout stages, so that they won't interfere with the current optical preparation and readout schemes.
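As a toy illustration of this switch analysis (all numbers below are invented for the demonstration, not ACME parameters), the mass-odd combination of measured phases isolates \(\theta_{\mathcal{M}}\) while mass-even offsets such as \(\theta_{0,\rm axion}\) cancel:

```python
import numpy as np

# Toy illustration of Eq. (17): the mass-odd combination of measured phases
# recovers theta_M, while mass-even offsets cancel. All values are invented.
rng = np.random.default_rng(0)

theta_0 = 2e-4        # mass-independent offset precession [rad]
theta_M = 5e-8        # axion-induced, mass-correlated phase [rad]
sigma = 1e-5          # per-shot phase noise [rad]

n_shots = 200_000
M = np.tile([+1, -1], n_shots // 2)             # switch state for each shot
theta_meas = theta_0 + M * theta_M + rng.normal(0.0, sigma, M.size)

theta_M_hat = 0.5 * (theta_meas[M == +1].mean() - theta_meas[M == -1].mean())
stat_err = sigma / np.sqrt(M.size)
print(f"extracted theta_M = {theta_M_hat:.2e} +/- {stat_err:.2e} rad")
```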
One effect of obvious concern is that moving a large mass will change the electromagnetic environment. Shielding the electric field from the test mass is straightforward: simply use a conducting shield around the molecules, which ACME already has in the form of electric field plates. The ACME spin precession scheme is fairly robust against electric field drifts by design, as they result in a common-mode offset between the two precessing states, so this is not likely to be a concern.
A greater concern is magnetic offsets. To get large signals, we would like the test mass to be inside the magnetic shields, as close to the molecules as possible. Magnetic impurities in the test mass would also correlate with \(\mathcal{M}\). A magnetic field shift of \(\sim\)1 nanoGauss would give rise to a spin precession signal correlated with \(\mathcal{M}\) comparable to the projected statistical sensitivity of ACME III. Quasi-DC magnetic fields on the order of nanoGauss are challenging to measure, though not impossible; commercially-available optical magnetometers can get to near this sensitivity4. However, since these magnetometers work via the interaction of a valence electron on an atom with the magnetic field, they are also in principle sensitive to the axion gradient5. Thus it is more robust to rely on axion co-magnetometry states, which ACME III is already setting up to use but for a different reason; the \(Q^{3}\Delta_{2}\) state in ThO [28] has a magnetic moment of \(\sim\)2 \(\mu_{B}\), versus \(\sim\) 0.01 \(\mu_{B}\) for the \(H^{3}\Delta_{1}\) state, and since the magnetic moment arises mostly from orbital angular momentum, the states \(H\) and \(Q\) represent an axion co-magnetometry state pair. Thus, one can measure the spin precession dependence on \(\mathcal{M}\) in both the \(H\) and \(Q\) states; since they have different relative magnetic and axion gradient sensitivity, the relative contributions of these two effects will be different in these two states, thus enabling their disentanglement. Note that for cases where it is technically feasible, the mass could be periodically rotated or re-oriented to change the direction of the residual fields for further rejection of systematic effects.
Footnote 4: See for example QuSpin, www.quspin.com
Footnote 5: Note that this could be combined with another magnetometer technology not relying on atomic electron spins, such as SQUIDs, as another avenue to search for axion gradients.
A related concern is magnetic Johnson noise (MJN) [29; 30], which arises due to thermal fluctuation currents in a conductor at finite temperature. This will not necessarily add a systematic offset to the spin precession, but it will result in magnetic field noise which could reduce the contrast and statistical sensitivity of the measurement. The calculation of the effect would depend on
the specific geometry and material chosen for the moving masses, but we can make some estimates. For a conductor with resistivity \(\rho\) at temperature \(T\) having thickness \(\sim t\) a distance of \(\sim D\) away, the magnetic field power spectral density at the molecules is given approximately by [31]
\[\widetilde{\mathcal{B}}(f)\sim\frac{\mu_{0}}{4\pi}\left[\frac{8tk_{B}T}{3\rho D ^{2}}\right]^{1/2}\sim\frac{1\;\mathrm{pG/\sqrt{Hz}}}{\sqrt{\rho/(1\;\Omega\cdot m )}}, \tag{18}\]
where \(k_{B}\) is Boltzmann's constant, and for the rightmost term we have assumed \(D\sim 50\) mm, \(t\sim 100\) mm, and \(T=300\) K (though the mass could be cooled if needed).
Tungsten [32] would be a natural choice for a test mass given its very high mass density of 19.3 g/cm\({}^{3}\), and its resistivity of 5.44\(\times 10^{-8}\)\(\Omega\cdot\)m would give rise to MJN on the order of \(\sim\)5 nG/\(\sqrt{\mathrm{Hz}}\). This would give rise to magnetic spin precession noise on the order of \(\sim 2\pi\times\mu\)Hz/\(\sqrt{\mathrm{Hz}}\) for the ThO H state, and around a hundred-fold larger for the ThO Q state. While this might sound problematic, it is still smaller than other, more dominant noise sources and averages away faster than they do, so it will not be a limitation.
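Eq. 18 can be checked numerically; the sketch below uses the geometry quoted above and tungsten's resistivity (the SI constants are standard), and agrees with the few-nG/\(\sqrt{\mathrm{Hz}}\) estimate in the text at the order-of-magnitude level.

```python
import math

# Numerical check of Eq. (18) for an illustrative tungsten test mass.
mu0_over_4pi = 1e-7        # T*m/A
k_B = 1.381e-23            # J/K
T = 300.0                  # temperature [K]
t = 0.100                  # conductor thickness ~100 mm
D = 0.050                  # distance to molecules ~50 mm
rho = 5.44e-8              # tungsten resistivity [Ohm*m]

B_psd = mu0_over_4pi * math.sqrt(8 * t * k_B * T / (3 * rho * D**2))  # T/sqrt(Hz)
print(f"MJN ~ {B_psd * 1e4 * 1e9:.1f} nG/sqrt(Hz)")  # 1 T = 1e4 G; gives a few nG
```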
One such source is the velocity dispersion \(\Delta v\) in the molecular beam, which also is not a fundamental limitation as it averages away [27], though it can add excess noise [23]. The phase noise \(\Delta\phi\) from magnetic spin precession in magnetic field noise \(\Delta\mathcal{B}\) and precession time noise \(\Delta\tau\), is given by
\[\Delta\phi=\tau\Delta\mathcal{B}+\mathcal{B}\Delta\tau=\tau\left(\Delta \mathcal{B}+\mathcal{B}\Delta v/v\right). \tag{19}\]
We have used the fact that \(|\Delta\mathcal{B}|\ll|\mathcal{B}|\) as the applied (and residual) magnetic fields are larger than the MJN, and that the precession is over \(L\) so \(L=v\tau\) and therefore \(\Delta\tau/\tau=\Delta v/v\). Since the ACME beam has \(\Delta v/v\sim 0.1\)[33] due to velocity dispersion within a single shot, and has shot-to-shot changes in the mean velocity of \(\Delta v/v\sim 10^{-3}\)[23], the \(\Delta\mathcal{B}\) component should not be a major limitation to statistical sensitivity. However, should MJN (or the cost of tungsten) ultimately be a limiting factor, there are materials such as zirconia or leaded glass with over 10 orders of magnitude larger resistivity yet only a factor of 4 to 5 less density.
Now we describe a protocol to measure an axion gradient from the Earth. In this case there is no moving mass, and therefore no \(\mathcal{M}\) switch. This introduces a challenge, as we no longer have a way to change the axion field, and are therefore potentially susceptible to the many DC drifts and offsets in EDM-style spin precession experiments [27]. However, we discuss two methods which can help mitigate this.
We can continue to use the \(H,Q\) co-magnetometer pair to distinguish between a constant background magnetic field and the background axion field. A greater challenge is absolute phase offsets, for example arising from the fact that the state initialization and readout stages will always have some finite, drifting offset set by the polarization of lasers with different beam paths. To address this, we propose to use the fact that the velocity dispersion in the molecular beam [33, 34] results in an accumulated phase angle which is time-dependent relative to the time after the production of the molecular beam pulse at \(t=0\)[27]:
\[\theta_{\mathrm{axion}}(t)=\int_{0}^{L}\frac{\nabla\phi}{f_{\phi}}\frac{dx}{v_ {mol}(t)}\,. \tag{20}\]
This arises from the fact that slower molecules take longer to get to the spin precession region, and when they arrive they spend more time precessing and therefore accumulate more phase. Because the ACME spin precession readout protocol involves rapid, time-resolved readout to normalize against molecular beam yield fluctuations [35], this also gives the ability to resolve this time-dependence in a single shot. This has the advantage of distinguishing preparation and measurement phase errors, which will be constant in time, from physical spin precession phases, which will have a time-dependence. Note that inferring the spin precession from the asymmetry time-dependence also provides robustness against sources of offsets such as light shifts from the lasers themselves, which will not accumulate over the entire spin precession period (and which can be probed by varying laser parameters).
Therefore, the proposed protocol to measure the axion gradient from the earth is the following.
* Switch between the \(H\)-state and \(Q\)-state in periods of around 1 second. This will enable robust co-magnetometry, in particular the measurement of background magnetic fields.
* Measure the time dependence of the asymmetry, in particular its slope \(\frac{\partial\mathcal{A}}{\partial t}\).
* Compare the asymmetry slope for the \(Q\) and \(H\) states. The axion field should cause a component of \(\frac{\partial\mathcal{A}}{\partial t}\) which changes between \(Q\) and \(H\) but is otherwise constant.
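A toy numerical model of the arrival-time analysis underlying this protocol (the beam and field numbers below are our own illustrative assumptions): a physical precession phase scales as \(1/v_{mol}\) and hence grows linearly with arrival time, while a preparation/readout offset contributes no slope.

```python
import numpy as np

# Toy model: slower molecules arrive later and precess longer, so a physical
# phase grows with arrival time t = L_flight / v_mol; a constant offset does not.
L_prec = 0.2          # precession length [m], assumed
L_flight = 1.0        # source-to-precession-region flight distance [m], assumed
grad_term = 1e-4      # effective (grad phi / f_phi) precession rate [rad/s], assumed
offset = 3e-3         # constant preparation/readout phase offset [rad], assumed

t = np.linspace(4e-3, 6e-3, 50)       # arrival times within one pulse [s]
v = L_flight / t                       # molecule velocity for each time slice
theta_phys = grad_term * L_prec / v    # Eq. (20) for a uniform gradient
A = theta_phys + offset                # small-angle asymmetry, contrast ~ 1

slope = np.polyfit(t, A, 1)[0]
print(f"dA/dt = {slope:.2e} rad/s (nonzero only for the physical phase)")
```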
An important observation is that the ACME quantization axis, which is set by the electric field, is horizontal and therefore has vanishing sensitivity to the Earth's axion gradient. There are two potential approaches to address this. One could rotate the electric field plates so that the applied field aligns with gravity. This would be highly non-trivial, not only because the plate mounting would have to be re-engineered, but also because all the laser paths would need to be redesigned, as some lasers must go through the plates and some cannot. Another option would be to operate without an electric field at all, and use a weak vertical magnetic field to set the quantization axis. This would require modifying the state preparation and readout protocols, but possibly in a way which would not require major redesign of the apparatus.
### Ultracold atoms and molecules
There are proposals to use ultracold molecules to search for the electron EDM, surpassing the current bounds by several orders of magnitude [36; 37; 38]. The shot noise-limited uncertainty of the frequency in a measurement is given by \(\delta\omega=\frac{1}{\tau_{c}\sqrt{N}}\), with \(N\) the number of measured molecules and \(\tau_{c}\) the coherence time. Assuming \(N=10^{6}\) molecules and coherence times around 10-100 seconds, just one measurement would be equivalent to the expected sensitivity at ACME III. With a preparation/detection efficiency around \(O(10)\%\) it is expected to have sensitivities of the order \(\delta\omega\sim 1-10\) nrad/s by operating for around \(10^{7}\) seconds. These numbers make the ultracold molecule proposal very compelling for testing axion forces on electrons.
To achieve full sensitivity it would be important to have a co-magnetometer species that reduces the impact of systematic effects. Co-trapped species [39; 40], with different origins of the magnetic moment, would allow the distinction between stray magnetic fields and the sourced axion gradient.
It may also be interesting to consider for axion force searches the \({}^{171}\)Yb optical trap in [41], which used the ground state, \({}^{1}S_{0}\), to look for a permanent atomic EDM. In that work the authors show that the coherence time exceeds \(\tau_{c}\sim O(100)\) seconds, implying that if a large number of atoms, \(N\sim O(10^{6})\), can be trapped, the sensitivity to monopole-dipole forces on nucleons may extend beyond astrophysical constraints (see Fig. 2). This scheme may also require the presence of a co-magnetometer species.
### Atomic beam experiments
Atom beam experiments are also excellent candidates to look for axion forces. Since there is no "molecular enhancement" of the axion signal, unlike EDM searches, it is attractive to use atoms as their simpler structure generally leads to more intense beams, more efficient optical control, easier laser cooling for beam brightening, etc. Note that an experiment built for the purpose of searching for axion gradients would not need an electric field at all, further simplifying the experimental requirements.
An interesting possibility is an experiment using \({}^{171}\)Yb, which can be used to make intense beams [42; 34], and has two valence electrons in the \(6s\) orbital (\(L=S=0\)) giving rise to a \({}^{1}S_{0}\) ground electronic state, and nuclear spin \(I=1/2\). The \({}^{3}P_{2}\) (\(L=S=1\)) excited state is relatively long-lived, with a lifetime around \(\tau\sim 10\,\mathrm{s}\)[43].
An experiment could use the ground state \({}^{1}S_{0}\), which is only sensitive to the axion gradient through the nucleon spin, and the excited state \({}^{3}P_{2}\), which has a very different magnetic moment and can therefore be used as a co-magnetometer. The Hamiltonians for these states are:
\[H_{{}^{1}S_{0}} =-\boldsymbol{\mu}_{N}\cdot\mathbf{B}+c_{N}\frac{\nabla\phi}{f_{\phi}}\cdot\mathbf{I}\,, \tag{21}\] \[H_{{}^{3}P_{2}} =-(\boldsymbol{\mu}_{N}+Mg_{P}\boldsymbol{\mu}_{B})\cdot\mathbf{B}+\frac{\nabla\phi}{f_{\phi}}\cdot(c_{N}\mathbf{I}+c_{e}\mathbf{S})\,, \tag{22}\]
where \(\mu_{N(B)}\) is the nuclear (Bohr) magneton, \(M\) is the projection of the total angular momentum onto the quantisation axis, and \(\mathbf{I}\) (\(\mathbf{S}\)) is the nucleon (electron) spin. The coefficients \(c_{e},c_{N}\) reflect the fact that in principle the axion coupling to electrons and nucleons may differ. These states have different origins for their magnetic moment and can be used as a co-magnetometer by comparing how the precession frequency changes with the B field and axion gradient orientation, as discussed in a previous section. Since the ground state is only sensitive to the axion coupling to nucleons, this experiment would be sensitive to both coupling to electrons and nucleons. See Figures 1 and 2. Note that in the event of a positive signal, it would be critical to perform the measurements in different states or species with additional different relative sensitivities to magnetic fields, electron couplings, and nuclear couplings, in order to conclusively disentangle these effects.
Using \(\tau=5-10\) ms, \(\dot{N}=10^{10}-10^{11}\) atoms/s, and \(T_{int}=10^{6}-10^{7}\) s, we get an expected shot noise-limited sensitivity in the range \(\delta\omega\sim 10^{-7}\) Hz to \(10^{-6}\) Hz. In Fig. 1 we show the expected reach assuming \(\delta\omega\sim 10^{-7}\) Hz.
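For a beam, this estimate amounts to \(\delta\omega=1/(\tau\sqrt{\dot{N}T_{int}})\); a one-line check with the quoted parameters:

```python
import math

# Shot-noise-limited sensitivity of a beam experiment: each atom precesses for
# tau, and N_tot = Ndot * T_int atoms are detected in total.
tau = 10e-3       # precession time [s]
Ndot = 1e11       # detected atoms per second
T_int = 1e7       # total integration time [s]

delta_omega = 1.0 / (tau * math.sqrt(Ndot * T_int))
print(f"delta_omega ~ {delta_omega:.1e} rad/s")   # ~1e-7 for these inputs
```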
Alternatively, one could use an indium or thallium beam using the \({}^{2}P_{1/2}\) and \({}^{2}P_{3/2}\) spin-orbit components of the ground electronic state. As discussed earlier, these two states can be used as axion co-magnetometers since their \(g\)-factors have different contributions from electron spin and orbital angular momenta. Whether they offer any advantage over Yb depends largely on experimental considerations, such as beam fluxes, laser wavelengths, detection strategies, etc.
## IV Advantages of beam experiments
ACME III will measure the spin precession at the level of approximately 10 microrad/s. This corresponds, in terms of a Bohr magneton frequency, to a magnetic field of around 10 aT. It is interesting to compare this result to similar searches using co-magnetometers (see Fig. 1). The SMILE experiment [19] employs an alkali-noble-gas co-magnetometer. These detectors are currently among the most sensitive magnetometers, able to measure magnetic fields at the level of \(O(1-10)\mathrm{fT}/\sqrt{\mathrm{Hz}}\).
The spin projection noise, given by
\[\delta B=\frac{1}{\mu_{B}}\frac{1}{\sqrt{T_{2}t_{int}N}}\,, \tag{23}\]
indicates that, in principle, this kind of co-magnetometer would surpass the ACME III expected sensitivity using
the values \(T_{2}\sim 10^{-3}\) s, \(N\sim 10^{11}\), and \(t_{int}\sim 10^{5}\) s. However, the co-magnetometer sensitivity also depends on the photon shot noise, which at frequencies around 0.1 Hz (given by the moving mass period) dominates and lies around \(\sim 20\) fT\(/\sqrt{\text{Hz}}\). For an integration time \(t_{int}\sim 10^{5}\) s this corresponds to an uncertainty \(\delta B\sim\)100 aT for the effective magnetic field, i.e. the axion gradient, around an order of magnitude larger than that expected at ACME III.
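A quick numerical sketch of this comparison (scenario values as quoted above; we restore \(\hbar\) in Eq. 23 so that the estimate carries explicit SI units):

```python
import math

# Spin-projection noise of Eq. (23) vs. the photon-shot-noise floor at the
# ~0.1 Hz mass-modulation frequency, both integrated over t_int.
hbar = 1.055e-34      # J*s
mu_B = 9.274e-24      # J/T

T2, N, t_int = 1e-3, 1e11, 1e5                        # s, atoms, s
dB_spin = (hbar / mu_B) / math.sqrt(T2 * t_int * N)   # spin projection noise

psd_photon = 20e-15                                   # ~20 fT/sqrt(Hz) near 0.1 Hz
dB_photon = psd_photon / math.sqrt(t_int)             # integrated shot-noise floor

print(f"spin-projection limit: {dB_spin:.1e} T")      # a few aT
print(f"photon shot noise    : {dB_photon:.1e} T")    # ~1e-16 T, i.e. ~100 aT
```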
This explains why, in terms of reach, the use of EDM experiments like ACME III is expected to improve on the co-magnetometer results by around an order of magnitude if moving test masses are employed. Additionally, thanks to the time-dependence of the asymmetry, one can use the _imperfect behaviour_ of the molecular beam in terms of velocity dispersion to measure the Earth gradient as discussed above. This will further improve the reach by several orders of magnitude, in particular for light axions. For axions with mass \(m_{a}\lesssim 10^{-10}\) eV, regions of the parameter space beyond those constrained by astrophysics may be tested.
## V Conclusion
One of the generic ways in which new physics can interact with the Standard Model is through the spin of Standard Model particles. Spin precession experiments are thus well placed to search for a variety of such effects, ranging from time-varying effects caused by dark matter to new fields sourced by terrestrial and laboratory sources. For the latter class of experiments, the signal is fundamentally a dc signal, and a variety of low frequency systematic effects must be overcome in order to see it. Interestingly, there are spin precession experiments that are adept at managing such low frequency systematic effects, namely experiments aimed at measuring the permanent electric dipole moment of electrons and nucleons. In this paper, we have highlighted the opportunities that exist in using the well-developed technology of experiments such as ACME III, and motivated building dedicated experiments using atomic beams and laser-cooled atoms or molecules, to search for spin precession induced by test masses and the earth in the laboratory.
###### Acknowledgements.
PA is supported by the STFC under Grant No. ST/T000864/1. N.R.H. is supported by a U.S. National Science Foundation (NSF) CAREER Award (PHY-1847550), the Heising-Simons Foundation (2022-3361), the Gordon and Betty Moore Foundation (GBMF7947), and the Alfred P. Sloan Foundation (G-2019-12502). D.E.K. and S.R. are supported in part by the NSF under Grant No. PHY-1818899. This work was supported by the U.S. Department of Energy (DOE), Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under contract No. DE-AC02-07CH11359. S.R. is also supported by the DOE under a QuantISED grant for MAGIS, and the Simons Investigator Award No. 827042. This article is based upon work from COST Action COSMIC WISPers CA21106, supported by COST (European Cooperation in Science and Technology). NH and MR thank the Perimeter Institute and the organisers of the _School on Table-Top Experiments for Fundamental Physics_, where this work was initiated, for providing a friendly and exciting atmosphere.

Figure 2: Axion-mediated monopole-dipole forces on nucleons at spin precession experiments searching for axion gradients sourced by a test mass or the earth. The blue lines stand for a dedicated spin precession experiment using Yb co-magnetometer beams. In orange, we show the expected reach of an experiment using cold trapped Yb atoms, assuming the existence of co-magnetometer states. Bounds adapted from [21].
|
2309.16452 | On the Trade-offs between Adversarial Robustness and Actionable
Explanations | As machine learning models are increasingly being employed in various
high-stakes settings, it becomes important to ensure that predictions of these
models are not only adversarially robust, but also readily explainable to
relevant stakeholders. However, it is unclear if these two notions can be
simultaneously achieved or if there exist trade-offs between them. In this
work, we make one of the first attempts at studying the impact of adversarially
robust models on actionable explanations which provide end users with a means
for recourse. We theoretically and empirically analyze the cost (ease of
implementation) and validity (probability of obtaining a positive model
prediction) of recourses output by state-of-the-art algorithms when the
underlying models are adversarially robust vs. non-robust. More specifically,
we derive theoretical bounds on the differences between the cost and the
validity of the recourses generated by state-of-the-art algorithms for
adversarially robust vs. non-robust linear and non-linear models. Our empirical
results with multiple real-world datasets validate our theoretical results and
show the impact of varying degrees of model robustness on the cost and validity
of the resulting recourses. Our analyses demonstrate that adversarially robust
models significantly increase the cost and reduce the validity of the resulting
recourses, thus shedding light on the inherent trade-offs between adversarial
robustness and actionable explanations. | Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju | 2023-09-28T13:59:50Z | http://arxiv.org/abs/2309.16452v2 | # On the Trade-offs between Adversarial Robustness and Actionable Explanations
###### Abstract
As machine learning models are increasingly being employed in various high-stakes settings, it becomes important to ensure that predictions of these models are not only adversarially robust, but also readily explainable to relevant stakeholders. However, it is unclear if these two notions can be simultaneously achieved or if there exist trade-offs between them. In this work, we make one of the first attempts at studying the impact of adversarially robust models on actionable explanations which provide end users with a means for recourse. We theoretically and empirically analyze the cost (ease of implementation) and validity (probability of obtaining a positive model prediction) of recourses output by state-of-the-art algorithms when the underlying models are adversarially robust vs. non-robust. More specifically, we derive theoretical bounds on the differences between the cost and the validity of the recourses generated by state-of-the-art algorithms for adversarially robust vs. non-robust linear and non-linear models. Our empirical results with multiple real-world datasets validate our theoretical results and show the impact of varying degrees of model robustness on the cost and validity of the resulting recourses. Our analyses demonstrate that adversarially robust models significantly increase the cost and reduce the validity of the resulting recourses, thus shedding light on the inherent trade-offs between adversarial robustness and actionable explanations.
## 1 Introduction
In recent years, machine learning (ML) models have made significant strides, becoming indispensable tools in high-stakes domains such as banking, healthcare, and criminal justice. As these models continue to gain prominence, it is more crucial than ever to address the dual challenge of providing actionable explanations to individuals negatively impacted (e.g., denied loan applications) by model predictions, and ensuring adversarial robustness to maintain model integrity. Both prior research and recent regulations have emphasized the importance of adversarial robustness and actionable explanations, deeming them as key pillars of trustworthy machine learning [34; 10; 8] that are critical to real-world applications.
Existing machine learning research has explored adversarial robustness and actionable explanations independently. For instance, prior research has proposed various approaches for implementing actionable explanations in practice, with counterfactual explanations or algorithmic recourse considered particularly promising [35]. These explanations inform individuals denied a loan by a bank's predictive model about the specific profile aspects (features) that need modification to achieve a positive outcome. Several recent works have tackled the generation of counterfactual explanations [35; 29; 20; 12]. Concurrently, previous studies have shown that complex models, such as deep neural networks, are susceptible to adversarial examples--infinitesimal input perturbations designed
to achieve adversary-selected outcomes [27; 9]. Adversarial training has been proposed as a defense against adversarial examples, aiming to learn adversarially robust models [18].
Despite numerous works addressing adversarial robustness or actionable explanations, only a few efforts have investigated the possibility of simultaneously achieving both or the potential trade-offs between them; indeed, only a handful of works exist at the intersection of these areas [25; 21]. For example, Pawelczyk et al. [21] showed that the distance between counterfactuals (recourses) generated by specific state-of-the-art methods and adversarial examples is quite small for linear models. While these findings highlight the need for a deeper examination of the connections between adversarial robustness and actionable explanations, the potential trade-offs or deeper links remain unexplored.
**Present work.** In this study, we address the aforementioned gaps by presenting the first-ever investigation of the impact of adversarially robust models on algorithmic recourse. We provide a theoretical and empirical analysis of the _cost_ (ease of implementation) and _validity_ (likelihood of achieving a desired model prediction) of the recourses generated by state-of-the-art algorithms for adversarially robust and non-robust models. In particular, we establish theoretical bounds on the differences in cost and validity for recourses produced by various gradient-based [17; 35] and manifold-based [20] recourse methods for adversarially robust and non-robust linear and non-linear models (see Section 4). To achieve this, we first derive theoretical bounds on the differences between the weights (parameters) of adversarially robust and non-robust linear and non-linear models, and then use these bounds to establish the differences in cost and validity of the corresponding recourses.
We conducted extensive experiments with multiple real-world datasets from diverse domains (Section 5). Our theoretical and empirical analyses provide several interesting insights into the relationship between adversarial robustness and algorithmic recourse: i) the cost of recourse increases with the degree of robustness of the underlying model, and ii) the validity of recourse deteriorates as the degree of robustness of the underlying model increases. Additionally, we conducted a qualitative analysis of the recourses generated by state-of-the-art methods, and observed that the number of valid recourses for any given instance decreases as the underlying model's robustness increases. More broadly, our analyses and findings shed light on the inherent trade-offs between adversarial robustness and actionable explanations.
## 2 Related Work
**Algorithmic Recourse.** Several approaches have been proposed in recent literature to provide recourses to affected individuals [35; 29; 31; 20; 19; 12; 14]. These approaches can be broadly categorized along the following dimensions [33]: _type of the underlying predictive model_ (e.g., tree-based vs. differentiable classifier), _type of access_ of the underlying predictive model (e.g., black box vs. gradient access), whether they encourage _sparsity_ in counterfactuals (i.e., allowing changes in a small number of features), whether counterfactuals should lie on the _data manifold_, whether the underlying _causal relationships_ should be accounted for when generating counterfactuals, and whether the produced output by the method should be _multiple diverse counterfactuals_ or a single counterfactual. Recent works also demonstrate that recourses output by state-of-the-art techniques are not robust, i.e., small perturbations to the original instance [5; 26], the underlying model [28; 24], or the recourse [22] itself may render the previously prescribed recourse(s) invalid. These works also proposed minimax optimization problems to find _robust_ recourses to address the aforementioned challenges.
**Adversarial Examples and Robustness.** Prior works have shown that complex machine learning models, such as deep neural networks, are vulnerable to small changes in input [27]. This behavior of predictive models allows for generating adversarial examples (AEs) by adding infinitesimal changes to input targeted to achieve adversary-selected outcomes [27; 9]. Prior works have proposed several techniques to generate AEs using varying degrees of access to the model, training data, and the training procedure [3]. While gradient-based methods [9; 16] return the smallest input perturbations which flip the label as adversarial examples, generative methods [38] constrain the search for adversarial examples to the training data-manifold. Finally, some methods [4] generate adversarial examples for non-differentiable and non-decomposable measures in complex domains such as speech recognition and image segmentation. Prior works have shown that Empirical Risk Minimization (ERM) does not yield models that are robust to adversarial examples [9; 16]. Hence, to reliably train adversarially robust models, Madry et al. [18] proposed the adversarial training objective which minimizes the worst-case loss within some \(\epsilon\)-ball perturbation region around the input instances.
**Intersections between Adversarial ML and Model Explanations.** There has been a growing interest in studying the intersection of adversarial ML and model explainability [10]. Among all these works, two explorations are relevant to our work [21; 25]. Shah et al. [25] studied the interplay between adversarial robustness and post hoc explanations, demonstrating that gradient-based explanations violate the primary assumption of attributions - features with higher attribution are more important for model prediction - in the case of non-robust models. Further, they show that such a violation does not occur when the underlying models are robust to \(\ell_{2}\) and \(\ell_{\infty}\) input perturbations. More recently, Pawelczyk et al. [21] demonstrated that the distance between the recourses generated by state-of-the-art methods and adversarial examples is small for linear models. While existing works explore the connections between adversarial ML and model explanations, none focus on the trade-offs between adversarial robustness and actionable explanations, which is the focus of our work.
## 3 Preliminaries
**Notation.** In this work, we denote a model \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\), where \(\mathbf{x}\in\mathcal{X}\) is a \(d\)-dimensional input sample, \(\mathcal{X}\) is the training dataset, and the model is parameterized with weights \(\mathbf{w}\). In addition, we represent the non-robust and adversarially robust models using \(f_{\text{NR}}(\mathbf{x})\) and \(f_{\text{R}}(\mathbf{x})\), and the linear and neural network models using \(f^{\text{L}}(\mathbf{x})\) and \(f^{\text{NTK}}(\mathbf{x})\). Below, we provide a brief overview of adversarially robust models, and some popular methods for generating recourses.
**Adversarially Robust Models.** Despite the superior performance of machine learning (ML) models, they are susceptible to adversarial examples (AEs), i.e., inputs generated by adding infinitesimal perturbations to the original samples targeted to change prediction label [1]. One standard approach to ameliorate this problem is via adversarial training which minimizes the worst-case loss within some perturbation region (the perturbation model) [15]. In particular, for a model \(f\) parameterized by weights \(\mathbf{w}\), loss function \(\ell(\cdot)\), and training data \(\{\mathbf{x}_{i},y_{i}\}_{i=\{1,2,\dots,n\}}\in\mathcal{D}_{\text{train}}\), the optimization problem of minimizing the worst-case loss within \(\ell_{p}-\)norm perturbation with radius \(\epsilon\) is:
\[\min_{\mathbf{w}}\frac{1}{|\mathcal{D}_{\text{train}}|}\sum_{(\mathbf{x},y)\in\mathcal{D}_{\text{train}}}\max_{\delta\in\Delta_{p,\epsilon}}\ell(f(\mathbf{x}+\delta),y), \tag{1}\]
where \(\mathcal{D}_{\text{train}}\) denotes the training dataset and \(\Delta_{p,\epsilon}=\{\delta:\|\delta\|_{p}\leq\epsilon\}\) is the \(\ell_{p}\) ball with radius \(\epsilon\) centered around sample \(\mathbf{x}\). We use \(p=\infty\) for our theoretical analysis resulting in a closed-form solution of the model parameters \(\mathbf{w}\).
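To make the objective in Eq. 1 concrete, the following PyTorch sketch implements the standard PGD approximation of the inner maximization and one step of the outer minimization. It is a generic illustration under our own assumptions (a differentiable binary classifier producing a single logit, float labels in \(\{0,1\}\)), not the training code used in this paper.

```python
import torch

def pgd_perturb(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Inner maximization of Eq. (1): projected gradient ascent on the loss
    within an l_inf ball of radius eps around x. y: float targets in {0., 1.}."""
    loss_fn = torch.nn.BCEWithLogitsLoss()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta).squeeze(-1), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascent step on the loss
            delta.clamp_(-eps, eps)             # project back onto the l_inf ball
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_step(model, optimizer, x, y, eps=0.1):
    """Outer minimization of Eq. (1): one optimizer step on worst-case inputs."""
    delta = pgd_perturb(model, x, y, eps=eps)
    optimizer.zero_grad()  # clear gradients accumulated during the attack
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        model(x + delta).squeeze(-1), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```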
**Algorithmic Recourse.** One way to generate recourses is by explaining to affected individuals what features in their profile need to change and by how much in order to obtain a positive outcome. Counterfactual explanations that essentially capture the aforementioned information can be therefore used to provide recourses. The terms _counterfactual explanations_ and _algorithmic recourse_ have, in fact, become synonymous in recent literature [13; 29; 32]. In particular, methods that try to find algorithmic recourses do so by finding a counterfactual \(\mathbf{x}^{\prime}=\mathbf{x}+\zeta\) that is closest to the original instance \(\mathbf{x}\) and change the model's prediction \(f(\mathbf{x}+\zeta)\) to the target label, where \(\zeta\) determines a set of changes that can be made to \(\mathbf{x}\) in order to reverse the negative outcome. Next, we describe three popular recourse methods we analyze to understand the implications of adversarially robust models on algorithmic recourses.
**Score CounterFactual Explanations (SCFE).** For a given model \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\), a distance function \(d:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}_{+}\), and sample \(\mathbf{x}\), Wachter et al. [35] define the problem of generating a counterfactual \(\mathbf{x}^{\prime}{=}\mathbf{x}+\zeta\) using the following objective:
\[\operatorname*{arg\,min}_{\mathbf{x}^{\prime}}(f(\mathbf{x}^{\prime})-s)^{2}+ \lambda d(\mathbf{x}^{\prime},\mathbf{x}), \tag{2}\]
where \(s\) is the target score for the counterfactual \(\mathbf{x}^{\prime}\), \(\lambda\) is the regularization coefficient, and \(d(\cdot)\) is the distance between sample \(\mathbf{x}\) and its counterfactual \(\mathbf{x}^{\prime}\).
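For illustration, a minimal gradient-descent sketch of this objective, under our own simplifying assumptions (a squared \(\ell_{2}\) distance for \(d(\cdot)\), a differentiable model returning a scalar score, and illustrative hyperparameters):

```python
import torch

def scfe_recourse(model, x, s=1.0, lam=0.1, lr=0.05, steps=500):
    """Minimize (f(x') - s)^2 + lam * ||x' - x||^2 over x' by gradient descent.
    `model` maps a feature vector to a scalar score; values are illustrative."""
    x_cf = x.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        score = model(x_cf).squeeze()
        loss = (score - s) ** 2 + lam * ((x_cf - x) ** 2).sum()
        loss.backward()
        optimizer.step()
    return x_cf.detach()  # counterfactual x'; recourse cost is ||x_cf - x||
```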
**C-CHVAE.** Given a Variational AutoEncoder (VAE) model with encoder \(\mathcal{I}_{\gamma}\) and decoder \(\mathcal{G}_{\theta}\) trained on the original data distribution \(\mathcal{D}_{\text{train}}\), C-CHVAE [20] aims to generate recourses in the latent space \(\mathcal{Z}\), where \(\mathcal{I}_{\gamma}:\mathcal{X}\rightarrow\mathcal{Z}\). The encoder transforms a given sample \(\mathbf{x}\) into a latent representation \(\mathbf{z}\in\mathcal{Z}\) and the decoder takes \(\mathbf{z}\) as input and generates \(\hat{\mathbf{x}}\) as similar as possible to \(\mathbf{x}\). Formally, C-CHVAE generates recourse using the following objective function:
\[\zeta^{*}=\operatorname*{arg\,min}_{\zeta\in\mathcal{Z}}\lVert\zeta\rVert\ \ \ \text{such that}\ \ \ f(\mathcal{G}_{\theta}(\mathcal{I}_{\gamma}(\mathbf{x})+\zeta))\neq f( \mathbf{x}), \tag{3}\]
where \(\zeta\) is the cost for generating a recourse, \(\mathcal{I}_{\gamma}\) allows to search for counterfactuals in the data manifold and \(\mathcal{G}_{\theta}\) projects the latent counterfactuals back to the input feature space.
**Growing Spheres Method (GSM).** While the above techniques directly optimize specific objective functions for generating counterfactuals, GSM [17] uses a search-based algorithm to generate recourses by randomly sampling points around the original instance \(\mathbf{x}\) until a sample with the target label is found. In particular, the GSM method first draws an \(\ell_{2}\)-sphere around a given instance \(\mathbf{x}\), randomly samples points within that sphere, and checks whether any sampled point results in the target prediction. The method then contracts or expands the sphere until a (sparse) counterfactual is found and returns it. GSM defines a minimization problem using a function \(c:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}_{+}\), where \(c(\mathbf{x},\mathbf{x}^{\prime})\) is the cost of going from instance \(\mathbf{x}\) to counterfactual \(\mathbf{x}^{\prime}\):
\[\mathbf{x}^{\prime*}=\operatorname*{arg\,min}_{\mathbf{x}^{\prime}\in \mathcal{X}}\{c(\mathbf{x},\mathbf{x}^{\prime})\;\mid\;f(\mathbf{x}^{\prime}) \neq f(\mathbf{x})\}, \tag{4}\]
where \(\mathbf{x}^{\prime}\) is sampled from the \(\ell_{2}\)-ball around \(\mathbf{x}\) such that \(f(\mathbf{x}^{\prime})\neq f(\mathbf{x})\), \(c(\mathbf{x},\mathbf{x}^{\prime})=\|\mathbf{x}^{\prime}-\mathbf{x}\|_{2}+ \gamma\|\mathbf{x}^{\prime}-\mathbf{x}\|_{0}\), and \(\gamma\in\mathbb{R}_{+}\) is the weight associated to the sparsity objective.
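A simplified NumPy sketch of this search is given below (our own construction: the published algorithm also contracts the initial sphere and applies a separate sparsification step, both omitted here):

```python
import numpy as np

def growing_spheres(predict, x, step=0.1, n_samples=500, max_radius=10.0, seed=0):
    """Sample candidates in expanding l2-shells around x until the predicted
    label flips; return the closest label-flipping candidate."""
    rng = np.random.default_rng(seed)
    y0 = predict(x)
    radius = step
    while radius <= max_radius:
        u = rng.normal(size=(n_samples, x.size))
        u /= np.linalg.norm(u, axis=1, keepdims=True)       # random directions
        r = rng.uniform(radius - step, radius, size=(n_samples, 1))
        candidates = x + r * u                               # points in the shell
        flipped = [c for c in candidates if predict(c) != y0]
        if flipped:
            return min(flipped, key=lambda c: np.linalg.norm(c - x))
        radius += step                                       # expand the shell
    return None  # no counterfactual found within max_radius
```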
## 4 Our Theoretical Analysis
Here, we perform a detailed theoretical analysis to bound the cost and validity differences of recourses generated by state-of-the-art methods when the underlying models are non-robust vs. adversarially robust, for the case of linear and non-linear predictors. In particular, we compare the cost differences (Sec. 4.1) of the recourses obtained using 1) gradient-based methods like SCFE [35] and 2) manifold-based methods like C-CHVAE [20]. Finally, we show that the validity of the recourses generated using existing methods for robust models is lower compared to that of non-robust models (Sec. 4.2).
### Cost Analysis
The cost of a generated algorithmic recourse is defined as the distance between the input instance \(\mathbf{x}\) and the counterfactual \(\mathbf{x}^{\prime}\) obtained using a recourse finding method [33]. Algorithmic recourses with lower costs are considered better as they achieve the desired outcome with minimal changes to input. Next, we theoretically analyze the cost difference of recourses generated for non-robust and adversarially robust linear and non-linear models. Below, we first find the weight difference between non-robust and adversarially robust models and then use these lemmas to derive the recourse cost differences.
**Cost Analysis of recourses generated using SCFE method.** Here, we derive the lower and upper bound for the cost difference of recourses generated using SCFE [35] method when the underlying models are non-robust vs. adversarially robust linear and non-linear models. We first derive a bound for the difference between non-robust and adversarially robust linear model weights.
**Lemma 1**.: _(Difference between non-robust and adversarially robust linear model weights) For an instance \(\mathbf{x}\), let \(\mathbf{w}_{\text{NR}}\) and \(\mathbf{w}_{\text{R}}\) be weights of the non-robust and adversarially robust linear model. Then, for a normalized Lipschitz activation function \(\sigma(\cdot)\), the difference in the weights can be bounded as:_
\[\|\mathbf{w}_{\text{NR}}-\mathbf{w}_{\text{R}}\|_{2}\leq\Delta \tag{5}\]
_where \(\Delta=n\eta(y\|\mathbf{x}^{T}\|_{2}+\epsilon\sqrt{d})\), \(\eta\) is the learning rate, \(\epsilon\) is the \(\ell_{2}\)-norm perturbation ball around the sample \(\mathbf{x}\), \(y\) is the label for \(\mathbf{x}\), \(n\) is the total number of training epochs, and \(d\) is the dimension of the input features. Subsequently, we show that \(\|\mathbf{w}_{\text{NR}}\|_{2}-\Delta\leq\|\mathbf{w}_{\text{R}}\|_{2}\leq\|\mathbf{w}_{\text{NR}}\|_{2}+\Delta\)._
Proof Sketch.: We separately derive the gradients for updating the weight for the non-robust and adversarially robust linear models. The proof uses sigmoidal and triangle inequality properties to derive the bound for the difference between the non-robust and adversarially robust linear model. In addition, we use reverse triangle inequality properties to show that the weights of the adversarially robust linear model are bounded by \(\|\mathbf{w}_{\text{NR}}\|_{2}\pm\Delta\). See Appendix 7.1 for detailed proof.
_Implications:_ We note that the weight difference bound in Eqn. 5 is proportional to the \(\ell_{2}\)-norm of the input and the square root of the number of dimensions of \(\mathbf{x}\). In particular, the bound is tighter for samples with lower feature dimensions \(d\) and models with a smaller degree of robustness \(\epsilon\).
Next, we define the closed-form solution for the cost \(\zeta^{*}\) to generate a recourse for the linear model.
**Definition 1**.: _(Optimal cost for linear models [21]) For a given scoring function \(f(\mathbf{x}){=}\mathbf{w}^{T}\mathbf{x}\), the SCFE method generates a recourse for an input \(\mathbf{x}\) using cost \(\zeta\) such that:_
\[\zeta^{*}=m\frac{\lambda}{\lambda+\|\mathbf{w}\|_{2}^{2}}\cdot\mathbf{w}, \tag{6}\]
_where \(m=s-f(\mathbf{x})\) is the target residual, \(s\) is the target score for \(\mathbf{x}\), \(\mathbf{w}\) is the weight of the linear model, and \(\lambda\) is a given hyperparameter._
We now derive the cost difference bounds of recourses generated using SCFE when the underlying model is non-robust and adversarially robust linear models.
**Theorem 1**.: _(Cost difference of SCFE for linear models) For a given instance \(\mathbf{x}\), let \(\mathbf{x}_{\text{NR}}^{\prime}=\mathbf{x}+\zeta_{\text{NR}}\) and \(\mathbf{x}_{\text{R}}^{\prime}=\mathbf{x}+\zeta_{\text{R}}\) be the recourse generated using Wachter's algorithm for the non-robust and adversarially robust linear models. Then, for a normalized Lipschitz activation function \(\sigma(\cdot)\), the difference in the recourse for both models can be bounded as:_
\[\|\zeta_{\text{NR}}\|_{2}{-}\|\zeta_{\text{R}}\|_{2}{\leq}\ \Big{|}\ \lambda\frac{2\|\mathbf{w}_{\text{NR}}\|_{2}+\Delta}{\| \mathbf{w}_{\text{NR}}\|_{2}(\|\mathbf{w}_{\text{NR}}\|_{2}{-}\Delta)}\ \Big{|}, \tag{7}\]
_where \(\mathbf{w}_{\text{NR}}\) is the weight of the non-robust model, \(\lambda\) is the scalar coefficient on the distance between original sample \(\mathbf{x}\) and generated counterfactual \(\mathbf{x}^{\prime}\), and \(\Delta\) is defined in Lemma 1._
Proof Sketch.: We use the optimal cost for recourses in linear models (see Def. 1) for deriving the cost difference bounds. The proof for the weight difference uses linear algebra and triangle inequality properties. See Appendix 7.2 for the complete proof.
_Implications:_ The derived bounds imply that the differences between costs are a function of the quantity \(\Delta\) (RHS term from Lemma 1), the weights of the non-robust model \(||\mathbf{w}_{\text{NR}}||_{2}\), and \(\lambda\), where the bound of the difference between costs is tighter (lower) for smaller \(\Delta\) values and when the \(\ell_{2}\)-norm of the non-robust model weight is large (due to the quadratic term in the denominator). We note that the \(\Delta\) term is a function of the \(\ell_{2}\)-norm of the input \(\mathbf{x}\) and the square root of the number of dimensions \(d\) of the input sample, where the bound is tighter for smaller feature dimensions \(d\), models with a smaller degree of robustness \(\epsilon\), and \(\mathbf{x}\) with larger \(\ell_{2}\)-norms.
Next, we define the closed-form solution for the cost \(\zeta^{*}\) required to generate a recourse when the underlying model is a wide neural network.
**Definition 2**.: _(Kernel Matrix for ReLU networks [6, 37]) The closed-form solution of the Neural Tangent Kernel for a two-layer neural network model with ReLU non-linear activation is given by:_
\[\mathbf{K}^{\infty}(\mathbf{x}_{i},\mathbf{x}_{j})=\mathbf{x}_{i}^{\text{T}} \mathbf{x}_{j}\Big{(}\pi-\arccos(\frac{\mathbf{x}_{i}^{\text{T}}\mathbf{x}_{j }}{\|\mathbf{x}_{i}\|\ \|\mathbf{x}_{j}\|})\Big{)}/2\pi, \tag{8}\]
_where \(\mathbf{K}^{\infty}\) is the Neural Tangent Kernel matrix and \(\mathbf{x}_{i}\in\mathbb{R}^{d}\)._
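Eq. 8 is straightforward to evaluate numerically; a NumPy transcription (with the \(\arccos\) argument clipped against round-off, and assuming nonzero rows) is:

```python
import numpy as np

def ntk_relu(X1, X2):
    """Closed-form two-layer ReLU NTK of Eq. (8) for row-wise data matrices."""
    dots = X1 @ X2.T
    norms = np.linalg.norm(X1, axis=1)[:, None] * np.linalg.norm(X2, axis=1)[None, :]
    cos = np.clip(dots / norms, -1.0, 1.0)   # guard arccos against round-off
    return dots * (np.pi - np.arccos(cos)) / (2 * np.pi)

# NTK-model "weights" as they appear in Theorem 2 below, for placeholder
# training data X, labels Y, and bias beta:
#   w_ntk = np.linalg.solve(ntk_relu(X, X) + beta * np.eye(len(X)), Y)
```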
We now derive the difference between costs for generated SCFE recourses when the underlying model is non-robust vs. adversarially robust wide neural network model.
**Theorem 2**.: _(Cost difference for SCFE for wide neural network) For NTK models with weights \(\mathbf{w}_{\text{NR}}^{\text{NTK}}\) and \(\mathbf{w}_{\text{R}}^{\text{NTK}}\) for the non-robust and adversarially robust models, the cost difference between the recourses generated for sample \(\mathbf{x}\) is bounded as:_

\[\|\zeta_{\text{NR}}\|_{2}{-}\|\zeta_{\text{R}}\|_{2}{\leq}\ \Big{|}\ \frac{2}{\mathbf{H}(\|\bar{\mathbf{w}}_{\text{NR}}^{\text{NTK}}\|_{2},\|\bar{\mathbf{w}}_{\text{R}}^{\text{NTK}}\|_{2})}\ \Big{|}, \tag{9}\]

_where \(\mathbf{H}(\cdot,\cdot)\) denotes the harmonic mean, \(\bar{\mathbf{w}}_{\text{NR}}^{\text{NTK}}{=}\nabla_{\mathbf{x}}\mathbf{K}^{\infty}(\mathbf{x},\mathbf{X})\mathbf{w}_{\text{NR}}^{\text{NTK}}\), \(\bar{\mathbf{w}}_{\text{R}}^{\text{NTK}}{=}\nabla_{\mathbf{x}}\mathbf{K}^{\infty}(\mathbf{x},\mathbf{X}_{\text{R}})\mathbf{w}_{\text{R}}^{\text{NTK}}\), \(\mathbf{K}^{\infty}\) is the NTK associated with the wide neural network model, \(\mathbf{w}_{\text{NR}}^{\text{NTK}}{=}(\mathbf{K}^{\infty}(\mathbf{X},\mathbf{X})+\beta\mathbf{I}_{n})^{-1}\mathbf{Y}\), \(\mathbf{w}_{\text{R}}^{\text{NTK}}{=}(\mathbf{K}^{\infty}(\mathbf{X}_{\text{R}},\mathbf{X}_{\text{R}})+\beta\mathbf{I}_{n})^{-1}\mathbf{Y}\), \(\beta\) is the bias of the NTK model, \((\mathbf{X},\mathbf{X}_{\text{R}})\) are the training samples for the non-robust and adversarially robust models, and \(\mathbf{Y}\) are the labels of the training samples._
Proof Sketch.: The proof follows from Def. 2, where we use data processing, Taylor expansion, and triangle inequalities to bound the difference between costs of recourses output by SCFE for non-robust vs. adversarially robust wide neural network models. See Appendix 7.3 for the complete proof.
_Implications:_ The proposed bounds imply that the difference in costs is bounded by the harmonic mean of the NTK models weights of non-robust and robust models, _i.e.,_ the bound is tighter for larger harmonic means, and vice-versa. In particular, the norm of the weight of the non-robust and adversarially robust NTK model is large if the gradient of NTK associated with the respective model is large.
**Cost Analysis of recourses generated using C-CHVAE method.** We extend our analysis of the cost difference for recourses generated using manifold-based methods for non-robust and adversarially robust models. In particular, we leverage C-CHVAE that leverages variational autoencoders to generate counterfactuals. For a fair comparison, we assume that both models use the same encoder \(\mathcal{I}_{\gamma}\) and decoder \(\mathcal{G}_{\theta}\) networks for learning the latent space of the given input space \(\mathcal{X}\).
**Definition 3**.: _(Bora et al. [2]) A decoder (generative) model \(\mathcal{G}\) is \(L\)-Lipschitz if \(\forall\mathbf{z}_{1},\mathbf{z}_{2}\in\mathcal{Z}\), we have:_

\[\|\mathcal{G}(\mathbf{z}_{1})-\mathcal{G}(\mathbf{z}_{2})\|_{p}\leq L\|\mathbf{z}_{1}-\mathbf{z}_{2}\|_{p}. \tag{10}\]
Next, we derive the bounds of the cost difference of recourses generated for non-robust and adversarially robust models using Eqn. 10.
**Theorem 3**.: _(Cost difference for C-CHVAE) Let \(\mathbf{z}^{\prime}_{\text{NR}}\) and \(\mathbf{z}^{\prime}_{\text{R}}\) be the solution returned by the C-CHVAE recourse method by sampling from \(\ell_{p}\)-norm ball in the latent space using an \(L_{G}\)-Lipschitz decoder \(\mathcal{G}(\cdot)\) for a non-robust and adversarially robust model. By definition of the recourse method, let \(\mathbf{x}^{\prime}_{\text{NR}}\)=\(\mathcal{G}(\mathbf{z}^{\prime}_{\text{NR}})\)=\(\mathbf{x}+\zeta_{\text{NR}}\) and \(\mathbf{x}^{\prime}_{\text{R}}\)=\(\mathcal{G}(\mathbf{z}^{\prime}_{\text{R}})\)=\(\mathbf{x}+\zeta_{\text{R}}\) be the respective recourses in the input space whose difference can then be bounded as:_
\[\|\zeta_{\text{NR}}\|_{2}-\|\zeta_{\text{R}}\|_{2}\leq\Big{|}\ L_{G}(r_{\text{ R}}+r_{\text{NR}})\ \Big{|}, \tag{11}\]
_where \(r_{\text{NR}}\) and \(r_{\text{R}}\) be the corresponding radii chosen by the algorithm such that they successfully return a recourse for the non-robust and adversarially robust model._
Proof Sketch.: The proof follows from Def. 3, triangle inequality, L-Lipschitzness of the generative model, and the fact that the \(\ell_{p}\)-norm of the model's outputs are known in the latent space. See Appendix 7.4 for detailed proof.
_Implications:_ The right term in Eqn. 11 entails that the \(\ell_{p}\)-norm of the difference between the recourses generated using C-CHVAE is bounded if 1) the Lipschitz constant of the decoder is small, and 2) the sum of the radii that the algorithm requires to successfully return recourses for the non-robust and adversarially robust models is small.
### Validity Analysis
The validity of a recourse \(\mathbf{x}^{\prime}\) is defined as the probability that it results in the desired outcome [33], i.e., \(\Pr(f(\mathbf{x}^{\prime})=1)\). Below, we analyze the validity of the recourses generated for linear models and, using Lemma 1, show that it is higher for non-robust models.
**Theorem 4**.: _(Validity comparison for linear model) For a given instance \(\mathbf{x}\in\mathbb{R}^{d}\) and desired target label denoted by unity, let \(\mathbf{x}^{\prime}_{\text{R}}\) and \(\mathbf{x}^{\prime}_{\text{NR}}\) be the counterfactuals for adversarially robust \(f_{\text{R}}(\mathbf{x})\) and non-robust \(f_{\text{NR}}(\mathbf{x})\) models, respectively. Then, \(\Pr(f_{\text{NR}}(\mathbf{x}^{\prime}_{\text{NR}})=1)\geq\Pr(f_{\text{R}}( \mathbf{x}^{\prime}_{\text{R}})=1)\) if \(|f_{\text{NR}}(\mathbf{x}^{\prime}_{\text{R}})-f_{\text{NR}}(\mathbf{x}^{ \prime}_{\text{NR}})|\leq\Delta\|\mathbf{x}^{\prime}_{\text{R}}\|_{2}\)._
Proof Sketch.: We derive the conditions under which the probability of obtaining a valid recourse is higher for a non-robust model compared to its adversarially robust counterpart using Lemma 1, natural logarithms, data processing, and Cauchy-Schwartz inequalities. See Appendix 7.5 for the complete proof.
_Implications:_ We show that the condition for the validity is dependent on the weight difference \(\Delta\) of the models (from Lemma 1). Formally, the validity of non-robust models will be greater than or equal to that of adversarially robust models only if the difference between the prediction of the non-robust model on \(\mathbf{x}^{\prime}_{\text{NR}}\) and \(\mathbf{x}^{\prime}_{\text{R}}\) is bounded by \(\Delta\) times the \(\ell_{2}\)-norm of \(\mathbf{x}^{\prime}_{\text{R}}\).
Next, we bound the weight difference of non-robust and adversarially robust wide neural networks.
**Lemma 2**.: _(Difference between non-robust and adversarially robust weights for wide neural network models) For a given NTK model, let \(\mathbf{w}^{\text{NTK}}_{\text{NR}}\) and \(\mathbf{w}^{\text{NTK}}_{\text{R}}\) be weights of the non-robust and adversarially
robust model. Then, for a wide neural network model with ReLU activations, the difference in the weights can be bounded as:_
\[\|\mathbf{w}_{\text{NR}}^{\text{NTK}}-\mathbf{w}_{\text{R}}^{\text{NTK}}\|_{2} \leq\Delta_{\text{K}}\|\mathbf{Y}\|_{2} \tag{12}\]
_where \(\Delta_{\text{K}}\!=\!\|(\mathbf{K}^{\infty}(\mathbf{X},\mathbf{X})\!+\!\beta\mathbf{I}_{n})^{-1}\!-\!(\mathbf{K}^{\infty}(\mathbf{X}_{\text{R}},\mathbf{X}_{\text{R}})\!+\!\beta\mathbf{I}_{n})^{-1}\|_{2}\), \(\mathbf{K}^{\infty}\) is the kernel matrix for the NTK model defined in Def. 2, \((\mathbf{X},\mathbf{X}_{\text{R}})\) are the training samples for the non-robust and adversarially robust NTK models, \(\beta\) is the bias of the ReLU NTK model, and \(\mathbf{Y}\) are the labels of the training samples. Subsequently, we show that \(\|\mathbf{w}_{\text{NR}}^{\text{NTK}}\|_{2}\!-\!\Delta_{\text{K}}\|\mathbf{Y}\|_{2}\leq\|\mathbf{w}_{\text{R}}^{\text{NTK}}\|_{2}\leq\|\mathbf{w}_{\text{NR}}^{\text{NTK}}\|_{2}\!+\!\Delta_{\text{K}}\|\mathbf{Y}\|_{2}\)._
Proof Sketch.: We derive the bound for the weight of the adversarially robust NTK model using the closed-form expression for the NTK weights. Using Cauchy-Schwartz and reverse triangle inequality, we prove that the \(\ell_{2}\)-norm of the difference between the non-robust and adversarially robust NTK model weights is upper bounded by the difference between the kernel matrix \(\mathbf{K}^{\infty}\) of the two models. See Appendix 7.6 for detailed proof.
_Implications:_ Lemma 2 implies that the bound is tight if the generated adversarial samples \(\mathbf{X}_{\text{R}}\) are very close to the original samples, i.e., the degree of robustness of the adversarially robust model is small.
Next, we show that the validity of recourses generated for non-robust wide neural network models is higher than their adversarially robust counterparts.
**Theorem 5**.: _(Validity Comparison for wide neural network) For a given instance \(\mathbf{x}\in\mathbb{R}^{d}\) and desired target label denoted by unity, let \(\mathbf{x}_{\text{R}}^{\prime}\) and \(\mathbf{x}_{\text{NR}}^{\prime}\) be the counterfactuals for adversarially robust \(f_{\text{R}}^{\text{NTK}}(\mathbf{x})\) and non-robust \(f_{\text{NR}}^{\text{NTK}}(\mathbf{x})\) wide neural network models, respectively. Then, \(\Pr\big{(}f_{\text{NR}}^{\text{NTK}}(\mathbf{x}_{\text{NR}}^{\prime})=1\big{)}\geq\Pr\big{(}f_{\text{R}}^{\text{NTK}}(\mathbf{x}_{\text{R}}^{\prime})=1\big{)}\) if \(\big{\|}(\mathbf{K}^{\infty}(\mathbf{x}_{\text{R}}^{\prime},\mathbf{X}_{\text{R}})-\mathbf{K}^{\infty}(\mathbf{x}_{\text{NR}}^{\prime},\mathbf{X}))^{\text{T}}\mathbf{w}_{\text{NR}}^{\text{NTK}}\big{\|}\leq\big{\|}\mathbf{K}^{\infty}(\mathbf{x}_{\text{R}}^{\prime},\mathbf{X}_{\text{R}})^{\text{T}}\big{\|}\Delta_{\text{K}}\|\mathbf{Y}\|_{2}\)._
Proof Sketch.: We extend Theorem 4 by deriving an analogous condition for wide neural network models using Lemma 2, natural logarithms, data processing, and Cauchy-Schwarz inequalities. See Appendix 7.7 for the complete proof.
_Implications:_ Our derived conditions show that if the difference between the NTK \(\mathbf{K}^{\infty}\) associated with the non-robust and adversarially robust models is bounded (i.e., \(\mathbf{K}^{\infty}(\mathbf{x}_{\text{R}}^{\prime},\mathbf{X}_{\text{R}})\approx\mathbf{K}^{\infty}(\mathbf{x}_{\text{NR}}^{\prime},\mathbf{X})\)), then the non-robust model is likely to have a validity greater than or equal to that of its adversarially robust counterpart. Further, we show that this bound is tighter for smaller \(\Delta_{\text{K}}\), and vice versa.
## 5 Experimental Evaluation
In this section, we empirically analyze the impact of adversarially robust models on the cost and validity of recourses. First, we empirically validate our theoretical bounds on differences between the cost and validity of recourses output by state-of-the-art recourse generation algorithms when the underlying models are adversarially robust vs. non-robust. Second, we carry out further empirical analysis to assess the differences in cost and validity of the resulting recourses as the degree of the adversarial robustness of the underlying model changes on three real-world datasets.
### Experimental Setup
Here, we describe the datasets used for our empirical analysis along with the predictive models, algorithmic recourse generation methods, and the evaluation metrics.
**Datasets.** We use three real-world datasets for our experiments: 1) The _German Credit_ [7] dataset comprises demographic (age, gender), personal (marital status), and financial (income, credit duration) features from 1000 credit applicants, with each sample labeled as "good" or "bad" depending on their credit risk. The task is to successfully predict if a given individual is a "good" or "bad" customer in terms of associated credit risk. 2) The _Adult_ [36] dataset contains demographic (e.g., age, race, and gender), education (degree), employment (occupation, hours per week), personal (marital status, relationship), and financial (capital gain/loss) features for 48,842 individuals. The task is to predict if an individual's income exceeds $50K per year. 3) The _COMPAS_ [11] dataset has criminal records and demographic features for 18,876 defendants who were released on bail at U.S. state courts during 1990-2009. The dataset is designed to train a binary classifier to classify defendants into bail (i.e., unlikely to commit a violent crime if released) vs. no bail (i.e., likely to commit a violent crime).
**Predictive models.** We generate recourses for the non-robust and adversarially robust versions of Logistic Regression (linear) and Neural Network (non-linear) models. We use two linear layers with ReLU activation functions as our predictor and set the number of nodes in the intermediate layer to twice the number of nodes in the input layer, i.e., twice the input dimension of each dataset.
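As a concrete illustration, a minimal PyTorch sketch of this predictor is given below; the single-logit output head and the exact placement of the ReLU activation are our reading of the description above, not the authors' released code.

```
import torch.nn as nn

def make_predictor(input_dim: int) -> nn.Sequential:
    # Two linear layers with a ReLU activation; the intermediate layer
    # is twice as wide as the input, per the description above. The
    # single-logit output head is an assumption for binary tasks.
    hidden = 2 * input_dim
    return nn.Sequential(
        nn.Linear(input_dim, hidden),
        nn.ReLU(),
        nn.Linear(hidden, 1),
    )
```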
**Algorithmic Recourse Methods.** We analyze the cost and validity for recourses generated using four popular classes of recourse generation methods, namely, gradient-based (SCFE), manifold-based (C-CHVAE), random search-based (GSM) methods (described in Sec. 3), and robust methods (ROAR) [28], when the underlying models are non-robust and adversarially robust.
**Evaluation metrics.** To concretely measure the impact of adversarial robustness on algorithmic recourse, we analyze the difference between the cost and validity metrics for recourses generated using non-robust and adversarially robust models. To quantify the cost, we measure the average cost incurred to act upon the prescribed recourses across all test-set instances, i.e., \(\text{Cost}(\mathbf{x},\mathbf{x}^{\prime})=\frac{1}{|\mathcal{D}_{\text{test}}|}\sum_{\mathbf{x}\in\mathcal{D}_{\text{test}}}\|\mathbf{x}-\mathbf{x}^{\prime}\|_{2}\), where \(\mathbf{x}\) is the input and \(\mathbf{x}^{\prime}\) is its corresponding recourse. To measure validity, we compute the probability of the generated recourse resulting in the desired outcome, i.e., \(\text{Validity}(\mathbf{x},\mathbf{x}^{\prime})=\frac{|\{\mathbf{x}^{\prime}:f(\mathbf{x}^{\prime})=1\cap\mathbf{x}^{\prime}=g(\mathbf{x},f)\}|}{|\mathcal{D}_{\text{test}}|}\), where \(g(\mathbf{x},f)\) returns recourses for input \(\mathbf{x}\) and predictive model \(f\).
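Both metrics can be computed directly; the sketch below assumes numpy arrays and an sklearn-style `predict` method, with 1 as the desired target label (function names are ours).

```
import numpy as np

def cost(X, X_prime):
    # Average l2 distance between inputs and their recourses over the
    # test set (the Cost definition above with the sum made explicit).
    return np.mean(np.linalg.norm(X - X_prime, axis=1))

def validity(model, X_prime):
    # Fraction of generated recourses that the model assigns the
    # desired label 1.
    return np.mean(model.predict(X_prime) == 1)
```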
**Implementation details.** We train non-robust and adversarially robust predictive models from two popular model classes (logistic regression and neural networks) for all three datasets. In the case of adversarially robust models, we adopt the commonly used min-max optimization objective for adversarial training using varying degrees of robustness, i.e., \(\epsilon\in\{0,0.02,0.05,0.10,0.15,0.20,0.25,0.3\}\). Note that the model trained with \(\epsilon{=}0\) is the non-robust model.
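For reference, the sketch below shows one common instantiation of the min-max objective in PyTorch, using an \(\ell_{\infty}\) PGD attack of radius \(\epsilon\) for the inner maximization; the specific attack, step size, and single-logit binary head are our assumptions, not details taken from the paper.

```
import torch

def pgd_attack(model, x, y, eps, steps=10):
    # Inner maximization: find an l-infinity perturbation of radius
    # eps that (approximately) maximizes the training loss.
    alpha = 2.5 * eps / steps            # common step-size heuristic
    delta = torch.zeros_like(x, requires_grad=True)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for _ in range(steps):
        loss = loss_fn(model(x + delta).squeeze(-1), y.float())
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient ascent step
            delta.clamp_(-eps, eps)              # project onto eps-ball
        delta.grad.zero_()
    return (x + delta).detach()

def adversarial_train_step(model, optimizer, x, y, eps):
    # Outer minimization: update the model on the worst-case inputs.
    # (Parameter gradients accumulated during the attack are discarded
    # by optimizer.zero_grad() below.)
    x_adv = pgd_attack(model, x, y, eps)
    optimizer.zero_grad()
    loss = torch.nn.BCEWithLogitsLoss()(model(x_adv).squeeze(-1), y.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that setting \(\epsilon{=}0\) makes the attack a no-op, recovering standard (non-robust) training, consistent with the note above.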
### Empirical Analysis
Next, we describe the experiments we carried out to understand the impact of adversarial robustness of predictive models on algorithmic recourse. More specifically, we discuss the (1) empirical verification of our theoretical bounds, (2) empirical analysis of the differences between the costs of recourses when the underlying model is non-robust vs. adversarially robust, and (3) empirical analysis to compare the validity of the recourses corresponding to non-robust vs. adversarially robust models.
**Empirical Verification of Theoretical Bounds.** We empirically validate our theoretical findings from Section 4 on real-world datasets. In particular, we first estimate the empirical bounds (RHS of Theorems 1 and 3) for each instance in the test set by plugging the corresponding parameter values into the theorems and compare them with the empirical estimates of the cost differences between recourses generated using gradient-based and manifold-based recourse methods (LHS of Theorems 1 and 3). Figure 2 shows the results obtained from the aforementioned analysis of cost differences. We observe that our bounds are tight, and the empirical estimates fall well within our theoretical bounds. A similar trend is observed for Theorem 2, which covers the case of non-linear models, as shown in Figure 10 in Appendix 8. For the theoretical validity bounds of Theorem 4, we observe that the validity of the non-robust model (denoted by \(\Pr(f_{\text{NR}}(x)=1)\) in Theorem 4) was higher than the validity of the adversarially robust model for all test samples satisfying the condition in Theorem 4 (\(>\) 90% of samples) across a large number of training iterations used for training adversarially robust models with \(\epsilon\in\{0,0.02,0.05,0.1,0.15,0.2,0.25,0.3\}\), as shown in Figure 2.
Figure 1: Analyzing cost and validity differences between recourses generated using non-robust and adversarially robust wide neural networks for the Adult and COMPAS datasets. We find that the i) cost difference (i.e., \(\ell_{2}\)-norm) between the recourses generated for non-robust and adversarially robust models increases for increasing values of \(\epsilon\) and ii) validity decreases for increasing values of \(\epsilon\). Refer to Appendix 8.1 for similar results on larger neural networks.
**Cost Analysis.** To analyze the impact of adversarial robustness on the cost of recourses, we compute the difference between the cost of obtaining a recourse using a non-robust and an adversarially robust model, and plot this difference for varying degrees of robustness \(\epsilon\). Results in Figure 1 show a significant increase in the cost of finding algorithmic recourse for adversarially robust neural network models with increasing degrees of robustness for all the datasets. In addition, the recourse cost for the adversarially robust model is always higher than that of the non-robust model (see appendix Figure 5 for similar trends for the logistic regression model). Further, we observe a relatively smoother increasing trend for SCFE cost differences compared to others. We attribute this trend to the stochasticity present in C-CHVAE and GSM. We find a higher cost difference in SCFE for most datasets, which could result from the larger sample size used in C-CHVAE and GSM. Further, we observe a similar trend in cost differences on increasing the number of iterations to find a recourse.
**Validity Analysis.** To analyze the impact of adversarial robustness on the validity of recourses, we compute the fraction of recourses resulting in the desired outcome, generated using a non-robust and an adversarially robust model under resource constraints, and plot it against varying degrees of robustness \(\epsilon\). Results in Figure 2 show a strong impact of adversarial training on the validity for logistic regression and neural network models trained on three real-world datasets (also see Appendix 8). On average, we observe that the validity drops to zero for models adversarially trained with \(\epsilon>0.2\). To shed more light on this observation, we use t-SNE [30], a non-linear dimensionality reduction technique, to visualize the test samples in two-dimensional space. In Figure 3 (see appendix), we observe a gradual decline in the number of valid recourses around a local neighborhood with an increasing degree of robustness \(\epsilon\). The decline in the number of valid recourses suggests that, for higher \(\epsilon\), multiple recourses in the neighborhood of the input sample are classified to the same class as the input, supporting our hypothesis that adversarially robust models severely impact the validity of recourses and make the recourse search computationally expensive.
## 6 Conclusion
In this work, we theoretically and empirically analyzed the impact of adversarially robust models on actionable explanations. We theoretically bounded the cost differences between recourses output by state-of-the-art techniques when the underlying models are adversarially robust vs. non-robust. We also bounded the validity differences between the recourses corresponding to adversarially robust vs. non-robust models. Further, we empirically validated our theoretical results using three real-world datasets and two popular classes of predictive models. Our theoretical and empirical analyses demonstrate that adversarially robust models significantly increase the cost and decrease the validity of the resulting recourses, thereby highlighting the inherent trade-offs between achieving adversarial robustness in predictive models and providing reliable algorithmic recourses. Our work also paves the way for several interesting future research directions at the intersection of algorithmic recourse and adversarial robustness in predictive models. For instance, given the aforementioned trade-offs, it would be interesting to develop novel techniques which enable end users to navigate these trade-offs based on their personal preferences, e.g., an end user may choose to sacrifice the adversarial robustness of the underlying model to secure lower-cost recourses.
Figure 2: **Cost differences (left):** Empirically calculated cost differences (in orange) and our theoretical lower (in blue) and upper (in green) bounds for C-CHVAE and SCFE recourses corresponding to adversarially robust (trained using \(\epsilon{=}0.3\)) vs. non-robust linear models trained on the Adult dataset. Similar bounds for adversarially robust (trained using \(\epsilon{=}0.3\)) vs. non-robust neural-network models are shown in Figure 10. **Validity (right):** Empirical difference between the validity of recourses for non-robust and adversarially robust linear and neural network models trained on the Adult dataset. Results show no violations of our theoretical bounds.
# Evaluating the Impact of Local Differential Privacy on Utility Loss via Influence Functions
###### Abstract
How to properly set the privacy parameter in differential privacy (DP) has been an open question in DP research since it was first proposed in 2006. In this work, we demonstrate the ability of influence functions to offer insight into how a specific privacy parameter value will affect a model's test loss in the randomized response-based local DP setting. Our proposed method allows a data curator to select the privacy parameter best aligned with their allowed privacy-utility trade-off without requiring heavy computation such as extensive model retraining and data privatization. We consider multiple common randomization scenarios, such as performing randomized response over the features, and/or over the labels, as well as the more complex case of applying a class-dependent label noise correction method to offset the noise incurred by randomization. Further, we provide a detailed discussion over the computational complexity of our proposed approach inclusive of an empirical analysis. Through empirical evaluations we show that for both binary and multi-class settings, influence functions are able to approximate the true change in test loss that occurs when randomized response is applied over features and/or labels with small mean absolute error, especially in cases where noise correction methods are applied.
local differential privacy, randomized response, influence functions, noise correction
## I Introduction
Due to increased public awareness and demand for data privacy over the last decade, it has become common for companies like Google [15], Apple [1], and Microsoft [10] to integrate local differential privacy (LDP) [11, 20] into their data collection procedures. In LDP, users perturb their data locally using a randomization procedure before they are collected, thereby eliminating the reliance on a trustworthy aggregation server common in global DP applications. While several randomization procedures have been proposed [3, 15, 31] to increase the utility of the randomized data, most proposed works build upon the randomized response (RR) [33] process proposed by Warner in 1965. RR was originally proposed as a technique to mitigate response bias in collected survey responses, and it ensures individual-level privacy by injecting plausible deniability into the collected data.
In RR, the probability that a user answers truthfully (or reports their true value to an aggregation server) is set by the privacy parameter \(\epsilon\), with smaller values of \(\epsilon\) leading to greater privacy for the users. When deploying DP in real-world settings, selecting an \(\epsilon\) that ensures a meaningful degree of privacy without significantly degrading the utility of the underlying system that uses the randomized data is a non-trivial task. It often requires heavy effort from skilled practitioners to choose the \(\epsilon\) value that best balances this trade-off. If the practitioner chooses too small of an \(\epsilon\), while the end-users' personal data are kept private, the general population-level statistics cannot be learned. On the other hand, if the practitioner chooses too large of an \(\epsilon\), while being able to infer the wanted statistics, no meaningful notion of privacy is employed and it is as if the end-users' data were collected in the clear. Additionally, the application of DP is problem specific and there is minimal understanding or guidance on how to choose \(\epsilon\) for the specific task at hand [13]. Further, while the paradigm of trading privacy for utility (and vice versa) is well understood, it is difficult to know a priori the effect that a certain \(\epsilon\) will have on the utility of the model without actually perturbing the data and retraining the model.
In this work, we take a step towards solving the problem of selecting \(\epsilon\) in randomized response-based local differential privacy. We consider a modified LDP scenario in which a trusted data curator owns the original (non-privatized) data and wishes to use it to train a model that will be deployed publicly. Currently, selecting \(\epsilon\) which gives the best privacy-utility trade-off requires perturbing the data and retraining the model for every \(\epsilon\) under consideration - which is a time and resource intensive process. We propose to overcome this prohibitive retraining requirement by leveraging influence functions from robust statistics to show the approximate change that would occur to a model's test loss under different \(\epsilon\) values. Specifically, we present a solution that requires a fraction of the amount of retraining and does not require perturbing the original data. In short, we estimate the effect that a specific \(\epsilon\) would have on the test loss of the model if it was actually used to perturb the data and the model was actually retrained. In this manner, we perform a _'What...if...?'_ analysis that answers questions analogous to _'What would happen to the final model's performance if a certain group of training points were perturbed by a specific \(\epsilon\)?'_ The focus on a group of training points may sound unreasonable, but we offer the following example to further motivate our work.
_Consider a company that routinely collects information from their end-users to train an in-house prediction model. The end-users consent to their data being used to train the model - as long as the model is only used internally by the company._
_However, it is highly likely that at some point the company will decide to publish the model online, or sell it to another company, for profit. Some end-users wouldn't mind their data being used in this manner, but others would1 and would require the company to privatize their data using local differential privacy and retrain the model before its release. The company is therefore tasked with finding \(\epsilon\) such that it provides sufficient privacy for the concerned end-users, but does not degrade the utility of the model._
Footnote 1: For example, smokers will be more concerned about their status as a smoker than non-smokers are due to stigma and impacts to health care if their status is revealed.
There are several works similar in spirit to ours that aim to provide insight on selecting \(\epsilon\) and the proper application of DP in practice. In [23], the authors state that while the privacy given by a certain \(\epsilon\) has an intuitive theoretical interpretation, understanding the privacy of \(\epsilon\) in practice is non-trivial. They additionally demonstrate the harm that can occur when \(\epsilon\) is not carefully chosen to suit the problem at hand. Similarly, in [25], the authors offer an intuitive interpretation of \(\epsilon\) based on quantifiable parameters, such as the number of end users and the sensitivity of the underlying system, to provide a more understandable statement on the overall privacy risk posed by a certain \(\epsilon\) value. Further, in [18] the authors present a simple economic model for selecting \(\epsilon\) by studying the impact of \(\epsilon\) on both the data analyst and the prospective participants who contribute private data. We note, however, that all of these works focus on the central model of differential privacy, not local, and do not specifically consider the setting of machine learning. Furthermore, the previous work is quite different from ours and focuses on _explaining the privacy_ that a certain \(\epsilon\) provides. While this work is crucial, we choose to focus on the equally important problem of _explaining the effect_ a certain \(\epsilon\) has on the utility of a machine learning model when it is used to perturb a certain group of data points. To our knowledge, our work is the first to do so and analyze the impact of a certain \(\epsilon\) on the test loss of a model in the randomized response-based LDP setting.
Our contributions are as follows: (1) we present an approach, based on influence functions, for approximating the effect that perturbing a certain group of data points using randomized response-based \(\epsilon\)-LDP has on a model's test loss. Our approach works in multiple common LDP scenarios, including randomizing the features and/or randomizing the labels, as well as in the setting where label noise correction methods are applied. Additionally, our method allows for significant savings in computational costs when a large number of \(\epsilon\) values and/or a large number of groups are involved in performing a _'What...if...?'_ analysis; (2) we perform empirical evaluations over two binary datasets and one multi-class dataset to show the ability of our method to accurately estimate the resulting change in test loss when different \(\epsilon\) values and/or group sizes are used; and (3) we provide a detailed timing analysis which shows that using influence functions to approximate the true change in test loss saves computational time and resources compared to actually retraining the model for every \(\epsilon\) and/or group construction under consideration.
The rest of the paper is organized as follows. We begin in Section II by introducing closely related work. Section III presents an overview of local differential privacy, randomized response, and influence functions. Building on Section III, in Section IV we introduce both the generic and label-specific methodologies for estimating the effect that a specific \(\epsilon\) has on the final test loss of a model, and perform a detailed time complexity analysis. Section V details our experimentation and gives an empirical evaluation of our techniques. Finally, in Section VI we offer our concluding remarks and a discussion of our future work.
## II Related Work
#### II-1 Local Differential Privacy
Differential privacy (DP) is a formal notion of privacy that allows analysts to learn trends in sensitive data without revealing information specific to the individuals in the dataset [12]. In central (also known as global) DP, we assume a trusted data curator collects all the sensitive data and ensures any queries over the data are made differentially private by adding noise proportional to the sensitivity of the query. Local differential privacy (LDP) [20] eliminates the requirement of a trusted curator by requiring each user to perturb their own data before sending them to the server. Numerous research works have been published on LDP [5, 34], and LDP has been utilized in a myriad of different problem settings such as data statistics and analysis [31], graph neural networks [29], and federated learning [17]. LDP is also commonly used by the likes of Google [15], Apple [1], and Microsoft [10] to privatize collected information from their end users. Despite the real-world applications of LDP, it has been repeatedly noted that choosing a proper \(\epsilon\) is complex and problem dependent. In [16], the US Census Bureau notes that to pick a proper \(\epsilon\) for the US Census data release, they constructed a set of graphs showing the trade-off between \(\epsilon\) and accuracy. However, constructing these graphs requires training multiple models, which is costly. Our approach based on influence functions eliminates the necessity of excessive model retraining and data randomization, enabling analysts to choose the proper \(\epsilon\) faster and with fewer compute resources.
One common method to ensure LDP is randomized response. Randomized response was first proposed to mitigate response bias in surveys about sensitive issues like drug use [6, 33]. However, as techniques for differential privacy were being developed in the late 2000s, statisticians realized that randomized response inherently satisfied the requirements for being a differentially private algorithm. Since randomized response injects plausible deniability into the collected data, it naturally protects users' private data. In [32], the authors study how to enforce differential privacy by using randomized response in the data collection scenario, and their work plays an important role in our formulations in Section IV.
#### II-2 Influence Functions
Influence functions are a product of influence analysis from the field of robust statistics [8].
In influence analysis, small perturbations are introduced into the problem formulation (e.g., into the data or assumptions) and the resulting change in the outcome of the analysis is monitored [7, 8]. The idea of using influence functions to monitor change in a statistical model was extended in [21] to show how a single training point influences the final machine learning model's parameters and/or the test loss of a single test point. This work was further extended in [22], in which the authors showed how influence functions can be used to estimate the influence that a group of training points has on the model parameters and/or the loss of a single test point. The methods formulated in these works serve as the basis for several extension works, ours included. Specifically, influence functions have been used to estimate the influence of individual end-users in federated learning [35], analyze the solutions produced by robust optimization [9], perform out-of-distribution generalization analysis [36], and evaluate the fairness of a machine learning model [24, 30]. Further, several works have been published on how to improve the approximation ability of influence function-based methods [2, 4, 27].
While both influence functions and differential privacy have strong ties to robust statistics [12, 14], to our knowledge only one work has been published utilizing both methods [19]. However, [19] aims to improve the performance of differentially private empirical risk minimization by seeking out the training points with high influence and adding additional Gaussian noise to them. In contrast, our work focuses on showing the effect of \(\epsilon\) on the model test loss and is the first to capitalize on the ability of influence functions to show the effect that a chosen \(\epsilon\) has on the utility loss of a model.
## III Preliminaries
In this section, we present the required background information on local differential privacy and influence functions for understanding the discussions of Section IV. We begin by detailing the notation used through the remainder of the paper. Let \(\epsilon\) represent the privacy parameter in LDP and let \(\mathbb{P}[\cdot]\) denote probability. Let \(\mathcal{X},\mathcal{Y}\) be the feature and label domain and \(Z,Z_{te}\) represent the training and testing datasets. Let \(z=(x\in\mathcal{X},y\in\mathcal{Y})\in Z\) represent one of \(n\) training points and let \(z_{te}=(x_{te}\in\mathcal{X},y_{te}\in\mathcal{Y})\in Z_{te}\) represent one of \(m\) testing points. We denote our model \(h:\mathcal{X}\rightarrow\mathcal{Y}\), the model parameters by \(\theta\in\Theta\), and use \(\hat{\theta}\) to denote the optimal model parameters. We use \(\ell(z,\theta)=\ell(h(x;\theta),y)\) to denote the loss function and \(\mathcal{L}(Z,\theta)=\frac{1}{n}\sum_{i=1}^{n}\ell(z^{i},\theta)\) to denote the empirical risk. The empirical risk minimizer is given by \(\hat{\theta}=\arg\min_{\theta\in\Theta}\mathcal{L}(Z,\theta)\) and we assume that the empirical risk is twice-differentiable and strictly convex in \(\theta\)[21].
### _Local Differential Privacy_
As mentioned previously, local differential privacy allows an analyst to learn population statistics without violating the privacy of individuals. More formally, \(\epsilon\)-LDP is defined as follows:
**Definition 1** (\(\epsilon\)-LDP [20]): _A randomized mechanism \(\mathcal{M}\) satisfies \(\epsilon\)-local differential privacy if and only if for any pair of input values \(r,r^{\prime}\) in the domain of \(\mathcal{M}\), and for any possible output \(o\in\text{Range}(\mathcal{M})\), it holds:_
\[\mathbb{P}[\mathcal{M}(r)=o]\leq e^{\epsilon}\cdot\mathbb{P}[\mathcal{M}(r^{ \prime})=o]\]
_Definition 1 states that the probability of outputting \(o\) on record \(r\) is at most \(e^{\epsilon}\) times the probability of outputting \(o\) on record \(r^{\prime}\)._
#### III-A1 Randomized Response
One popular method used to implement LDP is randomized response. Let \(u\) be a private variable that can take one of \(C\) values. We can formalize the randomized response process as a \(C\times C\) distortion matrix \(\textbf{P}=(p_{uv})_{C\times C}\) where \(p_{uv}=\mathbb{P}[v|u]\in(0,1)\) denotes the probability that the output of the randomized response process is \(v\in\{1,\ldots,C\}\) when the real attribute value is \(u\in\{1,\ldots,C\}\). Note that the entries of the distortion matrix are probabilities, and therefore the sum of the probabilities in each row is 1 [32]. Further, **P** can be set to achieve both optimal utility and \(\epsilon\)-DP as follows [32]:
\[p_{uv}=\begin{cases}\frac{e^{\epsilon}}{C-1+e^{\epsilon}}&\text{if }u=v\\ \frac{1}{C-1+e^{\epsilon}}&\text{if }u\neq v\end{cases} \tag{1}\]
If we have a dataset that was collected using randomized response, then using the distortion matrix **P** that was used during data collection, we can estimate the true population distribution from the noisy collected data. Let \(\boldsymbol{\pi}=\{\pi_{1},\ldots,\pi_{C}\}\) be the true (to be estimated) proportion of the values in the original population and let \(\boldsymbol{\lambda}=\{\lambda_{1},\ldots,\lambda_{C}\}\) be the observed proportion of the values in the collected noisy dataset. Using the relationship \(\boldsymbol{\pi}\approx\textbf{P}^{-1}\boldsymbol{\lambda}\) we can estimate the true underlying population \(\boldsymbol{\pi}\) based on the observed values in the collected noisy dataset \(\boldsymbol{\lambda}\). We note that in our setting a trustworthy server has all the original statistics \(\boldsymbol{\pi}\). Therefore, to estimate the values of \(\boldsymbol{\lambda}\) without actually perturbing the data, we can calculate \(\boldsymbol{\lambda}\approx\textbf{P}\boldsymbol{\pi}\). We discuss this idea further in Section IV.
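A minimal numpy sketch of the distortion matrix of Eq. 1 and the two estimation directions discussed above (function and variable names are ours):

```
import numpy as np

def distortion_matrix(C, eps):
    # C x C randomized response matrix P (Eq. 1): keep the true value
    # with probability e^eps / (C - 1 + e^eps), else flip uniformly.
    p_flip = 1.0 / (C - 1 + np.exp(eps))
    p_keep = np.exp(eps) / (C - 1 + np.exp(eps))
    return np.full((C, C), p_flip) + (p_keep - p_flip) * np.eye(C)

P = distortion_matrix(C=3, eps=1.0)
pi = np.array([0.5, 0.3, 0.2])    # true proportions (known to the curator)
lam = P @ pi                      # expected noisy proportions, lam ~ P pi
pi_hat = np.linalg.solve(P, lam)  # recovers pi ~ P^{-1} lam from noisy data
```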
### _Influence Functions_
In [21], the authors propose the use of influence functions to study machine learning models through the lens of their training data. Specifically, they show that the influence a single training point \(z=(x,y)\) has on the model parameters can be calculated without actually removing \(z\) from the training set and retraining the model on the resulting dataset. They instead simulate the removal of \(z\) by upweighting it by a small value \(\frac{1}{n}\) (where \(n\) is the total number of training points). They then calculate the influence the training point has on the model parameters as:
\[\mathcal{I}_{up,params}(z)=-H_{\hat{\theta}}^{-1}\nabla_{\theta}\ell(z,\hat{ \theta}) \tag{2}\]
where \(H_{\hat{\theta}}^{-1}\) is the inverse Hessian matrix:
\[H_{\hat{\theta}}^{-1}=\left(\frac{1}{n}\sum_{i=1}^{n}\nabla_{\theta}^{2}\ell(z ^{i},\hat{\theta})\right)^{-1} \tag{3}\]
Note that the inverse Hessian matrix can be calculated explicitly as defined in Eq. 3 or efficiently estimated using the conjugate gradient or stochastic estimation approaches [21]. We discuss the implications of this further in Section IV-C1.
Eq. 2 is obtained by performing a quadratic expansion around the optimal parameters \(\hat{\theta}\), which gives a local approximation of the objective using information about its steepness (the gradient) and curvature (the Hessian). Eq. 2 can be used to approximate the parameters that would be obtained if \(z\) was actually removed from the dataset and the model was retrained as:
\[\hat{\theta}_{-z}\approx\hat{\theta}-\frac{1}{n}\mathcal{I}_{up,params}(z) \tag{4}\]
In [21], the authors further extend Eq. 2 to show the influence a training instance \(z\) has on the loss of a test instance \(z_{te}\):
\[\mathcal{I}_{up,loss}(z,z_{te})=-\nabla_{\theta}\ell(z_{te},\hat{\theta})^{T} H_{\hat{\theta}}^{-1}\nabla_{\theta}\ell(z,\hat{\theta}) \tag{5}\]
where \(\nabla_{\theta}\ell(z_{te},\hat{\theta})\) is the gradient of the test instance w.r.t. the optimal model parameters and \(\frac{1}{n}\mathcal{I}_{up,loss}(z,z_{te})\) gives the approximate change in loss for test point \(z_{te}\). The authors of [21] also consider the effect that perturbing a training point has on the parameters/loss of a test point. Consider a training point \(z\) and its perturbed2 value \(z_{\beta}\). Let:
Footnote 2: Here, we consider a general perturbation that can be discrete or continuous. We are not specifically considering randomized response perturbations.
\[\hat{\theta}_{z_{\beta},-z}=\arg\min_{\theta\in\Theta}\mathcal{L}(Z,\theta)+ \frac{1}{n}\ell(z_{\beta},\theta)-\frac{1}{n}\ell(z,\theta) \tag{6}\]
be the empirical risk minimizer on the training points with \(z_{\beta}\) in place of \(z\). The approximate effect that changing \(z\) to \(z_{\beta}\) has on the model parameters can be computed as:
\[\mathcal{I}_{pert,params}(z_{\beta},-z)=-H_{\hat{\theta}}^{-1}\Big{(}\nabla_{ \theta}\big{(}\ell(z_{\beta},\hat{\theta})-\ell(z,\hat{\theta})\big{)}\Big{)} \tag{7}\]
which can then be used to approximate the new parameters as:
\[\hat{\theta}_{z_{\beta},-z}\approx\hat{\theta}+\frac{1}{n}\mathcal{I}_{pert, params}(z_{\beta},-z) \tag{8}\]
As before, the authors of [21] extend Eq. 7 to show the approximate effect that perturbing \(z\) to \(z_{\beta}\) has on the loss of a test point \(z_{te}\):
\[\mathcal{I}_{pert,loss} (z_{\beta},-z,z_{te})= \tag{9}\] \[-\nabla_{\theta}\ell(z_{te},\hat{\theta})^{T}H_{\hat{\theta}}^{- 1}\Big{(}\nabla_{\theta}\big{(}\ell(z_{\beta},\hat{\theta})-\ell(z,\hat{ \theta})\big{)}\Big{)}\]
and \(\frac{1}{n}\mathcal{I}_{pert,loss}(z_{\beta},-z,z_{te})\) is the approximate change in loss for test point \(z_{te}\).
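To make these quantities concrete, the following numpy sketch computes the perturbation influence of Eq. 9 for an \(\ell_{2}\)-regularized logistic regression, where both the Hessian (Eq. 3) and the per-point gradients have closed forms; the damping term and helper names are our additions.

```
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def grad_logloss(theta, x, y):
    # Gradient of the cross-entropy loss at a single point, y in {0, 1}.
    return (sigmoid(x @ theta) - y) * x

def hessian_inv(theta, X, damp=1e-3):
    # Explicit inverse Hessian of the empirical risk (Eq. 3); a small
    # damping term keeps the matrix positive definite and invertible.
    p = sigmoid(X @ theta)
    H = (X * (p * (1 - p))[:, None]).T @ X / len(X) + damp * np.eye(len(theta))
    return np.linalg.inv(H)

def influence_pert_loss(theta, X, z, z_beta, z_te):
    # Eq. 9: effect of perturbing z to z_beta on the loss at z_te.
    (x, y), (xb, yb), (xt, yt) = z, z_beta, z_te
    g_diff = grad_logloss(theta, xb, yb) - grad_logloss(theta, x, y)
    return -grad_logloss(theta, xt, yt) @ hessian_inv(theta, X) @ g_diff
```

The upweighting influence of Eq. 5 is recovered as the special case where the perturbed-gradient term is dropped.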
## IV Methodology
We now detail our approach based on influence functions for estimating the effect that perturbing a group of training points \(S\subset Z\) under randomized response-based \(\epsilon\)-LDP would have on the test loss of a model if the perturbation was actually performed and the model was retrained. We begin by giving a formulation of Eq. 9 in the _Group-to-Group_ setting which is followed by the presentation of the general influence formula for approximating the effect of applying randomized response-based \(\epsilon\)-LDP on the features, labels, or features and labels. We then focus specifically on the label perturbation scenario and show how our proposed formulation can be altered to approximate the effect of applying a noise correction method after randomized response-based \(\epsilon\)-LDP is performed. Finally, we offer an analysis of the computational complexity of our approach compared to the naive approach of model retraining as well as a discussion over the ability of our method to extend to other LDP protocols beyond randomized response.
### _Influence of \(\epsilon\)-LDP on Model Utility Loss_
```
Require: \(Z\), \(G\), \(G_{te}\), \(\epsilon\)
1: \(\hat{\theta}=\arg\min_{\theta}\frac{1}{n}\sum_{i=1}^{n}\ell(h(x^{i};\theta),y^{i})\)  \(\triangleright\) Train model
2: \(H_{\hat{\theta}}^{-1}=\big(\frac{1}{n}\sum_{i=1}^{n}\nabla_{\theta}^{2}\ell(z^{i},\hat{\theta})\big)^{-1}\)  \(\triangleright\) Compute Hessian
3: \(\mathcal{L}_{te}=0\)  \(\triangleright\) Calculate test loss
4: for \(z_{te}\in G_{te}\) do
5:     \(\mathcal{L}_{te}=\mathcal{L}_{te}+\ell(z_{te},\hat{\theta})\)
6: end for
7: \(\eta=-\nabla_{\theta}\frac{\mathcal{L}_{te}}{m}\)  \(\triangleright\) Gradient of average test loss
8: \(\mathcal{L}=0\)
9: for \(z\in G\) do  \(\triangleright\) Calculate loss difference
10:     \(z_{\beta}=\mathcal{R}(z,\epsilon)\)
11:     \(\mathcal{L}=\mathcal{L}+(\ell(z_{\beta},\hat{\theta})-\ell(z,\hat{\theta}))\)
12: end for
13: \(\gamma=\nabla_{\theta}\mathcal{L}\)  \(\triangleright\) Gradient of total loss difference
14: \(\mathcal{I}_{pert,loss}=\eta\cdot H_{\hat{\theta}}^{-1}\cdot\gamma\)
15: return \(\mathcal{I}_{pert,loss}\)
```
**Algorithm 1** Approximate Effect of \(\epsilon\)
#### IV-A1 Group-to-Group Influence
In [22], the authors showed that influences are additive with respect to a single test point. For example, given a group of training points \(G\subset Z\) and a single test point \(z_{te}\):
\[\mathcal{I}_{pert,loss}(G_{\beta},-G,z_{te}) \tag{10}\] \[=\sum_{i\in G}\mathcal{I}_{pert,loss}(z_{\beta}^{i},-z^{i},z_{te})\] \[=-\nabla_{\theta}\ell(z_{te},\hat{\theta})^{T}H_{\hat{\theta}}^{-1}\Big{(}\nabla_{\theta}\sum_{i\in G}\big{(}\ell(z_{\beta}^{i},\hat{\theta})-\ell(z^{i},\hat{\theta})\big{)}\Big{)}\]
where \(G_{\beta}=\{z_{\beta}^{i}\}_{i=1}^{|G|}\), \(G=\{z^{i}\}_{i=1}^{|G|}\). However, in this work we consider the influence that a group of training points has on the loss of a group of test points. To achieve this goal, we begin by extending Eq. 10 to calculate the influence on the loss of a group of test points \(G_{te}\subseteq Z_{te}\):
\[\mathcal{I}_{pert,loss}^{\text{G2G}}(G_{\beta},-G,G_{te})=\sum_{j\in G_{te}}\sum_{i\in G}\mathcal{I}_{pert,loss}(z_{\beta}^{i},-z^{i},z_{te}^{j}) \tag{11}\] \[=-\nabla_{\theta}\mathcal{L}(G_{te},\hat{\theta})^{T}H_{\hat{\theta}}^{-1}\Big{(}\nabla_{\theta}\sum_{i\in G}\big{(}\ell(z_{\beta}^{i},\hat{\theta})-\ell(z^{i},\hat{\theta})\big{)}\Big{)}\]
where \(G_{te}=\{z_{te}^{j}\}_{j=1}^{|G_{te}|}\).
We present the process of calculating \(\mathcal{I}_{pert,loss}^{\text{G2G}}\) in Alg. 1. In the first line, we train the model \(h\) over the training data \(Z\) to produce the optimal parameters \(\hat{\theta}\), which serve as the baseline model in the calculation of the influence function. In line 2, we begin to calculate \(\mathcal{I}_{pert,loss}^{\text{G2G}}\) by first computing the inverse Hessian over the original training data. We note that the inverse Hessian only needs to be computed once, even if multiple different \(\epsilon\) values and group constructions \(G\) are being tested. In lines 3-7, we compute the gradient of the loss over the entire test group \(G_{te}\). As with the inverse Hessian matrix, as long as the construction of \(G_{te}\) does not change, this step only has to be performed once. Lines 8-12 compute the aggregated differences between the loss when
the data point \(z\in G\) is perturbed to \(z_{\beta}\) using randomized response-based \(\epsilon\)-LDP (line 10) and the original loss on point \(z\). Lines 13 and 14 finish out the computation by first taking the gradient of the aggregated loss computed in lines 8-12 and then multiplying the results of lines 2 (the inverse Hessian), 7 (the gradient of the test loss), and 13 (the gradient of the aggregate loss). In Alg. 1, randomization is actually performed on the training point \(z\). However, in the next subsection we will show how \(\mathcal{I}_{pert,loss}^{\text{G2G}}\) can be modified to obtain the estimated effect of randomization without actually having to perturb the training point. We also note that \(z_{\beta}^{i}\) can be constructed three different ways - \(z_{\beta}^{i}=(x_{\beta}^{i},y^{i})\), \(z_{\beta}^{i}=(x^{i},y_{\beta}^{i})\), and \(z_{\beta}^{i}=(x_{\beta}^{i},y_{\beta}^{i})\) - without altering the formulation of Eqs. 8 - 11. This idea is utilized in the next subsection to formulate our general influence formula to estimate the effect of randomized response \(\mathcal{I}_{pert,loss}^{\text{RR}}\).
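Reusing the `grad_logloss` and `hessian_inv` helpers from the logistic-regression sketch in Section III-B, Alg. 1 can be expressed in a few lines of numpy; `randomize` stands in for an arbitrary \(\epsilon\)-randomizer \(\mathcal{R}(z,\epsilon)\) and is an assumed callable.

```
import numpy as np

def approx_effect(theta, X, y, G_idx, X_te, y_te, eps, randomize):
    # Algorithm 1: estimated change in test loss if the training points
    # indexed by G_idx were perturbed with eps-randomized response.
    H_inv = hessian_inv(theta, X)                            # line 2
    eta = -np.mean([grad_logloss(theta, xt, yt)              # lines 3-7
                    for xt, yt in zip(X_te, y_te)], axis=0)
    gamma = np.zeros_like(theta)                             # lines 8-13
    for i in G_idx:
        x_b, y_b = randomize(X[i], y[i], eps)                # line 10
        gamma += grad_logloss(theta, x_b, y_b) - grad_logloss(theta, X[i], y[i])
    return eta @ H_inv @ gamma                               # line 14
```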
#### IV-A2 \(\mathcal{I}_{pert,loss}^{\text{RR}}\)
In this subsection, we make the generalization \(\mathcal{A}=\mathcal{X}\cup\mathcal{Y}\). In other words, instead of considering the features and label domain as separate (e.g., \(z=(x,y)\)), we instead think of them as a combined attribute domain (e.g., \(z=(a)\)). We place a further restriction on \(\mathcal{A}\) such that \(\mathcal{A}=\{A_{1},\ldots,A_{T}\}\) is the set of attributes where each \(A_{t}=\{a_{1},\ldots,a_{d_{t}}\}\) has \(d_{t}\) mutually exclusive and exhaustive categories. In our scenario, we consider that a subset of attributes \(\mathcal{F}\subseteq\mathcal{A}\) are perturbed via randomized response (meaning that both the features and the label could possibly be perturbed). Let \(\mathcal{F}_{\times}\) represent the Cartesian product of the elements of \(\mathcal{F}\). Let each feature \(A_{f}\in\mathcal{F}\) be associated with a randomized response distortion matrix \(\textbf{P}^{f}=\{p_{uv}\}_{d_{f}\times d_{f}}\) where:
\[p_{uv}=\begin{cases}\frac{e^{\epsilon_{f}}}{d_{f}-1+e^{\epsilon_{f}}}&\text{if }u=v\\ \frac{1}{d_{f}-1+e^{\epsilon_{f}}}&\text{if }u\neq v\end{cases} \tag{12}\]
and \(\epsilon_{f}\) represents the privacy parameter for attribute \(A_{f}\).
Let \(\mathcal{V}=\mathcal{A}-\mathcal{F}\). Since \(\mathcal{V}\cap\mathcal{F}=\emptyset\), we can write \(z=(a)\) as \(z=(\alpha\cup\delta)\) where \(\alpha=\{a_{t}\in\mathcal{F}\}\) and \(\delta=\{a_{t}\in\mathcal{V}\}\). For example, let \(\mathcal{A}=\{gender,race,age,income\}\) and let \(\mathcal{F}=\{gender,income\}\). If \(gender=\{m,f\}\) and \(income=\{0,1\}\), then, in this setting, \(\mathcal{F}_{\times}=\{(m,0),(m,1),(f,0),(f,1)\}\). If we have a training point \(z=(a=\{f,B,45,1\})\), then we can rewrite it as \(z=(\alpha\cup\delta)\) where \(\alpha=\{f,1\}\) and \(\delta=\{B,45\}\).
When computing the influence function, we only consider cases where the attributes that are modified under randomized response are not the same as the original attribute combination. I.e., we want \(f_{\times}\in\mathcal{F}_{\times}-\alpha\). Note that according to Eq. 12 we have probability \(\frac{1}{d_{f}-1+e^{\epsilon_{f}}}\) of changing one attribute value \(a_{t}\in\mathcal{A}\) to \(a_{t_{\beta}}\). In order to correctly calculate the probability of \(a\) being perturbed to another element in \(\mathcal{F}_{\times}-\alpha\), we have to consider the probability of all elements in \(\mathcal{F}_{\times}-\alpha\) being the outcome of the randomized response process. This combined probability can be calculated as \(1-\prod_{f\in\mathcal{F}}\frac{e^{\epsilon_{f}}}{d_{f}-1+e^{\epsilon_{f}}}\).
Using these ideas, we can write the group-to-group influence of applying randomized response (\(\mathcal{I}_{pert,loss}^{\text{RR}}\)) as:
\[\begin{split}&\mathcal{I}_{pert,loss}^{\text{RR}}(S,S_{te},\boldsymbol{\epsilon})=-\nabla_{\theta}\mathcal{L}(S_{te},\hat{\theta})^{T}H_{\hat{\theta}}^{-1}\cdot\\ &\nabla_{\theta}\bigg{(}\Big{(}1-\prod_{f\in\mathcal{F}}\frac{e^{\epsilon_{f}}}{d_{f}-1+e^{\epsilon_{f}}}\Big{)}\sum_{i\in S}\sum_{f_{\times}\in\mathcal{F}_{\times}-\alpha^{i}}\big{(}\ell(f_{\times},\hat{\theta})-\ell(z^{i},\hat{\theta})\big{)}\bigg{)}\end{split} \tag{13}\]
where \(\boldsymbol{\epsilon}=\{\epsilon_{f}\}_{f\in\mathcal{F}}\) and \(\frac{1}{n}\mathcal{I}_{pert,loss}^{\text{RR}}\) gives the estimated change in test loss.
### _Influence of \(\epsilon\)-LDP Labels on Model Utility Loss_
We now use Eq. 13 as the base of our formulation for estimating the effect that perturbing only the training labels using randomized response-based \(\epsilon\)-LDP has on the model's final test loss. We note that in this section we detail the formulation for approximating the effect of label perturbation (and not feature, or feature and label, perturbation) to give a foundation upon which we can build our construction of approximating the effect of applying randomized response-based \(\epsilon\)-LDP on the labels with class-dependent label noise correction in the next section. Recall the distortion matrix **P** as defined in Section III. The probability that a training point \(z^{i}=(x^{i},y^{i})\) is perturbed to \(z_{\beta}^{i}=(x^{i},c)\) is \(\frac{1}{C-1+e^{\epsilon}}\). Using this probability, and the idea that the influence function is only calculated over modified points (i.e., \(c\neq y^{i}\)), we can estimate the effect that applying \(\epsilon\)-LDP to group \(S\subset Z\) would have on the loss of a group of test points \(S_{te}\subseteq Z_{te}\) as:
\[\begin{split}&\mathcal{I}_{pert,loss}^{\text{RR-L}}(S,S_{te},\epsilon)=-\nabla_{\theta}\mathcal{L}(S_{te},\hat{\theta})^{T}H_{\hat{\theta}}^{-1}\cdot\\ &\nabla_{\theta}\bigg{(}\frac{1}{C-1+e^{\epsilon}}\sum_{i\in S}\sum_{\begin{subarray}{c}c\in C\\ c\neq y^{i}\end{subarray}}\big{(}\ell(h(x^{i};\hat{\theta}),c)-\ell(h(x^{i};\hat{\theta}),y^{i})\big{)}\bigg{)}\end{split} \tag{14}\]
where \(\frac{1}{n}\mathcal{I}_{pert,loss}^{\text{RR-L}}(S,S_{te},\epsilon)\) gives the estimated change in model test loss when \(\epsilon\)-LDP is applied to the labels.
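A sketch of Eq. 14, again reusing the binary logistic helpers from Section III-B (so \(C=2\)); note that, unlike Alg. 1, no label is actually flipped, since the expectation over randomized response outcomes is handled analytically by the scaling term.

```
import numpy as np

def influence_rr_label(theta, X, y, S_idx, X_te, y_te, eps, C=2):
    # Eq. 14: expected effect of eps-LDP label randomization over
    # group S on the average test loss, without perturbing any label.
    H_inv = hessian_inv(theta, X)
    g_te = np.mean([grad_logloss(theta, xt, yt)
                    for xt, yt in zip(X_te, y_te)], axis=0)
    scale = 1.0 / (C - 1 + np.exp(eps))
    gamma = np.zeros_like(theta)
    for i in S_idx:
        for c in range(C):
            if c != y[i]:
                gamma += (grad_logloss(theta, X[i], c)
                          - grad_logloss(theta, X[i], y[i]))
    return -g_te @ H_inv @ (scale * gamma)
```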
#### IV-B1 Forward Loss Correction
When randomized response is used to perturb labels, it is common that a noise correction procedure is used to counteract the injected noise from randomization. One such noise correction procedure is Forward Loss Correction (FLC) [26]. FLC is an approach to train machine learning models robust to class-dependent label noise (not necessarily noise crafted by randomized response). The authors note that a model learned without loss correction would be tailored to predict the noisy labels instead of the actual labels. To perform FLC, the authors correct the model predictions using a probability matrix that defines the noisy data distribution (in our case, the probability matrix is the distortion matrix **P**) before calculating the loss between the model prediction and the noisy labels \(\tilde{y}\in\tilde{\mathcal{Y}}\). FLC is defined as:
\[\mathcal{L}^{\text{FLC}}(\tilde{Z},\theta)=\frac{1}{n}\sum_{i=1}^{n}\ell(\textbf{P}^{T}h(x^{i};\theta),\tilde{y}^{i}) \tag{15}\]
where \(\tilde{Z}=\{\tilde{z}^{i}=(x^{i},\tilde{y}^{i})\}_{i=1}^{n}\) and \(\ell\) is a proper composite loss3[28] such as cross-entropy or square loss. Here, we will
slightly abuse our notation \(h(x;\theta)\) in order to explicitly show that \(\ell\) is a proper composite loss. Thus far, we have considered \(h\) to map \(\mathcal{X}\rightarrow\mathcal{Y}\). In other words, the output of \(h(x;\theta)\) is the predicted class label. However, for this section we consider \(h(x;\theta)=\mathbb{P}(y\mid x)\). In other words, the output of \(h\) is the predicted probability of the class being \(y\) when the input is \(x\). Using this notation, we can rewrite Eq. 15 as:
\[\mathcal{L}_{\psi}^{\text{FLC}}(\tilde{Z},\theta)=\frac{1}{n}\sum_{i=1}^{n}\ell(\textbf{P}^{T}\psi^{-1}(h(x^{i};\theta)),\tilde{y}^{i}) \tag{16}\]
Here, \(\psi\) is the link function associated with a particular proper loss. For example, softmax is the inverse link function for cross-entropy. When FLC is applied while minimizing a proper composite loss function, [26] notes that the minimizer of the corrected loss under the noisy distribution is the same as the minimizer of the original loss under the clean distribution:
\[\hat{\theta}=\arg\min_{\theta\in\Theta}\mathcal{L}_{\psi}^{\text{FLC}}(\tilde{Z},\theta)=\arg\min_{\theta\in\Theta}\mathcal{L}_{\psi}(Z,\theta) \tag{17}\]
In other words, the learned model will make correct predictions on future non-randomized test data. For brevity, we refer readers to [26] for an in-depth discussion of FLC.
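A PyTorch sketch of the forward-corrected loss of Eq. 16, assuming cross-entropy as the proper composite loss (so softmax is the inverse link \(\psi^{-1}\)); the small constant inside the log is our addition for numerical stability.

```
import torch

def forward_corrected_loss(logits, noisy_labels, P):
    # Eq. 16: push the predicted class probabilities through P^T
    # before scoring them against the noisy labels.
    probs = torch.softmax(logits, dim=1)   # psi^{-1}(h(x))
    corrected = probs @ P                  # row-wise P^T psi^{-1}(h(x))
    return torch.nn.functional.nll_loss(torch.log(corrected + 1e-12),
                                        noisy_labels)
```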
Our scenario varies slightly from [26] in that they assume all labels have been randomly perturbed, while we assume only a known group is. Therefore, if we use Eq. 16 as given in our approach, we will overcompensate for the noise caused by applying randomized response to only group \(S\). To solve this issue, we propose Theorem 1.
**Theorem 1**: _Suppose that the distortion matrix **P** is non-singular. Define the adjusted forward loss correction as:_
\[\mathcal{L}_{\psi}^{\text{FLC}}(S\cup R,\theta)= \tag{18}\] \[\frac{1}{n}\bigg{(}\sum_{i\in R}\ell(\psi^{-1}(h(x^{i};\theta)),y^{i})+\sum_{j\in S}\ell(\textbf{P}^{T}\psi^{-1}(h(x^{j};\theta)),\tilde{y}^{j})\bigg{)}\]
_where \(S\subset Z\) is the group of perturbed points and \(R=Z\backslash S\). Then, the minimizer of the corrected loss under both the noisy and clean distribution is the same as the minimizer of the original loss under the entire clean distribution:_
\[\hat{\theta}=\arg\min_{\theta\in\Theta}\mathcal{L}_{\psi}^{\text{FLC}}(S\cup R,\theta)=\arg\min_{\theta\in\Theta}\mathcal{L}_{\psi}(Z,\theta) \tag{19}\]
Proof:: Assume there are \(K\) disjoint subsets of the training dataset \(Z=G^{1}\cup G^{2}\cup\cdots\cup G^{K}\), each of which is defined by a \(C\times C\) perturbation matrix \(\textbf{P}^{k}\), where \(\{1,\dots,C\}\) represents the possible labels. Assume that the makeup of each subgroup \(G^{i}\), \(\boldsymbol{\pi}^{i}=\{\pi_{1}^{i},\cdots,\pi_{C}^{i}\}\), is known. Then:
\[\textbf{P}^{1}\boldsymbol{\pi}^{1}+\textbf{P}^{2}\boldsymbol{\pi}^{2}+\cdots+ \textbf{P}^{K}\boldsymbol{\pi}^{K}=\textbf{P}\boldsymbol{\pi} \tag{20}\]
where **P** is a matrix with \(p_{uv}=\sum_{i\in K}\frac{\pi_{u}^{i}}{\pi_{u}}p_{uv}^{i}\) and \(\boldsymbol{\pi}\) is a vector with \(\pi_{c}=\sum_{i\in K}\pi_{c}^{i}\). Eq. 20 shows that we can write the \(K\) different group-level data distributions as one population level distribution. Therefore, the loss function in Eq. 18 reduces to Eq. 16 and the proof provided in [26] follows directly. In short, [26] proves Eq. 17 by showing that by combining \(\textbf{P}^{T}\) with \(\psi^{-1}\) (specifically \(\phi^{-1}=\psi^{-1}\circ\textbf{P}^{T}\)) a new link function \(\phi\) is formed and that the following holds:
\[\begin{split}\mathcal{L}_{\phi}(\tilde{Z},\theta)&=\frac{1}{n}\sum_{i=1}^{n}\ell(\phi(\textbf{P}^{T}h(x^{i};\theta)),\tilde{y}^{i})\\ &=\frac{1}{n}\sum_{i=1}^{n}\ell(\psi((\textbf{P}^{-1})^{T}\textbf{P}^{T}h(x^{i};\theta)),\tilde{y}^{i})\\ &=\frac{1}{n}\sum_{i=1}^{n}\ell(\psi(h(x^{i};\theta)),\tilde{y}^{i})\end{split} \tag{21}\]
Theorem 1 gives intuition on how we can incorporate the distortion matrix **P** into the influence function in order to show how implementing FLC to correct the noise from \(\epsilon\)-LDP affects the model's final test loss. Specifically, since influence functions only consider training instances of interest (i.e., those removed/modified/perturbed), we can modify Eq. 14 to account for FLC without overcorrecting the loss:
\[\begin{split}&\mathcal{I}_{pert,loss}^{\text{RR-FLC}}(S,S_{te},\epsilon)=-\nabla_{\theta}\mathcal{L}(S_{te},\hat{\theta})^{T}H_{\hat{\theta}}^{-1}\cdot\\ &\nabla_{\theta}\bigg{(}\frac{1}{C-1+e^{\epsilon}}\sum_{i\in S}\sum_{\begin{subarray}{c}c\in C\\ c\neq y^{i}\end{subarray}}\big{(}\ell(\textbf{P}^{T}h(x^{i};\hat{\theta}),c)-\ell(h(x^{i};\hat{\theta}),y^{i})\big{)}\bigg{)}\end{split} \tag{22}\]
where \(\frac{1}{n}\mathcal{I}_{pert,loss}^{\text{RR-FLC}}(S,S_{te},\epsilon)\) gives the estimated change in model test loss when the effect of using FLC to correct the noise of \(\epsilon\)-LDP is simulated.
### _Discussion_
#### IV-C1 Time Complexity
In Table I, we detail the computational complexity of calculating the influence function using three different approaches to computing the inverse Hessian vector product (IHVP), as well as the time complexity of normally training a logistic regression model using gradient-based learning. Specifically, we detail the complexity of computing \(\mathcal{I}_{pert,loss}^{\text{RR}}\) explicitly and using the conjugate gradient (CG) or stochastic estimation (SE) approaches to estimate the IHVP. We direct interested readers to [21] for a more in-depth discussion of how CG and SE can decrease the total computation time. While using the explicit IHVP approach seems to be more computationally complex than retraining, we note that this cost is only accrued once for all \(\epsilon\) and groups being considered, while retraining would have to be done \(e\times g\) times, where \(e\) is the number of \(\epsilon\) values and \(g\) is the number of groups being considered. Additionally, using the influence-based approach with CG or SE IHVP estimation can have a computational speed-up over the naive retraining approach when the number of training epochs \(E\) is large. However, we again note that the IHVP only has to be calculated (or estimated using CG/SE) _once_ to compute the influence of all \(\epsilon\) values (\(e\)) and group constructions being considered. On the other hand, retraining has to be performed once for every \(\epsilon\) and for every group construction. In Section V-D we clearly show the power of approximation using influence functions to save computational time even when small models are considered.
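As an illustration of the CG alternative, the sketch below solves a damped linear system for the IHVP using only Hessian-vector products, via scipy; the damping term and function names are our additions, and `hvp` is any assumed callable returning \(Hu\).

```
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def ihvp_cg(hvp, v, dim, damp=1e-3):
    # Solve (H + damp * I) s = v by conjugate gradient, so the full
    # Hessian is never formed or inverted explicitly.
    op = LinearOperator((dim, dim), matvec=lambda u: hvp(u) + damp * u)
    s, info = cg(op, v)
    return s
```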
#### IV-C2 Using \(\mathcal{I}_{pert,loss}^{\text{RR}}\) With Other LDP Protocols
While our analysis and construction have been based on the idea of using randomized response to perturb the features and/or label, here we describe when and how our work can be extended to other LDP protocols such as Randomized Aggregatable Privacy-Preserving Ordinal Response (RAPPOR) [15], Optimized Local Hashing (OLH) [31], Optimized Unary Encoding (OUE) [31], Binary Local Hashing (BLH) [31], and Thresholding with Histogram Encoding (THE) [31]. In Eq. 13, the influence function explicitly considers randomized response as the LDP perturbation scheme due to the scaling term \(1-\prod_{f\in\mathcal{F}}\frac{e^{\epsilon_{f}}}{d_{f}-1+e^{\epsilon_{f}}}\). However, this scaling term can be easily altered to represent other perturbation schemes like BLH and OLH. The only restrictions on which LDP perturbation methods can be analyzed using influence functions are: 1) randomization must be done at an individual record level (in order to be able to analyze how changing the record changes the model), and 2) the randomization cannot change the feature representation. For example, RAPPOR, OUE, and THE all perform one-hot encoding on the user's input before randomization. This changes the overall feature domain size, which makes it complicated to use the randomized value as input to the machine learning model trained on the non-randomized data. In these cases, if the randomized value cannot be fed into the machine learning model trained on non-randomized data (which has a smaller feature domain space), then there is no possible way to see the loss that would be produced on the randomized input. Therefore, the effect of LDP protocols such as OLH and BLH can be approximated using our proposed influence function approach, while other LDP protocols like RAPPOR, OUE, and THE would require modification of our influence-based approach since they change the feature domain size.
## V Evaluation
In this section, we present the details of our experimentation to show that the influence functions of Eqs. 13, 14, and 22 are able to properly estimate the effect that a certain \(\epsilon\) will have on model test loss. Additionally, we provide a detailed timing analysis. We test a standard range of 30 evenly spaced \(\epsilon\) values from 0.001 to 10. All experiments are run on a Tesla V100 (32GB RAM) GPU. Our code is publicly available at [https://tinyurl.com/yc2ra8m4](https://tinyurl.com/yc2ra8m4).
#### V-1 Datasets
We use three datasets in our experimentation: Adult, ACSPublicCoverage (ACSPubCov), and MNIST. Specifically, we perform normal pre-processing (e.g., drop duplicates, perform feature selection,...), use an 80/20 train/test split, and either binarize (ACSPubCov) or one-hot encode (Adult) the features. This was to aid the influence function in approximating the true loss. Additionally, we chose to use only four classes (1, 3, 7, 8) from the MNIST dataset. This was to allow faster retraining for comparison with our influence based method. Table II explains the characteristics of the datasets and results of each dataset on a logistic regression model.
#### V-2 Group Construction
For the Adult and ACSPubCov datasets we select the group \(S\) from the set of training points \(Z\) based on the _gender_ attribute, and for the MNIST dataset we randomly select one of the four labels to be the group \(S\). During experimentation, we test 10 evenly spaced values of \(k\) to use as the group size, ranging from \(1\%\) to \(30\%\) of the selected group \(S\). W.l.o.g. we set \(S_{te}=Z_{te}\). Specifically, we note that
Fig. 1: Actual difference in test loss vs. estimated difference in test loss for all \(\epsilon\) values per group size for the Adult dataset. Cooler colors depict smaller \(\epsilon\) values. Top row: no noise correction, bottom row: with forward loss correction.
in our experimentation we choose to focus on one selected group (e.g., gender) and study how varying this group's size affects the estimation ability of the influence functions. We leave further experimentation over how the selection of the group (e.g., based on age or race) affects the estimation for future work.
#### V-3 Architecture
Here, we recall that \(\mathcal{I}_{pert,loss}^{RR}\) is the general influence function for estimating the effect that applying randomized response-based \(\epsilon\)-LDP on the features, labels, or features and labels has on the utility of the model (see Eq. 13), \(\mathcal{I}_{pert,loss}^{RR-L}\) is the influence function for estimating the effect of applying randomized response-based \(\epsilon\)-LDP on the labels only (Eq. 14), and \(\mathcal{I}_{pert,loss}^{RR-FLC}\) is the influence function for estimating the effect of applying randomized response-based \(\epsilon\)-LDP with FLC (Eq. 22). For the experiments on \(\mathcal{I}_{pert,loss}^{RR}\) and \(\mathcal{I}_{pert,loss}^{RR-L}\) we train the original models using the SGDClassifier offered in the scikit-learn Python package. In order to calculate the influence function over the various \(\epsilon\) and group sizes, the parameters learned via the SGDClassifier were loaded into a PyTorch model. After calculating the approximate effect using influence functions, the data was perturbed using randomized response and another SGDClassifier was trained over the randomized data. For the experiments on \(\mathcal{I}_{pert,loss}^{RR-FLC}\), a PyTorch model was used for both the original and retrained model, the loss function used was the Negative Log Likelihood with log softmax to enable proper calculation of FLC, and we used a learning rate of 0.001 for the Adult and ACSPubCov datasets and 0.5 for MNIST.
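As an illustration of the pipeline just described, here is a minimal sketch (assuming a fitted binary classifier; the helper name is ours) of transferring the SGDClassifier parameters into a PyTorch linear model so that gradients and Hessians can be taken with autograd:

```python
import torch
import torch.nn as nn
from sklearn.linear_model import SGDClassifier

def to_torch_model(clf: SGDClassifier) -> nn.Module:
    """Copy the parameters of a fitted binary SGDClassifier (log loss)
    into an equivalent PyTorch linear model so that the influence
    function can be computed with autograd."""
    n_features = clf.coef_.shape[1]
    model = nn.Linear(n_features, 1)
    with torch.no_grad():
        model.weight.copy_(torch.tensor(clf.coef_, dtype=torch.float32))
        model.bias.copy_(torch.tensor(clf.intercept_, dtype=torch.float32))
    return model
```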
#### V-4 Metrics
Based on the analysis performed in [22], we use two metrics to evaluate how well influence functions can approximate the true effect \(\epsilon\) has on model performance: Spearman's rank correlation coefficient (\(\rho\)) and mean absolute error (MAE). The value of \(\rho\) tells to what degree the estimated effect \(\mathcal{I}_{pert,loss}^{\text{RR}}\) and the actual effect \(\mathcal{I}_{actual}=|\mathcal{L}(S_{te},\hat{\theta}_{S,\epsilon})-\mathcal{L}(S_{te},\hat{\theta})|\) rank subsets of points similarly, where \(\hat{\theta}_{S,\epsilon}\) are the optimal parameters of the model trained over the training set in which \(S\) has been perturbed with randomized response parameterized by \(\epsilon\). MAE gives a measurement of how far apart (on average) our estimated value \(\mathcal{I}_{pert,loss}^{\text{RR}}\) and true value \(\mathcal{I}_{actual}\) are when we apply \(\epsilon\)-LDP. We note that we run each experiment ten times and report the average result.
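A minimal sketch of how these two metrics can be computed (the helper name and array layout are our assumptions, not part of the paper's code):

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_estimates(estimated, actual):
    """estimated[i]: influence-based estimate for the i-th (eps, group
    size) setting; actual[i]: |L(S_te, theta_{S,eps}) - L(S_te, theta)|
    obtained by actually retraining."""
    rho, _ = spearmanr(estimated, actual)  # rank agreement
    mae = float(np.mean(np.abs(np.asarray(estimated) - np.asarray(actual))))
    return rho, mae
```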
### _Analysis of \(\mathcal{I}_{pert,loss}^{RR-L}\)_
For each \(\epsilon\) and group size listed previously, we test to what degree the influence function of Eq. 14 is able to estimate the true change in model test loss when perturbation via \(\epsilon\)-LDP and retraining actually occurs. We report the MAE and correlation coefficient \(\rho\) values in Table III and graph the results of the Adult dataset on the top row of Fig. 1. Fig. 1 shows that the Adult dataset has a very strong correlation (average of 0.9) between the actual change in test loss and the approximate change calculated using Eq. 14. Having a large correlation coefficient \(\rho\) means that the influence function can rank the \(\epsilon\) values (according to how they affect the test loss) similarly to how they are ranked when retraining is actually performed. Additionally, the MAE between the true and estimated test loss is relatively low (average of 0.013), especially when the group size is small. Across all group sizes, the highest MAE value is 0.029, which is a relatively good estimation of the true change in test loss. The correlation results on the ACSPubCov data are worse than on the Adult dataset (especially at small group sizes) with an average \(\rho\) of 0.523, but the MAE is better with an average of 0.002. We attribute the worse correlation results on the ACSPublicCoverage dataset to the number of \(\epsilon\) values tested. When the \(\epsilon\) range was changed to only have 20 values between 0.001 and 5, the average \(\rho\) increased to 0.615. The results on the MNIST dataset are also good with an average \(\rho\) value of 0.747 as well as a good average MAE of 0.009. We note that the lower correlation values on the MNIST dataset are due to using the SE approach to calculate the IHVP for the MNIST dataset, whereas we explicitly calculated the inverse Hessian product for the Adult and ACSPubCov datasets as the models were small enough to do so. Overall, the results show that influence functions are able to properly capture the true effect that occurs when \(\epsilon\)-LDP is applied to the labels of the
training dataset.
### _Analysis of \(\mathcal{I}_{pert,loss}^{RR-FLC}\)_
Similar to Section V-A, we test to what degree the influence function of Eq. 22 is able to estimate the true change in model test loss when perturbation via \(\epsilon\)-LDP, noise correction, and retraining actually occur. We report all results in Table III and plot the results for the Adult dataset on the second row of Fig. 1. For all datasets, the MAE is significantly smaller in comparison to \(\mathcal{I}_{pert,loss}^{\text{RR-L}}\). For example, when \(k=17.11\%\), the MAE on the Adult dataset reduces from 0.0132 to 0.0009 (a \(\sim\)93% decrease), from 0.0012 to 0.0008 on the ACSPublicCoverage dataset, and from 0.0082 to 0.0014 on the MNIST dataset. However, \(\rho\) degrades as well. This decrease in \(\rho\) after performing FLC is not surprising since by applying FLC we effectively remove the perturbations caused by \(\epsilon\)-LDP. However, \(\mathcal{I}_{pert,loss}^{\text{RR-FLC}}\) is still able to accurately predict the change in test loss with very small MAE, meaning that it remains a good approximation to actually performing retraining.
### _Analysis of \(\mathcal{I}_{pert,loss}^{RR}\)_
While most of the experimental evaluation has been performed on \(\mathcal{I}_{pert,loss}^{\text{RR-L}}\) and \(\mathcal{I}_{pert,loss}^{\text{RR-FLC}}\) to show that influence function approximation works in cases both with and without noise correction, here we briefly analyze the ability of our general influence equation in Eq. 13 to approximate the change in test loss when randomized response is performed on the features only, the labels only, or on the features and labels. We show our results on the ACSPublicCoverage dataset for a group size \(k=75\%\) in Fig. 2. When randomized response is performed on the features only, our influence based method is able to approximate the true change that would occur with an average MAE of 0.0009 and a \(\rho\) of \(0.797\). When randomized response is performed on the labels only, our influence based method is able to approximate the true change that would occur with an average MAE of 0.0407 and a \(\rho\) of 0.859. All these results point towards our formulated influence function being a good approximator for the true change in test loss.
### _Time Analysis_
To calculate all the estimated changes in test loss using the influence function approach, we have computational cost incurred by three main sources: 1) training the original model on the clean data to get the optimal parameters, 2) calculating the inverse Hessian vector product (IHVP: \(-\nabla_{\theta}\mathcal{L}(S_{te},\hat{\theta})^{T}H_{\hat{\theta}}^{-1}\)), and 3) calculating the influence function for each \(\epsilon\) and each group size. Additional computational costs will be incurred if model retraining is performed to fit a graph like those shown in Fig. 1. In Table IV, we report the average time (out of three runs) to calculate \(\mathcal{I}_{pert,loss}^{\text{RR-L}}\) using an _explicit_ calculation of IHVP and the average time to perform retraining for all different \(\epsilon\) values and group sizes. The first row shows the time to train the original model on the clean data (which is the same for both \(\mathcal{I}_{pert,loss}^{\text{RR-L}}\) and retraining). The second row shows the time to compute the IHVP (which, we note, only has to be performed _once_ regardless of how many \(\epsilon\) values or group sizes are tested). The third row shows the average time to compute the approximate or true change for one \(\epsilon\) value and group size. The fourth row shows the average time to compute the approximate or real change for a single group size. In our experimentation, we tested 30 \(\epsilon\) values per group size. Finally, the last row shows the average time to compute the approximate or real change for all 10 desired group sizes, each of which considers all 30 \(\epsilon\) values. It is clear that using an influence-based approach saves substantial time and computational resources even on small models like those used in the experimentation. We note, however, that the time to calculate \(\mathcal{I}_{pert,loss}^{\text{RR-L}}\) should not change significantly even if larger or deeper models are used. This is because when calculating the influence over a neural network or CNN model, normally only the weights of the last fully connected layer are used in the calculation [21]. On the other hand, a deeper or larger model would cause the time of model retraining to increase proportionally to the size of the model. Further, we note that using an approximation approach for the calculation of the IHVP (such as conjugate gradient or stochastic estimation) would offer additional computational time savings and we leave experimentation over these settings for future work.
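To make the explicit-IHVP cost concrete, the following sketch computes \(-\nabla_{\theta}\mathcal{L}(S_{te},\hat{\theta})^{T}H_{\hat{\theta}}^{-1}\) for a small model with PyTorch autograd; the closures, the flat parameter vector, and the damping term are our own assumptions, not part of the paper's procedure.

```python
import torch
from torch.autograd.functional import hessian

def explicit_ihvp(train_loss, test_group_loss, theta, damping=1e-4):
    """Explicit IHVP  -grad L(S_te, theta)^T H_theta^{-1}  for a small
    model.  train_loss / test_group_loss are scalar-valued closures of
    theta; theta is a flat 1-D tensor with requires_grad=True.
    The damping term is our own safeguard against a singular Hessian."""
    H = hessian(train_loss, theta)                             # (d, d) training Hessian
    g = torch.autograd.grad(test_group_loss(theta), theta)[0]  # (d,) test-group gradient
    H = H + damping * torch.eye(H.shape[0])
    return -torch.linalg.solve(H, g)  # = -H^{-1} g (H is symmetric, so this
                                      # is the transpose of the row vector above)
```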
Due to space constraints, our timing analysis focuses on the label perturbation case only. However, we note that the computation time for the general case (\(\mathcal{I}_{pert,loss}^{\text{RR}}\)) is similar to that of the label-only case, especially when all the features selected to be perturbed have small domains. When they have large feature domains, the computational cost increases slightly due to the requirement of calculating the potential loss under all the different possible feature combinations (see Eq. 13). \(\mathcal{I}_{pert,loss}^{\text{RR-FLC}}\) also incurs minor additional costs (which scale with the domain size of the label) due to the required matrix multiplication to correct the loss (i.e., \(\textbf{P}^{T}h(x^{i};\hat{\theta})\)). We leave the timing analysis over \(\mathcal{I}_{pert,loss}^{\text{RR}}\) and \(\mathcal{I}_{pert,loss}^{\text{RR-FLC}}\) for future work.
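For illustration, here is a sketch of the forward-loss-correction step \(\textbf{P}^{T}h(x^{i};\hat{\theta})\) under label randomized response, assuming \(\textbf{P}\) is the \(d\times d\) randomized-response transition matrix on the labels (the function name is ours):

```python
import numpy as np

def flc_loss(probs, y_observed, eps, d):
    """Forward loss correction: replace h(x; theta) by P^T h(x; theta)
    before taking the negative log likelihood, where P is the
    randomized-response transition matrix on the d labels."""
    p_keep = np.exp(eps) / (d - 1 + np.exp(eps))
    p_flip = 1.0 / (d - 1 + np.exp(eps))
    P = np.full((d, d), p_flip)
    np.fill_diagonal(P, p_keep)      # P[i, j] = Pr[observe j | true label i]
    corrected = P.T @ probs          # distribution over the *observed* label
    return -np.log(corrected[y_observed])
```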
### _Discussion_
While in our experimentation we performed retraining of the model for every \(\epsilon\) and group size under consideration, this was to show the ability of the influence function to approximate the true change and is not necessary in practice. In cases where noise correction is applied, the MAE is small and the calculated influence itself serves as a good approximation to the
true change that would occur in the test loss under randomized response-based \(\epsilon\)-LDP. In cases where noise correction is not applied, the MAE is larger and not as good of a representation of the true change. However, the correlation between the estimated and true change is strong. This means that we can plot a graph similar to Fig. 1 to derive what the true change in test loss would be when only given the approximate change. To generate such a graph, a few \(\epsilon\) values can be selected to perform model retraining for, and then a line can be fit to the resulting values. Additionally, in our experimentation, we set \(S_{te}=Z_{te}\). However, when the test set is large this can cause the computation time of the influence to increase. To shorten the computation time, a random sample can be taken from \(Z_{te}\) to be used as \(S_{te}\) without significantly affecting the ability of the influence function to approximate the true change in test loss.
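A sketch of this calibration step follows; the anchor values below are hypothetical, purely to show the mechanics:

```python
import numpy as np

# Influence estimates and retrained ("true") changes for a few anchor
# epsilon values (hypothetical numbers, purely illustrative):
est_anchor = np.array([0.004, 0.011, 0.019])
true_anchor = np.array([0.006, 0.015, 0.027])

slope, intercept = np.polyfit(est_anchor, true_anchor, deg=1)

def predicted_true_change(influence_estimate):
    """Map an influence estimate to a predicted true change in test loss."""
    return slope * influence_estimate + intercept
```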
## VI Conclusion
In this work, we propose an approach based on influence functions to estimate the effect that a chosen \(\epsilon\) in randomized response-based \(\epsilon\)-LDP has on model utility loss. We show that our method is able to accurately approximate the true change that would occur if the data (e.g., features, labels, or features and labels) were actually perturbed, in cases both with and without noise correction applied, and the model was retrained. Further, we show that our method can offer significant computational speed-up over the naive retraining approach. Our future work includes considering how the choice of \(\epsilon\) affects other metrics such as the final fairness of the model.
## Acknowledgements
This work was supported in part by NSF 1920920 and 1946391.
|
2309.10037 | A stabilizer code model with non-invertible symmetries: Strange
fractons, confinement, and non-commutative and non-Abelian fusion rules | We introduce a stabilizer code model with a qutrit at every edge on a square
lattice and with non-invertible plaquette operators. The degeneracy of the
ground state is topological as in the toric code, and it also has the usual
deconfined excitations consisting of pairs of electric and magnetic charges.
However, there are novel types of confined fractonic excitations composed of a
cluster of adjacent faces with vanishing flux. They manifest confinement, and
even larger configurations of these fractons are fully immobile although they
acquire emergent internal degrees of freedom. Deconfined excitations change
their nature in the presence of these fractonic defects. For instance, fractonic
defects can absorb magnetic charges making magnetic monopoles exist while
electric charges acquire restricted mobility. Furthermore, some generalized
symmetries can annihilate any ground state and also the full sector of fully
mobile excitations. All these properties can be captured via a novel type of
\textit{non-commutative} and \textit{non-Abelian} fusion category in which the
product is associative but does not commute, and can be expressed as a sum of
(operator) equivalence classes. Generalized non-invertible symmetries give rise
to the feature that the fusion products form a non-unital category without a
proper identity. We show that a variant of this model features a deconfined
fracton liquid phase and a phase where the dual (magnetic) strings have
condensed. | Tanay Kibe, Ayan Mukhopadhyay, Pramod Padmanabhan | 2023-09-18T18:00:04Z | http://arxiv.org/abs/2309.10037v4 | A stabilizer code model with non-invertible symmetries: Strange fractons, confinement, and non-commutative and non-Abelian fusion rules
###### Abstract
We introduce a stabilizer code model with a qutrit at every edge on a square lattice and with non-invertible plaquette operators. The degeneracy of the ground state is topological as in the toric code, and it also has the usual deconfined excitations consisting of pairs of electric and magnetic charges. However, there are novel types of confined fractonic excitations composed of a cluster of adjacent faces (defects) with vanishing flux. They manifest confinement, and even larger configurations of these fractons are fully immobile although they acquire emergent internal degrees of freedom. Deconfined excitations change their nature in the presence of these fractonic defects. For instance, a magnetic monopole can exist anywhere on the lattice exterior to a fractonic defect cluster while electric charges acquire restricted mobility. These imply that our model featuring fractons is neither of type I, nor of type II. Furthermore, local operators which are symmetries can annihilate any ground state and also the full sector of states which can decay to a ground state under local perturbations. All these properties can be captured via a novel type of _non-commutative_ and _non-Abelian_ fusion category in which the product is associative but does not commute, and can be expressed as a sum of (operator) equivalence classes which includes that of the _zero_ operator. We introduce many other variants of this model and discuss their relevance in quantum field theory.
###### Contents
* I Introduction and Summary of Results
* II The model
* A Ground states have topological degeneracy
* II. |
2309.09566 | Synchronous orders on the set of integers | A binary relation over a free monoid is synchronous if it can be recognized
by a synchronous automaton that reads its two tapes simultaneously. We consider
the case where the free monoid is generated by a single element (which makes it
isomorphic to the additive monoid of integers) and where the binary relation
recognized is a strict order. Our main results are: given such an automaton it
is possible to determine whether or not it has infinite chains or antichains;
we characterize the orders that are linear; given two linear synchronous orders
we show how to determine whether or not they are equivalent. | Christian Choffrut | 2023-09-18T08:20:57Z | http://arxiv.org/abs/2309.09566v2 | # Synchronous orders on the set of integers
One may ask whether, given a synchronous automaton that recognizes an ordering, it is decidable whether this ordering has infinite chains, infinite antichains, and questions of the like. I suspected that this is not the case, but I started to work in the special case where the alphabet has a unique letter. I could prove that these questions are decidable, I characterized the orders that are linear, and I showed that the equivalence of two linear orders is decidable. I posted it on arXiv on September 19. Then I thought that I should enrich my introduction... and found out that these were old results. I did not change the rest of my manuscript except for the bibliography.
###### Abstract
A binary relation over a free monoid is synchronous if it can be recognized by a synchronous automaton that reads its two tapes simultaneously. We consider the case where the free monoid is generated by a single element (which makes it isomorphic to the additive monoid of integers) and where the binary relation recognized is a strict order. Our main results are: given such an automaton it is possible to determine whether or not is has infinite chains or antichains; we characterize the orders that are linear; given two linear synchronous orders we show how to determine whether or not they are equivalent.
## 1 Introduction
Let \(\mathcal{A}\) be a structure over some domain \(D\) with a collection of relations. It is _automatic_ if there is some finite alphabet such that via an appropriate encoding, \(D\) maps into a regular language. Furthermore each \(n\)-ary relation can be encoded into a synchronous finite automaton, i.e., an \(n\)-tape automaton reading the \(n\) tapes simultaneously. Consequently, the first order theory of these structures is decidable. There is a general agreement to date the coining of the term to [7] but the systematic study of discrete groups via automatic and semiautomatic structures [5] is often ignored. Most of the literature in this area consists of inquiring which structures have or do not have an automatic presentation (see, e.g., [6] for a good account).
Here our purpose is the opposite. Instead of starting with structures we start with automata. More specifically, considering synchronous automata over a unary alphabet, i.e., over the set of natural integers \(\mathbb{N}\), what kind of order structures can they possibly represent? We show that given such an automaton we can determine whether or not the order has infinite chains or antichains. We are able to characterize the linear orders that are representable and show that given two synchronous automata representing linear orders, it is decidable whether or not these orders are equivalent. These
questions can be posed for nonunary synchronous automata but I am not aware of any published results in the general case. Concerning nonunary alphabets, the situation is to be compared with that of relations defined by \(n\)-tape automata which process \(n\) tapes from left to right but which are not constrained to read them simultaneously. The family thus obtained, the _rational relations_ in the terminology of [2] or the relations defined by generalized automata in [4], see also [14], is much richer than the family of synchronous relations. In particular, the most basic properties such as reflexivity, antisymmetry and transitivity are undecidable for this class, [8, pages 56-57]. This makes the decidability of the questions tackled in this paper for synchronous relations over nonunary alphabets more challenging.
## 2 Preliminaries
### Synchronous relations
With every \(n\)-tuple of words \((x_{1},\ldots,x_{n})\) of a finite alphabet \(\Sigma\) we associate the \(n\)-tuple obtained by padding to the right of each component as few occurrences of a new symbol \(\#\) as possible in such a way that all components have the same length, e.g., \((a,aba,bb)^{\#}=(a\#\#,aba,bb\#)\). Every padded \(n\)-tuple can be considered unambiguously as an element of the free monoid generated by the finite alphabet \(\Delta=(\Sigma\cup\{\#\})^{n}\setminus\{(\#)^{n}\}\). E.g., \((a,aba,bb)^{\#}=(a\#\#,aba,bb\#)=(a,a,b)(\#,b,b)(\#,a,\#)\). Given an \(n\)-ary relation \(R\subseteq\Sigma^{n}\), we let \(R^{\#}\) denote the set \(\{(x_{1},\ldots,x_{n})^{\#}\mid(x_{1},\ldots,x_{n})\in R\}\). Then \(R\subseteq(\Sigma^{*})^{n}\) is _synchronous_ if there exists a finite automaton on the alphabet \(\Delta\) which recognizes \(R^{\#}\). This notion was introduced by Elgot and Mezei with the terminology of "FAD" relation. It was proved that these relations form a Boolean algebra, that their class is closed under composition of relations and under projection and that the emptiness problem is decidable [4, page 49].
It is proved in [3] that the set of synchronous relations on a nonunary alphabet is the set of \(n\)-tuples defined in the first order logic of \(\Sigma^{*}\) with the signature consisting of the prefix relation (the string \(x\) is a prefix of the string \(y\)), the equal length relation (\(x\) and \(y\) have equal length) and last letter unary relation (some \(a\in\Sigma\) is the last letter of the string \(x\)). However this logic fails to capture all synchronous relations in the case of a unary alphabet (see paragraph 2.2 for a suitable logic in this case).
### Synchronous automata on a unary alphabet
The free monoid over a unary alphabet is commutative and can be identified with \(\mathbb{N}\). The concatenation is written additively. Instead of the ugly padding symbol, we view the \(n\)-vectors in \(\mathbb{N}^{n}\) differently. The _support_ of a vector \(x\in\mathbb{N}^{n}\) is the subset of indices \(0<i\leq n\) such that \(x_{i}\neq 0\). With all subsets \(\emptyset\neq I\subseteq\{1,\ldots,n\}\) we associate the vector \(e_{I}\) whose \(i\)-th component is equal to \(1\) if \(i\in I\) and to \(0\) otherwise. Then every nonzero vector can be written uniquely as a sum \(e_{I_{1}}+\cdots+e_{I_{k}}\) where \(I_{1}\supseteq\cdots\supseteq I_{k}\). Reinterpreting the use of the # symbol in this particular case, a synchronous automaton on \(\mathbb{N}^{n}\) is a finite automaton on the finite alphabet \(\{e_{I}\mid\emptyset\neq I\subseteq\{1,\ldots,n\}\}\) with the condition that if \(e_{I}\) and \(e_{J}\) label two consecutive transitions, then \(I\supseteq J\).
A different proof of the following can be found in [12] where this logic is called _modular logic_. Further results on this logic can be found in [1].
**Proposition 1**.: _A relation \(R\subseteq\mathbb{N}^{n}\) is synchronous if and only if it can be defined in the first order logic of the structure \(\langle\mathbb{N};(x-y\in L)_{L\text{ regular}}\rangle\)_
Proof.: It is routine to check that all the primitive predicates define synchronous relations. Furthermore, the synchronous relations form a Boolean algebra and are closed under composition and projection, [4], so all definable relations are synchronous. We prove the converse by showing that the running of the automaton can be defined by a formula.
The set \(E=\{e_{I}\mid I\subseteq\{1,\ldots,n\}\}\) is provided with the partial ordering \(e_{I}\geq e_{J}\) if \(J\subseteq I\). Now every nonzero vector in \(\mathbb{N}^{n}\) can be uniquely expressed as \(\alpha_{1}e_{I_{1}}+\cdots+\alpha_{r}e_{I_{r}}\) where \(e_{I_{1}}>\cdots>e_{I_{r}}\), \(\alpha_{1},\cdots,\alpha_{r}\in\mathbb{N}\setminus\{0\}\).
The set of states \(Q\) of a synchronous automaton is a disjoint union of subsets \(Q_{I}\), \(I\subseteq\{1,\ldots,n\}\) where a state \(q\) belongs to \(Q_{I}\) if and only if it is the target of a transition labeled by \(e_{I}\). A generic path is of the form
\[q_{0}\xrightarrow{\alpha_{1}e_{I_{1}}}q_{1}\xrightarrow{\alpha_{2}e_{I_{2}} }\cdots q_{r-1}\xrightarrow{\alpha_{r}e_{I_{r}}}q_{r} \tag{1}\]
and the relation recognized is the union over all sequences \(\{1,\ldots,n\}\supseteq I_{1}\supset\cdots\supset I_{r-1}\supset I_{r}\) and over all sequences of states \(q_{0},q_{1},\ldots,q_{r}\) of the labels \(\alpha_{1}e_{I_{1}}+\cdots+\alpha_{r}e_{I_{r}}\). We let \(L_{k}\) denote the (regular) set of lengths of all paths from \(q_{k-1}\) to \(q_{k}\). The set of labels of a path such as (1) can be expressed by the following formula \(\bigwedge_{1\leq k\leq r}\phi_{k}\) with
\[\phi_{k}=(\bigwedge_{i,j\in I_{k}}x_{k,i}=x_{k,j})\wedge(\bigwedge_{i\in I_{k }}x_{k,i}-0\in L_{k})\]
if \(r=1\). Otherwise, with the convention \(I_{r+1}=\emptyset\)
\[\phi_{k}=(\bigwedge_{i,j\in I_{k}\setminus I_{k+1}}x_{k,i}=x_{k,j})\wedge( \bigwedge_{i\in I_{k}}x_{k,i}-x_{k-1,i}\in L_{k})\]
where \(x_{k-1,i}\) is interpreted as \(0\) if \(k=1\).
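Since a regular subset of \(\mathbb{N}\) is exactly an ultimately periodic one, the primitive predicates \(x-y\in L\) of this modular logic admit a concrete finite representation. The following minimal Python sketch (the representation by a transient, a period, a finite part and a set of residues is our own choice, not from the paper) implements such a membership test:

```python
def in_regular_unary(n, transient, period, small, residues):
    """Membership in a regular (= ultimately periodic) subset L of N,
    given its finite part `small` (values below `transient`) and the
    set of `residues` mod `period` accepted beyond the transient."""
    if n < transient:
        return n in small
    return (n - transient) % period in residues

# Example: L = {1} U {n >= 2 : n even}, i.e. transient 2, period 2.
assert in_regular_unary(1, 2, 2, {1}, {0})
assert in_regular_unary(6, 2, 2, {1}, {0})
assert not in_regular_unary(5, 2, 2, {1}, {0})
```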
### The binary case
In this paragraph, we describe explicitly the natural decomposition of the synchronous automata on \(\mathbb{N}\times\mathbb{N}\) and fix some notations so as to be able to work more easily on binary relations, which are the main object of this paper.
Given a symbol \(a\), an \(a\)-deterministic automaton over the alphabet \(\{a\}\) consists of \(n\) states \(q_{0},\ldots,q_{n-1}\) along with \(n-1\) transitions of the form \(q_{i}\xrightarrow{a}q_{i+1}\) for \(0\leq i<n-1\) and \(q_{n-1}\xrightarrow{a}q_{t}\) for some \(0\leq t<n-1\). The integer \(p=n-t\) is the _period_ and \(t\) the _transient_. The state \(q_{0}\) is the initial state of the automaton. A state \(q_{i}\) _is transient_ if \(i<t\) and _periodic_ otherwise.
The following proposition is trivial and could serve as a definition of binary synchronous automata.
**Proposition 2**.: _A binary synchronous automaton \(\mathcal{A}\) consists of a \((1,1)\)-deterministic automaton \(\mathcal{B}\) whose state set is \(\{q_{0},\ldots,q_{n-1}\}\) and for \(0\leq i<n\) a \((1,0)\)- resp. \((0,1)\)-deterministic automaton \(\mathcal{A}_{i}^{(+)}\) resp. \(\mathcal{A}_{i}^{(-)}\) satisfying the following conditions_
* _the initial state of_ \(\mathcal{A}_{i}^{(+)}\) _and_ \(\mathcal{A}_{i}^{(-)}\) _is_ \(q_{i}\)_._
* _the set of states of_ \(\mathcal{B}\) _and_ \(\mathcal{A}_{i}^{(+)}\) _for_ \(0\leq i<n\) _are disjoint except for the initial state of_ \(\mathcal{A}_{i}^{(+)}\)_. The same holds with_ \(\mathcal{B}\) _and_ \(\mathcal{A}_{i}^{(-)}\) _for_ \(0\leq i<n\)_. The set of states of_ \(\mathcal{A}_{i}^{(+)}\) _and_ \(\mathcal{A}_{j}^{(-)}\) _for_ \(i\neq j\) _are disjoint._
* _a state of_ \(\mathcal{A}\) _is final if and only if it is a final state of some_ \(\mathcal{A}_{i}^{(+)}\) _or_ \(\mathcal{A}_{j}^{(-)}\)_._
The relation \(R\subseteq\mathbb{N}\times\mathbb{N}\) _defined_ or _recognized_ by \(\mathcal{A}\) is the set of pairs \((k,\ell)\) which label a path from \(q_{0}\) to a final state of \(\mathcal{A}_{i}^{(+)}\) or \(\mathcal{A}_{j}^{(-)}\).
**Example 3**.: _The automata below recognize the linear orders \(\cdots<2<1<0\) and \(0<1<2<\cdots\)_
_The next automaton recognizes the linear order \(2<1<0\) (three elements)_
Strictly speaking, the labels of the nonempty paths in \(\mathcal{B}\) are pairs of the form \((k,k)\) with \(k>0\). By convention we identify \((k,k)\) with the integer \(k\). Similarly, the labels of a path in \(\mathcal{A}_{i}^{(+)}\) (resp. \(\mathcal{A}_{i}^{(-)}\)) are of the form \((k,0)\) (resp. \((0,k)\)) which we identify with the integer \(k\). As a result, an integer \(k>0\) is interpreted as \((k,k)\) in \(\mathcal{B}\), \((k,0)\) in \(\mathcal{A}_{i}^{(+)}\) and \((0,k)\) in \(\mathcal{A}_{i}^{(-)}\). We let \(\lambda_{i}\) and \(\theta_{i}\) denote the transition functions of the subautomata \(\mathcal{A}_{i}^{(+)}\) and \(\mathcal{A}_{i}^{(-)}\). Thus we write \(\lambda_{i}(q_{i},k)=r\) and \(\theta_{i}(q_{i},\ell)=s\) with the obvious meaning. The transition on \(\mathcal{B}\) is simply denoted \(q\cdot k\) for all integers \(k\) and all states \(q\) of \(\mathcal{B}\). By abuse of notation, for all \(i\geq n\) we let \(q_{i}\) be the state \(q_{j}\) where \(n-p\leq j<n\) and \(j\equiv i\pmod{p}\). With these notations we have \(R(k,\ell)\) if and only if
\[\begin{array}{l}\mbox{if $k<\ell$ and $q_{0}\cdot k=q_{i}$ then $\theta_{i}(q_{i},\ell-k)$ is final in $\mathcal{A}_{i}^{(-)}$}\\ \mbox{if $k>\ell$ and $q_{0}\cdot\ell=q_{i}$ then $\lambda_{i}(q_{i},k-\ell)$ is final in $\mathcal{A}_{i}^{(+)}$}\end{array} \tag{2}\]
Figure 2: the relation \(2<1<0\)
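The membership test of statement (2) is straightforward to implement once \(\mathcal{B}\) is given by its transient \(t\) and period \(p\) and each \(\mathcal{A}_{i}^{(\pm)}\) by the set of lengths it accepts, an ultimately periodic set here passed as a predicate. A minimal Python sketch (the data layout is our own, not from the paper), with Example 3 as a test case:

```python
def state_after(k, t, p):
    """Index i such that q_0 . k = q_i in the diagonal automaton B
    (t transient states, period p, n = t + p states in total)."""
    return k if k < t + p else t + (k - t) % p

def in_R(k, l, t, p, plus_acc, minus_acc):
    """Membership test of statement (2): plus_acc[i] / minus_acc[i] are
    predicates giving the (ultimately periodic) sets of lengths accepted
    from q_i by A_i^{(+)} / A_i^{(-)}."""
    if k == l:
        return False  # a strict order is irreflexive
    i = state_after(min(k, l), t, p)
    return minus_acc[i](l - k) if k < l else plus_acc[i](k - l)

# Example 3 revisited: the order 0 < 1 < 2 < ... has t = 0, p = 1,
# A_0^{(-)} accepting every positive length and A_0^{(+)} accepting nothing.
R = lambda k, l: in_R(k, l, 0, 1, [lambda m: False], [lambda m: m > 0])
assert R(2, 5) and not R(5, 2)
```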
## 3 Orders
### General definitions
By an order on a set \(X\) we mean a strict partial order, i.e., a binary relation which is transitive and has no loop, that is, which satisfies the axioms
\[\forall x,y,z\ R(x,y)\wedge R(y,z)\to R(x,z)\quad\neg\exists x,y\ R(x,y)\wedge R( y,x)\]
The _support_ of \(R\), denoted \(\mathit{supp}(R)\), is the subset \(\{x\in X\mid\exists y\ (x,y)\in R\vee(y,x)\in R\}\). The order is _complete_ if the support of \(R\) is \(X\). The relation is _linear_ if for all \(x,y\) in the support of \(R\) either \((x,y)\in R\) or \((y,x)\in R\) holds. E.g., the relation \((2n,2m)\) with \(n<m\) defines a linear order on \(\mathbb{N}\) but it is not complete.
Two ordered sets \(X\) and \(Y\) are said to have the same _order type_ if there exists a bijection \(f:X\to Y\) such that \(f\) and its inverse are monotonic. The order types of \(\mathbb{N}\), \(-\mathbb{N}\), \(\mathbb{Z}\) and \(\mathbb{Q}\) are denoted \(\omega\), \(\omega^{*}\), \(\zeta\) and \(\eta\) respectively. Given an integer \(n\in\mathbb{N}\) the linear order type of \(\{1,2,\ldots,n\}\) is denoted \(\mathbf{n}\). The reader is referred to the handbook [13] for an introduction to linear orders. The _sum_ \(R+S\) of two orders of disjoint supports is the order defined by
\[R\cup S\cup\mathrm{supp}(R)\times\mathrm{supp}(S) \tag{3}\]
The sum of two order types \(\rho\) and \(\sigma\) is the order type \(\rho+\sigma\) of the sum \(R+S\) for any \(R\) of order type \(\rho\) and \(S\) of order type \(\sigma\) of disjoint supports. The _inverse_ of \(R\) is the relation \(R^{-1}=\{(y,x)\mid(x,y)\in R\}\).
An order on \(\mathbb{N}\) is _synchronous_ if there exists a synchronous automaton on \(\mathbb{N}\) such that \(R\) is the set recognized by the automaton.
_Henceforth, all synchronous orders are orders on \(\mathbb{N}\)._
### Poor linear orderings
We find it convenient to say that a linear order type is _poor_ if it is a finite sum of order types \(\omega\), \(\omega^{*}\) and \(\mathbf{n}\) for some \(n\in\mathbb{N}\).
\[\chi_{1}+\cdots+\chi_{r}\quad\chi_{i}=\omega\ \text{or}\ \omega^{*}\ \text{or}\ \mathbf{n}\ \text{for some}\ n<\omega \tag{4}\]
This sum is _reduced_ if it is equal to \(\mathbf{0}\) or otherwise if it contains no subsums of the form \(\mathbf{0}\), \(\mathbf{n}+\mathbf{m}\), \(\mathbf{n}+\omega=\omega\) or \(\omega^{*}+\mathbf{n}=\omega^{*}\).
**Proposition 4**.: _Every poor linear order is equivalent to a unique reduced sum._
Proof.: Consider two reduced sums \(S_{1}\) and \(S_{2}\) defining the same poor order. We show that they are equal by induction on the maximum of number of summands in \(S_{1}\) and \(S_{2}\) and we assume that none of them is reduced to \(\mathbf{0}\). If the order has no minimum element then \(S_{1}=\omega^{*}+T_{1}\) and \(S_{2}=\omega^{*}+T_{2}\) then \(T_{1}\) and \(T_{2}\) are equivalent and we are done. If we have
\[S_{1} =\mathbf{n}_{1}+\chi_{1}+T_{1}\] \[S_{2} =\mathbf{n}_{2}+\chi_{2}+T_{2}\]
with \(n_{1},n_{2}\neq 0\) then \(\chi_{1}=\chi_{2}=\omega^{*}\) which implies \(n_{1}=n_{2}\), i.e., \(\chi_{1}+T_{1}\) and \(\chi_{2}+T_{2}\) are equivalent and we are done. So we assume without loss of generality \(n_{2}>0\)
\[S_{1}=\chi_{1}+T_{1}\quad S_{2}=\mathbf{n}_{2}+\chi_{2}+T_{2}\]
Then \(\chi_{1}=\omega\) because the order has a minimal element and \(\chi_{2}=\omega^{*}\) but this is impossible because \(\omega\) and \(\mathbf{n}_{2}+\omega^{*}\) are incomparable.
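The normalization behind Proposition 4 is effective. A minimal sketch (with our own encoding of the summands, not from the paper) that applies the rules \(\mathbf{0}\to\varepsilon\), \(\mathbf{n}+\mathbf{m}\to\mathbf{n+m}\), \(\mathbf{n}+\omega\to\omega\) and \(\omega^{*}+\mathbf{n}\to\omega^{*}\) in one left-to-right pass:

```python
def reduce_poor(summands):
    """Normalize a poor sum given as a list of summands: ('fin', n) for
    the finite type n, 'w' for omega, 'w*' for omega*."""
    out = []
    for s in summands:
        if s == ('fin', 0):
            continue                                   # drop 0
        if out and isinstance(s, tuple) and isinstance(out[-1], tuple):
            out[-1] = ('fin', out[-1][1] + s[1])       # n + m -> n+m
        elif out and s == 'w' and isinstance(out[-1], tuple):
            out[-1] = 'w'                              # n + w -> w
        elif out and isinstance(s, tuple) and out[-1] == 'w*':
            pass                                       # w* + n -> w*
        else:
            out.append(s)
    return out

assert reduce_poor([('fin', 2), 'w']) == ['w']          # 2 + w = w
assert reduce_poor(['w*', ('fin', 3), 'w']) == ['w*', 'w']
```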
For all subsets \(A\) of a linear order we let \(\inf(A)\) denote the greatest lower bound and \(\sup(A)\) the least upper bound of \(A\) when they exist. For two subsets \(A,B\) we write \(A\prec B\) if for all \(a\in A\) and \(b\in B\) it holds \(R(a,b)\).
**Proposition 5**.: _The order type of a linear relation is poor if and only if its support is a finite union of singletons, countable ascending and countable descending chains._
Proof.: The condition is clearly necessary. We prove that it is sufficient by induction on the number of chains and singletons.
We let \(I(X)\) denote the minimal interval containing \(X\). If the set consists of a unique singleton or of a unique ascending or descending chain then we are done. Assume that we have proved that a linear set which is a finite union of singletons and infinite ascending and descending chains is a disjoint union \(A=\bigcup_{i=1}^{r}A_{i}\) where for all \(0<i<r\) we have \(I(A_{i})\prec I(A_{i+1})\). Since poor linear orderings are closed under inverse ordering it suffices to prove that the union of \(A\) with a subset \(B\), which is either a singleton or of order type \(\omega\), has an expression of the form (4). We consider the case where \(B\) is a singleton. If it belongs to some \(I(A_{i})\) then \(A_{i}\) is replaced by \(A_{i}^{\prime}=A_{i}\cup B\). The order type is unchanged if \(A_{i}\) is of type \(\omega\) or \(\omega^{*}\) or if \(B\subseteq A_{i}\) and otherwise it is changed to \(\mathbf{n}+\mathbf{1}\) if \(A_{i}\) is of type \(\mathbf{n}\) with \(n>1\). If \(B\) belongs
to no such interval then \(B\) is inserted between \(A_{i}\) and \(A_{i+1}\) or before \(A_{1}\) or after \(A_{r}\) accordingly.
Now we consider the case where \(B\) is of type \(\omega\). Every interval contains either a finite subset of \(B\) or the complement of a finite prefix. Because of the previous consideration we may assume that \(B\) is included in some \(I(A_{i})\) or in some interval separating the \(A_{i}\)'s. If \(B\prec A_{1}\) or \(A_{r}\prec B\) or if \(B\) is contained in the interval separating \(A_{i}\) and \(A_{i+1}\) for some \(i=1,\ldots,r-1\) then the disjoint union is obtained by inserting \(B\) either before \(A_{1}\) of after \(A_{r}\) or between \(A_{i}\) and \(A_{i+1}\).
Assume now that \(B\subseteq I(A_{i})\), which implies in particular that \(A_{i}\) is of type \(\omega\) or \(\omega^{*}\). If \(A_{i}\) is of type \(\omega\) and \(\sup(B)=\sup(A_{i})\) then \(A_{i}\cup B\) is of type \(\omega\); otherwise it is of type \(\omega\cdot 2\) because
\[A_{i}\cup B=\big{(}B\cup(A_{i}\cap[\inf(A_{i}),\sup(B)])\big{)}\cup(A_{i}\cap ]\sup(B),\sup(A_{i})[\big{)}\]
There remains the case where \(A_{i}\) is of type \(\omega^{*}\). Then we have \(\inf(A_{i})<\inf(B)<\sup(B)<\sup(A_{i})\) which implies
\[\begin{array}{l}A_{i}\cup B=A_{i,1}\cup A_{i,2}\cup A_{i,3}\quad A_{i,1} \prec A_{i,2}\prec A_{i,3}\\ A_{i,1}=(]\inf(A_{i}),\inf(B)[\cap A_{i})\\ A_{i,2}=([\inf(B),\sup(B)[\cap(B\cup A_{i}))\\ A_{i,3}=[\sup(B),\sup(A_{i})]\cap A_{i}\end{array}\]
whose order type is \(\omega^{*}+\omega+\mathbf{n}\).
## 4 Synchronous orders on \(\mathbb{N}\)
We recall that unless otherwise stated we deal exclusively with synchronous relations \(R\subseteq\mathbb{N}\times\mathbb{N}\). We make technical assumptions in addition to those of Proposition 2 which simplify the proofs without losing the generality of the results. An automaton is _normal_ if it satisfies the following conditions (with the notations of Proposition 2)
* The subautomaton \(\mathcal{B}\) and all automata \(\mathcal{A}_{i}^{(+)}\) and. \(\mathcal{A}_{j}^{(-)}\) have the same period \(p\) which is greater than their transients.
* All pairs \((k,\ell)\) define exactly one path in \(\mathcal{A}\), whether successful or not.
**Proposition 6**.: _A synchronous relation \(R\) can be realized by a normal automaton. Let \(t\leq\alpha<\beta<n\) with the notations of Proposition 2. For \(R^{\epsilon}\) equal to \(R\) or its inverse \(R^{-1}\) we have_
\[(\alpha+2p,\beta)\in R^{\epsilon} \Leftrightarrow\forall k\geq 2\ (\alpha+kp,\beta)\in R^{\epsilon} \tag{5}\] \[(\alpha,\beta+p)\in R^{\epsilon} \Leftrightarrow\forall k\geq 1\ (\alpha,\beta+kp)\in R^{\epsilon}\] (6) \[\forall k<\ell (\alpha+kp,\beta+\ell p)\in R^{\epsilon} \Rightarrow(\alpha,\beta+(\ell-k)p)\in R^{\epsilon} \tag{7}\]
Proof.: It suffices to consider the case \(R^{\epsilon}=R\). We set \(\alpha=t+r,\beta=t+s,0\leq r<s<p\) and we use statement (2).
Implication 5. We have \(\alpha+2p-\beta=2p-(s-r)>p\), thus the pair \((\alpha+2p,\beta)\) takes \(q_{0}\) to \(\lambda_{\beta}(q_{\beta},2p-(s-r))\), which is a periodic state of \(\mathcal{A}_{\beta}^{(+)}\), implying that for \(k\geq 2\), \((\alpha+kp,\beta)\) takes \(q_{0}\) to \(\lambda_{\beta}(q_{\beta},kp-(s-r))=\lambda_{\beta}(q_{\beta},2p-(s-r))\).
Implication 6. We have \(\beta+p-\alpha=(s-r)+p>p\), thus the pair \((\alpha,\beta+p)\) takes \(q_{0}\) to \(\theta_{\alpha}(q_{\alpha},(s-r)+p)\), which is a periodic state of \(\mathcal{A}_{\alpha}^{(-)}\), implying that for \(k\geq 1\), \((\alpha,\beta+kp)\) takes \(q_{0}\) to \(\theta_{\alpha}(q_{\alpha},(s-r)+kp)=\theta_{\alpha}(q_{\alpha},(s-r)+p)\).
Implication 7. We have \(q_{0}\cdot\alpha=q_{0}\cdot(\alpha+kp)=q_{\alpha}\) and \(\beta+\ell p-(\alpha+kp)=(\ell-k)p+(s-r)=\beta+(\ell-k)p-\alpha\). Thus \((\alpha+kp,\beta+\ell p)\) and \((\alpha,\beta+(\ell-k)p)\) both take \(q_{0}\) to \(\theta_{\alpha}(q_{\alpha},(\ell-k)p+(s-r))\), which proves the claim.
### Composition
The following is a simple way to define a synchronous relation equivalent to a given one; the arity is arbitrary and the relation is not necessarily an order.
**Lemma 7**.: _If \(R\subseteq\mathbb{N}^{n}\) is synchronous then for all integers \(m>r\geq 0\) the relation \(R_{m,r}\) defined by \((mx_{1}+r,\ldots,mx_{n}+r)\in R_{m,r}\) if and only if \((x_{1},\ldots,x_{n})\in R\) is also synchronous._
_If \(n=2\) and \(R\) defines an ordering, then \(R_{m,r}\) defines an equivalent ordering._
Proof.: Let \(\mathcal{A}\) be a synchronous automaton recognizing \(R\). The relation \(R_{m,0}\) is recognized by the automaton obtained from \(\mathcal{A}\) by replacing each transition \(q\xrightarrow{x}q^{\prime}\) where \(x\in\{0,1\}^{n}\) by a sequence of \(m\) transitions
\[q\xrightarrow{x}s_{1}\cdots s_{m-1}\xrightarrow{x}q^{\prime}\]
where the \(s_{i}\) are fresh state symbols. If \(\phi(x_{1},\ldots,x_{n})\) defines the relation \(R_{m,0}\) then the relation \(R_{m,r}\) is defined by the formula
\[\exists y_{1},\ldots,y_{n}\ (\bigwedge_{i=1}^{n}x_{i}=y_{i}+r\ \wedge\phi(y_{1}, \ldots,y_{n}))\]
We can construct new relations with the help of the previous lemma.
**Proposition 8**.: _If \(R,R^{\prime}\subseteq\mathbb{N}^{2}\) are two synchronous orders of types \(\rho,\rho^{\prime}\) then the sum \(R+R^{\prime}\) is a synchronous relation of order type \(\rho+\rho^{\prime}\)._
Proof.: It suffices to verify that \(R+R^{\prime}\) is synchronous, but this results from the expression
\[R+R^{\prime}=R\cup R^{\prime}\cup\mathrm{supp}(R)\times\mathrm{supp}(R^{ \prime})\]
### Characterization of synchronous linear orders
It is clear that \(\omega,\omega^{*}\) and \(\mathbf{n}\) are synchronous (e.g., see Figure 1 for orders of type \(\omega^{*}\) and \(\omega\), and Figure 2 for an order of type \(\mathbf{3}\)). By Proposition 8 all linear relations of poor type are synchronous. The converse holds but we need two preliminary elementary results.
**Lemma 9**.: _Let \(E\subseteq\mathbb{N}\) be a regular subset. The linear relation \(\{(a,b)\mid a<b,\ a,b\in E\}\) is synchronous. Similarly for the linear order \(\{(a,b)\mid a>b,\ a,b\in E\}\)._
Proof.: Using Proposition 1, the trace on \(E\) of the natural order over \(\mathbb{N}\) is defined by the formula
\[x-0\in E\wedge y-0\in E\wedge x-y>0\]
**Corollary 10**.: _Let \(R\) be synchronous and assume that the complement \(E\) of its support is infinite. There exists a synchronous relation \(S\) of support \(E\) and of order type \(\omega\) (resp. \(\omega^{*}\)) such that \(R+S\) is complete. If \(\tau\) is the order type of \(R\), then the order type of \(R+S\) is \(\tau+\omega\) (resp. \(\tau+\omega^{*}\))._
**Lemma 11**.: _Let \(R\) be a synchronous relation. If \(\text{supp}(R)\) has a finite complement, there exists a complete synchronous order which has the same order type as \(R\)._
Proof.: Indeed, assume \(a\not\in\text{supp}(R)\). Define \(f:\mathbb{N}\to\mathbb{N}\) by \(f(k)=k\) if \(k<a\) and \(f(k)=k-1\) if \(k>a\). Consider the relation \(R^{\prime}=\{(f(k),f(\ell))\mid(k,\ell)\in R\}\). The relation is an order and it is synchronous because it is the composition of three synchronous relations \(f^{-1}\circ R\circ f\). Furthermore \(f\) and \(f^{-1}\) are monotone. If the complement of the support of \(R\) contains \(b\) elements, it suffices to apply this construction \(b\) times.
**Theorem 12**.: _A countable linear relation on \(\mathbb{N}\) is synchronous if and only if it is of poor order type, e.g., of the form 4._
Proof.: The condition is sufficient by Proposition 8. We prove that it is necessary. By lemma 11 and corollary 10 we may assume that the relation is complete since for any order type \(\tau\), \(\tau\) is a poor linear order if and only if \(\tau+\omega\) is a poor linear order.
We write \(n\prec m\) if \(R(n,m)\) holds and reserve the expression \(n<m\) for the usual order in the integers. The expression \(n\succ m\) is equivalent to \(m\prec n\). For some \(t\leq\alpha<t+p\) consider the set of inputs greater than or equal to \(t\) that are congruent to \(\alpha\) modulo \(p\). I claim that we have
\[\alpha\prec\alpha+p\prec\cdots\prec\alpha+kp\prec\cdots\text{ or} \tag{8}\] \[\alpha\succ\alpha+p\succ\cdots\succ\alpha+kp\succ\cdots \tag{9}\]
Since the order is total we have \(\alpha\prec\alpha+p\) or \(\alpha\succ\alpha+p\). In the first case we have \(q_{0}\cdot\alpha=q_{i}\) and \(\theta_{i}(q_{i},p)=r\). But then \(q_{0}\cdot(\alpha+p)=q_{i}\) and \(\theta_{i}(q_{i},p)=r\), i.e., \(\alpha+p\prec\alpha+2p\). More generally we have \(\alpha+kp\prec\alpha+(k+1)p\) which proves the claim. The second case can be treated similarly. Since the relation is a finite union of singletons and countable chains as above we may conclude by applying Proposition 5.
### Decision issues on unary synchronous orders
By Proposition 1 every synchronous relation is Presburger definable. Consequently, given a binary synchronous relation \(R\subseteq\mathbb{N}\times\mathbb{N}\), it is decidable whether or not it is an order (resp. a linear order), and whether or not for a given integer \(n\) it has a chain (resp. an antichain) of size less than or equal to \(n\),
whether or not it is \(N\)-free, etc. Here we tackle problems that do not seem to be expressible in Presburger theory.
**Proposition 13**.: _It is decidable whether or not a synchronous order has an infinite chain, resp. an infinite antichain._
_If there is no infinite antichain, the lengths of the antichains are bounded by \(2n+2\) where \(n\) is the number of states of the subautomaton \(\mathcal{B}\)._
Proof.: We may assume that the relation is complete. Indeed, let \(E\) be the complement of the support of \(R\). Then the synchronous relation \(R\cup\operatorname{supp}(R)\times E\) is complete and it has infinite chains if and only if so does \(R\).
We use the notations introduced in paragraph 2.3 and in particular we let \(t\) be the transient of the subautomaton \(\mathcal{B}\). Since a descending chain is ascending for the inverse ordering, and since the inverse of a synchronous order on \(\mathbb{N}\) is also synchronous, it suffices to consider ascending chains. We claim that there exists an infinite chain if and only if some \(\mathcal{A}_{i}^{(-)}\) for \(t\leq i<n\) recognizes \(p\) (the common period), i.e., \(\theta_{i}(q_{i},p)\) is final. The condition is clearly sufficient: in this case, consider some integer \(k\) taking \(q_{0}\) to \(q_{i}\) in \(\mathcal{B}\). Then \(q_{0}\cdot(k+p)=q_{i}\) and thus \(k\prec k+p\prec k+2p\cdots\). Conversely, if there exists an infinite chain, there exists an infinite chain that is increasing for the natural order on \(\mathbb{N}\). Then there exist \(t\leq k<\ell\) such that \(\ell-k\) is a multiple of \(p\), say \(\alpha p\). Let \(q_{0}\cdot k=q_{i}\). Then \(k\prec\ell\) implies that \(\theta_{i}(q_{i},\alpha p)\) is a final state of \(\mathcal{A}_{i}^{(-)}\), thus \(\theta_{i}(q_{i},p)\) is final.
We now turn to antichains. By Lemma 11 and Corollary 10 we may assume that the relation is complete. We claim that there exists an infinite antichain if and only if there exists an integer \(i\) such that \(\lambda_{i}(q_{i},p)\) and \(\theta_{i}(q_{i},p)\) are nonfinal in \(\mathcal{A}_{i}^{(+)}\) and \(\mathcal{A}_{i}^{(-)}\) respectively. The condition is sufficient. Indeed, because \(R\) is complete, any two elements \(k<\ell\) taking \(q_{0}\) to \(q_{i}\) differ by a multiple of \(p\); since these states are nonfinal, \((k,\ell)\not\in R\) and \((\ell,k)\not\in R\), so the infinitely many elements taking \(q_{0}\) to \(q_{i}\) are pairwise incomparable. Conversely, if there exist infinitely many pairwise incomparable elements, there exist infinitely many elements greater than \(t\) taking \(q_{0}\) to the same \(q_{i}\). For two such elements \(k\) and \(\ell\), we have \((k,\ell)\not\in R\) and \((\ell,k)\not\in R\), so \(\lambda_{i}(q_{i},p)\) and \(\theta_{i}(q_{i},p)\) are nonfinal in \(\mathcal{A}_{i}^{(+)}\) and \(\mathcal{A}_{i}^{(-)}\) respectively.
Now, suppose there is no infinite antichain. We claim that an antichain has less than \(2n+2\) elements. Indeed, in an antichain containing \(2n+2\) elements there are two elements \(k<\ell\) such that \(q_{0}\cdot k=q_{0}\cdot\ell\) is not a transient state, and thus \(\ell-k\) is a multiple of \(p\). Then \(\lambda_{i}(q_{i},p)\) and \(\theta_{i}(q_{i},p)\) are nonfinal in \(\mathcal{A}_{i}^{(+)}\) and \(\mathcal{A}_{i}^{(-)}\) respectively,
but then the previous discussion shows that there exists an antichain of infinite length, a contradiction.
Observe that infinite antichains exist, e.g., \(\{(2n,2n+1)\mid n\in\mathbb{N}\}\), so the question of an upper bound on antichains makes sense. There is also a departure from orders on \(\mathbb{N}\times\mathbb{N}\): the product of the usual order on \(\mathbb{N}\) has no infinite antichain but has antichains of arbitrary length.
**Proposition 14**.: _Given two synchronous automata defining linear orders, it is decidable whether or not these orders are equivalent._
Proof.: We assume first that the orders are complete. By Proposition 4 it suffices to prove that we can effectively associate with a synchronous automaton defining a linear order on \(\mathbb{N}\) an expression of the form 4. Using the notation of Theorem 12, the set \(\mathbb{N}\) is the finite and disjoint union of the singletons \(\{0,\ldots,t-1\}\) and ascending and descending chains 8 and 9. By proceeding as in Proposition 5 it suffices to relate them pairwise.
Consider a chain \(C_{\alpha}\) as in 8 or 9 (with \(t\leq\alpha<n\)) and an arbitrary \(\gamma\in\mathbb{N}\). Let \(k\) be the least integer such that \(\alpha+kp>\gamma\). Then
\[\begin{array}{l}\alpha+(k+1)p\succ\gamma\Leftrightarrow\alpha+\ell p \succ\gamma\text{ for all }\ell\geq k+1\\ \alpha+(k+1)p\prec\gamma\Leftrightarrow\alpha+\ell p\prec\gamma\text{ for all }\ell\geq k+1\end{array} \tag{10}\]
This implies that it is possible to determine the integer \(r\) such that \(\gamma\) lies between \(\alpha+rp\) and \(\alpha+(r+1)p\) and solves the problem of how a singleton and a chain relate.
We turn to the problem of determining how two chains \(C_{\alpha}:\alpha,\alpha+p,\cdots\) and \(C_{\beta}:\beta,\beta+p,\cdots\) relate. We assume \(t\leq\alpha<\beta<n\). Because it is possible to determine how an element relates to a chain, it suffices to compare two tails of \(C_{\alpha}\) and \(C_{\beta}\) from which at most finitely many initial elements are missing. We use the equivalences 5, 6 and 7.
Case 1. Two ascending chains
\[C_{\alpha}:\alpha\prec\alpha+p\prec\cdots\text{ and }C_{\beta}:\beta\prec \beta+p\prec\cdots\]
Case 1.1. \(\beta\succ\alpha+2p\) implies \(C_{\beta}\succ C_{\alpha}\setminus\{\alpha,\alpha+p\}\) by 5.
Case 1.2. \(\alpha\prec\beta+p\) and \(\beta\prec\alpha+2p\) imply, because of 5 and 6,
\[\alpha\prec\beta+p\prec\alpha+3p\prec\beta+4p\prec\alpha+6p\prec\cdots\]
which is an interleaving of \(C_{\alpha}\) and \(C_{\beta}\).
Case 1.3. \(\alpha\succ\beta+p\) implies \(C_{\alpha}\succ C_{\beta}\setminus\{\beta\}\) by 6.
Case 2. Two descending chains
\[C_{\alpha}:\alpha\succ\alpha+p\succ\cdots\text{ and }C_{\beta}:\beta\succ \beta+p\succ\cdots\]
Case 2.1. \(\alpha\prec\beta+p\) implies \(C_{\alpha}\prec C_{\beta}\setminus\{\beta\}\) by 6.
Case 2.2. \(\alpha\succ\beta+p\) and \(\beta\succ\alpha+2p\) imply \(\beta+p\succ\alpha+3p\succ\beta+4p\succ\alpha+6p\succ\cdots\), which is an interleaving of \(C_{\alpha}\) and \(C_{\beta}\) because of 5 and 6.
Case 2.3. \(\beta\prec\alpha+2p\) implies \(\beta\prec C_{\alpha}\), thus \(C_{\beta}\prec C_{\alpha}\setminus\{\alpha,\alpha+p\}\) by 6.
Case 3. An ascending and a descending chain
\[C_{\alpha}:\alpha\prec\alpha+p\prec\cdots\text{ and }C_{\beta}:\beta\succ \beta+p\succ\cdots\]
Case 3.1. \(\beta\prec\alpha+2p\), which yields \(C_{\beta}\prec C_{\alpha}\setminus\{\alpha,\alpha+p\}\) because of 5.
Case 3.2. \(\alpha\prec\beta+p\) and \(\beta\succ\alpha+2p\) implies \(\alpha\prec C_{\beta}\setminus\{\beta\}\) and \(\beta\succ C_{\alpha}\setminus\{\alpha,\alpha+p\}\). Assume \(\beta+kp\prec\alpha+\ell p\). Since \(C_{\beta}\) is decreasing we may assume \(k-\ell\geq 1\), thus \(\beta+(k-\ell)p\prec\alpha\) because of 7 which contradicts \(\alpha\prec C_{\beta}\). Thus \(C_{\alpha}\prec C_{\beta}\).
Case 3.3. \(\beta+p\prec\alpha\) implies \(C_{\beta}\setminus\{\beta\}\prec C_{\alpha}\) because of 6.
Case 4. A descending and an ascending chain
\[C_{\alpha}:\alpha\succ\alpha+p\succ\cdots\text{ and }C_{\beta}:\beta \prec\beta+p\prec\cdots\]
Case 4.1. \(\beta\succ\alpha+2p\) implies \(C_{\beta}\succ C_{\alpha}\setminus\{\alpha,\alpha+p\}\) because of 5.
Case 4.2. \(\alpha\prec\beta+p\) implies \(C_{\alpha}\prec C_{\beta}\setminus\{\beta\}\) because of 6.
Case 4.3. \(\alpha\succ\beta+p\) and \(\beta\prec\alpha+2p\). Thus \(\alpha\succ C_{\beta}\setminus\{\beta\}\) and \(\beta\prec C_{\alpha}\setminus\{\alpha,\alpha+p\}\). Assume \(\alpha+kp\prec\beta+\ell p\) with \(k<\ell\) without loss of generality. Then \(\alpha\prec\beta+(\ell-k)p\), contradicting \(\alpha\succ C_{\beta}\); thus \(C_{\beta}\prec C_{\alpha}\).
Now we consider the general case where \(\prec_{1}\) and \(\prec_{2}\) are two not necessarily complete orders of types \(\tau_{1}\) and \(\tau_{2}\). Set \(E_{1}=\text{supp}(\prec_{1})\), \(E_{2}=\text{supp}(\prec_{2})\). If the two orders have the same type, then either they both have a maximal element or they both have no maximal element. This property can be verified
because an order has a maximal element if and only if the complement of the following set is nonempty.
\[\{k+\ell\mid\ell\in\mathcal{A}_{k}^{(+)}\}\cup\{k\mid\exists\ell\in\mathcal{A}_{ k}^{(-)}\}\]
By Lemma 11 we may assume that \(E_{1}\) and \(E_{2}\) are infinite. If the two orders have a maximal element then, using Lemma 9, we can complete the two orders and obtain orders of types \(\tau_{1}+\omega^{*}\) and \(\tau_{2}+\omega^{*}\). Then \(\tau_{1}+\omega^{*}=\tau_{2}+\omega^{*}\) if and only if \(\tau_{1}=\tau_{2}\). Similarly, if the two orders have no maximal element, we can complete the two orders and obtain orders of types \(\tau_{1}+\omega\) and \(\tau_{2}+\omega\). Then \(\tau_{1}+\omega=\tau_{2}+\omega\) if and only if \(\tau_{1}=\tau_{2}\).
|
2310.20097 | A note on the indivisibility of the Henson graphs | We show that in contrast to the Rado graph, the Henson graphs are not
computably indivisible. | Kenneth Gill | 2023-10-31T00:21:25Z | http://arxiv.org/abs/2310.20097v1 | # A note on the indivisibility of the Henson graphs
###### Abstract.
We show that in contrast to the Rado graph, the Henson graphs are not computably indivisible.
This work is part of the author's Ph.D. dissertation at Penn State University [1].
## 1. Introduction
The Rado graph is, up to graph isomorphism, the unique countable undirected graph that satisfies the following property: if \(A\) and \(B\) are any finite disjoint sets of vertices, there is a vertex not in \(A\) or \(B\) which is connected to every member of \(A\) and to no member of \(B\). The Rado graph is homogeneous and universal for the class of finite graphs.
Our interest here lies with the closely related family of _Henson graphs_, introduced by C. Ward Henson in 1971 [1]. For each \(n\geq 3\), the Henson graph \(H_{n}\) is up to isomorphism the unique countable graph which satisfies the following property analogous to that characterizing the Rado graph: for any finite disjoint sets of vertices \(A\) and \(B\), if \(A\) does not contain a copy of \(K_{n-1}\), then there is a vertex \(x\notin A\cup B\) connected to every member of \(A\) and to no member of \(B\). (Here we write \(K_{m}\) for the complete graph on \(m\) vertices.) The graph \(H_{n}\) is homogeneous and universal for the class of \(K_{n}\)-free finite graphs.
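To illustrate the extension property on finite approximations, here is a small Python sketch; the representation of a graph as a dict of neighbor sets and the function names are our own choices, not part of the paper.

```
# A sketch of the defining extension property of H_n on a finite graph:
# k_free tests that the induced subgraph on S contains no K_m, and witness
# searches for a vertex outside A ∪ B joined to all of A and none of B.

from itertools import combinations

def k_free(S, adj, m):
    return not any(all(v in adj[u] for u, v in combinations(c, 2))
                   for c in combinations(S, m))

def witness(adj, A, B):
    return next((x for x in adj if x not in A | B
                 and A <= adj[x] and not (B & adj[x])), None)
```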
We presume familiarity with the basic terminology of computable structure theory, as for example in the first chapter of [13]. A structure \(\mathcal{S}\) is said to be _indivisible_ if for any presentation \(\mathcal{A}\) of \(\mathcal{S}\) and any coloring \(c\) of \(\operatorname{dom}\mathcal{A}\) with finite range, there is a monochromatic subset of \(\operatorname{dom}\mathcal{A}\) which induces a substructure isomorphic to \(\mathcal{S}\). We call the monochromatic subset in question a _homogeneous set_ for \(c\). \(\mathcal{S}\) is _computably indivisible_ if there is a homogeneous set computable from \(\mathcal{A}\) and \(c\), for any presentation \(\mathcal{A}\) and coloring \(c\) of \(\operatorname{dom}\mathcal{A}\).
For the rest of the paper, we fix a computable presentation of \(H_{n}\) with domain \(\mathbb{N}\) and thus focus only on the coloring. Viewed as a structure in the language of a single binary relation, the Rado graph is known to be indivisible, and computably so (folklore). Each of the Henson graphs is also indivisible. Henson himself proved that a weak form of indivisibility holds for each \(H_{n}\). Full indivisibility was first shown for \(n=3\) by Komjath and Rodl [10], and then for all \(n\) by El-Zahar and Sauer [1]. (A clarified and corrected version of the proof of Komjath and Rodl can be found in [1].) Work on the Ramsey theory of the Henson graphs has progressed beyond vertex colorings; recently, Natasha Dobrinen has undertaken a deep study of the structure of \(H_{n}\) and shown that for each \(n\), \(H_{n}\) has finite big Ramsey degrees, developing many novel techniques in the process [1, 2].
Our far more modest result concerns only vertex colorings and states that unlike the Rado graph, none of the Henson graphs is computably indivisible:
**Theorem 1**.: _For every \(n\geq 3\), there is a computable 2-coloring of \(H_{n}\) with no c.e. homogeneous set._
This theorem naturally raises the question of how complicated a homogeneous set for a coloring of \(H_{n}\) can or must be. An analysis of the proof of the indivisibility of \(H_{3}\) by Komjath and Rodl in [10] demonstrates that a homogeneous set can always be computed in the first jump of the coloring. For \(H_{n}\) in general, the proof of El-Zahar and Sauer in [1] shows that the \((2n-3)\)rd jump of a coloring suffices to compute a homogeneous set. The latter is a strictly worse upper bound for \(n=3\), and it is currently unknown whether a similar discrepancy exists for any \(n\geq 4\). Where vertex colorings of \(H_{n}\) fall on the spectrum of coding vs. cone avoidance is another intriguing question.
## 2. Proof of the theorem
Write \(x\in G\), for a graph \(G\), to mean \(x\) is a vertex of \(G\). By abuse of notation, if \(V\subset G\) is any set of vertices, we will identify \(V\) with the induced subgraph of \(G\) on \(V\). Furthermore, we always identify natural numbers with the elements of \(H_{n}\) they encode via our fixed computable presentation of \(H_{n}\), and sets of naturals with the corresponding induced subgraphs of \(H_{n}\). If \(A=\{a_{1}<\cdots<a_{n}\}\) and \(B=\{b_{1}<\cdots<b_{n}\}\) are two sets of vertices in a graph \(G\), write \(A\simeq^{*}B\) if the map \(a_{i}\mapsto b_{i}\) is an isomorphism of induced subgraphs. If the vertices of \(G\) are given some linear ordering, denote by \(G\,|\,m\) the induced subgraph of \(G\) on its first \(m\) vertices. If \(x\in G\), let \(G(x)\) denote the induced subgraph of \(G\) consisting of the neighbors of \(x\). A set of the form \(G(x)\) is referred to as a "neighbor set". Let \(\mathscr{T}_{n}\) be the set of finite \(K_{n}\)-free simple connected graphs.
We will need two lemmas. The first is a consequence of the following theorem of Jon Folkman, which appears as Theorem 2 in [12]. For a graph \(G\), let \(\delta(G)\) be the largest \(n\) such that \(G\) contains a subgraph isomorphic to \(K_{n}\).
**Theorem 2** (Folkman).: _For each \(k>0\) and finite graph \(F\), there is a finite graph \(G\) such that_
1. \(\delta(G)=\delta(F)\)_, and_
2. _for any partition of the vertices of_ \(G\) _as_ \(G_{1}\sqcup\cdots\sqcup G_{k}\)_, there is an_ \(i\) _such that_ \(G_{i}\) _contains a subgraph isomorphic to_ \(F\)_._
Part (a) implies that \(G\) is \(K_{n}\)-free if \(F\) is.
**Lemma 3**.: _For each \(n\) and \(k\), there is a \(G\in\mathscr{T}_{n}\) which is not an induced subgraph of \(\bigcup_{i=1}^{k}H_{n}(x_{i})\) for any vertices \(x_{1},\ldots,x_{k}\in H_{n}\). In particular, no finite union of neighbor sets in \(H_{n}\) can contain an isomorphic copy of \(H_{n}\)._
Proof.: By applying Theorem 2 with \(F=K_{n-1}\), there is a \(K_{n}\)-free \(G\) such that for every partition of \(G\) into \(k\) sets, at least one set contains a \(K_{n-1}\). Since a neighbor set in \(H_{n}\) cannot contain a \(K_{n-1}\), this means that \(G\) is not contained in any union of \(k\) neighbor sets.
Note that the graph \(G\) can be found computably from \(n\) and \(k\) by a brute-force search over finite graphs.
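The test performed during that search can be sketched as follows in Python, under our own representation of graphs as dicts of neighbor sets; the enumeration of candidate graphs is omitted, and the search is of course exponential.

```
# A brute-force sketch of the test behind the search: a candidate graph is a
# witness for (n, k) if it is K_n-free and every k-partition of its vertices
# leaves a copy of K_{n-1} inside some part.

from itertools import combinations, product

def has_clique(vertices, adj, size):
    return any(all(v in adj[u] for u, v in combinations(c, 2))
               for c in combinations(vertices, size))

def is_folkman_witness(adj, n, k):
    V = list(adj)
    if has_clique(V, adj, n):        # must remain K_n-free
        return False
    for colors in product(range(k), repeat=len(V)):   # all k-partitions
        parts = [[v for v, c in zip(V, colors) if c == i] for i in range(k)]
        if not any(has_clique(p, adj, n - 1) for p in parts):
            return False             # this partition avoids K_{n-1} everywhere
    return True
```

The next fact is a restatement of Lemma 1 of [1]: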
**Lemma 4** (El-Zahar & Sauer).: _Let \(\Delta\) be a finite induced subgraph of \(H_{n}\) with \(d\) vertices. Let \(\Gamma\) be any member of \(\mathscr{T}_{n}\) with \(d+1\) vertices put in increasing order such that \(\Delta\simeq^{*}\Gamma\,|\,d\). Then there are infinitely many choices of \(x\in H_{n}\) such that \(\Delta\cup\{x\}\simeq^{*}\Gamma\)._
Proof of Theorem 1.: The proof is by a finite injury priority argument. We build a computable \(c\colon H_{n}\to 2\), viewing \(2\) as the set \(\{R,B\}\) (red and blue), to meet requirements
\[R_{e}\colon\,(|W_{e}|=\infty\wedge|c(W_{e})|=1)\implies\text{Lemma 4 fails if $H_{n}$ is replaced with $W_{e}\subset H_{n}$.}\]
These are given the priority order \(R_{0}>R_{1}>R_{2}>\cdots\). We also define a computable function \(p\) in stages, where \(p(x,s)\) is the planned color of vertex \(x\) at stage \(s\), beginning with \(p(x,0)=R\)
(red) for all \(x\). This function will be used to keep track of vertices which requirements "reserve" to be a certain color. Only one vertex will actually be colored at each stage, starting with \(c(0)=R\).
A requirement \(R_{e}\) is said to be active at stage \(s\) if \(e\leq s\) and \(W_{e,s}\) contains at least one element that was enumerated after the most recent stage in which \(R_{e}\) was injured (to be explained below). If \(R_{e}\) was never injured, we say it is active simply if \(W_{e,s}\neq\emptyset\). Each requirement \(R_{e}\) will amass a finite list of vertices \(\{x_{e}^{1},x_{e}^{2},\ldots,x_{e}^{\ell}\}\) in \(W_{e}\) as its followers, together with a target graph \(\Gamma_{e}\) (also explained below). When a follower \(x_{e}^{m}\) is added, \(R_{e}\) will set the function \(p(x,s)\) for some vertices \(x\in H_{n}(x_{e}^{m})\); we say \(R_{e}\)_reserves_\(x\) when it sets \(p(x,s)\). Weaker requirements cannot reserve vertices which are currently reserved by stronger requirements. The followers, target graph, and reservations of \(R_{e}\) are canceled when \(R_{e}\) is injured by a stronger requirement. (Canceling a reserved vertex just means the vertex is no longer considered to be reserved by \(R_{e}\), and does not change the values of \(p\) or \(c\).) We may as well assume each \(W_{e}\) is monochromatic, and will refer to \(R_{e}\) as either a red or blue requirement accordingly.
We now detail the construction, and afterwards show that all requirements are injured at most finitely often and are met. First, if no \(R_{e}\) is active at stage \(s+1\) for \(e\leq s\), set \(p(x,s+1)=p(x,s)\) for all \(x\), set \(c(s+1)=p(s+1,s+1)\), and end the stage. If a requirement \(R_{e}\) is already active at stage \(s+1\) and has no follower, give it a follower \(x_{e}^{1}\) which is any element of \(W_{e,s}\) that was enumerated after the stage in which \(R_{e}\) was last injured, or otherwise any element of \(W_{e,s}\) if \(R_{e}\) was never injured. Then for every \(y\in H_{n}(x_{e}^{1})\) which is not currently reserved by a stronger requirement and has not yet been colored, reserve \(y\) by setting \(p(y,s+1)\) to be the opposite color as \(c(x_{e}^{1})\).
If \(R_{e}\) is active and has a follower at stage \(s+1\) but no target graph, let its target graph be some \(\Gamma_{e}\in\mathscr{T}_{n}\) which cannot be contained in \(k+1\) neighbor sets, where \(k\) is the total number of all followers of stronger currently active requirements. Such a \(\Gamma_{e}\) may be furnished by Lemma 3. Order \(\Gamma_{e}\) in such a way that each vertex (except the first) is connected to at least one previous vertex.
Next, suppose that at least one requirement is active and has a follower and target graph at stage \(s+1\). Go through the following procedure for each such \(R_{e}\) in order from strongest to weakest. Let \(m\) be the number of followers of \(R_{e}\) at stage \(s\); we will at this point have \(\{x_{e}^{1},\ldots,x_{e}^{m}\}\simeq^{*}\Gamma_{e}\upharpoonright m\). Suppose there is some \(x\in W_{e,s+1}\) with \(x\) greater than the stage at which \(x_{e}^{m}\) was enumerated into \(W_{e}\), and such that \(\{x_{e}^{1},\ldots,x_{e}^{m},x\}\simeq^{*}\Gamma_{e}\upharpoonright(m+1)\). If so, then give \(R_{e}\) the new follower \(x_{e}^{m+1}=x\), and for all \(y\in H_{n}(x_{e}^{m+1})\) with \(y>s+1\) such that \(y\) is not currently reserved by any stronger requirement, have \(R_{e}\) reserve \(p(y,s+1)=R\) if \(R_{e}\) is blue, or \(p(y,s+1)=B\) if \(R_{e}\) is red. Injure all weaker requirements by canceling their followers, target graphs, and reservations. After this is done for all active \(R_{e}\), end the stage by making \(p(z,s+1)=p(z,s)\) for any \(z\) for which \(p(z,\cdot)\) was not modified earlier in the stage, and then letting \(c(s+1)=p(s+1,s+1)\). If instead no \(x\) as above was found for any active \(R_{e}\), then set \(p(x,s+1)=p(x,s)\) for all \(x\), set \(c(s+1)=p(s+1,s+1)\), and end the stage. This completes the construction.
Each requirement only need accumulate a finite list of followers, so in particular \(R_{0}\) will only injure other requirements finitely many times. After the last time a requirement is injured, it only injures weaker requirements finitely often, so inductively we have that every requirement is only injured finitely many times before acquiring its final list of followers and target graph. And each requirement is satisfied: suppose (without loss of generality) \(R_{e}\) is blue. For each \(i\geq 2\), the vertex \(x_{e}^{i}\) is an element of \(H_{n}(x_{e}^{j})\) for some \(j<i\), by assumption on how we have ordered \(\Gamma_{e}\). If \(x_{e}^{j}\) was enumerated into \(W_{e}\) at stage \(s\), then when this \(x_{e}^{j}\) was chosen as a follower, \(R_{e}\) reserved every element of \(H_{n}(x_{e}^{j})\) greater than \(s\) by making its planned color red--except
for those vertices which were already reserved (to be blue) by stronger (red) requirements. Therefore, if \(x_{e}^{i}\) is blue, then since in particular the construction requires \(x_{e}^{i}>s\), we must have \(x_{e}^{i}\) a neighbor of some follower of a stronger (red) requirement. (We asked for \(x_{e}^{i}\) to be greater than the stage \(t\) at which \(x_{e}^{i-1}\) was enumerated. Such an \(x_{e}^{i}\) can be found for any \(t\) by Lemma 4.) So this copy we are building of \(\Gamma_{e}\) inside \(W_{e}\) is contained entirely in a union of neighbor sets of followers of stronger active requirements, except possibly for \(x_{e}^{1}\) which may lie outside of any such neighbor set. If \(R_{e}\) is never injured again, then the number \(k\) of such followers never changes again; it is the same as it was when the target graph \(\Gamma_{e}\) was chosen not to fit inside \(k+1\) neighbor sets. The latter number is large enough to also cover \(x_{e}^{1}\), so that this copy of \(\Gamma_{e}\) can never be completed inside \(W_{e}\), implying Lemma 4 fails in \(W_{e}\).
**Acknowledgements:** This research was supported in part by NSF grant DMS-1854107. I am extremely grateful to my thesis advisors Linda Brown Westrick and Jan Reimann for their invaluable help, and also to Peter Cholak for his comments on an earlier version of the proof of Theorem 1.
|
2310.00347 | Unlocking Bias Detection: Leveraging Transformer-Based Models for
Content Analysis | Bias detection in text is crucial for combating the spread of negative
stereotypes, misinformation, and biased decision-making. Traditional language
models frequently face challenges in generalizing beyond their training data
and are typically designed for a single task, often focusing on bias detection
at the sentence level. To address this, we present the Contextualized
Bi-Directional Dual Transformer (CBDT) classifier.
This model combines two complementary transformer networks: the Context
Transformer and the Entity Transformer, with a focus on improving bias
detection capabilities. We have prepared a dataset specifically for training
these models to identify and locate biases in texts. Our evaluations across
various datasets demonstrate CBDT's effectiveness in
distinguishing biased narratives from neutral ones and identifying specific
biased terms. This work paves the way for applying the CBDT
model in various linguistic and cultural contexts, enhancing its utility in
bias detection efforts. We also make the annotated dataset available for
research purposes. | Shaina Raza, Oluwanifemi Bamgbose, Veronica Chatrath, Shardul Ghuge, Yan Sidyakin, Abdullah Y Muaad | 2023-09-30T12:06:04Z | http://arxiv.org/abs/2310.00347v3 | # Unlocking Bias Detection: Leveraging Transformer-Based Models for Content Analysis
###### Abstract
Bias detection in text is imperative due to its role in reinforcing negative stereotypes, disseminating misinformation, and influencing decisions. Current language models often fall short in generalizing beyond their training sets. In response, we introduce the Contextualized Bi-Directional Dual Transformer (CBDT) Classifier. This novel architecture utilizes two synergistic transformer networks: the Context Transformer and the Entity Transformer, aiming for enhanced bias detection. Our dataset preparation follows the FAIR principles, ensuring ethical data usage. Through rigorous testing on various datasets, CBDT showcases its ability in distinguishing biased from neutral statements, while also pinpointing exact biased lexemes. Our approach outperforms existing methods, achieving a 2-4% increase over benchmark performances. This opens avenues for adapting the CBDT model across diverse linguistic and cultural landscapes.
**Keywords:** Language Models, News Biases, Bias Identification, Evaluations
## 1 Introduction
As Natural Language Processing (NLP) rapidly evolves, its significance extends well beyond text analysis. NLP influences diverse sectors, ranging from social media analytics to advanced healthcare diagnostics. The pervasive reach of NLP showcases not only its achievements but also highlights vital challenges. Among these challenges, linguistic biases [1, 2], which are often embedded in both data and algorithms, are a significant concern. These biases do more than perpetuate stereotypes; they risk distorting data interpretations, affecting decision-making processes.
As seen in Figure 1, the first statement from User 2 displays a gender-based bias, while the second one reflects a religious bias. Here, "bias" refers to the predisposition or inclination, often rooted in societal stereotypes, that can unduly influence the representation or treatment of particular groups [3]. These biases are often exhibited in the tone, words or messages conveyed in textual conversations. A traditional NLP model primarily trained on political discourse may detect bias in that domain, but would fail when presented with similar bias in a health-centric context. This inconsistency points to a gap in current NLP research: the ability to generalize bias detection across diverse domains.
Figure 1: Real-world example of bias, highlighting the need for NLP solutions.
Despite significant advancements in state-of-the-art models [4; 5; 6; 1; 2; 7; 8; 9], consistent bias identification across diverse domains remains an ongoing challenge. While recent research has mainly focused on evaluating and debiasing language models (LMs) [10; 11; 12] -- a critical step toward minimizing AI risks -- the pressing need to detect biases inherent in the data itself, persists. This highlights the urgency for a holistic NLP framework that not only stands up to academic observation, but also ensures ethical and accurate data interpretation.
To address this issue, our research introduces the novel _Contextualized Bi-Directional Dual Transformer (CBDT) Classifier_. At its core, the CBDT integrates two specialized transformers [13]: the Context Transformer and the Entity Transformer. The former assesses the overall bias in an input text, while the latter focuses on semantics to identify potentially biased words and entities. While bias identification models [14; 15; 16; 17; 18; 19] are available, a significant portion of them address only a single aspect or scope of bias, and their designs are inherently single-task. In contrast, we offer a dual-structured approach to accommodate a broader range of contexts and domains. Such an approach allows the CBDT Classifier to comprehensively evaluate biases, offering a layered perspective by analyzing both overarching narrative structures and specific tokens within a text.
Our research offers the following key contributions:
1. **CBDT Classifier Design**: We introduce the _CBDT Classifier_, a novel integration of two specialized transformer architectures, the first transformer performing the contextual understanding of the biases at the sentence level and second one aimed at entity-level bias identification. This design offers a comprehensive and multi-layered analysis of textual content, skillfully tackling both bias detection and semantic interpretation.
2. **Specialized Fine-Tuning**: Building on state-of-the-art methodologies, our approach harnesses two distinct configurations of LMs. The concatenation and subsequent fine-tuning of these models supports the CBDT Classifier's proficiency in precise sequence classification and nuanced token-level categorization.
3. **Enhanced Corpus Preparation**: Recognizing the importance of data quality, we put forth a carefully curated corpus that covers both overt and subtle biases. Drawing from domain expertise, lexicon creation, and rule-based formulation, our training dataset emerges as a diverse, comprehensive, and representative collection.
4. **Rigorous Evaluation**: We extend our model's evaluation beyond our primary dataset to include evaluation on out-of-distribution datasets. This thorough approach underscores our model's robustness, flexibility, and consistent performance across a plethora of contexts.
5. **Adherence to FAIR Principles**: In a commitment to **data ethics**, our dataset is structured in line with the FAIR [20] principles, ensuring it is Findable, Accessible, Interoperable, and Reproducible. This rigorous adherence champions ethical dataset creation, utilization, and sharing.
## 2 Related Work
NLP systems have long been susceptible to the issue of bias, leading to unfair representation in their outcomes [14]. Numerous studies [21, 6, 22, 23, 9, 24] have highlighted how societal and cultural biases inadvertently enter training data. These biases can undermine the integrity of NLP outcomes, perpetuating, and at times amplifying, societal disparities [16, 22].
In a related work, a system was tasked with detecting hate speech and providing explanations [25]. Concurrently, another study explored biases in text-based event detection, addressing both data scarcity and annotation challenges [26]. The research presented in [27] investigates the relations between different forms of biases in NLP models, specifically examining bias mitigation in toxicity detection and word embeddings. This study concentrates on three social identities: race, gender, and religion, suggesting that biases can be correlated and that standalone debiasing methods may prove inadequate. Another study [28] expands the understanding of bias by predicting interpersonal group relationships, leveraging fine-grained interpersonal emotions.
There is a rich diversity in research focusing on different facets of bias in NLP. A study [28], for instance, shifted focus from mere demographic biases to predict group dynamics using emotional cues. In another work, a critical analysis [29] was conducted on gender biases in NLP studies, stressing the lack of gender-focused theories. A novel methodology [30] was introduced to counter dataset biases employing a gradient alignment strategy. These insights emphasize the need for continuous vigilance and proactive measures to ensure fairness in NLP models.
A study [31] defines and extensively evaluates how well language models grasp the semantics of four bias-related tasks: diagnosis, identification, extraction, and rephrasing. This evaluation reveals that LMs can handle these tasks to varying extents across multiple bias dimensions, including gender and political orientation. Additional research [32] introduces a cohesive framework that adeptly minimizes unwanted biases in LMs during fine-tuning for subsequent tasks without compromising performance.
A particularly notable finding was the identification of a persistent Muslim-violence bias in GPT-3 [33]. A related method [19] was proposed to identify biases at the sentence level within news articles. The work demonstrated that understanding the discourse role of a sentence and its relation with nearby sentences can reveal the ideological leanings of an author. Another study [34] utilized NLP techniques to detect language bias in letters of recommendation. The research employed methods such as sublanguage analysis, dictionary-based approach, rule-based approach, and deep learning approach to extract psycholinguistics and thematic characteristics in letters.
An interdisciplinary approach was advocated to understand bias in NLP [35]. The research emphasized identifying sociodemographic bias in various NLP architectures and the importance of interdisciplinary collaboration to bridge gaps and foster a more inclusive and accurate analysis of bias. A comprehensive taxonomy was developed [36] to help identify and mitigate relevant biases from training corpora for improved fairness in NLP systems.
Additionally, a survey [14] formally introduced the concept of bias in deep NLP, suggesting methods for its detection and rectification. Another comprehensive survey [17] delved deeper into the concepts of social bias and fairness in NLP, with the goal
of actualizing fairness in LMs. These studies underline the significance of recognizing and addressing biases in the ever-advancing domain of NLP.
While the methods previously mentioned have greatly influenced our research, our work stands distinct. Unlike prior studies, we have devised a strategy for corpus construction specifically tailored to detect biases in texts. Additionally, we employ a dual-LM approach, enabling bias identification at both the sentence and narrative levels.
## 3 Methods
Figure 2 depicts our CBDT architecture. The architecture consists of two primary components: a corpus construction pipeline and a dual-transformer structure. The latter combines the Context Transformer, which captures the contextual understanding of biases, and the Entity Transformer, aimed at entity-level identification. Both transformers are seamlessly integrated into LMs' operations. The resulting output not only offers a quantifiable measure of bias in textual data (via the bias score) but also pinpoints specific words or entities (tokens) that contribute to the detected bias, leveraging attention weights.
### Corpus Construction
The corpus construction steps are given below:
Figure 2: CBDT Classifier Architecture - A visual representation of the CBDT method’s workflow for bias detection in textual data.
#### Defining Bias Dimension
A bias dimension represents the overarching theme or category of bias. It is grounded in societal, cultural, or stereotypical perspectives leading to preferential or prejudicial views toward specific groups or individuals [36, 29]. Recognizing these dimensions is crucial for accurate identification and analysis of bias in various textual contexts.
#### Lexicon Creation and Rule Formulation
To capture these dimensions, we assembled a multi-disciplinary panel comprising two linguists and two domain specialists from the journalism and health sectors. Their collective expertise guarantees a holistic capture of potential biases originating from various dimensions. This panel carefully curated a lexicon of biased terms, phrases, and structures, specifically focusing on expressions indicative of stereotypes or manifesting explicit biases. A concise version of this lexicon is showcased in Appendix A.
#### Embedding and Bias Detection
Using BERT, we generate embeddings for every sentence in the dataset in a semi-supervised manner. When the cosine similarity between the embeddings of a predefined rule and a text surpasses a specific threshold, the text is flagged as potentially biased. The matching rule then determines its bias dimension. For increased accuracy, we also employ an exact word matching method against the bias lexicon. Sentences with matching words are flagged as potentially biased.
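A minimal sketch of this flagging step is given below, assuming `bert-base-uncased` with mean pooling and a hypothetical threshold of 0.8; the exact pooling strategy and threshold value are assumptions on our part, and the lexicon is taken to be a set of lowercase single-word entries.

```
# A sketch of the semi-supervised flagging step: a sentence is flagged when
# its embedding is close to a rule embedding, or when it contains a lexicon word.

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(text):
    batch = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state        # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)               # mean-pooled sentence vector

def flagged(sentence, rule, lexicon, threshold=0.8):
    sim = torch.cosine_similarity(embed(sentence), embed(rule), dim=0)
    return sim.item() >= threshold or any(
        w in sentence.lower().split() for w in lexicon)
```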
#### Bias Label Assignment
The "bias_label" is assigned based on a combination of lexicon matching, rule-based flagging, and BERT-based similarity measures. Sentences flagged by any of these methods are marked for manual review by domain experts.
#### Inter-annotator Agreement
All flagged content undergoes a manual review by four domain experts. Cohen's Kappa and Fleiss' Kappa are used to measure the agreement among annotators, with our rigorous validation process yielding high values of 0.85 and 0.82, respectively.
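For reference, both agreement statistics can be computed with standard libraries, as in the sketch below; the annotation values shown are hypothetical.

```
# A sketch of the agreement computation: Cohen's Kappa for a pair of
# annotators, Fleiss' Kappa for the full four-expert panel.

from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ann1 = [1, 0, 1, 1, 0]       # hypothetical binary labels from annotator 1
ann2 = [1, 0, 1, 0, 0]       # hypothetical binary labels from annotator 2
print(cohen_kappa_score(ann1, ann2))

panel = [[1, 1, 1, 0],       # rows: items, columns: the four annotators
         [0, 0, 0, 0],
         [1, 1, 0, 1],
         [1, 0, 1, 1]]
table, _ = aggregate_raters(panel)   # counts per (item, category)
print(fleiss_kappa(table))
```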
#### FAIR Scheme Adoption
Our dataset adopts the FAIR principles -- Findable, Accessible, Interoperable, and Reproducible -- as highlighted in [37]. This scheme ensures greater accessibility and usability for research purposes. We structure our data in the CONLL-2003 format [38], a widely recognized standard for token-level annotations. This format is particularly apt for highlighting spans of biased words in texts. In conjunction, our classification labels use a Binary Label Format, where they provide a binary representation, indicating the presence or absence of bias.
The pseudocode for this approach is given in Algorithm 1.
```
1:  Initialize datasets
2:  Initialize lexicon
3:  function preprocess(datasets)
4:      Clean and standardize data
5:      return consolidated_dataset
6:  end function
7:  function create_lexicon
8:      Assemble expert panel; curate biased terms
9:      return lexicon
10: end function
11: function detect_bias(consolidated_dataset, lexicon)
12:     for each sentence in consolidated_dataset do
13:         Embed using BERT; flag if cosine similarity >= threshold or word in lexicon
14:     end for
15:     return flagged_sentences
16: end function
17: function assign_bias_labels(flagged_sentences)
18:     for each sentence in flagged_sentences do
19:         if matches lexicon OR rule OR BERT similarity then
20:             Assign "bias_label", flag for manual review
21:         end if
22:     end for
23:     Review by experts; measure inter-annotator agreement
24:     return reviewed_sentences with bias_labels
25: end function
26: function adopt_FAIR_scheme(reviewed_sentences)
27:     Format according to FAIR; convert to CONLL-2003 and STS
28:     return FAIR_conformant_dataset
29: end function
30: consolidated_dataset <- preprocess(datasets)
31: lexicon <- create_lexicon
32: flagged <- detect_bias(consolidated_dataset, lexicon)
33: reviewed <- assign_bias_labels(flagged)
34: output <- adopt_FAIR_scheme(reviewed)
35: Output output
```
**Algorithm 1** Corpus Preparation and Labelling Algorithm
**Dataset Schema:** Our dataset is designed to align with binary classification format and CONLL-2003 standards. It encompasses a variety of sources, such as the BABE dataset [6] which focuses on political and news media biases. For detecting biases within health notes related to specific genders or races, we incorporated MIMIC-III clinical notes [39]. Additionally, we compiled data on climate change news and occupational reports using Google RSS feeds, with a coverage period ranging
from January 2023 to July 2023. More details of these datasets are in Table 1.
```
{ "biased_text":{ "Type":"String", "Example":"Acertaingroupisalwayslazy..." }, "bias_label":{ "Type":"Boolean", "Example":true }, "identified_biased_spans":{ "Type":"ListofStrings", "Example":["alwayslazy"] }, "bias_dimension":{ "Type":"String", "Example":"EthnicStereotyping" } }
```
Listing 1: JSON Schema for Bias Annotation
```
A O
certain O
group O
is O
always B-BIAS
lazy I-BIAS
. O
```
Listing 2: CONLL-2003 Format Example
In the binary format, the sentence is paired with a binary score indicating the presence or absence of bias. The CONLL-2003 format, on the other hand, is more granular, tagging each word/token in the sentence to signify whether it is part of a biased expression [40].
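The mapping between the two formats can be sketched as follows, under the simplifying assumption of whitespace tokenization; `to_conll` is a hypothetical helper of ours, not part of the released tooling.

```
# A sketch deriving CONLL-style B-BIAS/I-BIAS/O tags from the
# "identified_biased_spans" field of the JSON schema above.

def to_conll(text, spans):
    tokens = text.split()
    tags = ["O"] * len(tokens)
    for span in spans:
        words = span.split()
        for i in range(len(tokens) - len(words) + 1):
            if tokens[i:i + len(words)] == words:
                tags[i] = "B-BIAS"
                for j in range(1, len(words)):
                    tags[i + j] = "I-BIAS"
    return list(zip(tokens, tags))

print(to_conll("A certain group is always lazy .", ["always lazy"]))
```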
Prior to formal experimentation, we select representative excerpts from each domain, showcasing potential bias indicators like emotive language, stereotypical portrayals, or blame attribution. These extracts are presented in Appendix B.
### Contextual Bi-Directional Transformer (CBDT) method
The Contextual Bi-Directional Transformer (CBDT) offers a fresh perspective on the task of bias detection in textual data. A detailed schematic representation of the methodology is given in Figure 2. The CBDT operates in two primary stages: contextual encoding for sequence classification and entity encoding for token classification.
Given an input text, post-tokenization, it is represented as a sequence:
\[X=\{x_{1},x_{2},\ldots,x_{n}\}\]
The initial stage involves the Context Transformer, represented as \(T_{c}\). This stage processes the token sequence to determine the presence of bias, resulting in a contextual encoding, \(C\):
\[C=T_{c}(X)\]
If \(T_{c}\) identifies bias, the process moves to the Entity Transformer, represented as \(T_{e}\). This stage examines the text's semantics, highlighting tokens or entities potentially containing bias, producing an entity encoding, \(E\):
\[E=T_{e}(X)\]
The outcomes of these two stages, \(C\) and \(E\), are combined to form \(F\):
\[F=[C;E]\]
This merged representation, \(F\), is then utilized to classify the text, resulting in a bias score, \(S\), through a classifier function:
\[S=\text{Classifier}(F)\]
Moreover, the Entity Transformer generates attention weights for each token in \(X\), denoted as \(A_{e}=\{a_{1},a_{2},\ldots,a_{n}\}\). Each weight, \(a_{i}\), showcases the relative contribution of the corresponding token \(x_{i}\) to the perceived bias:
\[a_{i}\propto\text{bias contribution of }x_{i}\]
Two BERT-based uncased models serve our training needs: the Context Transformer functions as a binary classifier for bias detection, while the Entity Transformer identifies biased spans. For a given textual input, the CBDT model provides both a bias score, \(S\), and a set of attention weights, \(A_{e}\). These measures not only quantify bias but also spotlight tokens contributing to the perceived bias.
Upon integrating insights from both the contextual and entity encoding stages, the CBDT model produces a singular bias score, \(S\), for the input text. This score provides an overall bias evaluation. To transform this bias detection into a binary classification problem, a threshold \(\tau\) is set on the bias score \(S\). Texts with \(S>\tau\) are classified as "biased," and those with \(S\leq\tau\) as "not biased." In our experiments, \(\tau=0.5\) is chosen in line with standard practices in binary classification tasks [41].
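A PyTorch sketch of this dual-encoder design is given below; the mean-pooled entity encoding, the fusion head dimensions, and the dropout value are illustrative assumptions rather than the exact production configuration.

```
# A sketch of the CBDT forward pass: T_c yields the contextual encoding C,
# T_e yields token states whose pooled mean serves as E, and the classifier
# maps F = [C; E] to a bias score S in (0, 1), thresholded at tau = 0.5.

import torch
import torch.nn as nn
from transformers import AutoModel

class CBDT(nn.Module):
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.context = AutoModel.from_pretrained(name)   # T_c
        self.entity = AutoModel.from_pretrained(name)    # T_e
        hidden = self.context.config.hidden_size
        self.dropout = nn.Dropout(0.5)
        self.classifier = nn.Linear(2 * hidden, 1)

    def forward(self, input_ids, attention_mask):
        C = self.context(input_ids, attention_mask=attention_mask).pooler_output
        tokens = self.entity(input_ids, attention_mask=attention_mask).last_hidden_state
        E = tokens.mean(dim=1)
        F = torch.cat([C, E], dim=-1)        # merged representation
        return torch.sigmoid(self.classifier(self.dropout(F)))
```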
```
1:  procedure CBDT(X)
2:      Input: Sequence of tokens X = {x_1, x_2, ..., x_n}
3:      Output: Bias score S, Attention weights A_e
4:      C <- T_c(X)                       # Context Transformer
5:      if C indicates bias then
6:          E <- T_e(X)                   # Entity Transformer
7:          A_e <- Extract attention weights from T_e
8:      else
9:          E <- zero encoding            # No bias detected, entity encoding is zero
10:         A_e <- zero weights
11:     end if
12:     F <- [C; E]                       # Merge encodings
13:     return Classifier(F), A_e         # Return bias score and attention weights
14: end procedure
15: procedure TrainCBDT(D, epochs, mini_batch_size)
16:     Input: Labeled dataset D = {(X_1, y_1), ..., (X_m, y_m)}
17:     Output: Optimized parameters theta
18:     for epoch in 1, ..., epochs do
19:         for each mini-batch of (X_i, y_i) in D do
20:             S, A_e <- CBDT(X_i)
21:             Compute binary cross-entropy loss L(y_i, S)
22:             Update theta using Adam optimizer
23:         end for
24:     end for
25:     return theta
26: end procedure
```
**Algorithm 2** CBDT Bias Detection
## 4 Experimental Design
In this work, we address the following research questions:
**RQ1**: How effectively can a model detect biases in text across various domains?
**RQ2**: What are the strengths and weaknesses of these models in terms of bias type, severity, and domain specificity?
### Evaluation Metrics
We adopt several standard metrics to assess our model's capability in bias detection:
* _Accuracy_: Reflects the proportion of instances the model predicts correctly, indicating how reliably it distinguishes biased from unbiased texts.
* _Precision_: Demonstrates the model's precision in marking texts as biased, i.e., the fraction of labeled biased texts that are genuinely biased.
* _Recall_ (Sensitivity): Indicates the fraction of actual biases the model correctly detects, highlighting its thoroughness in identifying biased entities.
* _F1-score_: The harmonic mean of precision and recall, balancing the two metrics. For token classification, we consider the micro-averaged F1-score, encompassing individual token results.
* _ROC Curves_ and _AUC_: We plot the Receiver Operating Characteristic (ROC) curves to illustrate the model's distinction between true and false positives. The area under these curves (AUC) quantifies the model's proficiency, with higher values denoting better performance.
By using these metrics, we gain a multifaceted perspective on our model's performance in bias detection, evaluating the individual and combined efficacy of our transformers in detecting biases.
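These metrics can be computed with scikit-learn as in the short sketch below, on hypothetical labels and scores.

```
# A sketch of the evaluation metrics on toy binary data.

from sklearn.metrics import (accuracy_score, precision_recall_fscore_support,
                             roc_auc_score)

y_true = [1, 0, 1, 1, 0, 1]
y_score = [0.9, 0.2, 0.7, 0.4, 0.1, 0.8]        # model bias scores S
y_pred = [int(s > 0.5) for s in y_score]        # thresholded at tau = 0.5

print(accuracy_score(y_true, y_pred))
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="micro")
print(prec, rec, f1)
print(roc_auc_score(y_true, y_score))           # AUC from the raw scores
```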
### Baselines
Our approach is compared against a diverse set of baselines, which are categorized as follows:
* **Traditional Baselines:** These include Naive Bayes (NB), Support Vector Machine (SVM), and RandomForest (RF) algorithms.
* **Neural Network-Based Approaches:** In this category, we have Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Vanilla Transformer [13].
* **Advanced Models:** These comprise state-of-the-art models such as BERT [42], FastText [43], RoBERTa [44], GPT-3 (gpt-3.5-turbo) [45], and Falcon 7B [46]. GPT-3 is used in zero-shot and few-shot settings, whereas Falcon 7B is used in a zero-shot setting.
For the CBDT classifier, we use a learning rate of 0.001 with Adam, batch size 64, 20 epochs, ReLU and Softmax activation functions, dropout 0.5, and weight decay 0.0001. All the baseline methods are fine-tuned to their best hyperparameter settings. CBDT is fine-tuned for specific tasks, while Naive Bayes uses TF-IDF for text vectorization. SVM employs a linear kernel, and RandomForest uses 100 estimators. CNN relies on 1D convolutional layers, and LSTM utilizes 128 LSTM units. Vanilla Transformer has 6 encoder layers, and BERT and RoBERTa are used in their base-uncased versions and fine-tuned. FastText uses the skip-gram model, and GPT-3 is API-based with a temperature setting of 0.7 for response generation. For Falcon 7B, we use the "falcon-7b-instruct" model, quantized with the BitsAndBytes library to load parameters and activations in 4-bit format for more efficient computation. More hyperparameter details are in Appendix C.
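Wiring these stated hyperparameters together gives the following training-loop sketch, reusing the `CBDT` class from the earlier sketch in Section 3.2; `loader` is a hypothetical `DataLoader` yielding `(input_ids, attention_mask, labels)` batches of size 64.

```
# A sketch of the stated training configuration: Adam with lr 0.001 and
# weight decay 0.0001, binary cross-entropy loss, 20 epochs.

import torch
import torch.nn as nn

model = CBDT()   # from the earlier sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.BCELoss()

for epoch in range(20):
    for input_ids, attention_mask, labels in loader:
        scores = model(input_ids, attention_mask).squeeze(-1)
        loss = loss_fn(scores, labels.float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```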
### Evaluation Data
The specifics of the training dataset are found in Table 1. For our evaluations, we used a test set from our combined data, split 70-15-15. Additionally, two other out-of-distribution test sets are used to measure the models' bias-detection capabilities:
* _SemEval-2018 Valence Classification Data_[47]: Used for sequence classification, this dataset was adapted for binary bias detection. We specifically focused on the English test set with 3259 records.
* _CoNLL 2003 Shared Task Data_[38]: Applied for token classification, we utilized the test set, which covers English content with 231 articles, 3684 sentences, and 46435 tokens. The BILOU labeling scheme facilitated token-based categorization [48].
## 5 Results
In our results section, we evaluate the performance of the CBDT model across two intertwined tasks: sequence classification and token classification. For sequence classification, the Context Transformer provides a preliminary assessment, evaluating if a sequence holds any bias. If bias is detected, the Entity Transformer then delves deeper into the sequence, pinpointing specific tokens or entities that may exhibit this bias. By integrating the outputs of both transformers, the CBDT model offers a singular bias score for each sequence, providing a holistic view of potential bias presence and localization within the text. We also conduct an ablation study to underscore the significance of each module within the CBDT architecture, illustrating the model's robustness in diverse settings.
### Comparative Performance for Sequence Classification Task
Table 2 showcases the comparative performance of various models for the sequence classification task. Several key observations can be drawn from the results:
The CBDT model, which is our main contribution, consistently outperforms all the baseline models across both evaluation datasets. Specifically, on the primary test set, it achieves an F1-score of \(93.85\%\pm 0.6\%\). On the SemEval dataset, it achieves an F1-score of \(90.33\%\pm 0.8\%\). These results highlight the CBDT model's robustness and adeptness in detecting biases spanning diverse textual domains.
Traditional algorithms like RandomForest (RF), Support Vector Machine (SVM), and Naive Bayes (NB) have shown competitive results. The F1-scores for these models range between \(75.00\%\pm 1.1\%\) and \(82.90\%\pm 1.2\%\) across both datasets. However, when compared against more contemporary neural network-based architectures, their limitations become evident.
| Data Source | Domain | Train | Dev | Test |
| --- | --- | --- | --- | --- |
| BABE | News Media | 2590 | 555 | 555 |
| Google RSS | Climate Change | 1050 | 225 | 225 |
| Google RSS | Occupations | 1500 | 225 | 225 |
| MIMIC | Clinical Notes | 1400 | 300 | 300 |
| SemEval-2018 (English) | Valence Classification | – | – | 3259 |
| CoNLL 2003 (English) | Shared Task Data | – | – | 3684 |

Annotated data can be made available for research.

Table 1: Data Sources, Domains, and Splitting for Training, Development, and Testing
Within the domain of neural network-based models, LSTM and CNN showed commendable performance. Nevertheless, their performance falls short of transformer-based models such as GPT-3, RoBERTa, and CBDT. This underscores the potential and superiority of transformer-based architectures, especially in bias detection tasks. Although GPT-3's few-shot learning yields commendable outcomes, it falls short of a fine-tuned model like CBDT.
The zero-shot learning approach of Falcon 7B also exhibits decent performance, but it is not among the best-performing models. This shows that while zero-shot learning has its merits, fine-tuning or employing task-specific models like CBDT can elevate the performance ceiling.
A noticeable decrease in performance is observed across all models when transitioning from the primary test set to the SemEval dataset. This highlights the inherent challenges posed by OOD data. However, the decline in performance for the CBDT model is relatively more restrained, which hints at its superior generalization capabilities.
The accompanying standard deviations in Table 2 also indicate that the performances of the models are relatively stable across various runs. The CBDT model, in particular, showcases a lower standard deviation, pointing to its performance consistency and stability.
| Model | Precision | Recall | F1-Score |
| --- | --- | --- | --- |
| _Test Set_ | | | |
| RF | 80.10 ± 1.8 | 81.25 ± 0.2 | 80.68 ± 1.3 |
| SVM | 81.70 ± 1.6 | 82.60 ± 0.9 | 82.15 ± 1.3 |
| NB | 82.30 ± 1.4 | 83.50 ± 0.7 | 82.90 ± 1.2 |
| LSTM | 83.70 ± 1.2 | 87.20 ± 0.8 | 85.41 ± 1.0 |
| Falcon7B (Zero-shot) | 87.30 ± 1.3 | 87.80 ± 1.1 | 87.55 ± 1.2 |
| CNN | 87.50 ± 1.4 | 88.50 ± 0.9 | 88.00 ± 1.3 |
| BERT | 88.00 ± 1.0 | 88.50 ± 1.2 | 88.25 ± 0.9 |
| GPT-3 (Few-shot) | 90.80 ± 1.1 | 91.40 ± 1.0 | 91.10 ± 1.2 |
| RoBERTa | 91.20 ± 0.8 | 90.70 ± 1.0 | 90.95 ± 0.9 |
| CBDT (Our Model) | **93.40 ± 0.7** | **94.30 ± 0.8** | **93.85 ± 0.6** |
| _SemEval_ | | | |
| RF | 74.30 ± 1.0 | 75.70 ± 0.9 | 75.00 ± 1.1 |
| SVM | 75.40 ± 1.1 | 77.20 ± 0.9 | 76.30 ± 1.2 |
| NB | 76.20 ± 1.0 | 78.50 ± 0.8 | 77.35 ± 1.1 |
| LSTM | 78.00 ± 0.9 | 80.70 ± 1.1 | 79.33 ± 1.0 |
| Falcon7B (Zero-shot) | 79.00 ± 1.0 | 81.70 ± 1.2 | 80.33 ± 1.1 |
| CNN | 83.40 ± 1.1 | 82.30 ± 0.8 | 82.85 ± 1.0 |
| BERT | 84.10 ± 1.3 | 83.00 ± 0.7 | 83.55 ± 1.2 |
| GPT-3 (Few-shot) | 85.00 ± 0.9 | 87.70 ± 1.0 | 86.33 ± 0.9 |
| RoBERTa | 83.00 ± 1.2 | 86.00 ± 1.1 | 84.47 ± 0.8 |
| CBDT (Our Model) | **89.00 ± 1.0** | **91.70 ± 0.9** | **90.33 ± 0.8** |

Table 2: Comparative performance of various sequence classification models on our test set and SemEval, averaged over 5 runs, with reported standard deviations. Bold means best performance.
In conclusion, the CBDT model excels at detecting biases in sequence classification tasks, re-affirming the superior performance of transformer-based models over traditional deep neural networks or machine learning methods.
### Comparative Performance for Token Classification Task
Table 3 shows the performance of various models in token classification tasks, allowing us to derive the following insights:
Our proposed CBDT model emerges as the top-performing model for token classification on the primary test set. With an F1-score of \(92.57\%\pm 0.8\%\), it stands out, reflecting its efficiency in identifying and localizing biases at the token level within text sequences.
The LSTM and CNN, two fundamental neural network architectures, demonstrate commendable performances with F1-scores of \(80.50\%\pm 1.1\%\) and \(86.50\%\pm 1.2\%\), respectively, on the primary test set. This suggests that while traditional neural networks can effectively tackle token-level classification, more advanced architectures offer enhanced precision and recall.
The transformer-based models, particularly BERT, GPT-3, and RoBERTa, establish their superiority in the token classification realm. Their deep architectures, combined with the ability to capture long-range dependencies in text, contribute to their success. Among these, GPT-3's few-shot approach exhibits impressive results,
| Model | Precision | Recall | F1-Score |
| --- | --- | --- | --- |
| _Test Set_ | | | |
| LSTM | 79.78 ± 1.0 | 81.23 ± 0.9 | 80.50 ± 1.1 |
| Falcon7B (Zero-shot) | 86.13 ± 1.2 | 87.39 ± 1.1 | 86.76 ± 1.3 |
| CNN | 86.10 ± 1.1 | 86.90 ± 1.0 | 86.50 ± 1.2 |
| BERT | 87.00 ± 0.8 | 87.10 ± 0.7 | 87.05 ± 0.9 |
| GPT-3 (Few-shot) | 89.80 ± 0.9 | 90.70 ± 0.8 | 90.25 ± 1.0 |
| RoBERTa | 89.10 ± 1.3 | 90.18 ± 1.4 | 89.64 ± 1.5 |
| CBDT (Our Model) | **92.10 ± 0.7** | **93.04 ± 0.6** | **92.57 ± 0.8** |
| _CONLL-2003_ | | | |
| LSTM | 77.00 ± 1.2 | 79.00 ± 1.1 | 77.99 ± 1.0 |
| Falcon7B (Zero-shot) | 82.00 ± 1.3 | 81.00 ± 1.2 | 81.50 ± 1.4 |
| CNN | 78.00 ± 1.4 | 81.00 ± 1.3 | 79.47 ± 1.5 |
| BERT | 81.00 ± 1.5 | 82.30 ± 1.4 | 81.64 ± 1.3 |
| GPT-3 (Few-shot) | 82.30 ± 1.6 | 84.50 ± 1.7 | 83.39 ± 1.8 |
| RoBERTa | 83.00 ± 1.7 | 85.00 ± 1.6 | 83.99 ± 1.9 |
| CBDT (Our Model) | **87.00 ± 1.8** | **88.70 ± 1.7** | **87.84 ± 1.6** |

Table 3: Comparative performance of various token classification models on our test set and CONLL-2003, averaged over 5 runs, with reported standard deviations. Bold means best performance.
although it doesn't surpass the performance of the CBDT model. The slight advantage of CBDT over models like BERT and RoBERTa might stem from its design, specifically tailored for bias detection.
The performance drop across most models when transitioning to the CONLL-2003 dataset, an out-of-distribution dataset, underscores the inherent challenges of adapting to unseen data. However, the CBDT model again proves its mettle by achieving the highest F1-score of \(87.84\%\pm 1.6\%\), illustrating its robust generalization capabilities.
Falcon7B's zero-shot approach on the primary test set produces an F1-score of \(86.76\%\pm 1.3\%\), showcasing the potential of zero-shot learning in token classification tasks. However, when fine-tuning is applied as in GPT-3's few-shot approach, the performance further improves, underscoring the benefits of model adaptation to specific tasks.
The relatively low standard deviations reported for each model suggest their performances are stable across different runs. Particularly, the CBDT model's consistency, as indicated by its standard deviation, ensures its reliability in diverse settings.
In summary, the token classification results emphasize the CBDT model's ability to both identify and localize biases within textual sequences. The results also reinforce the assertion that transformer-based architectures, especially when tailored for specific tasks like bias detection, can significantly outperform traditional neural networks1.
Footnote 1: In the later experiments, due to better performance of these models on our test set for both tasks, we show the performance on our test set only.
The heatmap visualization, utilizing attention scores derived from selected samples, reveals varying intensities of bias within textual content, as given in Appendix D. The heatmap underscores that specific terms in our example sentences markedly indicate hate speech, toxicity, or discriminatory language, as evidenced by their higher attention scores.
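As a rough illustration of how such token-level heatmaps can be produced, the sketch below extracts attention scores from a generic encoder; the `bert-base-uncased` checkpoint and the head/position averaging scheme are placeholders, not the exact CBDT procedure.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical encoder checkpoint; the trained CBDT weights are not assumed here.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

sentence = "The radical leftists are attempting to undermine our traditional values."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Average the last layer's attention over heads, then over query positions,
# yielding one saliency score per token for a 1-D heatmap.
last_layer = outputs.attentions[-1]                 # (batch, heads, seq, seq)
token_scores = last_layer.mean(dim=1).mean(dim=1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, score in zip(tokens, token_scores.tolist()):
    print(f"{tok:>12s}  {score:.3f}")
```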
### Comparative Analysis of Different Models' Performance Over Training Epochs
The experiment depicted in Figure 3 evaluates the performance of various baseline models on our combined data's test set across a series of training epochs. The graph represents the aggregate accuracy of the models as they learn and adjust over time.
The CBDT model shows the best performance, steadily improving as training continues. By the 20th epoch, its accuracy is almost perfect, suggesting it's well-trained and potentially reaching its best performance.
Both GPT-3 (few-shot) and RoBERTa start with good accuracy and keep improving steadily over time. In contrast, BERT and CNN, which start with about 50% accuracy, improve at different rates. BERT improves quickly, while CNN's improvement is a bit slower. Falcon7B (zero-shot) shows steady improvement, reaching over 80% accuracy by the 20th epoch. This steady climb, especially given it's a zero-shot model, highlights its ability to learn even without specific task training. LSTM, RF, SVM, and NB start with accuracy scores between 40-50%. Of these, LSTM improves consistently, while RF, SVM, and NB improve more slowly.
Overall, we observe that the transformer models like CBDT, GPT-3, RoBERTa, and BERT start strong and improve rapidly. Traditional machine learning models like RF, SVM, and NB improve more slowly, suggesting they might struggle with complex language tasks compared to newer models. Most models show steady improvement, hinting that more training might not add much value after 20 epochs.
### Comparative Analysis of Model Proficiency in Bias Detection Across Key Dimensions
Figure 4 presents the macro-average F1 scores for various models across different demographic categories, namely Gender, Religion, Race, Age, Sexual Orientation, Disability, Nationality, and Income Level.
CBDT consistently leads, showcasing its adaptability across diverse datasets. GPT-3 and RoBERTa also perform quite well, emphasizing the capability of transformer architectures in intricate token classifications. However, BERT and CNN exhibit moderate performances with noticeable variations in categories like Age, Race, and Religion, indicating potential complexities. Falcon7B, despite its zero-shot capability, performs variably, excelling in categories like Sexual Orientation but lagging in others such as Income Level. This emphasizes that its adaptability might have boundaries. LSTM, a traditional architecture, shows low-to-medium performance, reinforcing the significance of newer architectures in complex bias detection tasks.
Figure 3: Performance trends of various models across training epochs.
Figure 4: Macro-averaged F1 scores for various models on our test set, evaluating performance across different bias dimensions.
### Threshold Analysis
We have undertaken an analysis of the CBDT model's performance across various bias score thresholds to understand the trade-offs between false positives and false negatives.
The Receiver Operating Characteristic (ROC) curve in Figure 5 provides a comprehensive view of model performances in terms of their discrimination capacities. The CBDT model shows the best performance, achieving the highest area under the curve (AUC) of 0.79, suggesting its superior ability to differentiate between true positive and false positive classifications. GPT-3 closely follows with an AUC of 0.76, indicating its good classification ability. Falcon7B and BERT further underscore the effectiveness of transformer-based models, obtaining AUCs of 0.72 and 0.70, respectively. Among the traditional architectures, LSTM and RF demonstrate comparable performances with AUCs around 0.63, while SVM and NB lag slightly behind at 0.59. This consolidated analysis shows the dominance of modern architectures, especially the CBDT model, in offering precise and reliable classifications on data.
Figure 5: Receiver Operating Characteristic (ROC) curves for various models. The area under each curve (AUC) provides a quantitative measure of each model’s capability, with higher values indicating superior performance.
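For completeness, ROC curves and AUC values of the kind shown in Figure 5 can be computed from predicted bias probabilities as sketched below; the `model_scores` mapping is a hypothetical container for each model's scores.

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

def plot_roc(y_true, model_scores):
    """model_scores: mapping of model name -> predicted bias probabilities."""
    for name, scores in model_scores.items():
        fpr, tpr, _ = roc_curve(y_true, scores)
        plt.plot(fpr, tpr, label=f"{name} (AUC = {auc(fpr, tpr):.2f})")
    plt.plot([0, 1], [0, 1], linestyle="--", color="grey")  # chance line
    plt.xlabel("False Positive Rate")
    plt.ylabel("True Positive Rate")
    plt.legend()
    plt.show()
```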
### Impact of the CBDT Model Architectural Variations
To assess our model's robustness, we made various architectural adjustments, such as changing activation functions and exploring ensemble methods. We selected these specific architecture and configuration combinations based on factors like relevance, computational feasibility, and expected performance impact. Table 4 reports the accuracy of these architectural variants.
Table 4 shows that ensemble methods yield the highest performance, specifically 97.0% accuracy, although at a high computational cost. Among individual algorithms, CBDT models with different activation functions exhibit strong performances with balanced computational requirements. Specifically, the Sigmoid activation achieves the highest accuracy at 96.2%. Traditional machine learning models like SVM and Random Forest lag behind in performance, and increasing the number of estimators in Random Forest hardly improves performance while escalating computational costs. LSTMs, especially the Bidirectional variant, present a favorable trade-off between performance and computational expense, scoring a 91.8% accuracy at moderate computational cost. Overall, the results highlight the need to carefully consider the trade-offs between performance metrics and computational demands when selecting model architectures.
### Ablation Study on Impact of Fine-Tuning and Regularization Strategies of CBDT
The experiment in Table 5 evaluates the impact of various fine-tuning and regularization strategies on the performance of the CBDT model on the combined data's test set, as measured by accuracy and computational cost.
The results in Table 5 showcase the impact of various _fine-tuning strategies_ on the CBDT model's performance. The baseline CBDT model achieves an accuracy of 95.0%. When layer-wise fine-tuning is employed, there's a noticeable improvement with a 1% increase in performance compared to the baseline, suggesting its beneficial impact. On
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Model Variations** & **Accuracy (\%)** & **Comp. Cost** \\ \hline CBDT (ReLU Activation) & 95.0 & Moderate \\ CBDT (Sigmoid Activation) & 96.2 & Moderate \\ CBDT (tanh Activation) & 94.8 & Moderate \\ \hline SVM (Linear Kernel) & 87.0 & Low \\ SVM (RBF Kernel) & 89.2 & Medium \\ \hline RandomForest (100 Estimators) & 86.5 & Medium \\ RandomForest (200 Estimators) & 86.8 & High \\ \hline LSTM (Unidirectional) & 90.0 & Moderate \\ LSTM (Bidirectional) & 91.8 & Moderate \\ \hline Ensemble of Best-Performing Models & 97.0 & High \\ CBDT + FastText Ensemble & 96.5 & High \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance Metrics for Architectural and Ensemble Variations.
the other hand, feature extraction fine-tuning offers a modest enhancement, with a 0.5% increase in accuracy relative to the baseline. This indicates that while layer-wise fine-tuning is more effective, extracting features can still provide a performance boost.
The _regularization techniques_' effects on the model are also evident from the table. Omitting dropout maintains the accuracy consistent with the baseline, hinting at the model's inherent robustness against overfitting. However, introducing a dropout rate of 0.5 slightly diminishes the performance, reducing accuracy by 0.2%. In contrast, the introduction of a weight decay of 0.01 marginally enhances the model's metrics by 0.2%, suggesting its potential as a beneficial regularization technique. Across all the strategies tested, the computational cost remains moderate, indicating that these techniques do not significantly strain computational resources.
## 6 Discussion
### Theoretical Impact
The results from our research enrich the existing body of work on content bias detection within NLP models. By highlighting the potential of deep learning neural networks and advanced structures such as transformers and LMs, our findings emphasize their role in deepening our comprehension and mitigation of biases. The theoretical implications of our work are discussed next.
The superior performance exhibited by the CBDT model, compared to its baselines, strengthens the notion that leveraging the capabilities of transformer-based models in a more stacked and hierarchical way (as we did) enhances accuracy in bias detection. The variations in the performance metrics across different models also show the complex ways in which biases penetrate and affect neural networks. Overall, our methodology, which concatenates two transformers--one for sentence-level and the other for token-level bias detection--serves as a bridge, seamlessly connecting advancements in NLP and LMs with the intricate world of social biases.
### Practical Impact
From a practical perspective, our research has several implications: Developers and practitioners gain a potent tool in the CBDT model. By incorporating it into
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Fine-Tuning \& Regularization** & **Accuracy (\%)** & **Comp. Cost** \\ \hline CBDT (Baseline) & 95.0 & Moderate \\ CBDT (Layer-wise Fine-tuning) & 96.0 & Moderate \\ CBDT (Feature Extraction Fine-tuning) & 95.5 & Moderate \\ \hline CBDT (No Dropout) & 95.0 & Moderate \\ CBDT (Dropout 0.5) & 94.8 & Moderate \\ \hline CBDT (No Weight Decay) & 95.0 & Moderate \\ CBDT (Weight Decay 0.01) & 95.2 & Moderate \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance Metrics for Fine-Tuning and Regularization Strategies.
their systems, they can achieve a more balanced and unbiased content analysis, thereby improving overall user experience.
Our findings emphasize the need for continuous assessment of machine learning models. Given the fluid nature of content and societal norms, models that remain static risk becoming obsolete or performing worse. Such outdated models can perpetuate harmful biases. Regular evaluations ensure that models stay relevant, accurate, and aligned with current perspectives.
This study also highlights the versatility and adaptability of the CBDT model. Its ability to discern biases at both the sentence and token levels makes it applicable across a wide array of NLP tasks, from sentiment analysis to content filtering. By demonstrating the efficiency of transformer-based models in bias detection, our research provides a blueprint for organizations and institutions looking to audit their AI systems for fairness and inclusivity. This is especially relevant in sectors like finance, healthcare, and public services, where biased decisions can have serious ramifications.
The detailed analyses presented offer a valuable resource for non-experts, facilitating a clearer understanding of how biases manifest in AI and the steps taken to counteract them. This transparency is crucial for fostering public trust in AI-driven systems.
### Limitations and Future Directions
While our research provides meaningful contributions, it comes with certain limitations. Firstly, our evaluation predominantly centered on English language datasets, which might not entirely reflect the biases inherent in other languages. Furthermore, our study focused primarily on detecting textual biases, which means potential biases in other mediums such as images or audio might have been overlooked. Additionally, with the ever-evolving landscape of LMs, newer developments such as LLAMA 2, GPT-4, and their subsequent iterations [49] are worth trying but were not considered in our current research.
Looking ahead, numerous promising directions are proposed for further exploration. One direction is to test the versatility of the CBDT model across diverse languages and cultural contexts, which would present a more holistic view of bias detection. Another intriguing domain is to delve into the complexities of intersectional biases [1, 29], where multiple layers of bias can intertwine and interact. Furthermore, gathering real-time user feedback can pave the way for models to continually adapt and align more closely with shifting societal norms and values [50]. Finally, it would also be worthwhile to extend this work towards debiasing content in real time.
## 7 Conclusion
The widespread utilization of machine learning and NLP models in everyday tasks underscores the importance of understanding these models' behavior. This is particularly important in relation to their handling and potential propagation of biases. Our research is centered on the detection of biased content, leveraging the capabilities of deep learning models, particularly transformer-based models, which are LMs, to identify and mitigate biases in textual content. Our findings underscore the effectiveness
of the CBDT model when compared to a variety of baseline models. We illustrate the CBDT model's proficiency in identifying biases at both the sequence and token levels. The corpus construction process must adhere to FAIR data principles, for which we provided sufficient guidelines. Furthermore, the practical implications of our discoveries extend beyond theoretical discourse, as they provide concrete solutions for developers, practitioners, and other stakeholders. While our study has yielded valuable insights, it has also shed light on areas that warrant further investigation, such as broadening the scope of evaluation to encompass diverse languages and newer models, as well as delving into intersectional biases.
Acknowledgments. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute.
## Declarations
* Funding: This research received no external funding.
* Conflict of interest: The authors declare no conflict or competing interests.
* Ethics approval: The authors declare no conflict or competing interests.
* Consent to participate: N/A
* Consent for publication: N/A
* Availability of data and materials: The MIMIC-III dataset is accessible at MIMIC-III Clinical Database v1.4 on PhysioNet. Similarly, the BABE dataset can be procured from the source. Further data, the annotation schema, and associated data can be provided upon a reasonable request from the corresponding author.
* Code availability: Code will be made available upon request from corresponding author.
* Authors' contributions: Writing - original draft, Writing - review & editing, Supervision. **Nifemi Bampsbee:** Data handling, Software, Validation, Visualization. **Veronica Chatrath:** Writing - review & editing. **Yan Sidyakin:** Data handling, Software, Validation. **Shardul Ghuge:** Methodology, Writing - review & editing. **Abdullah Y. Muad:** Formal Analysis, Software, Visualization, Writing - review & editing.
## Appendix A Bias Dimension and Biased Tokens
### Subset of Biased Lexicons
**Biased Word/Phrase:**
* illegal alien, thug, savage, barbaric
* emotional, weak, bossy, hysterical
* radical, terrorist, fanatic, heretic
* slow, old-fashioned, senile, out of touch
* unnatural, abnormal, deviant, sinful
* greedy, dishonest, shady, untrustworthy
* lazy, freeloader, bum, worthless
* unpatriotic, alien, foreigner, outsider
* crippled, handicapped, defective, impaired
* uneducated, illiterate, backward, naive
* fat, slob, whale, blob
* hoax, alarmist, tree-hugger, denier
* extremist, radical left, far-right, ideologue
* welfare queen, gold-digger, elitist, tycoon
* hillbilly, city slicker, rust belter, coastal elite
* exotic, oriental, primitive, tribal
* uncultured, barbarian, savage, civilized
* hipster, jock, nerd, geek
* frumpy, plain, flashy, ostentatious
* fragile, feeble, unfit, sturdy
## Appendix B Exploratory Analysis of Bias Indicators
**Dataset:** News media
**Extract:** "The highly controversial climate policy, supported by reckless environmentalists, is on the verge of becoming law."
**Reasons:** Uses emotive language and labels that may negatively color the reader's perception.
**Dataset:** Climate change
**Extract:** "Climate scientists warn that denying the evidence of global warming is endangering our future."
**Reasons:** Presents disagreement as denial, potentially skewing perceptions towards one viewpoint.
**Dataset:** Occupations
**Extract:** "Programmers, often glued to their screens for long hours, are prone to health issues."
**Reasons:** Stereotypes programmers and could influence negative assumptions.
**Dataset:** Clinical notes
**Extract:** "Patient with a history of noncompliance presented with severe abdominal pain."
**Reasons:** Could be seen as attributing blame to the patient for their health condition.
**Dataset:** Politics
**Extract:** "The radical leftists are attempting to undermine our traditional values."
**Reasons:** Uses charged terminology and generalizations that could lead to a polarized viewpoint.
**Dataset:** Entertainment
**Extract:** "Female actresses are often typecast into roles that don't showcase their full potential."
**Reasons:** Suggests gender-based limitations and industry bias against women.
**Dataset:** Economy
**Extract:** "Millennials are notorious for their overspending and lack of financial planning."
**Reasons:** Stereotypes a whole generation based on perceived behaviors.
## Appendix C Hyperparameter settings
In our experiments, we use a variety of models with specific hyperparameter settings. For our proposed model, CBDT, we employ a learning rate of 0.001, a batch size of 32, and run the model for 10 epochs using the Adam optimizer with L2 regularization. For traditional models such as Naive Bayes, hyperparameters like learning rate are not applicable, but we set the additive smoothing parameter, Alpha, to 1.0. In the case of SVM, we use Stochastic Gradient Descent (SGD) as the optimizer with a regularization strength \(C\) of 1.0. RandomForest is set with 100 estimators.
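As a minimal sketch, the CBDT optimizer settings above could be wired up in PyTorch as follows; the `weight_decay` value stands in for the unspecified L2 strength and `model` is assumed to be the instantiated CBDT network.

```python
import torch

# CBDT training settings from above: lr=0.001, batch size 32, 10 epochs,
# Adam with L2 regularization (weight_decay=0.01 is an assumed strength).
BATCH_SIZE, EPOCHS = 32, 10
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-2)
```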
Among neural network models, for CNNs, we set a learning rate of 0.001, a batch size of 32, and 5 epochs, optimized using SGD with a dropout rate of 0.5. For LSTMs, we use the same learning rate and batch size as the CNN but run it for 10 epochs with the Adam optimizer and a dropout rate of 0.2. Vanilla Transformers also use a learning rate of 0.001, a batch size of 32, 10 epochs, and Adam optimizer, along with L2 regularization.
For state-of-the-art models like BERT, we use a fine-tuned model with a learning rate of 2e-5, a batch size of 16, and 3 epochs. FastText runs with a learning rate of 0.01 and a batch size of 64 for 5 epochs using SGD. RoBERTa is fine-tuned with a learning rate of 1e-5, a batch size of 16, and 3 epochs using Adam as the optimizer. Lastly, GPT-3 is accessed through its API with a temperature setting of 0.7.
For Falcon 7B model, we have used max_length=300, do_sample=True, top_k=10, and num_return_sequences=1. The model has been quantized using the BitsAndBytes library to load parameters and activations in 4-bit format, enabling more efficient computation.
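A sketch of this 4-bit setup using the Hugging Face BitsAndBytes integration is given below; the prompt text and the `bnb_4bit_compute_dtype` choice are our assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit loading via the BitsAndBytes integration; the compute dtype is assumed.
bnb_config = BitsAndBytesConfig(load_in_4bit=True,
                                bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b", quantization_config=bnb_config, device_map="auto")

# Hypothetical zero-shot prompt; generation arguments mirror the values above.
prompt = "Label the following sentence as biased or unbiased: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=300, do_sample=True,
                         top_k=10, num_return_sequences=1)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```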
Regarding the hardware setup, our experiments were conducted on a server equipped with an NVIDIA Tesla V100 GPU with 32 GB memory. We utilized CUDA 11.0 and cuDNN 8.0 for GPU-accelerated operations. The server also had dual Intel Xeon Gold 6230 CPUs and 256 GB RAM, providing a robust environment for both training and inference tasks.
## Appendix D Attention Heatmap
2309.13258 | Order-preserving Consistency Regularization for Domain Adaptation and
Generalization | Deep learning models fail on cross-domain challenges if the model is
oversensitive to domain-specific attributes, e.g., lightning, background,
camera angle, etc. To alleviate this problem, data augmentation coupled with
consistency regularization are commonly adopted to make the model less
sensitive to domain-specific attributes. Consistency regularization enforces
the model to output the same representation or prediction for two views of one
image. These constraints, however, are either too strict or not
order-preserving for the classification probabilities. In this work, we propose
the Order-preserving Consistency Regularization (OCR) for cross-domain tasks.
The order-preserving property for the prediction makes the model robust to
task-irrelevant transformations. As a result, the model becomes less sensitive
to the domain-specific attributes. The comprehensive experiments show that our
method achieves clear advantages on five different cross-domain tasks. | Mengmeng Jing, Xiantong Zhen, Jingjing Li, Cees Snoek | 2023-09-23T04:45:42Z | http://arxiv.org/abs/2309.13258v1 | # Order-preserving Consistency Regularization
###### Abstract
Deep learning models fail on cross-domain challenges if the model is oversensitive to domain-specific attributes, _e.g._, lightning, background, camera angle, _etc_. To alleviate this problem, data augmentation coupled with consistency regularization are commonly adopted to make the model less sensitive to domain-specific attributes. Consistency regularization enforces the model to output the same representation or prediction for two views of one image. These constraints, however, are either too strict or not order-preserving for the classification probabilities. In this work, we propose the Order-preserving Consistency Regularization (OCR) for cross-domain tasks. The order-preserving property for the prediction makes the model robust to task-irrelevant transformations. As a result, the model becomes less sensitive to the domain-specific attributes. The comprehensive experiments show that our method achieves clear advantages on five different cross-domain tasks.
## 1 Introduction
Deep neural networks have demonstrated their power in many computer vision tasks, especially when the training and test sets follow the same distribution. However, when we deploy a model in a real-world environment, we often encounter domain shifts between the training and test sets, which reduces the expected test-set performance and makes us unable to deploy with confidence [50]. For some safety-critical applications, _e.g._, tumor recognition [22] and autonomous driving [19], a failing model is fatal.
Image data consists of a variety of attributes such as shape, color, background, texture, shooting angle, _etc_. We refer to one or more task-related attributes as _label attributes_, and the remaining irrelevant ones as _domain-specific attributes_. Wiles _et al_. [69] demonstrate the domain-specific attributes cause the distribution shifts, thus weakening the generalization of the model. Data augmentation coupled with consistency regularization is commonly employed to make a model invariant to the domain-specific attributes [68, 8, 27, 59, 4, 10, 11, 21]. Data augmentation perturbs the data so that the domain-specific information is incorporated into the perturbed image. By imposing a consistency regularization on the representations of the same image before and after perturbation, the model becomes less
Figure 1: **The required output of three consistency regularizations.** Different shapes represent different categories. For the green shades, the darker the color, the larger the classification probability. The representation-based method requires the output to be identical to the original. OCR only requires an order-preserving output and allows the output to vary. The prediction-based method is not order-preserving, which may cause the probability of the horse being classified as a mouse to exceed that of the donkey, although donkeys are obviously more similar to horses than mice.
sensitive to the domain-specific attributes.
The existing consistency regularization methods can be divided into two categories: representation-based methods [32, 61, 57] and prediction-based methods [4, 44, 71]. For the representation-based methods, the \(\ell_{1}\) or \(\ell_{2}\) loss is usually employed to enforce the model to output the same representation, even though two different views are fed into the model. This constraint, however, is too strict, which may bring difficulties to the training of the model. For example, different works on self-supervised learning [10, 11, 21] have reached a consensus that one of the representations needs to go through a non-linear prediction head before performing consistency regularization with the other. If the network model were a symmetric structure, directly imposing consistency regularization on the two representations would result in model collapse.
Alternatively, the prediction-based methods [4, 44, 71] employ the cross-entropy loss to regularize the maximum classification probability of two representations to be the same. In other words, they ignore the order of the other classes, which would reduce the discriminability of the model. For example, consider a classification problem of three classes: _horse, donkey and mouse_. As illustrated in Fig. 1, for an image of a horse, the cross-entropy loss only regularizes the maximum classification probability of two representations to be horse, but it ignores the classification probability of donkey and mouse. If the order of classification probability is horse\(>\)donkey\(>\)mouse before augmentation, it may become horse\(>\)mouse\(>\)donkey after augmentation. Although the classification results have not changed, the discrimination of the model has reduced as donkeys are obviously more similar to horses than mice.
In view of these problems, we propose Order-preserving Consistency Regularization (OCR) for cross-domain tasks. OCR is able to enhance the model's robustness to domain-specific attributes without the need for an asymmetric architecture or a stop gradient. Specifically, we compute the residual component, which is the variation in the augmented representation relative to the original representation. We postulate that if the model is robust to domain-specific attributes, the residual component should contain little or no task-related information. For example, in the classification task, when we classify the residual component, we regularize it to have the same probability of being classified into each category. In this way, the classification probabilities of the augmented representation are order-preserving compared to the original representation. As a result, the model becomes less sensitive to the domain-specific attributes. The core idea of OCR is that we allow the model to output different representations for two views of the same image, as long as the residual component contains as little task-related information as possible.
The contributions of this paper are threefold:
1. We propose Order-preserving Consistency Regularization (OCR) to enhance model robustness to domain-specific attributes. Compared with representation-based methods, OCR relaxes the constraints on model training, _i.e_., it allows the model to output different representations for two views of the same image. Compared with prediction-based method, OCR maintains the order of the classification probabilities before and after augmentation, which helps the model to be less sensitive to the domain-specific attributes.
2. We provide a theoretical analysis for OCR. We prove that the representation-based method is a special case of OCR. Moreover, OCR can reduce the mutual information between the domain-specific attributes and the label attribute.
3. We test our method on five different cross-domain vision tasks to demonstrate the effectiveness of our method. In particular, OCR helps to enhance the robustness of the model against adversarial attacks.
## 2 Related Work
**Consistency Regularization.** Consistency regularization [2, 32, 57] is a common self-supervised learning method which enforces the model to output the same prediction even when the input is perturbed. Since it can enhance the robustness of the model to domain-variant styles, it has recently been used to address cross domain challenges [8, 68]. To generate the perturbed version of the image, some methods employ adversarial training [44] or dropout [32, 61], while others add perturbations by applying heuristic data augmentations [32, 57, 5, 71], such as color jitter, Gaussian blur, rotate, cutout, _etc_. To measure the consistency, the \(\ell_{1}\) or \(\ell_{2}\) norm [32, 61, 57] are adopted. Given the images of the original version and the perturbed version, some methods [71, 57, 8] employ the same model to extract representations for the two images, and then impose consistency regularization. We believe that this strategy is too strict, thereby increasing the difficulty of model training.
To solve this problem, some works [10, 11, 21] have designed an asymmetric architecture for the model, where one representation needs to go through an additional non-linear layer, which makes two images go through different paths in the same model. Although effective, these methods increase the complexity of the model architecture, and often require a large number of training data to unleash their power. Another line of work feeds one of the images (usually the original version) into the running average model or past model and then applies consistency regularization [32, 61]. These methods allow two versions of the images to pass through two similar yet different models, alleviating the problem of overly strict regularization to some extent. However, these methods require the storage of multiple copies of the model,
thereby increasing the GPU memory consumption. Different from the above methods, our method does not require an asymmetric architecture, nor does it need to store multiple copies of the models. Our method allows the model to output different representations for two versions of the images, as long as the residual component obtained from these two representations does not contain task-related information.
**Domain Adaptation and Generalization.** Both domain adaptation (DA) [41, 56, 38, 15, 63, 79] and domain generalization (DG) [46, 18, 49, 45, 48] are cross-domain tasks, but their task settings are different. In the DA task, we are given a labeled source domain and an unlabeled target domain. We use the joint training of source and target samples to make the model adapt to the shifts between domains. The recent focus on privacy and copyright has given rise to a variant of the vanilla DA, _i.e_., source-free domain adaption (SFDA) [38, 72, 26, 28], where we are given a pre-trained source model but cannot directly access the source data. Based on SFDA, a new setting is proposed, namely Test-Time Adaptation (TTA) [67, 68, 8, 70]. TTA further requires that the model can adapt to the target domain in an online fashion, which is an even more challenging setting.
DG [46, 18, 49, 45, 48] trains on one or more labeled source domains to learn a model that is robust to changes in domain shifts, so that the trained model generalizes well to the (unseen) target domain. Compared to DA, DG is more difficult because during training it does not have access to (unlabeled) data from the target domain to adapt to changes in the distribution. The commonality between DA and DG is that they strive to learn the domain-invariant representations for better performance on the target domain. OCR can regularize the order of the predictions so that the model is insensitive to the domain-specific attributes, thus alleviating the domain shifts.
## 3 Methodology
**Problem Formulation.** In many computer vision challenges, be it image classification or semantic segmentation, we are given a dataset \(\mathcal{D}_{train}=\{x\in\mathcal{X}_{train},y\in\mathcal{Y}_{train}\}\) where \(\mathcal{X}_{train}\) and \(\mathcal{Y}_{train}\) are the image set and label set for training, and we need to establish the relationship between the data \(\mathcal{X}_{train}\) and the ground-truth labels \(\mathcal{Y}_{train}\). In classical Empirical Risk Minimization (ERM) [64], the training objective is to choose a hypothesis \(h:\mathcal{X}\rightarrow\mathcal{Y}\) from a pre-defined hypothesis space \(\mathcal{H}\) such that the empirical risk is minimized w.r.t. \(\mathcal{D}_{train}\): \(\inf_{h\in\mathcal{H}}\mathbb{E}_{(x,y)\sim\mathcal{D}_{train}}[\mathcal{L}(h(x),y)]\). However, when deployed to the test set \(\mathcal{D}_{test}\), the model would suffer from performance degradation since there may be domain shifts between the training set \(\mathcal{D}_{train}\) and the test set \(\mathcal{D}_{test}\), _i.e_., \(P(\mathcal{X}_{train})\neq P(\mathcal{X}_{test})\). For example, samples of the same category in the training set and the test set often have varying appearance, caused by lighting conditions, camera angle, background, _etc_. These accidental attributes are irrelevant to our task, but will cause domain shifts. To generalize well, we need to train the model to be invariant to these domain-specific attributes.
**Order-preserving Consistency Regularization.** For a global understanding, we provide the overview of our method in Fig. 2. OCR consists of three steps, _i.e_., data augmentation, residual component separation, and residual entropy maximization. Data augmentation [2, 57, 32, 44, 71] is a commonly used technique, which increases the diversity of samples and helps to improve the generalization of the model. Given a sample \(x_{o}\), we obtain its augmented version \(x_{a}\)=\(\mathcal{N}(x_{o})\) using transformations \(\mathcal{N}\). For a clearer narration, we split the hypothesis \(h\) into two parts, _i.e_., \(h\)=\(F\circ G\), where \(G\) is the backbone model and \(F\) is the classifier. We feed \(x_{o}\) and \(x_{a}\) into \(G\) to get two different
Figure 2: **Method overview**. Given the original image and its augmented counterpart, we feed them into the backbone model to obtain the original representation \(z_{o}\) and the augmented representation \(z_{a}\). Then, we compute their residual component \(z_{n}\) and feed it into the classifier to get the classification results. Finally, we maximize the entropy of the classification probabilities for the residual component to reduce the task-related information in the residual component. As a result, the model become less sensitive to the domain-specific attributes.
representations of the same sample: \(z_{o}\)=\(G(x_{o})\), \(z_{a}\)=\(G(\mathcal{N}(x_{o}))\).
We define the residual component as the variation in the augmented representation relative to the original representation. To separate the residual component, an intuitive method is to subtract the original representation from the augmented representation. In order to control the proportion of the residual component more flexibly, here we consider the following linear relations:
\[z_{a}=\lambda z_{o}+(1-\lambda)z_{n}, \tag{1}\]
where \(z_{n}\) is the residual component, \(\lambda\in(0,1)\) represents the proportion of \(z_{o}\) in \(z_{a}\). From the perspective of Occam's razor, linearity is a good inductive bias, as also used in mixup [77]. Another reason we choose the relation in Eq. (1) is that it is an invertible operation so that we can easily infer \(z_{n}\) given \(z_{a}\) and \(z_{o}\):
\[z_{n}=\frac{z_{a}-\lambda z_{o}}{1-\lambda}. \tag{2}\]
With the residual component \(z_{n}\), we try to maximize the uncertainty of \(z_{n}\)'s prediction so that it does not contain too much classification-related information. As the entropy can be regarded as the measure of the prediction uncertainty, we maximize the conditional entropy \(\mathcal{H}(y|z_{n})\) to enlarge the uncertainty of \(z_{n}\)'s prediction. Therefore, our objective is as follows:
\[\mathcal{L}_{\mathrm{OCR}}=-\mathcal{H}(y|z_{n})=-\mathcal{H}[\mathrm{Softmax }(F(z_{n}))], \tag{3}\]
where \(F(z_{n})\in\mathbb{R}^{B\times C}\) is the prediction of \(z_{n}\), \(B\) is the batch size, \(C\) is the category number. \(\mathcal{H}\) is the entropy. By minimizing Eq. (3), we regularize \(z_{n}\) to have equal probability of being classified into each category.
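In a deep learning framework, Eqs. (2) and (3) amount to only a few lines. The following is a minimal PyTorch sketch of the OCR loss, assuming `classifier` is the linear head \(F\) and `lam` is the current value of \(\lambda\):

```python
import torch
import torch.nn.functional as F

def ocr_loss(z_o, z_a, classifier, lam):
    """Order-preserving consistency regularization, Eqs. (2)-(3).

    z_o, z_a: (B, D) representations of the original / augmented views.
    classifier: the head F mapping (B, D) -> (B, C) logits.
    lam: proportion of the original representation in the augmented one.
    """
    z_n = (z_a - lam * z_o) / (1.0 - lam)           # residual component, Eq. (2)
    log_probs = F.log_softmax(classifier(z_n), dim=1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()
    return -entropy                                  # minimizing this maximizes H(y|z_n)
```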
During training, we use \(\lambda\) to control the proportion of the residual component and the original representation in the augmented representation. \(\lambda\) should change dynamically to match the process of model training. At the beginning of training, the model would be sensitive to the domain-specific attributes, so the difference between \(z_{o}\) and \(z_{a}\) would be large. Then, \(\lambda\) should be a small value so that the proportion of \(z_{o}\) in \(z_{a}\) is lower. As the training goes on, the model gradually becomes less sensitive to the domain-specific attributes, at this time, \(z_{o}\) and \(z_{a}\) would be similar and \(\lambda\) should increase to a larger value accordingly. Inspired by Ganin _et al_. [16], we adopt an annealing strategy for \(\lambda\):
\[\lambda=\lambda_{0}[1-(1+\alpha\frac{t}{T})^{-\beta}], \tag{4}\]
where \(\alpha\)=\(10\), \(\beta\)=\(0.75\), \(t\) is the current iteration number and \(T\) is the total number of iterations. \(\lambda_{0}\) is the initial value of \(\lambda\). In this way, \(\lambda\) is more likely to be sampled to a small value at the beginning of training and then gradually becomes larger as the training goes on. In the ablations we illustrate that this strategy could achieve better performance than that of a fixed \(\lambda\) value.
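The schedule in Eq. (4) can be transcribed directly; here `t` is the current iteration, `T` the total number of iterations, and `lam0` the task-dependent initial value:

```python
def annealed_lambda(t, T, lam0, alpha=10.0, beta=0.75):
    """Annealing schedule of Eq. (4): lambda starts near 0 and grows towards lam0."""
    return lam0 * (1.0 - (1.0 + alpha * t / T) ** (-beta))
```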
Now we prove three properties of the proposed OCR:
**(1) OCR is order-preserving.** In previous methods with consistency regularization, _e.g._, [8, 71, 57], the similarity between the representations and the prototype feature of a class in classifier \(F\) is computed as:
\[\hat{y}_{o}^{i}=\text{sim}(P_{i},z_{o}),\hat{y}_{a}^{i}=\text{sim}(P_{i},z_{a}), \tag{5}\]
where \(\text{sim}(\cdot)\) is the similarity function, _i.e._, the inner-product, \(P_{i}\) is the prototype feature of class \(i\), \(\hat{y}_{o}^{i}\) and \(\hat{y}_{a}^{i}\) are probabilities of representations \(z_{o}\) and \(z_{a}\) belonging to class \(i\), respectively. When substituting Eq. (1) into Eq. (5), we get:
\[\hat{y}_{a}^{i} =\text{sim}(P_{i},\lambda z_{o}+(1-\lambda)z_{n})\] \[=\lambda\text{sim}(P_{i},z_{o})+(1-\lambda)\text{sim}(P_{i},z_{n})\] \[=\lambda\hat{y}_{o}^{i}+(1-\lambda)\hat{y}_{n}^{i}. \tag{6}\]
In Eq. (3), when the conditional entropy \(\mathcal{H}(y|z_{n})\) is maximized, the residual component will have equal probability of being classified into each category, _i.e._, \(\hat{y}_{n}^{1}=\hat{y}_{n}^{2}=\cdots=\hat{y}_{n}^{C}=K\). Therefore, the relation between \(\hat{y}_{a}^{i}\) and \(\hat{y}_{o}^{i}\) is:
\[\hat{y}_{a}^{i}=\lambda\hat{y}_{o}^{i}+(1-\lambda)K=f(\hat{y}_{o}^{i};\lambda, K). \tag{7}\]
Within the same iteration, \(K\) and \(\lambda\) are two constants. Then, \(f(\hat{y}_{o}^{i};\lambda,K)\) is an order-preserving mapping, which guarantees that if \(\hat{y}_{o}^{j}>\hat{y}_{o}^{k}\), then \(\hat{y}_{a}^{j}>\hat{y}_{a}^{k}\). Therefore, OCR is order-preserving.
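A quick numeric check of Eq. (7), with made-up probabilities for the horse/donkey/mouse example, confirms that the affine map preserves the ranking:

```python
import numpy as np

# Numeric check of Eq. (7): the affine map f(y) = lam * y + (1 - lam) * K
# preserves the ranking of class probabilities for any lam in (0, 1).
y_o = np.array([0.70, 0.25, 0.05])   # horse > donkey > mouse
lam, K = 0.5, 1.0 / len(y_o)         # K: uniform residual probability
y_a = lam * y_o + (1 - lam) * K
assert (np.argsort(y_o) == np.argsort(y_a)).all()
print(y_a)                            # [0.5167 0.2917 0.1917]
```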
**(2) Representation-based consistency regularization is a special case of OCR.** Previous cross-domain methods, _e.g._, [71, 57, 8], optimize the \(\ell_{1}\) or \(\ell_{2}\) loss to impose consistency regularization between \(z_{o}\) and \(z_{a}\). We use \(\hat{z}_{n}\)=\(z_{a}-z_{o}\) to represent the unnormalized residual component, \(\Delta y^{i}\) to represent the prediction of \(\hat{z}_{n}\) belonging to class \(i\). Therefore, the objective of representation-based consistency regularization is to make \(\hat{z}_{n}\) close to the zero vector:
\[\hat{z}_{n}=\mathbf{0}\Rightarrow\text{sim}(P_{i},\hat{z}_{n})=0\Rightarrow \Delta y^{1}=\cdots=\Delta y^{C}=0. \tag{8}\]
We believe this regularization is too strict and may increase the training difficulty. It is very reasonable for the model to output different representations for different images. The goal of our method is _not_ to enforce \(\hat{z}_{n}\) to be close to the zero vector, but to make \(\hat{z}_{n}\) contain no task-related information. Our method relaxes the constraint in Eq. (8) as:
\[\Delta y^{1}=\Delta y^{2}=\cdots=\Delta y^{C}. \tag{9}\]
Obviously, \(\hat{z}_{n}\) of the zero vector in Eq. (8) can also match the condition in Eq. (9), making representation-based consistency regularization a special case of our method.
**(3) OCR can make the model less sensitive to the domain-specific attributes**. The mutual information between the residual component and the label attribute is:
\[I(Z_{n};Y)= KL[p(z_{n},y)||p(z_{n})p(y)] \tag{10}\] \[= \int dz_{n}\,dy\,p(z_{n},y)\text{log}\frac{p(z_{n},y)}{p(z_{n})p(y)}\] (11) \[= \int dz_{n}\,dy\,p(z_{n},y)\text{log}\frac{p(y|z_{n})}{p(y)}\] (12) \[= \mathcal{H}(Y)-\mathcal{H}(Y|Z_{n}), \tag{13}\]
where \(\mathcal{H}\) denotes the entropy, \(Y\) is the label set and \(Z_{n}\) is the residual component set, \(z_{n}\in Z_{n}\), \(y\in Y\). Note that \(\mathcal{H}(Y)\) is independent of our optimization procedure and so can be ignored. Then, we have:
\[\min_{z_{n}}I(Z_{n};Y)=\min_{z_{n}}-\mathcal{H}(Y|Z_{n})=\max_{z_{n}}\mathcal{ H}(Y|Z_{n}). \tag{14}\]
Therefore, by minimizing Eq. (3), we are just minimizing the mutual information between the residual component and the label attribute. As data augmentation imposes various task-irrelevant transformations to introduce domain-specific attributes for the original representation and correspondingly generates the residual component, the residual component can be regarded as the proxy of domain-specific attributes. Minimizing the mutual information in Eq. (14) can decorrelate the domain-specific attributes and the label attribute. As a result, the problem of sensitivity to domain-specific attributes is alleviated.
According to the Information Bottleneck principle [62], an optimal representation \(z\) of input \(x\) should satisfy two properties: sufficiency and minimality. Achille and Soatto [1] have demonstrated that being invariant to domain-specific attributes is helpful to guarantee minimality. Therefore, the proposed OCR is helpful to learn a better representation, which could improve the performance of the model on cross-domain tasks.
## 4 Experiments
### Tasks, Datasets and Setup
To evaluate our method, we consider five different cross-domain tasks: domain adaptation, test-time adaptation, domain generalization classification, domain generalization detection, and domain generalization semantic segmentation. Different tasks involve different datasets and setups, which we summarize in Table 1.
**Domain Adaptation Classification.** For domain adaptation classification we report on _Office-Home_[65]. It consists of four domains: Art, Clipart, Product and Real-world. There are about 15,500 images categorized into 65 classes. We consider two different settings, _i.e_., source-dependent [41, 56] and source-free [38, 72]. For the source-dependent setting, we use all labeled source samples and all unlabeled target samples for training. For the source-free setting, only the model trained in the source domain and the unlabeled target samples are given. Upon evaluation, we test the models in the unlabeled target samples. For the hyper-parameter, we set \(\lambda_{0}{=}0.7\).
**Test-Time Adaptation.** For test-time adaptation we report on _CIFAR100-C_[24]. This dataset includes 15 different corruption types with five levels of severity categorized into 100 classes. These corruptions were added to clean images from CIFAR100 [30]. There are 10,000 images for each corruption type. We used the ResNeXt-29 model pretrained in the clean CIFAR100 dataset from [25]. This task involves two settings: online [67] and continual online [68]. In both settings, we conduct the experiments on CIFAR100-C in an online fashion without the need for labels. The difference between the two settings is that the online setting will initialize the model to the state of pre-training on the clean dataset before adapting to each corruption type, while the continual online setting will continuously adapt data of different corruption types. In this task, we evaluate our method on images with the highest severity, _i.e_., level 5. For the hyper-parameter, we set \(\lambda_{0}{=}0.8\).
**Domain Generalization Classification.** For this task we report on _PACS_[34]. A commonly used domain generalization benchmark which includes four domains: Art Painting, Cartoon, Photo and Sketch. There are 9,991 images categorized into seven classes. We train the model on 3 of 4 domains and evaluate it on the remaining one. In this task, we set \(\lambda_{0}{=}0.5\).
**Domain Generalization Semantic Segmentation.** For this task we follow the _Semantic segmentation benchmark_[12], which includes five datasets. _GTAV_[54] is a large-scale synthetic dataset consisting of 24,966
\begin{table}
\begin{tabular}{l l l l} \hline Task & Dataset & Backbone & Evaluation metric \\ \hline
**Domain Adaptation Classification** & Office-Home & ResNet-50 & Accuracy \\
**Test-Time Adaptation** & CIFAR100-C & ResNeXt-29 & Accuracy \\
**Domain Generalization Classification** & PACS & ResNet-18 & Accuracy \\
**Domain Generalization Segmentation** & GTAV, SYNTHIA, Cityscapes & DeepLabV3+ & mIoU \\
**Domain Adaptation Object Detection** & Cityscapes, FoggyCityscapes & ResNet-50 & mAP \\ \hline \end{tabular}
\end{table}
Table 1: **Overview** of tasks, datasets, backbones and evaluation metrics.
driving-scene images generated from the Grand Theft Auto V game. There are 19 objects in the images. _SYNTHIA_[55] is another synthetic dataset containing 9,400 photo-realistic synthetic images with a resolution of 960\(\times\)720. _Cityscapes_[13] is a large-scale real-world dataset consisting of 3,450 finely-annotated images and 20,000 coarsely-annotated images collected from urban scenes of 50 cities in Germany. We use the finely-annotated set for training and testing. _BDD-100K_[75] is also a real-world dataset which consists of urban driving scene images collected from the US. We use 7,000 images for training and 1,000 images for evaluation. _Mapillary_[47] is the last real-world dataset containing 25,000 images collected from locations around the world. For this task, we follow the protocol in [12]. Specifically, the model is trained in GTAV for 40K iterations and evaluated on the remaining datasets. In this task, we set \(\lambda_{0}{=}0.1\).
**Domain Adaptation Object Detection.** In this task, we report on _Cityscapes_[13] and _FoggyCityscapes_[58]. FoggyCityscapes [58] is a synthetic foggy dataset based on Cityscapes. Each image is rendered with a clear Cityscapes image and the depth map. There are 8 categories in both domains. For the hyper-parameter, we set \(\lambda_{0}=0.5\).
For the data augmentations used in our method, we apply RandomCrop and RandomHorizontalFlip for the original image. For the augmented images, we further apply ColorJitter, RandomGrayscale and GaussianBlur. The detailed parameters for these augmentations are in the supplementary material.
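A plausible torchvision realization of these two pipelines is sketched below; the crop size, jitter strengths, and blur kernel are placeholders, since the exact values are deferred to the supplementary material.

```python
from torchvision import transforms

# Weak pipeline for the original view; the strong pipeline adds the
# photometric perturbations listed above. Parameter values are assumptions.
weak = transforms.Compose([
    transforms.RandomCrop(224, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
strong = transforms.Compose([
    transforms.RandomCrop(224, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0)),
    transforms.ToTensor(),
])
```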
To test the effectiveness of our method, in all the cross-domain tasks, our method is inserted into the existing methods as a plug-and-play module. We choose the weight of OCR through importance-weighted cross validation [60]. Our method is implemented with PyTorch [52] and MindSpore. Code is available at [https://github.com/mmjing/OCR](https://github.com/mmjing/OCR).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{**PACS**} \\ \cline{2-5} & Art & Cartoon & Photo & Sketch & Mean \\ \hline MMD-AAE [35] & 75.2 & 72.7 & 96.0 & 64.2 & 77.0 \\ CCSA [45] & 80.5 & 76.9 & 93.6 & 66.8 & 79.4 \\ JiGen [6] & 79.4 & 75.3 & 96.0 & 71.6 & 80.5 \\ Metareg [3] & 83.7 & 77.2 & 95.5 & 70.3 & 81.7 \\ L2A-OT [80] & 83.3 & 78.2 & 96.2 & 73.6 & 82.8 \\ \hline ResNet-18 [23] & 77.5 & 77.9 & 96.1 & 70.7 & 80.6 \\ w/ Manifold Mixup [66] & 75.6 & 70.1 & 93.5 & 65.4 & 76.2 \\ w/ Cutout [14] & 74.9 & 74.9 & 95.9 & 67.7 & 78.3 \\ w/ Cutmix [76] & 74.6 & 71.8 & 95.6 & 65.3 & 76.8 \\ w/ Mixup [77] & 76.8 & 74.9 & 95.8 & 66.6 & 78.5 \\ w/ DropBlock [17] & 76.4 & 75.4 & 95.9 & 69.0 & 79.2 \\ w/ MixStyle [81] & 82.3 & 79.0 & **96.3** & 73.8 & 82.8 \\ w/ R-Cons. Reg. & 77.9 & 78.6 & 93.5 & 78.6 & 82.2 \\ w/ P-Cons. Reg. & 79.2 & 80.2 & 95.9 & 79.3 & 83.7 \\
**w/ OCR** & 84.4 & 80.7 & 95.9 & 80.8 & **85.5** \\ \hline IIB [33] & 79.5 & 80.3 & 96.0 & 79.8 & 83.9 \\
**w/ OCR** & 85.1 & 80.9 & 96.2 & 81.8 & **86.0** \\ SFA [36] & 81.2 & 77.8 & 93.9 & 73.7 & 81.7 \\
**w/ OCR** & 84.5 & 80.5 & 96.1 & 81.2 & **85.6** \\ SelfReg [29] & 82.3 & 78.4 & 96.2 & 77.5 & 83.6 \\
**w/ OCR** & 85.5 & 80.9 & 96.2 & 81.4 & **86.0** \\ CIRL [42] & 86.1 & 81.0 & 95.9 & **82.7** & 86.3 \\
**w/ OCR** & **86.3** & **81.5** & 96.1 & 82.4 & **86.6** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Domain Generalization Classification.** Accuracies (%) on PACS. Results are based on the leave-one-domain-out protocol [81], where for each task we use 3 of the 4 domains as the source and the remaining 1 as the target, e.g., “Art” means “Cartoon, Photo, Sketch\(\rightarrow\)Art”. R- and P-Cons. Reg. mean representation-based and prediction-based consistency regularization.
\begin{table}
\begin{tabular}{l c} \hline \hline Method & Mean \\ \hline
**Source-Dependent** & \\ MCD [56] & 64.1 \\
**w/ OCR**[56] & 66.6 \\ CDAN [41] & 65.8 \\
**w/ OCR** & 68.0 \\ \hline
**Source-Free** & \\ ResNet-50 [23] & 46.1 \\ Source-only & 60.2 \\ NRC [72] & 71.9 \\ w/ R-Cons. Reg. & 71.5 \\ w/ P-Cons. Reg. & 72.1 \\
**w/ OCR** & **72.6** \\ SHOT [38] & 71.8 \\ w/ R-Cons. Reg. & 71.4 \\ w/ P-Cons. Reg. & 72.0 \\
**w/ OCR** & **72.8** \\ SHOT++ [39] & 72.8 \\
**w/ OCR** & **73.2** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Domain Adaptation.** Accuracy (%) on Office-Home with ResNet-50 backbone. All per-domain results are in the supplementary material. R- and P-Cons. Reg. mean representation-based and prediction-based consistency regularization. Results with are implemented by us.
\begin{table}
\begin{tabular}{l c c} \hline \hline & Online & Continual Online \\ \hline Source & 46.4 & 46.4 \\ BN Adapt [37] & 35.8 & 35.4 \\ \hline TENT [67] & 34.4 & 35.6 \\
**w/ OCR** & **31.3** & **32.4** \\ \hline CoTTA [68] & 36.8 & 32.5 \\
**w/ OCR** & **34.6** & **31.6** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Test-Time Adaptation.** Test error (%) for CIFAR100-to-CIFAR100C adaptation. The backbone model is ResNeXt-29. The corruption severity is 5. OCR can improve the baselines on both online setting and continual online setting.
### Results
**Domain Adaptation.** In Table 2, OCR is inserted as a plug-and-play module in each of the compared methods. In the source-dependent setting, OCR improves MCD [56] by 2.5% and CDAN [41] by 2.2%. In the source-free setting, OCR is still effective, it improves NRC [72] by 0.7% and SHOT [38] by 1.0%. In addition, as a comparison, OCR outperforms the prediction-based consistency regularization. We observe that the representation-based method does not offer clear advantage over the baseline NRC [72] and SHOT [38], which may be due to the strict regularization that increases the difficulty of model training. OCR is feature-based and independent of specific architectures, so it can be applied to transformer-based methods as well. We test SDAT [53] (ViT-B/16) and SDAT with OCR on Office-Home and achieve results of 84.3% and 85.0%, respectively. OCR can also achieve performance improvements on transformer-based architectures.
**Test-Time Adaptation.** In Table 3, for the online setting, OCR achieves 3.1% and 2.2% lower test errors than TENT [67] and CoTTA [68], respectively. For the continual online setting, TENT [67] and CoTTA [68] are also improved by 3.2% and 0.9% after adding OCR. This shows that OCR can enhance the robustness of the model against various types of corruptions. In fact, the augmented data can be regarded as data with corruption applied. Our OCR can effectively reduce the task-related information residing in the residual component in the augmented representations, thus enhancing the robustness of the model.
**Domain Generalization Classification.** In Table 4, OCR outperforms the vanilla ResNet-18 with a large margin. Note that Mixup and Manifold Mixup do not improve the vanilla ResNet-18. The reason why Mixup is ineffective here is because Mixup mainly encourages the model to be robust to the combination of the existing patterns, but does not enhance the ability to handle the unseen styles. MixStyle regularizes the model to be robust to the unseen
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & mAP & Boost \\ \hline SDAT [53] & 37.5 & \\
**w/ OCR** & **39.1** & 1.6 \(\uparrow\) \\ \hline SUDA [78] & 42.8 & \\
**w/ OCR** & **44.2** & 1.4 \(\uparrow\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Domain Adaptation Object Detection.** mAP (%) on Cityscapes \(\rightarrow\) FoggyCityscapes.
Figure 4: **Fourier Perspective.** Model sensitivity to additive noise aligned with different Fourier basis vectors on PACS (Art). The pixels closer to the center in the heat map represent the impact of low frequency noise, while the pixels outward represents the impact of high frequency noise. The model trained with OCR is more robust compared with the model learned by ERM.
Figure 3: **Parameter \(\lambda\) Analysis on PACS.** (a) The strategy in Eq. (4) achieves best performance. (b) Different tasks require different initial values.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Source & \multicolumn{4}{c}{GTAV} & \multirow{2}{*}{Mean} & \multirow{2}{*}{Boost} \\ \hline Target & BDD100K & Cityscapes & SYNTHIA & & & \\ \hline \(\dagger\)DeepLabv3+ [9] & 25.1 & 29.0 & 26.2 & 28.2 & 27.1 & \\
**w/ OCR** & 34.7 & 34.8 & 25.1 & 39.8 & 33.6 & 6.5 \(\uparrow\) \\ \hline \(\dagger\)IBN-Net [51] & 32.3 & 33.9 & 27.9 & 37.8 & 33.0 & \\
**w/ OCR** & 34.9 & **41.7** & 27.6 & 38.7 & **35.7** & 2.7 \(\uparrow\) \\ \hline \(\dagger\)RobustNet [12] & 35.2 & 36.6 & **28.3** & **40.3** & 35.1 & \\
**w/ OCR** & **37.2** & 38.9 & 27.0 & 39.7 & **35.7** & 0.6 \(\uparrow\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Domain Generalization Semantic Segmentation.** All models are trained on GTAV and evaluated on BDD100K, Cityscapes, SYNTHIA and Mapillary. We use ResNet-50 with output stride 16. Results with \(\dagger\) are from [12]. Best mIoU (%) results highlighted in bold. OCR can improve all the baseline methods.
styles, however, it does not explicitly minimize the domain-specific information in the representation, leading to its worse performance than OCR. We also test our method on PACS based on ResNet-50 and SWAD [7]. OCR improves SWAD from 87.8% to 88.5%. OCR achieves consistent performance advantages on both ResNet-18 and ResNet-50.
**Domain Generalization Semantic Segmentation.** In Table 5, the Baseline is DeepLabV3+ [9]. OCR improves the Baseline by 6.5%. For IBN-Net [51], the improvement is 2.7%, which is also impressive. For RobustNet [12], we observe that OCR has a small improvement of 0.6%, this may be because RobustNet also enhances the generalization of the model by eliminating domain specific information. OCR, however, is different from RobustNet since RobustNet disentangles the domain-specific and domain-invariant part in the feature covariance while OCR does this based on the assumption of linear combination.
**Cross-domain Object Detection.** In Table 6, we report results of OCR and the compared methods on the object detection task of City [13]\(\rightarrow\) FoggyCity [58]. OCR achieves improvements of 1.6% and 1.4% compared to SDAT [53] and SUDA [78], respectively. Therefore, OCR is effective on object detection tasks.
### Ablations
**Parameter \(\lambda\) Analysis.** In Fig. 3, we provide an analysis of the parameter \(\lambda\). In Fig. 3 (a), we illustrate the impact of different choices for \(\lambda\) on PACS, where "random" indicates that we choose a random value from the range (0,1) as \(\lambda\) during each iteration, "fix" means we fix \(\lambda\) as 0.5, "\(\lambda\)" represents the strategy in Eq. (4), while "1 \(-\)\(\lambda\)" is the opposite of the "\(\lambda\)" strategy. As can be seen from Fig. 3 (a), using the strategy in Eq. (4) helps to train an optimal model. This result is in line with our hypothesis, _i.e_., at the beginning of training, there is a small domain-specific ratio in the representation, so a small \(\lambda\) is required; as OCR continuously minimizes the domain-specific information, the domain-invariant part gradually increases, so a larger \(\lambda\) is required. In addition, we test the performance of OCR with the simple formulation, i.e., \(z_{n}=z_{a}-z_{o}\), on PACS and achieve an accuracy of 84.0%, which is close to that of the fixed proportion setting in Fig. 3 (a), but lower than our formulation, which obtains 85.5%. In Fig. 3 (b), we report the impact of different initial values \(\lambda_{0}\) on performance. From Fig. 3 (b), we observe that the results on Office-Home do not fluctuate much with different initial values; its best \(\lambda_{0}\) is around 0.7. The results on PACS are more sensitive to different initial values; its best \(\lambda_{0}\) is around 0.5. Therefore, different tasks need different initial values.
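For illustration, a minimal sketch of how such a schedule could be wired up is given below; since Eq. (4) appears earlier in the paper and is not restated here, both the separation formula and the linear ramp are placeholder assumptions rather than the paper's exact rule.

```python
import numpy as np

def separate_residual(z_a, z_o, lam):
    # Hypothetical residual separation: remove a lam-weighted share of the
    # original representation z_o from the augmented representation z_a.
    # lam = 1 recovers the simple formulation z_n = z_a - z_o tested above.
    return z_a - lam * z_o

def lam_schedule(step, total_steps, lam0=0.5):
    # Hypothetical monotone ramp from the initial value lam0 toward 1,
    # mirroring the intuition that a larger lambda is needed as training
    # proceeds; the paper's Eq. (4) defines the actual strategy.
    return lam0 + (1.0 - lam0) * (step / float(total_steps))
```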
**Analysis on Other Layers.** By default we apply OCR to representations of the penultimate layer of the model. OCR can be applied to the representations of other layers as well. We show results with a ResNet-50 in Table 7, and we observe that: (1) At the image level, OCR cannot achieve ideal results, which may be because some attributes of the sample, _e.g_., lighting and shooting angle, cannot be separated at the image level; (2) In general, the deeper the layer, the more effective OCR will be. Prior works [40, 74] have found that representations extracted from the shallow layers are more generalized, while the representations extracted from the deep layers show strong task relevance. Therefore, shallow representations are not suitable for applying OCR, while deep representations can eliminate domain-specific information through OCR.
**Fourier Perspective.** Following [73], we investigate the sensitivity of our models to high- and low-frequency corruptions via a perturbation analysis in the Fourier domain. We plot the Fourier heat map in Fig. 4. The pixels closer to the center of the heat map represent the impact of low-frequency noise, while the pixels farther out represent the impact of high-frequency noise. We observe that the model trained with OCR is more robust compared with the model learned by ERM, especially in the high-frequency domain. High-frequency information is often introduced by styles that vary significantly across domains. Therefore, OCR can effectively eliminate the domain-specific style information.
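A minimal sketch of the perturbation used for one heat-map pixel, assuming unit-L2-norm Fourier basis noise as in [73] (symmetrization and per-channel handling are simplified here):

```python
import numpy as np

def fourier_basis_noise(h, w, i, j, eps):
    # Image-space noise aligned with the (i, j) Fourier basis vector:
    # inverse-FFT a one-hot spectrum, normalize to unit L2 norm, scale by eps.
    spectrum = np.zeros((h, w), dtype=complex)
    spectrum[i, j] = 1.0
    basis = np.real(np.fft.ifft2(spectrum))
    basis /= np.linalg.norm(basis)
    return eps * basis

# Heat-map entry (i, j): the model's error rate on x + fourier_basis_noise(...)
```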
**Robustness to Adversarial Attack.** In Table 8 we report the adversarial robustness of our method against various white-box attacks, including FGSM [20], BIM [31] and PGD [43]. We impose the adversarial attacks through the AdvBox toolbox2. For fair comparison, we set the iteration number as 10, the adversarial strength as 0.01 and the step size as 0.01; all other parameters remain at their default values. Compared with ERM and prediction-based consistency regularization, OCR achieves the best robustness against all three adversarial attacks. Especially for the iterative methods with more powerful attacks, OCR achieves accuracies of 61.6% and 61.7% against PGD and BIM, which is remarkably higher than ERM and prediction-based consistency regularization. The superior robustness of OCR against adversarial attacks derives from explicitly eliminating the negative effects of the domain-specific attributes which cause the domain shifts.
Footnote 2: [https://github.com/advboxes/AdvBox](https://github.com/advboxes/AdvBox)
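For reference, the single-step FGSM baseline in Table 8 can be sketched as follows (a minimal PyTorch version with the adversarial strength eps = 0.01 used above; BIM and PGD iterate this step):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.01):
    # Perturb x one step along the sign of the loss gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()
```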
**Effect of Order-preserving Property.** In Table 9, we report the top 1 to top 5 accuracies on Office-Home.
Table 7: **Analysis on Other Layers.** Accuracies (%) on Office-Home, where “Input” means we apply OCR at the pixel level and “BT” is the bottleneck block of ResNet-50. Generally, the deeper the layer, the more effective OCR will be.

| Input | Conv1 | BT1 | BT2 | BT3 | BT4 | FC |
| --- | --- | --- | --- | --- | --- | --- |
| 68.5 | 69.0 | 70.4 | 70.6 | 71.2 | 72.2 | 72.8 |
Compared with the prediction-based method, OCR has more significant advantages in top 3 and top 5 accuracies, which shows that the order-preserving property in consistency regularization guarantees that even when the maximum-probability category does not hit the ground-truth label, the label is very likely to appear among the top 3 or top 5 categories.
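For reference, the reported quantity is computed as below (a small helper, assuming PyTorch logits and integer class labels):

```python
import torch

def topk_accuracy(logits, labels, k=3):
    # Fraction of samples whose ground-truth label appears among the k
    # highest-scoring classes -- the quantity reported in Table 9.
    topk = logits.topk(k, dim=1).indices             # (N, k) class indices
    hits = (topk == labels.unsqueeze(1)).any(dim=1)  # (N,) boolean
    return hits.float().mean().item()
```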
**Effect of Data Augmentations.** Following the setting in Table 8, we remove ColorJitter, RandomGrayscale and GaussianBlur one at a time. The results are reported in Table 10. We observe that the combination of the three augmentations achieves the best performance. According to the practice in self-supervised learning [10], not all combinations help improve the generalization of the model. Exploring the best combination would be a promising direction for future work.
## 5 Conclusion and Future work
In this paper, we propose Order-preserving Consistency Regularization (OCR) to enhance model robustness to domain-specific attributes for cross-domain tasks. We first separate the residual component from the augmented representation. Then, we maximize the entropy of the residual component to enlarge the uncertainty of its prediction. As a result, the residual component contains little information about the task of interest, i.e., the model is less sensitive to the domain-specific attributes. Throughout the experiments, we have shown that OCR enhances the generalization of the model and provides better robustness to adversarial attacks. OCR is easy to implement and can be applied to any cross-domain task to improve the performance. Like any data-augmentation-based method, our proposal fails when the augmentations are completely independent of the domain gaps. Therefore, exploring the most related data augmentations for specific cross-domain tasks would be a suitable direction for future work.
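As a sketch of the central objective described above (our paraphrase, not the paper's exact loss), maximizing the entropy of the residual component's prediction amounts to minimizing its negative entropy:

```python
import torch
import torch.nn.functional as F

def residual_entropy_loss(residual_logits):
    # Negative entropy of the residual component's prediction; minimizing
    # this maximizes entropy, pushing the residual toward task-irrelevance.
    p = F.softmax(residual_logits, dim=1)
    return (p * torch.log(p + 1e-8)).sum(dim=1).mean()
```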
## Acknowledgment
This work was supported in part by the National Natural Science Foundation of China under Grant 62250061, 62176042, 62276054, and in part by the Sichuan Science and Technology Program under Grant 2023YFG0156, and in part by CAAI-Huawei MindSpore Open Fund.
|
2309.04724 | A Visual Analytic Environment to Co-locate Peoples' Tweets with City
Factual Data | Social Media platforms (e.g., Twitter, Facebook, etc.) are used heavily by
public to provide news, opinions, and reactions towards events or topics.
Integrating such data with the event or topic factual data could provide a more
comprehensive understanding of the underlying event or topic. Targeting this,
we present our visual analytics tool, called VC-FaT, that integrates peoples'
tweet data regarding crimes in San Francisco city with the city factual crime
data. VC-FaT provides a number of interactive visualizations using both data
sources for better understanding and exploration of crime activities happened
in the city during a period of five years. | Snehal Patil, Shah Rukh Humayoun | 2023-09-09T09:01:33Z | http://arxiv.org/abs/2309.04724v1 | # A Visual Analytic Environment to Co-locate Peoples' Tweets with City Factual Data
###### Abstract
Social media platforms (e.g., Twitter, Facebook, etc.) are used heavily by the public to provide news, opinions, and reactions towards events or topics. Integrating such data with the event or topic factual data could provide a more comprehensive understanding of the underlying event or topic. Targeting this, we present our visual analytics tool, called VC-FaT, that integrates peoples' tweet data regarding crimes in San Francisco city with the city factual crime data. VC-FaT provides a number of interactive visualizations using both data sources for better understanding and exploration of crime activities that occurred in the city during a period of five years.
**Index Terms:** Human-centered computing--Visualization--
## 1 Introduction
The availability of social media data, such as Twitter (currently renamed as _X_) or Facebook data, has opened new opportunities for researchers to understand peoples' opinions and behaviors, including their perspectives on cities. Twitter's real-time updates and user-generated content provide valuable insights into events, news, and community experiences within a city. Integrating this data with city factual data would open up a comprehensive understanding of urban activities and issues, which could be useful for fostering a safer and more informed community.
In this work, we aim to provide a visual analytics (VA) tool, called **VC-FaT** (**V**isual **C**o-locating **Fa**ctual Data with **T**weet Data), that integrates Twitter data with city factual data on crimes in a city (i.e., San Francisco) to provide insights into how people engage with and discuss crimes on social media, and whether there is a relationship between this discussion and the city's factual crime data. The VC-FaT tool uses a number of interactive visualizations over both data sources (i.e., the city factual crime data and the tweet data from Twitter) to visualize a comprehensive overview of crime hotspots, potential danger zones, and areas of interest in the city. A visual analytics tool like VC-FaT would be useful for tourists and residents in a city to explore potentially dangerous areas and take appropriate precautions.
## 2 Related Work
Researchers have developed VA tools using tweet data to help users explore events and places in a city or country. For example, Godwin et al. [2] introduced a technique for generating typographic maps by mapping geotagged tweets to neighborhood and street shapes in a city, where these maps provide geospatial visualization of tweet topics and sentiments. Qazi et al. [7] presented GeoC19, a large-scale multilingual Twitter dataset comprising over 524 million tweets collected during the COVID-19 pandemic, with the aim of discussing the research implications of the dataset, including addressing challenges such as identifying fake news and building disease forecast and surveillance models. Kozlowska and Steinnocher [3] explored the use of geotagged Twitter data to understand urban activities and define urban function in the absence of land-use information. Chae et al. [1] described an interactive visual analytics approach to extract and examine abnormal events within various social media data sources using seasonal-trend decomposition. Scholz and Jeznik [6] focused on analyzing tourist flows in Styria, Austria, using Twitter data collected from 2008 to 2018, where they used Hotspot Analysis and Kernel Density Estimation methods to investigate the spatial distribution of tourism-relevant tweets. Lu et al. [4] proposed a visual analytics framework for sentiment visualization of geo-located Twitter data in disaster scenarios. Krstajic et al. [5] discussed the use of Twitter as a valuable source of real-time information on current events and presented an online method for detecting real-world incidents, such as natural disasters or man-made catastrophes, by analyzing Twitter data. In our work, we focus on combining the tweet data with city factual data in the resulting VA environment.
## 3 The Datasets
We used data from two sources, i.e., Twitter and San Francisco (SF) crime data. For the San Francisco crime-related tweet data, we used the Twitter API1 and collected tweets based on crime-related keywords between January 1, 2018 and December 12, 2022. We collected 45,353 tweets that either carried geolocations of San Francisco's districts or contained crime-related keywords while mentioning San Francisco or one of its districts. The SF crime data was collected from the official website of the SF Government2, provided by the SF Police Department (SFPD), and covers the same period of January 1, 2018, to December 12, 2022. The dataset comprises 602,901 crime incidents classified into various categories (e.g., arson, theft, burglary, assault, fraud, robbery, car theft, etc.). After removing minor traffic violation records (e.g., signal crossing tickets, roadside tickets, etc.), we retained 401,449 recorded crime incidents. The dataset contains 34 columns of information, including the neighborhood area, incident category, latitude, longitude, police district, and the time the incident was reported and occurred. In preprocessing the tweet data, we used the Google Maps API3 to map tweets to their corresponding districts in San Francisco, and the NLTK Toolkit4 to tokenize the keywords so as to identify the crime type associated with each tweet (see the sketch below).
Footnote 1: [https://developer.twitter.com/en/docs/twitter-api](https://developer.twitter.com/en/docs/twitter-api)
Footnote 2: [https://data.sfgov.org/](https://data.sfgov.org/)
Footnote 3: [https://maps.googleapis.com/maps/api/geocode/json](https://maps.googleapis.com/maps/api/geocode/json)
Footnote 4: [https://www.nltk.org/](https://www.nltk.org/)
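A minimal sketch of these two preprocessing steps, assuming the public Google Maps Geocoding API response format and a small illustrative keyword set (the paper's full keyword list is not reproduced):

```python
import requests
from nltk.tokenize import word_tokenize  # assumes the NLTK punkt data is installed

CRIME_KEYWORDS = {"theft", "robbery", "assault", "burglary", "arson", "fraud"}  # sample subset

def district_of(lat, lng, api_key):
    # Reverse-geocode a tweet's coordinates and return the SF neighborhood,
    # reading the "neighborhood" component from the geocoding response.
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/geocode/json",
        params={"latlng": f"{lat},{lng}", "key": api_key},
    ).json()
    for result in resp.get("results", []):
        for comp in result["address_components"]:
            if "neighborhood" in comp["types"]:
                return comp["long_name"]
    return None

def crime_types(tweet_text):
    # Tokenize the tweet with NLTK and intersect with the keyword list.
    tokens = {t.lower() for t in word_tokenize(tweet_text)}
    return tokens & CRIME_KEYWORDS
```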
## 4 VC-FaT: Visual Co-locating Factual Data with Tweet Data
We developed our VC-FaT tool as an interactive visual analytics environment that enables users to explore crime data in a city (i.e., San Francisco) together with peoples' tweet data over a period of 5 years. We use a number of interactive visualizations with filtering options to explore and relate crime and tweet data for SF districts.
VC-FaT gives the option to view and explore the two data sources in side-by-side views or combined in a single view. For example, Figure 1 shows the crime counts and tweet counts using heatmaps in a side-by-side view of all SF district areas.
The left-side heatmap shows the factual crime data in aggregated form over 5 years, and the right side shows the tweet data associated with each district. Interestingly, we see nearly the same pattern in both heatmaps with respect to crime-affected areas. Hovering over a particular neighborhood opens a tooltip with further details, e.g., the neighborhood name, the total number of crimes or tweets, etc. Users can filter the data based on one particular crime type.
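A minimal sketch of one such heatmap layer, assuming district boundaries in GeoJSON with a `district` property and a per-district count table (the visualization stack here is our choice for illustration; the paper does not state its implementation):

```python
import folium

def district_heatmap(geojson_path, counts_df, value_col="crimes"):
    # Choropleth of per-district counts over San Francisco; counts_df has
    # columns ["district", value_col].
    m = folium.Map(location=[37.77, -122.42], zoom_start=12)
    folium.Choropleth(
        geo_data=geojson_path,
        data=counts_df,
        columns=["district", value_col],
        key_on="feature.properties.district",
        fill_color="YlOrRd",
    ).add_to(m)
    return m
```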
VC-FaT enables users to see the timeline of both datasets with different options using density charts; e.g., Figure 2 shows the year-wise timeline of both datasets, while Figure 3 shows the cumulative evolution of crime records in SF by adding the crime counts from previous years to each following year. Users can view both datasets by days, weeks, months, or years.
VC-FaT also provides interactive visualizations that show both datasets in the same view, giving users the freedom to explore them together from different perspectives. For example, Figure 4 uses a heatmap to show the geographic distribution of crime incidents, with overlaid bubbles showing the tweet data for those SF district areas. This view combines the factual crime data with public perceptions and concerns about crime at the underlying geolocation.
## 5 Future Work
In this work, we presented our VC-FaT tool to visually explore crime activities in San Francisco districts during a period of 5 years using peoples' tweet data and the city factual crime data. In the future, we plan to expand the geographical coverage to other cities and countries as well as incorporate data from various other sources to gain a more comprehensive understanding of crime situations. We also plan to provide in-depth exploration of peoples' tweets, such as sentiment analysis or emotion analysis. Furthermore, we intend to conduct user studies to find out how well users can understand the crime situation in city districts from the provided visualizations. Finally, we also intend to utilize such data to develop a model for predicting crime activities in a city district.
|
2309.09200 | Scaling limit of the random walk on a Galton-Watson tree with regular
varying offspring distribution | We consider a random walk on a Galton-Watson tree whose offspring
distribution has a regular varying tail of order $\kappa\in (1,2)$. We prove
the convergence of the renormalised height function of the walk towards the
continuous-time height process of a spectrally positive strictly stable Lévy
process, jointly with the convergence of the renormalised trace of the walk
towards the continuum tree coded by the latter continuous-time height process. | Dongjian Qian, Yang Xiao | 2023-09-17T07:58:25Z | http://arxiv.org/abs/2309.09200v3 | # Scaling limit of the random walk on a Galton-Watson tree with regular varying offspring distribution
###### Abstract
We consider a random walk on a Galton-Watson tree whose offspring distribution has a regular varying tail of order \(\kappa\in(1,2)\). We prove the convergence of the renormalised height function of the walk towards the continuous-time height process of a spectrally positive strictly stable Levy process, jointly with the convergence of the renormalised trace of the walk towards the continuum tree coded by the latter continuous-time height process.
## 1 Introduction
Let \(\mathbb{T}\) be a Galton-Watson tree with root \(\rho\). For every vertex \(u\) of \(\mathbb{T}\), denote by \(\nu(u)\) the number of its children; these are i.i.d. random variables with generic law \(\nu\). Throughout the article, we assume that the positive random variable \(\nu\) is regularly varying with order \(\kappa\), for a constant \(\kappa\in(1,2)\) and a slowly varying function \(l(x)\). We denote this as

\[\mathbb{P}(\nu>x)\sim x^{-\kappa}l(x),\]

where we say \(f(x)\sim g(x)\) if

\[\lim_{x\to\infty}\frac{f(x)}{g(x)}=1.\]
Since only the tail of \(l(x)\) matters, we may and will assume that \(l(x)\) is a positive constant when \(x\) is small. Define
\[a_{n}:=\inf\{x:\mathbb{P}(\nu>x)\leqslant n^{-1}\},\]

which is the proper scale, as shown in [6, Theorem 3.7.2]. Note that when \(l(x)=1\), we can take \(a_{n}=n^{1/\kappa}\).
As \(\nu\) is regularly varying with order \(\kappa\), it is well known that \(\nu\) has a finite moment of order \(\alpha\) for \(\alpha\in(0,\kappa)\). In particular, let \(m:=E[\nu]\) be the expectation of the reproduction. We assume \(m>1\) so that \(\mathbb{T}\) has a positive probability of being infinite, by standard branching-process theory. Note that in the general case, even if \(l(x)\) is a constant, we cannot ensure that \(\nu\) has a finite moment of order \(\kappa\).
For a vertex \(u\) in the tree, we will denote by \(|u|\) its generation, by \(u^{*}\) its parent, and by \(c(u)\) its set of children. Set \(u_{0},u_{1},\ldots,u_{|u|-1}\) as the ancestors of \(u\) at generations \(0,1,\ldots,|u|-1\). For every \(u,v\in T\), we will write \(u\leq v\) when \(u\) is an ancestor of \(v\), and \(u<v\) when \(u\leq v\) and \(u\neq v\). Let \(T_{u}:=\{v\in T;u\leq v\}\) be the subtree of \(T\) rooted at \(u\).
The collection of all Galton-Watson trees is equipped with the probability measure \(P\), with \(E\) as the corresponding expectation. Since \(T\) has a positive probability of being infinite, we let \(P^{*}\) be the measure conditioned on survival, that is
\[P^{*}(\cdot)=P(\cdot|T\text{ is infinite}).\]
Now we define the \(\lambda\)-biased random walk \((X_{n})_{n\geq 0}\) on a fixed tree \(T\). The walk starts from the root \(\rho\), with the following transition probabilities:

\[P^{T}(X_{n+1}=u^{*}|X_{n}=u)=\frac{\lambda}{\lambda+\nu(u)};\qquad P^{T}(X_{n+1}=v|X_{n}=u)=\frac{1}{\lambda+\nu(u)},\quad\forall v\in c(u).\]
For the root, we add an extra parent \(\rho^{*}\) and suppose that the random walk is reflected at \(\rho^{*}\). \(P^{T}\) (resp. \(E^{T}\)) denotes the law (resp. expectation) of the random walk conditionally on \(T\), which is called the quenched law. Another measure \(P\), often referred to as the annealed law, is obtained by averaging the quenched law over the law of the tree. \(E\) is its associated expectation. Also, we denote by \(P^{*}\) the annealed probability conditioned on non-extinction.
On the topic of random walks on a Galton-Watson tree, Lyons proved in [11] that, under \(P^{*}\), \((X_{n})_{n\geq 0}\) is transient when \(\lambda>m\), positive recurrent when \(0<\lambda<m\), and null recurrent when \(\lambda=m\). In the critical case, namely \(\lambda=m\), Y. Peres and O. Zeitouni proved in [12] that a central limit theorem for the height function of the walk holds, under some moment conditions on the offspring distribution. This theorem has been extended by A. Dembo and N. Sun in [4] to the case of multitype Galton-Watson trees, under some weaker hypotheses. However, there seems to be no investigation of the case where \(\nu\) only has finite moments of order less than two.
In [1], L. Raphelis and E. Aidekon consider the trace of the walk at time \(n\), that is, the subtree of \(T\) made up of the vertices visited until time \(n\): \(\mathcal{R}_{n}:=(T^{n},d_{T^{n}})\). In this notation, \(T^{n}:=\{u\in T:\exists k\leq n,X_{k}=u\}\) is regarded as a metric space, equipped with the natural graph distance \(d_{T^{n}}\). The trace can also be considered as an unlabelled tree, with the edges inherited from \(T\). It is proved in [1] that when \(\lambda=m>1\) and \(\nu\) has finite variance, the following convergence holds both under \(P^{*}\) (annealed case) and under \(P^{T}\) for \(P^{*}\)-a.e. tree (quenched case):
\[\frac{1}{\sqrt{\sigma^{2}n}}(\mathcal{R}_{n},(|X_{[nt]}|)_{t\in[0,1]})\overset{n\to\infty}{\Longrightarrow}(\mathcal{T}_{(B_{t})_{t\in[0,1]}},(B_{t})_{t\in[0,1]}), \tag{1}\]
where \(\sigma^{2}:=\frac{m(m-1)}{E[\nu(\nu-1)]}\), \((B_{t})_{t\in[0,1]}\) is a reflected Brownian motion on the time interval \([0,1]\) and \(\mathcal{T}_{(B_{t})_{t\in[0,1]}}\) is the real tree encoded by the Brownian motion. The convergence holds in law for the pointed Gromov-Hausdorff topology on metric spaces and the Skorokhod topology on cadlag functions. The definition and the concept of convergence of real trees can be found in, for example, [8].
Note that the \(\lambda\)-biased random walk can be included in the framework of random walks on trees in random environment, where no randomness is introduced in the environment. In detail, the potential of a vertex \(u\) in the first generation of \(T\), always denoted \(\Delta V(u)\), is \(\ln(\lambda)\) with probability 1. Then the Laplace transform of the point process of the potential can also be defined for \(t\geqslant 0\) by:
\[\psi(t):=E[\sum_{|u|=1}e^{-tV(u)}]=E[\frac{\nu}{\lambda^{t}}].\]
In [3], it is proved by L. Raphelis that under the hypotheses \((H_{c})\) and \((H_{\kappa})\) on \(\psi\), the height function of the walk \((|X_{n}|)_{n\geqslant 0}\) and the trace \((\mathcal{R}_{n})_{n\geqslant 0}\) converge jointly towards the continuous-time height process of a spectrally positive stable Levy process of index \(\kappa\) and the Levy forest coded by it, by the same method as in [1]. The proof of the main theorem in that article relies on the theory of the convergence of height processes and Levy trees constructed in [5]. Roughly speaking, the convergence of height processes implies the convergence of real trees.
However, the result in [3] does not cover the case of the \(\lambda\)-biased random walk. On one hand, in that article Theorem B of [7] is applied as a main tool to get the regularly varying tail, which requires a non-lattice condition. On the other hand, the condition \((H_{\kappa})\) in that article is not satisfied for general \(l(x)\), even when \(l(x)\) is a constant. To be precise, we usually cannot guarantee regular variation of order \(\kappa\) and a finite moment of order \(\kappa\) at the same time. This is not just a technical problem: as will be shown in the following proof, the regularly varying tail of a key random variable comes from a totally different place. Roughly speaking, in our case it is the behavior of the random walk during one excursion off the spine that determines the behaviour of the tail of the key random variable.
In this article, we prove that when \(\nu\) is regularly varying, the trace and the height of the null-recurrent \(\lambda\)-biased random walk also have a scaling limit. Regular variation seems almost necessary when \(\nu\) does not have a second moment, since only such distributions are in the domain of attraction of a \(\kappa\)-stable distribution. From now on, we assume \(\lambda=m\). The statement of our main theorem is the following.
**Theorem 1.1**.: _Suppose the offspring distribution \(\nu\) is regularly varying with index \(\kappa\) for a certain \(\kappa\in(1,2)\). There exists a constant \(C^{*}\in(0,\infty)\) such that the following convergence holds in law under \(P^{*}\) or \(P^{T}\) for \(\mathbb{P}^{*}\)-a.e. trees:_

\[\frac{a(n)}{n}\bigg{(}(|X_{[nt]}|)_{t\geqslant 0},\mathcal{R}_{n}\bigg{)}\stackrel{{ n\to\infty}}{{\Longrightarrow}}C^{*}\bigg{(}(H_{t})_{t\geqslant 0},\mathcal{T}_{(H_{t})_{0\leqslant t\leqslant 1}}\bigg{)},\]

_where \((H_{t})_{t\geqslant 0}\) is the continuous-time height process of a strictly stable spectrally positive Levy process of index \(\kappa\) and \(\mathcal{T}_{(H_{t})_{0\leqslant t\leqslant 1}}\) is the real tree coded by \((H_{t})_{0\leqslant t\leqslant 1}\). The convergence holds in law for the Skorokhod topology on cadlag functions and the pointed Gromov-Hausdorff topology on metric spaces._

_In particular, when \(l(x)\) is a constant \(C_{1}\), we have_

\[\frac{1}{n^{1-\frac{1}{\kappa}}}\bigg{(}(|X_{[nt]}|)_{t\geqslant 0},\mathcal{R}_{n}\bigg{)}\stackrel{{ n\to\infty}}{{\Longrightarrow}}C^{**}\bigg{(}(H_{t})_{t\geqslant 0},\mathcal{T}_{(H_{t})_{0\leqslant t\leqslant 1}}\bigg{)},\]

_where \(C^{**}=C^{*}C_{1}^{-1/\kappa}\)._
**Remark 1.2**.: The constant \(C^{*}\) can be computed as

\[(C_{0}|\Gamma(1-\kappa)|)^{-\frac{1}{\kappa}}2^{-\frac{\kappa-1}{\kappa}}\Big{(}\frac{m-1}{m}\Big{)}^{-\frac{2}{\kappa}},\]

where \(C_{0}\) is given in Proposition 4.5.
**Remark 1.3**.: When \(\kappa=2\), we can define the truncated moment function

\[\mu(x):=E_{1}[(L^{1}-1)^{2}1_{\{L^{1}\leqslant x\}}],\]

where the definitions of the measure and the random variable \(L^{1}\) appear in Section 4. It can be proved similarly that \(\mu(x)\sim 2C_{0}\int^{x}y^{-1}l(y)\mathrm{d}y\), which is slowly varying by [2, Proposition 1.5.9 a]. Note that the tail of \(\mu(x)\) is different from that of \(l(x)\). In fact,

\[\lim_{x\to\infty}\frac{1}{l(x)}\int_{0}^{x}t^{-1}l(t)\mathrm{d}t=\infty.\]

\(a^{\prime}(n)\) is the function of \(n\) satisfying

\[\frac{n\mu(a^{\prime}(n))}{(a^{\prime}(n))^{2}}\to C\]

for some constant \(C\). Then we can prove in the same way that

\[\frac{a^{\prime}(n)}{n}\bigg{(}(|X_{[nt]}|)_{t\geqslant 0},\mathcal{R}_{n}\bigg{)}\stackrel{{ n\to\infty}}{{\Longrightarrow}}C^{*}\bigg{(}(|B_{t}|)_{t\geqslant 0},\mathcal{T}_{|B_{t}|_{0\leqslant t\leqslant 1}}\bigg{)},\]
where \((B_{t})_{t\geq 0}\) is a standard Brownian motion and the constant \(C^{*}=(C/2)^{-1/2}m/(m-1)\). In particular, when \(l(x)\) is a constant \(C_{1}\), we have
\[\frac{1}{(n\ln^{-1}(n))^{\frac{1}{2}}}\bigg{(}(|X_{[nt]}|)_{t\geq 0}, \mathcal{R}_{n}\bigg{)}\stackrel{{ n\to\infty}}{{\Longrightarrow}}C^{*} \bigg{(}(|B_{t}|)_{t\geq 0},\mathcal{T}_{|B_{t}|_{0\leq t\leq 1}}\bigg{)},\]
with \(C^{*}=(C_{0}C_{1})^{-1/2}m/(m-1)\). When \(\nu\) has a second moment, which is equivalent to \(x^{-1}l(x)\) being integrable, \(\mu(x)\) is bounded. Thus \(a^{\prime}(n)=\sqrt{n}\) and \(C=E_{1}[(L^{1}-1)^{2}]\). It can be easily checked that the formula degenerates to (1).
The article is organized as follows. In Section 2, a global strategy is given and the details are omitted; they can be found in [3]. The theorem then boils down to checking a hypothesis \((\mathbf{H}_{1})\) from that section. In Section 3, two equivalent changes of measure for the trace are introduced, which are used in the following proof. Then, the whole of Section 4 is devoted to proving the hypothesis \((\mathbf{H}_{1})\). Subsection 4.2 is the core part and the main contribution of the paper. In that subsection, we apply the Laplace transform and the dominated convergence theorem to obtain the regular variation of the tail of a key random variable.
## 2 Overview of the proof
We first show an annealed version of the main theorem for random walks on forests. Denote by \(\mathbb{F}\) a forest made up of a collection of i.i.d. Galton-Watson trees \((\mathbb{T}_{i})_{i\geq 1}\) with corresponding roots \((\rho_{i})_{i\geq 1}\). Let a nearest-neighbour random walk \((X_{n}^{\mathbb{F}})_{n\geq 0}\) on \(\mathbb{F}\) start from \(\rho_{1}\) with the transition probabilities:
\[P^{\mathbb{F}}(X_{n+1}^{\mathbb{F}}=v|X_{n}^{\mathbb{F}}=u)=\frac{1}{m+\nu(u)},\quad v\in c(u);\]
\[P^{\mathbb{F}}(X_{n+1}^{\mathbb{F}}=u^{*}|X_{n}^{\mathbb{F}}=u)=\frac{m}{m+\nu(u)},\quad u\text{ is not a root};\]
\[P^{\mathbb{F}}(X_{n+1}^{\mathbb{F}}=\rho_{i+1}|X_{n}^{\mathbb{F}}=u)=\frac{m}{m+\nu(u)},\quad u=\rho_{i}.\]
The behavior of the random walk on \(\mathbb{F}\) is similar to that on \(\mathbb{T}\) except when it is at a root. Let
\[F:=\{u\in\mathbb{F},\exists n\geq 0,X_{n}^{\mathbb{F}}=u\}\]
be the sub-forest made up of the vertices visited by the random walk. As there should be no ambiguity from the context hereafter, we will simply denote the walk by \((X_{n})_{n\geq 0}\) and still denote by \(\mathcal{R}_{n}\) the set of vertices of \(\mathbb{F}\) visited before time \(n\).
**Proposition 2.1**: _Suppose \(\nu\) is regularly varying with order \(\kappa\) for some \(\kappa\in(1,2)\). Then the following convergence holds in law under \(P\):_
\[\frac{a(n)}{n}\bigg{(}(|X_{[nt]}|)_{t\geq 0},\mathcal{R}_{n}\bigg{)}\stackrel{{ n\to\infty}}{{\Longrightarrow}}C^{*}\bigg{(}(H_{t})_{t\geq 0},\mathcal{T}_{(H_{t})_{0\leq t\leq 1}}\bigg{)}\]
_where \((H_{t})_{t\geq 0}\) is the continuous-time height process of a strictly stable spectrally positive Levy process of index \(\kappa\). The convergence holds in law for Skorokhod topology on cadlag functions and the pointed Gromov-Hausdorff topology on metric spaces._
Theorem 1.1 then follows from Proposition 2.1 by Section 5 of [3], by restricting the walk to forests composed of excursions over a certain height. Note that, in [3], the only requirement on \(\nu\) used in the proof is \(\psi((\kappa+1)/2)=m^{(1-\kappa)/2}<1\), which is obviously satisfied when \(\kappa\in(1,2)\).
The proof of Proposition 2.1 is exactly the same as that in [3]. First, the trace can be seen as a multitype Galton-Watson tree/forest. Then \((|X_{n}|,F)\) is associated with two leafed Galton-Watson forests with edge lengths, denoted \(F^{R}\) and \(F^{X}\). The height process of \(F^{X}\) is equal to \((|X_{n}|)_{n\geq 0}\) and the height process of \(F^{R}\) is equal to that of \(F\). Next, we state a result on the associated height processes of such forests, under certain hypotheses. Finally, we conclude Proposition 2.1, provided \(F^{R}\) and \(F^{X}\) satisfy the hypotheses.
### Reduction of trees
For every \(u\in F\) which is not a root, we denote by \(\beta(u)\) the edge local time of \(u\):

\[\beta(u):=\#\{n\geq 0:X_{n}=u^{*},X_{n+1}=u\},\]

which is the number of visits of \(X_{n}\) from \(u^{*}\) to \(u\). If \(u\) is a root, set \(\beta(u)=1\). Since in our case \((X_{n})_{n\geq 0}\) is recurrent, the following is given in [1]:
**Lemma 2.2**: _[_1_, Lemma 3.1]_ _Under the annealed law \(P\), the marked forest \((F,\beta)\) is a multitype Galton-Watson forest with roots of initial type 1._
Leafed Galton-Watson forests with edge lengths are multitype Galton-Watson forests in which every vertex has one of two types, \(0\) and \(1\), and only vertices of type \(1\) may give progeny. We denote such a forest by a triplet \((F,e,l)\) where, for \(u\in F\), \(e(u)\in\{0,1\}\) stands for the type of \(u\) and \(l(u)\) is the length of the edge joining \(u\) with its parent. The details of the construction can be found in Section 2.1 of [3].
We can define its associated weighted height process \(H^{l}_{F}\): if \(u(n)\) is the \(n\)-th vertex of \(F\) in the lexicographical order, then

\[H^{l}_{F}(n)=\sum_{\rho\leq v\leq u(n)}l(v).\]
Then we will build from \((F,\beta)\) two leafed Galton-Watson forests with edge lengths. Before this, we define the notion of the optional line of a given type. Let \(\mathcal{B}^{1}_{u}\) be the set of vertices descending from \(u\) in \(F\) having no ancestor of type 1 since \(u\). Formally,
\[\mathcal{B}^{1}_{u}:=\{v\in F:u<v,\;\beta(w)\neq 1,\;\forall u<w<v\}.\]
Also, denote by \(\mathcal{L}^{1}_{u}\) the set of vertices of type 1 descending from \(u\) in \(F\) and having no ancestor of type 1 since \(u\). Formally,
\[\mathcal{L}^{1}_{u}:=\{v\in F:u<v,\;\beta(v)=1,\;\beta(w)\neq 1,\;\forall u<w<v\}.\]
Denote by \(L^{1}_{u}\) (resp. \(B^{1}_{u}\)) the cardinal of \(\mathcal{L}^{1}_{u}\) (resp. \(\mathcal{B}^{1}_{u}\)).
Now, let us briefly introduce the forests \(F^{R}\) and \(F^{X}\). Each component \(T^{R}_{k}\) (resp. \(T^{X}_{k}\)) of \(F^{R}\) (resp. \(F^{X}\)) is built from \(T_{k}\) of \(F\). We give the construction without further motivation. \(T^{R}_{k}\) is built as follows:
1. **Initialisation** Generation 0 of \(T^{R}_{k}\) is made up of \(\rho_{k}\) of \(T_{k}\). Set \(l(\rho_{k})=0\) and \(e(\rho_{k})=1\).
2. **Induction** Suppose that generation \(n\) of \(T^{R}_{k}\) has been built. If generation \(n\) is empty, then generation \(n+1\) is empty. Otherwise, by construction, each \(u\in T^{R}_{k}\) with \(|u|=n\) and \(e(u)=1\) was associated with a vertex \(u^{\prime}\) in \(T_{k}\). Every time we take in the lexicographical order a vertex \(v^{\prime}\in T_{k}\) such that \(v^{\prime}\in\mathcal{B}^{1}_{u^{\prime}}\), we add a vertex \(v\) as a child of \(u\) in \(T^{R}_{k}\) correspondingly, thus forming the progeny of \(u\). Set \(e(v)=1\) if \(\beta(v^{\prime})\)=1 and \(e(v)=0\) otherwise. For each vertex \(v\in T^{R}_{k}\), set \(l(v)=|v^{\prime}|-|u^{\prime}|\).
Next, we continue to build \(F^{X}\) on the basis of \(F^{R}\). For a vertex \(v\) of type 0 which is visited by \((X_{n})_{n\geq 0}\) \(k_{v}\) times, add \(k_{v}-1\) siblings and attach them to the parent of \(v\). They all have type 0 and the same length as \(v\). For a vertex \(u\) of type 1 which has been visited by \((X_{n})_{n\geq 0}\) from its children \(k_{u}\) times, attach \(k_{u}\) vertices of type 0 and length 0 to \(u\). Since each vertex in the forest corresponds to one step of \((X_{n})_{n\geq 0}\), we can reorder the vertices in each set of siblings according to the time of the visit of the steps to which they correspond. Thus the forest \(F^{X}\) is built.
### Convergence of the height process associated with \(F^{R}\) and \(F^{X}\)
Let \((F,e,l)\) be a leafed Galton-Watson forest. For each vertex \(u\in F\), denote by \(\nu(u)\) (resp. \(\nu^{1}(u)\)) the total number of children (resp. the number of children of type 1) of \(u\) in \(F\). Denote by \(F^{1}\) the forest \(F\) limited to its vertices of type 1. We make the following hypotheses on the reproduction law:
\[(\mathbf{H}_{1})\begin{cases}(i)\ E[\nu^{1}]=1;\\ (ii)\ \exists\epsilon>0\ s.t.\ E[\nu^{1+\epsilon}]<\infty;\\ (iii)\ \exists C_{0}>0\ s.t.\ P(\nu^{1}>x)\sim C_{0}l(x)x^{-\kappa};\\ (iv)\ \exists r>1\ s.t.\ E[\sum_{|u|=1}r^{l(u)}]<\infty.\end{cases}\]
Let \(m\) be the expectation of the reproduction and \(\mu\) be the expected sum of the lengths of type-1 children, i.e.

\[m:=E[\sum_{|u|=1}1]=E[\nu];\quad\mu:=E[\sum_{|u|=1,e(u)=1}l(u)].\]
Also, define for all \(n\in\mathbb{N}\),

\[H^{1}_{F}(n):=|u^{1}(n)|,\quad H^{l}_{F}(n):=\sum_{\rho\leqslant v\leqslant u(n)}l(v),\]

where \(u(n)\) (resp. \(u^{1}(n)\)) is the \(n\)-th vertex of \(F\) (resp. \(F^{1}\)) taken in the lexicographical order. The next theorem can be proved by following [3, Theorem 2] word for word, provided \((\mathbf{H}_{1})\) is satisfied.
**Theorem 2.3**.: _Let \((F,e,l)\) be a leafed Galton-Watson forest with edge lengths, with offspring distribution satisfying hypothesis \((H_{1})\). The following convergence holds in law:_

\[\frac{a(n)}{n}\bigg{(}\Big{(}H^{l}_{F}([ns])\Big{)}_{s\geqslant 0},\Big{(}H^{1}_{F}([ns])\Big{)}_{s\geqslant 0}\bigg{)}\stackrel{{ n\to\infty}}{{\Longrightarrow}}\frac{1}{(C_{0}|\Gamma(1-\kappa)|)^{\frac{1}{\kappa}}}\bigg{(}(\mu H_{\frac{s}{m}})_{s\geqslant 0},(H_{s})_{s\geqslant 0}\bigg{)},\]

_where \(C_{0}\) is the constant defined in Proposition 4.5 and \(H\) is the continuous-time height process of a spectrally positive Levy process \(Y\) with Laplace transform \(E[\exp(-\lambda Y_{t})]=\exp(t\lambda^{\kappa})\) for \(\lambda,t>0\)._
Then Proposition 2.1 can be proved by the same deduction as in Section 2.3 of [3], based on Theorem 2.3. In summary, Theorem 1.1 follows once we check that hypothesis \((\mathbf{H}_{1})\) is satisfied by \(F^{R}\) and \(F^{X}\), which will be done in Section 4. Before the check, we prepare some preliminaries about the law of the types of vertices.
## 3 Change of measure
### Quenched law of \(\beta\)
For \(u\in F\), we can easily compute that under \(P^{T}\), conditionally on \(\beta(u)=k\), the process \((\beta(v))_{v\in c(u)}\) has a negative multinomial distribution, that is

\[P^{T}((\beta(v))_{v\in c(u)}=(l_{v})_{v\in c(u)}|\beta(u)=k)=\Big{(}\frac{m}{m+\nu(u)}\Big{)}^{k}\prod_{v\in c(u)}\Big{(}\frac{1}{m+\nu(u)}\Big{)}^{l_{v}}\frac{(k-1+\sum_{v\in c(u)}l_{v})!}{(k-1)!\prod_{v\in c(u)}l_{v}!}.\]
The formula can be understood as follows. When the random walk \(X\) arrives at \(u\), a trial is made which succeeds with probability \(\frac{\nu(u)}{m+\nu(u)}\). If the result is a success, \(X\) moves to one of the children of \(u\), each with the same probability \(\frac{1}{m+\nu(u)}\). Since \(X\) is recurrent, it must come back to \(u\) again and continue to choose the next vertex. The same trial is made again and \(X\) continues to move until the result is a failure, namely \(X\) moves back to \(u^{*}\). The procedure repeats until the \(k\)-th failure.
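A short simulation of this trial procedure (an illustrative sketch; the sampler is exactly the geometric-trial description above, so the output has the stated negative multinomial law):

```python
import random

def child_local_times(k, nu, m):
    # Sample (beta(v))_{v in c(u)} given beta(u) = k: each step enters a
    # uniformly chosen child with probability 1/(m + nu) each, or returns
    # to the parent with probability m/(m + nu); stop at the k-th return.
    counts = [0] * nu
    failures = 0
    while failures < k:
        if random.random() < m / (m + nu):
            failures += 1                       # the walk moves back to u*
        else:
            counts[random.randrange(nu)] += 1   # the walk enters a child
    return counts
```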
For the multitype Galton-Watson tree \((T,\beta)\), we denote by \(P_{i}\) the law of its distribution with initial type \(i\). It is calculated in [1] from the quenched distribution of \(\beta\) that

\[m_{ij}:=E_{i}[\sum_{|u|=1}1_{\{\beta(u)=j\}}]=\frac{(i+j-1)!}{(i-1)!j!}\frac{m^{i+1}}{(m+1)^{i+j}}.\]
The left and right eigenvectors of the matrix \((m_{ij})_{ij}\) associated with the eigenvalue \(1\) are the \((a_{i})_{i\geq 1}\) and \((b_{i})_{i\geq 1}\), where \(a_{i}:=(m-1)m^{-i}\) and \(b_{i}:=(1-m^{-1})i\). They are normalized such that \(\sum_{i\geq 1}a_{i}=1\) and \(\sum_{i\geq 1}a_{i}b_{i}=1\).
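For instance, the normalizations can be checked directly from the geometric series \(\sum_{i\geq 1}m^{-i}=\frac{1}{m-1}\) and \(\sum_{i\geq 1}im^{-i}=\frac{m}{(m-1)^{2}}\):

\[\sum_{i\geq 1}a_{i}=(m-1)\sum_{i\geq 1}m^{-i}=1,\qquad\sum_{i\geq 1}a_{i}b_{i}=\frac{(m-1)^{2}}{m}\sum_{i\geq 1}im^{-i}=1.\]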
### Change of measure on (T,\(\beta\))
Let \(Z_{n}:=\sum_{|u|=n}\beta(u)\) be the multitype additive martingale of \((T,\beta)\). For every \(i\geq 1\), \((Z_{n}/i)_{n\geq 0}\) is a martingale under \(P_{i}\) with expectation \(1\), with respect to the filtration generated by \((u,\beta(u))_{u\in T,|u|\leq n}\), because \((b_{i})_{i\geq 1}\) is the right eigenvector of \((m_{ij})_{ij}\) associated to the eigenvalue \(1\).
A new law \(\hat{P}_{i}\) is introduced on marked trees with a spine via the multitype additive martingale \(Z_{n}\). Let \(\hat{\beta}=(\hat{\beta}_{i})_{i\geq 1}\) be the probability measures with Radon-Nikodym derivative \(Z_{1}\) with respect to \(\beta\). We construct \((T,\beta,(\omega_{n})_{n\geq 0})\) under \(\hat{P}_{i}\) as follows:
1. **Initialisation** Generation \(0\) of T is made up of the root \(\rho\) with type \(i\). Also set \(\omega_{0}=\rho\).
2. **Induction** Let \(n\geqslant 0\). Suppose the tree and \(\omega_{k}\) have been built up to generation \(n\). The vertex \(\omega_{n}\) has reproduction law \(\hat{\beta}_{\beta(\omega_{n})}\). Other vertices \(u\) of generation \(n\) reproduce according to \(\beta_{\beta(u)}\). Then choose a vertex as \(\omega_{n+1}\) among the children \(u\) of \(\omega_{n}\), each with probability \(\beta(u)/(\sum_{v\in c(\omega_{n})}\beta(v))\).
We denote by \(\hat{E}_{i}\) the expectation associated with \(\hat{P}_{i}\). The new measure plays the role of the spinal decomposition for branching random walks. \(\omega_{n}\) is called a vertex of the spine. The following proposition is given in [3].
**Proposition 3.1**: _[_3_, Proposition 3]_ _Under \(\hat{P}_{i}\), the process \((\beta(\omega_{k}))_{k\geqslant 0}\) is a Markov chain on \(\mathbb{N}\) with initial state \(i\), and with transition probabilities \((\hat{p}_{jk})_{j,k\geqslant 1}\) given by \(\hat{p}_{jk}=m_{jk}b_{k}/b_{j}\)._

Following [3], a second construction is also used: under a law \(\hat{\mathbb{P}}\), a tree \(\tilde{T}\) with a spine \((\tilde{\omega}_{k})_{k\geqslant 0}\) is built, in which the vertices of the spine reproduce according to the size-biased offspring law, \(\tilde{\omega}_{k+1}\) is chosen uniformly among the children of \(\tilde{\omega}_{k}\), and all other vertices reproduce according to \(\nu\).
Now we run a series of random walks on \(\mathbb{T}\) under \(\hat{\mathbb{P}}\). With any vertex \(\tilde{\omega}_{i}\) of the spine, we associate two i.i.d. truncated nearest-neighbour random walks with the same law, denoted \((X_{n}^{1,\tilde{\omega}_{i}})_{n\geqslant 0}\) and \((X_{n}^{2,\tilde{\omega}_{i}})_{n\geqslant 0}\) respectively. Each of them is defined as follows. It starts at \(\tilde{\omega}_{i}\). If it is at \(u\in\mathbb{T}\), then it jumps to each of the children of \(u\) with probability \(\frac{1}{m+\nu(u)}\) and to the parent of \(u\) with probability \(\frac{m}{m+\nu(u)}\). If it reaches \(\tilde{\omega}_{i-1}\), it is killed at once.
Let, for each \(u\in\mathbb{T}\),

\[\tilde{\beta}_{i}^{j}(u):=\#\{n\geqslant 0:X_{n}^{j,\tilde{\omega}_{i}}=u^{*},X_{n+1}^{j,\tilde{\omega}_{i}}=u\},\quad i\geqslant 0,\ j=1,2,\]

be the edge local time at \(u\) of the walks launched at \(\tilde{\omega}_{i}\), and let

\[\tilde{\beta}^{j}(u):=\sum_{i=1}^{\infty}\tilde{\beta}_{i}^{j}(u),\quad j=1,2.\]

Finally let

\[\tilde{\beta}(u):=\tilde{\beta}^{1}(u)+\tilde{\beta}^{2}(u)+1_{\{u\in(\tilde{\omega}_{k})_{k\geqslant 0}\}}.\]
We set \(\tilde{T}:=\{u\in\mathbb{T},\tilde{\beta}(u)>0\}\). It is proved in [3] that the two changes of measure are indeed the same.
Proposition 3.3.: _[_3_, Proposition 5]_ _Under \(\hat{P}_{1}\), \((T,\beta,(\omega_{k})_{k\geqslant 0})\) has the same law as \((\tilde{T},\tilde{\beta},(\tilde{\omega}_{k})_{k\geqslant 0})\) with \((\tilde{T},(\tilde{\omega}_{k})_{k\geqslant 0})\) built under \(\hat{\mathbb{P}}\)._
## 4 Proof of Proposition 2.1
The key to the proof is to ensure that \(F^{R}\) and \(F^{X}\) satisfy \((\mathbf{H}_{1})\). Note that, by the construction, for both \(F^{R}\) and \(F^{X}\), \(\nu^{1}(u)\) has the law of \(L^{1}\), the cardinal of the optional line \(\mathcal{L}^{1}\), under \(P_{1}\). The total offspring distribution of \(F^{R}\) has the law of \(B^{1}\), the cardinal of the optional line \(\mathcal{B}^{1}\). The total offspring distribution of \(F^{X}\) has the law of \(\sum_{u\in\mathcal{B}^{1}}2\beta(u)\), the total time spent in \(\mathcal{B}^{1}\) during one excursion.
Therefore, it is enough to verify that \(E_{1}[L^{1}]=1\) to prove \((\mathbf{H}_{1})\)(i) for \(F^{R}\) and \(F^{X}\), and to verify that \(E_{1}[(\sum_{u\in\mathcal{B}^{1}}\beta(u))^{1+\epsilon}]<\infty\) to prove \((\mathbf{H}_{1})\)(ii) for \(F^{R}\) and \(F^{X}\). For \((\mathbf{H}_{1})\)(iv), notice that each vertex \(v^{R}\) in the first generation of \(F^{R}\), whose set is denoted \(\nu^{R}\), corresponds to a vertex \(v\in\mathcal{B}^{1}\), and that \(l(v^{R})=|v|\). Moreover, the set of vertices of the first generation of \(F^{X}\), denoted \(\nu^{X}\), is made up of the corresponding vertices \(v^{R}\) of \(F^{R}\), each having replicated itself \(2\beta(v)-1\) times. Hence,

\[E[\sum_{u\in\nu^{R}}r^{l(u)}]=E_{1}[\sum_{u\in\mathcal{B}^{1}}r^{|u|}]\leqslant E_{1}[\sum_{u\in\mathcal{B}^{1}}2\beta(u)r^{|u|}]=E[\sum_{u\in\nu^{X}}r^{l(u)}].\]
Therefore, we only need to verify that there exists an \(r>1\) such that \(E_{1}[\sum_{u\in\mathcal{B}^{1}}2\beta(u)r^{|u|}]<\infty\).
Finally, for \((\mathbf{H}_{1})\)(iii), we need to check that there exists a positive constant \(C_{0}\) such that \(P_{1}(L^{1}>x)\sim C_{0}l(x)x^{-\kappa}\). This is the core of the article as well as the main contribution compared to the references. The proof is quite different from that in [3], since in our case the regularly varying tail of \(L^{1}\) results from the tail behavior of \(\nu\), instead of the behavior of \(\mathcal{L}^{1}\).
### Hypotheses \((\mathbf{H}_{1})\)(i), (ii) and (iv)
Let

\[\hat{\tau}_{1}:=\min\{k\geqslant 1:\beta(\omega_{k})=1\}\]

be the first non-null hitting time of \(1\) by the Markov chain \((\beta(\omega_{k}))_{k\geqslant 0}\).
**Lemma 4.1**.: _For any \(i\geqslant 1\), we have,_
\[\mathsf{E}_{i}[\mathrm{L}^{1}]=i.\]
_In particular, \(\mathsf{E}_{1}[\mathrm{L}^{1}]=1\), and the reproduction law of both \(\mathsf{F}^{R}\) and \(\mathsf{F}^{X}\) satisfies conditions \((\mathbf{H}_{1})\)(i)._
Proof.: The proof is the same as that in [3]. For the reader's convenience, we spell out the details. For any \(i\geqslant 1\), we have

\[E_{i}[L^{1}] =E_{i}[\sum_{u\in\mathcal{L}^{1}}1]=E_{i}[\sum_{u\in\mathcal{L}^{1}}\beta(u)]=\sum_{k\geqslant 1}E_{i}[\sum_{|u|=k}\beta(u)1_{\{\beta(u_{1}),\ldots,\beta(u_{k-1})\neq 1,\beta(u)=1\}}]=\sum_{k\geqslant 1}i\hat{E}_{i}[1_{\{\beta(\omega_{1}),\ldots,\beta(\omega_{k-1})\neq 1,\beta(\omega_{k})=1\}}]=i\sum_{k\geqslant 1}\hat{E}_{i}[1_{\{\hat{\tau}_{1}=k\}}]=i,\]

where we used the multitype many-to-one lemma in the penultimate equality.
where we used the multitype many-to-one lemma in the last but one equation.
**Lemma 4.2**.: _For every \(\alpha>0\), there exists \(C_{\alpha}>0\) such that for any \(r\in(1,m^{\alpha})\) and \(i\geqslant 1\),_

\[\hat{E}_{i}[\sum_{k=1}^{\hat{\tau}_{1}}(\beta(\omega_{k}))^{\alpha}r^{k}]\leqslant C_{\alpha}i^{\alpha}.\]

_As a consequence, for any \(p>0\) there exists a constant \(C_{p}>0\) such that_

\[\hat{E}_{i}[\hat{\tau}_{1}^{p}]\leqslant C_{p}\ln^{p}(1+i),\]

_and the laws of \(F^{R}\) and \(F^{X}\) satisfy \((\mathbf{H}_{1})\)(iv)._
Proof.: The proof is similar to that of Lemma 10 in [3]. Note that here we can take \(\alpha\) to be any positive real number, because in the proof of Lemma 25 in [3],
\[\mathsf{E}[\sum_{|\mathfrak{u}|=1}\mathsf{e}^{-(\alpha+1)\mathsf{V}(\mathfrak{u })}]=\mathsf{E}[\sum_{|\mathfrak{u}|=1}\mathfrak{m}^{-(\alpha+1)}]=\mathfrak{ m}^{-\alpha}<1\]
for any \(\alpha>0\).
Now we continue with \((\mathbf{H}_{1})\)(ii). Denote by \(\tilde{B}^{1}\) the variable \(\sum_{u\in\mathcal{B}^{1}}\beta(u)\) under \(P_{1}\). We have the following lemma:
**Lemma 4.3**.: _[_3_, Lemma 11]_ _For every \(\alpha\in(0,\kappa-1)\), \(\epsilon>0\), there exists a constant \(C^{\prime}_{\alpha+\epsilon}>0\) such that for any \(\mathfrak{i}\geqslant 1\),_
\[\mathsf{E}_{\mathfrak{i}}[(\tilde{\mathsf{B}}^{1})^{1+\alpha}]\leqslant C^{ \prime}_{\alpha+\epsilon}\mathfrak{i}^{1+\alpha+\epsilon}.\]
_As a consequence, the law of \(\mathsf{F}^{\mathsf{R}}\) and \(\mathsf{F}^{\mathsf{X}}\) satisfy hypothesis \((\mathbf{H}_{1})\)(ii)._
### Regular varying tail of \(\mathsf{L}^{1}\)
Recall that we have assumed that \(\kappa\in(1,2)\) and that the function \(l(x)\) is slowly varying. Instead of proving the regular variation of the tail of \(L^{1}\) under \(P\), we prove it under \(\hat{P}\) with the help of the following lemma.
**Lemma 4.4**.: _[_3_, Lemma 13]_ _As \(\mathsf{x}\to\infty\), \(\mathsf{P}_{1}(\mathsf{L}^{1}>\mathsf{x})\sim\mathsf{x}^{-\kappa}\mathfrak{l}( \mathsf{x})\) if and only if as \(\mathsf{x}\to\infty\), \(\hat{\mathsf{P}}_{1}(\mathsf{L}^{1}>\mathsf{x})\sim\frac{\kappa}{\kappa-1} \mathsf{x}^{-(\kappa-1)}\mathfrak{l}(\mathsf{x})\)._
For \(\kappa=2\), we can prove instead that

\[E_{1}[(L^{1})^{2}1_{\{L^{1}\leqslant x\}}]=\sum_{k\geqslant 1}E_{1}[\sum_{|u|=k}1_{\{u\in\mathcal{L}^{1}\}}E_{1}[L^{1}1_{\{L^{1}\leqslant x\}}|\mathcal{F}_{k}]]=\hat{E}_{1}[L^{1}1_{\{L^{1}\leqslant x\}}].\]
We now state the main proposition of this subsection. The proof also works for the case \(\kappa=2\) with a little modification.
**Proposition 4.5**.: _We have_
\[\hat{\mathsf{P}}_{1}(\mathsf{L}^{1}>\mathsf{x})\sim C_{\kappa}\mathsf{x}^{-( \kappa-1)}\mathfrak{l}(\mathsf{x}),\]
_where \(C_{\kappa}=\frac{2\Gamma(\kappa)\kappa}{(m-1)(m+1)^{\kappa-1}}\). As a result, \((\mathbf{H}_{1})\)(iii) holds with \(C_{0}=\frac{2\Gamma(\kappa)}{(m-1)(m+1)^{\kappa-1}}\)._
Recall that under \(\hat{P}_{1}\), \((\beta(\omega_{k}))_{k\geqslant 0}\) is nothing but the Markov chain with the transition probabilities given in Proposition 3.1. In order to obtain the tail behavior of \(L^{1}\), we need to use the second construction mentioned in Subsection 3.2, in view of Proposition 3.3. Under the measure \(\hat{\mathbb{P}}\), \(\omega_{k}\) is chosen uniformly among the children of \(\omega_{k-1}\). Let \(\Omega(\omega_{k})\) be the set of siblings of \(\omega_{k}\).
If \(\beta(\omega_{k-1})\) and \(\beta(\omega_{k})\) have been determined, it's easy to see that the distribution of \((\beta(u))_{u\in\Omega(\omega_{k})}\) doesn't depend on anything else.
In detail, conditionally on \(\beta(\omega_{k-1})\) and \(\beta(\omega_{k})\), there are \(\beta(\omega_{k-1})-1\) visits of the directed edge \((\omega_{k-2},\omega_{k-1})\) by the random walks \(X^{1,\tilde{\omega}_{l}}\) and \(X^{2,\tilde{\omega}_{l}}\) with \(0\leqslant l\leqslant k-2\). (1 is subtracted as \(\omega_{k-1}\) is on the spine, so an additional time is counted.) Moreover, there are two other random walks \(X^{1,\omega_{k-1}}_{n}\) and \(X^{2,\omega_{k-1}}_{n}\) which start from \(\omega_{k-1}\) and are killed when hitting \(\omega_{k-2}\). In all, there are \(\beta(\omega_{k-1})+1\) trials, each of which fails with probability \(\frac{m}{\nu(\omega_{k-1})+m}\). In each trial, if it fails, namely the parent is chosen, it stops immediately. If not, one of the children of \(\omega_{k-1}\) is chosen with probability \(\frac{1}{\nu(\omega_{k-1})+m}\) and we repeat the procedure until the parent is chosen.
Since \(\omega_{k}\) is chosen uniformly from the children of \(\omega_{k-1}\) and its type is known, without loss of generality we can assume it is the first child of \(\omega_{k-1}\), and it has been visited \(\beta(\omega_{k})-1\) times in all. Therefore, the conditional distribution of the types of the other children of \(\omega_{k-1}\), namely \((\beta^{k}_{2},\beta^{k}_{3},\ldots,\beta^{k}_{\nu(\omega_{k-1})})\), is as follows.
\[\hat{\mathbb{P}}((\beta^{k}_{2},\beta^{k}_{3},\ldots\beta^{k}_{ \nu(\omega_{k-1})})=(b_{2},b_{3},\ldots b_{\nu(\omega_{k-1})})|\beta(\omega_{ k-1}),\beta(\omega_{k}))\] \[= \frac{(\beta(\omega_{k-1})+\beta(\omega_{k})+\sum_{j=2}^{\nu( \omega_{k-1})}b_{j}-1)!}{(\beta(\omega_{k-1}))!(\beta(\omega_{k})-1)!\prod_{j =2}^{\nu(\omega_{k-1})}(b_{j})!}\frac{m^{\beta(\omega_{k-1})+1}}{(\nu(\omega _{k-1})+m)^{\beta(\omega_{k-1})+\beta(\omega_{k})+\sum_{j=2}^{\nu(\omega_{k-1 })}b_{j}}}\] \[(\frac{(\beta(\omega_{k-1})+\beta(\omega_{k})-1)!}{(\beta(\omega _{k-1}))!(\beta(\omega_{k})-1)!}\frac{(m)^{\beta(\omega_{k-1})+1}}{(1+m)^{ \beta(\omega_{k-1})+\beta(\omega_{k})}})^{-1}\] \[= \frac{(\beta(\omega_{k-1})+\beta(\omega_{k})+\sum_{j=2}^{\nu( \omega_{k-1})}b_{j}-1)!}{(\beta(\omega_{k-1})+\beta(\omega_{k})-1)!\prod_{j=2 }^{\nu(\omega_{k-1})}(b_{j})!}\frac{(m+1)^{\beta(\omega_{k-1})+\beta(\omega_{ k})}}{(\nu(\omega_{k-1})+m)^{\beta(\omega_{k-1})+\beta(\omega_{k})+\sum_{j=2}^{ \nu(\omega_{k-1})}b_{j}}}.\]
This is exactly the probability that \((\beta^{k}_{2},\beta^{k}_{3},\ldots,\beta^{k}_{\nu(\omega_{k-1})})=(b_{2},b_{3},\ldots,b_{\nu(\omega_{k-1})})\) when \(\beta:=\beta(\omega_{k-1})+\beta(\omega_{k})\) trials are carried out and the failure probability is \(\frac{m+1}{\nu(\omega_{k-1})+m}\). We denote this probability measure by \(\tilde{P}_{\beta}\), that is, \(\tilde{P}_{\beta}(\cdot):=\hat{P}(\cdot|\beta(\omega_{k-1})+\beta(\omega_{k})=\beta)\) (\(k\) and \(\beta(\omega_{k-1})\), \(\beta(\omega_{k})\) can be any positive integers without affecting the measure). We may and will assume that \(k=1\), so that \((u_{j})_{2\leqslant j\leqslant\nu(\rho)}\) are the children of the root except the one on the spine.
Define the random variable \(L^{1}_{0}:=\sum_{j=2}^{\nu(\rho)}L^{1}_{u_{j}}\), where \(L^{1}_{u_{j}}\) is the number of type-1 children of the root in the tree \(T^{R}_{u_{j}}\) (\(T^{R}_{u_{j}}\) is the reduced tree of \(T_{u_{j}}\)). The \(L^{1}_{u_{j}}\) are independent of each other, and each only depends on the type of \(u_{j}\), which is the edge local time of \(u_{j}\) in the \(\beta\) trials. We claim that the following proposition holds.
**Proposition 4.6**.: \(L^{1}_{0}\) _is regularly varying with tail \(\frac{C^{\prime}_{\kappa}\beta}{m(m+1)^{\kappa-1}}l(x)x^{-(\kappa-1)}\) when \(\beta\) trials are made, namely_
\[\tilde{P}_{\beta}(L^{1}_{0}>x)\sim\frac{C^{\prime}_{\kappa}\beta}{m(m+1)^{ \kappa-1}}l(x)x^{-(\kappa-1)},\]
_where \(C^{\prime}_{\kappa}=\frac{\Gamma(\kappa)\kappa}{\kappa-1}\)._
**Remark 4.7**.: When \(\kappa=2\), we can only prove \(\tilde{E}_{\beta}(L^{1}_{0}1_{\{L^{1}_{0}\leqslant x\}})\sim C^{\prime}_{\kappa}\beta m^{-1}(m+1)^{-1}l^{\prime}(x)\), where \(l^{\prime}(x)=\int_{0}^{x}y^{-1}l(y)dy\). However, it does not matter, since we only need to verify that the tail \(\mu(x)=E_{1}[(L^{1}-1)^{2}1_{\{L^{1}\leqslant x\}}]\) is regularly varying.
In order to prove Proposition 4.6, we need a fine understanding of the behaviour of \((L^{1}_{u_{j}})_{2\leqslant j\leqslant\nu(\rho)}\). Under \(\tilde{P}_{\beta}\), for \(u\) a child of \(\rho\) not on the spine, \(\beta(u)\) has the law of the sum of \(\beta\) geometric random variables of parameter \(\frac{1}{m+\nu(\rho)}\). Denote by \(N+1\) the number of children of the root \(\rho\) (so \(N+1=\nu(\rho)\) follows the offspring law) and by \(\Omega(u_{1})\) the siblings of \(u_{1}\). Let \(V:=\sum_{j=2}^{N+1}\beta_{j}\) be the sum of the edge local times of the siblings of \(u_{1}\), where \(\beta_{j}:=\beta(u_{j})\). The next two lemmas show that the regular variation comes from \(V\) instead of \((L^{1}_{u_{j}})_{2\leqslant j\leqslant N+1}\), which is the essential difference from [3].
**Lemma 4.8**.: _[_3_, Lemma 14]_ _Let \(\alpha\in(0,\kappa-1)\) and \(i\geqslant 1\). There exists a constant \(C_{\alpha}>0\) such that_
\[E_{i}[(L^{1})^{1+\alpha}]\leqslant C_{\alpha}i^{1+\alpha}.\]
From Lemma 4.8, we know that for \(u\in\Omega(u_{1})\), \(L^{1}_{u}\) has a finite moment of order \(1+\alpha\) for \(\alpha\in(0,\kappa-1)\), since a vertex \(u\) not on the spine reproduces with the probability measure \(P_{\beta(u)}\).
**Lemma 4.9**.: _For \(V\) defined above, we have_
\[\tilde{P}_{\beta}(V>x)\sim\frac{C^{\prime}_{\kappa}\beta}{m(m+1)^{\kappa-1}}l( x)x^{-(\kappa-1)},\]
_where \(C^{\prime}_{\kappa}=\frac{\Gamma(\kappa)\kappa}{\kappa-1}\)._
Proof.: First, as \(\nu(\rho)\) is regularly varying and \(N=\nu(\rho)-1\), by the Karamata theorem and the many-to-one formula, we have
\[\hat{\mathbb{P}}(N>x)=\mathbb{E}[\frac{\nu}{m}1_{\{N>x\}}]\sim\frac{\kappa}{m( \kappa-1)}x^{-(\kappa-1)}l(x). \tag{2}\]
The \(\beta\) trials are taken independently and \(V=\sum_{k=1}^{\beta}V^{k}\), where \(V^{k}\) is the sum of the edge local times in the \(k\)-th trial. Hence we only need to prove that
\[\tilde{P}_{1}(V>x)\sim\frac{C^{\prime}_{\kappa}}{m}(\frac{1}{m+1})^{\kappa-1}l (x)x^{-(\kappa-1)}.\]
In fact, since the \(V^{k}\) are i.i.d. random variables with regularly varying tails of the same order, by taking Laplace transforms we can get that their sum is also regularly varying with the tail we want.
In one trial, when \(N\) is fixed, \(V\) follows the geometric distribution with failure probability \(\frac{m+1}{m+N+1}\), namely,
\[\tilde{P}_{1}(V>n|N)=(1-\frac{m+1}{m+N+1})^{n+1}.\]
Hence, by averaging over \(N\), we get that when \(n\) is large enough,
\[\tilde{P}_{1}(V>n)=\hat{\mathbb{E}}[(\tilde{P}_{1}(V>n|N)]\] \[= -\int_{x=0}^{\infty}(1-\frac{m+1}{m+x+1})^{n+1}d\hat{P}(N>x)\] \[= \int_{x=0}^{\infty}(n+1)(1-\frac{m+1}{m+x+1})^{n}\frac{m+1}{(m+x+ 1)^{2}}\hat{P}(N>x)dx\] \[\sim \frac{\kappa(m+1)}{m(\kappa-1)}\int_{x=0}^{\infty}n(1-\frac{m+1} {m+x+1})^{n}(l(x)x^{-\kappa-1}\wedge 1)dx,\]
where we apply (2) in the last line.
Then taking \(\frac{m+x+1}{m+1}\) as the new variable and noting that \(l(x)\) is slowly varying, we have
\[\frac{\kappa(m+1)}{m(\kappa-1)}\int_{x=0}^{\infty}n(1-\frac{m+1}{ m+x+1})^{n}(l(x)x^{-\kappa-1}\wedge 1)dx\] \[\sim \frac{\kappa}{m(\kappa-1)}(m+1)^{-(\kappa-1)}\int_{x=1}^{\infty }n(1-\frac{1}{x})^{n}l(x)x^{-\kappa-1}dx\] \[\sim \frac{\kappa}{m(\kappa-1)}(m+1)^{-(\kappa-1)}\int_{x=0}^{1}n(1-x )^{n}l(\frac{1}{x})x^{\kappa-1}dx.\]
Then we evaluate the tail of \(\int_{x=0}^{1}n(1-x)^{n}l(\frac{1}{x})x^{\kappa-1}dx\).
\[\frac{n^{\kappa-1}}{l(n)}\int_{x=0}^{1}n(1-x)^{n}l(\frac{1}{x})x^ {\kappa-1}dx= \int_{x=0}^{1}(1-x)^{n}\frac{l(\frac{1}{x})}{l(n)}(nx)^{\kappa-1 }dnx\] \[= \int_{x=0}^{n}(1-\frac{x}{n})^{n}\frac{l(\frac{n}{x})}{l(n)}x^{ \kappa-1}dx\]
Note that \((1-\frac{x}{n})^{n}\leqslant e^{-x}\) and
\[\frac{l(\frac{n}{x})}{l(n)}\leqslant C_{1}(x^{\epsilon}\vee x^{-\epsilon})\]
from the representation of slowly varying functions [2, Section 1.3], where \(C_{1}\) is a constant determined by \(l(x)\) and \(\epsilon\) is small enough. Then we can get that, when \(n\) is large enough,
\[(1-\frac{x}{n})^{n}\frac{l(\frac{n}{x})}{l(n)}x^{\kappa-1}\leqslant C_{1}e^{-x}(x^{\kappa-1+\epsilon}\vee x^{\kappa-1-\epsilon}),\]
which is integrable on \((0,\infty)\). Therefore, by the dominated convergence theorem,
\[\lim_{n\to\infty}\int_{x=0}^{n}(1-\frac{x}{n})^{n-1}\frac{l(\frac{n}{x})}{l(n) }x^{\kappa-1}dx=\int_{x=0}^{\infty}e^{-x}x^{\kappa-1}dx=\Gamma(\kappa).\]
Thus we have
\[\frac{\kappa}{m(\kappa-1)}(m+1)^{-(\kappa-1)}\int_{x=1}^{\infty}n(1-\frac{1}{ x})^{n}l(x)x^{-\kappa-1}dx\sim\frac{C_{\kappa}^{\prime}}{m}(m+1)^{-(\kappa-1)}n^{-( \kappa-1)}l(n),\]
where \(C_{\kappa}^{\prime}=\frac{\Gamma(\kappa)\kappa}{\kappa-1}\).
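The dominated convergence step in the proof above is easy to check numerically. The following sketch (taking \(l\equiv 1\), an illustrative value of \(\kappa\), and assuming SciPy is available) evaluates \(\int_{0}^{n}(1-x/n)^{n}x^{\kappa-1}\mathrm{d}x\) for growing \(n\) and compares it with \(\Gamma(\kappa)\).

```python
from math import gamma
from scipy.integrate import quad

kappa = 1.7  # illustrative value of the tail index
for n in (10, 100, 1000):
    # with l = 1 the integrand in the limit above is (1 - x/n)^n x^(kappa - 1)
    val, _ = quad(lambda x: (1.0 - x / n) ** n * x ** (kappa - 1.0), 0.0, n)
    print(n, val)
print("Gamma(kappa) =", gamma(kappa))  # the integrals approach this value
```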
Let \(N_{i}:=\sum_{2\leq j\leq N+1}1_{\beta(u_{j})=i}\) be the number of type \(i\) vertices in \(\Omega(u_{1})\). We want to estimate \(N_{i}\) conditioned on \(N\) and \(V\). Setting \(a_{i}=\frac{N_{i}}{N}\), we immediately get that
\[\sum_{i=0}^{\infty}a_{i}=1;\] \[\theta:=\sum_{i=0}^{\infty}ia_{i}=\frac{V}{N}.\]
The next lemma says that the sum of the local times of the vertices which have been visited many times is relatively small. Given \(N\) and \(V\), \(\beta\) provides no more information, so we omit the subscript.
**Lemma 4.10**.: _For any \(\epsilon>0\), we can find some \(K=K(\theta,\epsilon)\in\mathbb{N}^{*}\) such that_
\[\tilde{P}(\sum_{i=K}^{\infty}ia_{i}\geq\epsilon|N,V)\leq\epsilon.\]
Proof.: Now assume that \(N,V\) have been fixed. By the Markov inequality,
\[\tilde{P}(\sum_{i=K}^{\infty}ia_{i}\geq\epsilon|N,V)\leq\frac{\tilde{E}[\sum_{i=K}^{\infty}iN_{i}|N,V]}{N\epsilon}.\]
Note that \(\tilde{E}[\sum_{i=K}^{\infty}iN_{i}|N,V]\) is the total edge local time of the vertices which are visited at least \(K\) times. Thus,
\[\tilde{E}[\sum_{i=K}^{\infty}iN_{i}|N,V]=\tilde{E}[\sum_{j=2}^{N+1}\beta_{j}1_ {\{\beta_{j}\geq K\}}|N,V]=N\tilde{E}[\beta_{2}1_{\{\beta_{2}\geq K\}}|N,V]\]
Conditioned on \(N,V\), \(\beta_{2}\) equals \(\sum_{i=1}^{V}X_{i}\), where the \(X_{i}\) are i.i.d. Bernoulli random variables taking the value \(1\) with probability \(\frac{1}{N}\). Then we can easily deduce that
\[\tilde{E}[\beta_{2}]=\frac{V}{N}=\theta;\] \[\tilde{E}[\beta_{2}^{2}]=\tilde{E}[\beta_{2}]+\tilde{E}[\sum_{i \neq j}X_{i}X_{j}]=\frac{V}{N}+\frac{V(V-1)}{N^{2}}\leq\theta+\theta^{2}.\]
Notice that the above bound on the second moment of \(\beta_{2}\) only depends on \(\theta\). Thus, by the Markov inequality again,
\[\tilde{E}[\beta_{2}1_{\{\beta_{2}\geq K\}}|N,V]\leq\frac{\theta+\theta^{2}}{ K}\]
Taking \(K\geq\frac{\theta^{2}+\theta}{\epsilon^{2}}\), we get \(\tilde{E}[\beta_{2}1_{\{\beta_{2}\geq K\}}|N,V]\leq\epsilon^{2}\). Then
\[\tilde{P}(\sum_{i=K}^{\infty}ia_{i}\geq\epsilon|N,V)\leq\frac{\tilde{E}[\beta_{2}1_{\{\beta_{2}\geq K\}}|N,V]}{\epsilon}\leq\epsilon.\]
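The moment computations used above can be checked by direct simulation: conditioned on \(N\) and \(V\), the variable \(\beta_{2}\) is Binomial\((V,1/N)\). A minimal Python sketch with illustrative values of \(N\) and \(V\):

```python
import random

rng = random.Random(1)
N, V, trials = 7, 20, 200_000  # illustrative values of N and V
theta = V / N

# beta_2 = number of the V visits falling on sibling u_2 (each visit uniform over N)
samples = [sum(1 for _ in range(V) if rng.randrange(N) == 0) for _ in range(trials)]
m1 = sum(samples) / trials
m2 = sum(x * x for x in samples) / trials
print(m1, theta)                        # first moment: V/N
print(m2, theta + theta * (V - 1) / N)  # second moment V/N + V(V-1)/N^2 <= theta + theta^2
```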
Now we can prove Proposition 4.6.
Proof of Proposition 4.6.: By Theorem 8.16 in [2], the statement is equivalent to
\[1= \liminf_{s\to 0+}\frac{c_{\kappa}m(m+1)^{\kappa-1}s^{-(\kappa-1)}}{l(\frac{1}{s})\beta C_{\kappa}^{\prime}}(1-\tilde{E}_{\beta}[e^{-sL^{1}_{0}}])\] \[= \limsup_{s\to 0+}\frac{c_{\kappa}m(m+1)^{\kappa-1}s^{-(\kappa-1)}}{l(\frac{1}{s})\beta C_{\kappa}^{\prime}}(1-\tilde{E}_{\beta}[e^{-sL^{1}_{0}}]).\]
Here \(c_{\kappa}=\Gamma(2-\kappa)\) is the constant appearing when the Laplace transform is taken (we do not need it if \(\kappa=2\)). Thus, we only need to prove that the lower limit and the upper limit coincide.
_1. Lower limit._
First, following the proof of Lemma 4.9, we can take \(A\) large enough such that for \(x\) large enough,
\[\tilde{P}_{\beta}(V>x,V\leqslant AN)\geqslant(1-\epsilon)\tilde{P}_{\beta}(V>x).\]
In fact, we have
\[\tilde{P}_{\beta}(V>x\vee AN)\leqslant\beta\tilde{P}_{1}(\beta V>x\vee AN).\]
When \(N\geqslant\frac{x}{A}\), \(\tilde{P}_{1}(\beta V>AN|N)\leqslant(1-\frac{m+1}{m+N+1})^{\frac{AN}{\beta}}\leqslant e^{-\frac{A}{\beta(m+1)}}\). Then there exists a constant \(C\) such that
\[\tilde{P}_{1}(\beta V>AN,N\geqslant\frac{x}{A})\leqslant Ce^{-\frac{A}{\beta(m+1)}}(\frac{x}{A})^{-(\kappa-1)}l(\frac{x}{A}),\]
which is less than \(\frac{\epsilon}{2\beta}\tilde{P}_{\beta}(V>x)\) when \(A\) is large enough. When \(N<\frac{x}{A}\),
\[\tilde{P}_{1}(\beta V>x,N<\frac{x}{A}) \leqslant-\int_{y=0}^{\frac{x}{A}}(1-\frac{m+1}{y+m+1})^{\frac{x}{\beta}}\mathrm{d}\tilde{P}(N>y)\] \[= o(x^{-(\kappa-1)}l(x))+\frac{\kappa}{m(m+1)^{\kappa-1}(\kappa-1)}\int_{y=1}^{\frac{x}{A(m+1)}}\frac{x}{\beta}(1-\frac{1}{y})^{\frac{x}{\beta}-1}l(y)y^{-\kappa-1}\mathrm{d}y\] \[= o(x^{-(\kappa-1)}l(x))+(\frac{x}{\beta})^{-(\kappa-1)}l(x)\frac{\kappa}{m(m+1)^{\kappa-1}(\kappa-1)}\int_{y=\frac{A(m+1)}{\beta}}^{\infty}e^{-y}y^{\kappa-1}\mathrm{d}y,\]
where we use the dominated convergence theorem in the last equality again. It is also less than \(\frac{\epsilon}{2\beta}\tilde{P}_{\beta}(V>x)\) when \(A\) is large enough.
Then we choose \(K=\frac{A+A^{2}}{\epsilon^{2}}\). Denote \(V_{K}=\sum_{i=0}^{K-1}iN_{i}\), which is the total edge local time of the vertices of type less than \(K\). Then we first prove that \(V_{K}\) stochastically dominates some random variable with regular variation. Indeed,
\[\tilde{\mathrm{P}}_{\beta}(V_{K}>x) \geqslant\tilde{\mathrm{E}}_{\beta}[1_{\{V_{K}>x,V_{K}>(1-\epsilon)V\}}1_{\{V\leqslant AN\}}]\] \[\geqslant\tilde{\mathrm{E}}_{\beta}[1_{\{V_{K}>(1-\epsilon)V\}}1_{\{V>\frac{x}{1-\epsilon},V\leqslant AN\}}]\] \[=\tilde{\mathrm{E}}_{\beta}[\tilde{\mathrm{P}}_{\beta}[V_{K}>(1-\epsilon)V|V,N;V\leqslant AN]1_{\{V>\frac{x}{1-\epsilon},V\leqslant AN\}}]\] \[\geqslant(1-\epsilon)^{2}\tilde{\mathrm{P}}_{\beta}(V>\frac{x}{1-\epsilon})\] \[\sim(1-\epsilon)^{1+\kappa}\frac{C_{\kappa}^{\prime}\beta}{m(m+1)^{\kappa-1}}x^{-(\kappa-1)}l(x),\]
where we use Lemma 4.10 in the fourth line. Then we can find a positive random variable \(V_{K}^{\prime}\) with the regularly varying tail \((1-\epsilon)^{1+\kappa}C_{\kappa}^{\prime}\beta m^{-1}(m+1)^{-(\kappa-1)}x^{-(\kappa-1)}l(x)\) such that \(\tilde{\mathrm{P}}_{\beta}(V_{K}>x)\geqslant\tilde{\mathrm{P}}_{\beta}(V_{K}^{\prime}>x)\). Thus \(\tilde{\mathrm{E}}_{\beta}[(1-s)^{V_{K}}]\leqslant\tilde{\mathrm{E}}_{\beta}[(1-s)^{V_{K}^{\prime}}]\) for \(s\in(0,1)\).
Recall that \(\mathrm{E}_{\beta}[L^{1}]=\beta\) and that the \(L^{1}_{u_{j}}\) are independent random variables whose distributions only depend on \(\beta(u_{j})\). Let us consider the Laplace transform of \(L^{1}_{0}\) for \(s\in(0,1)\). By classifying the children by type, we have
\[\tilde{\mathrm{E}}_{\beta}[e^{-sL^{1}_{0}}] =\tilde{\mathrm{E}}_{\beta}[e^{-s\sum_{j=2}^{N+1}L^{1}_{u_{j}}}]\] \[=\tilde{\mathrm{E}}_{\beta}[\prod_{i=0}^{\infty}(\psi_{i}(s))^{N_ {i}}]\] \[\leqslant\tilde{\mathrm{E}}_{\beta}[\prod_{i=0}^{K-1}(\psi_{i}(s) )^{N_{i}}],\]
where \(\psi_{i}(s):=\mathrm{E}_{i}[e^{-sL^{1}}]\leqslant 1\). As \(\psi_{i}\) has derivative \(-i\) at \(s=0\), we can get that
\[\tilde{\mathrm{E}}_{\beta}[\prod_{i=0}^{K-1}(\psi_{i}(s))^{N_{i}}] =\tilde{\mathrm{E}}_{\beta}[\prod_{i=0}^{K-1}(e^{-s}+o(s))^{iN_{i}}]\] \[=\tilde{\mathrm{E}}_{\beta}[(e^{-s}+o(s))^{V_{K}}]\] \[\leqslant\tilde{\mathrm{E}}_{\beta}[(e^{-s}+o(s))^{V_{K}^{\prime}}]\] \[=1-(1-\epsilon)^{1+\kappa}\frac{C_{\kappa}^{\prime}\beta}{c_{\kappa}m(m+1)^{\kappa-1}}s^{\kappa-1}l(\frac{1}{s})+o(s^{\kappa-1}l(\frac{1}{s})).\]
Therefore,
\[\liminf_{s\to 0+}\frac{c_{\kappa}m(m+1)^{\kappa-1}s^{-(\kappa-1)}}{l(\frac{1}{s})\beta C_{\kappa}^{\prime}}(1-\tilde{\mathrm{E}}_{\beta}[e^{-sL^{1}_{0}}])\geqslant(1-\epsilon)^{1+\kappa}.\]
Since \(\epsilon\) is arbitrary, we get the lower bound.
_2. Upper limit._
We use the Jensen inequality to deduce the upper limit.
\[\tilde{\mathrm{E}}_{\beta}[e^{-sL^{1}_{0}}] =\tilde{\mathrm{E}}_{\beta}[\prod_{j=2}^{N+1}\mathrm{E}[e^{-sL^{1}_{u_{j}}}|\beta(u_{j})]]\] \[\geqslant\tilde{\mathrm{E}}_{\beta}[\prod_{j=2}^{N+1}e^{-s\mathrm{E}[L^{1}_{u_{j}}|\beta(u_{j})]}]\] \[=\tilde{\mathrm{E}}_{\beta}[e^{-sV}]\] \[=1-\frac{\beta C_{\kappa}^{\prime}}{c_{\kappa}m(m+1)^{\kappa-1}}s^{\kappa-1}l(\frac{1}{s})+o(s^{\kappa-1}l(\frac{1}{s})).\]
Therefore,
\[\limsup_{s\to 0+}\frac{c_{\kappa}m(m+1)^{\kappa-1}s^{-(\kappa-1)}}{l(\frac{1}{s})\beta C_{\kappa}^{\prime}}(1-\tilde{\mathrm{E}}_{\beta}[e^{-sL^{1}_{0}}])\leqslant 1.\]
As the lower limit and the upper limit are the same, the proposition is proved.
We also need to prove that \(\frac{x^{\kappa-1}}{l(x)}\tilde{P}_{\beta}(L^{1}_{0}>x)\) is bounded by \(C\beta^{\kappa}\), where \(C\) is a constant that does not depend on \(\beta\), in order to apply the dominated convergence theorem. We need the additive martingale as a bridge and prepare two lemmas first.
**Lemma 4.11**.: _Let \(W_{\infty}\) be the almost sure limit of the positive martingale \((W_{k})_{k\geqslant 1}\), where_
\[(W_{k})_{k\geqslant 1}\coloneqq(\sum_{|u|=k}\frac{1}{m^{k}})_{k\geqslant 1}.\]
_Then \(W_{\infty}\) has a finite moment of order \(1+\alpha\) under \(\mathbb{P}\) for any \(\alpha\in(0,\kappa-1)\)._
Proof.: Since \(\mathbb{E}[\nu^{1+\alpha}]<\infty\), we can use [10, Theorem 2.1] to get the lemma.
We denote by \(P_{n}\) (resp. \(E_{n}\)) the measure (resp. expectation) of the multi-type Galton-Watson tree conditioned on the root having type \(n\).
**Lemma 4.12**.: _Let \(\alpha\in(0,\kappa-1)\). Then we have_
\[\lim_{n\to\infty}E_{n}[|\frac{L^{1}}{n}-W_{\infty}|^{1+\alpha}]=0.\]
_Moreover, there exists a positive random variable \(Y\) with a finite moment of order \(1+\alpha\) and \(\mathfrak{a}_{n}\to 0\), such that \(\mathfrak{a}_{n}Y\) is stochastically greater than \(|\frac{L^{1}}{n}-W_{\infty}|\) under \(P_{n}\) for any \(n\geqslant 1\)._
Proof.: We combine Proposition 6 with Lemma 27 in [3] and then get the lemma.
Since every \(u_{j}\in\Omega(u_{1})\) is the root of a subtree, we can define \(W_{\infty}^{u_{j}}\) as the almost sure limit of the positive martingale \((W_{k}^{u_{j}})_{k\geqslant 1}\coloneqq(\sum_{|u|=k+1,u\in T^{u_{j}}}1/m^{k})_{k\geqslant 1}\).
**Proposition 4.13**.: _For \(L^{1}_{0}\) defined as above, it holds with a constant \(C\) that_
\[\frac{x^{\kappa-1}}{l(x)}\tilde{P}_{\beta}(L^{1}_{0}>x)\leq C\beta^{\kappa}.\]
Proof.: By Lemma 4.12, we can find i.i.d. random variables \(Y_{u_{j}},2\leq j\leq N+1\), with finite \(1+\alpha\) moments (\(\alpha\in(0,\kappa-1)\)) such that \(\frac{L^{1}_{u_{j}}}{\beta(u_{j})}-W^{u_{j}}_{\infty}\) is stochastically dominated by \(Y_{u_{j}}\) under \(P_{\beta(u_{j})}\). Take \(Z_{u_{j}}\coloneqq Y_{u_{j}}+W^{u_{j}}_{\infty}\); then the \(Z_{u_{j}}\) are i.i.d. random variables with finite \(1+\alpha\) moment and \(L^{1}_{u_{j}}\) is stochastically dominated by \(\beta(u_{j})Z_{u_{j}}\). Note that \(Z_{u_{j}}\) does not depend on \(\beta(u_{j})\).
Now let \(\beta_{k}(u_{j})\) be the edge local time of \(u_{j}\) in the \(k\)-th trial. It is easy to see that \((\beta_{k}(u_{2}),\beta_{k}(u_{3}),\ldots,\beta_{k}(u_{N+1}))_{1\leq k\leq\beta}\) are i.i.d. with the generic distribution being the law of the edge local times in one trial. Thus
\[\tilde{P}_{\beta}(L^{1}_{0}>x) \leq\tilde{P}_{\beta}(\sum_{j=2}^{N+1}\beta(u_{j})Z_{u_{j}}>x) \leq\sum_{k=1}^{\beta}\tilde{P}_{\beta}(\sum_{j=2}^{N+1}\beta_{k}(u_{j})Z_{u_ {j}}>\frac{x}{\beta})\] \[=\beta\tilde{P}_{1}(\sum_{j=2}^{N+1}\beta(u_{j})Z_{u_{j}}>\frac{ x}{\beta}).\]
Note that \(m_{Z}\coloneqq E[Z]<\infty\). By the Laplace transform, following the same proof as for Proposition 4.6, we have
\[1-\tilde{E}_{1}[e^{-s\sum_{j=2}^{N+1}\beta(u_{j})Z_{u_{j}}}]\sim\frac{C^{\prime}_{\kappa}m_{Z}^{\kappa-1}}{c_{\kappa}m(m+1)^{\kappa-1}}s^{\kappa-1}l(\frac{1}{s}).\]
Thus \(\sum_{j=2}^{N+1}\beta(u_{j})Z_{u_{j}}\) is also regularly varying and we can find a constant \(C\) independent of \(\beta\) such that
\[\beta\tilde{P}_{1}(\sum_{j=2}^{N+1}\beta(u_{j})Z_{u_{j}}>\frac{x}{\beta})\leq C \beta^{\kappa}x^{-(\kappa-1)}l(x).\]
Now we are ready to prove the main proposition.
Proof of Proposition 4.5.: Note that
\[\hat{P}_{1}(L^{1}_{\Omega(\omega_{k})}>x|\beta(\omega_{k}),\beta(\omega_{k-1}))\sim\frac{C^{\prime}_{\kappa}(\beta(\omega_{k-1})+\beta(\omega_{k}))}{m(m+1)^{\kappa-1}}l(x)x^{-(\kappa-1)}.\]
Under \(\hat{P}_{1}\), \((\beta(\omega_{k}))_{k}\) is distributed as a random walk, thus
\[\lim_{x\to\infty}\frac{x^{\kappa-1}}{l(x)}\hat{P}_{1}(L^{1}>x)=\lim_{x\to\infty}\frac{x^{\kappa-1}}{l(x)}\hat{E}_{1}[\hat{P}_{1}((\sum_{k=1}^{\hat{\tau}_{1}}L^{1}_{\Omega(\omega_{k})}+1)>x|(\beta(\omega_{k}))_{k})].\]
Almost surely \(\hat{\tau}_{1}\) is finite, and conditioned on \((\beta(\omega_{k}))_{k}\) the \(L^{1}_{\Omega(\omega_{k})}\) are independent of each other with regularly varying tails of the same order. By considering the Laplace transform and Theorem 8.16 in [2], \(\sum_{k=1}^{\hat{\tau}_{1}}L^{1}_{\Omega(\omega_{k})}\) is regularly varying, that is,
\[\hat{P}_{1}((\sum_{k=1}^{\hat{\tau}_{1}}L^{1}_{\Omega(\omega_{k})}+1)>x|(\beta( \omega_{k}))_{k})\sim\sum_{k=1}^{\hat{\tau}_{1}}\frac{C^{\prime}_{\kappa}( \beta(\omega_{k-1})+\beta(\omega_{k}))}{m(m+1)^{\kappa-1}}l(x)x^{-(\kappa-1)}.\]
Since we have
\[\hat{P}_{1}(\sum_{k=1}^{\hat{\tau}_{1}}L^{1}_{\Omega(\omega_{k})}>x|(\beta( \omega_{k}))_{k})\leqslant\sum_{k=1}^{\hat{\tau}_{1}}\hat{P}_{\beta(\omega_{ k-1})}(L^{1}_{\Omega(\omega_{k})}>\frac{1}{2k^{2}}x),\]
it follows that \(\frac{x^{\kappa-1}}{l(x)}\hat{P}_{1}(\sum_{k=1}^{\hat{\tau}_{1}}L^{1}_{\Omega(\omega_{k})}>x|(\beta(\omega_{k}))_{k})\) is bounded by the random variable
\[C(\sum_{k=1}^{\hat{\tau}_{1}+1}\beta(\omega_{k-1})^{\kappa}k^{2(\kappa-1)+ \epsilon}),\]
where \(C\) is a constant and \(\epsilon\) is close to \(0\), by Proposition 4.13. It is integrable by Lemma 4.2. Therefore, by the dominated convergence theorem,
\[\lim_{x\to\infty}\frac{x^{\kappa-1}}{l(x)}\hat{P}_{1}(L^{1}>x) =\hat{E}[\lim_{x\to\infty}\frac{x^{\kappa-1}}{l(x)}\hat{P}_{1}(( \sum_{k=1}^{\hat{\tau}_{1}}L^{1}_{\Omega(\omega_{k})}+1)>x|(\beta(\omega_{k}) )_{k})]\] \[=\frac{C^{\prime}_{\kappa}}{m(m+1)^{\kappa-1}}\hat{E}_{1}[\sum_{ k=1}^{\hat{\tau}_{1}}\beta(\omega_{k-1})+\beta(\omega_{k})]\] \[=\frac{C^{\prime}_{\kappa}}{m(m+1)^{\kappa-1}}2\hat{E}_{1}[\sum_ {k=1}^{\hat{\tau}_{1}}\beta(\omega_{k})].\]
Since \((\beta(\omega_{k}))_{k\geqslant 0}\) is a Markov chain with transition probability \(\hat{P}(\beta(\omega_{k})=j|\beta(\omega_{k-1})=i)=(b_{i}/b_{j})m_{ij}\), we have a stationary measure associated with it, \((\hat{a}_{i})_{i\geqslant 1}=(m^{-i}/i)_{i\geqslant 1}\). By the theory of Markov chains, it follows that
\[\hat{E}_{1}[\sum_{k=1}^{\hat{\tau}_{1}}\beta(\omega_{k})]=\sum_{i\geqslant 1 }\frac{i\hat{a}_{i}}{\hat{a}_{1}}=\frac{m}{m-1}.\]
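The last identity reduces to a geometric series: \(\sum_{i\geqslant 1}i\hat{a}_{i}=\sum_{i\geqslant 1}m^{-i}=\frac{1}{m-1}\), divided by \(\hat{a}_{1}=1/m\). A short numerical check with an illustrative value of \(m\):

```python
m = 3.0  # illustrative mean offspring number, m > 1
a = {i: m ** (-i) / i for i in range(1, 200)}        # stationary measure, truncated
print(sum(i * a[i] for i in a) / a[1], m / (m - 1))  # both equal 1.5 for m = 3
```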
In summary, we get that, as \(x\to\infty\),
\[\hat{P}_{1}(L^{1}>x)\sim C_{\kappa}x^{-(\kappa-1)}l(x),\]
where \(C_{\kappa}=\frac{2\Gamma(\kappa)\kappa}{(m-1)(m+1)^{\kappa-1}(\kappa-1)}\). |
2307.16854 | Note on the Margolus-Levitin quantum speed limit for arbitrary fidelity | For vanishing fidelity between initial and final states two important quantum
speed limits, the Mandelstam-Tamm limit (involving energy dispersion) and
Margolus-Levitin one (involving excitation energy expectation value) have been
derived. While the generalization of the former limit to the case of arbitrary
fidelity is straightforward, the relevant generalization of the latter, given
in the seminal paper by Giovannetti et al (Phys. Rev. A67 (2003), 052109) was
based on the conjectured equality of lower and upper bounds on the right hand
side of generalized Margolus-Levitin inequality, verified numerically up to
seven digits. Only recently did two proofs of the conjecture appear. We
provide below a very elementary new proof, based on the simplest tools from
differential calculus. Thus the generalized Margolus-Levitin speed limit can be
derived much in the spirit of the original one valid for vanishing fidelity. | Krzysztof Andrzejewski, Katarzyna Bolonek-Lasoń, Piotr Kosiński | 2023-07-31T17:13:52Z | http://arxiv.org/abs/2307.16854v2 | # Note on the Margolus-Levitin quantum speed limit for arbitrary fidelity
###### Abstract
A simple proof is given that the upper and lower speed limits derived in _Phys. Rev._**A67** (2003), 052109, coincide. Only the most elementary analytical tools are used.
## I Introduction
In recent decades much effort has been devoted to the study of the so-called quantum speed limit, i.e. a lower bound on the time it takes a quantum system to evolve in some definite way. Many forms of quantum speed limits have been derived which involve various quantities: energy, fidelity, purity, entropy etc., and concern closed (isolated) and open systems. The prototype of the quantum speed limit is the famous Mandelstam-Tamm [1] relation, which gives a lower bound on the time necessary for a quantum system to evolve from an initial state to an orthogonal one. It reads
\[t\geq\frac{\pi}{2\Delta E}, \tag{1}\]
where \(\Delta E\) is the energy dispersion in the initial state. Actually, a more general result [1; 2; 3] can be derived: with \(\delta\) being the fidelity between the initial and final states, the evolution time is bounded by
\[t\geq\frac{\arccos\sqrt{\delta}}{\Delta E}. \tag{2}\]
Margolus and Levitin [4] derived an alternative speed limit yielding a lower bound for the orthogonalization time in terms of the expectation value of energy in the initial state. It reads
\[t\geq\frac{\pi}{2\left\langle H-E_{0}\right\rangle}, \tag{3}\]
with \(E_{0}\) being the ground state energy; the expectation value in (3) is taken in the initial state. The natural question arises whether the inequality (3) may be generalized to the case of arbitrary fidelity between the initial and the final state. Giovannetti et al. [5] have shown that the relevant bound takes the form
\[t\geq\frac{\alpha(\delta)}{\left\langle H-E_{0}\right\rangle}, \tag{4}\]
with \(\alpha(\delta)\) being some function of the fidelity. Although they did not provide a closed analytical formula for \(\alpha\), upper and lower bounds for the latter were given. Actually, these bounds agree numerically to 7 significant figures, leading to the conjecture that, in fact, they coincide. If this is the case, the exact formula for \(\alpha\) is at our disposal.
Quite recently, Hornedal and Sonnerborn [6] provided a rigorous proof, based on symplectic geometry, that the exact formula for \(\alpha\) coincides with the upper bound on it derived in [5]. An alternative proof of the coincidence of the Giovannetti et al. upper and lower bounds on \(\alpha\) has been given in [7].
In the present note we give a straightforward proof of the equality of upper and lower bounds on \(\alpha\), derived in the paper by Giovannetti et al. [5].
## II Lower and upper bounds coincide
Giovannetti et al. [5] derived the lower and upper bounds on the function \(\alpha(\delta)\) as follows. The upper bound results from considering a particular family of two-level states parametrized by one real variable. It reads
\[M(\delta)=\frac{2}{\pi}\min_{z^{2}\leq\delta}\left(\left(\frac{1+z}{2}\right) \arccos\left(\frac{2\delta-1-z^{2}}{1-z^{2}}\right)\right); \tag{5}\]
actually, the bound presented in [5] has a slightly different, but equivalent, form; we prefer the expression used in [6]. The lower bound is obtained [5] by considering the family of inequalities
\[\cos x+q\sin x\geq 1-ax, \tag{6}\]
for \(x\geq 0\), \(q\geq 0\). They are optimized by bounding the left hand side by the linear function tangent to it and equal to \(1\) for \(x=0\) (cf. Fig. 1).
Denoting by \(y\) the \(x\)-coordinate of the tangent point one finds
\[\cos y+q\sin y=1-ay, \tag{7}\] \[-\sin y+q\cos y=-a, \tag{8}\]
with \(y\) being restricted to the interval
\[y\in\left\langle\pi-\arctan\left(\frac{1}{q}\right),\pi+\arctan(q)\right\rangle. \tag{9}\]
Figure 1: Configuration leading to optimal inequality (6).
Eqs. (7)-(9) define implicitly the function \(a=a(q)\). Once this function is defined, the lower bound on \(\alpha\) reads [5]
\[m(\delta)=\frac{2}{\pi}\min_{0\leq\theta<2\pi}\left[\max_{q\geq 0}\left(\frac{1- \sqrt{\delta}(\cos\theta-q\sin\theta)}{a(q)}\right)\right]. \tag{10}\]
Our aim here is to give a simple, straightforward proof of the equality
\[m(\delta)=M(\delta). \tag{11}\]
In what follows we denote
\[\alpha \equiv 1-\sqrt{\delta}\cos\theta\geq 0, \tag{12}\] \[\beta \equiv\sqrt{\delta}\sin\theta. \tag{13}\]
Eqs. (12), (13) define a circle of radius \(\sqrt{\delta}\), centered at \((1,0)\), in the \(\alpha-\beta\) plane:
\[(\alpha-1)^{2}+\beta^{2}=\delta. \tag{14}\]
Further, we define the angle \(\varphi\) by
\[\cos\varphi\equiv\frac{\beta}{\sqrt{\alpha^{2}+\beta^{2}}},\quad\sin\varphi \equiv\frac{\alpha}{\sqrt{\alpha^{2}+\beta^{2}}}\geq 0; \tag{15}\]
then \(0\leq\varphi\leq\pi\). The relevant geometry is depicted in Fig. 2.
The main, yet very simple, idea is to parametrize the solutions to (7), (8) in terms of \(y\):
\[q =\frac{1-\cos y-y\sin y}{\sin y-y\cos y}, \tag{16}\] \[a =\frac{1-\cos y}{\sin y-y\cos y},\quad y>0. \tag{17}\]
It follows from eq. (16) that
\[\frac{dq}{dy}=\frac{y(y-\sin y)}{(\sin y-y\cos y)^{2}}>0, \tag{18}\]
i.e. \(q\) is a monotonically growing function of \(y\) wherever it is defined. We are interested in the solutions corresponding to \(0\leq q<\infty\). It is easy to check that,
due to the constraint (9), it is sufficient to consider the interval \(y_{-}\leq y\leq y_{+}\), where \(y_{\pm}\) are determined by the conditions
\[1-\cos y_{-}-y_{-}\sin y_{-}=0,\quad{\pi\over 2}<y_{-}<\pi, \tag{19}\] \[\sin y_{+}-y_{+}\cos y_{+}=0,\quad\pi<y_{+}<{3\over 2}\pi. \tag{20}\]
Numerically, \(y_{-}=2.3311\), \(y_{+}=4.4934\). In the interval \(\langle y_{-},y_{+}\rangle\), \(q(y)\) is monotonically growing from \(0\) to \(\infty\). Consequently, it is invertible. Inserting \(y=y(q)\) into eq. (17) one obtains the relevant function \(a=a(q)\). The functions \(q=q(y)\), \(a=a(y)\) on the interval \(\langle y_{-},y_{+}\rangle\) are sketched in Figs. 3 and 4, respectively.
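The quoted values are straightforward to reproduce numerically; a minimal sketch (assuming SciPy is available) solves (19), (20) by bracketed root finding:

```python
import math
from scipy.optimize import brentq

# roots of (19) and (20) on the brackets stated there
y_minus = brentq(lambda y: 1 - math.cos(y) - y * math.sin(y), math.pi / 2, math.pi)
y_plus = brentq(lambda y: math.sin(y) - y * math.cos(y), math.pi, 1.5 * math.pi)
print(y_minus, y_plus)  # 2.3311..., 4.4934...
```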
Note that
\[{da\over dy}={\sin y(\sin y-y)\over(\sin y-y\cos y)^{2}} \tag{21}\]
and
\[{da\over dq}={{da/dy}\over{dq/dy}}=-{\sin y\over y}. \tag{22}\]
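Relations (18), (21) and (22) can be verified directly from the parametrization (16), (17); the sketch below compares central finite differences with the closed forms at sample points inside \(\langle y_{-},y_{+}\rangle\).

```python
import numpy as np

def q(y):  # eq. (16)
    return (1 - np.cos(y) - y * np.sin(y)) / (np.sin(y) - y * np.cos(y))

def a(y):  # eq. (17)
    return (1 - np.cos(y)) / (np.sin(y) - y * np.cos(y))

y = np.linspace(2.4, 4.4, 9)  # sample points inside (y_-, y_+)
h = 1e-6
dq = (q(y + h) - q(y - h)) / (2 * h)  # numerical dq/dy
da = (a(y + h) - a(y - h)) / (2 * h)  # numerical da/dy
v = np.sin(y) - y * np.cos(y)
print(np.max(np.abs(dq - y * (y - np.sin(y)) / v**2)))          # ~0, eq. (18)
print(np.max(np.abs(da - np.sin(y) * (np.sin(y) - y) / v**2)))  # ~0, eq. (21)
print(np.max(np.abs(da / dq + np.sin(y) / y)))                  # ~0, eq. (22)
```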
With the notation introduced in (12), (13) eq. (10) takes the form
\[m(\delta)={2\over\pi}\min_{0\leq\theta<2\pi}\left[\max_{q\geq 0}\left({\alpha+ \beta q\over a(q)}\right)\right]. \tag{23}\]
In order to find the maximum on the right hand side of eq. (23) we put
\[F(q)={\alpha+\beta q\over a(q)}. \tag{24}\]
Figure 2: The set of points \((\alpha,\beta)\) defined by eqs. (12), (13).
Instead of looking for the maximal value of \(F(q)\) on the interval \(\langle 0,\infty\rangle\) one can, equivalently, consider \(F\) as the function of \(y\) on the interval \(\langle y_{-},y_{+}\rangle\); it reads
\[F(y)=\frac{\alpha(\sin y-y\cos y)+\beta(1-\cos y-y\sin y)}{1-\cos y} \tag{25}\]
or, using (12), (13), (15),
\[F(y)=\sqrt{\alpha^{2}+\beta^{2}}\left(\frac{\cos\varphi-\cos(\varphi+y)-y\sin (\varphi+y)}{1-\cos y}\right). \tag{26}\]
In order to find the maximal value of \(F(y)\) we compute the relevant derivative:
\[\frac{dF(y)}{dy}=\sqrt{\alpha^{2}+\beta^{2}}\,\frac{(y-\sin y)(\cos\varphi-\cos(\varphi+y))}{(1-\cos y)^{2}}. \tag{27}\]
\(F^{\prime}(y)\) vanishes for \(y>0\) only provided
\[\cos(\varphi+y)=\cos\varphi. \tag{28}\]
Figure 4: The function \(a=a(y)\).
Figure 3: The function \(q=q(y)\).
Now, \(0\leq\varphi\leq\pi\) and \(\frac{\pi}{2}<y<\frac{3}{2}\pi<2\pi\), which yields the unique solution to (28):
\[y=2\pi-2\varphi. \tag{29}\]
However, \(y_{-}\leq y<y_{+}\), which results in the following restrictions on \(\varphi\):
\[\pi-\frac{1}{2}y_{+}<\varphi\leq\pi-\frac{1}{2}y_{-}. \tag{30}\]
For \(0\leq\varphi\leq\pi-\frac{1}{2}y_{+}\) or \(\pi-\frac{1}{2}y_{-}<\varphi\leq\pi\), \(F^{\prime}(y)\) does not vanish on the interval \(\langle y_{-},y_{+}\rangle\). Therefore, we have to consider two separate cases:
* (i) for all \(\varphi\) defined by eqs. (15), the inequalities (30) hold. This is illustrated in Fig. 5.
Then for any \(\varphi\) eq. (28) has the unique solution in the interval \(\langle y_{-},y_{+}\rangle\). Moreover, one easily finds
\[\left.\frac{d^{2}F(y)}{dy^{2}}\right|_{F^{\prime}(y)=0}=\sqrt{\alpha^{2}+\beta^{2}}\,\frac{(y-\sin y)\sin(\varphi+y)}{(1-\cos y)^{2}}. \tag{31}\]
Now, by virtue of (29), \(\varphi+y=2\pi-\varphi\) so that \(\pi+\frac{1}{2}y_{+}>\varphi+y\geq\pi+\frac{1}{2}y_{-}\) and \(\sin(\varphi+y)<0\). Therefore, \(F(y)\) attains unique maximum at \(y=2\pi-2\varphi\). It equals
\[F_{max}=\sqrt{\alpha^{2}+\beta^{2}}\left(\frac{\pi-\varphi}{\sin\varphi}\right) \tag{32}\]
Figure 5: The inequalities (30) are obeyed for all \((\alpha,\beta)\).
or, equivalently,

\[F_{max}=\left(\frac{\alpha^{2}+\beta^{2}}{\alpha}\right)\arccos\left(\frac{-\beta}{\sqrt{\alpha^{2}+\beta^{2}}}\right). \tag{33}\]

Let us note that in the next step one has to minimize \(F_{max}\) over the pairs \((\alpha,\beta)\) running over the circle (14). It is obvious that the minimum is attained for \(\beta\leq 0\). Defining

\[\omega\equiv\sqrt{\delta}\cos\theta,\quad-\sqrt{\delta}\leq\omega\leq\sqrt{\delta}, \tag{34}\]

and taking into account the remark above allows us to write

\[F_{max}=\left(\frac{1-2\omega+\delta}{1-\omega}\right)\arccos\left(\frac{\sqrt{\delta-\omega^{2}}}{\sqrt{1-2\omega+\delta}}\right). \tag{35}\]

Using the identity \(\arccos(2\tau^{2}-1)=2\arccos\tau\) we rewrite (35) as

\[F_{max}=\left(\frac{1-2\omega+\delta}{2(1-\omega)}\right)\arccos\left(\frac{\delta-1+2\omega-2\omega^{2}}{1-2\omega+\delta}\right). \tag{36}\]

Finally, we note that

\[z=\frac{\delta-\omega}{1-\omega} \tag{37}\]

defines a one-to-one mapping of the interval \(\left\langle-\sqrt{\delta},\sqrt{\delta}\right\rangle\) onto itself. Eqs. (36), (37) imply

\[F_{max}=\left(\frac{1+z}{2}\right)\arccos\left(\frac{2\delta-1-z^{2}}{1-z^{2}}\right), \tag{38}\]

yielding (11).
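The algebraic steps (34)-(38) can also be verified symbolically; in the short SymPy sketch below both printed expressions should reduce to zero.

```python
import sympy as sp

delta, omega = sp.symbols('delta omega', positive=True)
z = (delta - omega) / (1 - omega)                                  # eq. (37)
arg35 = sp.sqrt(delta - omega**2) / sp.sqrt(1 - 2*omega + delta)   # argument in (35)
arg38 = (2*delta - 1 - z**2) / (1 - z**2)                          # argument in (38)
# arccos(2 t^2 - 1) = 2 arccos(t) reduces (35) = (38) to the two identities below
print(sp.simplify(2 * arg35**2 - 1 - arg38))                       # 0: arguments match
print(sp.simplify((1 - 2*omega + delta) / (1 - omega) - (1 + z)))  # 0: prefactors match
```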
* (ii) for some points on the circle (14) the inequalities (30) are violated. This is illustrated in Fig. 6. For any point \((\alpha,\beta)\) belonging to the arc \(AB\) or \(CD\) the derivative \(F^{\prime}(y)\) is nonvanishing in the interval \(\langle y_{-},y_{+}\rangle\). Consider first the arc \(AB\). Then \(0\leq\varphi\leq\pi-\frac{1}{2}y_{+}\) and it is easy to see that \(F^{\prime}(y)>0\) in the interval \(\langle y_{-},y_{+}\rangle\). Therefore, the maximum is attained for \(y\to y_{+}\) and reads \[F_{AB}(y_{+})\equiv F_{AB}(\alpha,\beta)=\frac{-\beta}{\cos y_{+}}. \tag{39}\]
On the contrary, for \(\pi-\frac{1}{2}y_{-}<\varphi\leq\pi\), \(F^{\prime}(y)<0\) and the maximum is attained for \(y=y_{-}\):
\[F_{CD}(y_{-})\equiv F_{CD}\left(\alpha,\beta\right)=\frac{\alpha}{\sin y_{-}}. \tag{40}\]
We will now compare \(F_{AB}\) and \(F_{CD}\) with \(F_{max}\) defined by eq. (33). To this end we parametrize the points \((\alpha,\beta)\) by the angle \(\psi\equiv\pi-\varphi\) (cf. Fig. 7).
The coordinates \((\alpha,\beta)\) of two intersection points obey the equations
\[\alpha+\beta\tan\psi=0, \tag{41}\] \[(\alpha-1)^{2}+\beta^{2}-\delta=0, \tag{42}\]
which yield
\[\alpha =\sin^{2}\psi\pm\sin\psi\sqrt{\delta-\cos^{2}\psi}, \tag{43}\] \[\beta =-\sin\psi\cos\psi\mp\cos\psi\sqrt{\delta-\cos^{2}\psi}. \tag{44}\]
Figure 6: Geometric setting for violating the inequalities (30).
Then
\[F_{max}=\left(2+\frac{\delta-1}{\sin^{2}\psi\pm\sin\psi\sqrt{\delta -\cos^{2}\psi}}\right)\psi, \tag{45}\] \[F_{AB}=\frac{\sin\psi\cos\psi\pm\cos\psi\sqrt{\delta-\cos^{2}\psi} }{\cos y_{+}},\] (46) \[F_{CD}=\frac{\sin^{2}\psi\pm\sin\psi\sqrt{\delta-\cos^{2}\psi}}{ \sin y_{-}}. \tag{47}\]
Now, it is straightforward to check that
\[F_{max}-F_{AB}=\frac{\sin\psi\pm\sqrt{\delta-\cos^{2}\psi}}{2\sin\psi}\left(2 \psi-\frac{\sin 2\psi}{\cos y_{+}}\right). \tag{48}\]
On the \(AB\) arc \(2\pi\geq 2\psi\geq y_{+}>\pi\) and the right hand side is nonnegative; it vanishes for \(2\psi=y_{+}\) which corresponds to the points \(A\) and \(B\). Moreover \(F_{AB}\) takes the minimal value at \(A\). As a result, \(F_{max}\) and \(F_{AB}\) attain the same minimal value on the \(AB\) arc.
Analogously,
\[F_{max}-F_{CD}=\frac{\sin\psi\pm\sqrt{\delta-\cos^{2}\psi}}{2\sin\psi}\left(2 \psi-\frac{1-\cos 2\psi}{\sin y_{-}}\right). \tag{49}\]
On the \(CD\) arc \(0\leq 2\psi\leq y_{-}\) and the right hand side is nonnegative; it vanishes for \(2\psi=y_{-}\) which corresponds to the points \(C\) and \(D\). \(F_{CD}\) takes
Figure 7: Parametrization of the circle (14).
the minimal value at \(C\). Again we conclude that \(F_{max}\) and \(F_{CD}\) attain the same minimal value on the \(CD\) arc.
It follows from the above discussion that also in case (ii) one obtains the correct value of \(m(\delta)\) by minimizing \(F_{max}\) on the whole circle (14). Therefore, we can refer to the reasoning described in case (i). This concludes the proof.
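The equality (11) can also be confirmed numerically, directly from the defining formulas and independently of the proof. The sketch below evaluates \(M(\delta)\) from (5) and \(m(\delta)\) from (10), with \(a(q)\) obtained through the parametrization (16), (17), for an illustrative value of \(\delta\) (NumPy assumed; the grid resolutions are arbitrary).

```python
import numpy as np

delta = 0.3  # illustrative fidelity

# q(y), a(y) from (16), (17) on a grid just inside (y_-, y_+);
# q(y) is increasing there, so maximizing over q >= 0 amounts to maximizing over y
y = np.linspace(2.3312, 4.4933, 4000)
den = np.sin(y) - y * np.cos(y)
q = (1 - np.cos(y) - y * np.sin(y)) / den
a = (1 - np.cos(y)) / den

theta = np.linspace(0.0, 2 * np.pi, 1001)
alpha = 1 - np.sqrt(delta) * np.cos(theta)
beta = np.sqrt(delta) * np.sin(theta)
F = (alpha[:, None] + beta[:, None] * q[None, :]) / a[None, :]
m_delta = 2 / np.pi * F.max(axis=1).min()  # eq. (10)

z = np.linspace(-np.sqrt(delta), np.sqrt(delta), 2001)
M_delta = 2 / np.pi * np.min(
    (1 + z) / 2 * np.arccos((2 * delta - 1 - z * z) / (1 - z * z)))  # eq. (5)
print(m_delta, M_delta)  # the two values agree to several digits
```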
|
2309.10751 | Chirality-inverted Dzyaloshinskii-Moriya interaction | The Dzyaloshinskii-Moriya interaction (DMI) is an antisymmetric exchange
interaction, which is responsible for the formation of topologically protected
spin textures in chiral magnets. Here, by measuring the dispersion relation of
the DM energy, we quantify the atomistic DMI in a model system, i.e., a Co
double layer on Ir(001). We unambiguously demonstrate the presence of a
chirality-inverted DMI, i.e., a sign change in the chirality index of DMI from
negative to positive, when comparing the interaction between nearest neighbors
to that between neighbors located at longer distances. The effect is in analogy
to the change in the character of the Heisenberg exchange interaction from,
e.g., ferromagnetic to antiferromagnetic. We show that the pattern of the
atomistic DMI in epitaxial magnetic structures can be very complex and provide
critical insights into the nature of DMI. We anticipate that the observed
effect is general and occurs in many magnetic nanostructures grown on
heavy-element metallic substrates. | Khalil Zakeri, Alberto Marmodoro, Albrecht von Faber, Sergiy Mankovsky, Hubert Ebert | 2023-09-19T16:52:44Z | http://arxiv.org/abs/2309.10751v1 | # Chirality-inverted Dzyaloshinskii-Moriya interaction
###### Abstract
Dzyaloshinskii-Moriya interaction (DMI) is an antisymmetric exchange interaction, which is responsible for the formation of topologically protected spin textures in chiral magnets. Here, by measuring the dispersion relation of the DM energy, we quantify the atomistic DMI in a model system, i.e., a Co double layer on Ir(001). We unambiguously demonstrate the presence of a chirality-inverted DMI, i.e., a sign change in the chirality index of DMI from negative to positive, when comparing the interaction between nearest neighbors to that between neighbors located at longer distances. The effect is in analogy to the change in the character of the Heisenberg exchange interaction from, e.g., ferromagnetic to antiferromagnetic. We show that the pattern of the atomistic DMI in epitaxial magnetic structures can be very complex and provide critical insights into the nature of DMI. We anticipate that the observed effect is general and occurs in many magnetic nanostructures grown on heavy-element metallic substrates.
Exchange interaction is an inherently quantum mechanical effect, which describes the fundamental interaction between indistinguishable particles. In magnetic solids, the symmetric Heisenberg exchange interaction (HEI) is essential to understand the magnetic order [1]. In a spin Hamiltonian representation, having the form of \(\mathcal{H}_{\text{HEI}}=-\sum_{i\neq j}J_{ij}\mathbf{S}_{i}\cdot\mathbf{S}_{j}\), the exchange coupling parameter \(J_{ij}\), which describes the interaction between atomic spins \(\mathbf{S}_{i}\) and \(\mathbf{S}_{j}\), is symmetric with respect to permutation of the atoms on sites \(i\) and \(j\). The ferro- or antiferromagnetic interaction manifests itself in the sign of \(J_{ij}\). Depending on the mechanism dominating the interatomic exchange interaction in a material, the coupling can be either short-range, e.g., as in the case of the direct exchange or superexchange interactions, or long-range, as for instance in the case of the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction [2; 3; 4], having oscillatory behavior as a function of the interatomic distance. The latter is intrinsic to itinerant-electron systems and may be significant in 3D systems, e.g., bcc Fe [5; 6; 7; 8; 9; 10], and in 2D systems, e.g., ultrathin films [11; 12]. Moreover, \(J_{ij}\) can change sign from positive (ferromagnetic) to negative (antiferromagnetic) as a function of the interatomic distance in an oscillatory manner [12].
In addition to HEI there exists another type of exchange interaction that is of antisymmetric nature. This interaction, which has the form of \(\mathcal{H}_{\text{DMI}}=\sum_{i\neq j}\mathbf{D}_{ij}\cdot\mathbf{S}_{i}\times\mathbf{S}_{j}\), is known as the Dzyaloshinskii-Moriya interaction (DMI) [13; 14]. Here the so-called DM vector \(\mathbf{D}_{ij}\) is antisymmetric with respect to permutation of the sites \(i\) and \(j\). The direction of \(\mathbf{D}_{ij}\) is governed by the symmetry of the lattice [14] and determines the twist of spins. DMI is a chiral interaction and has been shown to be a consequence of spin-orbit coupling (SOC) in spin systems with broken inversion symmetry [13; 14].
In the case of ultrathin magnetic films, nanostructures as well as separated magnetic atoms deposited on heavy-element metallic substrates DMI can be significant due to the strong SOC of the substrate. However, since each of these systems belongs to a different dimensionality class, different interactions act on individual atomic spins, and hence the microscopic nature of the interaction responsible for the DMI is different. It has been discussed that the DMI between the deposited individual magnetic atoms may change its sign as a function of the separation distance in an oscillatory manner [15]. A similar behavior is predicted for magnetic chains deposited on stepped surfaces [16]. In the case of ultrathin magnetic films, multilayers and nanostructures the atomistic DMI vectors are linked to the symmetry of the underlying lattice and hence form an array of chiral vectors. In such a case the interesting feature would be that any change in the sign of the DMI vectors would lead to a chirality inversion within this array and a possible local change in the winding number \(\mathcal{Q}\). Unfortunately, the experimental proof of any sign-change in the DMI vectors in such planar magnets has remained elusive, since a quantitative experimental determination of the atomistic DMI vectors is challenging. It is well-known that in 2D systems DMI leads to the formation of spin textures, e.g., magnetic helices [17], spirals [18], skyrmions [19; 20], antiskyrmions [21], merons [22] and antimerons [23], some of which are topologically protected. For an experimental design of a specific spin texture a detailed knowledge of the fundamental magnetic interactions in such 2D systems is of prime importance. Hence, in the case of magnetic structures of itinerant electron character grown on substrates with a strong SOC one may ask the following questions. (i) Is DMI in such systems as long ranged as HEI? (ii) Is the sign of \(\mathbf{D}_{ij}\) only given by the symmetry of the
lattice? (iii) Can the sign of \(\mathbf{D}_{ij}\) change with interatomic distance, in a similar fashion to \(J_{ij}\)? If this is true, one would expect a chirality inversion of DMI when comparing the nearest neighbor interaction to the interaction between spins located at longer distances. This would provide guidelines for designing novel spin textures, e.g., an atomic-scale skyrmionium with \(\mathcal{Q}=0\) or more complex spin textures including skyrmions and skyrmioniums.
In this Letter we will provide answers to all these fundamental questions. We will show that the pattern of \(\mathbf{D}_{ij}\) in low-dimensional itinerant magnetic structures grown on heavy-element metallic substrates can be rather complex. We introduce magnon spectroscopy as a versatile tool to resolve such complex patterns of DMI. We provide direct experimental evidence for a change of the chirality index of DMI, which manifests itself in the asymmetry of the magnon dispersion relation. We will shed light on the origin of the observed effect and provide guidelines for quantum engineering of DMI in low-dimensional magnets on the atomic scale.
We examine an epitaxial Co double layer on Ir(001) as a representative of 2D systems. All the experimental details are provided in Supplemental Material [24]. We have shown earlier that the presence of DMI would lead to an asymmetry in the magnon dispersion relation [25; 26; 27] and hence magnon spectroscopy provides a way to identify DMI [28; 29]. The magnon dispersion relation was probed along the \(\bar{\Gamma}\)-\(\bar{\mathrm{X}}\) direction of the surface Brillouin zone by means of spin-polarized high-resolution electron energy-loss spectroscopy (SPHREELS) [30; 31; 32]. Typical SPHREEL spectra are presented in Fig. 1. The spectra were recorded for different spin polarizations of the incoming electron beam. \(I_{\downarrow}\) (\(I_{\uparrow}\)) represents the intensity spectrum when the spin polarization of the incoming electron beam was parallel (antiparallel) to the ground state magnetization \(\mathbf{M}\). The difference spectrum \(I_{\downarrow}-I_{\uparrow}\) provides all the necessary information regarding the magnons, e.g., their energy and lifetime [33; 34]. In this experiment \(\mathbf{M}\) was parallel to the \([\overline{1}10]\)-direction and the magnon wavevector \(\mathbf{Q}\) was along the \([\overline{1}\overline{1}0]\)-direction (see Figs. 1 and 2).
DMI lifts the degeneracy of magnons having the same value of the wavevector but opposite propagation directions. Therefore, measuring the energy asymmetry \(\Delta\varepsilon(\mathbf{Q})=\varepsilon(\mathbf{Q})-\varepsilon(-\mathbf{Q})\) of the magnon dispersion relation provides a way to quantify the strength of DMI [26]. However, experimental determination of this asymmetry is not trivial, as probing the magnons with opposite orientations of \(\mathbf{Q}\) requires a change in the scattering geometry, which may lead to unwanted effects. We have shown that a more accurate way is to perform a time-inversion experiment, keeping the scattering geometry unchanged [27]. This can be realized by reversing the direction of \(\mathbf{M}\). The energy asymmetry can, therefore, be defined as \(\Delta\varepsilon(\mathbf{Q})=\varepsilon_{\mathbf{M}\parallel[\overline{1}10]}(\mathbf{Q})-\varepsilon_{\mathbf{M}\parallel[1\overline{1}0]}(\mathbf{Q})\), where \(\varepsilon_{\mathbf{M}\parallel[\overline{1}10]}(\mathbf{Q})\) and \(\varepsilon_{\mathbf{M}\parallel[1\overline{1}0]}(\mathbf{Q})\) denote the magnon energy with the wavevector \(\mathbf{Q}\) when \(\mathbf{M}\) is parallel to the \([\overline{1}10]\)- and \([1\overline{1}0]\)-direction, respectively. Figure 2 shows the difference spectrum recorded for \(Q=0.65\) Å\({}^{-1}\) and for two different orientations of \(\mathbf{M}\), i.e., \([\overline{1}10]\) and \([1\overline{1}0]\).
Data presented in Fig. 2 unambiguously indicate that
Figure 1: Typical SPHREELS spectra recorded at a wavevector of \(Q=0.65\) Å\({}^{-1}\) on a Co double layer epitaxially grown on Ir(001). The spectra are recorded at the incident energy of \(E_{i}=8\) eV and at room temperature. The red and blue spectra, denoted by \(I_{\downarrow}\) and \(I_{\uparrow}\), are recorded with the spin polarization vector of the incident electron beam being parallel and antiparallel to the magnetization \(\mathbf{M}\), respectively. The difference spectrum \(I_{\downarrow}-I_{\uparrow}\) is shown by the sea-green color. The scattering geometry is schematically illustrated in the inset. The energy and wavevector of the incident (scattered) beam are shown by \(E_{i}\) and \(\mathbf{k_{i}}\) (\(E_{f}\) and \(\mathbf{k_{f}}\)), respectively.
Figure 2: Difference spectra recorded at \(Q=0.65\) Å\({}^{-1}\) for the two opposite directions of magnetization, i.e., \(\mathbf{M}\parallel[\overline{1}10]\) (sea-green solid circles) and \(\mathbf{M}\parallel[1\overline{1}0]\) (red solid circles). In order to easily compare the spectra with different magnetization directions, the same spectra multiplied by \(-1\) are shown as well. The magnon propagation direction (wavevector \(\mathbf{Q}\)) with respect to the principal directions of the Co layers and \(\mathbf{M}\) is schematically sketched on the right side.
in this system DMI is present. However, the value of \(\Delta\varepsilon\) is only about 4 meV, which seems rather small at first glance. In order to shed light on the origin of this low value of \(\Delta\varepsilon\) and also to further quantify the atomistic DM vectors we have probed the so-called dispersion relation of the DM energy, i.e., the energy asymmetry versus wavevector \(\Delta\varepsilon(\mathbf{Q})\). A summary of the results from several different experiments performed on different samples (thicknesses of 1.8, 2 and 2.4 atomic layers) is presented in Fig. 3. Since the quantity \(\Delta\varepsilon(\mathbf{Q})\) is antisymmetric with respect to \(\mathbf{Q}\), the data seem to be mirrored with respect to the origin. Note that at low wavevectors the dipolar scattering along with the large SOC of the system leads to interesting spin-dependent effects (a discussion of such effects is out of the scope of the present Letter, see also Supplementary Note II of [24]). We, therefore, focus on the spectra collected away from the dipolar scattering regime, where those effects are absent. Instead, we show the measurements with small wavevector steps. Here the most important observations are: (i) the energy asymmetry is unexpectedly low, and (ii) more importantly, the maximum and the minimum of \(\Delta\varepsilon(\mathbf{Q})\) are located in the second half of the \(\bar{\Gamma}\)-\(\bar{\mathrm{X}}\) path. Both observations are clear evidence of a chirality-inverted DMI (see below).
We resort to first-principles calculations of DMI, in order to gain further insight into the physics of these observations [24]. Our first-principles calculations are based on the fully relativistic Korringa-Kohn-Rostoker electronic structure method [35]. Details of the scheme used to compute the above quantities within the general framework of the magnetic force theorem [36] are given in Ref. [37]. The experimental interatomic distances were used as the input of the calculations [38; 39; 40; 41; 42].
The results of the calculations are summarized in Figs. 3(b) and (c), where we show the DM vectors describing the antisymmetric interaction between the Co atoms only. The values are similar to the literature values for similar systems [20; 21; 25; 43; 44; 45; 46; 47; 48; 49]. In Fig. 3(b) the interaction of a Co atom located at position \((0,0)\) within the interface layer, adjacent to the Ir(001) surface, with the other Co atoms at position \(\mathbf{R}_{j}\) in the same layer (sea-green color) or located in the exposed surface layer (orange color) is shown. Similarly, Fig. 3(c) displays the DMI between a Co atom located within the surface layer at \((0,0)\) and the other Co atoms sitting at sites \(\mathbf{R}_{j}\) either in the same layer (orange color) or located in the deeper interface layer (sea-green color).
Figure 3: (a) Dispersion relation of the DM energy \(\varepsilon(\mathbf{Q})\). The experimental data are shown by the open circles. Different colors indicate the results of different (but similar) samples. The error bars represent the standard deviations in the values of \(\Delta\varepsilon\). The results of _ab initio_ calculations are shown by the solid red curve. The black dashed curve indicates the artificial assumption of all DMI having alike counter clockwise chirality. (b) and (c) DM vectors from _ab initio_ calculations when the origin site (0,0) is located in the interface Co layer (b) and in the surface Co layer (b). In the ball representation of the Co atoms the sea-green and orange colors indicate the interface and surface layers, respectively. The color scales represent the order of nearest neighbors. The chirality index is also encoded in the color code. More red (blue) means a counter clockwise (clockwise) rotation is favored.
chirality index (\(c<0\), CCW rotation).
Based on the values obtained from first principles, we calculated the dispersion relation of the DM energy and the results are shown by the solid red curve in Fig. 3(a) [24]. In line with the experiment one observes that the dispersion relation of the DM energy exhibits its extrema at \(\pm 0.7\) A\({}^{-1}\). It is apparent that such an observation cannot be understood if all DMI terms possess the same chirality. This is demonstrated by the black dashed curve shown in Fig. 3(a). In such a case one must observe a rapid increase of \(\Delta\varepsilon(\mathbf{Q})\) as \(Q\) increases with extrema located in the first half of the \(\bar{\Gamma}\)-\(\bar{\mathrm{X}}\) symmetry direction, i.e., below \(0.5\) A\({}^{-1}\) and a rapid decrease at larger wavevectors. In the region of small wavevectors \(\Delta\varepsilon(\mathbf{Q})\) can be approximated by terms linear with respect to \(Q\). Hence, only if the system exhibits a chirality inversion of DMI one would observe a reduction of the DM energy in this region, as a result of competing terms with opposite signs. Note that the higher order Heisenberg type of exchange interactions are all of symmetric nature and hence do not appear in \(\Delta\varepsilon(\mathbf{Q})\). Moreover, other interactions of chiral character are expected to be much weaker than DMI and their contribution to \(\Delta\varepsilon(\mathbf{Q})\) may be neglected (see Supplementary Note II of [24]).
Our results clarify the long-standing question regarding the small micromagnetic DMI of the Co/Ir interface (cf. Refs. [51; 52] and references therein), which seems to be due to cancellation effect of negative and positive terms. The overall (micromagnetic) DMI is still negative (CCW rotation) [24].
In order to shed light on the origin of the observed chirality-inverted DMI, one may start with the model by Fert and Levy for a three-sites interaction [53; 54]. According to this model \(\mathbf{D}_{ij}\) between \(\mathbf{S}_{i}\) and \(\mathbf{S}_{j}\) mediated by a nonmagnetic atom sitting on site \(n\) is given by \(\mathbf{D}_{ij}=\frac{D_{ij}^{0}}{\mathcal{R}_{ij}}\sum_{n}\mathbf{R}_{in} \cdot\mathbf{R}_{jn}\left(\mathbf{R}_{in}\times\mathbf{R}_{jn}\right)/\left( R_{in}R_{jn}\right)^{3}\)[53; 54; 55]. Here \(\mathbf{R}_{in}\) and \(\mathbf{R}_{jn}\) are the displacement vectors and \(D_{ij}^{0}=C\sin[\gamma(E_{\mathrm{F}})]\sin[k_{\mathrm{F}}(R_{in}+R_{jn}+R_ {ij})+\gamma(E_{\mathrm{F}})]\), where \(C\) is a constant proportional to the SOC strength and the strength of the interaction between the magnetic sites \(i\) and \(j\), and \(\gamma\) is the scattering phase shift of the electrons on site \(n\), which, in turn, is related to the number of available \(d\) electrons and the Fermi energy \(E_{F}\) or wavevector \(k_{F}\). The vector product determines the direction of \(\mathbf{D}_{ij}\). Considering the lattice, as shown in Figs. 3(b) and (c), the symmetry of the pairs of the magnetic atoms remains unchanged for all the neighbors. One would, therefore, expect the same sign for all \(\mathbf{D}_{ij}\), as given by the vector product. For \(D_{ij}^{0}>0\) one, would therefore expect a CCW rotation of \(\mathbf{D}_{ij}\) and hence a negative chirality index (\(c<0\)). However, depending on the quantities appearing in the argument of sinus functions as well as on the sign of \(C\) the prefactor \(D_{ij}^{0}\) can either be positive or negative. Any change in the sign of \(\mathbf{D}_{ij}\) must, therefore, be due to a sign change of \(D_{ij}^{0}\). Conditions under which \(D_{ij}^{0}\) can be positive or negative depend on the details of the electronic structures. Hence, it is not straightforward to provide simple guidelines for which system such an effect is expected (see also Supplementary Note III of [24]). Moreover, one has to recall that the coupling between Co atoms at sites \(i\) and \(j\) is mediated by electrons of surrounding atoms, both Co as well as Ir atoms, hybridized with the electrons of the interacting Co atoms [56]. The SOC-induced spin mixing as well as the itinerant nature of the electrons in the system are essential for the observed change in the chirality index of DMI via a mechanism similar to (but beyond) that of the RKKY interaction [2; 3; 4]. Interestingly, a similar behavior has also been predicted for DMI in Co/Ir/Pt(111) [45], Ru/Co/Ir(111) and Mn/Re(0001) [57] and is expected to occur in many other combinations of \(3d\) magnetic films, multilayers and nanostructures in contact with metallic substrates. Note that the observed chirality-inverted DMI reported here is different from the layer-dependent DMI discussed in Ref. [58] (see Supplemental Note II of [24]). Our analysis of \(J_{ij}\) shows that there is no direct relation between the sign of \(\mathbf{D}_{ij}\) and that of \(J_{ij}\)[24], in line with the results of Ref. [45]. Therefore, not only the overlap of the electronic wavefunctions but also the spin mixing and orbital contribution, as a consequence of SOC must be important [48; 59]. All these effects are tightly connected to the degree of the hybridization of the electronic states of the magnetic atoms with those of the substrate atoms [60; 61; 47].
In conclusion, by probing the dispersion relation of the DM energy, we quantified the atomistic DMI in a model system of ultrathin ferromagnet on heavy-element substrate, i.e., a Co double layer epitaxially grown on Ir(001). Our detailed analysis of the DM energy dispersion showed that the pattern of the atomistic DMI in epitaxial magnetic structures can be very complex. Upon the increase of the interatomic distances DMI can change its chirality index from positive to negative and vice versa, even though the symmetry of the system is unchanged. The effect is in analogy to the oscillatory HEI in ferromagnetic metals. The phenomenon was explained by comparing the experimental results to those of _ab initio_ density functional theory calculations and was attributed to the strong electronic hybridizations, the role of orbital degree of freedom and the presence of the spin-mixed itinerant electrons. The observed complex pattern of DM vectors is a general feature across different systems and is expected to be present in many magnetic structures grown on heavy-element metallic substrates (see Supplementary Note III [24]). Beside providing new insights into the microscopic origin of DMI, our results offer new routes to tune this fundamental interaction on the atomic scale. Moreover, our work showcases how magnon spectroscopy, which directly probes the dispersion of DM energy, can be used to quantify the atomistic DMI in great detail and experimentally identify any
chirality-inverted DMI.
## Acknowledgments
Financial support by the Deutsche Forschungsgemeinschaft (DFG) through the DFG Grants ZA 902/7-1 and ZA 902/8-1 and the Heisenberg Programme (Grants ZA 902/3-1 and ZA 902/6-1) is acknowledged. Kh.Z. thanks the Physikalisches Institut for hosting the group and providing the necessary infrastructure. A.M. gratefully acknowledges partial financial support by the Czech Science Foundation grant GA CR 23-04746S, by the Deutscher Akademischer Austauschdienst program "Bilateral exchange of academics", and computational resources by the IT4Innovation grant OPEN-24-35 "CHIR-SPIN".
|
2309.05412 | B meson decays in covariant confined quark model | The aim of this text is to present the covariant confined quark model (CCQM) and
review its applications to the decays of $B$ mesons. We do so in the context of
existing experimental measurements and theoretical results of other authors,
which we review also. The physics principles are in detail exposed for the
CCQM, the other results (theoretical and experimental) are surveyed in an
enumerative way with comments. We proceed by considering successively three
categories of decay processes: leptonic, semileptonic and non-leptonic. | Stanislav Dubnička, Anna Zuzana Dubničková, Mikhail Alekseevich Ivanov, Andrej Liptaj | 2023-09-11T12:26:26Z | http://arxiv.org/abs/2309.05412v2 | # B meson decays in covariant confined quark model
###### Abstract
The aim of this text is to present the covariant confined quark model (CCQM) and review its applications to the decays of \(B\) mesons. We do so in the context of existing experimental measurements and theoretical results of other authors, which we also review. The physics principles of the CCQM are exposed in detail; the other results (theoretical and experimental) are surveyed in an enumerative way with comments. We proceed by considering successively three categories of decay processes: leptonic, semileptonic and non-leptonic.
\({}^{1}\) Institute of Physics, Slovak Academy of Sciences,
Bratislava, Slovakia
\({}^{2}\) Faculty of Mathematics, Physics and Informatics,
Comenius University, Bratislava, Slovakia
\({}^{3}\) Bogoliubov Laboratory of Theoretical Physics,
Joint Institute for Nuclear Research, Dubna, Russia
\({}^{\ddagger}\) [email protected]
## 1 Introduction
The confinement property of quantum chromodynamics (QCD) implies that it is not possible to study the strong force using the scattering of free quarks. The confinement itself being a manifestation of the strong force, one cannot but analyze more complex systems such as hadrons, i.e. bound states of quarks. All hadrons are colorless (white) objects, among them mesons, which consist of only two quarks. Even though no stable mesons exist, meson physics is often seen as the simplest testing ground of QCD.
Various measurements have so far provided us with a large amount of experimental data (masses, decay rates), which challenges our ability to provide theoretical predictions. For the above-mentioned reasons, the perturbative calculations performed at the partonic level need to be complemented by the so-called hadronic effects, which are non-perturbative in nature and originate in the long-range interaction between quarks and gluons. As of now, we do not have a well-established general method for a reliable computation of hadronic effects for arbitrary processes from first principles.
Our ability to describe mesons and other QCD states without model dependence is limited, yet improves in time. Light meson physics is often treated within chiral perturbation theory (ChPT), based on an (approximate) flavor chiral symmetry of QCD which is spontaneously broken. Assuming this symmetry together with constraints from analyticity and unitarity, phenomenological Lagrangians were proposed in [1]. This made it possible to reproduce the results of the complicated methods of current algebra. In [1] the Lagrangians were given in the leading order only; the extension of this approach which included meson loops was formulated in two original papers [2, 3]. Since then, ChPT has proved to be a successful effective field theory approach with remarkable results [4, 5]; however, the large masses of the quarks other than \(u\), \(d\) and \(s\) exclude heavy-quark physics from its applicability range.
A different approach is represented by non-perturbative methods, such as the Dyson-Schwinger equations. The latter were formulated decades ago [6, 7, 8] in terms of an infinite number of coupled differential equations imposed on the Green functions of the theory. With the necessary simplifications, results were derived first for abelian theories. Then the approach was extended
also to the more complicated case of non-abelian theories [9], thus including QCD and hadronic physics. The application to heavy quarks was for the first time presented in [10].
A distinctive non-perturbative theoretical technique to investigate strong interaction physics is provided by the QCD sum rules [11, 12]. The central objects of interest are the correlation functions of interpolating quark currents, treated using the operator product expansion (OPE) and expressed in terms of a perturbative continuum contribution and a low energy parameterization. These are then matched assuming the quark-hadron duality. The results are derived in the form of sum rules; the uncertainties have to take into account the various necessary approximations. Among others, the results for leptonic decay constants and hadron transition form factors have been derived [13, 14].
In the domain of heavy meson physics (which we are interested in) a specific tool is available: the approximate realization of the heavy quark symmetry gives rise to the heavy quark effective theory (HQET) [15, 16, 17]. The symmetry appears when the mass of the heavy quark goes to infinity and is the combination of a heavy quark flavor symmetry and the independence of hadronic properties on the heavy quark spin state. It allows for important simplifications and leads to results expanded in the inverse of the heavy quark mass.
An important model-independent approach with possibly very broad applicability is represented by numerical QCD calculations on the lattice. Here important progress was made over the last decades [18]; nowadays predictions of form factors in weak decays of heavy particles have become available [19, 20, 21, 22]. The potential of the method is immense since, as is evident from [23], the bulk of the experimental data in high-energy physics is related to hadrons, and explaining them at the few-percent accuracy level would be a triumph.
However, we are not at this point now and the possibility for lattice calculations to become the mainstream of theoretical predictions will depend on future developments. Thus, despite the important achievements of lattice QCD, model-dependent methods remain the most popular and versatile tools in making QCD predictions with hadronic effects included. This is mainly due to the fact that lattice QCD remains limited to a narrow set of specific processes, while the model framework can usually be easily adapted to various settings, thus making predictions easier to produce. This is especially true in relation to the B factories, i.e. very high-luminosity accelerator facilities nowadays in operation, where a large number of various heavy hadron decays is registered and measured. Many of these approaches can be described as "quark" models, since they describe the hadron by considering its valence quarks using some specific assumptions or ansätze (see e.g. [24, 25]).
The Nambu-Jona-Lasinio (NJL) model, based on the ideas of Y. Nambu and G. Jona-Lasinio (see the original papers, Refs. [26, 27]), is widely used in the low-energy phenomenology of light quarks \((u,d,s)\). The hadron masses are generated due to the spontaneous breaking of chiral symmetry, where the pion plays the role of the Goldstone boson. This approach found many applications in light meson physics due to the simplicity of calculations; for a review see, e.g., Ref. [28]. Some efforts have been made to extend the NJL model for applications to heavy mesons, taking into account the heavy quark symmetry [29, 30]. In our early paper [31], which was a predecessor of the CCQM, a clear relation was shown between the so-called compositeness condition (addressed later) and the requirement of the correct normalization of the kinetic term in the NJL Lagrangian after the spontaneous breaking of the chiral symmetry.
As far as quark models are concerned, for weak decays they are usually combined with a perturbative computation at the quark level. Here, it is customary to use an effective four-fermion theory derived using the OPE and governed by the low-energy Hamiltonian
\[{\cal H}^{b\to q}_{\rm eff.}=\frac{G_{F}}{\sqrt{2}}V_{tb}V_{tq}^{*}\sum_{i}C_{ i}(\mu)Q_{i}(\mu) \tag{1}\]
here written for the \(b\to q\in\{s,d\}\) transition. \(Q_{i}(\mu)\) are local operators expressed in terms of quark fields, \(C_{i}(\mu)\) are the Wilson coefficients which can be evaluated perturbatively, \(V_{ij}\) are Cabibbo - Kobayashi - Maskawa (CKM) matrix elements and \(\mu\) is the QCD renormalization scale. Its value is set to a typical momentum transfer, which is for weak decays significantly smaller than the \(W\) mass. Thus \(W\) is effectively removed from (1); it enters only the computation of \(C_{i}(\mu)\). An excellent overview of weak decays is given in [32].
The heavy decay processes are of special interest to the particle physics community for several reasons [33]. One of them is the determination of the CKM matrix elements and the study of related questions such as the CP violation, the unitarity triangle, baryogenesis and weak physics in general. Further, B factories are used to search for new exotic states including tetraquarks, pentaquarks, glueballs and so on. The collected data also allowed the study of fragmentation processes, tests of lepton universality, investigations of possible lepton flavor violation, and addressing questions related to new, beyond Standard Model (SM) physics [34, 35].
Indeed, various new physics (NP) scenarios [36, 37, 38, 39, 40, 41, 42] predict deviations from the SM in B meson decay processes. Because of the very high luminosity of present-day colliders, there is hope that even rare (small in number) deviations from the SM physics can be detected.
We present here how the covariant confined quark model (CCQM) [43] has been used to investigate b-physics processes. A dedicated effort was made in previous years and decades to cover most of the measured B meson data, and since they are large in number we believe it is appropriate to review them. We provide in this text an overview of the results from the perspective of the CCQM, but we also point to contributions and achievements of other approaches and authors. With some exceptions, the majority of the outcome was formulated in terms of SM predictions which were compared to data. In this way possible tensions or deviations were identified or hypotheses about the nature of an exotic state were expressed. This then points to possible NP phenomena or to a better understanding of exotic particles, especially when there is an agreement with other theoretical works too.
The large quantity of various B-related results which have been published in the past does not allow us to review each decay in full detail. We therefore define three categories, and for each we present a demonstrative calculation with one or two example processes. The categories are leptonic, semileptonic and non-leptonic (radiative) decays.
The text is structured as follows: In Sec. 2 the general features of the CCQM are presented. The following three sections are dedicated to specific process categories, as mentioned above. Each has three subsections, one with a general overview, the second presenting in more details the computations for a chosen example process and the third where results obtained within the CCQM framework are summarized. The text ends with conclusion and outlook.
## 2 Covariant confined quark model
The key points for the model construction are
* Lorentz symmetry and invariant Lagrangian,
* compositeness and double counting,
* confinement of quarks,
* gauge symmetry and inclusion of electromagnetic (EM) fields,
which we address in this order. In an additional subsection we also briefly describe the computational techniques.
### Lagrangian
To construct a theory with Lorentz symmetry one naturally resorts to a Lagrangian formulation. This is done for the CCQM, which is an effective field approach in which both quark and hadronic fields occur. The quark-meson interaction term is written as
\[\mathcal{L}_{int}=g_{M}M\left(x\right)J_{M}\left(x\right)+\text{ H.c. }\,,\quad J_{M}\left(x\right)=\int dx_{1}\int dx_{2}F_{M}\left(x;x_{1},x_{2} \right)\overline{q}_{2}\left(x_{2}\right)\Gamma_{M}q_{1}\left(x_{1}\right), \tag{2}\]
where \(M\) represents the mesonic field, \(q\) the quark one, \(g_{M}\) is their coupling and H.c. stands for the Hermitian conjugate. The interpolating quark current \(J_{M}\) is non-local and the integral over the positions \(x_{1}\), \(x_{2}\) of constituent quarks is weighted by a vertex function \(F_{M}\). The symbol \(\Gamma_{M}\) represents a combination of gamma matrices depending on the spin of \(M\). For a scalar \(M\) one has \(\Gamma_{M}=1\), for pseudoscalar \(\Gamma_{M}=\gamma^{5}\) and for a vector particle the expression is \(\Gamma_{M}=\gamma^{\mu}\). In the latter case the mesonic field has a Lorentz index too (\(M_{\mu}\)) and the indices are contracted.
It is interesting to see what happens in the case of a local interaction, when \(F_{M}(x;x_{1},x_{2})=\delta(x-x_{1}-x_{2})\). Then one clearly observes that the interaction Lagrangian given by Eq. (2), together with the free meson and quark Lagrangians, corresponds to the NJL model after bosonization.
The explicit form of \(F_{M}\) is driven by two requirements. First, the positions of the quarks are constrained so that the hadron is situated at their barycenter. For this, a delta function is introduced whose argument is weighted by factors depending on the constituent quark masses, \(w_{i}=m_{i}/(m_{1}+m_{2})\). Second, to manifestly respect the Lorentz symmetry, the remaining dependence is written as a function of the spacetime interval
\[F_{M}\left(x;x_{1},x_{2}\right)=\delta\left(x-w_{1}x_{1}-w_{2}x_{2}\right) \Phi_{M}\left[\left(x_{1}-x_{2}\right)^{2}\right]. \tag{3}\]
Further steps in the construction of \(F_{M}\) are done with regard to computational convenience. \(\varPhi_{M}\) is assumed to have a Gaussian form in the momentum representation
\[\varPhi_{M}\left[\left(x_{1}-x_{2}\right)^{2}\right]=\int\frac{d^{4}k}{\left(2 \pi\right)^{4}}e^{-ik\left(x_{1}-x_{2}\right)}\widetilde{\varPhi}_{M}\left(-k ^{2}\right),\quad\widetilde{\varPhi}_{M}\left(-k^{2}\right)=e^{k^{2}/\Lambda_{M }^{2}}, \tag{4}\]
where \(\Lambda_{M}\) is a free parameter of the model related to the meson \(M\). The square of the momentum in the argument of the exponential becomes negative in the Euclidean region \(k^{2}=-k_{E}^{2}\), which implies an appropriate fall-off behavior and removes ultraviolet divergences in Feynman diagrams. The question of other possible functional forms of \(\varPhi_{M}\) was addressed in [31], where four different ansätze were tested, each having a meaningful physical interpretation. The dependence of the results on the functional form was found to be small.
The S-matrix is constructed from the interaction Lagrangian as \(S=T\exp\{i\int d^{4}x\mathcal{L}_{\text{int}}(x)\}\). The calculation of the matrix elements of \(S\) proceeds in a standard manner: first, by making the convolution of the quark fields with the help of the T-product, and, second, by using the Fourier transforms of the quark propagators and vertex functions to go to the momentum space. Note that we use the ordinary local forms of the quark propagators \(S(k)=1/(m_{q}-\not\!k)\) in our approach.
Besides the hadron-related \(\Lambda_{M}\), the CCQM comprises five "global" parameters: four constituent quark masses and one universal cutoff, which plays a role in the quark confinement (as explained later). The values, expressed in GeV, are
\[m_{q}=m_{u,d}=0.241,\quad m_{s}=0.428,\quad m_{c}=1.67,\quad m_{b}=5.05,\quad \lambda=0.181, \tag{5}\]
where one does not distinguish between the two light quarks and uses the same mass for both. The values slightly changed in the past; they were updated a few times [44, 45] when significant new data became available. They were extracted by over-constrained global fits of the model to the available experimental points.
The CCQM does not include gluons. The gluonic effects are effectively taken into account by the vertex function which is adjusted to describe data by tuning the free parameter it contains.
Finally, we mention that the CCQM is suitable for the description of various multi-quark states including baryons [46, 47] and tetraquarks [48]. In this text we focus on mesons; in the other cases the approach is very similar: the interpolating quark current is constructed for a given number of quarks (more alternatives can be considered) and multiplied by the hadronic field to give the interaction Lagrangian.
### Compositeness condition
The interaction of a meson is given by the Lagrangian (2): the meson fluctuates into its constituent quarks, these interact and afterwards combine back into a mesonic final state. Yet, (2) implies that both quarks and mesons are elementary, and this raises concerns about double counting.
These questions were addressed by implementing the so-called compositeness condition [44, 43, 49], which originates in the works [50, 51, 52] (see [53] for a review). The interaction of a meson through the creation of virtual quark states implies that the mesonic field is dressed, i.e. its vertex and wave function need to be renormalized. This is reflected in the renormalization constant \(Z_{M}\), which can be interpreted as the overlap between the physical state and the corresponding bare state. By requiring \(Z_{M}=0\) one makes this overlap vanish, i.e. the physical state does not contain the bare state and can be regarded as a bound state. As a consequence, quarks exist only as virtual states and quark degrees of freedom do not appear at the level of the physical state.
\(Z_{M}\) is expressed in terms of the derivative of the meson mass operator \(\Pi_{M}^{{}^{\prime}}\) (its scalar part for vector mesons)
\[Z_{M}=1-g_{M}^{2}\Pi_{M}^{{}^{\prime}}\left(m_{M}^{2}\right)=0 \tag{6}\]
Figure 1: Meson mass function diagram.
and at one-loop level (Fig. 1) is given by
\[\Pi^{{}^{\prime}}_{PS}(p^{2}) =\frac{-i}{2p^{2}}p^{\alpha}\frac{d}{dp^{\alpha}}\int d^{4}k\,\widetilde{\Phi}^{2}_{PS}(-k^{2})\mathrm{tr}\left[\gamma^{5}S_{1}(k+w_{1}p)\gamma^{5}S_{2}(k-w_{2}p)\right], \tag{7}\] \[\Pi^{{}^{\prime}}_{V}(p^{2}) =\frac{-i}{3}\left(g_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^{2}}\right)\frac{1}{2p^{2}}p^{\alpha}\frac{d}{dp^{\alpha}}\int d^{4}k\,\widetilde{\Phi}^{2}_{V}(-k^{2})\mathrm{tr}\left[\gamma^{\mu}S_{1}(k+w_{1}p)\gamma^{\nu}S_{2}(k-w_{2}p)\right], \tag{8}\]
for pseudoscalar and vector mesons respectively. The symbol \(S_{i}\) denotes the quark propagator \(S_{i}=1/(m_{q_{i}}-\gamma^{\mu}k_{\mu})\) and the differentiation is done using the identity \(d\Pi/dp^{2}=(p^{\mu}\,d\Pi/dp^{\mu})/(2p^{2})\).
To reach the equality (6) one profits from the up-to-now undetermined coupling constant \(g_{M}\) and tunes its value so that (6) is satisfied. As a consequence, the coupling \(g_{M}\) becomes a function of \(\Lambda_{M}\). In this way the number of parameters of the model is reduced and one increases its predictive power and stability. If the values \(\Lambda_{M}\) and \(g_{M}\) are unknown from previous studies, their determination is the first step in the application of the CCQM.
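Explicitly, since (6) is linear in \(g_{M}^{2}\), it can be solved in closed form,

\[g_{M}=\left[\Pi^{{}^{\prime}}_{M}(m_{M}^{2})\right]^{-1/2},\]

so that, for a fixed meson mass, the coupling inherits its dependence on \(\Lambda_{M}\) entirely through the mass operator derivative (7)-(8).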
As will be discussed in the next sections, the adjustable parameters of the model (quark masses, size parameters and the infrared cutoff) are determined by fitting the experimental data of physical observables. For instance, in the case of the B meson, the size parameter is found to be equal to \(\Lambda_{B}=1.96\) GeV. The compositeness condition then gives the numerical value of the coupling constant \(g_{B}=4.80\).
### Infrared confinement
The CCQM is a successor of the so-called relativistic constituent quark model (see [54]), and in [43] it was proposed to refine the latter by effectively implementing quark confinement into it. This was motivated by data on heavy particles which required an extension to situations where the hadron is heavier than the sum of its constituent quark masses. To prevent the decay into free quarks in such a scenario, a technique inspired by confined propagators is used. Here the propagators are written in the Schwinger representation and a cutoff is introduced in the upper integration limit. The propagator then becomes an entire function
\[S_{i}(k)=(m_{q_{i}}+\not{k})\int_{0}^{\infty}d\alpha\,e^{-\alpha(m_{q_{i}}^{2}-k^{2})}\rightarrow(m_{q_{i}}+\not{k})\int_{0}^{1/\lambda^{2}}d\alpha\,e^{-\alpha(m_{q_{i}}^{2}-k^{2})}=(m_{q_{i}}+\not{k})\,\frac{1-e^{-(m_{q_{i}}^{2}-k^{2})/\lambda^{2}}}{m_{q_{i}}^{2}-k^{2}}, \tag{9}\]
where the absence of singularities indicates the absence of a single quark asymptotic state. A modified version of this strategy was adopted and the cutoff was applied to the whole structure \(F\) of the Feynman diagram. It can be formally written as
\[\Pi=\int_{0}^{\infty}d^{n}\alpha\,F(\alpha_{1},\ldots,\alpha_{n})=\int_{0}^{\infty\to 1/\lambda^{2}}dt\,t^{n-1}\int_{0}^{1}d^{n}\alpha\,\delta(1-\sum_{i=1}^{n}\alpha_{i})F(t\alpha_{1},\ldots,t\alpha_{n}) \tag{10}\]
and can be obtained by inserting the unity \(1=\int_{0}^{\infty}dt\,\delta(t-\sum_{i=1}^{n}\alpha_{i})\) into the expression on the left hand side. The single cutoff (indicated by the arrow) in the \(t\) variable is done in the last step; the remaining integration variables are confined to an \(n\)-dimensional simplex. After the cutoff is applied the integral becomes convergent for arbitrary values of the kinematic variables, meaning that quark thresholds are removed and quarks are never on the mass shell. The cutoff value \(\lambda\) in (5) is the same for all processes.
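To illustrate how the infrared cutoff acts, the following minimal Python sketch (our own illustration, not part of the actual computation chain, which relies on FORM and numerical integration) evaluates the scalar part of the regularized propagator (9) with the parameter values of (5), and shows that it remains finite where the free propagator \(1/(m_{q_{i}}^{2}-k^{2})\) would have its pole:

```python
import math

LAMBDA = 0.181  # universal infrared cutoff lambda in GeV, Eq. (5)
M_B = 5.05      # constituent b-quark mass in GeV, Eq. (5)

def scalar_propagator_confined(k2, m, lam=LAMBDA):
    """Scalar part of Eq. (9): (1 - exp(-(m^2 - k^2)/lam^2)) / (m^2 - k^2).

    An entire function of k^2: the would-be pole at k^2 = m^2 cancels
    against the zero of the numerator (the limit there is 1/lam^2).
    """
    x = (m**2 - k2) / lam**2
    if abs(x) < 1e-10:
        return 1.0 / lam**2
    return -math.expm1(-x) / (lam**2 * x)  # = (1 - exp(-x)) / (lam^2 x)

# No singularity when the quark would go on shell:
for k2 in (M_B**2 - 0.1, M_B**2, M_B**2 + 0.1):
    print(f"k^2 = {k2:7.3f} GeV^2 -> {scalar_propagator_confined(k2, M_B):8.3f} GeV^-2")
```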
### Electromagnetic interactions and gauge symmetry
Radiative decays represent another important class of processes measured at heavy meson factories. For their description one has to include the interactions with photons into the CCQM [43, 55]. Because of the non-local interaction Lagrangian this is not straightforward and requires a dedicated approach. Taking into account quarks and scalar mesons, the free parts of the Lagrangian are treated in the usual way, i.e. the minimal substitution is used
\[\partial^{\mu}\psi\rightarrow(\partial^{\mu}-ie_{\psi}A^{\mu})\psi,\quad \partial^{\mu}\bar{\psi}\rightarrow(\partial^{\mu}+ie_{\psi}A^{\mu})\bar{\psi}, \tag{11}\]
where \(\psi\in\{q,M\}\) and \(e_{\psi}\) is its electric charge in the units of the proton charge. One then gets
\[\mathcal{L}^{EM_{1}}=eA_{\mu}(x)J^{\mu}_{M}(x)+e^{2}A^{2}(x)M^{-}(x)M^{+}(x)+ \sum_{q}e_{q}A_{\mu}(x)J^{\mu}_{q}(x), \tag{12}\]
\[J^{\mu}_{M}(x)=i[M^{-}(x)\partial^{\mu}M^{+}(x)-M^{+}(x)\partial^{\mu}M^{-}(x)],\quad J^{\mu}_{q}(x)=\bar{q}(x)\gamma^{\mu}q(x). \tag{13}\]
The compositeness condition formulated above, however, prevents a direct interaction of the dressed particle, i.e. the meson, with photons: the contributions from the photon-meson tree-level diagram and the analogous diagrams with self-energy insertions into the external mesonic line determine the renormalization constant \(Z\), and \(Z=0\) implies that they cancel. The interaction thus proceeds only through intermediate virtual states.
The gauging of the non-local interaction (2) is done in a manner similar to [56]. First one multiplies the quark fields in (2) by a gauge field exponential
\[q_{i}(x)\to e^{-ie_{q_{i}}I(x_{i},x,P)}q_{i}(x),\quad I(x_{i},x,P)=\int_{x}^{x_{i}}dz_{\mu}A^{\mu}(z), \tag{14}\]
where \(P\) is the path connecting \(x_{i}\) and \(x\), the latter being the position of the meson. One can verify that the Lagrangian is invariant under the following gauge transformations
\[q_{i}(x)\to e^{ie_{q_{i}}f(x)}q_{i}(x),\quad\bar{q}_{i}(x)\rightarrow\bar{q}_{i}(x)e^{-ie_{q_{i}}f(x)}, \tag{15}\] \[M(x)\to e^{ie_{M}f(x)}M(x),\quad A^{\mu}(x)\to A^{\mu}(x)+\partial^{\mu}f(x), \tag{16}\]
here \(f(x)\) is some scalar function. The apparent path-dependence of the definition (14) is not an actual one: in the perturbative expansion only derivatives of the path integral appear and these are path independent
\[\frac{\partial}{\partial x^{\mu}}I(x,y,P)=A_{\mu}(x). \tag{17}\]
The individual terms of the Lagrangian are generated by expanding the gauge field exponential by orders in \(A^{\mu}\). At first order one has
\[\mathcal{L}^{EM_{2}}(x)=g_{M}M(x)\iiint dx_{1}\,dx_{2}\,dy\,E_{M}^{\mu}\left(x ;x_{1},x_{2},y\right)A_{\mu}(y)\overline{q}_{2}\left(x_{2}\right)\Gamma_{M}q _{1}\left(x_{1}\right), \tag{18}\]
where \(E_{M}\) is defined through its Fourier transform \(\widetilde{E}_{M}\): \((x_{1}-x,x_{2}-x,y-x)\stackrel{FT}{\leftrightarrow}(p_{1},p_{2},q)\)
\[\widetilde{E}_{M}^{\mu}(p_{1},p_{2},q) =\sum_{i=1,2}\vartheta_{i}e_{q_{i}}w_{i}(w_{i}q^{\mu}+\vartheta_{ i+1}2l^{\mu})\int_{0}^{1}dt\widetilde{\Phi}_{M}^{{}^{\prime}}\left[-t(w_{i}q+ \vartheta_{i+1}l)^{2}-(1-t)l^{2}\right], \tag{19}\] \[l =w_{1}p_{1}+w_{2}p_{2},\quad\vartheta_{i}=(-1)^{i}. \tag{20}\]
The symbol \(\widetilde{\Phi}_{M}^{{}^{\prime}}\) denotes the derivative with respect to the argument. In the corresponding Feynman diagrams the photon is attached to the non-local vertex.
### Computations
From the Lagrangian one derives the Feynman diagrams. Gaussian expressions in the vertex function (4) and in the Fock-Schwinger propagator (9) can be joined into a single exponent which takes a quadratic form in the loop momenta \(k\). It can be formally written as \(\exp(ak^{2}+2rk+z)\), \(a=a(\{\alpha\})\), \(r=r(\{\alpha\},\{p\})\), where \(\{\alpha\}\) denotes the set of Schwinger parameters and \(\{p\}\) external momenta. The exponential is preceded by a polynomial \(P\) in loop momenta which originates from the trace of Dirac matrices (numerators of propagators). Since the powers of \(k\) can be generated by differentiation with respect to \(r\), the loop momenta integration is formally written as
\[\int d^{4}k\,P(k)\exp(ak^{2}+2rk+z)=\exp(z)P\left(\frac{1}{2}\frac{\partial}{ \partial r}\right)\int d^{4}k\,\exp(ak^{2}+2rk). \tag{21}\]
Using the substitution \(u=k+r/a\), the argument of the exponential is transformed
\[\int d^{4}k\,\exp(ak^{2}+2rk)=\int d^{4}u\,\exp(au^{2}-r^{2}/a)=\exp(-r^{2}/a )\int d^{4}u\,\exp(au^{2}) \tag{22}\]
and the integration is performed in the Euclidean region as a simple Gaussian integral. Further, the differential operator and the \(r\)-dependent exponential can be interchanged
\[P\left(\frac{1}{2}\frac{\partial}{\partial r}\right)\exp\left(-\frac{r^{2}}{a }\right)=\exp\left(-\frac{r^{2}}{a}\right)P\left(-\frac{r}{a}+\frac{1}{2}\frac {\partial}{\partial r}\right) \tag{23}\]
which simplifies the action of the differential operator. One arrives at
\[\int_{0}^{\infty}d\alpha_{1}\cdots\int_{0}^{\infty}d\alpha_{n}F(\alpha_{1}, \ldots,\alpha_{n}), \tag{24}\]
where \(F\) represents the whole structure of the Feynman diagram including (23). A FORM [57] code is used to treat symbolic expressions: besides computing traces, it is also used to repeatedly apply the chain rule in (23) and arrive at an explicit formula with no differential operators appearing. The implementation of the infrared confinement as expressed by (10) is the last step before the numerical integration.
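The manipulations (21)-(23) are elementary to verify in a one-dimensional Euclidean toy case, where the Gaussian is damped; a short sympy check (an illustrative verification of the algebra, not the model code) could read:

```python
import sympy as sp

k, r = sp.symbols('k r', real=True)
a = sp.Symbol('a', positive=True)  # Euclidean region: damped Gaussian

# 1D analogue of Eq. (22): completing the square in exp(-a k^2 + 2 r k)
gauss = sp.integrate(sp.exp(-a*k**2 + 2*r*k), (k, -sp.oo, sp.oo))
print(sp.simplify(gauss - sp.sqrt(sp.pi/a)*sp.exp(r**2/a)))  # -> 0

# Eq. (21): the power k^2 is generated by (1/2 d/dr)^2 acting on the result
lhs = sp.integrate(k**2 * sp.exp(-a*k**2 + 2*r*k), (k, -sp.oo, sp.oo))
rhs = sp.diff(gauss, r, 2) / 4
print(sp.simplify(lhs - rhs))  # -> 0
```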
## 3 Leptonic decays of \(B\) mesons
### Overview
The large mass difference between heavy mesons and leptons implies, by phase-space arguments, small branching fractions of pure and radiative leptonic decays. Some of these are further suppressed by CKM elements or helicity. Thus for most leptonic decays only limits have been measured.
At the usual 95% confidence level a branching fraction measurement is available only for \(B_{s}^{0}\to 2\mu\)[58, 59, 60, 61] and \(B^{\pm}\to\tau^{\pm}\nu_{\tau}\)[62, 63, 64, 65]. If the criteria are loosened to (at least) one sigma significance, additional results can be cited: \(B^{0}\to 2\mu\)[58], \(B^{+}\to\mu^{+}\nu_{\mu}\)[66, 67] and \(B^{+}\to\tau^{+}\nu_{\tau}\)[68]. Limits are established [23] for \(B^{+}\to e^{+}\nu_{e}\), \(B^{+}\to e^{+}\nu_{e}\gamma\), \(B^{+}\to\mu^{+}\nu_{\mu}\gamma\), \(B^{+}\to\mu^{+}\mu^{-}\mu^{+}\nu_{\mu}\), \(B^{0}\to e^{+}e^{-}\), \(B^{0}\to e^{+}e^{-}\gamma\), \(B^{0}\to\mu^{+}\mu^{-}\gamma\), \(B^{0}\to\mu^{+}\mu^{-}\mu^{+}\mu^{-}\), \(B^{0}\to\tau^{+}\tau^{-}\), \(B_{s}^{0}\to e^{+}e^{-}\), \(B_{s}^{0}\to\tau^{+}\tau^{-}\) and \(B_{s}^{0}\to\mu^{+}\mu^{-}\mu^{+}\mu^{-}\).
These experimental results motivate various analyses. Pure leptonic decays are considered theoretically clean, with the main source of uncertainty represented by the hadronic effects of the initial state, which are contained in the leptonic decay constant of the hadron. The neutrino production process corresponds, in the leading order, to the annihilation of the constituent quarks into a virtual \(W\) boson which subsequently decays. The branching fraction is given by
\[\mathcal{B}(B^{+}\to\ell^{+}\nu)=\frac{G_{F}^{2}m_{B}m_{\ell}^{2}}{8\pi}\left(1-\frac{m_{\ell}^{2}}{m_{B}^{2}}\right)^{2}f_{B}^{2}|V_{ub}|^{2}\tau_{B^{+}}, \tag{25}\]
where \(G_{F}\) is the Fermi coupling constant, \(V_{ij}\) the CKM matrix element and \(\tau_{P}\) the lifetime of the particle \(P\).
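For orientation, Eq. (25) is easy to evaluate numerically; the sketch below inserts illustrative PDG-like values for \(f_{B}\) and \(|V_{ub}|\) (these inputs are our own choice for the example, not taken from the works discussed here):

```python
import math

G_F   = 1.1663787e-5           # Fermi constant, GeV^-2
m_B   = 5.279                  # B+ mass, GeV
f_B   = 0.190                  # B decay constant, GeV (illustrative)
V_ub  = 3.8e-3                 # |V_ub| (illustrative)
tau_B = 1.638e-12 / 6.582e-25  # B+ lifetime in GeV^-1 (hbar = 6.582e-25 GeV s)

def leptonic_bf(m_l):
    """Tree-level branching fraction of Eq. (25) for B+ -> l+ nu."""
    gamma = (G_F**2 * m_B * m_l**2 / (8 * math.pi)
             * (1 - m_l**2 / m_B**2)**2 * f_B**2 * V_ub**2)
    return gamma * tau_B

print(f"B(B+ -> tau nu) ~ {leptonic_bf(1.77686):.1e}")  # ~ 1e-4
print(f"B(B+ -> mu  nu) ~ {leptonic_bf(0.10566):.1e}")  # helicity suppressed
```

The \(m_{\ell}^{2}\) factor in (25) is responsible for the strong helicity suppression of the light-lepton modes.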
General information about \(B\) leptonic decays is contained in several reviews. Besides [32], a more specific focus on processes with charged pseudoscalar mesons is given in [69], and a summary concerning specifically \(B\) decays (leptonic and semileptonic) is provided in [70]. The existing theoretical approaches follow two directions. One focuses on the SM contributions at different precision levels, the other is concerned with NP beyond the SM.
Dilepton final states are produced at one loop through box and penguin diagrams. The cross section formula can be found e.g. in [71], equation (4.10). The leptonic decay constants of \(B\) (and \(D\)) mesons were determined in a model-independent way using lattice calculations in [72]. The SM treatment of dilepton decays includes the computation of three-loop QCD corrections [73], the evaluation of the electroweak contributions at the two-loop level [74] and further improvements of theoretical predictions reached by combining additional EM and strong corrections [75]. The authors of [76] investigated the effect of the virtual photon exchange from scales below the bottom-quark mass and found a dynamical enhancement of the amplitude at the 1% level. The soft-collinear effective theory approach was used in [77] to evaluate the power-enhanced leading-logarithmic QED corrections.
The radiative processes have the advantage of not being helicity suppressed, at the price of one additional \(\alpha_{\rm EM}\) factor. A larger number of results can be cited for radiative dilepton production. An evaluation within a constituent quark model was performed in [78] to estimate branching fractions; the same observables were predicted by the authors of [79, 80] using the light-cone QCD sum rules and by those of [81] using the light-front model. Universal form factors related to the light cone wave function of the \(B_{s}\) meson allowed estimates to be made in [82]. Interesting results were given in [83], where it was shown that gauge invariance and other considerations allow one to significantly constrain the form factor behavior, and also in [84], where the authors have demonstrated that the non-perturbative hadronic effects largely cancel in amplitude ratios of pure leptonic and radiative decays. The impact of the light meson resonances on long-distance QCD effects was studied in [85]. In [86] the authors have identified the effective \(B\to\mu\mu\gamma\) lifetime and a related CP-phase sensitive observable as appropriate quantities to study the existing B decay discrepancies.
Also for the decays \(B\to\gamma l\nu_{l}\) several studies can be cited. The work [87] was concerned with the photon spectrum and the decay rates of the process. The authors of [88] used the HQET to predict form factors, and in [89] the heavy-quark expansion and soft-collinear effective theory were applied to evaluate the soft-overlap contribution to the photon. The process was also studied in [90]. Here, assuming an energetic photon, the authors aimed to quantify the leading power-suppressed corrections in \(1/E_{\gamma}\) and \(1/m_{b}\) from higher-twist B-meson light-cone distribution amplitudes. The soft-collinear effective theory was the approach adopted in [91, 92].
The recent publication [93] focused on four-body leptonic B decays: off-shell photon form factors were computed within the QCD factorization formalism and predictions for the differential distributions of various observables were presented. Similar processes are addressed also in [94, 95, 96].
Although most tensions with the SM are seen in the semileptonic sector, the pure leptonic decays are of concern too. The summary papers [35, 97] mention two tensions. The first is related to the combined likelihood for \(B^{0}\) and \(B^{0}_{s}\) decays to \(\mu^{+}\mu^{-}\), where the theory-measurement difference reaches \(2.3\sigma\). The other concerns the branching fraction ratio for the \(B^{0}_{s}\to\mu^{+}\mu^{-}\) reaction, \(R={\cal B}_{\rm exp}/{\cal B}_{\rm SM}\), which deviates from 1 by \(2.4\sigma\). In [98] the difference between theory and experiment for the dimuon \(B_{s}\) decay is quantified to be \(2.2\sigma\).
The possible NP contributions are usually assessed by introducing new, beyond-SM four-fermion contact operators and the corresponding Wilson coefficients. Once these are evaluated in the appropriate NP approach, it is possible to draw conclusions about their effect on the theory-experiment discrepancy, see e.g. [99].
An overview of various flavor-violating extensions of the SM, also in relation to the \(B\to\ell\ell\) decay, was presented in [100]. In [101] the \(B_{s}\) dimuon decay was considered and it was argued that the decay width difference between the light and heavy \(B_{s}\) mass eigenstates is a well-suited observable for the detection of NP. The work [37] points to the ambiguity in the choice of the NP operators that might play a role in explaining the tensions in the \(B\) semileptonic decays. They show that this ambiguity can be lifted by analyzing the longitudinal polarization asymmetry of the muons in \(B^{*}_{s}\to\mu\mu\). Various discrepancies in measured data are addressed in [102], among them also dimuon branching fractions. The attempt to explain them is based on lepton-flavored gauge extensions of the SM; a specific construction with a massive gauge boson \(X_{\mu}\) and "muoquark" \(S_{3}\) is presented. Several texts are interested in decays with a tau lepton in the final state. In [103, 104, 105] these decays are studied in relation to various alternative scenarios of the Higgs boson model, and in [106] they are analyzed in the context of non-universal left-right models.
### Radiative leptonic decay \(B_{s}\to\ell^{+}\ell^{-}\gamma\) in CCQM
Before reviewing other CCQM results on leptonic \(B\) decays we present in more detail the evaluation of the branching fraction for \(B_{s}\to\ell^{+}\ell^{-}\gamma\)[107]. The computations are in many ways similar to those of other cases and provide an insight into how leptonic and radiative decays are treated within the CCQM. Since \(B_{s}\) is the only hadron, one needs to extend the set of parameters (5) by only one number, i.e. \(\Lambda_{B_{s}}=2.05\,\)GeV, which was determined in previous works. The values of the remaining parameters are identical to (5), see Eq. (8) of [107]. Two explicit forms of the effective Hamiltonian (1) are considered
\[{\cal H}^{b\to s\ell^{+}\ell^{-}}_{\rm eff.}= \frac{G_{F}\alpha_{\rm EM}}{2\sqrt{2}\pi}V_{tb}V_{ts}^{*}\bigg{[}C^{\rm eff}_{9}\{\bar{s}\gamma^{\mu}(1-\gamma^{5})b\}(\bar{\ell}\gamma_{\mu}\ell)-\frac{2\tilde{m}_{b}}{q^{2}}C^{\rm eff}_{7}\{\bar{s}i\sigma^{\mu\nu}q_{\nu}(1+\gamma_{5})b\}(\bar{\ell}\gamma_{\mu}\ell)\] \[+C_{10}\{\bar{s}\gamma^{\mu}(1-\gamma_{5})b\}(\bar{\ell}\gamma_{\mu}\gamma_{5}\ell)\bigg{]}, \tag{26}\] \[{\cal H}^{b\to s\gamma}_{\rm eff}= -\frac{G_{F}}{\sqrt{2}}V_{tb}V_{ts}^{*}C^{\rm eff}_{7}\frac{\tilde{m}_{b}}{8\pi^{2}}\left[\bar{s}\sigma_{\mu\nu}(1+\gamma_{5})b\right]F^{\mu\nu}, \tag{27}\]
where \(\sigma_{\mu\nu}=i[\gamma_{\mu},\gamma_{\nu}]\) and \(F^{\mu\nu}\) is the EM field tensor. In (26) the dilepton is produced from the weak \(b\to s\) transition (Fig. 2); in (27) the weak transition gives birth to a real photon (Fig. 3). An additional set of diagrams, depicted in Fig. 4, is considered too, where the real photon is emitted as final-state radiation (FSR).
The tilde notation in (26) and (27) indicates the QCD quark mass (different from (5)), which is \(\tilde{m}_{b}=4.68\pm 0.03\,\)GeV [108]. The values of the scale-dependent Wilson coefficients were determined in [109] at the matching scale \(\mu_{0}=2m_{W}\) and run to the hadronic scale \(\mu_{b}=4.8\,\)GeV. The effective operators are defined through the standard SM operators as follows
\[C^{\rm eff}_{7}= C_{7}-C_{5}/3-C_{6},\] \[C^{\rm eff}_{9}= C_{9}+C_{0}[h(\hat{m}_{c},s)+\Omega]-\frac{1}{2}h(1,s)(4C_{3}+4C_{4}+3C_{5}+C_{6}) \tag{28}\] \[-\frac{1}{2}h(0,s)(C_{3}+3C_{4})+\frac{2}{9}(3C_{3}+C_{4}+3C_{5}+C_{6}),\]
where
\[\begin{split}& C_{0}=3C_{1}+C_{2}+3C_{3}+C_{4}+3C_{5}+C_{6},\quad \Omega=\frac{3\pi}{\alpha^{2}}\kappa\sum_{V_{i}=\Psi(1s),\Psi(2s)}\frac{\Gamma(V_ {i}\to\ell^{+}\ell^{-})m_{V_{i}}}{m_{V_{i}}^{2}-q^{2}-im_{V_{i}}\Gamma_{V_{i}}},\\ &\hat{m}_{c}=\tilde{m}_{c}/m_{B_{s}},\quad\tilde{m}_{c}=1.27\pm 0.03\text{GeV},\quad s=q^{2}/m_{B_{s}}^{2},\quad\kappa=1/C_{0},\\ & h(0,s)=\frac{8}{27}-\frac{8}{9}\ln\frac{\hat{m}_{b}}{\mu}-\frac {4}{9}\ln s+\frac{4}{9}i\pi,\\ & h(\hat{m}_{c},s)=-\frac{8}{9}\left[\ln\frac{\hat{m}_{b}}{\mu}+ \ln\hat{m}_{c}-\frac{1}{3}-\frac{x}{2}\right]-\frac{2}{9}(2+x)\sqrt{|1-x|} \Theta(x),\\ &\Theta(x)|_{x<1}=\ln\left|\frac{\sqrt{1-x}+1}{\sqrt{1-x}-1} \right|-i\pi,\quad\Theta(x)|_{x>1}=2\arctan\frac{1}{\sqrt{x-1}},\quad x=\frac{ 4\hat{m}_{c}^{2}}{s}.\end{split} \tag{29}\]
The \(\Omega\) function in \(C_{9}^{\text{eff}}\) parameterizes, in the standard Breit-Wigner form, the resonant contributions from the \(\Psi(1s)\) and \(\Psi(2s)\) charmonia states.
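Since all functions in (29) are given in closed form, they can be transcribed directly; a minimal Python rendering (our own transcription, with the scale logarithm \(\ln(\hat{m}_{b}/\mu)\) kept as an explicit input) might look as follows:

```python
import math

def Theta(x):
    """Theta(x) as defined in Eq. (29)."""
    if x < 1:
        sq = math.sqrt(1 - x)
        return math.log(abs((sq + 1) / (sq - 1))) - 1j * math.pi
    return 2 * math.atan(1 / math.sqrt(x - 1))

def h(mc_hat, s, log_mb_over_mu=0.0):
    """Charm-loop function h(m_c_hat, s) of Eq. (29), with s = q^2/m_Bs^2."""
    x = 4 * mc_hat**2 / s
    return (-8/9 * (log_mb_over_mu + math.log(mc_hat) - 1/3 - x/2)
            - 2/9 * (2 + x) * math.sqrt(abs(1 - x)) * Theta(x))

def h0(s, log_mb_over_mu=0.0):
    """Light-quark loop function h(0, s) of Eq. (29)."""
    return 8/27 - 8/9*log_mb_over_mu - 4/9*math.log(s) + 4j*math.pi/9
```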
Each of the form factors entering the amplitude is given as a sum of contributions from particular Feynman graphs in Figs. 2 and 3. One has
\[F_{V} =m_{B_{s}}(e_{b}\tilde{F}_{V}^{b\gamma b}+e_{s}\tilde{F}_{V}^{s\gamma s}),\] \[F_{A} =m_{B_{s}}(e_{b}\tilde{F}_{A}^{b\gamma b}+e_{s}\tilde{F}_{A}^{s\gamma s}+e_{b}\tilde{F}_{A}^{\rm bubble-b}+e_{s}\tilde{F}_{A}^{\rm bubble-s}),\] \[F_{TV} =e_{b}\tilde{F}_{TV}^{b\gamma b}+e_{s}\tilde{F}_{TV}^{s\gamma s}+e_{b}\tilde{F}_{TV}^{b(\bar{\ell}\ell)b}+e_{s}\tilde{F}_{TV}^{s(\bar{\ell}\ell)s}, \tag{31}\] \[F_{TA} =e_{b}\tilde{F}_{TA}^{b\gamma b}+e_{s}\tilde{F}_{TA}^{s\gamma s}+e_{b}\tilde{F}_{TA}^{\rm bubble-b}+e_{s}\tilde{F}_{TA}^{\rm bubble-s}+e_{b}\tilde{F}_{TA}^{b(\bar{\ell}\ell)b}+e_{s}\tilde{F}_{TA}^{s(\bar{\ell}\ell)s},\]
where "\(q\gamma q\)" superscript refers to a real photon emission from the quark line, "\(bubble\)" to the real photon emission from the non-local hadron-quark vertex and "\(q(\bar{\ell}\ell)q\)" corresponds to the virtual photon emission from the quark line.
The branch point at \(q^{2}=4m_{s}^{2}\) corresponding to the virtual photon emission from the \(s\) quark (left in Fig. 3) is situated well inside the accessible physical \(q^{2}\) region. This leads to the appearance of light vector meson resonances, which prevents us from computing the corresponding form factors within the CCQM. An approach inspired by [110] is adopted and a gauge-invariant vector-meson dominance model is used to express the form factors in question
\[\tilde{F}_{TV,TA}^{s(\bar{\ell}\ell)s}(q^{2})=\tilde{F}_{TV,TA}^{s(\bar{\ell}\ell)s}(0)-\sum_{V}2f_{V}^{EM}G_{1}^{T}(0)\frac{q^{2}/M_{V}}{q^{2}-M_{V}^{2}+iM_{V}\Gamma_{V}}, \tag{32}\] \[G_{1}^{T}:\quad\langle V(p_{2},\epsilon_{2})|\bar{s}\sigma^{\mu\nu}b|B_{s}(p_{1})\rangle=\] (33) \[=(\epsilon_{2}^{*})_{\alpha}\left[\varepsilon^{\beta\mu\nu\alpha}P_{\beta}G_{1}^{T}(q^{2})+\varepsilon^{\beta\mu\nu\alpha}q_{\beta}G_{2}^{T}(q^{2})+\varepsilon^{\alpha\beta\mu\nu}P_{\alpha}q_{\beta}\frac{G_{0}^{T}(q^{2})}{(m_{B_{s}}+M_{V})^{2}}\right],\]
where \(P=p_{1}+p_{2}\). With all these objects defined, one can write down the amplitude for the structure-dependent part
\[\mathcal{M}_{\rm SD}= \frac{G_{F}}{\sqrt{2}}\frac{\alpha_{EM}V_{tb}V_{ts}^{*}}{2\pi}e(\epsilon_{2}^{*})_{\alpha}\left\{\left[\varepsilon^{\mu\alpha\nu\beta}(p_{1})_{\nu}(p_{2})_{\beta}\frac{F_{V}(q^{2})}{m_{B_{s}}}-iT_{1}^{\mu\alpha}\frac{F_{A}(q^{2})}{m_{B_{s}}}\right]\left(C_{9}^{\rm eff}\bar{\ell}\gamma_{\mu}\ell+C_{10}\bar{\ell}\gamma_{\mu}\gamma_{5}\ell\right)\right.\] \[\left.+\left[\varepsilon^{\mu\alpha\nu\beta}(p_{1})_{\nu}(p_{2})_{\beta}F_{TV}(q^{2})-iT_{1}^{\mu\alpha}F_{TA}(q^{2})\right]\frac{2\tilde{m}_{b}}{q^{2}}C_{7}^{\rm eff}\bar{\ell}\gamma_{\mu}\ell\right\}, \tag{34}\]
where \(T_{1}^{\mu\alpha}=[g^{\mu\alpha}p_{1}p_{2}-(p_{1})^{\alpha}(p_{2})^{\mu}]\). The structure-independent _bremsstrahlung_ (Fig. 4) amplitude takes the form
\[\mathcal{M}_{\rm BR}=-i\frac{G_{F}}{\sqrt{2}}\frac{\alpha_{EM}V_{tb}V_{ts}^{*}}{2\pi}e(\epsilon_{2}^{*})_{\alpha}(2m_{\ell}f_{B_{s}}C_{10})\bar{u}(k_{-})\left[\frac{\gamma^{\alpha}\not{p}_{1}}{t-m_{\ell}^{2}}-\frac{\not{p}_{1}\gamma^{\alpha}}{u-m_{\ell}^{2}}\right]\gamma_{5}v(k_{+}). \tag{35}\]
Here \(t=(p_{2}+k_{-})^{2}\), \(u=(p_{2}+k_{+})^{2}\). To avoid infrared divergences in (35), a lower boundary on the photon energy has to be introduced, \(E_{\gamma}>E_{\gamma\,{\rm min}}\), set later, in the numerical computations (Table 1), to 20 MeV.
The double differential decay rate in \(t\) and \(s\equiv q^{2}\) has the general expression
\[\frac{d\Gamma}{ds\,dt}=\frac{1}{2^{8}\pi^{3}m_{B_{s}}^{3}}\sum_{\rm pol.}\left| \mathcal{M}_{\rm SD}+\mathcal{M}_{\rm BR}\right|^{2}, \tag{36}\]
where one sums over the polarizations of the photon and leptons, \(4m_{\ell}^{2}\leq s\leq m_{B_{s}}^{2}\), and \(t_{-}\leq t\leq t_{+}\) with \(t_{\pm}=m_{\ell}^{2}+(m_{B_{s}}^{2}-s)[1\pm\sqrt{1-4m_{\ell}^{2}/s}]/2\). The explicit formulas for the double and single differential distributions are omitted here because of their complexity; they are stated in Eqs. (32)-(38) of [107].
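These kinematic boundaries are straightforward to tabulate; a small sketch for the muon mode (with PDG-like masses inserted as illustrative inputs):

```python
import math

m_Bs, m_mu = 5.36688, 0.10566  # GeV, illustrative PDG-like values

def t_limits(s):
    """Boundaries t_-(s) <= t <= t_+(s) of the region below Eq. (36)."""
    beta = math.sqrt(1 - 4 * m_mu**2 / s)
    half = 0.5 * (m_Bs**2 - s)
    return m_mu**2 + half * (1 - beta), m_mu**2 + half * (1 + beta)

for s in (0.1, 1.0, 10.0, 25.0):
    t_min, t_max = t_limits(s)
    print(f"s = {s:5.2f} GeV^2 : {t_min:7.3f} <= t <= {t_max:7.3f} GeV^2")
```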
The form factors predicted by the CCQM model are shown in Fig. 5. For \(F_{TV/TA}\) form factors two scenarios are presented: by including the VMD component (32) these form factors become complex and thus their norm is shown. Alternatively, they can be shown without the VMD component as real functions
\[\tilde{F}_{TV,TA}\equiv F_{TV,TA}-e_{s}\tilde{F}_{TV,TA}^{s(\bar{\ell}\ell)s}. \tag{37}\]
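The resonant VMD part (32) itself is simple enough to capture in a few lines; the helper below is a hypothetical transcription (the resonance inputs \(f_{V}^{EM}G_{1}^{T}(0)\), \(M_{V}\), \(\Gamma_{V}\), e.g. for the \(\phi(1020)\), have to be supplied) showing how the real CCQM form factor acquires a complex resonant component:

```python
def F_slls(q2, F0, resonances):
    """Eq. (32): the s(ll)s tensor form factor in the gauge-invariant VMD form.

    F0         : subtraction constant F^{s(ll)s}(0)
    resonances : iterable of (coupling, M_V, Gamma_V) tuples with
                 coupling = f_V^EM * G_1^T(0)
    """
    val = F0 + 0j
    for c, M, G in resonances:
        val -= 2 * c * (q2 / M) / (q2 - M**2 + 1j * M * G)
    return val
```

Per (37), the full form factor is then recovered as \(F_{TV,TA}=\tilde{F}_{TV,TA}+e_{s}\tilde{F}_{TV,TA}^{s(\bar{\ell}\ell)s}\).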
Figure 4: Final-state radiation diagrams. Figures were originally published in [107].
In Fig. 6 we compare our form factors with the Kozachuk-Melikhov-Nikitin (KMN) form factors calculated in Ref. [110]. Using the definitions we can relate our form factors \(F_{i}(q^{2})\) to the KMN form factors \(F_{i}(q^{2},0)\) as follows (see Ref. [110] for more detail):
\[F_{V/A}(q^{2},0)=F_{V/A}(q^{2}),\ F_{TV/TA}(q^{2},0)\equiv\tilde{F}_{TV/TA}(q^{2})=F_{TV/TA}(q^{2})-e_{b}\tilde{F}_{TV/TA}^{b(\bar{\ell}\ell)b}-e_{s}\tilde{F}_{TV/TA}^{s(\bar{\ell}\ell)s}.\]
One can see that in the low-\(q^{2}\) region (\(q^{2}\lesssim 20\) GeV\({}^{2}\)) the corresponding form factors from the two sets are very close. In the high-\(q^{2}\) region, the KMN form factors steeply increase and largely exceed our form factors. It is very interesting to note that our form factors share with the corresponding KMN ones not only similar shapes (especially in the low-\(q^{2}\) region) but also relative behaviors, i.e., similar relations between the form factors, in the whole \(q^{2}\) region. Several comments should be made: (i) our form factors satisfy the constraint \(F_{TA}(q^{2},0)=F_{TV}(q^{2},0)\) at \(q^{2}=0\), with the common value equal to 0.135; (ii) in the small-\(q^{2}\) region, \(F_{V}(q^{2},0)\approx F_{TA}(q^{2},0)\approx F_{TV}(q^{2},0)\); (iii) \(F_{V}(q^{2},0)\) and \(F_{TV}(q^{2},0)\) are approximately equal in the full kinematical range and rise steeply in the high-\(q^{2}\) region; and (iv) \(F_{A}(q^{2},0)\) and \(F_{TA}(q^{2},0)\) are rather flat when \(q^{2}\to M_{B_{s}}^{2}\) as compared to \(F_{V}(q^{2},0)\) and \(F_{TV}(q^{2},0)\). These observations show that our form factors satisfy very well the constraints on their behavior proposed by the authors of Ref. [83].
The analytic \({\cal O}(\alpha_{s})\)-computation at twist-1,2 of the \(\bar{B}_{u,d,s}\to\gamma\) form factors has been presented in [111] within the framework of sum rules on the light cone. A fit was provided in terms of a \(z\)-expansion with a correlation matrix, and the form factors were extrapolated to the kinematic endpoint by using the \(g_{BB^{*}\gamma}\) couplings as a constraint. When comparing with [111] the following identification should be used
\[V_{\perp,\parallel}=F_{V,A}\:,\quad T_{\perp,\parallel}=\tilde{F}_{TV,TA}\:. \tag{38}\]
One can see (Fig. 7) that the results of [111] are larger and, in particular, show an earlier rise.
Figure 5: Transition form factors \(B_{s}\to\gamma\) as defined by (31) and (37). Figures were originally published in [107].
The differential branching fractions, shown as functions of the dimensionless variable \(\hat{s}=q^{2}/m_{B_{s}}^{2}\), are, together with the branching fraction ratio
\[r_{\gamma}(\hat{s})\equiv\frac{d\mathcal{B}(B_{s}\rightarrow\gamma\mu^{+}\mu^{-})/d \hat{s}}{d\mathcal{B}(B_{s}\rightarrow\gamma e^{+}e^{-})/d\hat{s}} \tag{39}\]
depicted in Fig. 8. The total branching fractions for the three lepton flavors are presented in Table 1. The numbers in brackets indicate the results of computations with the long-distance contributions included (excluding the region of the two low-lying charmonia, \(0.33\leq\hat{s}\leq 0.55\)); results without the long-distance contributions correspond to \(\kappa=0\) in (29). The comparison with theoretical predictions of other authors is shown in Table 2. The dominant error source of the results was identified to be the uncertainty of the hadronic form factors, and the error on the branching fractions was estimated to reach 30%. One should remark that the resonant peaks induced by the light \(\phi\) meson lead to a significant enhancement of the branching fraction (\(\approx\)15%).
In summary, in the presented SM computations within the CCQM the hadronic transition form factors and radiative leptonic branching fractions of the \(B_{s}\) meson were evaluated. The form factors are in very good agreement with those presented in [110] and the branching fraction numbers for light leptons agree with [112]. For the tau lepton decay mode, where bremsstrahlung
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & CCQM & [78] & [79] & [80] & [81] & [82] & [85] & [112] & [113] \\ \hline electron & 15.9 & 6.2 & 2.35 & - & 7.1 & 20.0 & 24.6 & 18.4 & 17.4 \\ muon & 10.5 & 4.6 & 1.9 & - & 8.3 & 12.0 & 18.9 & 11.6 & 17.4 \\ tau & 13.7 & - & - & 15.2 & 15.7 & - & 11.6 & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of branching fractions with other theoretical predictions. Table was originally published in [107].
Figure 6: Comparison of the form factors \(F_{i}(q^{2},0)\) calculated in our model (solid lines) with those from Ref. [110] (dashed lines). Figures are taken from [107].
dominates, the presented results agree with all other authors. Together, these results from various authors, with [107] included, reflect our understanding of the SM description of the \(B_{s}\to\ell^{+}\ell^{-}\gamma\) decay process and provide an estimate of the error of theoretical SM predictions, beyond which one can claim NP manifestations.
### Other CCQM results on \(B\) leptonic decays
The CCQM was applied also to the leptonic decays \(B\to\ell^{-}\bar{\nu}_{\ell}\)[114] and \(B_{c}^{-}\to\tau\bar{\nu}\)[115].
The work [114] provides a SM analysis of pure leptonic and semileptonic decays. Most of the results presented there concern the semileptonic processes, which have a richer structure and show significant hints of NP. Yet the results for the purely leptonic branching fractions were presented too:
\[\begin{array}{cccc}\ell&e&\mu&\tau\\ {\cal B}(B^{-}\to\ell^{-}\bar{\nu}_{\ell})&1.16\times 10^{-11}&0.49\times 10^{-6}&1.10\times 10^{-4}\end{array}\]
The numbers are in good agreement with the experimental values for the tau lepton, \((1.09\pm 0.24)\times 10^{-4}\)[23], and the muon, \((0.53\pm 0.22)\times 10^{-6}\)[67], which became more precisely measured since then, and also with the experimental limit for the electron. The agreement with several theoretical predictions of other authors was shown too. Since the leptonic decay constants are crucial in the description of purely leptonic decays and carry all of the necessary non-perturbative information, their values have also been listed for \(B_{(s,c)}^{(*)}\) and \(D_{(s)}^{(*)}\) mesons, see Table I of [114].
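As a quick consistency check of these numbers against (25) (with PDG lepton and \(B\) masses inserted), the ratio of the tau and muon modes is fixed by the helicity factor alone, since \(f_{B}\), \(|V_{ub}|\) and \(\tau_{B}\) cancel:

\[\frac{{\cal B}(B^{-}\to\tau^{-}\bar{\nu}_{\tau})}{{\cal B}(B^{-}\to\mu^{-}\bar{\nu}_{\mu})}=\frac{m_{\tau}^{2}}{m_{\mu}^{2}}\left(\frac{1-m_{\tau}^{2}/m_{B}^{2}}{1-m_{\mu}^{2}/m_{B}^{2}}\right)^{2}\approx 2.2\times 10^{2},\]

in agreement with \(1.10\times 10^{-4}/(0.49\times 10^{-6})\approx 2.2\times 10^{2}\) from the values quoted above.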
In [115] possible NP contributions were evaluated for chosen leptonic and semileptonic decays. It was assumed that these contributions affect only the third generation of leptons and all neutrinos were considered as left-handed. New, beyond-SM four-fermion operators were introduced in the Hamiltonian (1)
\[Q_{V_{i}}=(\bar{q}\gamma^{\mu}P_{i}b)(\bar{\tau}\gamma_{\mu}P_{L}\nu_{\tau}),\quad Q_{S_{i}}=(\bar{q}P_{i}b)(\bar{\tau}P_{L}\nu_{\tau}),\quad Q_{T_{L}}=(\bar{q}\sigma^{\mu\nu}P_{L}b)(\bar{\tau}\sigma_{\mu\nu}P_{L}\nu_{\tau}) \tag{40}\]
with \(\sigma_{\mu\nu}=i[\gamma_{\mu},\gamma_{\nu}]\), \(P_{L,R}=(1\mp\gamma_{5})/2\) and \(i\in\{L,R\}\) (left, right). Most of the text deals with semileptonic decays, where the \(R_{D^{(*)}}\) discrepancy (42) is observed. The set of observables was extended to
\[R_{\pi(\rho)}=\frac{{\cal B}(\bar{B}^{0}\to\pi(\rho)\tau\bar{\nu})}{{\cal B}(\bar{B}^{0}\to\pi(\rho)\mu\bar{\nu})},\quad R_{\tau}^{u}=\frac{\tau_{\bar{B}^{0}}}{\tau_{B^{-}}}\frac{{\cal B}(B^{-}\to\tau\bar{\nu})}{{\cal B}(\bar{B}^{0}\to\pi\mu\bar{\nu})},\quad R_{\tau}^{c}=\frac{\tau_{\bar{B}^{0}}}{\tau_{B_{c}^{-}}}\frac{{\cal B}(B_{c}^{-}\to\tau\bar{\nu})}{{\cal B}(\bar{B}^{0}\to D\mu\bar{\nu})}, \tag{41}\]
of which the first is meant to analyze the \(R\) anomaly also for the \(b\to u\) transition and the two remaining concern the leptonic decays. The limits on the Wilson coefficients \(C_{V_{i},S_{i},T_{L}}\) were extracted assuming that only one of them is dominant at a time (besides the SM ones). Including in the analysis also the leptonic observable \(R_{\tau}^{u}\) (together with \(R_{D^{(*)}}\)), it was found that no \(C_{S_{R},S_{L}}\) values were allowed (within 2 \(\sigma\)) and for \(C_{V_{L},V_{R},T_{L}}\) allowed regions were identified in the complex plane (Fig. 1 of [115]). Further, the leptonic \(B_{c}^{-}\) branching fractions were evaluated within the SM, \({\cal B}(B_{c}^{-}\to\tau\bar{\nu})=2.85\times 10^{-2}\), \({\cal B}(B_{c}^{-}\to\mu\bar{\nu})=1.18\times 10^{-4}\), and the observables (41) were predicted for the SM and NP scenarios. In the latter case the corresponding Wilson coefficient \(C_{i}\) was varied (one at a time) in the allowed region of the complex plane and the impact on the observable was determined. For the leptonic \(R_{\tau}^{c}\) variable the prediction stands
Figure 7: Form factors for the \(\bar{B}_{s}\to\gamma\) transition calculated in [111]. Figures are taken from [111].
\[R_{\tau}^{c}:\quad\begin{array}{cccc}SM&C_{V_{L}}&C_{V_{R}}&C_{T_{L}}\\ 3.03&3.945\pm 0.735&3.925\pm 0.815&3.03\end{array}\]
In summary one can say that, within the given scenario, the text translated existing experimental information into constraints on NP Wilson coefficients. Contributions of some of them (\(C_{S_{R},S_{L}}\)) were excluded and some (\(C_{V_{L},V_{R},T_{L}}\)) were constrained.
## 4 Semileptonic decays of \(B\) mesons
### Overview
The experimental information on the semileptonic B decays is much larger than on the pure leptonic decays. The LHCb experiment alone published in the past 10 years more than 35 papers on this topic, and the number further increases if other experiments (Belle, BaBar, Belle II) are taken into account. The same is true for theoretical publications, which are large in quantity. With the aim to provide an overview of the CCQM results, we restrict ourselves to the most significant experimental measurements and theoretical predictions of other authors.
The focus of the community is predominantly driven by the so-called flavor anomalies. They are often defined as ratios of branching fractions; the most prominent of them are
\[R_{K^{(*)}}=\frac{{\cal B}(B\to K^{(*)}\mu^{+}\mu^{-})}{{\cal B}(B\to K^{(*)}e^ {+}e^{-})},\quad R_{D^{(*)}}=\frac{{\cal B}(B\to D^{(*)}\tau\nu_{\tau})}{{\cal B }(B\to D^{(*)}\ell\nu_{\ell})},\quad R_{J/\psi}=\frac{{\cal B}(B\to J/\Psi\tau \nu_{\tau})}{{\cal B}(B\to J/\Psi\mu\nu_{\mu})}. \tag{42}\]
The first observable is sensitive to the \(b\to s\) quark transition, the two remaining to \(b\to c\). Other quantities measured in semileptonic decays of the \(B\) meson are listed for example in Sec. VII of [116]. In these and other observables deviations were seen (see e.g. Tab. XVIII of [117] for a nice review), with some of them reaching up to \(4\sigma\), which is naturally interpreted as a significant argument in favor of NP (see e.g. [118]). The most recent LHCb measurements nevertheless weaken some of these observations and imply that the discrepancy with the SM may not be so pronounced after all. In [119] the deviation of the correlated observables \(R_{D}\) and \(R_{D^{*}}\) from the SM prediction is \(1.9\sigma\) and the results for \(R_{K}\) and \(R_{K^{*}}\) given in [120] are in agreement with the SM.
Figure 8: Differential decay rates for \(B_{s}\to\ell^{+}\ell^{-}\gamma\) and the ratio \(\hat{r}\) (39) with long-distance contributions included (solid line) and excluded (dashed line). Figures were originally published in [107].
However, if one includes also older measurements and measurements of different experiments, the situation seems not yet settled and the discrepancy is still close to \(3\sigma\)[121].
The LHCb detector was specifically designed for \(b\) physics and the experiment successfully fulfills its purpose, being the most important source of experimental information on \(b\) decays. The measurements of \(B\to K^{*}\ell^{+}\ell^{-}\) were presented in the works [122, 123, 124, 125, 126, 127, 128, 129, 130]. Two of them [127, 130] study the lepton-flavor universality by measuring \(R_{K^{*}}\), but with no significant deviations from the SM. Most of the remaining works are concerned with angular distributions: the coefficients (denoted for a \(p\)-wave process as \(F_{L}\), \(A_{FB}\), \(S_{3,...,9}\)) in front of the angular terms which appear in the decay width formula are combined into so-called optimized observables \(P_{i}^{(^{\prime})}\), and here some significant tensions are seen (e.g. \(3\sigma\) in \(P_{2}\) for \(q^{2}\) between 6 and 8 GeV\({}^{2}\)[129]).
The semileptonic \(B\) decays with the \(K\) meson in the final state are addressed in [131, 132, 133]. The first publication is concerned with the angular distribution and the differential branching fraction; the two others focus more specifically on the lepton flavor universality question, with an observation of a \(2.5\sigma\) deviation from the SM in \(R_{K}\). This was, however, as mentioned earlier, undermined by the recent measurement [120], where the deviation is no longer seen.
The process \(B\to D^{*}\ell\nu_{\ell}\) was analyzed in [134, 135, 136, 119] and no deviation of \(R_{D^{*}}\) from the SM greater than \(2\sigma\) was detected. The same is true for the \(R_{J/\Psi}\) observable measured in [137]. The decay of the \(B_{s}^{0}\) particle to \(\phi\mu^{+}\mu^{-}\) was studied in [138, 139, 140], where, in the last analysis, a disagreement with the SM prediction is observed in the differential branching fraction for \(1\,\)GeV\({}^{2}\leq q^{2}\leq 6\,\)GeV\({}^{2}\) at the level of \(3.6\sigma\).
Various other semileptonic \(B\) decays were measured at the LHCb which we do not mention here. An overview of the lepton flavor universality question in \(b\) decays at the LHCb was, as of 2022, given in [141].
Additional experimental information on the semileptonic \(B\) decays comes from BaBar measurements. Studies of the \(B\to D^{(*)}\ell\nu_{\ell}\) process were presented in [142, 143, 144, 145, 146, 147, 148]. In the first three references the question of the lepton flavor universality is addressed (\(\ell=\tau\)) and the measurement of \(R_{D}\) and \(R_{D^{*}}\) performed. The authors claim a deviation of \(2.0\sigma\) for \(R_{D}\), \(2.7\sigma\) for \(R_{D^{*}}\) and \(3.4\sigma\) for their combination. The latter four references present the measurement of the \(|V_{cb}|\) element of the CKM matrix and the analysis of the corresponding transition form factors.
The decays with the \(K^{(*)}\ell^{+}\ell^{-}\) final state were addressed in [149, 150, 151, 152, 153, 154]. The texts present the measurements of branching fractions, the \(R_{K^{(*)}}\) observable, the isospin and CP asymmetries, the forward-backward angular asymmetry of the lepton pair and the \(K^{*}\) longitudinal polarization (among others). Overall, the results are in agreement with the SM expectations; the anomaly observed for isospin asymmetries in both \(K\) and \(K^{*}\) channels in [151] was later not confirmed in [152].
The BaBar collaboration also published results on semileptonic \(B\) decays into the light mesons \(\pi\) and \(\rho\)[155, 156]. Here the branching fractions and the \(|V_{ub}|\) element were determined, and transition form factors were also discussed.
Further, BaBar published results on semileptonic decays where a hadronic state \(X_{s}\) containing kaons was produced, and measured the corresponding branching fractions [157, 158]. One can also mention the measurement of charmless semileptonic decays [159, 160] and the measurement with the electron in the final state [161], all of which were used to establish the \(|V_{ub}|\) value. In [162] the semileptonic decay with five particles in the final state, \(D^{(*)}\pi^{+}\pi^{-}\ell\nu_{\ell}\), was confirmed.
An important contribution to measurements of semileptonic \(B\) decays comes from the Belle and Belle II collaborations.
Analyses [163, 164, 165, 166, 167] investigate both \(D\) and \(D^{*}\) decay channels (with \(\tau\) and \(\nu_{\tau}\)). They measure branching fractions and the ratios \(R_{D^{(*)}}\), in which they do not see significant deviations from the SM expectations. The last work also focuses on the extraction of parameters for the Caprini-Lellouch-Neubert form factor parameterization.
Specifically \(D^{*}\)-containing final states are addressed in [168, 169, 170, 171, 172, 173]. Also here the objects of interest are the branching fractions and the \(R_{D^{*}}\) observable and, again, no significant deviations from the SM are seen. Works [169, 173] present, in addition, the measurement of the \(|V_{cb}|\) matrix element and a form factor analysis; in works [171, 172] the \(\tau\) lepton polarization is measured.
The references [174, 175] focus on the \(D\ell\nu_{\ell}\) final state. The first work is concerned with the branching fraction and form factors; in both works \(|V_{cb}|\) is measured. Authors of [176] report on the first observation of the \(B\to\bar{D}_{1}\ell\nu_{\ell}\) decay and measure the branching fractions of the \(B\to\bar{D}^{(*)}\pi\ell^{+}\nu_{\ell}\) and \(B\to\bar{D}^{(*)}\pi^{+}\pi^{-}\ell^{+}\nu_{\ell}\) processes.
Production of strange mesons in semileptonic \(B\) decays is studied in [177, 178] for the \(K\) meson, in [179, 180, 181] for the \(K^{*}\) meson and in [182] for both \(K\) and \(K^{*}\). Besides branching fractions and \(R_{K^{(*)}}\) ratios, some of the works also present measurements of angular and polarization variables and the isospin asymmetry. In general all measured values agree well with the SM predictions; some tensions for the subset of the optimized angular observables \(P_{i}\) were reported in [180].
Semileptonic decays to light mesons (\(\pi\), \(\rho\) and \(\eta\)) were described in [183, 184, 185, 186]; the works are mostly concerned with the branching fractions and the determination of the \(|V_{ub}|\) element of the CKM matrix.
The Belle(II) collaboration also published articles on semileptonic \(B\) decays to a general hadronic state \(X\) containing the \(s\) quark, \(X_{s}\)[187, 188], the \(u\) quark, \(X_{u}\)[189, 190, 191] and the \(c\) quark, \(X_{c}\)[192, 193]. The main objects of interest were branching fractions, the CKM elements \(|V_{ub}|\) and \(|V_{cb}|\) and the first four moments of the lepton mass squared (for \(X_{c}\)). The question of the lepton flavor universality in semileptonic decays to a general hadronic state \(X\) was addressed in [194].
Other results from different experiments could be cited in the domain of semileptonic \(B\) decays, yet the measurements of the above-mentioned \(B\)-factories represent the most important data from both the quantity and quality perspectives.
The large number of theoretical works implies strong selection criteria, which we base on the impact of the work, with some preference for review and pedagogical texts. We have already mentioned nice reviews [33, 32, 70, 35, 117] which cover (also) the semileptonic \(B\) decays. Further survey papers are [195], where the SM theory and appropriate observables are presented, a pedagogically written article [196], which focuses on the charged lepton flavour violation, and also generally oriented texts [197, 198]. One can in addition mention [199], in which \(B\) flavor anomalies are discussed, and a similarly oriented recent text [200].
Reliable SM predictions are the starting point for assessing various anomalies. Already decades ago a quark potential model was used to make predictions for semileptonic \(B\) and \(D\) decays [201], with an update several years later [202]. Decays to \(D^{(*)}\) mesons were addressed in [203], where analyticity and dispersion relations were used to produce parametrizations of the QCD form factors with small model dependence. The same authors later published QCD two-loop level computations [204] including the lepton mass effect, higher resonances and heavy quark symmetry, which further improved the theoretical precision. The heavy quark spin symmetry was used in [205] to derive dispersive constraints on \(B\to D^{(*)}\) form factors and implications for the determination of \(|V_{cb}|\). Semileptonic decays to light mesons \(\rho\), \(\omega\), \(K^{*}\) and \(\phi\) were discussed in [206] in the framework of light-cone sum rules, where the authors claim 10% precision at zero momentum transfer. The angular analysis of the process \(\bar{B}\to\bar{K}\ell^{+}\ell^{-}\) was presented in [207]. The work is based on the QCD factorization and large recoil symmetry relations and, besides angular coefficients, it also gives a prediction of \(R_{K}\) and explores the potential of the introduced observables to reach the NP. Taking into consideration also the excited state \(K^{*}\), the publication [208] is dedicated to the charm-loop effect. The results are derived using QCD light-cone sum rules and hadronic dispersion relations, and the evaluated charm-loop effect, which is claimed to reach up to 20%, is represented as a contribution to the \(C_{9}\) Wilson coefficient. Lattice QCD was used in [209, 210, 211] to predict form factors and matrix elements for processes with \(D^{(*)}\) mesons. In [212] the lattice form factors were used as input, allowing the determination of CKM matrix elements or, alternatively, constraints on the real part of the Wilson coefficients \(C_{9}\) and \(C_{10}\). The CKM matrix was also the subject of the work [213], where \(|V_{cb}|\) was extracted using the OPE, the expansion in powers of the heavy quark mass and constraints derived from the experimental values of the normalized lepton energy moments. A process with a vector meson in the final state, \(B\to V\ell^{+}\ell^{-}\), was considered in [214], where the authors used light-cone sum rules to predict form factors. The paper [215] has a somewhat review character: it presents three common form factor parameterizations, summarizes the data and the available lattice information (as of 2016) and places special emphasis on the unitarity constraints. It then presents fits to experimental points and to the lattice numbers, from which results on \(R_{D}\) and \(|V_{cb}|\) are extracted. Radiative corrections to the \(R_{K^{(*)}}\) observables are of concern to the authors of [216]; their thorough analysis indicates that these observables are indeed well suited to be a probe of NP. This work [216] was improved in [217], where a full Monte Carlo framework was built to describe QED corrections in \(\bar{B}\to\bar{K}\ell^{+}\ell^{-}\). A detailed numerical comparison with the results obtained with the general-purpose photon-shower tool PHOTOS was performed and the charmonium leading logs were fully simulated. Similar questions related to the same observables are addressed in [218].
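Several of the parameterizations just mentioned are variants of the \(z\)-expansion, built on a conformal mapping of \(q^{2}\). As a rough, self-contained illustration, the following Python sketch evaluates a truncated \(z\)-series for a \(B\to D\) form factor; the coefficients are made up for the example and the outer/Blaschke factors of the full BGL construction are omitted.

```python
import math

M_B, M_D = 5.27966, 1.86484      # GeV, PDG-like inputs

def z_conformal(q2, t0):
    """Map q2 onto the unit disk; t_+ = (M_B+M_D)^2 is the threshold."""
    tp = (M_B + M_D)**2
    a, b = math.sqrt(tp - q2), math.sqrt(tp - t0)
    return (a - b) / (a + b)

def ff_z_series(q2, coeffs):
    """Truncated series f(q2) = sum_n a_n z^n (outer functions omitted)."""
    tp, tm = (M_B + M_D)**2, (M_B - M_D)**2
    t0 = tp - math.sqrt(tp*(tp - tm))     # a common choice minimizing |z|
    zz = z_conformal(q2, t0)
    return sum(a*zz**n for n, a in enumerate(coeffs))

print(ff_z_series(0.0, [0.66, -2.0, 1.0]))   # made-up coefficients
```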
Still the same observables are, together with the angular observables \(P_{i}\), discussed in a pedagogical way in [219] with special emphasis on the hadronic uncertainties. Coming back to \(D\) particles and works published within a few years after the first measurements indicating a possible lepton-flavor violation, one can mention [220], where the coefficients of the Boyd-Grinstein-Lebed form factor parametrization were constrained by analyzing the form factor ratios and their uncertainties in the heavy quark limit. With this knowledge fits to experimental data were performed and \(R_{D^{*}}\) computed. In [221] two different form factor parameterizations are used to predict \(R_{D^{*}}\) and \(|V_{cb}|\). The approach uses, besides data, inputs from the light-cone sum rules and lattice and the relations between form factors as given by HQET. To mention more recent theoretical works, one can point to e.g. [222, 223], where QED corrections and non-local matrix elements are discussed for \(B\) decays to a dilepton pair and a kaon. It was shown in Ref. [222] that there cannot be any hard-collinear logs at the structure-dependent level. This is important, as in some parts of phase space they are 10-20% for scalar QED as used by PHOTOS. The status of the \(b\to c\tau\nu\) anomalies as of 2022 is summarized in [224], where the models for global fits are based mostly on the HQET and lattice results. The latter are also reviewed in Sec. 8 of [18].
The number of NP papers progressively grew as the evidence for tensions and anomalies became more and more convincing, with the first hints appearing at the beginning of the new millennium. Often, the NP is theoretically addressed by non-SM operators appearing in the effective Hamiltonian. So was done in [225], where the approach was applied to the \(b\to s\) process. No strong claims were given there, but it was shown that the evaluated NP effects can reach up to 13% for \(R_{K^{*}}\). The same effective-operator approach was applied in [226] to the \(b\to c\) transition and the impact of the NP on \(B\to D^{*}\tau\bar{\nu}_{\tau}\) observables was evaluated. The authors demonstrated that it is significant, i.e. the sensitivity of the process is high enough for the NP to be detected. Effective operators were used also in [227], where, after the NP operator contributions were discussed, two leptoquark models were proposed to explain two out of three possible scenarios which lead to the observed \(R_{K}\) value. Leptoquarks (vector and scalar, respectively) are also considered in [228, 229]; both works claim that their theory allows to simultaneously resolve discrepancies appearing in \(b\to s\) and \(b\to c\) transitions. Staying with leptoquarks, the authors of [230] investigate single leptoquark extensions of the SM with \(1\,{\rm TeV}\lesssim m_{LQ}\lesssim 2\,{\rm TeV}\), with the conclusion that no scalar leptoquark can do the job; a vector particle is the only option. The work [231] uses scenarios with light right-handed neutrinos appearing in low-scale seesaw models as the NP framework for analyzing the lepton flavor violation. Among other results the authors propose observables, i.e. properly chosen branching fraction ratios, which could discriminate between supersymmetric (SUSY) and non-SUSY NP realizations. Further works which analyze the \(R_{K}\) and \(R_{K^{*}}\) anomalies are [232] and [233]; the former assumes a composite Higgs model, the latter uses a two-Higgs-doublet model. Finally, let us mention a set of more generally oriented works [234, 235, 236, 237, 97, 98] which focus mainly on \(b\to s\ell^{+}\ell^{-}\) and which aim to provide model-independent or theoretically clean conclusions. By different approaches they investigate the space of NP parameters and most of them present arguments in favor of some NP scenario.
### Semileptonic and radiative decays \(B_{s}\to\phi\ell^{+}\ell^{-}\) and \(B_{s}\to\phi\gamma\) in CCQM
The \(B_{s}\to\phi\ell^{+}\ell^{-}\) and \(B_{s}\to\phi\gamma\) decays were analyzed within the CCQM in [238]. The analysis was done in the light of the LHCb measurements [138, 139], the second of which was recent at that time. The measurement focused on angular observables and the branching fraction distribution and reported a deviation from the SM in the latter exceeding \(3\sigma\) for \(1\,{\rm GeV}^{2}\leq q^{2}\leq 6\,{\rm GeV}^{2}\). Several years later two new measurements were performed. The work [239] addressed the angular distribution, where no significant tensions with the SM were observed; [140], however, confirmed the discrepancy from the previous branching fraction measurement. One may put this observation in relation with the \(R_{K}\) and \(R_{K^{*}}\) anomalies, which also occur in the \(b\to s\) transition; hence the motivation to study this process in more detail.
In [238] one analyzes both the angular coefficients and the differential decay rate distribution. In addition to (5), the necessary model inputs are
\[\Lambda_{B_{s}}=2.05\ {\rm GeV}\quad{\rm and}\quad\Lambda_{\phi}=0.88\ {\rm GeV} \tag{43}\]
determined in prior works. The transition is expressed through two matrix elements
\[M_{1}^{\mu}=\langle\phi(p_{2},\epsilon)|\bar{s}O^{\mu}b|B_{s}(p_{1})\rangle,\quad M_{2}^{\mu}=\langle\phi(p_{2},\epsilon)|\bar{s}[\sigma^{\mu\nu}q_{\nu}(1+\gamma^{5})]b|B_{s}(p_{1})\rangle, \tag{44}\]
where \(O^{\mu}=\gamma^{\mu}(1-\gamma^{5})\) and \(p_{i}\) are momenta with \(q=p_{1}-p_{2}\) and \(P=p_{1}+p_{2}\). The appearing variables satisfy \(p_{1}^{2}=m_{B_{s}}^{2}\equiv m_{1}^{2}\), \(p_{2}^{2}=m_{\phi}^{2}\equiv m_{2}^{2}\) and \(\epsilon_{2}^{\dagger}\cdot p_{2}=0\). In total seven invariant form factors, defined as coefficient functions in front of the Lorentz structures, are necessary to parameterize them
\[M_{1}^{\mu} =\frac{\epsilon_{\nu}^{\dagger}}{m_{1}+m_{2}}\left[-g^{\mu\nu}P\cdot q A_{0}(q^{2})+P^{\mu}P^{\nu}A_{+}(q^{2})+q^{\mu}P^{\nu}A_{-}(q^{2})+i\varepsilon^{\mu\nu\alpha\beta}P_{\alpha}q_{\beta}V(q^{2})\right], \tag{45}\] \[M_{2}^{\mu} =\epsilon_{\nu}^{\dagger}\left[-\left(g^{\mu\nu}-\frac{q^{\mu}q^{\nu}}{q^{2}}\right)P\cdot q\,a_{0}(q^{2})+\left(P^{\mu}P^{\nu}-q^{\mu}P^{\nu}\frac{P\cdot q}{q^{2}}\right)a_{+}(q^{2})+i\varepsilon^{\mu\nu\alpha\beta}P_{\alpha}q_{\beta}\,g(q^{2})\right]. \tag{46}\]
The same amplitudes can be expressed in the CCQM
\[M_{1,2}^{\mu} =N_{c}g_{B_{s}}g_{\phi}\int\frac{d^{4}k}{i(2\pi)^{4}}\tilde{\Phi}_{B_{s}}(-[k+w_{13}p_{1}]^{2})\tilde{\Phi}_{\phi}(-[k+w_{23}p_{2}]^{2})\times T_{1,2}, \tag{47}\] \[T_{1} =\text{tr}[O^{\mu}S_{b}(k+p_{1})\gamma^{5}S_{s}(k)\not{\epsilon}_{2}^{\dagger}S_{s}(k+p_{2})],\] (48) \[T_{2} =\text{tr}[\sigma^{\mu\nu}q_{\nu}(1+\gamma^{5})S_{b}(k+p_{1})\gamma^{5}S_{s}(k)\not{\epsilon}_{2}^{\dagger}S_{s}(k+p_{2})], \tag{49}\]
with \(S_{i}\) being quark propagators and \(N_{c}\) the number of colors. The origin of the various terms in (47)-(49) is schematically represented in Fig. 9. Once the model expression (47) is evaluated to the level of invariant Lorentz structures, it can be compared to (45) and (46) and the form factor expressions read out. Their behavior is shown in Fig. 10; it determines the necessary model input and completes the model-dependent part of the calculation.
Figure 9: \(B_{s}\to\phi\) transition in the CCQM. Figure was originally published in [238].

Figure 10: Vector and tensor form factors for the \(B_{s}\to\phi\) transition as predicted by the CCQM. Figures were originally published in [238].

Let us briefly review also the remaining steps needed to reach observable quantities. The set of SM four-fermion operators is written as
\[\mathcal{O}_{1} =(\bar{s}_{a_{1}}\gamma^{\mu}P_{L}c_{a_{2}})(\bar{c}_{a_{2}}\gamma_{\mu}P_{L}b_{a_{1}}), \mathcal{O}_{2} =(\bar{s}\gamma^{\mu}P_{L}c)(\bar{c}\gamma_{\mu}P_{L}b),\] \[\mathcal{O}_{3} =(\bar{s}\gamma^{\mu}P_{L}b)\sum_{q}(\bar{q}\gamma_{\mu}P_{L}q), \mathcal{O}_{4} =(\bar{s}_{a_{1}}\gamma^{\mu}P_{L}b_{a_{2}})\sum_{q}(\bar{q}_{a_{2}}\gamma_{\mu}P_{L}q_{a_{1}}),\] \[\mathcal{O}_{5} =(\bar{s}\gamma^{\mu}P_{L}b)\sum_{q}(\bar{q}\gamma_{\mu}P_{R}q), \mathcal{O}_{6} =(\bar{s}_{a_{1}}\gamma^{\mu}P_{L}b_{a_{2}})\sum_{q}(\bar{q}_{a_{2}}\gamma_{\mu}P_{R}q_{a_{1}}), \tag{50}\] \[\mathcal{O}_{7} =\frac{e}{8\pi^{2}}\bar{m}_{b}(\bar{s}\sigma^{\mu\nu}P_{R}b)F_{\mu\nu}, \mathcal{O}_{8} =\frac{g_{s}}{8\pi^{2}}\bar{m}_{b}(\bar{s}_{a_{1}}\sigma^{\mu\nu}P_{R}\mathbf{T}_{a_{1}a_{2}}b_{a_{2}})\mathbf{G}_{\mu\nu},\] \[\mathcal{O}_{9} =\frac{e^{2}}{8\pi^{2}}(\bar{s}\gamma^{\mu}P_{L}b)(\bar{\ell}\gamma_{\mu}\ell), \mathcal{O}_{10} =\frac{e^{2}}{8\pi^{2}}(\bar{s}\gamma^{\mu}P_{L}b)(\bar{\ell}\gamma_{\mu}\gamma_{5}\ell),\]
where \(P_{L,R}=(1\mp\gamma^{5})\), \(a_{i}\) are color indices (implicit for color singlet currents), \(\mathbf{T}_{a_{1}a_{2}}\) are generators of the \(SU(3)\) color group, \(\mathbf{G}_{\mu\nu}\) is the gluonic field strength and \(g_{s}\) is the QCD coupling (other symbols have the meaning defined before). Operators \(\mathcal{O}_{1}\) and \(\mathcal{O}_{2}\) are referred to as current operators, \(\mathcal{O}_{3}-\mathcal{O}_{6}\) are QCD penguin operators, \(\mathcal{O}_{7,8}\) are so-called magnetic penguin operators and the \(\mathcal{O}_{9}\) and \(\mathcal{O}_{10}\) operators correspond to semileptonic electroweak penguin diagrams. The transition amplitude takes the form
\[\mathcal{M} =\frac{G_{F}}{2\sqrt{2}}\frac{\alpha|V_{tb}V_{ts}^{*}|}{\pi}\bigg{[} C_{9}^{\rm eff}\langle\phi|\bar{s}\gamma^{\mu}P_{L}b|B_{s} \rangle(\bar{\ell}\gamma_{\mu}\ell)-\frac{2\bar{m}_{b}}{q^{2}}C_{7}^{\rm eff} \langle\phi|\bar{s}i\sigma^{\mu\nu}q_{\nu}P_{R}b|B_{s}\rangle(\bar{\ell}\gamma _{\mu}\ell)\] \[+C_{10}\langle\phi|\bar{s}\gamma^{\mu}P_{L}b|B_{s}\rangle(\bar{ \ell}\gamma_{\mu}\gamma_{5}\ell)\bigg{]}. \tag{51}\]
The Wilson coefficients \(C_{1}-C_{6}\) are absorbed into the effective coefficients \(C_{7}^{\rm eff}\) and \(C_{9}^{\rm eff}\): \(C_{7}^{\rm eff}=C_{7}-C_{5}/3-C_{6}\) and \(C_{9}^{\rm eff}\) is defined by (28) and (29), where, again, the \(\bar{c}c\) resonances appear in the Breit-Wigner form and one drops them by setting \(\kappa=0\). The renormalization scale is set to \(\mu=\bar{m}_{b,\;\rm pole}\). Numerical values of the Wilson coefficients were taken from [109], as already described in Sec. 3.2. Also the QCD quark masses are the same as in the leptonic-decay section. In addition to the charm-loop contribution, one takes into consideration the two-loop effects as computed in [240, 241]. They modify the effective coefficients
\[C_{7}^{\rm eff}\to C_{7}^{\rm eff}-\frac{\alpha_{s}}{4\pi}(C_{1}F_{1}^{(7)}+C _{2}F_{2}^{(7)}),\quad C_{9}^{\rm eff}\to C_{9}^{\rm eff}-\frac{\alpha_{s}}{4 \pi}(C_{1}F_{1}^{(9)}+C_{2}F_{2}^{(9)}), \tag{52}\]
where the functions \(F_{1,2}^{(7,9)}\) were made publicly available by authors of [241] as _Wolfram Mathematica_ code.
The differential decay rate is then expressed as
\[\frac{d\Gamma(B_{s}\to\phi\ell\ell)}{dq^{2}} =\frac{G_{F}^{2}}{(2\pi)^{3}}\left(\frac{\alpha|V_{tb}V_{ts}^{*}|}{ 2\pi}\right)^{2}\frac{|\mathbf{p_{2}}|q^{2}\beta_{\ell}}{12m_{1}^{2}}\mathcal{ H}_{tot}, \tag{53}\] \[\mathcal{H}_{\rm tot} =\frac{1}{2}\left(\mathcal{H}_{U}^{11}+\mathcal{H}_{U}^{22}+ \mathcal{H}_{L}^{11}+\mathcal{H}_{L}^{22}\right)+\delta_{\ell\ell}\left(\frac{ \mathcal{H}_{U}^{11}}{2}-\mathcal{H}_{U}^{22}+\frac{\mathcal{H}_{L}^{11}}{2}- \mathcal{H}_{L}^{22}+\frac{3\mathcal{H}_{S}^{22}}{2}\right), \tag{54}\]
where \(\delta_{\ell\ell}=2m_{\ell}^{2}/q^{2}\), \(\beta_{\ell}=\sqrt{1-2\delta_{\ell\ell}}\) and \(|\mathbf{p_{2}}|=\sqrt{\lambda\left(m_{1}^{2},m_{2}^{2},q^{2}\right)}/(2m_{1})\), with \(\lambda\) the Källén function, is the momentum of the \(\phi\) meson in the \(B_{s}\) rest frame. The objects \(\mathcal{H}_{X}^{ii}\) represent bilinear combinations of the helicity amplitudes
\[\mathcal{H}_{U}^{ii}=|H_{++}^{i}|^{2}+|H_{--}^{i}|^{2},\quad\mathcal{H}_{L}^{ii}=| H_{00}^{i}|^{2},\quad\mathcal{H}_{S}^{ii}=|H_{t0}^{i}|^{2}, \tag{55}\]
which are related to the invariant form factors through intermediate functions \(A_{+,-,0}^{i}\) and \(V^{i}\)
\[H_{t0}^{i} =\frac{1}{m_{1}+m_{2}}\frac{m_{1}|\mathbf{p_{2}}|}{m_{2}\sqrt{q^{2}}}\{P\cdot q(-A_{0}^{i}+A_{+}^{i})+q^{2}A_{-}^{i}\}, \tag{56}\] \[H_{\pm\pm}^{i} =\frac{1}{m_{1}+m_{2}}(-P\cdot q\,A_{0}^{i}\pm 2m_{1}|\mathbf{p_{2}}|V^{i}),\] (57) \[H_{00}^{i} =\frac{1}{m_{1}+m_{2}}\frac{1}{2m_{2}\sqrt{q^{2}}}\{-P\cdot q(m_{1}^{2}-m_{2}^{2}-q^{2})A_{0}^{i}+4m_{1}^{2}|\mathbf{p_{2}}|^{2}A_{+}^{i}\}, \tag{58}\]
with
\[V^{1} =C_{9}^{\rm eff}V+C_{7}^{\rm eff}\chi\,g, \qquad V^{2} =C_{10}V, \tag{59}\] \[A_{+}^{1} =C_{9}^{\rm eff}A_{+}+C_{7}^{\rm eff}\chi\,a_{+}, \qquad A_{\pm}^{2} =C_{10}A_{\pm},\] (60) \[A_{-}^{1} =C_{9}^{\rm eff}A_{-}+C_{7}^{\rm eff}\chi\,P\cdot q\,(a_{0}-a_{+})/q^{2}, \qquad A_{0}^{1} =C_{9}^{\rm eff}A_{0}+C_{7}^{\rm eff}\chi\,a_{0},\] (61) \[A_{0}^{2} =C_{10}A_{0}, \qquad \chi =2\bar{m}_{b}(m_{1}+m_{2})/q^{2}. \tag{62}\]
The full description of the \(B_{s}\to\phi\ell\ell\) decay requires, besides \(q^{2}\), three additional angles; see for example Eq. (2.1) in [242], where a completely analogous formula is written for the fully differential decay rate of \(B_{d}\to K^{*}\mu^{+}\mu^{-}\). The advantage of the helicity formalism is that the angular observables, i.e. the coefficients in front of the various angular terms, have simple expressions. For the longitudinal polarization fraction \(F_{L}\) and the forward-backward asymmetry \(A_{\rm FB}\) they read
\[F_{L}=\frac{1}{2}\beta_{\ell}^{2}\frac{{\cal H}_{L}^{11}+{\cal H}_{L}^{22}}{{\cal H}_{\rm tot}},\qquad A_{\rm FB}=-\frac{3}{4}\beta_{\ell}\frac{{\cal H}_{P}^{12}}{{\cal H}_{\rm tot}}, \tag{63}\] \[{\rm where}\ {\cal H}_{P}^{12}={\rm Re}\left[H_{++}^{1}(H_{++}^{2})^{\dagger}\right]-{\rm Re}\left[H_{--}^{1}(H_{--}^{2})^{\dagger}\right]. \tag{64}\]
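To make the chain (53)-(64) concrete, the following minimal Python sketch assembles the helicity amplitudes from a set of form factor values at fixed \(q^{2}\) and combines them into \(F_{L}\) and \(A_{\rm FB}\). The Wilson coefficients are rough SM-like placeholders (real parts only) and the constant form factor values in the last line are purely illustrative; neither corresponds to the CCQM inputs of [238].

```python
import math

C7, C9, C10 = -0.30, 4.2, -4.2          # illustrative SM-like values
m1, m2, ml = 5.3669, 1.0195, 0.1057     # m_Bs, m_phi, m_mu  [GeV]
mb = 4.8                                 # b-quark mass entering chi

def kallen(a, b, c):
    return a*a + b*b + c*c - 2*(a*b + a*c + b*c)

def helicity_amplitudes(q2, ff):
    """Eqs. (56)-(62): H^i_{t0,++,--,00} for i = 1, 2 from the invariant
    form factors ff = (A0, A+, A-, V, a0, a+, g)."""
    A0, Ap, Am, V, a0, ap, g = ff
    p2 = math.sqrt(kallen(m1*m1, m2*m2, q2)) / (2*m1)
    Pq = m1*m1 - m2*m2                   # P.q for on-shell momenta
    chi = 2*mb*(m1 + m2)/q2
    sets = [(C9*V + C7*chi*g, C9*Ap + C7*chi*ap,
             C9*Am + C7*chi*Pq*(a0 - ap)/q2, C9*A0 + C7*chi*a0),
            (C10*V, C10*Ap, C10*Am, C10*A0)]
    H = {}
    for i, (Vi, Api, Ami, A0i) in enumerate(sets, start=1):
        pref = 1.0/(m1 + m2)
        H['t0', i] = pref*m1*p2/(m2*math.sqrt(q2))*(Pq*(-A0i + Api) + q2*Ami)
        H['++', i] = pref*(-Pq*A0i + 2*m1*p2*Vi)
        H['--', i] = pref*(-Pq*A0i - 2*m1*p2*Vi)
        H['00', i] = pref/(2*m2*math.sqrt(q2))*(
            -Pq*(m1*m1 - m2*m2 - q2)*A0i + 4*m1*m1*p2*p2*Api)
    return H

def fl_afb(q2, ff):
    """Eqs. (53)-(55) and (63)-(64): F_L and A_FB at a given q^2."""
    H = helicity_amplitudes(q2, ff)
    dll = 2*ml*ml/q2
    beta = math.sqrt(1 - 2*dll)
    HU = [abs(H['++', i])**2 + abs(H['--', i])**2 for i in (1, 2)]
    HL = [abs(H['00', i])**2 for i in (1, 2)]
    HS2 = abs(H['t0', 2])**2
    Htot = 0.5*(HU[0] + HU[1] + HL[0] + HL[1]) + dll*(
        0.5*HU[0] - HU[1] + 0.5*HL[0] - HL[1] + 1.5*HS2)
    HP = H['++', 1]*H['++', 2] - H['--', 1]*H['--', 2]   # real inputs
    return 0.5*beta**2*(HL[0] + HL[1])/Htot, -0.75*beta*HP/Htot

print(fl_afb(4.0, (0.4, 0.4, -0.5, 0.9, 0.3, 0.3, 0.8)))  # toy values
```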
The CCQM-predicted behavior of the branching fraction and of the two angular observables \(F_{L}\) and \(A_{\rm FB}\) is shown, as a function of \(q^{2}\), in Fig. 11. The \(q^{2}\)-averaged numbers were computed for \(F_{L}\), \(A_{\rm FB}\), the additional angular observables \(S_{3}\), \(S_{4}\) and also for the optimized observables \(P_{1}\) and \(P_{4}^{\prime}\) which are derived from them, \(P_{1}=2S_{3}/(1-F_{L})\), \(P_{4}^{\prime}=S_{4}/\sqrt{F_{L}(1-F_{L})}\). The results are presented in Tab. 3. The table shows the branching fraction also for \(B_{s}\to\phi\nu\bar{\nu}\); the corresponding decay formula is indicated in Eqs. (34)-(36) of [238].

Figure 11: Branching fraction, \(F_{L}\) and \(A_{\rm FB}\) as functions of \(q^{2}\) for \(\mu\) and \(\tau\) in the final state. Figures were originally published in [238].

The text [238] also contains predictions for the radiative decay to \(\phi\gamma\) and the non-leptonic decay to \(\phi J/\Psi\) (formulas (38) and (37) there)
\[\mathcal{B}(B_{s}\to\phi\gamma)=(2.39\pm 0.48)\times 10^{-5},\quad\mathcal{B}(B_{ s}\to\phi J/\Psi)=(1.6\pm 0.3)\times 10^{-3}. \tag{65}\]
The results can be compared to the current experimental numbers [23]:
\[\mathcal{B}(B_{s}\to\phi\mu^{+}\mu^{-})=(8.4\pm 0.4)\times 10^{-7}, \quad\mathcal{B}(B_{s}\to\phi\nu\bar{\nu})<5.4\times 10^{-3}, \tag{66}\] \[\mathcal{B}(B_{s}\to\phi\gamma)=(3.4\pm 0.4)\times 10^{-5},\quad \mathcal{B}(B_{s}\to\phi J/\Psi)=(1.04\pm 0.04)\times 10^{-3}. \tag{67}\]
The branching fraction to \(\phi\mu^{+}\mu^{-}\) is in good agreement with the SM; in fact the experimental numbers measured after the publication moved closer to the published CCQM value. The same is also true for the two non-leptonic decay channels, yet here a discrepancy of the order of \(2\,\sigma\) remains.
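As a quick numerical illustration of the quoted \(2\sigma\) statement, one can form the pulls from the numbers in (65)-(67) (common powers of ten divide out):

```python
import math

# Pull = (theory - experiment) in units of the combined uncertainty;
# the inputs are copied from Eqs. (65)-(67) above.
def pull(th, sth, ex, sex):
    return (th - ex) / math.sqrt(sth**2 + sex**2)

print(pull(2.39, 0.48, 3.4, 0.4))   # B(Bs -> phi gamma):  ~ -1.6 sigma
print(pull(1.6, 0.3, 1.04, 0.04))   # B(Bs -> phi J/psi):  ~ +1.9 sigma
```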
Coming back to the semileptonic decays, detailed binned values were presented in Tab. 6 of [238] for \(B_{s}\to\phi\mu^{+}\mu^{-}\). They mimic the way the experimental measurements are done and they are of interest because the largest discrepancy observed by [139, 140] is in the branching fraction on the \(q^{2}\) interval\({}^{1}\) \(1-6\,\mathrm{GeV}^{2}\). Also, the table presents the effect of the two-loop contributions by giving the numbers with and without them. We do not reproduce all of them here but focus only on the interval \(1\,\mathrm{GeV}^{2}\leq q^{2}\leq 6\,\mathrm{GeV}^{2}\) and observables measured on this interval, see Tab. 4. In the table older measurements are also indicated in brackets, and one sees that for all indicated observables except \(S_{3}\) the new measurements bring the experimental value closer to the theoretical one. The large error of the \(S_{3}\) measurement implies that both CCQM predictions (1-loop and 2-loop) do not much exceed a \(1\,\sigma\) deviation. Considering the 2-loop results, no significant deviations from the experiment are observed; especially in the branching fraction case they bring the value closer to the measurement (w.r.t. the one-loop calculations).
Footnote 1: In [239, 140] the lower interval limit is \(1.1\,\mathrm{GeV}^{2}\). This effect is considered negligible because the measured quantities are intensive (not additive), e.g. the branching fraction measurement is \(q^{2}\)-averaged (the integral over the interval is divided by the interval length).
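For definiteness, the \(q^{2}\)-averaging described in the footnote amounts to the following short computation (a toy spectrum is used in place of the actual CCQM curve):

```python
from scipy.integrate import quad
import math

# Toy spectrum standing in for dB/dq2 of Bs -> phi mu mu; the shape is
# made up for illustration, the CCQM curve of Fig. 11 would go here.
def dBdq2(q2):
    return 5.0e-8 * (1.0 + 0.1*q2) * math.exp(-0.05*q2)

q2min, q2max = 1.0, 6.0                    # GeV^2
integral, _ = quad(dBdq2, q2min, q2max)
print(integral / (q2max - q2min))          # the quoted "binned" value
```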
As a summary we can conclude that the interesting decay channel \(B_{s}\to\phi\ell^{+}\ell^{-}\) was addressed in the framework of the CCQM. Already at the time of the publication the comparison with the LHCb numbers did not allow us to claim the presence of NP; the major discrepancy in the branching fraction on the \(1-6\,\mathrm{GeV}^{2}\) interval was reduced significantly by the CCQM prediction. The same was true for other discrepancies (\(F_{L}\), \(S_{4}\)) seen on other intervals. The new data further decreased the branching fraction discrepancy, and with the results of the CCQM one can no longer talk about a discrepancy.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & CCQM, 2 loop & CCQM, 1 loop & Experiment [140, 239] ([139]) \\ \hline \(10^{7}\mathcal{B}_{\mathrm{tot}}\) & \(1.56\pm 0.31\) & \(1.64\pm 0.33\) & \(1.41\pm 0.11\) (1.29) \\ \(F_{L}\) & \(0.69\pm 0.14\) & \(0.71\pm 0.14\) & \(0.715\pm 0.036\) (0.63) \\ \(S_{3}\) & \(-0.034\pm 0.007\) & \(-0.039\pm 0.008\) & \(-0.083\pm 0.047\) (\(-0.02\)) \\ \(S_{4}\) & \(0.17\pm 0.03\) & \(0.19\pm 0.04\) & \(0.155\pm 0.058\) (0.19) \\ \(S_{7}\) & \(0.0065\pm 0.0013\) & 0 & \(0.020\pm 0.059\) (\(-0.03\)) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Branching fraction and selected angular observables on the interval \(1\,\mathrm{GeV}^{2}\leq q^{2}\leq 6\,\mathrm{GeV}^{2}\) for \(B_{s}\to\phi\mu^{+}\mu^{-}\). Indicated are the CCQM predictions with and without 2-loop contributions and the experimental value. Table contains a subset of data originally published in [238].
\begin{table}
\begin{tabular}{c c c c} \hline \hline & \(B_{s}\to\phi\mu^{+}\mu^{-}\) & \(B_{s}\to\phi\tau^{+}\tau^{-}\) & \(B_{s}\to\phi\nu\bar{\nu}\) \\ \hline \(\mathcal{B}_{tot}\) & \((9.11\pm 1.82)\times 10^{-7}\) & \((1.03\pm 0.20)\times 10^{-7}\) & \((0.84\pm 0.16)\times 10^{-5}\) \\ \(\langle A_{\mathrm{FB}}\rangle\) & \(-0.24\pm 0.05\) & \(-0.18\pm 0.04\) & – \\ \(\langle F_{L}\rangle\) & \(0.45\pm 0.09\) & \(0.09\pm 0.02\) & – \\ \(\langle P_{1}\rangle\) & \(-0.52\pm 0.1\) & \(-0.76\pm 0.15\) & – \\ \(\langle P_{4}^{\prime}\rangle\) & \(1.05\pm 0.21\) & \(1.33\pm 0.27\) & – \\ \(\langle S_{3}\rangle\) & \(-0.14\pm 0.03\) & \(-0.067\pm 0.013\) & – \\ \(\langle S_{4}\rangle\) & \(0.26\pm 0.05\) & \(0.083\pm 0.017\) & – \\ \hline \hline \end{tabular}
\end{table}
Table 3: Total branching fractions and averaged angular observables of selected decay channels for the whole kinematic region. Table contains data originally published in [238].
### Other CCQM results on semileptonic \(B\) decays
Quite a few papers were dedicated to the study of semileptonic \(B\) decays in the framework of the CCQM. We will not include in the overview older texts where an earlier version of the model was used [243, 244, 245, 246, 247, 54, 248, 249, 250].
The first text we mention, [43], was already cited several times here. It is a generally oriented text focusing mostly on the model itself and presenting its various aspects, including, for the first time, also the infrared confinement of quarks. A global fit to basic experimental quantities, such as weak leptonic decay constants, was performed in order to determine the universal and hadron-specific model parameters. These parameters were used in the same text to predict weak leptonic decay constants (including for \(B\) mesons) and Dalitz decays of several light mesons. The results were encouraging: most of the predictions were in quite good agreement with measured data.
The paper [251] is dedicated to various \(B_{(s)}\) decays with, however, emphasis on the nonleptonic processes. In the first part of the text the global fits are refined and the model parameters are updated. Then the semileptonic decays are addressed, but only in the context of the universal transition form factors to several final-state mesons (pseudoscalar and vector). The results on form factors are given in the form of plots, and a comparison with seven other authors, based on the value at \(q^{2}=0\), is shown in Tab. 3 of [251].
A somewhat similar treatment of the semileptonic decays is given in [252]. Here again the emphasis is on exotic and nonleptonic decays. The semileptonic decays are addressed in the context of transition form factors, similarly to the previous text.
The publication [253] focuses on the semileptonic decays of \(B_{(s)}\) to scalar mesons with light masses (below 1 GeV) in the context of the \(B\to K^{*}(\to K\pi)\mu^{+}\mu^{-}\) decay. The CCQM form factors \(F_{\pm}\) and \(F_{T}\) are predicted for the range \(0.8\,\mathrm{GeV}\leq\Lambda_{S}\leq 1.5\,\mathrm{GeV}\) of the scalar-meson model parameter for the \(b\to u\), \(b\to d\) and \(b\to s\) transitions. The predictions are approximated for \(\Lambda_{S}=0.8\,\mathrm{GeV}\) and \(\Lambda_{S}=1.5\,\mathrm{GeV}\) by a simplified parameterization which depends on three numbers. They are given in Tab. 2 of the text, so as to make the results available to other authors. Branching fractions (\(\Lambda_{S}=1.5\,\mathrm{GeV}\)) for various semileptonic decays \(B_{(s)}\to S\ell\ell\), \(B_{(s)}\to S\ell\nu_{\ell}\) are shown in Tab. 4 of the work. The text then briefly discusses the role of the scalar \(K^{*}_{0}(800)\) particle in the cascade decay of the \(B\) meson, pointing out that the narrow-width approximation is not appropriate and estimating the \(S\)-wave pollution in the \(B\to K^{*}\ell^{+}\ell^{-}\) decay at 6%.
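A three-number parameterization of this kind lets anyone evaluate the tabulated form factors without rerunning the model. A typical interpolation used for this purpose is a double-pole form; whether [253] uses exactly this ansatz is an assumption here, and the coefficients in the example are invented.

```python
# Double-pole interpolation F(q2) = F(0)/(1 - a*s + b*s^2), s = q2/m1^2,
# a common three-parameter form for tabulating quark-model form factors.
def ff_double_pole(q2, F0, a, b, m1=5.27966):
    """Evaluate a tabulated form factor from its three parameters."""
    s = q2 / (m1 * m1)
    return F0 / (1.0 - a*s + b*s*s)

print(ff_double_pole(4.0, 0.30, 0.87, -0.06))   # made-up coefficients
```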
The leptonic and semileptonic processes \(B\to\ell\bar{\nu}\) and \(B\to D^{(*)}\ell^{-}\bar{\nu}\) are investigated in [114] to address the question of the lepton flavor universality. We have already commented on the leptonic results; they are entirely linked to the weak decay constants, which are computed for various \(B\) and \(D\) mesons in Tab. 1. Semileptonic decays are more demanding and the usual steps are taken: the SM CCQM form factors are determined (a simplified parameterization is also provided) and are used in a helicity formulation to predict the full four-dimensional differential distribution for the decay rate and various \(q^{2}\)-dependent distributions for angular and polarization observables. By integration one gets the total branching fractions, shown in Tabs. 3 and 4 of the publication, and their ratios \(R_{D}\) and \(R_{D^{*}}\) (Tab. 5). The results are favorable to the presence of NP; the deviation in \(R_{D^{(*)}}\) is not smaller than that seen by other authors at that time.
An analogous process with the \(K^{*}\) meson in the final state is the subject of the analysis in [49]. The text follows the same logic as the previous one: the model is used to predict form factors and then the helicity formalism is employed to derive various differential distributions. Besides the branching fraction, the emphasis is on the angular coefficients \(A_{\mathrm{FB}}\), \(F_{L}\) and \(P_{i}^{(\prime)}\), \(i=1-5,8\), depicted in Figs. 7-11 of the publication. The numbers are given for variables integrated or averaged over the whole kinematical range (Tabs. 3 and 4) but also for various intervals (i.e. bins, Tabs. 3,8). The predicted branching fractions exceed the measured values; as for the angular observables, reliable conclusions require more precise experimental data.
The article [254] analyses possible NP scenarios for \(\bar{B}^{0}\to D^{(*)}\tau^{-}\bar{\nu}_{\tau}\) and in this way differs from the previous ones. The analysis relies on the usual effective Hamiltonian approach, where beyond-SM four-fermion operators are introduced with a definition analogous to (40) with \(q\to c\). It is assumed that the NP affects only the leptons of the third generation and the effect of each NP operator is studied separately, with no other NP operator interfering. The form factors are computed in the CCQM framework, from which observable quantities are obtained. By a fit to the \(R_{D^{(*)}}\) ratios, allowed regions of the complex plane for the Wilson coefficients \(V_{L,R}\), \(S_{L}\) and \(T_{L}\) are identified (Fig. 2 of the text). No room was found for the \(S_{R}\) coefficient to explain the observed ratio and thus the corresponding operator was removed from further considerations. Next, the full four-fold differential distribution was derived and various \(q^{2}\)-differential distributions analyzed: the NP Wilson coefficient was perturbed at the \(2\sigma\) level from the central value and the effect on a given distribution depicted as a gray band around the central line (Figs. 4-9). Depending on which distributions future measurements provide, the presented results can serve to identify which NP Wilson coefficients play a role.
The same process is also considered in [255], once again in the NP scenario based on the SM-extended effective Hamiltonian. Here the main topics are the longitudinal, transverse and normal polarization components of the tau lepton, and their high sensitivity to NP effects is argued. Using a model-independent approach and the experimental data, constraints for various NP scenarios are derived and their effect on the polarization observables is investigated. To get numerical results the CCQM form factors are used. The acquired knowledge about the dependence of the polarization observables on the NP Wilson coefficients may be useful in future data analysis as a guiding rule to differentiate between various NP scenarios.
A very similar analysis is performed in [115], but for different decays. The text focuses on the processes with light mesons in the final state, \(\bar{B}^{0}\to\pi\tau\bar{\nu}\) and \(\bar{B}^{0}\to\rho\tau\bar{\nu}\), and on the leptonic decay \(B_{c}\to\tau\bar{\nu}\), assuming an SM-extended set of four-fermion operators. It uses the observables (41), defined already in the leptonic section, and the CCQM-predicted form factors to constrain the introduced NP Wilson coefficients. The effect of their variation on (41) and on selected angular observables is analyzed.
Yet another publication which follows the same logic is [256], focusing this time on the decays \(B_{c}\to J/\psi\tau\nu\) and \(B_{c}\to\eta_{c}\tau\nu\). The observables used to constrain the NP Wilson coefficients are \(R_{D}\), \(R_{D^{*}}\), \(R_{J/\psi}\) and \({\cal B}(B_{c}\to\tau\nu)\). With form factors derived in the CCQM assuming the NP, the impact of variation of these coefficients on other branching fraction ratios and angular observables is evaluated. The work provides a detailed comparison of the CCQM form factors with form factors from different approaches.
The work [257] is interested in \(B_{c}\to J/\psi\bar{\ell}\nu_{\ell}\) and in the hadronic decays \(B_{c}\to J/\psi\pi(K)\). This time an SM calculation is presented; the agreement with the SM is assessed through a comparison of measured and predicted values for \(R_{J/\psi}\) and two additional observables
\[R_{\pi^{+}/\mu^{+}\nu} ={\cal B}(B_{c}^{+}\to J/\psi\pi^{+})/{\cal B}(B_{c}^{+}\to J/\psi\mu^{+} \nu_{\mu}), \tag{68}\] \[R_{K^{+}/\pi^{+}} ={\cal B}(B_{c}^{+}\to J/\psi K^{+})/{\cal B}(B_{c}^{+}\to J/\psi \pi^{+}). \tag{69}\]
The form factors are evaluated in the CCQM framework and results for a set of semileptonic decays with \(J/\psi\) or \(\eta_{c}\) in the final state are presented (Tab. 2 there). The conclusion regarding the ratios is that an agreement with the SM is reached for \(R_{\pi^{+}/\mu^{+}\nu}\) and \(R_{K^{+}/\pi^{+}}\), but the theoretical prediction for \(R_{J/\psi}\) is too low with respect to the data.
The semileptonic decays \(B\to K^{*}\mu\mu\), \(B_{s}^{0}\to\phi\mu\mu\) and the leptonic decay \(B_{s}\to\mu^{+}\mu^{-}\) are addressed in [258]. This brief text summarizes selected results and refers to previous papers.
The next paper dedicated to semileptonic decays is [259]. It analyzes the \(B\to K^{(*)}\nu\bar{\nu}\) process, where the current experimental limits on the branching fraction are expected not to be very far from the central value predicted by theory (i.e. the central value may be measured in the future). The CCQM is used to predict the hadronic form factors, which are then used in the helicity framework to predict branching fractions. The results agree with the experimental limits and also with most other authors. The limits are only about four times higher than the central values predicted by the theory.
## 5 Nonleptonic decays of \(B\) mesons
### Overview
The number of experimental measurements concerning nonleptonic (or hadronic) \(B\) decays is even larger than for semileptonic ones. Again, we briefly review the LHCb results and the results of the two \(B\) factories, BaBar and Belle(II), as the most representative. Nevertheless, we do not provide an exhaustive list but mention only works with larger impact.
The question of NP is less pronounced for hadronic decays than for semileptonic ones, since the former are theoretically less clean. Yet, the NP is often mentioned and treated together with some of the usual topics such as (exotic) multiquark states, observations of new decay channels, CP-related measurements, fragmentation fractions or branching fraction determinations. In what follows we will try to follow this classification.
The LHCb published several papers reporting the observation of a specific decay channel, some being observed for the first time. This comprises the first observations of \(B_{s}^{0}\to J/\psi f_{0}(980)\)[260], \(B_{c}^{+}\to J/\psi D_{s}^{+}\) and \(B_{c}^{+}\to J/\psi D_{s}^{*+}\)[261], \(B_{c}^{+}\to B_{s}^{0}\pi^{+}\)[262], \(B^{+}\to D_{s}^{+}D_{s}^{-}K^{+}\)[263], \(B_{s}^{0}\to D^{*+}D^{*-}\)[264], \(B^{+}\to J/\psi\eta^{\prime}K^{+}\)[265] or \(B_{s}^{0}\to\chi_{c1}(3872)\pi^{+}\pi^{-}\)[266]. For most of these observations some quantitative numbers are given, usually branching fraction ratios to a different decay mode (normalization channel).
A special interest is given to the observation of "resonant structures", i.e. observation of possible exotic multiquark states which are sometimes seen in invariant mass distributions of particles originating from the \(B\) disintegration. An important contribution to the exotic physics was made in 2013, when the LHCb measured, in a \(B\) decay channel, the quantum numbers of the \(X(3872)\) resonance [267], previously discovered by Belle. Contemporary texts [268], [269] and [270] analyze the \(\bar{B}^{0}_{s}\to J/\psi\pi^{+}\pi^{-}\) and \(\overline{B}^{0}\to J/\psi\pi^{+}\pi^{-}\) spectra and identify various resonant structures; here only the usual SM resonances are seen. The possible tetraquark character of the \(f_{0}(980)\) invoked in the last text is rejected as inconsistent with data. The situation becomes different in [271], where four resonant structures, possibly tetraquarks, are observed and their quantum numbers are determined. The work [272] reports on two exotic particles with \(c\bar{c}u\bar{s}\) quark content determined with high significance and also confirms four previously reported states. The authors of [273] perform an amplitude analysis of the \(B^{-}\to J/\psi\Lambda\bar{p}\) process, where the \(J/\psi\Lambda\) mass spectrum contains a narrow resonance, possibly a strange pentaquark; its quantum numbers are measured. A resonant structure, referred to as \(X(3960)\), is also observed in the \(B^{+}\to D^{+}_{s}D^{-}_{s}K^{+}\) decay mode close to the \(D^{+}_{s}D^{-}_{s}\) production threshold [274]. It is established to be consistent with a four-quark state \(c\bar{c}s\bar{s}\) having quantum numbers \(J^{PC}=0^{++}\). The text [275] analyses the spectrum of \(B^{+}\to D^{+}D^{-}K^{+}\) and advances a hypothesis of new charm-strange resonances. Another recent text, [276], also sees a new resonance of mass \(4337\,\mathrm{MeV}\) in the \(J/\psi p\) \((J/\psi\bar{p})\) spectrum of the \(B^{0}_{s}\to J/\psi p\bar{p}\) decay. A very recent analysis [277] is concerned with decays of the \(B\) mesons to \(J/\psi\phi K^{0}_{S}\) and presents evidence for the \(T^{0}_{\psi s1}\) state in the \(J/\psi K^{0}_{S}\) invariant mass spectrum, presumably a tetraquark.
Besides direct investigations of the invariant mass spectrum, many LHCb publications rely on Dalitz plot and amplitude analyses to identify resonant components, and further resonances are identified in this way, see [278, 279, 280, 281, 282, 283, 284]. The hadronic \(B\) decays are also often studied in the context of the CP analysis and weak parameter determination [285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298]. Various topics are addressed in these works: observation of the CP violation in a specific decay, measurement of the CP-violating phase, \(B^{0}_{(s)}\)-\(\bar{B}^{0}_{(s)}\) oscillations and determination of the CKM angles. The \(B\) decay measurements are also used to determine basic particle quantities, such as production cross sections, branching ratios or fragmentation fractions [299, 300, 301, 302, 303, 304, 305, 306, 307, 308].
The publications of the BaBar experiment fall into similar categories. We choose to mention in more detail the CP-related results, which had, in the domain of nonleptonic \(B\) decays, the most significant impact. Namely, the violation of the CP symmetry was, before the BaBar measurement [309], observed only for kaons. The measurement was done for several decay modes of the \(B^{0}\) particle; for each decay the CP asymmetry \(A_{\mathrm{CP}}\) was measured. The latter was defined in terms of a decay-time distribution \(f_{\pm}(\Delta t)\) for \(B\) and \(\bar{B}\) decaying into the common final state. The results were derived for the \(\sin(2\beta)\) quantity, where \(\beta\) is an angle of the unitarity triangle constructed from the CKM matrix elements; its deviation from zero measures the CP violation. The significance of the measurement reached the \(4\sigma\) level. The CP-violation topic was then discussed in further publications for the neutral [310, 311, 312, 313, 314, 315, 316, 317, 318] and also the charged \(B\) meson [319, 320, 321, 322]. Both indirect (i.e. involving particle-antiparticle oscillations) and direct CP violation were seen with relevant significance. Several texts present measurements where the branching fraction and the CP asymmetries were addressed at the same time [323, 324, 325, 326, 327, 328]. Besides the direct CP violation measurements, the closely related measurements of the CKM angles \(\alpha\) and \(\gamma\) were presented in [329, 330, 331, 332].
The BaBar collaboration also investigated, in a variety of publications [333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344], the usual quantities which characterize decays, i.e. branching fractions and angular observables. The related topic of resonances and exotic states was the subject of numerous analyses. The resonances were investigated through invariant mass spectra or the Dalitz-plot method, as presented in [345, 346, 347]. Concerning exotic states, most of the BaBar results are related to the \(X(3872)\) particle [348, 349, 350, 351, 352, 353, 354, 355, 356] and present related searches, observations and measurements in various decay modes. The state \(Y(3940)\), first discovered at Belle, was also observed (as a product of a \(B\) decay) and its mass and width were determined.
The Belle experiment was very successful in the search for various exotic states, tetraquarks and pentaquarks. Not all were related to hadronic decays of the \(B\) meson, but the most cited result [357] was. It presents the discovery of the \(X(3872)\) particle, seen in the \(\pi^{+}\pi^{-}J/\psi\) spectrum of \(B^{\pm}\to K^{\pm}\pi^{+}\pi^{-}J/\psi\). Other achievements were the detection of the tetraquark candidates \(Z(4430)\)[358] and \(Y(3940)\)[359], both among the decay products of \(B\). In addition to these, further publications on this topic were issued [360, 361, 362, 363, 364, 365, 366, 367, 368, 369], all related to nonleptonic \(B\) decays. The physics program regarding the CP violation and the weak physics in general is also very present at Belle. The collaboration published the \(B^{0}\) CP-violation paper [370] only a short time after BaBar did. Yet, it drew a lot of attention as an independent measurement of the \(\sin(2\beta)\) parameter. The measurement was updated later in [371]; direct CP violation was reported in [372, 373]. Many additional papers were published by Belle in which various CP parameters (CKM angles) and weak-physics related processes were studied [374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394].
Naturally, the research at Belle is also devoted to branching fraction measurements of different \(B\) decay modes [395, 396, 397, 398, 399, 400, 401, 402, 403, 404], observation and analysis of new decay channels [405, 406, 407, 408, 409, 410, 411, 412, 413], polarization studies [414, 415] and photon energy spectra analysis in radiative events [416, 417].
The large amount of data on hadronic \(B\) decays motivates theorists to describe the observations and prove our understanding of the underlying physics to be correct. The exotic multiquark states have a specific character from the perspective of \(b\) physics: as a matter of fact many of them originate from nonleptonic \(B\) decays, yet these decays, seen as exotic production processes, are not addressed very frequently. They often have a larger number of hadrons in the final state (three or more) and thus a large phase space and a technically complicated description. The exotic particles are usually treated in the scenario where they represent the initial state (for the CCQM model see [48]) and are thus outside the scope of this text (they are not \(B\) mesons). The emphasis of the theoretical overview is therefore on the remaining topics: branching fractions and weak-interaction physics.
The theoretical grounds to describe (not only) hadronic \(B\) decays were laid decades ago. The CP violation in the SM stems from the flavor mixing through the CKM matrix, which has an irreducible complex phase, as formulated in the pioneering works [418, 419]. This rapidly led to the first theoretical predictions. In [420] the expectation of a small but measurable CP non-invariance in \(B\) meson decays was expressed. The authors of [421] argued, studying the on-shell transitions in heavy meson cascade decays, that the effect may not be so small after all, and proposed methods to detect the CP violation in the \(B\) sector. The latter topic is also discussed in [422], where mainly the non-leptonic decay modes are addressed.
In parallel, issues related to the asymptotic behavior and quark interactions were considered. The nice review [423] addressed the question of the power behavior of amplitudes and its relation to mesonic wave functions and quantum numbers. As a result, quantitative conclusions are drawn for hadronic form factors, large-angle scattering processes and other related quantities. The highly cited paper [424] presents a relativistic extension of the quark model based on one-gluon exchange and a linear confining quark potential. It is used to describe mesons, their spectroscopy and decays, and succeeds to a large extent. The work [425] studies (among others) \(B\) decays in the framework of the valence quark model; the model assumes factorization and good results are obtained especially for nonleptonic processes. Following works further sharpen the QCD SM predictions: the next-to-leading QCD corrections are computed in [426], the implications of the heavy quark symmetry are analyzed in [427], the generalized factorization hypothesis and its impact on the structure of non-factorizable corrections are presented in [428] and three-loop anomalous dimensions at the next-to-leading order in \(\alpha_{s}\) for weak radiative \(B\) decays are computed in [429]. The role of the charm penguin diagrams in the \(B\) decay to pions was evaluated by the authors of [430] and a next-to-leading order evaluation of the branching fraction and photon spectrum of the \(B\to X_{s}+\gamma\) process was presented in [431].
Coming back to the CP symmetry, one can mention the publication [432], where large time-dependent CP asymmetries in the \(B^{0}-\bar{B}^{0}\) system are predicted, or [433], where it is shown that the theoretical uncertainty associated with penguin diagrams in the \(B^{0}\to\pi\pi\) decay can be reduced by considering isospin relations.
An important issue addressed by various authors is the validity of factorization, often assumed for hadronic matrix elements of the four-fermion operators. In [434] a theoretical investigation of \(B\) branching fractions is undertaken and branching fraction ratios of selected two-body hadronic \(B\) decays are proposed as experimental tests of factorization. The article [435] is focused on the factorization for heavy-light final states. Such decays are treated in the heavy quark limit and the validity of the factorization ansatz is in this scenario proven at the two-loop order. In a similar context the authors of [436] study processes with two light mesons (\(K\), \(\pi\)) in the final state. They argue that in the heavy quark limit the hadronic matrix elements of nonleptonic \(B\) meson decays can be computed from first principles, which helps to reduce the errors on the weak phases \(\alpha\) and \(\gamma\). The paper [437] is oriented very similarly; there the proof of factorization is provided for \(B^{-}\to D^{0}\pi^{-}\) and \(B^{0}\to D^{+}\pi^{-}\). The topic of factorization is further treated in [438], where the decays \(B\to PP\) and \(B\to PV\) are addressed, and also in [439], where soft-collinear effective theory is used to prove factorization for \(B\) decaying to two light particles (\(\pi\), \(K\), \(\rho\), \(K^{*}\)).
One should also mention new physics searches. The paper [440] studies the \(B\to\pi\pi\) process, from which it extracts relevant hadronic parameters. These are then used, under plausible assumptions, to predict \(B\to\pi K\). Those observables (for the latter process) which have small EW penguin contributions seem to agree with the experiment, those with significant contributions do not. This might indicate NP in the EW penguin sector. Similar ideas are developed also in [441]. A related topic, the final state interactions in hadronic \(B\) decays, is treated in [442]. Indeed, when considering the \(B\) decays to light mesons, there are, generally speaking, some difficulties in describing the data. To disentangle possible NP, all SM effects need to be considered, rescattering included. The latter is here treated in a phenomenological way in terms of off-shell meson exchange.
Let us briefly mention other works of interest: papers [206, 443] apply the light-cone sum rules to tackle \(B\) decays to light vector and pseudoscalar mesons, respectively; the authors of [444] compute, at next-to-next-to-leading order of QCD, the effective Hamiltonian for non-leptonic \(|\Delta F|=1\) decays; and the text [445] focuses on the \(B\) decays to two vector particles in the framework of the QCD factorization. Finally, we can mention the paper [446], which summarizes the status of our CKM matrix knowledge based on a global fit to various (leptonic, semileptonic, hadronic) data.
### Nonleptonic \(B\) decays in CCQM
#### Decay \(B_{s}\to J/\psi\eta^{(^{\prime})}\)
We have chosen to demonstrate the CCQM approach on two hadronic processes to point out various aspects of the model application. The first one is \(B_{s}\to J/\psi\eta^{(\prime)}\)[447], where a fit to the data was performed so as to determine the model input parameters. The \(\eta^{(\prime)}\) mesons are described as a superposition of light (\(q=u,d\)) and strange components, \(\eta=-\sin\delta(\bar{q}q)-\cos\delta(\bar{s}s)\) and \(\eta^{\prime}=\cos\delta(\bar{q}q)-\sin\delta(\bar{s}s)\), where \(\delta=\varphi_{P}-\pi/2\), \(\varphi_{P}=41.4^{\circ}\)[448]. The considered decay was treated within the naive factorization picture at the leading order, meaning it was described as a \(B_{s}\to\eta^{(\prime)}\) transition where only the \(\bar{s}s\) component of the latter is taken into account, see Fig. 12. The necessary inputs for the decay width formula (\(P=\eta,\,\eta^{\prime}\))
\[\Gamma(B_{s}\to J/\Psi+P)=\frac{G_{F}^{2}}{4\pi}|V_{cb}V_{cs}^{*}|^{2}C_{W}^{2}f_{J/\psi}^{2}|\mathbf{q}_{P}|^{3}\zeta_{P}^{2}[F_{+}^{B_{s}\eta^{(\prime)}}(m_{J/\Psi}^{2})]^{2},\;\zeta_{\eta}=\cos\delta,\;\zeta_{\eta^{\prime}}=\sin\delta \tag{70}\]
are the leptonic decay constants \(f_{J/\Psi}\equiv f_{V}\) and the transition form factor \(F_{+}\)
\[m_{V}f_{V}\epsilon_{V}^{\mu}=N_{c}g_{V}\int\frac{d^{4}k}{(2\pi)^{4}i}\tilde{\Phi}(-k^{2})\mathrm{tr}[O^{\mu}S_{1}(k+w_{1}p)\not{\epsilon}_{V}S_{2}(k-w_{2}p)],\quad p^{2}=m_{V}^{2}, \tag{71}\]
\[\langle P_{q_{3},q_{2}}(p_{2})|\bar{q}_{2}O^{\mu}q_{1}|B_{q_{3},q_{1}}(p_{1})\rangle= F_{+}(q^{2})P^{\mu}+F_{-}(q^{2})q^{\mu}, \tag{72}\] \[= N_{c}g_{B}g_{P}\int\frac{d^{4}k}{(2\pi)^{4}i}\tilde{\Phi}_{B}(-[k+w_{13}p_{1}]^{2})\tilde{\Phi}_{P}(-[k+w_{23}p_{2}]^{2})\times\mathrm{tr}[O^{\mu}S_{1}(k+p_{1})\gamma^{5}S_{3}(k)\gamma^{5}S_{2}(k+p_{2})],\]
where the Wilson coefficient is given by \(C_{W}=C_{1}+C_{2}/N_{c}+C_{3}+C_{4}/N_{c}+C_{5}+C_{6}/N_{c}\) and the meaning of the other symbols is analogous to Secs. 3.2 and 4.2. The results are derived in the large-\(N_{c}\) limit \(N_{c}\to\infty\). To get to the form factor and the decay constants one needs to know the model \(\Lambda\) parameters \(\Lambda_{\eta}^{\bar{q}q}\), \(\Lambda_{\eta}^{\bar{s}s}\), \(\Lambda_{\eta^{\prime}}^{\bar{q}q}\) and \(\Lambda_{\eta^{\prime}}^{\bar{s}s}\), four in total if one treats the \(q\) and \(s\) components as independent. They can be derived from various processes where they play a role; so, in addition to the two studied decay channels, also \(\eta\to\gamma\gamma\), \(\eta^{\prime}\to\gamma\gamma\), \(\eta^{\prime}\to\omega\gamma\), \(\rho\to\eta\gamma\), \(\omega\to\eta\gamma\), \(\varphi\to\eta\gamma\), \(\varphi\to\eta^{\prime}\gamma\), \(B_{d}\to J/\Psi+\eta\) and \(B_{d}\to J/\Psi+\eta^{\prime}\) have been chosen. Fitting all 11 processes together, the best-fit parameters were determined
\[\Lambda_{\eta}^{\bar{q}q}=0.881\,\mathrm{GeV},\quad\Lambda_{\eta}^{\bar{s}s}=1.973\,\mathrm{GeV},\quad\Lambda_{\eta^{\prime}}^{\bar{q}q}=0.257\,\mathrm{GeV},\quad\Lambda_{\eta^{\prime}}^{\bar{s}s}=2.797\,\mathrm{GeV}, \tag{73}\]
Figure 12: The \(B_{s}\to\eta^{(^{\prime})}J/\psi\) decay as a \(B_{s}\) transition to the \(\bar{s}s\) component of \(\eta^{(^{\prime})}\) (a) in the factorization picture (b). Figure was originally published in [447].
other model parameters were taken from previous works, namely \(\Lambda_{B_{s}}=1.95\) GeV, \(\Lambda_{B_{d}}=1.88\) GeV and \(\Lambda_{J/\Psi}=1.48\) GeV. Also the hadron-independent parameters (5) were tuned to different values, see Eq. (6) of [447]. With these in hand one computes the results, see Tab. 5. Generally speaking, the discrepancies in terms of standard deviations are rather large, yet the model roughly (within a factor of 2) reproduces the data. There might be reasons behind the differences that one needs to understand; e.g. a gluonium contribution to the \(\eta^{\prime}\) state [448] could weaken the largest disagreement, seen for \(\Gamma(\eta^{\prime}\to\omega\gamma)\). As pointed out in [447], other models on the market do not seem to perform better than ours.
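The fitting strategy itself is a standard \(\chi^{2}\) minimization. The schematic Python sketch below shows its structure for four size parameters constrained by 11 observables; the observable map is a smooth dummy function standing in for the CCQM loop-integral predictions, so all numbers apart from the Eq. (73) seed values are made up.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Dummy observable map: in the actual analysis each of the 11 entries
# (radiative widths, B -> J/psi eta(') branchings) is a CCQM prediction
# depending on the four size parameters.
weights = rng.uniform(0.1, 1.0, size=(11, 4))

def model_observables(lams):
    return np.log1p(weights @ np.asarray(lams))

true = np.array([0.881, 1.973, 0.257, 2.797])        # Eq. (73) values
data = model_observables(true) * (1 + 0.02*rng.standard_normal(11))
err = 0.02*np.abs(data)                              # toy 2% errors

def residuals(lams):                                 # chi^2 = sum(r^2)
    return (model_observables(lams) - data) / err

fit = least_squares(residuals, x0=[1.0, 2.0, 0.5, 2.5], bounds=(0.0, 5.0))
print("best-fit Lambda parameters (GeV):", fit.x)
```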
The Belle and LHCb collaborations also measured the ratio
\[R=\frac{\mathcal{B}(B_{s}\to J/\Psi+\eta^{\prime})}{\mathcal{B}(B_{s}\to J/\Psi+\eta)}=\begin{cases}0.73\pm 0.14,&\text{Belle [449]}\\ 0.90\pm 0.1,&\text{LHCb [450]}\\ 0.86,&\text{CCQM}\end{cases}. \tag{74}\]
Here the CCQM number reproduces the measurements well and, through the predicted form factors, adds a non-trivial factor of 0.83 to the model-independent part of the calculation
\[R^{\text{theor}}=\left(\frac{\left|\mathbf{q}_{\eta^{{}^{\prime}}}\right|^{3} }{\left|\mathbf{q}_{\eta}\right|^{3}}\tan^{2}(\delta)\right)\times\left( \frac{F_{+}^{B_{s}\eta^{{}^{\prime}}}}{F_{+}^{B_{s}\eta}}\right)^{2}=1.04... \times 0.83...\approx 0.86. \tag{75}\]
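As a quick cross-check of Eq. (75), the kinematic factor can be reproduced from two-body phase space alone. A minimal Python sketch follows, assuming PDG meson masses; the form-factor ratio 0.83 and the combined factor 1.04 are the values quoted above, while the mixing angle \(\delta\) inferred in the code is only an illustrative consistency value, not the fitted CCQM parameter of [447].

```
# Numerical cross-check of Eq. (75). PDG masses are assumed; 0.83 and 1.04
# are the values quoted in the text. delta is inferred here only for
# illustration -- it is a fitted CCQM parameter, not reproduced from [447].
import math

m_Bs, m_JPsi = 5.3669, 3.0969   # GeV (assumed PDG values)
m_eta, m_etap = 0.5478, 0.9578  # GeV

def two_body_momentum(M, m1, m2):
    """|q| of either daughter in the rest frame of the decaying particle."""
    lam = (M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)  # Kallen function
    return math.sqrt(lam) / (2.0 * M)

q_eta = two_body_momentum(m_Bs, m_JPsi, m_eta)
q_etap = two_body_momentum(m_Bs, m_JPsi, m_etap)
mom_ratio = (q_etap / q_eta) ** 3

# Solve (|q_eta'|^3 / |q_eta|^3) * tan^2(delta) = 1.04 for delta:
tan2_delta = 1.04 / mom_ratio
delta = math.degrees(math.atan(math.sqrt(tan2_delta)))

ff_ratio_sq = 0.83              # (F_+^{Bs eta'} / F_+^{Bs eta})^2 from the text
print(f"|q_eta'|^3/|q_eta|^3 = {mom_ratio:.3f}, delta ~ {delta:.1f} deg")
print(f"R_theor ~ {mom_ratio * tan2_delta * ff_ratio_sq:.2f}")   # ~0.86
```

With these inputs the kinematic factor multiplies the form-factor contribution to give \(R\approx 0.86\), matching Eq. (75).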
The overall precision of the results is not fully satisfactory and further efforts are needed to investigate the discrepancies. Yet, besides the results themselves, this subsection was also meant to illustrate the methodology adopted in the CCQM for determining the model inputs.
#### Decay \(B\to D_{(s)}^{(*)}h\), \((h=\pi,\rho)\)
The second process we want to review is the \(B_{d}\) decay to a \(D\) meson and a light particle [451]. The interest here comes from the observation, confirmed by other authors too, that the predictions systematically overshoot the data, which might indicate NP.
The process is described at leading order in the naive factorization framework. These decays correspond to a rich set of spin states and diagram topologies, as summarized in Fig. 13 and Table 6. One labels by \(D_{1,2,3}\) the diagram structure (color favored, color suppressed and their interference), where within each group various spin configurations are present (labeled \(A,\ldots,D\)).
Using the leading order operators
\[Q_{1}=[(\bar{q}_{1})_{i_{1}}(q_{2})_{i_{2}}]_{V-A}[(\bar{q}_{3})_{i_{2}}(q_{4})_{i_{1}}]_{V-A},\quad Q_{2}=[(\bar{q}_{1})_{i_{1}}(q_{2})_{i_{1}}]_{V-A}[(\bar{q}_{3})_{i_{2}}(q_{4})_{i_{2}}]_{V-A}, \tag{76}\]
where \(i_{j}\) are color indices and \([\bar{q}_{1}q_{2}]_{V-A}=\bar{q}_{1}\gamma^{\mu}(1-\gamma^{5})q_{2}\), one can derive the form factors. For the scalar-to-scalar transition they are given by (72); for the scalar-to-vector transition the expression reads
\[\langle V_{q_{3},q_{2}}(p_{2},\epsilon)|\bar{q}_{1}O^{\mu}q_{2}|B_{q_{3},q_{1}}(p_{1})\rangle= \tag{77}\] \[\qquad=\frac{\epsilon_{\nu}^{\dagger}}{m_{B}+m_{V}}\left[-g^{\mu\nu}P\cdot qA_{0}(q^{2})+P^{\mu}P^{\nu}A_{+}(q^{2})+q^{\mu}P^{\nu}A_{-}(q^{2})+\epsilon^{\mu\nu\alpha\beta}P_{\alpha}q_{\beta}V(q^{2})\right].\]
\begin{table}
\begin{tabular}{c c c} \hline \hline Observable & CCQM & Exp. [23] \\ \hline \(\Gamma(\eta\to\gamma\gamma)\) & 0.380 keV & \(0.515\pm 0.020\) keV \\ \(\Gamma(\eta^{\prime}\to\gamma\gamma)\) & 3.74 keV & \(4.34\pm 0.14\) keV \\ \(\Gamma(\eta^{\prime}\to\omega\gamma)\) & 9.49 keV & \(4.74\pm 0.15\) keV \\ \(\Gamma(\rho\to\eta\gamma)\) & 53.07 keV & \(44.22\pm 0.24\) keV \\ \(\Gamma(\omega\to\eta\gamma)\) & 6.21 keV & \(3.91\pm 0.06\) keV \\ \(\Gamma(\varphi\to\eta\gamma)\) & 42.59 keV & \(55.28\pm 0.17\) keV \\ \(\Gamma(\varphi\to\eta^{\prime}\gamma)\) & 0.276 keV & \(0.26\pm 0.001\) keV \\ \(\mathcal{B}(B_{d}\to J/\Psi+\eta)\) & \(16.5\times 10^{-6}\) & \((10.8\pm 2.3)\times 10^{-6}\) \\ \(\mathcal{B}(B_{d}\to J/\Psi+\eta^{\prime})\) & \(12.2\times 10^{-6}\) & \((7.6\pm 2.4)\times 10^{-6}\) \\ \(\mathcal{B}(B_{s}\to J/\Psi+\eta)\) & \(4.67\times 10^{-4}\) & \((4.0\pm 0.7)\times 10^{-4}\) \\ \(\mathcal{B}(B_{s}\to J/\Psi+\eta^{\prime})\) & \(4.04\times 10^{-4}\) & \((3.3\pm 0.4)\times 10^{-4}\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Decay widths and branching fractions for various processes with \(\eta\) and \(\eta^{\prime}\) mesons as predicted by the CCQM. Table contains a subset of data originally published in [447].
\begin{table}
\begin{tabular}{c l l l} \hline \hline Spin structure & \(D_{1}\) diagram & \(D_{2}\) diagram & \(D_{3}\) diagram \\ \hline (A) \(PS\to PS+PS\) & \(B^{0}\to D^{-}+\pi^{+}\), \(\underline{B^{0}\to\pi^{-}+D^{+}}\), \(\underline{B^{0}\to\pi^{-}+D_{s}^{+}}\), \(\underline{B^{+}\to\pi^{0}+D_{s}^{+}}\) & & \\ \hline (B) \(PS\to PS+V\) & \(B^{0}\to D^{-}+\rho^{+}\), \(\underline{B^{0}\to\pi^{-}+D_{s}^{*+}}\), \(\underline{B^{+}\to\pi^{0}+D_{s}^{*+}}\) & \(\underline{B^{0}\to\pi^{0}+\bar{D}^{*0}}\) & \(\underline{B^{+}\to\bar{D}^{0}+\rho^{+}}\) \\ \hline (C) \(PS\to V+PS\) & \(B^{0}\to D^{*-}+\pi^{+}\), \(\underline{B^{0}\to\rho^{-}+D_{s}^{+}}\) & \(\underline{B^{0}\to\rho^{0}+\bar{D}^{0}}\) & \(\underline{B^{+}\to\bar{D}^{*0}+\pi^{+}}\) \\ \hline (D) \(PS\to V+V\) & \(B^{0}\to D^{*-}+\rho^{+}\), \(\underline{B^{0}\to\rho^{-}+D_{s}^{*+}}\), \(\underline{B^{+}\to\rho^{0}+D_{s}^{*+}}\) & & \\ \hline \hline \end{tabular}
\end{table}
Table 6: Studied decays arranged with respect to the spin structure and diagram topology. Underlined parts correspond to the transition of the spectator quark (in the case of \(D_{3}\), to the first diagram of Fig. 13(c)). Table was originally published in [451].
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Process & Diagram & \(\mathcal{B}_{\rm CCQM}/\rm E\) & \(\mathcal{B}_{\rm PDG}/\rm E\) & E \\ \hline \multicolumn{5}{l}{[Table body lost in extraction; only the last row survives:]} \\ \(\ldots\rho^{+}\) & \(D_{3}\) & \(11.75\pm 0.59\) & \(9.8\pm 1.7\) & \(10^{-3}\) \\ \hline \hline \end{tabular}
\end{table}
Table 7: CCQM branching fractions compared to data. Table was originally published in [451].
The obtained form factors are shown in Fig. 14; the hadron-specific and universal CCQM parameters used in their prediction are summarized in Table II of [451]. The corresponding decay-width formulas (see [451], page 3) then allow one to obtain the results summarized in Tab. 7. The level of agreement between the model and the data can be visually estimated from Fig. 15. Generally speaking, the description of the data is not satisfactory. Agreement within errors is reached for measurements where only limits are given and for a few other cases. This might be expected for a subset of the processes, since the factorization assumption is not supposed to hold in the scenario where the spectator quark enters the light meson, see [435]. Yet one sees an overall overestimation, including decays with the spectator quark entering the \(D\) meson. This observation joins similar ones made by other authors [452, 453, 454, 42], i.e., it is seen across various approaches, which naturally raises the question about NP. The authors of [454] talk about a "novel puzzle" and NP scenarios are advanced to explain it in [454, 42].
### Other CCQM results on nonleptonic \(B\) decays.
The CCQM was also applied to other hadronic decay processes of \(B\) mesons. Skipping older publications [455, 250] with an earlier version of the model, we can mention again the generally oriented text [251], where the decay widths of \(B_{s}\) going to \(D_{s}^{-}+D_{s}^{(*)+}\), \(D_{s}^{*-}+D_{s}^{(*)+}\) and \(J/\Psi+\Phi\) are computed. They are determined within the effective Hamiltonian approach using the helicity formalism from the CCQM-predicted form factors. The numbers are in fair agreement with experimental measurements. The same results are reviewed in [252], which, in addition, treats the exotic state \(X(3872)\) as a tetraquark and evaluates selected branching fractions of it.
The work [456] deals with double-heavy \(B_{c}\) particles and their decays to charmonia and various
Figure 14: Transition form factors as predicted by the CCQM. Figures were originally published in [451].
Figure 13: \(B\) decays to two hadrons: color favored \(D_{1}\) (a), color suppressed \(D_{2}\) (b) and their interference \(D_{3}\) (c). Figures were originally published in [451].
\(D\) mesons. Two diagrams contribute at leading order: in one the \(B_{c}\) spectator quark \(\bar{c}\) goes to the charmonium state, in the other it forms the \(D\) meson. One thus needs to evaluate the form factors of six transitions \(B_{c}\to D,D_{s},\eta_{c},D^{*},D^{*}_{s},J/\Psi\); their behavior is shown in Fig. 2 of the work and their values at zero momentum transfer are also presented. Next, helicity amplitudes are constructed and branching fractions calculated for 8 processes in total, \(B_{c}\to\eta_{c}+D^{(*)}_{(s)}\) and \(B_{c}\to J/\Psi+D^{(*)}_{(s)}\) (all combinations of brackets). Comparison with the experiment is based on the branching fraction ratios \({\cal R}(D^{+}_{s}/\pi^{+})\), \({\cal R}(D^{*+}_{s}/\pi^{+})\), \({\cal R}(D^{+}_{s}/D^{*}_{s})\) and also \(\Gamma_{++}/\Gamma\) measured by ATLAS [457] and LHCb [261]. Here
\[{\cal R}(A/B)=\frac{{\cal B}(B^{+}_{c}\to J/\Psi A)}{{\cal B}(B^{+}_{c}\to J/ \Psi B)} \tag{78}\]
and \(\Gamma_{++}/\Gamma\) is the transverse polarization fraction in the \(B^{+}_{c}\to J/\Psi+D^{*+}_{s}\) decay. The results are presented in Tab. VIII of [456], with no significant deviations from the SM. Yet, as two different sets of Wilson coefficients were investigated, it turned out that the results are quite sensitive to their choice.
Similar processes are addressed in [257], however with \(\pi\) or \(K\) in the final state instead of \(D\). Consequently, only one diagram contributes, the one corresponding to the transition to charmonium, since all other \(\pi/K\) production diagrams from \(B_{c}\) are of higher order. Also the semileptonic mode to \(J/\Psi\mu\nu_{\mu}\) is investigated so as to define the observables \({\cal R}(\pi^{+}/\mu^{+})\), \({\cal R}(K^{+}/\pi^{+})\), \({\cal R}(J/\Psi)\) and \({\cal R}(\eta_{c})\), see (42), (78). With the CCQM transition form factors identical to those mentioned previously, one gets in total eight decay widths \(B^{+}_{c}\to\eta_{c}+h\), \(B^{+}_{c}\to J/\Psi+h\), \(h\in\{\pi^{+},\rho^{+},K^{+},K^{*+}\}\) (Tab. 3 of the publication) and branching fraction ratios which can be compared to the LHCb numbers (Tab. 5 of [257]) and also to other theoretical works. The ratios are in agreement with measurements except for \({\cal R}(J/\Psi)\), which deviates by more than \(2\sigma\).
Let us, at last, mention the paper [458] dedicated to the vector particles \(B^{*}\) and \(B^{*}_{s}\) and their transitions to \(B_{(s)}\gamma\) and \(D^{*}_{(s)}+V\), \(V\in\{\rho,K^{*},D^{*}_{s}\}\). The radiative deexcitation processes use the formalism presented in Sec. 2.4 to describe the decay: a photon can be radiated from one of the valence quarks or from the non-local quark-hadron vertex. In the latter case, however, it can be shown that the contribution vanishes due to the anomalous nature of the \(V\to P\gamma\) process, and so the calculation is simplified. The results on the decay widths of \(B^{*+}\), \(B^{*0}\) and \(B^{*0}_{s}\), presented in Tab. V of the work, depend on the radiative decay constants of the particles given in Tab. IV. For what concerns the decays to two vector particles, the computation proceeds in the usual way, where the CCQM invariant form factors are combined into helicity amplitudes to give branching fractions. Due to the small cross sections of the studied processes, experimental numbers are not available, and so the CCQM results are compared to other theoretical approaches (Tab. XII of [458]).
Figure 15: The comparison of CCQM predictions and data. Processes are numbered as in Tab. 7. Figure was originally published in [451].
## Summary and outlook
We provided in this text a review of the results of the covariant confined quark model for \(B\) decays, presented together with a survey of selected experimental and theoretical results. Unlike for the other physics models and their achievements mentioned here, we explained in depth the principles of the CCQM (Sec. 2) and presented computational details for chosen processes, namely \(B_{s}\to\ell^{+}\ell^{-}\gamma\) (Sec. 3.2), \(B_{s}\to\phi\ell^{+}\ell^{-}\gamma\) (Sec. 4.2), \(B\to D_{(s)}^{(*)}h\), \((h=\pi,\rho)\) and \(B_{s}\to J/\psi\eta^{(\prime)}\) (Sec. 5.2). For the sake of the review the decays were divided into three groups: leptonic, semileptonic and non-leptonic. Although somewhat arbitrary, this division allowed us to demonstrate the application of the CCQM in various situations. Generally speaking, despite some studies on NP contributions, the CCQM results do not provide strong indications for NP and suggest that further efforts within the SM may be needed.
One should also recall that we presented only a small selection of what the CCQM can provide: it was, in many papers, successfully applied to describe baryons, tetraquarks and other (than \(B\)) mesonic states. The quality of the CCQM is also confirmed by the interest of other authors. Narrowing the large number of citations to those related to \(B\) decays and referring to the recent version of the model (2010 and later, without conference papers), one sees that the model was noticed by large collaborations (LHCb [58, 459], ATLAS [460]).
The ongoing physics programs at existing and future high-luminosity machines imply that the CCQM may remain an appropriate theoretical tool, contributing to unraveling the questions brought by experiments about the presence of NP or the nature of various (exotic) states. Together with other approaches, it may help to understand the model-related uncertainties beyond which new-physics observations can be claimed.
## Acknowledgement
S. D., A. Z. D. and A. L. acknowledge the support of the Slovak Grant Agency for Sciences VEGA, grant no. 2/0105/21.
|
2309.16308 | Audio Visual Speaker Localization from EgoCentric Views | The use of audio and visual modality for speaker localization has been well
studied in the literature by exploiting their complementary characteristics.
However, most previous works employ the setting of static sensors mounted at
fixed positions. Unlike them, in this work, we explore the ego-centric setting,
where the heterogeneous sensors are embodied and could be moving with a human
to facilitate speaker localization. Compared to the static scenario, the
ego-centric setting is more realistic for smart-home applications e.g., a
service robot. However, this also brings new challenges such as blurred images,
frequent speaker disappearance from the field of view of the wearer, and
occlusions. In this paper, we study egocentric audio-visual speaker DOA
estimation and deal with the challenges mentioned above. Specifically, we
propose a transformer-based audio-visual fusion method to estimate the relative
DOA of the speaker to the wearer, and design a training strategy to mitigate
the problem of the speaker disappearing from the camera's view. We also develop
a new dataset for simulating the out-of-view scenarios, by creating a scene
with a camera wearer walking around while a speaker is moving at the same time.
The experimental results show that our proposed method offers promising
performance in this new dataset in terms of tracking accuracy. Finally, we
adapt the proposed method for the multi-speaker scenario. Experiments on
EasyCom show the effectiveness of the proposed model for multiple speakers in
real scenarios, which achieves state-of-the-art results in the sphere active
speaker detection task and the wearer activity prediction task. The simulated
dataset and related code are available at
https://github.com/KawhiZhao/Egocentric-Audio-Visual-Speaker-Localization. | Jinzheng Zhao, Yong Xu, Xinyuan Qian, Wenwu Wang | 2023-09-28T10:01:08Z | http://arxiv.org/abs/2309.16308v1 | # Audio Visual Speaker Localization
###### Abstract
The use of audio and visual modality for speaker localization has been well studied in the literature by exploiting their complementary characteristics. However, most previous works employ the setting of static sensors mounted at fixed positions. Unlike them, in this work, we explore the ego-centric setting, where the heterogeneous sensors are embodied and could be moving with a human to facilitate speaker localization. Compared to the static scenario, the ego-centric setting is more realistic for smart-home applications e.g., a service robot. However, this also brings new challenges such as blurred images, frequent speaker disappearance from the field of view of the wearer, and occlusions. In this paper, we study egocentric audio-visual speaker Direction of Arrival (DOA) estimation and deal with the challenges mentioned above. Specifically, we propose a transformer-based audio-visual fusion method to estimate the relative DOA of the speaker to the wearer, and design a training strategy to mitigate the problem of the speaker disappearing from the camera's view. We also develop a new dataset for simulating the out-of-view scenarios, by creating a scene with a camera wearer walking around while a speaker is moving at the same time. The experimental results show that our proposed method offers promising performance in this new dataset in terms of tracking accuracy. Finally, we adapt the proposed method for the multi-speaker scenario. Experiments on EasyCom show the effectiveness of the proposed model for multiple speakers in real scenarios, which achieves state-of-the-art results in the sphere active speaker detection task and the wearer activity prediction task. The simulated dataset and related code are available at [https://github.com/KawhiZhao/Egocentric-Audio-Visual-Speaker-Localization](https://github.com/KawhiZhao/Egocentric-Audio-Visual-Speaker-Localization).
Audio Visual Speaker Localization, Egocentric Perception, Audio Visual Fusion
## I Introduction
Vision and hearing are two important modalities for humans to perceive the world. Specifically, vision provides an informative signal, from which we can localize objects when they are visible. Hearing can assist the visual modality to improve localization robustness. The two modalities can serve as complementary signals for joint localization. For instance, if the audio signal is contaminated by background noise and reverberation, or objects are silent, the visual signal can be used. On the contrary, if objects move outside the camera's Field-of-View (FOV) or are occluded, audio can be used.
The aim of Audio-visual Speaker Localization (AVSL) is to find the positions (either in 3D Cartesian coordinates or DOA) of the speakers using both audio and visual modalities, captured by microphones (or microphone arrays) and cameras, respectively. AVSL systems can be used for speaker monitoring [1], speech enhancement [2] and speech separation [3]. For the visual modality, off-the-shelf advanced algorithms, such as object detectors [4], head detectors [5] and face detectors [6], can be used to localize the speakers on the image plane. Traditional methods like the color histogram [7] find the target position by comparing the similarity between a reference image and sub-regions of the whole image. They are often used as complementary measurements when the face detector fails, e.g., when the speakers are not facing the camera or when the illumination changes [8]. For the audio modality, sound source localization algorithms, either parametric- or learning-based, can be used to estimate the speaker's position. Popular examples of the parametric methods are the Time Difference of Arrival (TDOA)-based methods [9, 10, 11, 12], which estimate the time delay by maximizing the TDOA likelihood. The TDOA-based methods are computationally efficient but have poor generalization ability across different acoustic scenarios [10]. The learning-based methods have gained significant attention in recent years, primarily due to the widespread adoption of deep learning techniques and the accessibility of GPU resources. Different from the parametric methods, which rely on statistical modeling, learning-based methods [13, 14, 15, 16] find a direct mapping between the input audio features and the speaker locations through training with large-scale annotated data. Thus, they generalize well to sounds with room reverberation and background noise, and are the focus of this paper.
EgoCOM [22]. The task of EgoTracks in Ego4D is to localize the speakers in the image plane. In this case, when a speaker disappears from the view, the track is lost. In our task, we calculate the relative DOA1 of the speaker with respect to the wearer, irrespective of whether the speaker is visually observable or not. To this end, we propose a transformer-based method and, considering that the visual modality is only useful within the FOV while the audio modality is useful over the whole sphere around the wearer, we design a training strategy which separates the contributions from vision and audio.
Footnote 1: We calculate DOA in the world coordinates instead of in the image plane.
To validate the effectiveness of our proposals, we propose a new simulated dataset, Egocentric Audio Visual Speaker Localization (Ego-AVSL), which is built upon the Unity game engine. Specifically, the scene includes two people moving freely in an indoor environment, with one being the speaker and the other wearing a camera and a microphone array. Examples of this scenario can be found in Fig. 2. The main reason why we create a simulated dataset is that it can be easily extended to a large quantity without expensive manual annotation. Another reason is the privacy issue [23]. Laws related to information security, such as the General Data Protection Regulation (GDPR), have been established in Europe, stating that people's visual image data cannot be used without authorization. In addition, experiments in [23] show that the use of additional simulated data for training can improve the performance of the model on real datasets.
The works most related to ours are [24, 25], where the EasyCom dataset was created with multiple people sitting around a table, talking in turn and ordering meals. The wearer is equipped with a camera to record videos and Google AR glasses to obtain multi-channel audio. However, in this dataset, the subjects are mostly static and the view of the wearer remains largely unchanged. Thus, the subjects are mostly visible in the wearer's view, which is not always the case in egocentric speaker tracking. Even so, EasyCom still represents a real scenario which can be used for evaluation.
Our contributions are summarized as follows:
1. We propose an audio-visual fusion model for egocentric DOA estimation and a training strategy which separately processes the audio-only and audio-visual data to tackle challenging scenarios e.g., the speaker moves outside the camera's FOV.
2. We develop a new dataset, namely Ego-AVSL, to simulate the out-of-view scenarios and facilitate the study of egocentric AVSL.
3. Extensive experiments are conducted where the results demonstrate that the audio modality can serve as a complementary signal when the visual modality is missing. Using visual modality can improve the model performance when the speaker is visible.
4. Further experimental results on the EasyCom dataset show the effectiveness of our model for multiple speaker localization in real scenarios and the pretrained model on the simulated dataset can provide a good initialization.
The remainder of the paper is organized as follows. Section II reviews the related works and Section III defines the problem and analyses the characters of ego-centric scenarios. Section IV describes the architecture of our proposed model. In Section V, we introduce the generation process of the proposed dataset and highlight its distinctive features compared to existing datasets in the literature. Section VI shows experimental results and the corresponding visualizations. Section VII demonstrates the model's performance in real scenarios when multiple speakers exist. The last section concludes the paper.
## II Related work
In this chapter, we summarize related works and highlight their differences from ours.
### _Ego-centric Study_
Ego-centric study has thrived in the past few years as it describes human activities from a first-person perspective, which mimics the way how humans perceive the world. EPIC-KITCHENS [21] is a large-scale dataset that collects first-person video recordings of kitchen activities, which can be used for ego-centric segmentation, object tracking, action recognition, and detection. Ego4D [20] is the biggest ego-centric dataset till now, containing over 3,000 hours recorded by participants around the world. This dataset supports the exploration of episodic memory, social understanding, and forecasting. Unlike the previous works, we focus on the egocentric AVSL, which explores the complementary characteristics of audio and visual data for DOA estimation. Egocentric speaker localization plays a pivotal role in audio-visual navigation tasks, which employ an agent to locate a stationary sound source that emits continuous sounds.
### _Sound Source Localization and Speaker Localization_
The challenges of Detection and Classification of Acoustic Scenes and Events (DCASE) and Learning 3D Audio Source (L3DAS) have drawn much attention in sound source localization where an increasing number of learning-based methods [26] have recently appeared. In [13], a neural network based on MLP and CNN has been proposed to determine DOA. In [27], an end-to-end neural network has been designed, which takes raw waveform as input to predict the 3D coordinates. CRNN is introduced in [28] to predict the sound event and positions concurrently. A two-stage strategy is proposed in [16], where the sound event is predicted first to assist the location estimation. Instead of just using audio modality, in the task of sound source localization, audio and visual modalities can jointly work to infer the positions of sounding objects. As indicated in [29], fusion methods can be divided into early, intermediate, and late strategies. In particular, early fusion directly combines the extracted features. Intermediate fusion integrates the features of the two modalities and allows multi-modality interaction. Late fusion integrates the two modalities at the decision level.
Speaker localization can be seen as a subtask of sound source localization. In [30], the early concatenation of GCC-PHAT and simulated visual features is input to an MLP to obtain the DOA. A similar idea is also applied in [31], which uses early fusion to integrate the STFT and a visual Gaussian distribution. In [32], the audio-visual representation is learned through audio-visual correspondence and contrastive learning. The learned embedding can then be used for downstream tasks such as audio-visual object detection and speech separation. In [33], the audio and visual modalities are intermediately fused with dynamic weights, indicating the changing importance of the two modalities at different time steps. In [34], audio and visual streams are fused intermediately; they are input to a variational autoencoder to generate correlation functions, and beamforming is then used to generate the acoustic map. In [35], a new dataset for audio-visual speaker DOA estimation is proposed, together with a cross-modal attention mechanism for intermediate fusion. Intermediate fusion is also used in our method, as it can exploit the complementary information of the audio and visual modalities.
In this paper, we focus on egocentric speaker localization, which leverages sound source localization and speaker localization methods with audio-visual fusion, and try to mitigate the problems caused by egocentric scenarios such as frequent speaker disappearance.
## III Problem Formulation
### _Problem Formulation_
As an example, a speaker and a wearer are in a room, each with their own orientation and position, and walk randomly. Given an audio clip and a video frame, the purpose is to calculate the relative DOA of the speaker with respect to the wearer. This scenario can be seen in Fig. 2. The positive direction is set as the orientation of the wearer (the blue arrows in Fig. 2). The coordinates of the speaker are converted into the coordinate system centered on the wearer's position. The ground truth DOA is calculated from the tangent of the converted coordinates of the speaker. Given the position of the speaker \((x_{1},y_{1},z_{1},r_{1})\) and the wearer \((x_{2},y_{2},z_{2},r_{2})\), the relative DOA is calculated as:
\[\theta=\arctan\frac{(x_{2}\sin r_{2}+z_{2}\cos r_{2})-(x_{1}\sin r_{2}+z_{1} \cos r_{2})}{(x_{2}\cos r_{2}-z_{2}\sin r_{2})-(x_{1}\cos r_{2}-z_{1}\sin r_{2 })} \tag{1}\]
where \(\theta\) denotes the azimuth, \((x,y,z)\) is the position and \(r\) is the rotation angle, which can be obtained by the quaternion coordinates.
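For concreteness, a direct transcription of Eq. (1) is sketched below. The use of `atan2` to resolve the quadrant and the degree/modulo convention are our choices; note that only the wearer's rotation \(r_{2}\) enters the formula.

```
# A direct transcription of Eq. (1), assuming positions in the x-z ground
# plane and yaw angles r in radians; only the wearer's yaw r2 enters.
import math

def relative_doa(speaker, wearer):
    """Azimuth of the speaker in the wearer-centred frame, Eq. (1)."""
    x1, _, z1, r1 = speaker   # (x, y, z, r); r1 does not appear in Eq. (1)
    x2, _, z2, r2 = wearer
    num = (x2 * math.sin(r2) + z2 * math.cos(r2)) - (x1 * math.sin(r2) + z1 * math.cos(r2))
    den = (x2 * math.cos(r2) - z2 * math.sin(r2)) - (x1 * math.cos(r2) - z1 * math.sin(r2))
    return math.degrees(math.atan2(num, den)) % 360.0  # atan2 resolves the quadrant

# Illustrative call: speaker 2 m away, wearer at the origin with yaw 0.
print(relative_doa((0.0, 0.0, 2.0, 0.0), (0.0, 0.0, 0.0, 0.0)))
```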
### _Difference between Egocentric Scenario and General Scenario of Audio Visual Speaker Tracking_
We discuss the difference and the challenges in terms of audio modality and visual modality, respectively.
#### Iii-B1 Audio Modality
In the ego-centric scenario, the wearer is also moving, which introduces a Doppler effect that changes the frequency of the received audio. The change of frequency is calculated as follows:
\[\hat{f}=f\frac{c+v_{r}}{c-v_{s}} \tag{2}\]
where \(\hat{f}\) is the frequency of the received signal and \(f\) is the frequency of the emitted signal. \(v_{r}\) is the receiver's velocity and \(v_{s}\) is the source's velocity. \(c\) is the sound speed.
In the general speaker tracking scenario, only \(v_{s}\) affects the change of frequency, but in the egocentric speaker tracking scenario both \(v_{r}\) and \(v_{s}\) contribute to the Doppler effect.
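To gauge the magnitude of this effect, a back-of-the-envelope evaluation of Eq. (2) at walking speeds is sketched below, assuming \(c=343\,m/s\), a 1 kHz tone, and speeds of \(1.5\,m/s\) (the upper end of the velocities used in our simulation, Sec. V-C).

```
# Scale of the Doppler shift of Eq. (2) at walking speeds.
c = 343.0   # assumed speed of sound in air, m/s
f = 1000.0  # a representative speech frequency, Hz

for v_r, v_s, label in [(0.0, 1.5, "speaker only"), (1.5, 1.5, "both moving")]:
    f_hat = f * (c + v_r) / (c - v_s)
    print(f"{label:>12}: shift = {f_hat - f:+.2f} Hz ({(f_hat / f - 1) * 100:.2f}%)")
```

Even in the worst case the relative shift stays below one percent, consistent with the remark at the end of this section that the effect is negligible at low speeds.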
To illustrate the difference in Doppler effects, we show the time lag estimation of GCC-PHAT in different scenarios, since the change of frequency affects the phase of the received audio and GCC-PHAT relies on the phase difference to estimate the time lag. The simulated scenario is shown in Fig. 2. The person in gray clothes is the wearer and the person in red clothes is the speaker. The wearer is equipped with a camera and a microphone array with two microphones, and is facing the speaker. The blue arrow denotes the velocity direction of the wearer and the orange arrow denotes the velocity direction of the speaker. Three scenarios are simulated:
* Both the wearer and the speaker are static.
* Only the speaker is moving with a constant velocity.
* Both the wearer and the speaker are moving with constant velocities.
We record the received audio when the speaker and the wearer are at the same preset points. Although the audio is recorded at the same points in all three scenarios, the difference is that in the second scenario the speaker has a velocity, while in the last scenario both the wearer and the speaker have velocities. We ensure that the quaternions of the wearer and the speaker are the same in the three scenarios to
Fig. 1: Snapshots of AV16.3 dataset (the up row) and our simulated Ego-AVSL dataset (the bottom row) with the interval of one second.
Fig. 2: The scenarios of ego-centric speaker localization. The blue arrow of the wearers is the positive north direction and the red arrow is the positive east direction. The blue arrow is also the velocity direction of the wearer. The orange arrow is the velocity direction of the speaker.
remove the influence of the facing direction. We use audio whose length is equivalent to two visual frames and assume that within such a short period the wearer and the speaker are static.
The simulation results for the three scenarios are shown in Fig. 3, Fig. 4 and Fig. 5, respectively. The simulation of the first scenario provides the ideal time lag estimation, as there is no Doppler effect. The estimated time delay is 0, since the wearer is facing the speaker and there is no delay between the two microphones receiving the audio. If the speaker is moving, the frequency (magnitude and phase) of the received audio is changed, which can affect the time lag estimated by GCC-PHAT, as GCC-PHAT relies on the phase relationship of the received binaural audio; nevertheless, the estimated time lag is the same as that in the first scenario. When the wearer is moving as well, the estimation (0.0391) deviates from 0, which shows that the Doppler effect is stronger and affects the time lag estimation.
For faster-moving receivers, such as drones [36] and underwater vehicles [37], the higher speeds produce a more pronounced Doppler effect.
#### Iii-B2 Visual Modality
The difference from general tracking scenarios in the visual modality is more straightforward: the speaker disappears from view more frequently. We show sequences of the AV16.3 dataset [38] and our simulated Ego-AVSL dataset in Fig. 1. In the AV16.3 dataset, the camera is fixed and the speaker is in view in most cases, while in the Ego-AVSL dataset, due to the movement of the wearer, the speaker is out of view at some moments, which poses a challenge to data fusion. In [30], early fusion is used and the features of the audio and visual modalities are flattened and concatenated. However, this kind of fusion may degrade the model performance, as the visual image does not contain the speaker's position information when the speaker disappears.
In this paper, we mainly focus on mitigating the problem in the visual modality. Although the Doppler effect is stronger than in general tracking scenarios, it is still negligible given the low speed of movement. The out-of-view problem in the visual modality, however, poses challenges for audio-visual fusion.
## IV Proposed Method
In this part, we start by introducing the model which can mitigate the out-of-view problem. Then we introduce the EMD loss which considers the continuous ordinal relationships between DOA angles and is proven to be superior to Cross Entropy (CE) loss. Finally, we discuss the audio and visual features extracted to represent the spatial information.
### _Model_
As discussed in Section III-B2, in the egocentric scenario the speaker moves out of the camera view from time to time. Early fusion of audio and visual features would bring in redundant visual information when the speaker is not in the view. Besides, it is not robust against temporal misalignment and outliers [39]. Late fusion makes the prediction unreliable if the quality of one modality is bad. To this end, we use intermediate fusion with separate training to integrate audio and visual features, aiming to extract useful information and discard redundant parts. The architecture of the proposed model is shown in Fig. 6. The audio and visual features go through different encoders and are fused through a cross-attention module. We use a Transformer encoder as the backbone. A \([CLS]_{a}\) token is added at the beginning of the audio feature and another \([CLS]_{v}\) token is added at the beginning of the visual feature, aiming to learn the localization information from the audio and visual modality, respectively. The features are added with the position encoding to retain positional information. The output tokens from the encoders are concatenated and go through the cross-attention module, which integrates information from the audio and visual modalities. The output of the cross-attention module is concatenated as the fused feature and a classifier is used to obtain the DOA posterior based on the fused audio-visual feature. The DOA estimation process is as follows:
\[\mathbf{z}_{a}^{0}=[[CLS]_{a};\mathbf{a}_{1}A;...;\mathbf{a}_{L_{a}}A]+ \mathbf{a}^{pos} \tag{3}\]
\[\mathbf{z}_{v}^{0}=[[CLS]_{v};\mathbf{v}_{1}V;...;\mathbf{v}_{L_{v}}V]+ \mathbf{v}^{pos} \tag{4}\]
\[\hat{\mathbf{z}}_{m}^{d}=\mathrm{LN}\left(\mathrm{MA}\left(\mathbf{z}_{m}^{d -1}\right)\right)+\mathbf{z}_{m}^{d-1},\quad d=1\ldots D,\quad m=a,v \tag{5}\]
\[\mathbf{z}_{m}^{d}=\mathrm{LN}\left(\mathrm{MLP}\left(\hat{\mathbf{z}}_{m}^{d }\right)\right)+\hat{\mathbf{z}}_{m}^{d},\quad d=1\ldots D,\quad m=a,v \tag{6}\]
\[\hat{p}=\mathrm{MLP}\left(\mathrm{MSA}\left(\mathbf{z}_{a}^{D}\oplus\mathbf{z}_{v}^{D}\right)\right) \tag{7}\]
where Eq. (3) and Eq. (4) are the input representations of the audio and visual modalities. Eq. (5) and Eq. (6) denote the multi-head self-attention and feed-forward processes. Eq. (7) predicts the DOA given the joint audio-visual features. \(\mathbf{a}_{1},...,\mathbf{a}_{L_{a}}\) denotes the audio feature with length \(L_{a}\) and \(\mathbf{v}_{1},...,\mathbf{v}_{L_{v}}\) denotes the visual feature with length \(L_{v}\). \(A\) and \(V\) denote the audio and visual embedding layers, respectively, which project the audio features and visual features into the same dimension. \(\mathbf{a}^{pos}\) and \(\mathbf{v}^{pos}\) represent the position encodings for the audio and visual modalities. \(a\) denotes the audio modality and \(v\) denotes the visual modality. \(\mathrm{LN}\) denotes layer normalization and \(\mathrm{MLP}\) denotes the feed-forward layer. \(\hat{p}\) denotes the DOA likelihood and \(\mathrm{MSA}\) denotes multi-head self-attention, which captures dependencies within a modality and is calculated as follows:
\[\mathrm{MSA}\left(\mathbf{z}\right)=[\mathbf{h}_{1};...;\mathbf{h}_{n}] \mathbf{W}_{o} \tag{8}\]
\[\mathbf{h}_{i}=\mathrm{softmax}\left(\frac{\mathbf{z}\mathbf{W}_{Q_{i}}\mathbf{W}_{K_{i}}^{T}\mathbf{z}^{T}}{\sqrt{d_{k}}}\right)\mathbf{z}\mathbf{W}_{V_{i}} \tag{9}\]
where Eq. (8) aggregates the outputs from multiple heads and Eq. (9) defines the attention calculation within each head. \(n\) is the number of heads. \(\mathbf{W}_{Q_{i}}\), \(\mathbf{W}_{K_{i}}\) and \(\mathbf{W}_{V_{i}}\) are learnable matrices for calculating the query, key and value. \(\mathbf{W}_{o}\) is the learnable output matrix and \(\mathrm{softmax}\) is used to normalize the attention matrix.
If the speaker is not in the camera view, adding visual information is not beneficial for localization. Thus we split each training batch into two parts: one part where the speaker is in the camera view and the other part where the speaker is out of the view. For the first part (\(I_{av}\)), both the audio and visual modality are useful for localization; they are passed through separate encoders to obtain audio and visual representations, which are fused for DOA prediction. For the other part (\(I_{ao}\)), only the audio information is used: the audio features go through the audio encoder and the final classifier to obtain the DOA without the multimodal fusion (denoted by the dotted line in Fig. 6).
\[\hat{p}=\mathrm{MLP}\left(\mathbf{z}_{a}^{D}\right) \tag{10}\]
This allows the model to leverage both audio and visual cues when available, while relying solely on audio when the speaker is not in view. Thus the redundant visual information will not affect the DOA prediction. The detailed process of DOA estimation is shown in Algorithm 1.
```
Input : a_1, ..., a_BatchSize; v_1, ..., v_BatchSize; AudioEncoder; VisualEncoder; I_ao = ∅; I_av = ∅
Output : p̂
for i ← 1 to BatchSize do
    if the speaker is not in the field of view in v_i then
        I_ao ← I_ao ∪ {i}
    else
        I_av ← I_av ∪ {i}
    end if
end for
p̂_ao ← MLP(AudioEncoder(a_{I_ao}))
p̂_av ← MLP(MSA(AudioEncoder(a_{I_av}), VisualEncoder(v_{I_av})))
return p̂ = p̂_ao ∪ p̂_av
```
**Algorithm 1**Egocentric Audio-Visual DOA Estimation
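A PyTorch-style sketch of the forward pass of Algorithm 1 is given below. The two encoders stand for the Transformer encoders of Fig. 6 and are assumed to return the encoded \([CLS]\) tokens; pooling the two attended tokens by averaging and sharing a single classifier head between the audio-only and audio-visual paths are simplifications we assume here for compactness.

```
# Sketch of Algorithm 1 / Fig. 6; encoder internals are abstracted, and
# `in_fov` is a per-sample boolean mask from the dataset annotation.
import torch
import torch.nn as nn

class EgoAVDoA(nn.Module):
    def __init__(self, audio_enc, visual_enc, h=128, n_doa=360, n_heads=4):
        super().__init__()
        self.audio_enc, self.visual_enc = audio_enc, visual_enc
        self.cross_attn = nn.MultiheadAttention(h, n_heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(h, h), nn.ReLU(), nn.Linear(h, n_doa))

    def forward(self, audio, video, in_fov):
        cls_a = self.audio_enc(audio)                      # (B, h) encoded [CLS]_a
        logits = cls_a.new_zeros(cls_a.size(0), self.head[-1].out_features)
        ao = ~in_fov
        if ao.any():                                       # Eq. (10): audio-only path
            logits[ao] = self.head(cls_a[ao])
        if in_fov.any():                                   # Eq. (7): audio-visual path
            cls_v = self.visual_enc(video[in_fov])         # (B_av, h) encoded [CLS]_v
            tok = torch.stack([cls_a[in_fov], cls_v], 1)   # (B_av, 2, h)
            fused, _ = self.cross_attn(tok, tok, tok)      # cross-modal attention
            logits[in_fov] = self.head(fused.mean(dim=1))  # token pooling: assumed
        return logits
```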
### _Learning Objective_
In [40], each DOA is treated as an independent class and the CE loss is used. However, in DOA prediction the angles are not independent but ordered, and the CE loss ignores the relationship between different angles. Thus, we employ the earth mover's distance (EMD) [41] loss, which has been used in speech quality assessment [42]. Recently, in [43], it has shown competitive effectiveness in sound source localization, even under the requirement of high localization resolution. Compared to the CE loss, the EMD loss \(\mathcal{L}_{\text{EMD}}\) maintains the consistency of adjacent angles and measures the discrepancy between two distributions, calculated as the squared error between the ground truth DOA distribution \(p\) and the predicted DOA distribution \(\hat{p}\). \(p\) is a Gaussian distribution centered on the ground truth DOA \(\theta\) with a predefined standard deviation \(\sigma\).
\[p =\mathcal{N}(\theta,\sigma^{2}) \tag{11}\] \[\mathcal{L}_{\text{EMD}} =\sum_{i=1}^{N}(p_{i}-\hat{p}_{i})^{2} \tag{12}\]
Fig. 6: The architecture of the proposed model. The input of audio encoder is binaural waveforms while the input of visual encoder is a sequence of flattened patches. Features from audio and visual modalities are extracted and fused intermediately.
where \(N\) represents the number of DOA intervals. In our experiments, we set the resolution of the angle as \(1^{\circ}\) thus \(N=360\).
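A sketch of this loss is shown below; wrapping the Gaussian target around \(360^{\circ}\) and normalizing it to a probability mass function are our assumptions (consistent with the cyclic treatment of angles in Eq. (14)), and \(\sigma=5^{\circ}\) is only an illustrative value for the predefined width.

```
# Sketch of Eqs. (11)-(12): Gaussian target over N = 360 one-degree bins.
import torch

def emd_loss(pred, theta, sigma=5.0, n_bins=360):
    """pred: (B, n_bins) predicted DOA distribution; theta: (B,) in degrees."""
    bins = torch.arange(n_bins, dtype=pred.dtype, device=pred.device)
    diff = (bins[None, :] - theta[:, None]) % n_bins
    diff = torch.minimum(diff, n_bins - diff)          # cyclic angular distance
    target = torch.exp(-0.5 * (diff / sigma) ** 2)     # Gaussian of Eq. (11)
    target = target / target.sum(dim=1, keepdim=True)  # normalise to a pmf
    return ((pred - target) ** 2).sum(dim=1).mean()    # squared error, Eq. (12)

pred = torch.randn(4, 360).softmax(dim=1)
print(emd_loss(pred, torch.tensor([10.0, 90.0, 200.0, 359.0])))
```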
### _Features_
#### Iv-C1 Audio Feature Extraction
For the audio feature, we calculate GCC-PHAT [9], which can be used to estimate the TDOA between microphones within an array. It is widely used in sound source localization due to its simplicity and effectiveness. Compared to GCC, GCC-PHAT is normalized by the magnitude and retains the phase information, which makes it more robust against room reverberation and noise. GCC-PHAT is calculated by computing the cross-correlation between microphone signals in the frequency domain and normalizing by the magnitudes:
\[\mathbf{a}(t,\tau)=\int_{-\infty}^{+\infty}\frac{STFT_{1}(t,f)STFT_{2}^{*}(t,f) }{|STFT_{1}(t,f)|\,|STFT_{2}^{*}(t,f)|}e^{j2\pi f\tau}df \tag{13}\]
where \(STFT_{1}\) and \(STFT_{2}\) represent the short-time Fourier transforms of the audio clips of the paired microphones \((1,2)\) in a microphone array, \(f\) is the frequency, \(\tau\) is the inter-microphone time lag, and \(*\) is the complex conjugate. \(\mathbf{a}\in\mathbb{R}^{L_{a}\times Z}\), with \(L_{a}\) denoting the time length and \(Z\) denoting the number of time-lag coefficients. GCC-PHAT is input as a sequence along the first dimension to the audio encoder.
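A numpy sketch of the per-frame GCC-PHAT feature of Eq. (13) for a two-microphone (binaural) signal is given below, using the STFT settings of Sec. VI-A (window 1024, hop 320, 96 time-lag coefficients); the Hann window is an assumption.

```
# Per-frame GCC-PHAT between two microphone channels, cf. Eq. (13).
import numpy as np

def gcc_phat(x1, x2, n_fft=1024, hop=320, n_lags=96):
    win = np.hanning(n_fft)
    feats = []
    for s in range(0, len(x1) - n_fft, hop):
        X1 = np.fft.rfft(win * x1[s:s + n_fft])
        X2 = np.fft.rfft(win * x2[s:s + n_fft])
        cross = X1 * np.conj(X2)
        cross /= np.abs(cross) + 1e-8            # PHAT normalisation
        cc = np.fft.fftshift(np.fft.irfft(cross, n=n_fft))
        centre = n_fft // 2                      # lag 0 after fftshift
        feats.append(cc[centre - n_lags // 2: centre + n_lags // 2])
    return np.stack(feats)                       # (L_a, n_lags), i.e. a in R^{L_a x Z}

fs = 48000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 440 * t)
print(gcc_phat(sig, np.roll(sig, 5)).shape)      # peak near a lag of 5 samples
```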
#### Iv-C2 Visual Feature Extraction
Previous works use face detectors [34], which detect face bounding boxes, or a Siamese network [44], which indicates the most probable speaker position by measuring the similarity between a reference image and the test image. However, these methods need to employ an external network for detection, which is computationally expensive. To lower the computational cost and make the speaker localization end-to-end, we use an image patch embedding similar to [45]. Specifically, the image \(\mathbf{p}\in\mathbb{R}^{3\times H\times W}\) is split into a sequence of flattened patches \(\mathbf{v}\in\mathbb{R}^{L_{v}\times h}\) with a convolution operator and patch resolution \(r\), where \(H\) and \(W\) are the height and the width of the image, respectively, \(L_{v}=\frac{H\times W}{r^{2}}\) is the number of image patches and \(h\) is the hidden dimension.
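The patch embedding reduces to a single strided convolution; a minimal sketch, assuming a \(224\times 224\) input, patch resolution \(r=16\) and hidden dimension \(h=128\) (the settings of Sec. VI-A), is:

```
# Patch embedding of Sec. IV-C2: one strided convolution gives L_v tokens.
import torch
import torch.nn as nn

patchify = nn.Conv2d(3, 128, kernel_size=16, stride=16)   # h = 128, r = 16
frame = torch.randn(1, 3, 224, 224)
tokens = patchify(frame).flatten(2).transpose(1, 2)        # (1, 196, 128)
print(tokens.shape)                                        # L_v = (224/16)^2 = 196
```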
## V Dataset
### _Difference with Previous Dataset_
There are several datasets used in audio-visual speaker localization and tracking such as AV16.3 [38], CAV3D [8], AVDIAR [46], SSLR [13] and TragicTalkers [47].
1. AV16.3 [38]: AV16.3 [38] is recorded with three cameras with a sampling rate of 25 fps and two 8-microphone small-sized circular arrays with a sampling rate of 16 kHz, which is mostly used in audio-visual speaker tracking.
2. CAV3D [8]: CAV3D is recorded by Co-located Audio-Visual sensors. It is recorded with an eight-microphone circular array with a sample rate of 96kHz and a camera with 25 fps. Compared to the AV16.3 dataset, scenarios in the CAV3D dataset are more challenging, as it has strong reverberation, contains more frames where speakers are occluded and speakers are outside the FOV.
3. AVDIAR [46]: AVDIAR is recorded with a dummy head containing two cameras and six microphones. It is used for speaker diarization and tracking.
4. SSLR [13]: SSLR is recorded by a robot that contains four microphones. As it lacks a visual modality, previous work [30] simulates visual features using 3D position ground truth with a camera projection model.
5. TragicTalkers [47]: This dataset was recorded through a camera rig containing 22 cameras and 38 microphones. There are two speakers in a studio and each speaker takes turns talking. It is used in audio-visual active speaker detection and localization.
However, they are either restricted in size or lack the visual modality. For example, in the AV16.3 dataset, the length of the annotated sequences is less than 2 hours, which is not suitable for training a deep learning model. In the SSLR dataset, there are no visual images. In addition, none of these datasets is recorded from an egocentric view, but from a fixed view. So we created a new simulated dataset for egocentric audio-visual speaker localization (Ego-AVSL). The comparison between our dataset and others is shown in TABLE I.
### _Dataset Generation_
Our dataset is developed based on Unity2, augmented by Resonance Audio3 to generate stereo audio. Resonance Audio simulates the way humans hear sound, including the head-related transfer function (HRTF), sound occlusions and reflections, and is used in AR and VR. The characters and motion models are from Mixamo4. The scenes are from AI2THOR [48] and the speech clips for the speaker are from Librispeech [49]. For the implementation, the speaker is augmented by **ResonanceAudioSource**, which takes single-channel audio as input and generates spatial binaural audio, so that the wearer can receive stereo audio. For each scene, the reverberation time is different. The speaker and the camera wearer are initialized with different velocities, orientations and walking periods. In a fixed time interval, both the wearer and the speaker have a certain probability of changing their orientation. For the labels, both the speaker and the wearer have 3D position and orientation annotations. The speaker also has a 2D bounding box annotation calculated by the Unity camera projection model from the 3D position. The wearer also wears a depth camera, which is at the same position as the RGB camera, to capture depth images. The audio receiver is mounted on the camera so that the devices of the three modalities are co-located, as shown in Fig. 9. Both the RGB camera and the depth camera have a FOV of 60\({}^{\circ}\). In this paper, we do not explore the depth modality for estimating the DOA, which is left for future work. The simulated dataset includes 330 sequences with a total length of more than 10 hours. The images are captured at 50 fps with a resolution of 1920 \(\times\) 1080, while the audio sampling rate is 48 kHz. Fig. 7 illustrates key scenes with the three involved modalities.
### _Settings of Dataset Simulation_
Both the speaker and the wearer have an initial velocity of \(0.5\sim 1.5\,m/s\). After every moving period, ranging from 2 to 4 seconds, they either stand still or start to walk towards a new direction, each with probability \(0.5\).
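A sketch of this motion schedule is given below; resampling the speed from the same \(0.5\sim 1.5\,m/s\) range whenever a new walking direction is chosen is our assumption.

```
# Sketch of the motion schedule of Sec. V-C: every 2-4 s an agent either
# stands still or picks a fresh heading, each with probability 0.5.
import random

def motion_schedule(duration=20.0):
    t, events = 0.0, []
    speed = random.uniform(0.5, 1.5)            # initial velocity, m/s
    heading = random.uniform(0.0, 360.0)        # walking direction, deg
    while t < duration:
        events.append((round(t, 2), round(speed, 2), round(heading, 1)))
        t += random.uniform(2.0, 4.0)           # moving period of 2-4 s
        if random.random() < 0.5:
            speed = 0.0                         # stand still
        else:
            speed = random.uniform(0.5, 1.5)    # walk towards a new direction
            heading = random.uniform(0.0, 360.0)
    return events

print(motion_schedule())
```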
### _Statistic Analysis of Dataset_
We show the ground truth distribution of DOA and distance between the speaker and the wearer in Fig. 8. In most cases, the distance is within 7 meters. However, in the development set, when the speaker is in the FOV, there are some cases where the speaker is far away from the wearer by over 15 meters. Besides, the percentage of data where the speaker is in FOV is \(58.84\%\) and the percentage of data where the speaker is out of FOV is \(41.16\%\). The simulation follows the realistic scenarios for egocentric AVSL with balanced distribution in different subsets.
## VI Experiments
### _Implementation Details and Evaluation Metrics_
#### Vi-A1 Model Configuration and Training Details
We use 265 sequences for training, 30 sequences for validation, and 20 sequences for testing. We divide each sequence into chunks. Each chunk has one image frame, one audio clip, and one depth image. We extract one chunk for every two image frames to mitigate data repetition. The length of the audio clip is equal to 25 image frames. After splitting, the numbers of chunks for training, validation and testing are around 460,000, 130,000 and 90,000, respectively. For calculating the GCC-PHAT [9] of the audio clip, the length of the fast Fourier transform window is 1024, the hop size is 320 and the number of time-lag coefficients is 96. For the visual features, the frames are resized to 224 \(\times\) 224. Each frame is split into 16 \(\times\) 16 patches. For the transformer encoder, we choose 2 layers, 4 attention heads, 256 intermediate dimensions, and 128 hidden dimensions. For the training process, the batch size is 512 and the learning rate is 1e-3. The number of epochs is 30 with an early-stopping mechanism.
#### Vi-A2 Evaluation Metrics
We use Absolute Error (AE) and Accuracy to measure the performance of models. AE is calculated as follows:
\[\mathrm{AE}=|\theta-\hat{\theta}| \tag{14}\]
Where \(\theta\) is the ground truth degree and \(\hat{\theta}\) is the predicted degree, selected as the maximum of \(\hat{p}\). Since the direction
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Dataset** & **No. Cam** & **No. Mic** & **No. Spk** & **Anno.** & **Ego** \\ \hline AV16.3 & 3 & 16 & 1-3 & \(<\)2h & \(\times\) \\ \hline CAV3D & 5 & 8 & 1-3 & \(<\)2h & \(\times\) \\ \hline AVDIAR & 2 & 6 & 1-4 & \(<\)0.5h & \(\times\) \\ \hline TragicTalkers & 22 & 38 & 1 & \(\sim\)2.5h & \(\times\) \\ \hline Ego-AVSL & 1 & 2 & 1 & \(\sim\)10h & \(\checkmark\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Comparison between the proposed Ego-AVSL dataset and existing audio-visual speaker localization datasets.
of arrival value is cyclic. \(AE\) should range from 0 to 180\({}^{\circ}\). So if \(|\theta-\hat{\theta}|\) is larger than 180\({}^{\circ}\), we use \(360^{\circ}-|\theta-\hat{\theta}|\) as \(AE\). Accuracy is calculated as the percentage of the correct predictions. The prediction is regarded as correct if the AE between the prediction and the ground truth DOA is less than two degrees.
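The cyclic error and the two-degree accuracy can be written compactly; a numpy sketch follows.

```
# Cyclic absolute error of Eq. (14) and the two-degree accuracy of Sec. VI-A2.
import numpy as np

def cyclic_ae(theta, theta_hat):
    ae = np.abs(theta - theta_hat) % 360.0
    return np.minimum(ae, 360.0 - ae)          # fold into [0, 180] degrees

gt = np.array([359.0, 10.0, 180.0])
pred = np.array([0.0, 11.0, 90.0])
ae = cyclic_ae(gt, pred)
print(ae, "accuracy:", np.mean(ae < 2.0))      # [1. 1. 90.], accuracy 2/3
```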
### _Experimental Results_
#### Iv-B1 Baselines
We reimplement and compare the following baseline methods:
1. SRP-PHAT [50]: Steered Response Power Phase Transform, also named Global Coherence Field. It is a signal processing-based method which adds up the GCC-PHAT from multiple microphone pairs within one microphone array and locates the sounding object with the maximum response power.
2. MLPGCC [13]: A learning-based method with fully-connected layers which takes flatted and concatenated GCC-PHAT from all possible microphone pair combinations as input.
3. MLPAVC [30]: A learning-based method with fully connected layers which takes the early concatenation of GCC-PHAT and visual features as input.
4. MLPAVAM [30]: Based on MLPAVC, a dynamic weighting mechanism is introduced to determine the importance of the audio and visual modality.
5. Multimodal Transformer (Multimodal Trn): Transformer with the input of the combination of \(CLS\), the GCC-PHAT and the split image patches. The output corresponding to the \(CLS\) is passed to DOA classifier for DOA prediction.
#### Iv-B2 Analysis
We show the localization results in TABLE II. The first three methods are audio-only methods while the remaining are audio-visual methods. In addition to the results on our proposed Ego-AVSL, we also report results on a noisy version of its test set. The noisy version is created by adding noise from DEMAND [51], which was recorded in real scenarios such as offices, subways and meeting rooms, at an SNR level of 20 dB. Compared to the learning-based methods, the traditional SRP-PHAT method fails, and the learning-based methods are more robust in the ego-centric scenario. MLPAVAM and MLPAVC outperform MLPGCC, showing the advantage of using both audio and visual modalities, as the visual modality can help to localize the speaker. It can be seen that our proposed method outperforms the three learning-based baselines to a large extent, showing the modeling capacity of the Transformer. The multimodal transformer takes the concatenation of GCC-PHAT and image patches as input. It does not show a good result, indicating that it is not suitable to encode the audio and visual features in one Transformer, as there exists a domain gap between the two modalities.
We also show the DOA error distribution in Fig. 10, which demonstrates the error over all DOAs in the test set. We can see that our method shows lower errors over most DOAs, except in the range of 260 \(\sim\) 300\({}^{\circ}\). Besides, our method shows larger errors in the range of 180 \(\sim\) 360\({}^{\circ}\) than in the range of 60 \(\sim\) 120\({}^{\circ}\). The possible reason is that when the target DOA is in the range of 60 \(\sim\) 120\({}^{\circ}\), the visual information is available and beneficial for localization, so the audio and visual modalities work jointly to strengthen the model performance. When the target DOA is in the range of 180 \(\sim\) 360\({}^{\circ}\), only audio is used and the localization resolution of audio is low.
We show two examples of DOA estimation by the baseline methods and our method in Fig. 11. The angle corresponding to the largest value within one color indicates the DOA estimation. In the first figure, the estimations of the baselines deviate a lot from the ground truth, and in the second figure, the estimation of our method is closer to the ground truth than the baseline estimations. However, from TABLE II, it can be seen that the MLP-based methods show better robustness against background noise than the Transformer-based methods.
### _Ablation Study_
We conduct an ablation study to show the helpfulness of the EMD loss and the audio-visual fusion. The results are listed in TABLE III. It is demonstrated that the EMD loss is superior to the Cross Entropy loss, as the latter cannot model the relationship between different DOAs. Without the audio modality, the model performance degrades significantly,
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{**Ego-AVSL**} & \multicolumn{2}{c}{**Ego-AVSL-Noise**} & **Model Size** \\ \hline & **Accuracy**(\%) & **AE**(\({}^{\circ}\)) & **Accuracy**(\%) & **AE**(\({}^{\circ}\)) & **No. Params** \\ \hline SRP-PHAT [50] & 10.060 & 142.00 & 10.850 & 141.35 & - \\ \hline MLPGCC [13] & 77.519 & 14.237 & 74.281 & 21.478 & 2.4M \\ \hline MLPAVC [30] & 82.788 & 10.530 & **80.729** & 16.661 & 2.5M \\ \hline MLPAVAM [30] & 82.992 & 10.387 & 80.540 & 16.265 & 2.6M \\ \hline Multimodal Trn & 86.591 & 4.157 & 77.135 & 17.244 & 3.0M \\ \hline Ours & **89.951** & **3.859** & 77.853 & **15.044** & 1.8M \\ \hline \hline \end{tabular}
\end{table} TABLE II: Experimental results (No. Params denotes the number of trainable model parameters in millions; ‘-’ denotes not applicable.)
Fig. 10: The DOA error distribution of our method and baselines.
as the speaker is often out of the camera's FOV and the visual modality is of no use for localization. Compared to the model with audio only input, our model shows lower localization errors as the visual modality helps to localize the speaker when the speaker is in the view.
We also show the effectiveness of separate training. The second method, **w/o Separate**, means that we directly input both the audio and visual streams into the model without distinguishing the audio-only part from the audio-visual part. The results show that its performance is not as good as our method's, as the visual information is not beneficial for localization when the speaker is out of the FOV.
We show the effects of the visual modality in TABLE IV. It can be seen that with visual information, the model performance degrades a little in the scenario where the speaker is out of the FOV. Even though the visual data do not take part in the training process when the speaker is out of the view, they influence the model optimization in the training batches where the speaker is in the view. However, with the help of the visual modality, the model performance improves in the scenario where the speaker is in the camera view, in particular lowering the AE from 6.830\({}^{\circ}\) to 3.727\({}^{\circ}\).
In addition, we compare multi-modal fusion methods. In TABLE III, **AV Trn \(\times\)** denotes the audio-visual transformer with multiplication fusion: the posteriors from the audio encoder and the visual encoder are multiplied together to obtain the final posterior. The performance of **AV Trn \(\times\)** is not as good as that of our method. The possible reason is that if the posterior of one modality is not reliable, it has a negative impact on the final decision of the model. In contrast, the cross-modality attention module used in our model assigns a hidden weight to each modality based on the modality interaction; if one modality is not reliable, a lower weight is assigned, which mitigates the negative impact.
### _Visualization Results_
In Fig. 12, we show some visualization results of DOA estimation with audio and visual input streams to show the effectiveness of the proposed model. In the first row, the speakers are out of the FOV and no speaker appears in the egocentric image. The DOAs are estimated correctly thanks to the audio modality. In the second row, the speakers are in the FOV, with partial occlusion in the second egocentric image. Our model can also localize the speaker accurately by leveraging both audio and visual modalities.
We also show the t-SNE visualization of the features extracted by the baseline methods MLPGCC [13] and MLPAVAM [30] and by our method in Fig. 13. We use the features before the DOA classifier for visualization. As discussed in [35], the DOA is continuous across \(360^{\circ}\) and \(0^{\circ}\), thus the yellow points and dark blue points should be close to each other. In Fig. 13a and Fig. 13b, some points of different colors are clustered in the center and are not well separated, while in Fig. 13c, the groups of points denoting different DOA ranges are well separated.
## VII Extension to Multiple Speakers in Real Scenarios
### _Speaker Activity Detection_
Although the proposed simulated dataset mitigates the problems of manual annotation and privacy, it has some limitations. On the one hand, it contains only one speaker, whereas real scenarios often involve multiple speakers talking at the same time. On the other hand, the simulated dataset has a domain gap with respect to real data and real scenarios. Thus we also evaluate the model performance on a real dataset.
We evaluate the proposed method on the EasyCom dataset [24]. Scenarios in this dataset are introduced in Section I. Following the settings in [25], we predict a full \(360^{\circ}\) spherical speaker localization map with dimension \(90\times 180\), where \(90\) represents the elevation and \(180\) represents the azimuth at \(2^{\circ}\) resolution. To adapt the proposed model from single-speaker localization to multiple-speaker localization, we keep most of the model components except the final classification layer. After obtaining the concatenated encoded \(CLS\) in Fig.
Fig. 11: The DOA estimation of two examples. The left figure shows the case where the speaker is in front of the wearer, while the right figure shows the case where the speaker is behind the wearer.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{**In the View**} & \multicolumn{2}{c}{**Out of the View**} & \multicolumn{2}{c}{**Overall**} \\ \hline & **Acc**(\(\%\)) & **AE**(\(^{\circ}\)) & **Acc**(\(\%\)) & **AE**(\(^{\circ}\)) & **Acc**(\(\%\)) & **AE**(\(^{\circ}\)) \\ \hline AO & 92.625 & 6.830 & **75.822** & **3.689** & 89.762 & 6.297 \\ \hline AV & **93.596** & **3.727** & 72.466 & 4.545 & **89.951** & **3.859** \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Effect of the visual signals (**AO** denotes ‘Audio Only’ and **AV** denotes ‘Audio Visual’)
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{**Ego-AVST**} & \multicolumn{2}{c}{**Ego-AVST-Noise**} \\ \hline & **Accuracy**(\(\%\)) & **AE**(\(^{\circ}\)) & **Accuracy**(\(\%\)) & **AE**(\(^{\circ}\)) \\ \hline w/o EMD & 50.671 & 56.644 & 50.671 & 56.644 \\ \hline w/o Separate & 86.446 & 4.411 & 77.288 & 16.213 \\ \hline Visual Only Trn & 57.989 & 48.490 & 57.989 & 48.490 \\ \hline Audio Only Trn & 89.762 & 6.297 & 77.460 & 15.370 \\ \hline AV Trn \(\times\) & 87.435 & 4.480 & 77.089 & 17.177 \\ \hline Ours & **89.951** & **3.859** & **77.853** & **15.044** \\ \hline \hline \end{tabular}
\end{table} TABLE III: Ablation Study (**w/o** denotes ‘without’, **Trn** denotes ‘Transformer’ and ‘\(\times\)’ denotes multiplication of multi-modal likelihoods.)
6, we use two fully connected layers to project the hidden dimension to \(4050\) and then reshape each output to \((45,90)\). The two reshaped tensors are stacked into \((2,45,90)\) and upsampled to \((2,90,180)\), representing the localization map. Non-maximum suppression is first applied to the map to remove repeated predictions, with radius 5 and threshold 0. The selected predictions are matched with the ground truth positions using the Hungarian algorithm.
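A minimal sketch of this projection-and-upsampling head is given below, assuming PyTorch and a placeholder hidden size of 512; the class and layer names are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalizationHead(nn.Module):
    """Projects the concatenated CLS embedding to a (2, 90, 180) map."""

    def __init__(self, hidden_dim: int = 512):
        super().__init__()
        # Two parallel FC layers, each producing one 4050-dim (45 x 90) map.
        self.fc1 = nn.Linear(hidden_dim, 4050)
        self.fc2 = nn.Linear(hidden_dim, 4050)

    def forward(self, cls_embed: torch.Tensor) -> torch.Tensor:
        m1 = self.fc1(cls_embed).view(-1, 45, 90)
        m2 = self.fc2(cls_embed).view(-1, 45, 90)
        maps = torch.stack([m1, m2], dim=1)         # (B, 2, 45, 90)
        return F.interpolate(maps, size=(90, 180),  # (B, 2, 90, 180)
                             mode="bilinear", align_corners=False)

out = LocalizationHead()(torch.randn(4, 512))
assert out.shape == (4, 2, 90, 180)
```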
The ground truth is obtained by transferring the annotations of 3D positions and quaternions to the \((90,180)\) map, in which \(1\) indicates that there exists a speaker. The ground truth is augmented by also marking the cells near the speaker positions as \(1\). The model is trained with the Cross Entropy loss. Following [25], we calculate Mean E1, Std1, Mean E2 and Std2 to evaluate the model performance, where Mean E1 and Std1 are the mean and standard deviation of the distance from each prediction to the ground truth, while Mean E2 and Std2 are the mean and standard deviation of the distance from each ground truth to the predictions. The former two metrics account for the impact of false positives and the latter two for the impact of missed targets. The EasyCom dataset has 12 sessions; we use sessions 4-12 for training and sessions 1-3 for testing, keeping the same setting as [25].
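Under this description, the four metrics can be sketched as follows. This is a simplified illustration that measures nearest-neighbour distances on the map grid and omits the Hungarian matching step described above.

```python
import numpy as np

def localization_errors(preds: np.ndarray, gts: np.ndarray):
    """preds: (N, 2), gts: (M, 2) arrays of (elevation, azimuth) cells.

    Returns (mean_e1, std1, mean_e2, std2): prediction-to-ground-truth
    distances penalize false positives; ground-truth-to-prediction
    distances penalize missed targets.
    """
    d = np.linalg.norm(preds[:, None, :] - gts[None, :, :], axis=-1)  # (N, M)
    e1 = d.min(axis=1)  # each prediction to its nearest ground truth
    e2 = d.min(axis=0)  # each ground truth to its nearest prediction
    return e1.mean(), e1.std(), e2.mean(), e2.std()
```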
Experimental results are shown in TABLE V, where the baseline models from [25] are compared. To show the usefulness of the proposed simulated dataset, we provide two versions of our model. The first one is trained from scratch on the EasyCom dataset (Ours\(\dagger\)). The other is initialized with the weights pretrained on the proposed simulated dataset (Ours\(\ddagger\)), except for the final projection layers. We can see that the model initialized with the pretrained weights outperforms the model trained from scratch in terms of Mean E1 and Mean E2,
\begin{table}
\begin{tabular}{l r r r r} \hline \hline
**Methods** & **Mean E1** & **Std1** & **Mean E2** & **Std2** \\ \hline AV(cor) & 16.77 & 12.63 & 6.56 & 8.77 \\ \hline AV(spec) & 8.81 & 9.63 & 6.21 & 6.89 \\ \hline DOA & 129.82 & 18.26 & 46.45 & 21.50 \\ \hline DOA+image & 66.81 & 7.89 & 36.48 & 8.97 \\ \hline AV-rawaudio & 40.14 & 10.55 & 140.75 & 19.58 \\ \hline Ours\(\dagger\) & 9.33 & 12.78 & 4.72 & 7.15 \\ \hline Ours\(\ddagger\) & **8.00** & 10.31 & **4.49** & 7.53 \\ \hline \hline \end{tabular}
\end{table} TABLE V: Experimental results on EasyCom Dataset for egocentric active speaker localization. Experimental results of _AV(cor)_, _AV(spec)_, _DOA_, _DOA_+_image_ and _AV-rawaudio_ are imported from [25]. \(\dagger\) denotes that the model is trained from scratch. \(\ddagger\) denotes that the model is initialized with the weights pretrained on the proposed simulated dataset.
Fig. 12: Visualization results of DOA estimations with the egocentric image and binaural audio. The blue arrow and the red arrow show the positive directions of the north and the east, respectively. The fan areas with white gradient starting from the origin in the polar coordinates show the FOV. The green straight line shows the estimation result, while the red straight line shows the ground truth. In the first row, there are two examples where the speakers are out of the FOV. In the second row, scenarios where the speakers are in the FOV are shown.
indicating that the model indeed learns useful knowledge from the simulated dataset. Besides, it can be seen that our proposed model outperforms the baseline methods in terms of Mean E1 and Mean E2, showing that the model can handle both simulated single-speaker localization and multi-speaker localization in real scenarios.
### _Wearer Speech Activity Detection_
In the EasyCom dataset, the wearer talks from time to time, and the annotations contain not only the speech activity of the participants but also the wearer's. We use the model trained in Section VII-A to predict whether the wearer is talking. Specifically, after training the model for active speaker localization, we take the audio encoder from Section VII-A and add an extra fully connected layer on top of the output _CLS_ token for binary classification. The model is trained with the Cross Entropy loss. We calculate mAP and compare the performance with the baseline model in [25]. Experimental results are reported in TABLE VI. It can be seen that the proposed model outperforms the baselines, which demonstrates that our model has a good capacity for wearer activity prediction as well. Combined with the experimental results in Sections VI-B2 and VII-A, this shows that the proposed model is versatile and is able to predict the audio activity of both the speakers and the wearer in both simulated and real scenarios.
## VIII Conclusion
In this paper, we propose an audio-visual fusion architecture for egocentric audio-visual speaker tracking. Experimental results demonstrate the effectiveness of the proposed method and its capability for dealing with out-of-view problems. The ablation study shows that both the audio and visual modalities are helpful for localization, with the visual modality being particularly beneficial when the speaker is in the FOV. Experiments on the EasyCom dataset also show the effectiveness of our method on real data and the usefulness of the proposed simulated dataset.
For future work, we will explore adding depth features for speaker localization. The egocentric tracking scenario is complicated: many problems may occur in real applications, including motion blur, speaker disappearance, occlusions, surrounding noise, bad illumination conditions [25] and so on. In this paper we mainly deal with the challenges of speaker disappearance, occlusions and audio noise. We will continue to focus on the remaining problems.
|
2309.08136 | Let's Roll: Synthetic Dataset Analysis for Pedestrian Detection Across
Different Shutter Types | Computer vision (CV) pipelines are typically evaluated on datasets processed
by image signal processing (ISP) pipelines even though, for
resource-constrained applications, an important research goal is to avoid as
many ISP steps as possible. In particular, most CV datasets consist of global
shutter (GS) images even though most cameras today use a rolling shutter (RS).
This paper studies the impact of different shutter mechanisms on machine
learning (ML) object detection models on a synthetic dataset that we generate
using the advanced simulation capabilities of Unreal Engine 5 (UE5). In
particular, we train and evaluate mainstream detection models with our
synthetically-generated paired GS and RS datasets to ascertain whether there
exists a significant difference in detection accuracy between these two shutter
modalities, especially when capturing low-speed objects (e.g., pedestrians).
The results of this emulation framework indicate the performance between them
are remarkably congruent for coarse-grained detection (mean average precision
(mAP) for IOU=0.5), but have significant differences for fine-grained measures
of detection accuracy (mAP for IOU=0.5:0.95). This implies that ML pipelines
might not need explicit correction for RS for many object detection
applications, but mitigating RS effects in ISP-less ML pipelines that target
fine-grained location of the objects may need additional research. | Yue Hu, Gourav Datta, Kira Beerel, Peter Beerel | 2023-09-15T04:07:42Z | http://arxiv.org/abs/2309.08136v1 | # Let's Roll: Synthetic Dataset Analysis for Pedestrian Detection Across Different Shutter Types
###### Abstract
Computer vision (CV) pipelines are typically evaluated on datasets processed by image signal processing (ISP) pipelines even though, for resource-constrained applications, an important research goal is to avoid as many ISP steps as possible. In particular, most CV datasets consist of global shutter (GS) images even though most cameras today use a rolling shutter (RS). This paper studies the impact of different shutter mechanisms on machine learning (ML) object detection models on a synthetic dataset that we generate using the advanced simulation capabilities of Unreal Engine 5 (UE5). In particular, we train and evaluate mainstream detection models with our synthetically-generated paired GS and RS datasets to ascertain whether there exists a significant difference in detection accuracy between these two shutter modalities, especially when capturing low-speed objects (e.g., pedestrians). The results of this emulation framework indicate the performance between them are remarkably congruent for coarse-grained detection (mean average precision (mAP) for IOU=0.5), but have significant differences for fine-grained measures of detection accuracy (mAP for IOU=0.5:0.95). This implies that ML pipelines might not need explicit correction for RS for many object detection applications, but mitigating RS effects in ISP-less ML pipelines that target fine-grained location of the objects may need additional research.
Yue Hu\({}^{1}\), Gourav Datta\({}^{1}\), Kira Beerel\({}^{2}\), Peter Beerel\({}^{1}\)+\({}^{1}\)University of Southern California \({}^{2}\) Harvard Westlake High School
Footnote †: This research is partially supported by a grant from Samsung.
Synthetic dataset, rolling shutter effect, machine learning, detection model, mean average precision
## 1 Introduction
In the field of digital photography and videography, the choice of camera shutter mechanism plays a pivotal role in determining the quality and fidelity of captured images. The majority of mainstream cameras available in the market today employ a RS mechanism [1, 2]. While this mechanism offers several advantages, including reduced manufacturing costs and lower power consumption, it often distorts the image, particularly when there is relative motion between the camera and the subject being captured. Algorithms within image signal processing (ISP) pipelines correct for this distortion [3, 4, 5, 6, 7, 8]. By doing so, the resulting images, when viewed by the human eye, appear undistorted and true to the scene. These corrected images are typically also used as inputs to a CV pipeline, ensuring that the models are not adversely affected by the distortions inherent in RS captures.
Meanwhile, efforts to mitigate the energy consumption of camera-driven CV pipelines have gained a lot of attention in the literature, particularly for energy-constrained applications, such as autonomous drones, surveillance, and headsets for augmented reality [9, 10]. The research advocates bringing the compute-heavy ML algorithms as close to the image sensor as possible [11, 12]. This co-location can reduce the energy associated with transferring large amounts of sensor data between chips and, when taken to the extreme of implementing in-sensor computing, minimize the cost of energy-expensive analog-to-digital conversion within the sensor [13]. Unfortunately, the complexity of many algorithms, including RS correction, makes them difficult to implement in and near the sensor. This presents a compelling question: is the RS correction in an ML pipeline necessary? Instead, can the ML pipeline automatically compensate for RS artifacts as shown in Fig. 1?
To evaluate this question, we propose to use a rolling shutter (RS) dataset for training and fine-tuning ML models and compare their accuracies with those trained with a global shutter (GS) dataset. Unfortunately, existing public datasets tailored for studying the RS effect lack this crucial pairing with a GS dataset [14, 15, 8]. This void in resources compelled us to leverage the CV simulation capabilities of
Figure 1: Comparison of the baseline and our target CV pipelines; the latter avoids ISP correction for RS images.
Unreal Engine 5 (UE5). We first generated a GS dataset with a very high frame rate. To create the RS dataset, we then emulated the rolling shutter effect by amalgamating successive rows from sequences of generated GS images, mirroring the characteristic line-by-line scan intrinsic to rolling shutters. This work makes three key contributions to the field of object detection under different camera shutter effects.
* **Synthetic Dataset Generation for Shutter Effect Analysis:** We generate a synthetic paired GS/RS dataset using the real-time 3D creation software Unreal Engine 5 (UE5) designed to evaluate pedestrian detection models for both rolling and global shutters under various conditions, as illustrated in Fig. 2.
* **Empirical Validation of Detection Models under Different Shutter Effects:** We use our synthetic dataset to conduct extensive experiments on mainstream object detection models, specifically YOLOv8 and DETR, to assess their performance under different shutter effects. The results show that ML pipelines need not correct for a RS for many coarse-grained object detection applications. However, for applications that require fine-grained location of the objects, the results suggest that achieving ISP-less CV pipelines for RS cameras may need additional effort. Our results also show that the accuracy of the pedestrian detection models can be significantly improved with our synthetic dataset while retaining their transferability to GS images.
* **Development of a Shutter Simulation Framework:** We have developed a comprehensive framework that simulates ultra-high frame rate GS images in order to simulate RS effects, providing a versatile toolset for generating pedestrian detection datasets under various shutter conditions.
## 2 Rolling Shutter vs Global Shutter
A digital camera typically captures images using either a RS or a GS mechanism. The primary distinction between the two is in the way they capture and process light onto the sensor. GS captures the entire image simultaneously. Every pixel on the sensor is exposed to light at the exact same moment, resulting in a distortion-free capture of fast-moving subjects, as shown in Fig. 3(a). In the RS mode, the image sensor scans and captures the image line by line, sequentially from top to bottom. This means that not all parts of the image are recorded at precisely the same time. For subjects in fast motion or when the camera itself is moving quickly, this sequential capturing can result in distortions, commonly referred to as 'rolling shutter artifacts' or the 'jello effect'. An example artifact is when pedestrians look bent or skewed, as shown in Fig. 3(b).
From the perspective of peripheral circuit design, the mechanism of choice has implications for readout circuitry, speed, and complexity. The choice is often a trade-off between cost, speed, and potential image artifacts [16, 17, 18].
When using RS cameras for CV applications, the detected position of the object can be affected by the time delay between the top and bottom rows of the sensor. This can result in misalignment or incorrect positioning of the detected object in the processed image. For instance, a fast-moving car could appear slightly tilted or elongated when captured by a camera with a RS mechanism, potentially leading to less accurate detection or misinterpretation of its speed and trajectory. We will investigate the impact of RS images on the detection performance of object detection models in the following sections.
## 3 Analyzing the Impact of RS using UE5
Capitalizing on UE5's capabilities, we used the "sample city" project [19] as our foundational city environment to conduct our experiment. We designed and implemented 40 distinct urban street scenes that span across 8 streets, as shown in Fig. 2(a).
Figure 3: Rolling Shutter VS Global Shutter
Figure 2: Unreal Engine 5 Dataset Scene Generation
Each of these scenes showcases a unique environment adding diversity to the dataset.
**Temporal Setting Variations:** Every individual street scene is rendered under five different times of the day, emulating a comprehensive spectrum of lighting conditions. These distinct times are visually represented by varying light intensities and angles, as depicted in Fig. 2b.
**Crowd Dynamics:** The scenes incorporate randomized crowds to mimic real-world scenarios. Factors such as gender, height, body shape, skin tone, hair, and attire vary to introduce diversity and realism. Considering that pedestrians play a vital role in our analysis, their maximum walking speed is 2 meters per second in our normal walking speed dataset. Additionally, we provide a dataset with the pedestrians walking at 10x normal speed to study the detection model's performance on RS images under faster motion conditions.
**Camera Settings and Global Shutter Data Generation:** To capture the nuances of each scene and the effect of RS on object detection, we use the following camera settings: an aperture of f/2.8, a focal length of 35.0mm, a filmback ratio of 16:9 for digital film, a 12mm prime lens at f/2.8, and a frame rate set at 32,400fps.
Each scene was documented using five cameras positioned at diverse angles, with each camera continuously capturing 1080 frames for each environmental condition. All images maintain a resolution of \(1920\times 1080\). Moreover, for every pedestrian that made an appearance in a given shot, we generated a bounding box annotation.
We use the first frame out of the 1080 frames as a frame in the GS dataset, whereas an entire sequence of 1080 frames is used to create a single frame in the RS dataset, as described below. Thus, the GS dataset has a frame rate of 30 frames per second (i.e., \(1080\times 30=32400\)), a typical rate for cameras.
**Generation of Rolling Shutter Dataset:** To synthesize the RS dataset, we simulate the RS effect by sequentially replacing rows of pixels in a top-to-bottom fashion with the corresponding rows of a sequence of GS images, emulating the line-by-line scan typical of rolling shutters. Thus, for each sequence of 1080 images from the GS dataset, we produce a single image that captures the RS effect.
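Concretely, the row-replacement scheme can be sketched as below, assuming one GS frame per sensor row (1080 frames of height 1080 per RS image, as in our setup); the function name is illustrative.

```python
import numpy as np

def synthesize_rs_frame(gs_frames: np.ndarray) -> np.ndarray:
    """gs_frames: (T, H, W, C) stack of consecutive GS frames with T == H.

    Row r of the output is copied from GS frame r, emulating a rolling
    shutter that exposes one sensor row per GS frame interval (top to
    bottom). In this setup T = H = 1080 and W = 1920.
    """
    t, h, w, c = gs_frames.shape
    assert t == h, "one GS frame per scanned row is assumed here"
    rs = np.empty((h, w, c), dtype=gs_frames.dtype)
    for r in range(h):
        rs[r] = gs_frames[r, r]
    return rs
```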
Following the generation of the RS images, we use an annotation tool [20] to manually label each pedestrian present in every frame, providing the ground truth needed to train our object detection models.
## 4 Experimental Results
**Dataset Specifications and Distribution:** In this paper, we generate four distinct datasets, namely Normal_RS and Normal_GS for pedestrians walking at a normal walking speed of \(2m/s\), and Faster_RS and Faster_GS for pedestrians moving \(10\times\) faster than normal walking speed. These datasets each have 1000 frames, with 800 for training, 100 for validation, and 100 for testing. For the Normal_RS training dataset, the average size of a bounding box is \(7,725\)px\({}^{2}\) with a total of \(2,428\) bounding boxes, yielding an average of \(3.17\) bounding boxes per image. Similarly, the Normal_GS training dataset has an average bounding box size of \(10,847\)px\({}^{2}\) and a cumulative count of \(2,337\) bounding boxes, with an average of \(3.18\) bounding boxes across the images containing them. The Faster_RS training dataset has an average bounding box size of \(11,465\)px\({}^{2}\), with \(2,560\) bounding boxes in total and \(3.31\) boxes per image on average. Lastly, the Faster_GS training dataset has an average bounding box size of \(11,932\)px\({}^{2}\). The total number of bounding boxes is \(2,616\), with an average of \(3.43\) boxes per image.
The minor differences between the bounding box sizes of the GS and RS datasets can be attributed to differences between the manual labelling of the RS datasets and the automatic labelling (by UE5) of the GS datasets. The minor difference in the number of bounding boxes can be due to the fact that the GS datasets sample the image at the beginning of the frame interval, whereas RS takes image content from across the entire frame interval.
**Performance of RS and GS Datasets:** For the object detection efficacy assessment, we evaluated these datasets on the state-of-the-art model YOLOv8 [21] and the transformer-based model DETR [22]. Training for YOLOv8 was conducted with a set learning rate of 0.001, while DETR utilized a learning rate of 0.0001, with both models being trained for 100 epochs. The procedural flow of our experiment is graphically represented in Fig. 4. The performance metrics that we evaluated are precision (P), recall (R), and mean average precision (mAP), measured with an Intersection over Union (IoU) threshold of 0.5 as well as over a range from 0.5 to 0.95 with a step size of 0.05\({}^{1}\).
Footnote 1: Note that DETR model does not provide a metric for precision.
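For reference, the second mAP metric averages the AP over ten IoU thresholds, which can be sketched as follows; the `ap_at_iou` callable is a placeholder for any AP evaluator.

```python
import numpy as np

def map_50_95(ap_at_iou) -> float:
    """Average AP over IoU thresholds 0.50, 0.55, ..., 0.95 (COCO-style).

    `ap_at_iou` is any callable returning the AP at a given IoU threshold.
    """
    thresholds = np.arange(0.5, 1.0, 0.05)  # ten thresholds
    return float(np.mean([ap_at_iou(t) for t in thresholds]))
```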
We pretrained all of our models on the COCO dataset and then validated the normal and faster datasets with YOLOv8 and DETR. The results presented in Table 1 reveal that the mean Average Precision (mAP) results for training and validation on GS and RS, when the Intersection Over Union (IOU) threshold is set at 0.5, are remarkably similar, with \(<\)\(2\%\) deviation. Remarkably, this congruence holds for both slower and faster walking conditions. However, when the detection
Figure 4: Pedestrians Detection Experiment Pipeline
criterion is more stringent, measured with an IOU that ranges from 0.5 to 0.95, the discrepancy grows to 24%.
These results suggest that for coarse-grained detection of objects, it may not be necessary to correct the RS effect in the CV pipeline. However, the results also show that the distortion of the pedestrians makes accurate sizing and positioning of their bounding boxes more difficult, and that this uncertainty is difficult to compensate for with training.2
Footnote 2: Note that the faster datasets yield, on average, higher mAP scores than the normal dataset. We conjecture that this is due to the variation in the scene generation which resulted in the faster datasets having pedestrians that are closer to the camera and thus seem larger.
**Cross-Training and Validation of Datasets:** We also measured the detection outcomes of models trained exclusively on COCO using YOLOv8. Comparing the results in Table 2 with those of Table 1, we see that training only on COCO yields significantly worse results for both [email protected] and [email protected]:0.95. This shows the importance of fine-tuning these models on application-specific datasets and, in particular, shows the value of our datasets for pedestrian detection.
The second dataset analysis shown in Table 3 presents the results on YOLOv8 with a combination of fast and slow pedestrians. The results show that training on our RS dataset significantly improves the test mAP of RS images compared to training on GS images, showing the efficacy of our dataset. Moreover, models trained with the RS dataset perform similarly when tested on a combination of RS and GS images, showing the transferability of the models to GS images.
Lastly, in order to show the evaluation results under different diversity, we analyze the impact of the number of scenes and camera views in the training dataset in Fig. 5. It shows that, as the training dataset size increases, which also increases the scene diversity in Fig. 5(a) and the camera-view diversity in Fig. 5(b), the mAP generally increases.
## 5 Conclusions
This paper analyzes the intricate relationship between camera shutter mechanisms and their implications for pedestrian detection models. Our findings quantify the relative detection accuracy achievable by ML models between global and RS modalities. In particular, they show that RS correction is not necessary in scenarios where a moderately coarse overlap of the bounding boxes with the ground truth is sufficient, i.e., using the mean average precision metric with an IoU of 0.5. This result challenges the prevailing notion that RS corrections are indispensable for all camera operations, suggesting that for specific applications like pedestrian detection, such corrections might be unnecessary [1, 23, 24, 25]. These results help quantify the impact of RS effects in recently proposed energy-efficient smart camera systems that limit the application of an ISP pipeline and leverage in-pixel computing paradigms [26, 27, 28].
Our work's significance is amplified by the introduction of a synthetic dataset, hand-crafted using Unreal Engine 5 (UE5). This dataset, simulating ultra-high frame-rate GS images to emulate the impact of RS, stands as a testament to the fusion of advanced simulation capabilities with practical CV applications for RS cameras.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Dataset} & \multirow{2}{*}{P} & \multirow{2}{*}{R} & \multicolumn{2}{c}{mAP} \\ \cline{5-6} & & & & [email protected] & [email protected]:0.95 \\ \hline \multirow{2}{*}{YOLOv8} & Normal\_GS & 0.97 & 0.70 & 0.82 & 0.60 \\ & Normal\_RS & 0.94 & 0.67 & 0.82 & 0.44 \\ \hline \multirow{2}{*}{DETR} & Normal\_GS & \(-\) & 0.51 & 0.72 & 0.40 \\ & Normal\_RS & \(-\) & 0.40 & 0.71 & 0.28 \\ \hline \multirow{2}{*}{YOLOv8} & Faster\_GS & 0.99 & 0.97 & 0.98 & 0.72 \\ & Faster\_RS & 0.98 & 0.97 & 0.99 & 0.59 \\ \hline \multirow{2}{*}{DETR} & Faster\_GS & \(-\) & 0.64 & 0.96 & 0.53 \\ & Faster\_RS & \(-\) & 0.61 & 0.98 & 0.48 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Model Performance with GS and RS Datasets
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Train} & \multirow{2}{*}{Validation} & \multirow{2}{*}{P} & \multirow{2}{*}{R} & \multicolumn{2}{c}{mAP} \\ \cline{5-6} & & & & [email protected] & [email protected]:0.95 \\ \hline RS & RS & 0.90 & 0.75 & 0.82 & 0.35 \\ GS & RS & 0.80 & 0.67 & 0.63 & 0.23 \\ RS & GS+RS & 0.94 & 0.71 & 0.80 & 0.36 \\ \hline \hline \end{tabular}
\end{table}
Table 3: RS & RS+GS Dataset Validation on YOLOv8 with Fast and Slow Pedestrians
Figure 5: Impact of dataset diversity on pedestrian detection performance |
2309.07205 | Diquark Explanation of $b\to s\ell^+\ell^-$ | The discrepancies between $b\to s\ell^+\ell^-$ data and the corresponding
Standard Model predictions point to the existence of new physics with a
significance at the $5\sigma$ level. While previously a lepton flavour
universality violating effect was preferred, the new $R(K^{(*)})$ and
$B_s\to\mu^+\mu^-$ measurements are now compatible with the Standard Model,
favouring a lepton flavour universal beyond the Standard Model contribution to
$C_9$. Since heavy new physics is generally chiral, and because of the
stringent constraints from charged lepton flavour violation, this poses a
challenge for model building. In this article, we point out a novel
possibility: a diquark, i.e. a coloured scalar, induces the Wilson coefficient
of the $(\bar s \gamma^\mu P_L b) (\bar c \gamma_\mu P_L c)$ operator at
tree-level, which then mixes into $O_9$ via an off-shell photon penguin. This
setup allows for a lepton flavour universal effect of $C_9\approx-0.5$, without
violating bounds from $\Delta M_s$, $\Delta\Gamma$, $B\to X_s\gamma$ and
$D^0-\bar D^0$ mixing. This scenario predicts a small and negative
$C_9^{\prime}$ and a light diquark, preferably with a mass around $500\,$GeV,
as compatible with the CMS di-di-jet analysis, and a deficit in the inclusive
$b\to c\bar c s$ rate. | Andreas Crivellin, Matthew Kirk | 2023-09-13T18:00:00Z | http://arxiv.org/abs/2309.07205v2 | # Diquark Explanation of \(b\to s\ell^{+}\ell^{-}\)
###### Abstract
The discrepancies between \(b\to s\ell^{+}\ell^{-}\) data and the corresponding Standard Model predictions point to the existence of new physics with a significance at the \(5\sigma\) level. While previously a lepton flavour universality violating effect was preferred, the new \(R(K^{(*)})\) and \(B_{s}\to\mu^{+}\mu^{-}\) measurements are now compatible with the Standard Model, favouring a lepton flavour universal beyond the Standard Model contribution to \(C_{9}\). Since heavy new physics is generally chiral, and because of the stringent constraints from charged lepton flavour violation, this poses a challenge for model building. In this article, we point out a novel possibility: a diquark, i.e. a coloured scalar, induces the Wilson coefficient of the \((\bar{s}\gamma^{\mu}P_{L}b)(\bar{c}\gamma_{\mu}P_{L}c)\) operator at tree-level, which then mixes into \(O_{9}\) via an off-shell photon penguin. This setup allows for a lepton flavour universal effect of \(C_{9}\approx-0.5\), without violating bounds from \(\Delta M_{s}\), \(\Delta\Gamma\), \(B\to X_{s}\gamma\) and \(D^{0}-\bar{D}^{0}\) mixing. This scenario predicts a small and negative \(C_{9}^{\prime}\) and a light diquark, preferably with a mass around \(500\,\mathrm{GeV}\), as compatible with the CMS di-di-jet analysis, and a deficit in the inclusive \(b\to c\bar{c}s\) rate.
+
Footnote †: preprint: PSI-PR-23-35, ZU-TH 54/23
## I Introduction
While the Standard Model (SM) Cabibbo-Kobayashi-Maskawa (CKM) mechanism [1] of quark flavour violation was established by the \(B\) factories Belle [2] and BaBar [3], there is still room for new physics (NP) at the order of \(10\%\) in flavour changing neutral current (FCNC) processes. Such FCNC observables are loop suppressed and thus particularly sensitive to beyond-the-SM contributions. In fact, there are long-lasting hints for NP in \(b\to s\ell^{+}\ell^{-}\) observables. However, the picture changed radically with the release of the latest LHCb results for the ratios \(R(K^{(*)})=\mathrm{Br}[B\to K^{(*)}\mu^{+}\mu^{-}]/\mathrm{Br}[B\to K^{(*)}e^{+}e ^{-}]\)[4; 5], superseding their previous measurements [6; 7; 8]. While previously all global fits [9; 10; 11; 12; 13; 14; 15] preferred lepton flavour universality (LFU) violating NP [16; 17], now data is not only consistent with LFU but even stringently limits deviations from it.
Nonetheless, the case for physics beyond the SM in \(b\to s\ell^{+}\ell^{-}\) transitions remains very strong (see Ref. [18] for a recent review). The main tensions with the SM predictions are within the angular \(B\to K^{*}\mu^{+}\mu^{-}\) observable \(P_{5}^{\prime}\)[19; 20; 21; 22], the total branching ratio and angular observables in \(B_{s}\to\phi\mu^{+}\mu^{-}\)[23; 24; 25] as well as in \(\mathrm{Br}[B\to K\mu^{+}\mu^{-}]\)[26; 27]1, with tensions at the \(2-4\sigma\) level in each of these modes. In fact, while SM predictions are challenging, due to the hadronic form factors involved [33; 34; 35; 36; 37; 38] (including non-local charm-loop contributions [39; 40; 41]), the first lattice calculation over the full \(q^{2}\) range of \(\mathrm{Br}[B\to K\mu^{+}\mu^{-}]\) leads to a stronger tension of \(4.7\,\sigma\)[42]. Furthermore, \(P_{5}^{\prime}\), being an optimised angular observable [43; 44; 45], possesses a reduced sensitivity to the form factors, and semi-inclusive decays at high \(q^{2}\), which are independent of hadronic form factors, are fully compatible with the other observables [46]. Finally, dispersive methods based on analyticity confirm previous error estimates for the form factors [47; 48] (including their non-local parts).
Footnote 1: Measurements of these decays were also performed by the ATLAS, CMS and Belle collaborations [28; 29; 30; 31; 32] but with less precise results.
Combining the processes discussed above in a global fit together with all other available data on \(b\to s\ell^{+}\ell^{-}\) transitions leads to a coherent picture. In fact, while before the \(R(K^{(*)})\) update, the most strongly favoured scenarios were at least two-dimensional [49], now a single one-dimensional scenario is clearly favoured: the \(C_{9}^{U}\) scenario with a significance around \(5\sigma\)[50; 51; 52; 53]. This means that a left-handed \(b-s\) current and a vectorial flavour-universal lepton current are needed (\(B_{s}\to\mu^{+}\mu^{-}\)[54; 55; 56; 57] constrains an axial current).
This poses a challenge for model building since both tree-level leptoquark effects [58; 59] as well as loop contributions of new scalars and fermions [60; 61; 62; 63], in general, give a chiral lepton current and have difficulties respecting the stringent bounds from \(\mu\to e\) flavour violation [64] unless multiple generations are involved [65]. This leaves \(Z^{\prime}\) models [66; 67; 68] as well as leptoquarks which generate a LFU effect in \(C_{9}^{U}\) via a tau-loop with an off-shell photon [69; 70; 71; 72; 73] as the remaining (simple) options. However, also in these cases \(B_{s}-\bar{B}_{s}\) mixing [74], LEP and LHC constraints [75; 76] make a full explanation challenging.
An alternative scenario that can naturally generate \(C_{9}^{U}\) is a NP contribution to the Wilson coefficient of the \((\bar{s}\gamma^{\mu}P_{L}b)(\bar{c}\gamma_{\mu}P_{L,R}c)\) operator [77], which mixes into \(C_{9}\) via an off-shell photon penguin [78]. As a tree-level effect in \(\bar{s}b\bar{c}c\) operators is necessary, only \(Z^{\prime}\) bosons, heavy gluons, Higgses [79; 80; 81; 82] or diquarks (DQs) come to mind [83]. |
2306.17693 | Thompson sampling for improved exploration in GFlowNets | Generative flow networks (GFlowNets) are amortized variational inference
algorithms that treat sampling from a distribution over compositional objects
as a sequential decision-making problem with a learnable action policy. Unlike
other algorithms for hierarchical sampling that optimize a variational bound,
GFlowNet algorithms can stably run off-policy, which can be advantageous for
discovering modes of the target distribution. Despite this flexibility in the
choice of behaviour policy, the optimal way of efficiently selecting
trajectories for training has not yet been systematically explored. In this
paper, we view the choice of trajectories for training as an active learning
problem and approach it using Bayesian techniques inspired by methods for
multi-armed bandits. The proposed algorithm, Thompson sampling GFlowNets
(TS-GFN), maintains an approximate posterior distribution over policies and
samples trajectories from this posterior for training. We show in two domains
that TS-GFN yields improved exploration and thus faster convergence to the
target distribution than the off-policy exploration strategies used in past
work. | Jarrid Rector-Brooks, Kanika Madan, Moksh Jain, Maksym Korablyov, Cheng-Hao Liu, Sarath Chandar, Nikolay Malkin, Yoshua Bengio | 2023-06-30T14:19:44Z | http://arxiv.org/abs/2306.17693v1 | # Thompson Sampling for Improved Exploration in GFlowNets
###### Abstract
Generative flow networks (GFlowNets) are amortized variational inference algorithms that treat sampling from a distribution over compositional objects as a sequential decision-making problem with a learnable action policy. Unlike other algorithms for hierarchical sampling that optimize a variational bound, GFlowNet algorithms can stably run off-policy, which can be advantageous for discovering modes of the target distribution. Despite this flexibility in the choice of behaviour policy, the optimal way of efficiently selecting trajectories for training has not yet been systematically explored. In this paper, we view the choice of trajectories for training as an active learning problem and approach it using Bayesian techniques inspired by methods for multi-armed bandits. The proposed algorithm, Thompson sampling GFlowNets (TS-GFN), maintains an approximate posterior distribution over policies and samples trajectories from this posterior for training. We show in two domains that TS-GFN yields improved exploration and thus faster convergence to the target distribution than the off-policy exploration strategies used in past work.
Machine Learning, ICML
## 1 Introduction
Generative flow networks (GFlowNets; Bengio et al., 2021) are generative models which sequentially construct objects from a space \(\mathcal{X}\) by taking a series of actions sampled from a learned policy \(P_{F}\). A GFlowNet's policy \(P_{F}\) is trained such that, at convergence, the probability of obtaining some object \(x\in\mathcal{X}\) as the result of sampling a sequence of actions from \(P_{F}\) is proportional to a reward \(R(x)\) associated to \(x\). Whereas traditional probabilistic modeling approaches (e.g., those based on Markov chain Monte Carlo (MCMC)) rely on local exploration in \(\mathcal{X}\) for good performance, the parametric policy learned by GFlowNets allows them to generalize across states and yield superior performance on a number of tasks (Bengio et al., 2021; Malkin et al., 2022; Zhang et al., 2022; Jain et al., 2022; Jain et al., 2022; D
has been employed to much success across a variety of deep reinforcement learning tasks (Osband et al., 2016, 2018, 2019). The classical TS algorithm (Agrawal and Goyal, 2012, Russo et al., 2018) maintains a posterior over the model of the environment and acts optimally according to a sample from this posterior over models. TS has been generalized to RL problems in the form of Posterior Sampling RL (Osband et al., 2013). A variant of TS has been adapted in RL, where the agent maintains a posterior over policies and value functions (Osband et al., 2016, 2018) and acts optimally based on a random sample from this posterior. We consider this variant of TS in this paper.
**Our main contribution in this paper is describing and evaluating an algorithm based on Thompson sampling for improved exploration in GFlowNets**. Building upon prior results in Malkin et al. (2022); Madan et al. (2023) we demonstrate how Thompson sampling with GFlowNets allows for improved exploration and optimization efficiency in GFlowNets. We validate our method on a grid-world and sequence generation task. In our experiments TS-GFN substantially improves both the sample efficiency and the task performance. Our algorithm is computationally efficient and highly parallelizable, only taking \(\sim 15\%\) more computation time than prior approaches.
## 2 Related Work
**Exploration in RL.** There exists a wide literature on uncertainty-based RL exploration methods. Some methods rely on the Thompson sampling heuristic and non-parametric representations of the posterior to promote exploration (Osband et al., 2013, 2016, 2018). Others employ uncertainty to enable exploration based on the upper confidence bound heuristic or information gain (Ciosek et al., 2019, Lee et al., 2021, O'Donoghue et al., 2018, Nikolov et al., 2018). Another set of exploration methods attempts to make agents "intrinsically" motivated to explore. This family of methods includes random network distillation (RND) and Never Give Up (Burda et al., 2018, Badia et al., 2020). Pan et al. (2022) propose to augment GFlowNets with RND-based intrinsic rewards to encourage better exploration.
**MaxEnt RL.** RL has a rich literature on energy-based, or maximum entropy, methods (Ziebart, 2010, Mnih et al., 2016, Haarnoja et al., 2017, Nachum et al., 2017, Schulman et al., 2017, Haarnoja et al., 2018), which are close or equivalent to the GFlowNet framework in certain settings (in particular when the MDP has a tree structure (Bengio et al., 2021)). Also related are methods that maximize the entropy of the state visitation distribution or some proxy of it (Hazan et al., 2019, Islam et al., 2019, Zhang et al., 2021, Eysenbach et al., 2018), which achieve a similar objective to GFlowNets by flattening the state visitation distribution. We hypothesize that even basic exploration methods for GFlowNets (e.g., tempering or \(\epsilon\)-noisy) could be sufficient exploration strategies on some tasks.
## 3 Method
### Preliminaries
We begin by summarizing the preliminaries on GFlowNets, following the conventions of Malkin et al. (2022).
Let \(G=(\mathcal{S},\mathcal{A})\) be a directed acyclic graph. The vertices \(s\in\mathcal{S}\) are called _states_ and the directed edges \((u\to v)\in\mathcal{A}\) are _actions_. If \((u\to v)\) is an edge, we say \(v\) is a _child_ of \(u\) and \(u\) is a _parent_ of \(v\). There is a unique _initial state_\(s_{0}\in\mathcal{S}\) with no parents. States with no children are called _terminal_, and the set of terminal states is denoted by \(\mathcal{X}\).
A _trajectory_ is a sequence of states \(\tau=(s_{m}\to s_{m+1}\to\ldots\to s_{n})\), where each \((s_{i}\to s_{i+1})\) is an action. The trajectory is _complete_ if \(s_{m}=s_{0}\) and \(s_{n}\) is terminal. The set of complete trajectories is denoted by \(\mathcal{T}\).
A _(forward) policy_ is a collection of distributions \(P_{F}(-|s)\) over the children of every nonterminal state \(s\in\mathcal{S}\). A forward policy determines a distribution over \(\mathcal{T}\) by
\[P_{F}(\tau=(s_{0}\to\ldots\to s_{n}))=\prod_{i=0}^{n-1}P_{F}(s_{i+1}|s_{i}). \tag{1}\]
Similarly, a _backward policy_ is a collection of distributions \(P_{B}(-|s)\) over the _parents_ of every noninitial state.
Any distribution over complete trajectories that arises from a forward policy satisfies a Markov property: the marginal choice of action out of a state \(s\) is independent of how \(s\) was reached. Conversely, any Markovian distribution over \(\mathcal{T}\) arises from a forward policy (Bengio et al., 2023).
A forward policy can thus be used to sample terminal states \(x\in\mathcal{X}\) by starting at \(s_{0}\) and iteratively sampling actions from \(P_{F}\), or, equivalently, taking the terminating state of a complete trajectory \(\tau\sim P_{F}(\tau)\). The marginal likelihood of sampling \(x\in\mathcal{X}\) is the sum of likelihoods of all complete trajectories that terminate at \(x\).
Suppose that a nontrivial (not identically 0) nonnegative reward function \(R:\mathcal{X}\to\mathbb{R}_{\geq 0}\) is given. The learning problem solved by GFlowNets is to estimate a policy \(P_{F}\) such that the likelihood of sampling \(x\in\mathcal{X}\) is proportional to \(R(x)\). That is, there should exist a constant \(Z\) such that
\[R(x)=Z\sum_{\tau\in\mathcal{T}:\tau=(s_{0}\to\ldots\to s_{n}=x)}P_{F}(\tau)\quad\forall x\in\mathcal{X}. \tag{2}\]
If (2) is satisfied, then \(Z=\sum_{x\in\mathcal{X}}R(x)\). The sum in (2) may be intractable. Therefore, GFlowNet training algorithms require estimation of auxiliary quantities beyond the parameters of the policy \(P_{F}\). The training objective we primarily consider, _trajectory balance_ (TB), learns an estimate
of the constant \(Z\) and of a _backward policy_, \(P_{B}(s\mid s^{\prime})\), representing the posterior over predecessor states of \(s^{\prime}\) in trajectories that contain \(s^{\prime}\). The TB loss for a trajectory \(\tau\) is:
\[\mathcal{L}_{TB}(\tau;\theta)=\left(\log\frac{Z_{\theta}\prod_{t=0}^{n-1}P_{F}( s_{t+1}|s_{t};\theta)}{R(s_{n})\prod_{t=0}^{n-1}P_{B}(s_{t}|s_{t+1};\theta)}\right)^{2} \tag{3}\]
where \(\theta\) are the parameters of the learned objects \(P_{F}\), \(P_{B}\), and \(Z\). If \(\mathcal{L}_{TB}(\tau;\theta)=0\) for all \(\tau\), then \(P_{F}\) samples objects \(x\in\mathcal{X}\) with probability proportional to \(R(x)\), i.e., (2) is satisfied. Algorithms minimize this loss for trajectories \(\tau\) sampled from some _training policy_\(\pi_{\theta}\), which may be equal to \(P_{F}\) itself (_on-policy training_) but is usually taken to be a more exploratory distribution, as we discuss below.
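For illustration, a minimal sketch of the loss in (3) in PyTorch, assuming the per-step log-probabilities along a trajectory have already been gathered:

```python
import torch

def trajectory_balance_loss(log_z: torch.Tensor,
                            log_pf: torch.Tensor,
                            log_pb: torch.Tensor,
                            log_reward: torch.Tensor) -> torch.Tensor:
    """log_pf, log_pb: (T,) tensors of per-step log P_F(s_{t+1}|s_t) and
    log P_B(s_t|s_{t+1}); log_z: scalar log Z_theta; log_reward: log R(s_n).
    """
    return (log_z + log_pf.sum() - log_reward - log_pb.sum()) ** 2
```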
Notably, any choice of a backwards policy \(P_{B}\) yields a unique corresponding \(P_{F}\) and \(Z\) which makes the expression on the right side of (3) equal to zero for all \(\tau\in\mathcal{T}\) (see Malkin et al. (2023) for interpretations of this result in terms of variational methods).
### GFlowNet exploration strategies
Prior work on GFlowNets uses training policies based on dithering or intrinsic motivation, including:
**On-policy**: The training policy is the current \(P_{F}\): \(\pi_{\theta}(s^{\prime}|s)=P_{F}(s^{\prime}|s;\theta)\).
**Tempering**: Let \(\alpha_{\theta}(s^{\prime}|s):\mathcal{S}\times\mathcal{S}\rightarrow\mathbb{R}\) be the logits of \(P_{F}\), then the training policy is a Boltzmann distribution with temperature \(T\in\mathbb{R}\) as \(\pi_{\theta}(s^{\prime}|s)\propto\exp{(\alpha_{\theta}(s^{\prime}|s)/T)}\).
**\(\epsilon\)-noisy**: For \(\epsilon\in[0,1]\), the training policy follows \(P_{F}\) with probability \(1-\epsilon\) and takes a random action with probability \(\epsilon\) as \(\pi_{\theta}(s^{\prime}|s)=(1-\epsilon)P_{F}(s^{\prime}|s;\theta)+\frac{ \epsilon}{\#\{s^{\prime\prime}:(s\to s^{\prime\prime})\in\mathcal{A}\}}\).
**GAFN (Pan et al., 2022)**: The training policy is the current \(P_{F}\), but \(P_{F}\) is learned by incorporating a pseudocount-based intrinsic reward for each state \(s\in\tau\) into the objective \(\mathcal{L}(\tau;P_{F},P_{B})\) so that \(\pi_{\theta}(s^{\prime}|s)=P_{F}(s^{\prime}|s;\theta)\).
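For concreteness, the two dithering strategies above can be sketched as follows; the logits are assumed to be already restricted to the valid actions out of the current state.

```python
import torch

def tempered_policy(logits: torch.Tensor, T: float) -> torch.Tensor:
    # Boltzmann distribution over the children with temperature T.
    return torch.softmax(logits / T, dim=-1)

def epsilon_noisy_policy(logits: torch.Tensor, eps: float) -> torch.Tensor:
    # Follow P_F with probability 1 - eps, a uniform random action otherwise.
    p_f = torch.softmax(logits, dim=-1)
    uniform = torch.full_like(p_f, 1.0 / p_f.shape[-1])
    return (1.0 - eps) * p_f + eps * uniform
```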
### Thompson sampling for GFlowNets
Learning GFlowNets over large spaces \(\mathcal{X}\) requires judicious exploration. It makes little sense to explore in regions the GFlowNet has already learned well - we would much rather prioritize exploring regions of the state space on which the GFlowNet has not accurately learned the reward distribution. Prior methods do not explicitly prioritize this. Both dithering approaches (tempering and \(\epsilon\)-noisy) and GAFNs encourage a form of uniform exploration, be it pure random noise as in dithering or a pseudocount in GAFNs. While it is impossible to _a priori_ determine which regions a GFlowNet has learned poorly, we might expect that it performs poorly in the regions on which it is uncertain. An agent with an estimate of its own uncertainty could bias its action selection towards regions in which it is more uncertain.
With this intuition in mind, we develop an algorithm inspired by Thompson sampling and its applications in RL and bandits (Osband et al., 2016a, 2018). In particular, following Osband et al. (2016a), we maintain an approximate posterior over forward policies \(P_{F}\) by viewing the last layer of our policy network itself as an ensemble. To maintain an ensemble of size \(K\in\mathbb{Z}^{+}\), we extend the last layer of the policy network to have \(K\cdot\ell\) heads, where \(\ell\) is the maximum number of valid actions according to \(G\) for any state \(s\in\mathcal{S}\). To promote computational efficiency, all members of our ensemble share weights in all layers prior to the final one.
To better our method's uncertainty estimates, we employ the statistical bootstrap to determine which trajectories \(\tau\) may be used to train ensemble member \(P_{F,k}\) and also make use of randomized prior networks (Osband et al., 2018). Prior networks are a downsized version of our main policy network whose weights are fixed at initialization and whose output is summed with the main network in order to produce the actual policy logits. Prior networks have been shown to significantly improve uncertainty estimates and agent performance in reinforcement learning tasks.
Crucially, while we parameterize an ensemble of \(K\) forward policies, we do not maintain an ensemble of backwards policies, instead sharing one \(P_{B}\) across all ensemble members \(P_{F,k}\). Recall from Section 3.1 that each \(P_{B}\) uniquely determines a \(P_{F}\) for which \(\mathcal{L}_{TB}(\tau)=0\) for all \(\tau\in\mathcal{T}\). Specifying a different \(P_{B,k}\) for each \(P_{F,k}\) would result in setting a different learning target for each \(P_{F,k}\) in the ensemble. By sharing a single \(P_{B}\) across all ensemble members we ensure that all \(P_{F,k}\) converge to the same optimal \(P_{F}^{*}\). We show in Section 4.1 that sharing \(P_{B}\) indeed yields significantly better performance than maintaining separate \(P_{B,k}\).
With our policy network parameterization in hand, the rest of our algorithm is simple. First we sample an ensemble member \(P_{F,k}\) with \(k\sim\text{Uniform}\{1,\dots,K\}\) and then sample an entire trajectory from it \(\tau\sim P_{F,k}\). This trajectory is then used to train each ensemble member where we include the trajectory in the training batch for ensemble member \(P_{F,k}\) based on the statistical bootstrap with bootstrap probability \(p\) (\(p\) is a hyperparameter fixed at the beginning of training). The full algorithm is presented in Appendix A.
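A condensed sketch of one TS-GFN training step is shown below, reusing the `trajectory_balance_loss` helper sketched after Eq. (3). The `env.rollout` and `policy_net.tb_terms` interfaces are hypothetical placeholders for trajectory sampling and for gathering the TB terms of head \(j\); see Appendix A for the full algorithm.

```python
import torch

K = 4          # ensemble size (hypothetical value)
p_boot = 0.5   # bootstrap inclusion probability (hypothetical value)

def ts_gfn_step(policy_net, env, optimizer):
    # 1) Sample an ensemble head uniformly and roll out a full trajectory.
    k = torch.randint(K, (1,)).item()
    traj = env.rollout(policy_net, head=k)  # assumed environment helper

    # 2) Train each head on the trajectory with probability p_boot
    #    (statistical bootstrap); all heads share one P_B, hence one target.
    loss = torch.zeros(())
    for j in range(K):
        if torch.rand(1).item() < p_boot:
            loss = loss + trajectory_balance_loss(*policy_net.tb_terms(traj, head=j))
    if loss.requires_grad:  # at least one head was bootstrapped in
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```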
## 4 Experiments
### Grid
We study a modified version of the grid environment from (Bengio et al., 2021). The set of interior states is a 2
-dimensional grid of size \(H\times H\). The initial state is \((0,0)\) and each action is a step that increments one of the 2 coordinates by 1 without leaving the grid. A special termination action is also allowed from each state.
Prior versions of this grid environment provide high reward whenever the agent exits at a corner of the grid. This sort of reward structure is very easy for an agent to generalize to and is a trivial exploration task when the reward is not highly sparse (such reward structures are _not_ the focus of this paper). To compensate for this, we adopt a reward function based on a summation of truncated Fourier series, yielding a reward structure which is highly multimodal and more difficult to generalize to (see Figure 1). The reward function is given by
\[R(x)=\sum_{k=1}^{n}\Big[\cos(2a_{k,1}\pi g(x_{1}))+\sin(2a_{k,2}\pi g(x_{1}))+\cos(2b_{k,1}\pi g(x_{2}))+\sin(2b_{k,2}\pi g(x_{2}))\Big]\]
where \(a_{k,1},a_{k,2},b_{k,1},b_{k,2}\in\mathbb{R}\) are preset scaling constants \(\forall k\), \(n\) is a hyperparameter determining the number of elements in the summation, \(g:\mathbb{Z}_{\geq 0}\rightarrow[c,d],g(x)=\frac{x(d-c)}{H}+c\), and \(c,d\in\mathbb{R}\) are the first and last integer coordinates in the grid.
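A direct numpy transcription of this reward is given below. The constants \(a\), \(b\), \(n\), \(c\), \(d\) are placeholders (the actual values are fixed in Appendix B), and any shift needed to keep the reward nonnegative is omitted.

```python
import numpy as np

H, n, c, d = 64, 3, -0.5, 0.5              # grid size and placeholder constants
rng = np.random.default_rng(0)
a = rng.uniform(size=(n, 2))               # placeholder a_{k,1}, a_{k,2}
b = rng.uniform(size=(n, 2))               # placeholder b_{k,1}, b_{k,2}

def g(x):
    return x * (d - c) / H + c             # map {0, ..., H-1} into [c, d]

def reward(x1, x2):
    r = 0.0
    for k in range(n):
        r += (np.cos(2 * a[k, 0] * np.pi * g(x1)) + np.sin(2 * a[k, 1] * np.pi * g(x1))
              + np.cos(2 * b[k, 0] * np.pi * g(x2)) + np.sin(2 * b[k, 1] * np.pi * g(x2)))
    return r
```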
We investigate a \(64\times 64\) grid with this truncated Fourier series reward (see Appendix B for full reward setup details). We train the GFlowNets to sample from this target reward function and plot the evolution of the \(L_{1}\) distance between the target distribution and the empirical distribution of the last \(2\cdot 10^{5}\) states seen in training.1
Footnote 1: This evaluation is possible in this environment because the exact target distribution can be tractably computed.
The results (mean and standard error over five random seeds) are shown in Figure 2 (left side). Models trained with trajectories sampled by TS-GFN converge to the true distribution faster, and with very little variance over random seeds, compared to all other exploration strategies.
We also investigate the effect of sharing the backwards policy \(P_{B}\) across ensemble members in Figure 2 (right side). Maintaining a separate \(P_{B,k}\) for each \(P_{F,k}\) performs significantly worse than sharing a single \(P_{B}\) over all ensemble members: it results in the GFlowNet learning much more slowly and converging to a worse empirical \(L_{1}\) distance.
### Bit sequences
We consider the synthetic sequence generation setting from Malkin et al. (2022), where the goal is to generate sequences of bits of fixed length \(n=120\), resulting in a search space \(\mathcal{X}\) of size \(2^{120}\). The reward is specified by a set of modes \(M\subset\mathcal{X}=\{0,1\}^{n}\) that is unknown to the learning agent. The reward of a generated sequence \(x\) is defined in terms of Hamming distance \(d\) from the modes: \(R(x)=\exp\big{(}1-n^{-1}\min_{y\in M}d(x,y)\big{)}\). The vocabulary for the GFlowNets is \(\{0,1\}\). Most experiment settings are taken from Malkin et al. (2022) and Madan et al. (2023).
Models are evaluated by tracking the number of modes according to the procedure in Malkin et al. (2022), wherein we count a mode \(m\) as "discovered" if we sample some \(x\) such that \(d(x,m)\leq\delta\). The results are presented in Figure 3
Figure 1: Reward on the grid task. **Left:** true distribution (normalized reward function). **Right:** empirical distribution over last \(2\cdot 10^{5}\) states sampled from the GFlowNet at end of training.
Figure 3: Number of modes found as a function of training time for bit sequence task.
Figure 2: \(L_{1}\) distance between empirical and target distributions over the course of training on the hypergrid environment (mean is plotted with standard error bars over 5 random seeds). **Left:** Thompson sampling learns the distribution better and faster than all other methods. **Right:** sharing a backwards policy \(P_{B}\) performs significantly better than maintaining a separate backward policy \(P_{B,k}\) for each forward policy \(P_{F,k}\) in the ensemble.
(mean and standard error are plotted over five random seeds). We find that models trained with TS-GFN find 60% more modes than on-policy, tempering, and \(\epsilon\)-noisy. TS-GFN soundly outperforms GAFN, whose pseudocount-based exploration incentive is misaligned with the task's reward structure and seems to perform exploration in unhelpful regions of the (very large) search space.
## 5 Conclusion
We have shown in this paper that using a Thompson sampling based exploration strategy for GFlowNets is a simple, computationally efficient, and performant alternative to prior GFlowNet exploration strategies. We demonstrated how to adapt uncertainty estimation methods used for Thompson sampling in deep reinforcement learning to the GFlowNet domain and validated their efficacy on a grid and a long-sequence generation task. Finally, we believe that future work should involve trying TS-GFN on a wider array of experimental settings and building a theoretical framework for investigating the sample complexity of GFlowNets.
## Acknowledgments
The authors acknowledge financial support from CIFAR, Genentech, IBM, Samsung, Microsoft, and Google.
|
2309.08650 | Adversarial Attacks on Tables with Entity Swap | The capabilities of large language models (LLMs) have been successfully
applied in the context of table representation learning. The recently proposed
tabular language models have reported state-of-the-art results across various
tasks for table interpretation. However, a closer look into the datasets
commonly used for evaluation reveals an entity leakage from the train set into
the test set. Motivated by this observation, we explore adversarial attacks
that represent a more realistic inference setup. Adversarial attacks on text
have been shown to greatly affect the performance of LLMs, but currently, there
are no attacks targeting tabular language models. In this paper, we propose an
evasive entity-swap attack for the column type annotation (CTA) task. Our CTA
attack is the first black-box attack on tables, where we employ a
similarity-based sampling strategy to generate adversarial examples. The
experimental results show that the proposed attack generates up to a 70% drop
in performance. | Aneta Koleva, Martin Ringsquandl, Volker Tresp | 2023-09-15T15:03:33Z | http://arxiv.org/abs/2309.08650v1 | # Adversarial Attacks on Tables with Entity Swap
###### Abstract
The capabilities of large language models (LLMs) have been successfully applied in the context of table representation learning. The recently proposed tabular language models (TaLMs) have reported state-of-the-art results across various tasks for table interpretation. However, a closer look into the datasets commonly used for evaluation reveals an entity leakage from the train set into the test set. Motivated by this observation, we explore adversarial attacks that represent a more realistic inference setup. Adversarial attacks on text have been shown to greatly affect the performance of LLMs, but currently, there are no attacks targeting TaLMs. In this paper, we propose an evasive entity-swap attack for the column type annotation (CTA) task. Our CTA attack is the first black-box attack on tables, where we employ a similarity-based sampling strategy to generate adversarial examples. The experimental results show that the proposed attack generates up to a 70% drop in performance.
Column Type Annotation, Adversarial Attack, Table Representation Learning
A survey by Zhang et al. [8] presents a comprehensive overview of attacks against text, highlighting the challenges that arise when attacking discrete data such as text, compared to continuous data such as images. BERT-Attack [6] proposes an adversarial attack against the BERT model [9] using the model itself to generate the adversarial samples. A recent gradient-based text attack [10] presents a white-box attack which uses a parameterized adversarial distribution to sample adversarial examples. However, despite the popularity of adversarial attacks on text, the field of tabular data remains unexplored for potential vulnerabilities to such attacks.
The few works that have been proposed so far [11, 12, 13] focus on white-box attacks and target traditional machine learning models trained on tabular data. The main goal of these attacks is to preserve the distributional consistency of the data features when generating adversarial examples. The datasets used for evaluation in these works usually contain many numerical values, such as financial data or healthcare analytics data.
The goal of our work is to define table attacks against TaLMs used for TI tasks. To the best of our knowledge, we present the first work on adversarial attacks targeting these models. Our research differs from prior work with respect to (1) the model observed, (2) the technique employed for generating adversarial samples, and (3) the evaluation task.
## 3 CTA Adversarial Attack
We define a table as a tuple \(T=(E,H)\), where \(E=\{e_{1,1},e_{1,2},\ldots,e_{i,j},\ldots,e_{n,m}\}\) is the set of table body entities for \(n\) rows and \(m\) columns. The table header \(H=\{h_{1},h_{2},\ldots,h_{m}\}\) is the set of corresponding \(m\) column header cells. We use \(T_{[i,:]}\) to refer to the \(i\)-th row, e.g., \(H=T_{[0,:]}\), and \(T_{[:,j]}=\{h_{j},e_{1,j},\ldots,e_{n,j}\}\) to refer to the \(j\)-th column of \(T\).
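As a minimal illustration of this notation (the class and method names below are ours, not part of the paper), a table could be represented as:

```python
from dataclasses import dataclass

@dataclass
class Table:
    """T = (E, H): a header H of m cells plus an n x m body of entities E."""
    header: list[str]            # H = [h_1, ..., h_m]
    entities: list[list[str]]    # row-major: entities[i-1][j-1] = e_{i,j}

    def row(self, i: int) -> list[str]:
        """T_[i,:]; by convention T_[0,:] = H."""
        return self.header if i == 0 else self.entities[i - 1]

    def column(self, j: int) -> list[str]:
        """T_[:,j] = [h_j, e_{1,j}, ..., e_{n,j}] (1-indexed column)."""
        return [self.header[j - 1]] + [r[j - 1] for r in self.entities]
```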
#### CTA Model
Let \(\mathcal{T}\) be the input space of tables and let \(J\) be the space of all possible column indices, i.e., \(J\subseteq\mathbb{N}\). Let \(\mathcal{C}\) be the output space, denoting the set of semantic types. A CTA model is a multilabel classification function \(h:\mathcal{T}\times J\rightarrow P(\mathcal{C})\), i.e., given a table \(T\in\mathcal{T}\) and a column index \(j\in J\), the CTA task is to assign a subset of classes of \(\mathcal{C}\), i.e., an element of the power set \(P(\mathcal{C})\), to the corresponding column \(T_{[:,j]}\).
#### CTA Attack
Given a classification model \(h\), the goal of a CTA attack is to transform a (correctly classified) test input \((T,j)\in\mathcal{T}\times J\) into an (untargeted) adversarial sample \((T^{\prime},j)\) such that \(h(T,j)\bigcap h(T^{\prime},j)=\emptyset\). In addition to fooling the classification model, the transformation from \(T\) to \(T^{\prime}\) should also be imperceptible to a human observer. In the CTA setting, we define the imperceptibility condition to be met if all entities in column \(T^{\prime}_{[:,j]}\) are of the same class as those in the unmodified column. Formally, \(\forall e^{\prime}\in T^{\prime}_{[:,j]}\forall e\in T_{[:,j]}:c(e^{\prime})=c(e)\), where \(c\in\mathcal{C}\) represents the most specific class assigned to the column \(T_{[:,j]}\).
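Both conditions translate directly into code; in this sketch, `h` and `c` are hypothetical stand-ins for the CTA model and the most-specific-class function:

```python
def attack_succeeds(h, T, T_adv, j) -> bool:
    """Untargeted success: h(T, j) and h(T', j) share no predicted class."""
    return set(h(T, j)).isdisjoint(h(T_adv, j))

def imperceptible(entities, entities_adv, c) -> bool:
    """Imperceptibility: every original and every adversarial entity in the
    attacked column carries the same most specific class c(e)."""
    return len({c(e) for e in entities} | {c(e) for e in entities_adv}) == 1
```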
### Entity Swap Attack
In principle, a CTA attack can apply transformations to the full table \(T\); however, most importantly, it should focus on \(T_{[:,j]}\). Our attack, called **entity-swap**, follows a two-step approach inspired by adversarial attacks on LLMs [6, 14]. First, it picks a set of key entities \(\{e_{i}\in T_{[:,j]}\}\). The number of key entities can be controlled as a percentage \(p\) of the entities in the original column. In a second step, every key entity \(e_{i}\) is swapped with an adversarial entity \(e^{\prime}_{i}\) that most likely changes the predicted class from the ground truth. The proposed attack is a black-box attack, meaning we only have access to the prediction scores of the classifier.
### Key Entities
Identifying the key entities to swap can increase the success rate of the attack. In the case of the CTA task, the most informative entities are those whose replacement causes the model to misclassify the column. To find those entities, we calculate an importance score for every entity in the attacked column.
The output from the classification model \(h\) for a column \(T_{[:,j]}\) is the logit vector \(\mathbf{o_{h}}(T,j)\in\mathbb{R}^{k}\), where \(k\) is the number of ground-truth classes assigned to \((T,j)\).
Figure 1: Entity-level adversarial example for table attack
We calculate the importance score for entity \(e_{i}\in T_{[:,j]}\) as the difference between the logit output of the model for the ground-truth classes when the entity is _in_ the column, denoted as \(\mathbf{o_{h}}\), and the logit output of the model when the entity is _replaced_ with the [MASK] token, denoted as \(\mathbf{o_{h}}\backslash\mathbf{e_{i}}\). Since the CTA task is evaluated under the multi-label setting, we always take the maximum importance score for an entity.
\[score(e_{i})=max(\mathbf{o_{h}}-\mathbf{o_{h}}\backslash\mathbf{e_{i}}) \tag{1}\]
Figure 2 shows an example of how the importance score is calculated. We calculate \(\mathbf{o_{h}}\) as the logit output of the model without any perturbation, while \(\mathbf{o_{h}}\backslash\mathbf{e_{1}}\) represents the logit output of the model when the entity _Rafael Nadal_ is masked. After calculating the importance score for every entity in the column, we select the top \(p\) percent of entities (\(p\in\{20,40,60,80,100\}\)) based on their importance scores and substitute them with adversarial entities. By sorting the entities according to their importance scores, we ensure that the attack consistently targets the key entities within the targeted column.
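A sketch of this scoring and selection procedure follows; `ground_truth_logits` is a hypothetical callable standing in for the fine-tuned classifier restricted to the \(k\) ground-truth classes:

```python
import numpy as np

def importance_scores(column, ground_truth_logits, mask_token="[MASK]"):
    """Eq. (1): score(e_i) = max(o_h - o_h_without_e_i), taking the maximum
    of the logit differences over the k ground-truth classes."""
    o_full = np.asarray(ground_truth_logits(column))          # o_h
    scores = []
    for i in range(len(column)):
        masked = list(column)
        masked[i] = mask_token                                # mask entity e_i
        o_masked = np.asarray(ground_truth_logits(masked))    # o_h without e_i
        scores.append(float(np.max(o_full - o_masked)))
    return scores

def top_p_indices(scores, p):
    """Indices of the top p percent of entities by importance score."""
    k = max(1, round(len(scores) * p / 100))
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
```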
### Adversarial Entities
After identifying the key entities, the next step involves sampling adversarial entities for swapping. In order to adhere to the imperceptibility assumption, we constrain the search space to include only entities belonging to the same class as the attacked column. Subsequently, we use a similarity-based strategy to sample adversarial entities.
Let \(e_{i}\in T_{[:,j]}\) be the key entity from the attacked column, and let \(c\in\mathcal{C}\) be the most specific class of this column. We use an embedding model to generate a contextualized representation for both the original entity, \(\mathbf{e_{i}}\), and all entities of the same class \(A_{c}=\{\mathbf{e^{\prime}_{1}},\mathbf{e^{\prime}_{2}},\dots,\mathbf{e^{ \prime}_{k}}\}\), such that \(c(e_{i})=c(e^{\prime}_{k})\) where \(e^{\prime}_{k}\in A_{c}\). Next, we calculate the cosine similarity between the original entity and each entity from the set \(A_{c}\). As an adversarial example, we take the most dissimilar entity from the original entity, such that \(e^{\prime}_{i}=\textsc{argmin}_{e^{\prime}_{k}}\textsc{CosineSimilarity}( \mathbf{e_{i}},\mathbf{e^{\prime}_{k}})\). We then swap the original entity \(e_{i}\) with the adversarial entity \(e^{\prime}_{i}\).
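Assuming precomputed contextualized embeddings for the key entity and for the same-class candidate set \(A_{c}\), this sampling step reduces to an argmin over cosine similarities:

```python
import numpy as np

def most_dissimilar_index(e_vec, candidate_vecs):
    """argmin_k CosineSimilarity(e_i, e'_k) over the same-class candidates."""
    e = e_vec / np.linalg.norm(e_vec)
    C = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    return int(np.argmin(C @ e))

# Usage: swap e_i for the candidate entity least similar to it.
# e_vec: embedding of e_i; cand_vecs: stacked embeddings of A_c, shape (k, d).
# adversarial_entity = candidates[most_dissimilar_index(e_vec, cand_vecs)]
```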
As we describe in the introduction, there is a substantial overlap of entities between the train and test set. Therefore, we propose two different sampling sets for adversarial entities. The first is the set of entities per class from the WikiTables test dataset [3]; we refer to this set as the _test set_. The second set contains only novel entities; i.e., entities that also appear in the training set are removed from the test set. We refer to this set as the _filtered set_.
#### Metadata Attack

In addition to the proposed attack method for column values, we also introduce an attack specifically targeting column headers, considering that they often indicate the class of a column. In this case, however, we use an independent embedding model to identify similar entities instead of swapping with column names from the same class. For the generation of adversarial samples in the column headers, we first generate embeddings for the original column names and then substitute the column names with their synonyms. The library _TextAttack_ [14] was used to generate the embeddings and, based on the embeddings, to retrieve the synonyms for the column names.
## 4 Evaluation
#### Model

We evaluate the performance of the CTA attack on the TURL model [3], which has been fine-tuned for the CTA task and uses only entity mentions. We use the WikiTables dataset for evaluation. We follow their evaluation procedure and report the achieved F1 score, precision, and recall.
To evaluate the influence of the proposed strategy for sampling adversarial samples, we compare it to a random sampling of adversarial entities. Similarly, to evaluate the influence of the importance scores, we compare with random sampling when choosing which entities to swap.
### Results
Table 2 shows the results of the CTA attack when swapping entities by their importance scores and sampling adversarial entities using the similarity-based strategy from the filtered set.
Figure 2: Calculation of importance scores.
We notice that as we increase the percentage of swapped entities, the performance of the model drops, even though the perturbed entities are of the same semantic type as the original entities. Another observation is that the drop in the F1 score is attributed to the sharp decline of the recall.
#### Effect of the importance score

Figure 3 shows the benefit of using the importance scores. We notice that the drop in F1 score is around \(3\%\) larger when using the importance scores. This is consistent regardless of whether we substitute \(20\%\) or \(80\%\) of the entities, which suggests that the importance scores consistently identify entities that have a greater influence on the model's performance.
#### Effect of the sampling strategy

Figure 4 shows the difference in F1 score drop when sampling adversarial entities from the test set versus the filtered set. The original F1 score is represented by the red line. Additionally, we illustrate here the advantages of using the similarity-based strategy over random sampling of adversarial examples. In both cases, whether sampling adversarial entities from the test or the filtered set, the similarity-based sampling strategy induces a sharper drop of the F1 score. This suggests that this approach is successful in selecting entities that are more likely to cause misclassifications or confusion for the classification model.
#### Effect of perturbing the table metadata

To evaluate the relevance of the column header for the CTA task, we also propose an adversarial attack specific to the TURL model [3] that uses only the table metadata. Table 3 shows the effect of perturbing the table metadata. We observe similar results here: as we increase the percentage of perturbed column names, all the evaluation metrics decline. This indicates that the model's reliance on specific column names affects its ability to accurately classify and predict the correct class.
## 5 Conclusion
In this paper, we introduce the formalization of an adversarial attack targeting TaLMs. Additionally, we identify and highlight an issue concerning the evaluation of the CTA task. The evaluation showed that TaLMs are susceptible to adversarial attacks. Even subtle modifications to the entities, guided by similarity, can lead to significant changes in the model's predictions and subsequently affect the F1 score. In future work, we will extend our evaluation with more sophisticated attacks, also targeting other models used for table interpretation tasks.
| % perturbed | F1 | P | R |
| --- | --- | --- | --- |
| 0 (original) | 88.86 | 90.54 | 87.23 |
| 20 | 83.4 (6%) | 90.3 (0.2%) | 77.8 (11%) |
| 40 | 72.0 (19%) | 87.9 (3%) | 60.9 (30%) |
| 60 | 55.3 (38%) | 80.4 (11%) | 42.1 (52%) |
| 80 | 39.9 (55%) | 67.7 (25%) | 28.4 (67%) |
| 100 | 26.5 (70%) | 50.8 (44%) | 17.9 (80%) |

Table 2: Adversarial attack on the entities. The adversarial entities are sampled by semantic similarity to the original entity; values in parentheses are relative drops.
Figure 4: Sampling adversarial entities from the test set vs the filtered set, at random and using the similarity strategy.
| % perturbed | F1 | P | R |
| --- | --- | --- | --- |
| 0 (original) | 90.24 | 89.91 | 90.58 |
| 20 | 78.4 (13%) | 81.1 (10%) | 76.0 (16%) |
| 40 | 77.1 (15%) | 80.7 (10%) | 73.8 (19%) |
| 60 | 75.2 (17%) | 79.1 (12%) | 72.2 (20%) |
| 80 | 65.1 (28%) | 71.4 (22%) | 60.4 (33%) |
| 100 | 51.2 (43%) | 60.4 (33%) | 44.4 (51%) |

Table 3: Attack on the column names, where the adversarial samples are their synonyms; values in parentheses are relative drops.
Figure 3: Adversarial samples from the test set, replacing entities at random vs using the importance scores. |
2309.07563 | Keep your Identity Small: Privacy-preserving Client-side Fingerprinting | Device fingerprinting is a widely used technique that allows a third party to
identify a particular device. Applications of device fingerprinting include
authentication, attacker identification, or software license binding. Device
fingerprinting is also used on the web as a method for identifying users.
Unfortunately, one of its most widespread uses is to identify users visiting
different websites and thus build their browsing history. This constitutes a
specific type of web tracking that poses a threat to users' privacy. While many
anti-tracking solutions have been proposed, all of them block or tamper with
device fingerprinting techniques rather than just blocking their web tracking
application. Therefore, users may be limited in their experience while using a
website. In this paper, we propose Privacy-preserving Client-side
Fingerprinting (PCF), a new method that allows device fingerprinting on the
web, while blocks the possibility of performing web tracking. To this end, PCF
is built upon fingerprinting transparency: any website ought to declare its
fingerprinting scripts while users will compute them in a privacy-preserving
manner, limiting the resultant fingerprints for each different domain and,
therefore, making web tracking not feasible. | Alberto Fernandez-de-Retana, Igor Santos-Grueiro | 2023-09-14T09:45:29Z | http://arxiv.org/abs/2309.07563v2 | # Keep your Identity Small: Privacy-preserving Client-side Fingerprinting
###### Abstract
Device fingerprinting is a widely used technique that allows a third party to identify a particular device. Applications of device fingerprinting include authentication, attacker identification, or software license binding.
Device fingerprinting is also used on the web as a method for identifying users. Unfortunately, one of its most widespread uses is to identify users visiting different websites and thus build their browsing history. This constitutes a specific type of web tracking that poses a threat to users' privacy. While many anti-tracking solutions have been proposed, all of them block or tamper with device fingerprinting techniques rather than just blocking their web tracking application. Therefore, users may be limited in their experience while using a website.
In this paper, we propose _Privacy-preserving Client-side Fingerprinting_ (PCF), a new method that allows device fingerprinting on the web while blocking the possibility of performing web tracking. To this end, PCF is built upon fingerprinting transparency: any website ought to declare its fingerprinting scripts, while users will compute them in a privacy-preserving manner, limiting the resultant fingerprints for each different domain and, therefore, making web tracking infeasible.
## 1 Introduction
Device fingerprinting is a common practice on the Internet to uniquely identify a user [47]. Original applications of device fingerprinting include different tasks such as network identification [16, 25], device authentication [17, 18, 27, 38, 44], or attacker detection [24, 33].
While these applications are still used, device fingerprinting is associated with web tracking, since it allows third-party agents to uniquely identify browsers and compile users' browsing history. This application is used by many companies in conjunction with cookies to track devices for different purposes such as targeted advertising, fraud detection, and content personalization, among others [8, 9, 20, 23, 47].
Several anti-fingerprinting solutions have been proposed: blacklisting browser extensions [4, 5] and fingerprint _spoofing techniques_ [53, 43, 44]. Although these techniques effectively prevent web tracking, they block any fingerprinting method regardless of its nature. Web tracking poses a serious threat to users' privacy and anonymity, but device fingerprinting techniques can also be used to enhance user experience, security, or content. Hence, blocking them without further consideration may result in losing the actual functionality of the visited website.
While existing anti-fingerprinting solutions block or limit device fingerprinting techniques with little or no consideration of the possible impact on users' experience on the web, Torres et al. [53] tried to tackle this problem with _web identities_, generating a fingerprint for each website. However, their approach limited the possible fingerprinting methods and therefore also seriously affected its widespread application.
In this paper, we aim to fill the gap between preserving users' privacy and maintaining web functionality and consistency. We present _Privacy-preserving Client-side Fingerprinting_ (PCF), a device fingerprinting protocol that allows device fingerprinting behaviors from hosts but preserves the users' privacy, removing the possibility of web tracking when applied.
This paper presents a key insight by proposing a standardized approach that permits the legitimate use of device fingerprinting for purposes such as two-factor authentication (2FA) and bot detection, while restricting tracking solely to a per-domain basis. In our approach, websites will declare the scripts containing fingerprinting methods. The client receives the website as usual, and the identified fingerprinting scripts are executed in isolation from the rest of the web page. By utilizing browser web APIs, such as the screen resolution, these fingerprinting scripts are granted access to the actual device features. To mitigate the potential tracking risks associated with the utilization of real values, these scripts are also isolated in terms of communication, with only a few exceptional cases allowed for data exchange. Section 3 provides an explanation of these exceptional cases, which enable per-domain tracking, bot detection, and two-factor authentication (2FA).
### Contributions
In summary, the main contributions of this paper are:
* We propose PCF, the first privacy-preserving client-side fingerprinting protocol.
* We demonstrate that our protocol preserves client privacy, while allowing legitimate device fingerprinting uses (e.g., bot detection, 2FA, per-domain tracking).
* We explain how PCF should be implemented by browser vendors to allow the legitimate use of device fingerprinting by websites.
* We provide a detailed description of how websites can effectively employ this protocol for per-domain tracking, bot detection, and two-factor authentication (2FA).
* We develop a standard that can be extended to accommodate future legitimate use cases.
## 2 Background
### Fingerprinting Actors
Online tracking and device fingerprinting form a very complex ecosystem with several involved parties that play different roles. We summarize below the most relevant party roles that apply to PCF.
* **Client.** The client is the agent that visits a host and is a potential target to be fingerprinted (willing or unwillingly). In general, we can consider the client as the set of possible fingerprints that a fingerprinting script provider may use. This includes the browser, and every feature accessible from a client-side script (e.g., browsing data, list of installed fonts, and so on).
* **Fingerprinting Script Provider.** This party is the one that instructs the final client to execute a fingerprinting script. This party can be the host that the particular client is visiting (and, therefore, she may be aware of its existence), or the script can be loaded as part of a third-party site as an iframe, an external import, and so on.
* **Fingerprint User.** Once the fingerprint is generated in the client by the provided script, the resultant ID is sent to a certain party. That party can be a script provider, storing the client's browsing history, or another third party. In addition, since stateless fingerprints are not stored in the client, parties that receive the client's fingerprint may share it with other parties, increasing the impact of potential web tracking on that particular client. Besides the ID itself, this actor may also receive additional information about the client, such as the screen resolution, the installed fonts, or the language.
* **Visited Host.** This party is the one that the client willingly decides to visit. Currently, a website includes much third-party content (imported or directly rendered) that may or may not be a potential _fingerprinting script provider_ or _fingerprint user_, and the visited host may also act in one or both of these roles.
### Fingerprinting Techniques
The objective of fingerprinting methods is to uniquely identify a target. This identification can be used for a variety of applications, including device authentication, software license binding [24, 48], wireless network identification [25, 16], or attacker identification [24, 33].
In the particular case of fingerprinting in the web scenario, a website owner (or a third party linked by this website) computes a unique fingerprint for each user, without any storage on the client side. This is the reason why these approaches are called stateless and are hard to block -- when they are used for web tracking. Sanchez, Santos, and Balzarotti [48] classified device fingerprinting techniques into two main groups depending on the features used:
* **Attribute-based Fingerprinting:** These fingerprinting techniques use attributes accessible through the browser e.g., browser attributes such as the list of installed fonts, the UserAgent, or the screen resolution. Their main advantage is that their computing overhead is small, and they are easy to obtain. However, the features used in these fingerprinting techniques might be changed by users during an update, and therefore, they are considered more ephemeral (for a comprehensive study of their ephemeral nature, please refer to [56]). Many attribute-based methods exist for fingerprinting such as fonts [22], or the combination of browser attributes [36].
* **Hardware-based Fingerprinting:** In order to avoid the problems of attribute-based fingerprinting, hardware-level features have been used to create a more precise form of fingerprinting. These methods employ differences in the hardware that are subtly detectable by calling certain APIs that use the underlying hardware in order to compute differences amid devices. _Canvas Fingerprint_ and _WebGL_ [42] compute the subtle differences in the way text is rendered by the HTML5 Canvas or WebGL. Another notable web hardware-based fingerprinting technique, _CryptoFP_ [48], derives the quartz clock manufacturing differences by exploiting the usage of fast Cryptography API functions.
### Application to Web Tracking
The relation between web tracking and device fingerprinting is long-standing. Web tracking is a very common technique on the Internet to retrieve user browsing data, initially introduced for web advertisement and analytics [39]. This practice has changed its original goal over the years, becoming a widely popular method for a variety of different purposes.
Since web tracking allows third parties to discover users' browsing history, it can be used for improving users' experience and browsing. However, because it involves gathering users' data, web tracking can be considered "invasive" -- at the very least [50].
Many techniques exist that allow web tracking. The first one introduced was _cookies_ [52], which, even though tracking was not their primary goal when they were designed, are still considered a popular tracking technique (including users' cookie sharing and other privacy problems [1, 45, 31]). Evercookies to bypass cookie cleaning, cookie syncing to allow trackers to share users' fingerprints, or ETags inserted within images to check the user identity [14] are some examples of early web tracking evolution. Device fingerprinting techniques create a unique id for the user without storing it in the machine (using, e.g., installed components such as fonts [22], the computed difference in rendering text by the HTML5 Canvas API [42], the combination of browser attributes [36], or timing differences when the CPU clock is stressed [48]).
In this way, web tracking techniques have been classified according to their need for storage [49, 50]. Stateful techniques (such as cookies) require storing the ids within the client machine, whereas stateless techniques (such as Canvas or CryptoFP fingerprinting) do not, because the ids are computed each time the user visits a website. Stateless tracking and fingerprinting techniques are considered more dangerous because they are harder to limit or block, bypassing common countermeasures [9]. Furthermore, a recent work has measured tracking from the users' side (using user telemetry data) and found that, if users' general navigation trends are taken into account when measuring tracking, its prevalence and its impact on users' privacy are at least twice as high as estimated before [20].
Web tracking is a very widespread issue on the web. Studies [47, 23] show that more than 90% of websites include (either as first or third party) at least one script with some sort of tracking behavior. In addition, web tracking is also a very profitable endeavor that generates billions of dollars [30, 37]. Despite the fact that device fingerprinting based web tracking is hugely prevalent on the Internet, end-users are not aware of it or its consequences. Recent works have tried to analyze users' perspective on this matter, and the results showed that users are generally surprised or willing to adopt more private browsing when they discover the actual data they are providing [41, 58].
In summary, device fingerprinting (and stateless fingerprinting in particular) is an identification technique that is difficult to block and is widely used for web tracking, which presents several privacy concerns for users.
### Privacy Risk and Threat Model
Users' privacy in device fingerprinting is an issue reported by numerous studies [49, 56, 21, 23, 24, 25], of which users usually are not even aware [41, 58], since no private alternative for device fingerprinting exists. However, legitimate uses of device fingerprinting exist, such as software license binding [48, 24], device authentication, network discovery [16, 25], or attack detection [24, 33].
Imagine the agents involved in fingerprinting: (i) the client, (ii) the fingerprinting script provider, (iii) the fingerprint user, and (iv) the visited host. In a typical interaction:
1. A _client_ will visit a host www.example.com. This site will be the _visited host_.
2. The _visited host_ may include several fingerprinting scripts internally or imported as a third party. In our case, we will imagine 3 domains that use 3 fingerprint scripts, fp1.js, fp2.js, and fp3.js from a.com, b.com, and the _visited host_ www.example.com. These domains will be considered _fingerprinting script providers_.
3. On the client side, the scripts fp1.js, fp2.js, and fp3.js; are imported and executed generating 3 unique fingerprints: \(f1\), \(f2\), and \(f3\). Usually these fingerprinting algorithms will compute the fingerprint within the client's machine and will send the results back to the original _fingerprinting script providers_ domains or other domains. Any domain that receives a client's fingerprint is considered a _fingerprint user_.
4. The different _fingerprint users_ will compare these fingerprints with the ones already collected to identify the client and act accordingly. In some cases, this will only imply personalization or even a second factor of authentication. In many cases, however, since fingerprints can be shared with other _fingerprint users_ or the third-party _fingerprinting script provider_ may include its fingerprinting script in many sites, fingerprints are used to retrieve browsing history, performing web tracking.
The problem with web tracking, and stateless device fingerprinting in particular, is the difficulty for the user to opt out [48, 50, 9]. When we consider current legislation and recommendations, the client must be given the choice to opt out of any type of user identification (e.g., the GDPR or the California privacy law). However, these legislations are only followed in the case of third-party cookies because of their stateful nature, and even in this case, the application of the law is not as strict as it should be [46].
### Legitimate Device Fingerprinting
Device fingerprinting, when used responsibly and within appropriate contexts, can serve several legitimate purposes that contribute to user security, authentication, and website functionality [35]. While there is a significant body of research focusing on techniques to block or mask device fingerprinting for privacy protection, there is a noticeable gap in the literature when it comes to exploring a middle path that allows for the legitimate use of device fingerprinting.
One such purpose is user authentication [11, 12, 13, 55], where device fingerprinting can be utilized to verify the identity of a user based on unique device characteristics. This technique could be used as a second factor of authentication (2FA). By analyzing factors such as browser configuration, installed plugins, and system attributes, websites can develop mechanisms that aid in distinguishing legitimate users from potential impersonators or unauthorized access attempts. Another legitimate purpose for device fingerprinting is bot detection [32, 40, 57]. With the help of device fingerprinting, websites can analyze various device characteristics, behavior patterns, and fingerprinting attributes to differentiate between legitimate users and automated bots or malicious scripts. By implementing robust bot detection mechanisms based on device fingerprinting, websites can effectively mitigate fraudulent activities and maintain the integrity of their services. In addition to the aforementioned purposes, device fingerprinting can also be utilized to determine if the client software, including the operating system or browser, is outdated. This information enables websites to prompt users to update their software for improved security and compatibility, or to restrict access to certain services until the necessary updates are applied. By leveraging device fingerprinting for software version detection, websites can enhance security measures and protect against potential vulnerabilities.
Finally, another legitimate goal of device fingerprinting is per-domain tracking, which is distinct from privacy-invasive web tracking. Per-domain tracking allows websites to track user activity within their own domain for various purposes, such as personalization, analytics, and security. By utilizing device fingerprinting techniques, websites can identify returning users and track their interactions within a specific domain, providing a more tailored and customized experience. It enables websites to maintain session information, remember user preferences, and deliver targeted content, all within the confines of their own domain and without infringing on user privacy across different websites.
### Motivations for PCF
Currently, browser vendors employ various protection and mitigation solutions to prevent web tracking. These measures range from basic approaches such as script blacklisting to more sophisticated implementations like randomization or masking of web APIs. It is important to note that both web tracking scripts and legitimate device fingerprinting scripts currently operate within the same execution context. While many publications and implementations address the blocking or masking of device fingerprinting for privacy concerns, there is a gap in mechanisms that allow for legitimate use cases of device fingerprinting. Current approaches may hinder the potential benefits in areas like security, authentication, and fraud detection. Given this context, we propose the need for a mechanism that allows the legitimate use of device fingerprinting for specific purposes, all while prioritizing the protection of user privacy.
In this contribution, we present the _Privacy-preserving Client-side Fingerprinting_ (PCF) protocol. PCF will allow _fingerprint users_ and _fingerprinting script providers_ to utilize the real device features, overcoming the blocking mechanisms that are prevalent in today's browsers. The primary objective of our protocol is to prevent tracking while still enabling the legitimate utilization of device fingerprinting for essential purposes such as security, authentication, or per-domain tracking. In essence, our approach represents a middle ground between blocking fingerprinting altogether and permitting device fingerprinting without any restrictions, violating users' privacy.
### Adherence to Existing Regulations
Currently, laws such as the GDPR provide users with the option to opt out of certain data collection practices, including device fingerprinting. It is anticipated that there will be an increasing focus on privacy awareness and the enactment of additional laws to protect user privacy in the future. These laws aim to give users more control over their personal data and ensure transparency and accountability in data processing practices. _Privacy-preserving Client-side Fingerprinting_ (PCF) is designed to enforce privacy-compliant environments, aligning with the principles of laws like the GDPR. By implementing PCF, websites can adopt a responsible approach to device fingerprinting that respects user privacy and provides transparency and control over data collection practices. PCF enables websites to strike a balance between legitimate uses of device fingerprinting and protecting user privacy, ensuring compliance with evolving privacy laws and regulations. In a similar manner, PCF empowers browser vendors to enforce the blocking of privacy-harming techniques associated with device fingerprinting in non-PCF scripts in order to respect existing regulations.
### Reliability
When adopting PCF, a client should not try to perturb the declared fingerprints by adding noise [34, 43], because her privacy is secured. In a similar vein, the _fingerprinting script provider_ or the _fingerprint users_ can trust that the user will not tamper with the device values or modify them.
### Transparency of Used Methods
One benefit that derives from the protocol itself is that fingerprinting becomes transparent for the client. Since the _fingerprinting script provider_ needs to declare the fingerprinting scripts so they can be executed within the client, the whole process of fingerprinting becomes more transparent and, therefore, a less shady activity.
### Web Tracking
However, device fingerprinting used for web tracking would not be possible when adhering to PCF policies. Although this is a desired privacy-preserving design consequence, some _fingerprinting script providers_ and _fingerprint users_ would be affected because they use device fingerprinting to track users.
There may be legitimate reasons to track users outside a domain, such as advertisement or analytics. To this end, we believe that privacy-preserving techniques [10, 15, 26, 29, 54] should be used.
## 3 PCF: Privacy-preserving Client-side Fingerprinting
### General Overview
Device fingerprinting's intended use was originally to provide an identification method for a particular device or client. However, its usage for unwanted web tracking is vast [47, 23].
Nevertheless, third parties and websites may want to use these techniques for legitimate purposes. Since current anti-tracking solutions mask or block these scripts [43, 34], we propose that device fingerprinting techniques should not be blocked when used in a legitimate, transparent manner, especially when used without compromising user privacy. Therefore, we present the _Privacy-preserving Client-side Fingerprinting_ (PCF) protocol, which guarantees to a great extent that the computed users' fingerprints will remain hidden and will not be shared for web tracking, while their utility for fingerprint providers/users will remain.
The overall behavior within PCF is as follows (see Figure 1 for a visual depiction):
1. _Client_ visits a website using PCF.
2. _Website_ has its own and third-party fingerprinting scripts properly declared as PCF. This declaration implies that each script that needs to use the real value of a fingerprinting method has been declared as PCF. Then, _Website_ sends all its content, as well as the fingerprinting scripts.
3. _Client_ retrieves the content and, before executing the scripts, checks if any of them is marked with the PCF flag.
4. The script declares the PCF flag, indicating that it will execute in an isolated environment where web APIs provide genuine values, but communication is limited to declared legitimate messages.
5. If PCF is not declared, the script will execute in the normal mode, where browser vendors implement security and privacy measures by default to prevent tracking by blocking, randomizing, and masking specific browser web APIs.
Scripts imported or loaded from third parties are also required to be declared if they intend to use fingerprinting techniques for legitimate uses. Therefore, it is the responsibility of each script provider, regardless of whether they are a first-party or third-party entity, to appropriately indicate the presence of the PCF flag when it is deemed necessary.
### Client-side Fingerprint
The core of the computation required for PCF to work is performed on the client-side. In this way, the client retains control over its fingerprints and their delivery.
To this end, the client follows the next procedure. When the client visits a website, the website sends the contents as usual. In case the website adheres to the PCF protocol, the scripts that compute device fingerprints will be marked with the PCF flag. This includes third-party scripts that the website may have loaded.
When a script, regardless of whether it is a first-party or third-party script, declares the PCF header, it operates within a distinct and isolated runtime context. Within this isolated context, the script can access authentic device features and request the genuine device values from the browser, enabling it to accomplish objectives such as bot detection. However, to prevent any potential web tracking within the PCF runtime, any communications originating from the script to other parts of the page or external servers must be subject to appropriate filtering measures.
This is the general behavior designed for a client in the PCF schema. In Section 4 of the paper, we present a detailed design of the protocol, outlining its various components and considerations. In Section 5 of the paper, we delve into a thorough discussion of the most intriguing aspects of the protocol and address the limitations of our contribution.
### Fingerprinting Provider
The implementation of the PCF standard is of great interest to _fingerprinting providers_, as they stand to benefit from its adoption. Fortunately, the adoption process is not overly complex, making it relatively straightforward for them to incorporate the PCF protocol into their existing systems. Since the _clients_ are the ones that generate the execution context and communicate back to the _fingerprint user_, the major responsibility of the _fingerprinting providers_ is to provide the means for that to happen. In other words, within the PCF protocol, the different _fingerprint providers_ just need to declare every script that requires device fingerprinting, allowing the _client_ to manage them as described in Section 3.2.
The required declaration from _fingerprinting providers_ will be placed as an HTTP header or as a _'script' attribute_. In Section 4, we provide a detailed exploration of the declaration process for PCF scripts. The primary modification that fingerprinting providers need to make is to encapsulate all device fingerprinting logic within a single script, allowing it to compute the desired outcome for the legitimate purpose. Additionally, they would adhere to the permissible communications outlined in the PCF standard, being aware that any other communication would be blocked.
While it is not guaranteed that every _website_ or _fingerprinting provider_ will declare all the fingerprinting scripts as PCF-compliant, the objective of this work is to establish a protocol that instills confidence in users and websites when executing device fingerprinting. The implementation of the PCF standard would enable browser vendors to employ more aggressive mitigations to prevent web tracking in scripts that are not compliant with PCF, ensuring stronger protection against unauthorized tracking practices. In this way, we acknowledge and recognize the work of other researchers in presenting technologies that focus on blocking and mitigating device fingerprinting (e.g., [34, 43]), which are fully complementary to the PCF standard.
## 4 Practical Adoption in the Real World
Our contribution introduces a protocol that safeguards users' privacy while allowing the legitimate utilization of device fingerprints, for purposes such as bot detection or per-domain tracking. In Section 2, we provided an overview of the motivations behind introducing a new standard for device fingerprinting. Section 3 outlined the high-level protocol, providing a general understanding of its components. In this section, we delve into the detailed design of the protocol, discussing its implementation in the real world and providing a comprehensive explanation of its various aspects.
### PCF Script Declaration
Web communications provide a wide range of opportunities to declare security or privacy policies.
Figure 1: Overview of the _Privacy-preserving Client-side Fingerprinting_ protocol.
Figure 2: Declaration of _PCF_ script by HTTP Header.
In the following lines, we outline our proposed protocol declaration. The initial method we propose for declaring a PCF script involves implementing a new HTTP response header named after the protocol itself, PCF. Figure 2 provides an illustrative example of this solution.
In fact, the proposed implementation of the PCF protocol as an HTTP response header offers a simple and straightforward approach that can be adopted by both browser vendors and script providers. By integrating the PCF response header into the HTTP protocol, browser vendors can implement the necessary mechanisms to enforce the execution context and communication restrictions defined by the PCF protocol. Script providers, on the other hand, can easily declare their scripts as PCF-compliant by including the PCF header in the HTTP response. This solution is similar to other mechanisms implemented in the web ecosystem, such as _Content-Security-Policy (CSP)_, _X-Frame-Options_, or the deprecated _X-XSS-Protection_. However, this implementation has two main disadvantages. Firstly, inline scripts cannot be declared as PCF-compliant. Secondly, the first party is unable to declare PCF by itself for scripts that are requested from third parties.
To address these limitations, as illustrated in Figure 3, we propose an addition to this solution by advocating the standardization of a new attribute for declaring scripts as PCF scripts. This proposal enables first parties to declare third-party scripts as PCF, allowing their execution within the new isolated context. By implementing this approach and ensuring that first parties have a properly declared Content-Security-Policy (CSP) header, they acquire enhanced control over the behavior of third-party scripts. This solution shares similarities with other existing approaches, such as the 'sandbox' attribute in 'iframe' elements or the use of 'nonces' in 'script' tags.
Our proposal, comprising two implementation designs, offers versatility and ease in declaring scripts as PCF. The declaration of PCF scripts follows similar specifications found in the web ecosystem, such as Permissions-Policy, which encompasses both an HTTP response header and an attribute for the 'iframe' tag. Moreover, this compound solution integrates seamlessly with existing security and privacy headers and attributes without any compatibility issues. Finally, we would like to emphasize that the attribute-based solution empowers websites to declare third-party scripts as PCF-compliant, granting first parties control over the execution of these scripts.
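Since Figures 2 and 3 are not reproduced here, the following minimal Python sketch shows how a script provider might emit the header-based declaration; the exact header name/value and the script attribute syntax are assumptions, as the standard would have to fix them:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

FP_SCRIPT = b"/* fingerprinting logic, run inside the isolated PCF context */"

class PCFScriptHandler(BaseHTTPRequestHandler):
    """Serves a fingerprinting script declared as PCF via an HTTP header."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "application/javascript")
        self.send_header("PCF", "true")  # assumed header name/value syntax
        self.end_headers()
        self.wfile.write(FP_SCRIPT)

# Attribute-based variant (a first party marking a third-party script):
#   <script src="https://a.com/fp1.js" pcf></script>   (attribute name assumed)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), PCFScriptHandler).serve_forever()
```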
### PCF Scripts Execution Context
#### WebAPI Execution
As elucidated throughout the paper, both web tracking scripts and legitimate device fingerprinting scripts currently operate within the same execution context. In this execution context, web APIs are randomized or masked to prevent web tracking, potentially impacting the legitimate use of these APIs. In this section we describe the new execution context where declared PCF scripts (as explained in the previous section) operate within a parallel execution environment.
Within the PCF execution context, web APIs would provide authentic values retrieved from the underlying system. For instance, if the user has a rare UTC timezone such as UTC-1, the browser, to safeguard the user's privacy, may return a more commonly used timezone, such as UTC+2. This privacy-preservation method fundamentally involves aligning the user's fingerprint with that of other users, rendering individual identification unfeasible. However, within the PCF context, the web API would return the appropriate value, which in this case would be UTC-1 (the real value). By utilizing the actual values provided by the various APIs, the script can make informed determinations, such as distinguishing between a bot and a legitimate user or identifying whether the user is attempting to log in from a new device on the webpage. Figure 4 illustrates the distinctions between the two execution contexts, highlighting the variations and implications that arise when executing scripts within the PCF execution context compared to the normal execution context.
PCF scripts run in a separate execution context parallel to that of other scripts on the page. This parallel instance ensures that PCF scripts operate independently and do not interfere with the execution or behavior of other scripts running simultaneously. By isolating PCF scripts in their own execution context, the protocol establishes a clear boundary between PCF operations and the rest of the script environment, blocking the exfiltration of device information. In this separate execution context, mechanisms that aim to block or mask device fingerprinting, such as Brave's _farbling_[3], would be deactivated. This allows PCF scripts to access and utilize the real device values without any interference or obfuscation to complete their goal.
#### Script Communications
Within the PCF framework, we introduce a novel execution context where scripts have the capability to access the genuine device values. However, this new context presents a potential risk of unintended exfiltration of these real device values beyond the PCF script, such as by other scripts or by an external fingerprinting user.
Figure 3: Declaration of PCF scripts as attribute.
To secure user privacy while enabling the utilization of device features within the script, we propose implementing filtered communications from PCF scripts. This approach ensures that only authorized and necessary data exchanges occur, thereby mitigating the risk of unintended information leakage.
The implementation of communication blocking would encompass any form of outgoing communication from the PCF script to external agents, whether it involves other scripts within the same webpage context or third-party servers. When it comes to communication with other scripts, there are specific techniques that should be blocked by default within the PCF implementation. Here is a complete list of techniques that need to be blocked:
* **Global Scope:** If the scripts are defined in the global scope, they can directly access and modify variables and functions defined by other scripts. They can share data by assigning values to global variables or by calling shared functions.
* **DOM Manipulation:** Scripts can interact with the Document Object Model (DOM) to communicate with other scripts. They can access and modify DOM elements, attributes, and properties, allowing them to exchange data and trigger events that can be listened to by other scripts.
* **Event System**: Scripts can utilize an event system to communicate with each other. They can define and dispatch custom events using the CustomEvent API and listen for those events using event listeners. Other scripts can listen for these events and respond accordingly.
* **Shared Storage:** If the scripts need to communicate even when the page is reloaded or reopened, they can utilize shared storage mechanisms such as cookies, local storage, or session storage. Scripts can read from and write to these storage mechanisms to share data between page loads or sessions.
The described methods would enable fingerprinting scripts to share genuine device information with the rest of the webpage. Based on this premise, we propose the utilization of a partitioned-context mechanism for implementing the blocking of the techniques mentioned above. This approach is similar to the partitioned-context mechanisms currently being implemented by browser vendors [6]. It ensures that each script interacts within a unique scope, storage, or DOM, isolated from other scripts and execution contexts.
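A loose sketch of this partitioning idea, modeling only key-value storage (scope and DOM isolation would be keyed analogously):

```python
class PartitionedStorage:
    """Each (script, site) pair gets its own key-value partition, so a PCF
    script cannot read or write data belonging to any other context."""

    def __init__(self):
        self._partitions = {}

    def set(self, script_id: str, site: str, key: str, value) -> None:
        self._partitions.setdefault((script_id, site), {})[key] = value

    def get(self, script_id: str, site: str, key: str, default=None):
        return self._partitions.get((script_id, site), {}).get(key, default)
```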
Having described the communications that are blocked, we now describe the techniques that would be allowed by the PCF protocol. This filter applies to all web APIs that interact with the broader web ecosystem, including both the context of the website and external servers. It would affect web APIs such as PostMessage, XMLHttpRequest, Fetch, and any other web APIs that facilitate communication between scripts and external entities. The purpose of this filtering mechanism is to exclusively allow communications that adhere to the protocol's guidelines, such as those related to the legitimate use of device features (e.g., bot detection). To this end, we propose a well-defined set of permissible communications that are in line with these objectives.
In the proposed filtering mechanism, only one external request is permitted for each specific purpose and site. This means that for each distinct purpose and each individual site, a single communication will be allowed. As defined in the HTML Standard [7], a site refers to a collection of websites served from the same domain and managed by a single organization (e.g., shop.example.com and coffee.example.com).
Figure 4: Execution Context of _PCF_ implementation.
Furthermore, these communications would be restricted to the HTTPS protocol to prevent man-in-the-middle attacks. In the case of communication to the website context, the filtering mechanism allows only one communication per purpose. For example, if script 'X' needs to communicate with the website context and the backend server to notify them that the user's software is outdated, the script would utilize the PostMessage API for a one-time communication with the website context. Subsequently, it would employ the Fetch API to transmit this information to the backend server. This approach ensures that the PCF script is forced to respect the privacy of the user by limiting the communication to the necessary and allowed channels for the specific purpose of notifying about outdated software.
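As a conceptual sketch (the purpose names are illustrative, not fixed by the proposal), the one-communication-per-purpose-and-site policy could be enforced as follows:

```python
ALLOWED_PURPOSES = {"bot-detection", "fraud-detection", "2fa", "outdated-software"}

class PCFCommunicationFilter:
    """Allows at most one outgoing message per (purpose, site), HTTPS only."""

    def __init__(self):
        self._used = set()

    def allow(self, purpose: str, site: str, url: str) -> bool:
        if purpose not in ALLOWED_PURPOSES:
            return False              # not a declared legitimate purpose
        if not url.startswith("https://"):
            return False              # HTTPS required against MITM attacks
        key = (purpose, site)         # "site" in the HTML Standard sense
        if key in self._used:
            return False              # the one-time budget is already spent
        self._used.add(key)
        return True
```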
Finally, we outline the communication messages that are allowed within the PCF framework. As mentioned earlier, these messages are designed to be highly restrictive to mitigate the risk of web tracking or data exfiltration. For scenarios where the purpose is to determine if the user meets certain requirements, such as bot detection, fraud detection, and two-factor authentication (2FA), we propose a payload that consists of a boolean value. This payload can be used to indicate whether the user fulfills the specified requirements. When it comes to communicating the user's fingerprint, we allow the use of any string identifier. However, to ensure privacy and restrict tracking to specific domains, we propose hashing the identifier with a domain-specific salt. This hashing process would be performed by the execution context before sending the message, ensuring that the fingerprint is only used for tracking purposes within that particular domain. This methodology ensures that tracking across different websites becomes impossible, thereby preserving user privacy. In our proposal, we suggest using JSON as the default format for sending data, where the key serves as the identifier for the data being transmitted.
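The domain-scoped hashing and the proposed JSON payloads could be sketched as follows; how the per-domain salt is provisioned is our assumption, since the proposal only requires it to be domain-specific:

```python
import hashlib
import json

def domain_scoped_id(raw_fingerprint: str, domain_salt: bytes) -> str:
    """Salted hash applied by the execution context before a message leaves
    the PCF script, so the identifier is only comparable within one domain."""
    return hashlib.sha256(domain_salt + raw_fingerprint.encode()).hexdigest()

# Boolean "requirement met" payload (bot detection, fraud detection, 2FA):
requirement_msg = json.dumps({"bot_detection": False})

# Per-domain identifier payload; cross-site linkage is infeasible because
# every domain salts the hash differently:
identity_msg = json.dumps(
    {"fingerprint": domain_scoped_id("raw-device-id", b"salt-for-example.com")}
)
```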
In summary, communication design is one of the most critical and delicate aspects of PCF design. A bad communication specification could allow user data exfiltration, not achieving the intended goals of the protocol and compromising user privacy. Additionally, it is important to note that the legitimate payloads for communication are not fixed in a monolithic architecture; they can be adapted and expanded in the future to accommodate new legitimate communications that adhere to the PCF standard.
## 5 Discussion
In this paper, we presented _Privacy-preserving Client-side Fingerprinting_ (PCF), a standard proposal to allow legitimate device fingerprinting purposes while avoiding the risk of web tracking. Several aspects are topics of discussion regarding the application of the PCF method. In this section, we discuss the main implications, design choices, and limitations of our approach.
### Protocol Adoption
One of the key assumptions of PCF is the widespread adoption and acceptance of the standard by affected parties: browser vendors/clients, websites and third-parties. However, the development and implementation of PCF would improve the lives of all actors involved, except for those whose goal is to compromise user privacy (e.g., web tracking).
In the case of websites and third parties, the implementation of the PCF standard would greatly simplify the adoption of legitimate device fingerprinting techniques. Currently, these techniques face challenges due to browser mitigations and restrictions, such as Web API randomization. By adhering to the PCF standard, websites and third parties can overcome these limitations and leverage device fingerprinting in a responsible and effective manner, enabling use cases such as user authentication, bot detection, and compatibility checks. For example, a bank webpage could utilize device fingerprinting to check whether a user is running outdated software that could expose them to browser vulnerabilities and, based on this information, take appropriate actions such as blocking access until the user updates their software.
In the case of clients, the implementation of the PCF standard enables browser vendors to employ more aggressive strategies for identifying and handling suspicious scripts. By creating an isolated context for device fingerprinting scripts, it establishes a controlled environment where legitimate device fingerprinting can occur, while any activities outside this context that are recognized as suspicious of web tracking can be effectively blocked. In simple terms, PCF provides a sanctioned environment for legitimate device fingerprinting; because PCF is designed specifically for this purpose, browser vendors can apply stricter controls to scripts identified as fingerprinting outside of it. In summary, the adoption of the PCF protocol offers a practical and straightforward solution for browsers and websites without requiring complex or unconventional implementations.
### Implementation design
In our paper, we introduce PCF, a protocol that incorporates a diverse set of design policies to address various considerations and ensure effectiveness and privacy compliance. In the subsequent paragraphs, we delve into a discussion of alternative approaches that were considered for each design choice, highlighting their relevance and potential implications.
### PCF Declaration
We suggest that the declaration of PCF scripts can be done through two methods: using the HTTP Response Header or the script attribute. In these solutions, our focus has been primarily on developing the ability to declare whether a script is PCF-compliant or not, without delving into more detailed aspects of the declaration process. For instance, in addition to simply selecting whether a script will be executed as PCF, there could be the possibility to declare the specific goals or objectives of the script. This would allow for a more nuanced and detailed declaration of the script's intended purpose within the PCF framework.
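For illustration only, the two declaration channels could look as follows; the `PCF` header name and the `pcf`/`purpose` attributes are hypothetical placeholders, since the text deliberately leaves the detailed declaration syntax open:

```python
# Hypothetical illustration of the two PCF declaration channels; neither the
# header name nor the attribute names are standardized.

pcf_response_headers = {
    "Content-Type": "application/javascript",
    "PCF": "enabled; purpose=bot-detection",  # hypothetical HTTP response header
}

pcf_script_tag = (
    '<script src="https://third-party.example/fp.js" '
    'pcf purpose="bot-detection"></script>'   # hypothetical script attribute
)
```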
### One-Time Communication policy
In PCF, we propose the implementation of a one-time policy for communications. This policy dictates that each PCF script is limited to one communication event with the webpage (e.g., using postMessage) and one communication event with each site [7]. By limiting the number of communication events, we mitigate the risk of unauthorized data transmission. For example, if multiple connections were allowed per origin, it could potentially enable the exfiltration of data across different subdomains within the same domain.
### Other Communications Types
As mentioned earlier, the default communication policy of PCF is to block, but there is room for additional possibilities not covered in this work, such as proposing and incorporating legitimate device fingerprinting methods for web personalization, as long as they do not compromise user privacy. This proposal remains open for developers and security researchers to explore and contribute further with new legitimate device fingerprinting goals.
### Execution Context Isolation
In our contribution, we assert that PCF scripts are isolated from the rest of the page, but we propose that, within the PCF environment, scripts should be able to communicate with each other. This design enables websites to leverage the functionality and methods developed by third-party scripts within the secure and controlled PCF environment.
### User preferences
Another intriguing aspect that could be considered in the future for PCF is the inclusion of user preferences. The design of PCF enables clients to choose when they want to undergo device fingerprinting and when they prefer not to. For instance, if a user wishes to browse a shopping page without being fingerprinted, they could deactivate PCF specifically for that site.
### User Identification Mechanisms
PCF offers a well-defined user identification method within a domain, based on device features. By incorporating a per-user salt into the fingerprint communication, PCF could identify not only the device but also the user associated with it. This additional salt, combined with the per-domain salt, can contribute to enhanced user identification within the PCF framework. Websites could then establish different webpage configurations for each identified user, enabling tailored user experiences, personalized content, and user-specific settings.
### Limitations
PCF's client-side design is a significant limitation, as it requires executing all the logic for different purposes on the client side. This approach introduces challenges and complexities, similar to those found in the gaming ecosystem, where the server must provide the necessary logic and information in the script. Additionally, the client-side nature of PCF opens up the possibility of script or communication manipulation, although it is worth noting that this limitation is not unique to PCF and exists in normal script execution as well.
Another potential limitation of the protocol is its impact on performance. Implementing a sandboxed runtime inside the browser, even if only resulting in slight performance overhead, can introduce additional computational and resource requirements. The isolation and communication filtering layers introduced by the sandboxing mechanism may require additional processing power and memory, potentially affecting the overall performance of the system.
## 6 Related Work
A significant amount of research has been performed in the area of device fingerprinting. In particular, research has addressed the development of new fingerprinting methods [22, 36, 42, 48], their evolution [56], and their prevalence and relations [9, 21, 23, 47, 49].
However, there has been little effort from the community on providing privacy-preserving methods that maintain fingerprinting functionality within a privacy-preserving framework for both users and fingerprinting providers. The community has focused more on the usual applications of device fingerprinting than on fingerprinting itself.
### Privacy-preserving Web Advertisement & Analytics
The approaches most similar to PCF can be found in privacy-preserving advertisement proposals. Instead of simply blocking advertisements on the web, these approaches propose methods to prevent the leakage of users' private information while keeping advertising functionality intact.
Toubiana et al. [54] presented _Adnostic_, which enabled targeted advertisement without compromising users' privacy by performing behavioral profiling inside the user's browser. _Privad_[29] is a very similar approach to targeted advertisement that incorporates additional actors into the proposed protocol, such as _ad brokers_ or _dealers_. _RePriv_[26] explores browser capabilities further by maintaining a per-user inference model within the browser. In addition, Backes et al. [15] proposed _ObliviAD_, a provably secure architecture for privacy-preserving online behavioral advertisement. In this formal and cryptographic solution, there is no assumption of any trusted third parties.
Privacy-preserving methodologies have also been proposed for other domains directly connected to web analytics. Akkus et al. [10] presented a non-tracking web analytics system that allows publishers to measure information directly, rather than infer it, by computing the statistics within the client.
### Anti-fingerprinting Solutions
Most solutions seeking to protect users' privacy in the realm of device fingerprinting have been focused on breaking the tracking and linkability of device fingerprints.
Blocking extensions exist that seek to block previously identified fingerprinting scripts before they are loaded by the browser [39, 47, 8, 23] (e.g., Ghostery [5] and Disconnect [4]).
In a similar vein, the Tor Browser is a modified Firefox for the Tor network. It limits the effects of fingerprinting by making all users' fingerprints as uniform as possible [2]. This browser spoofs fingerprinting input values and modifies and/or removes attributes, which makes the browser itself recognizable to third parties but leaves the generated fingerprints hard to maintain. The main limitation of these extensions and of the browser is that, owing to their blacklisting nature, they require large lists of known fingerprinting scripts in order to block them. The evolving nature of the web makes it difficult to keep such script lists up to date [51, 34].
Another method used to mitigate device fingerprinting is to tamper with the expected results of fingerprinting methods. There are many browser extensions that _spoof_ device fingerprinting, but they do not produce consistent fingerprints and, therefore, users may lose functionality [43]. _Privaricator_[43] is an anti-fingerprinting method that introduces randomness into the requested fingerprinting attributes while establishing several randomization policies to avoid losing consistency. In a similar vein, _FP-Random_[34] introduces randomness into the JavaScript engines used to generate the fingerprints in order to tamper with the generated fingerprints while limiting the impact on consistency.
These approaches can block and limit the effects of tracking, but in contrast to our PCF approach, their goal is to render the device fingerprint useless, dismissing any legitimate use that device fingerprinting may have. Nevertheless, these approaches can be used as a complement to PCF in order to detect and block misuses of the protocol by rogue third parties.
### Privacy-preserving fingerprinting
As aforementioned, most of the existing solutions in the literature seek to block or mitigate device fingerprinting and, in this way, eliminate any possibilities for web tracking.
FP-Block [53] implemented a solution based on the separation of web identities: FP-Block generated a unique fingerprint for each host, which was then the one used in communications with that particular host. Our contribution, in contrast to FP-Block, goes beyond enabling per-domain fingerprinting and extends to support other legitimate device fingerprinting purposes. While FP-Block focuses on blocking fingerprinting scripts on a per-domain basis, our approach, PCF, provides a framework that allows the execution of various device fingerprinting techniques for legitimate purposes. Furthermore, the main problem with FP-Block is that its fingerprints end up inconsistent because of an incomplete coverage of the methods used for fingerprinting [34]. The solution needs an updated list of the methods used for fingerprinting, which could lead to a situation in which there are no more unique web identities. The client-side implementation the solution requires also implies complex logic that could reduce overall performance. PCF, in contrast, can provide as much fingerprinting diversity as needed, overcoming these issues.
### Online Bot Detection & Trust
Elie et al. [19] presented _Picasso_, a lightweight device fingerprinting method to detect traffic sent by an emulator simulating a real device. Picasso is based upon HTML5 canvas graphical primitives. The authors demonstrated that their tool was able to perfectly distinguish a real device, such as an iPhone running Safari, from a desktop client spoofing the same configuration.
In a similar vein, Google announced Trust Tokens [28], a method against fraud capable of distinguishing bots from real humans. Websites issue cryptographic tokens to users they trust (e.g., based on a reCAPTCHA score). When a user later visits a given website, the server can accept a previously issued token as proof that the user is not a bot. These tokens, in contrast to cookies, are not unique per user.
These approaches aim to detect bots by using techniques that identify and validate the integrity of users. While their goal is far from ours, the techniques presented might be adapted as a complement to enhance trust among parties within our PCF protocol.
## 7 Conclusions
We introduced _Privacy-preserving Client-side Fingerprinting_ (PCF) to fill the gap between allowing legitimate uses of fingerprinting and blocking the web tracking that threatens users' privacy. PCF advocates for device fingerprinting transparency: websites willing to use fingerprinting techniques while preserving users' privacy must declare their fingerprinting-containing scripts. In this way, we have demonstrated how the implementation of PCF would enable the execution of device fingerprinting for legitimate purposes.
In our contribution, we have provided a comprehensive description of the necessary implementation steps for the standardization of PCF. We have outlined the specific requirements for both browser vendors and fingerprinting script providers, highlighting the responsibilities and actions they need to take to adopt the PCF protocol.
We showed that PCF makes device fingerprinting easier for both parties, allowing durable and reliable fingerprint generation and management, providing an authentication method for both client and website, and, overall, enabling a more transparent and efficient device fingerprinting scenario for both users and hosts.
|
2309.11301 | Generalizing Across Domains in Diabetic Retinopathy via Variational
Autoencoders | Domain generalization for Diabetic Retinopathy (DR) classification allows a
model to adeptly classify retinal images from previously unseen domains with
various imaging conditions and patient demographics, thereby enhancing its
applicability in a wide range of clinical environments. In this study, we
explore the inherent capacity of variational autoencoders to disentangle the
latent space of fundus images, with an aim to obtain a more robust and
adaptable domain-invariant representation that effectively tackles the domain
shift encountered in DR datasets. Despite the simplicity of our approach, we
explore the efficacy of this classical method and demonstrate its ability to
outperform contemporary state-of-the-art approaches for this task using
publicly available datasets. Our findings challenge the prevailing assumption
that highly sophisticated methods for DR classification are inherently superior
for domain generalization. This highlights the importance of considering simple
methods and adapting them to the challenging task of generalizing medical
images, rather than solely relying on advanced techniques. | Sharon Chokuwa, Muhammad H. Khan | 2023-09-20T13:29:22Z | http://arxiv.org/abs/2309.11301v1 | # Generalizing Across Domains in Diabetic Retinopathy via Variational Autoencoders
###### Abstract
Domain generalization for Diabetic Retinopathy (DR) classification allows a model to adeptly classify retinal images from previously unseen domains with various imaging conditions and patient demographics, thereby enhancing its applicability in a wide range of clinical environments. In this study, we explore the inherent capacity of variational autoencoders to disentangle the latent space of fundus images, with an aim to obtain a more robust and adaptable domain-invariant representation that effectively tackles the domain shift encountered in DR datasets. Despite the simplicity of our approach, we explore the efficacy of this classical method and demonstrate its ability to outperform contemporary state-of-the-art approaches for this task using publicly available datasets. Our findings challenge the prevailing assumption that highly sophisticated methods for DR classification are inherently superior for domain generalization. This highlights the importance of considering simple methods and adapting them to the challenging task of generalizing medical images, rather than solely relying on advanced techniques.
Keywords: Domain Generalization, Diabetic Retinopathy, Variational Autoencoder
## 1 Introduction
Diabetic Retinopathy (DR) is a complication of Diabetes Mellitus (DM) characterized by impaired blood vessels in the eye due to elevated glucose levels, leading to swelling, leakage of blood and fluids, and potential ocular damage [6]. With the global population living with DM projected to reach approximately 700 million by 2045, DR is expected to persist as a prevalent complication of DM, particularly in the Middle East and North Africa as well as the Western Pacific regions [25]. In general, the diagnosis of DR is based on the presence of four types of lesions, namely microaneurysms, hemorrhages, soft and hard exudates, and thus the categorization of DR typically comprises five classes, namely no DR, mild DR, moderate DR, severe DR, and proliferative DR.
The conventional method of diagnosing DR relies on manual examination of retinal images by skilled ophthalmologists. However, this approach is known to
involve time-intensive procedures, limited availability of trained professionals, and susceptibility to human error [21, 26]. Deep learning methods have emerged as an effective solution for diagnosing DR, addressing the limitations associated with traditional approaches [4, 27]. Despite the benefits offered by deep learning models, a major challenge they face is domain shift [27], which emanates from the oversimplified assumption that training and testing data are independent and identically distributed (i.i.d.), leading to poor performance when these models are applied to new data from related but unseen distributions [7, 12]. The variations in fundus image acquisition procedures and the diverse populations affected by DR result in a substantial domain shift, as shown in Fig. 1, which greatly hinders the deployment of large-scale models, since a slight variation in the data-generating process often entails a drastic reduction in model performance [30].
Domain generalization (DG) is a line of research with the goal of handling the domain shift problem [10] under minimal assumptions. It relies only on multiple, or occasionally a single, source domain(s) to train a model that can generalize to data from unseen domains, whose distribution can be radically different from the source domains. To our knowledge, there exists a rather limited body of literature specifically addressing the problem of domain generalization for DR classification. Therefore, the investigation of DG for deep learning methods holds significant relevance in enhancing the accuracy of DR diagnosis across the various healthcare centers situated in different geographical locations.
In this paper, we propose our Variational Autoencoder for Domain Generalization (VAE-DG), which effectively harnesses the power of classical variational autoencoders (VAEs) [17]: their optimally disentangled latent space [13] enables the model to generalize well to unseen domains in DR classification by capturing essential shared information while selectively disregarding domain-specific variations. Through the acquisition of disentangled representations that separate domain-specific and domain-invariant features, VAE-DG significantly enhances the model's ability to generalize across different domains, leading to improved performance and robustness. Our main contributions in this work are as follows:
1. We aim to inspire researchers to explore and leverage a wider spectrum of techniques, particularly simpler methods, in their pursuit of effective solutions for the challenging task of robustifying the DR classification problem.
2. To our knowledge, we are the first to explore the potential of harnessing VAEs for learning cross-domain generalizable models for the Diabetic Retinopathy classification task. Our extensive analysis reveals compelling evidence of its superiority over state-of-the-art DG approaches for the DR classification task.
3. We report our results using the training-domain validation criterion for model selection, which is an appropriate and widely-adopted model selection method for DG [10], thereby rectifying the existing work's [5] important limitations. To this end, we encourage future studies to conduct fair comparisons
with our methodology, establishing a standard for evaluating advancements in DG for the DR classification task.
## 2 Related Works
**DG for DR classification:** DRGen [5] could be considered as the first work that tackles the DG challenge in DR classification, by combining the Stochastic Weight Averaging Densely (SWAD) [9] and Fishr [24] techniques. SWAD is a DG technique that promotes flatter minima and reduces gradient variance, while Fishr is a regularization method that aligns gradient variances across different source domains based on the relationship between gradient covariance, Hessian of the loss, and Fisher information. While the work by [5] played a pivotal role in bringing attention to this problem task, it should be noted that the results presented by the authors were based on target-domain-validation, which does not align with the established protocols of evaluating DG methods, as outlined by the widely recognized DomainBed framework [10]. We rectify this limitation by adopting the appropriate model selection strategy of source-domain validation, in accordance with accepted practices in the field of DG research.
**DG using feature disentanglement:** DG approaches based on feature disentanglement aim to disentangle the feature representation into distinct components, including a domain-shared or invariant feature and a domain-specific feature [29]. Methods like [14, 19] focus on disentangling multiple factors of variation, such as domain information, category information, or style; while this can be beneficial for certain applications, this may lead to limited interpretability and difficulties in finding an optimal balance between the different disentangled
Figure 1: A sample of fundus images from MESSIDOR-2 (top row) and EyePACS (bottom row) datasets. For an untrained expert, it is challenging to sometimes visually see the differences between the different grades, making the DR classification task challenging. Each dataset exhibits a diverse range of variations in the presentation of fundus images and furthermore, the provided sample from the two domains clearly demonstrates a significant domain shift.
factors, causing complex training procedures. In contrast, our method provides a more holistic approach to feature disentanglement and, with appropriate regularization techniques, can achieve stable training and straightforward optimization. [22, 23, 31] used fine-grained domain disentanglement, a Unified Feature Disentanglement Network, and semantic-variational disentanglement, respectively, which introduce additional complexity to the model architecture and often lead to increased computational costs during training and inference. On the contrary, our methodology, which is both effective and simpler, offers a more direct and efficient approach.
## 3 Method
**Overview:** In this section, we describe in detail how we exploit conventional variational autoencoders to tackle the challenge of domain generalization by revisiting their operational principles and integrating them into our VAE-DG approach. This showcases their effectiveness in disentangling intricate DR datasets, for which we hypothesize that the optimally disentangled latent space contains domain-shared features, thereby yielding a substantial performance boost compared to existing domain generalization state-of-the-art methods. Our overall pipeline is shown in Fig. 2.
**Problem settings:** Domain generalization for DR classification is defined within a framework that involves a collection of source domains denoted as \(\{S_{d}\}_{d=1}^{N}\), where \(N\) is the number of source domains. Each source domain \(S_{d}=\{({x_{i}}^{d},{y_{i}}^{d})\}_{i=1}^{n}\) comprises i.i.d data points, sampled from a probability distribution \(p(X_{d},Y_{d})\). \(Y_{d}\) is the target random variable corresponding to the progression of DR, while \(X_{d}\) is the input fundus image random variable, with each data point \({({x_{i}}^{d},{y_{i}}^{d})}\) representing an observation from its respective domain. The primary objective
Figure 2: Overview of our proposed method VAE-DG for domain generalization with a variational autoencoder by manipulating the disentangled fundus image representations to achieve a domain generalization objective.
in domain generalization thus becomes acquiring a predictor that exhibits robust performance on an unseen target domain \(T_{d}\)[10].
**Proposed method (VAE-DG):** To achieve domain generalization using our VAE-DG, we manipulate two variables (from the pooled source domains \(\{S_{d}\}_{d=1}^{N}\)), namely the input fundus image \(X_{d}\) and the latent variable \(Z_{d}\). Considering only singular data points, \(z_{i}\) is drawn from the distribution \(z_{i}\sim p(z)\) and \(x_{i}\) is drawn from \(x_{i}\sim p(x|z)\), and their joint distribution is given by \(p(x,z)=p(x|z)p(z)\). The main goal of this probabilistic model thus becomes the inference problem of learning a distribution \(p(z|x)\) over the latent variables, from which we can then sample to generate new fundus images, which we denote as \(x^{\prime}\). In principle, this posterior distribution \(p(z|x)\) can be obtained using Bayes' theorem [15].
However, we utilize a 256-dimensional latent vector for the fundus images, for which the marginal \(p(x)\) is intractable to compute directly. Therefore, instead of directly calculating \(p_{\theta}(z|x)\), we resort to Variational Inference [8] and approximate this posterior with a tractable distribution \(q_{\phi}(z|x)\) that has a functional form. We use the Gaussian distribution as the approximation, such that the problem decomposes to learning the parameters \(\phi=(\mu,\sigma^{2})\) instead of \(\theta\). By incorporating this Gaussian prior as a constraint on the learned latent variables, our VAE-DG is coerced into disentangling the underlying factors of variation in the data. We then use the Kullback-Leibler (KL) divergence to measure how close the approximation is to the true distribution. Minimizing this KL divergence simultaneously approximates \(p_{\theta}(z|x)\), and manipulating the KL divergence expression (the complete derivation of which is beyond the scope of this discussion but can be found in [20]) yields Equation 1:
\[\log p_{\theta}(x)-D_{\text{KL}}\left(q_{\phi}(z|x)||p_{\theta}(z|x)\right)=\mathbb{E}_{z}\left[\log p_{\theta}(x|z)\right]-D_{\text{KL}}\left(q_{\phi}(z|x)||p(z)\right) \tag{1}\]
where \(\mathbb{E}_{z}\left[\log p_{\theta}(x|z)\right]-D_{\text{KL}}\left(q_{\phi}(z|x)||p(z)\right)\) is known as the Evidence Lower Bound (ELBO); since the KL divergence on the left-hand side is non-negative, the ELBO is a lower bound on the log evidence. Consequently, maximizing the ELBO indirectly minimizes \(D_{\text{KL}}\left(q_{\phi}(z|x)||p_{\theta}(z|x)\right)\). Therefore, the objective function of a classical variational autoencoder can be expressed as:
\[\mathcal{L}(\theta,\phi;x)=-\mathbb{E}_{q_{\phi}(z|x)}\left[\log p_{\theta}( x|z)\right]+D_{\text{KL}}\left(q_{\phi}(z|x)||p(z)\right) \tag{2}\]
where the objective function is with respect to \(\theta\) and \(\phi\) which are the learnable parameters of the generative and inference models, respectively [16, 17].
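Since the full derivation is deferred to [20], the standard manipulation behind Equation 1 can be sketched for reference, using \(p_{\theta}(z|x)=p_{\theta}(x,z)/p_{\theta}(x)\) and \(p_{\theta}(x,z)=p_{\theta}(x|z)p(z)\):

\[D_{\text{KL}}\left(q_{\phi}(z|x)||p_{\theta}(z|x)\right)=\mathbb{E}_{q_{\phi}(z|x)}\left[\log q_{\phi}(z|x)-\log p_{\theta}(x,z)\right]+\log p_{\theta}(x),\]

so that rearranging and expanding \(\log p_{\theta}(x,z)\) gives

\[\log p_{\theta}(x)-D_{\text{KL}}\left(q_{\phi}(z|x)||p_{\theta}(z|x)\right)=\mathbb{E}_{q_{\phi}(z|x)}\left[\log p_{\theta}(x|z)\right]-D_{\text{KL}}\left(q_{\phi}(z|x)||p(z)\right).\]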
For our VAE-DG we couple the classical variational autoencoder objective \(\mathcal{L}(\theta,\phi;x)\) with empirical risk minimization \(\sum_{i=1}^{n}\ell(f(x_{i}),y_{i})\)[28] to ensure the optimization of the original target task as illustrated in Equation 3, while simultaneously manipulating the domain-invariant latent variables acquired from the probabilistic encoder. Our final objective function consists of three distinct terms; the first term, denoted by \(-\mathbb{E}_{q_{\phi}(z|x)}\left[\log p_{\theta}(x|z)\right]\), serves as the reconstruction term, which quantifies the difference between the original fundus image
and the reconstructed image \({x_{i}}^{\prime}\). The second term, \(\beta D_{\text{KL}}\left(q_{\phi}(z|x)||p(z)\right)\), is the regularizer term that minimizes the KL divergence between the encoder distribution \(q_{\phi}(z|x)\) and the prior distribution \(p(z)\), thereby promoting the learned latent representation \(z_{i}\) to follow the prior distribution; the strength of this regularization is controlled by the hyperparameter \(\beta\). The third term, \(\sum_{i=1}^{n}\ell(f(x_{i}),y_{i})\), assesses the difference between the true class labels \(y_{i}\) and the predicted class labels \(f(x_{i})\), and the parameter \(\alpha\) serves as its weight.
\[\mathcal{L}=-\mathbb{E}_{q_{\phi}(z|x)}\left[\log p_{\theta}(x|z)\right]+\beta D_{\text{KL}}\left(q_{\phi}(z|x)||p(z)\right)+\alpha\sum_{i=1}^{n}\ell(f(x_{i}),y_{i}) \tag{3}\]
To optimize \(\mathcal{L}\) we use stochastic gradient descent, incorporating the alternate optimization trick [17], since we need to learn the parameters of both \(\theta\) and \(\phi\).
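A minimal PyTorch sketch of Equation 3 may make the objective concrete. The `encoder`, `decoder`, and `classifier` modules are placeholders (the paper's encoder backbone is a ResNet-50), the Gaussian reparameterization and the MSE reconstruction likelihood are standard assumptions rather than stated details, and the default weights follow the values reported in the implementation details:

```python
import torch
import torch.nn.functional as F

def vae_dg_loss(x, y, encoder, decoder, classifier, beta=50_000.0, alpha=50_000.0):
    mu, logvar = encoder(x)                                   # q_phi(z|x) parameters
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
    x_rec = decoder(z)                                        # p_theta(x|z)

    # Reconstruction term: -E_q[log p(x|z)], a Gaussian likelihood up to a constant.
    rec = F.mse_loss(x_rec, x, reduction="sum")

    # KL(q_phi(z|x) || N(0, I)), in closed form for diagonal Gaussians.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

    # Supervised term on the DR grade, weighted by alpha.
    ce = F.cross_entropy(classifier(z), y, reduction="sum")

    return rec + beta * kl + alpha * ce
```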
### Experiments
**Datasets:** We utilized four openly accessible datasets, namely EyePACS [3], APTOS [1], Messidor [2], and Messidor-2 [2], which according to their sources were obtained from different locations and populations, resulting in a notable domain shift due to variations in instruments, conditions, settings, and environmental contexts across datasets. Each dataset comprises five distinct classes, with the exception of Messidor, which lacks the fifth class. The numbers of images in these sources are 88,702, 3,657, 1,200, and 1,744, respectively. The original images vary in size but are standardized to 224×224 pixels. Owing to the inherent characteristics of real-world datasets, class representation is imbalanced across all datasets, with class 0 being the most dominant and class 4 the rarest.
**Implementation and evaluation criteria:** Our choice for the encoder architecture is the ImageNet-pretrained ResNet-50 [11] backbone. This is substantiated by existing literature [18], wherein transfer learning, despite the domain gap, has been demonstrated to accelerate the development of effective models even in medical imaging. We jointly train on three source domains, holding out 20% of the source-domain data as the validation set, and finally evaluate on the unseen target domain using the best training-domain-validation model; this way we truly evaluate the domain generalizability of our model. The model is trained for 15,000 steps with the Adam optimizer, a learning rate of 0.0001, a 256-dimensional \(z\) latent vector, and a batch size of 66 drawn from the three source domains. To combat class imbalance we utilize resampling. \(\beta\) and \(\alpha\) are set to 50,000 to achieve a weighting similar to the magnitude of the reconstruction term. Accuracy is used as the evaluation metric, in line with established DG benchmarks [10]. All our experiments were run on a 24 GB Quadro RTX 6000 GPU. Our code is available at [https://github.com/sharonchokuwa/VAE-DG](https://github.com/sharonchokuwa/VAE-DG).
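The leave-one-domain-out protocol with training-domain validation can be sketched as follows; the dataset contents here are toy placeholders:

```python
import random

def split_domains(domains: dict, target: str, val_frac: float = 0.2):
    """Hold out one target domain; split the remaining domains into train/val."""
    train, val = [], []
    for name, data in domains.items():
        if name == target:
            continue
        data = list(data)
        random.shuffle(data)
        k = int(len(data) * val_frac)
        val.extend(data[:k])      # training-domain validation set
        train.extend(data[k:])
    return train, val, list(domains[target])

# Example with toy data: each domain is a list of (image, label) pairs.
toy = {d: [(f"{d}-img{i}", i % 5) for i in range(10)]
       for d in ["APTOS", "EyePACS", "Messidor", "Messidor-2"]}
train, val, test = split_domains(toy, target="APTOS")
```

Model selection uses only `val` (drawn from the source domains), so the target domain remains truly unseen until the final evaluation.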
**Baselines:** We compare our method with the naive Empirical Risk Minimization (ERM) [10; 28] and with state-of-the-art domain generalization methods for this
problem task, namely DRGen [5] and Fishr [24]. To ensure a fair comparison, we adopt the same backbone and learning rate for all methods, except for DRGen, which we reproduce using its originally proposed learning rate of 0.0005, as performance decreased when using 0.0001. The other method-specific hyperparameters were kept as proposed in the respective works.
**Results and Discussion:** Table 1 indicates that VAE-DG exhibits the highest average accuracy of \(68.11\pm 1.2\%\), which represents an 8.11% improvement over DRGen, 2.1% over Fishr, and 1.3% over ERM. Furthermore, VAE-DG demonstrates superior performance across most domains (APTOS, EyePACS, and Messidor-2) and exhibits the lowest standard error of 1.2%, indicating its relative robustness compared to the other methods. VAE-DG's enhanced performance solidifies the advantageous characteristics of this simpler approach whose latent space facilitates the explicit disentangling of domain-specific and domain-invariant features, ultimately improving target domain generalization. The oracle results [10] of VAE-DG are presented as a reference for the upper bound of the method, rather than for direct comparison, indicating that our proposed method achieves a 1.8% reduction compared to the upper bound.
ERM outperforms more sophisticated methods (DRGen and Fishr) because it is a simple approach and does not make strong assumptions about source-target domain relationships; it focuses on optimizing performance on available source domains and leveraging multiple domains to capture a wider range of variations, showcasing its ability to generalize to unseen target domains (if the domain shift is small [10]).
Overall, the relatively poor performances of the DRGen and Fishr methods, which attain 60.00% and 66.01% average accuracy respectively, can be attributed to the fact that these methods often impose specific constraints or assumptions about the domain shift, which can limit their performance in scenarios that deviate from those assumptions. The lack of robustness of such methods to variations in the data is also evidenced by the large standard error (16.3%) of DRGen's Messidor-2 performance.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
Method & APTOS & EyePACS & Messidor & Messidor-2 & Avg. \\ \hline
ERM & \(63.75\pm 5.5\) & \(70.22\pm 1.6\) & \(\mathbf{66.11}\pm 0.8\) & \(67.38\pm 1.0\) & \(66.86\pm 2.2\) \\
DRGen & \(57.06\pm 0.9\) & \(72.52\pm 1.3\) & \(61.25\pm 4.2\) & \(49.16\pm 16.3\) & \(60.00\pm 5.7\) \\
Fishr & \(62.89\pm 5.0\) & \(71.92\pm 1.3\) & \(65.69\pm 1.1\) & \(63.54\pm 3.8\) & \(66.01\pm 2.8\) \\
VAE-DG & \(\mathbf{66.14}\pm 1.1\) & \(\mathbf{72.74}\pm 1.0\) & \(65.90\pm 0.7\) & \(\mathbf{67.67}\pm 2.0\) & \(\mathbf{68.11}\pm\mathbf{1.2}\) \\ \hline
\multicolumn{6}{l}{\textit{Oracle Results}} \\
VAE-DG & \(68.54\pm 2.5\) & \(74.30\pm 0.2\) & \(66.39\pm 1.3\) & \(70.27\pm 1.2\) & \(69.87\pm 1.3\) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Comparison of our proposed method with domain generalization methods for DR classification. Each experiment was repeated thrice, employing distinct random seeds (0, 1, 2), and the average accuracy (Avg.) and corresponding standard deviation are reported for each target domain.
In contrast to the findings of [5], our extended analysis presented in Table 2 reveals a significant decline in model performance by 23.14% when incorporating SWAD, aligning with [9]'s observation that SWAD is not a perfect or theoretically guaranteed solver for flat minima. We explored the influence of a larger network architecture (ResNet-152) and the obtained results indicate that a larger network architecture can improve image reconstruction quality but has a negative impact on the primary DG objective, as evidenced by the 1.5% drop.
**Ablation studies:** In order to comprehensively assess the individual contributions of each component towards our DG objective, we conducted ablation studies, as summarized in Table 2. Our investigation encompassed the following aspects: (i) latent dimension: varying the size of the latent dimension (64, 128, 256), (ii) fixed latent space: evaluating the impact of a fixed latent vector, (iii) determining the impact of the weighting of the KL divergence and classification terms (\(\beta\) and \(\alpha\)), (iv) assessing the effect of the reconstruction term, and (v) examining the influence of the KL divergence term.
We noticed that a larger latent dimension of 256 leads to better results, potentially due to its ability to effectively bottleneck information while preserving essential features. The performance difference between a fixed latent vector and a randomly sampled one is not very large, although using a fixed latent space reduces the standard error by nearly half, suggesting that randomly sampled vectors introduce additional variability that hinders the disentanglement of domain-invariant features. Notably, removing the reconstruction and KL divergence
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
 & APTOS & EyePACS & Messidor & Messidor-2 & Avg. & Diff. \\ \hline
\multicolumn{7}{l}{\textbf{Extended Analysis}} \\ \hline
VAE-DG ResNet-152 & 61.45\(\pm\)8.2 & 71.44\(\pm\)3.1 & 65.94\(\pm\)1.0 & 67.81\(\pm\)2.6 & 66.66\(\pm\)3.7 & 1.45(\(\downarrow\)) \\ \hline
VAE-DG + SWAD & 55.66\(\pm\)8.8 & 73.52\(\pm\)0.0 & 34.24\(\pm\)12.2 & 16.48\(\pm\)12.0 & 44.97\(\pm\)8.3 & 23.14(\(\downarrow\)) \\
ERM + SWAD & 54.93\(\pm\)0.6 & 71.35\(\pm\)0.5 & 64.76\(\pm\)0.7 & 58.48\(\pm\)3.1 & 62.38\(\pm\)1.2 & 4.5(\(\downarrow\)) \\ \hline
\multicolumn{7}{l}{\textbf{Ablation Studies}} \\ \hline
Latent-dim 64 & 62.15\(\pm\)3.1 & 73.80\(\pm\)0.4 & 66.42\(\pm\)2.1 & 68.98\(\pm\)3.0 & 67.84\(\pm\)2.2 & 0.27(\(\downarrow\)) \\
Latent-dim 128 & 62.61\(\pm\)3.5 & 73.64\(\pm\)0.6 & 66.60\(\pm\)1.9 & 66.09\(\pm\)2.2 & 67.23\(\pm\)2.0 & 0.88(\(\downarrow\)) \\ \hline
Fixed latent space & 63.87\(\pm\)0.6 & 73.44\(\pm\)0.8 & 66.46\(\pm\)0.6 & 69.39\(\pm\)0.8 & 68.29\(\pm\)0.7 & 0.18(\(\uparrow\)) \\ \hline
\(\beta\), \(\alpha\) = 10,000 & 64.38\(\pm\)1.8 & 73.17\(\pm\)0.5 & 65.42\(\pm\)0.4 & 69.27\(\pm\)4.0 & 68.06\(\pm\)1.7 & 0.05(\(\downarrow\)) \\
\(\beta\), \(\alpha\) = 100,000 & 62.50\(\pm\)3.5 & 72.30\(\pm\)1.6 & 66.56\(\pm\)1.3 & 67.88\(\pm\)1.0 & 67.31\(\pm\)1.8 & 0.80(\(\downarrow\)) \\ \hline
No Recon Loss & 63.44\(\pm\)3.9 & 70.62\(\pm\)0.8 & 66.25\(\pm\)0.8 & 65.21\(\pm\)1.4 & 66.38\(\pm\)1.7 & 1.73(\(\downarrow\)) \\
No KL Divergence & 68.29\(\pm\)2.3 & 69.98\(\pm\)4.3 & 66.60\(\pm\)1.1 & 66.93\(\pm\)1.6 & 67.95\(\pm\)2.3 & 0.17(\(\downarrow\)) \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Analysis and ablation studies. Average accuracy (Avg.) values represent the mean accuracy obtained from three independent trials. The “Diff.” column indicates the performance variation compared to our main experiments shown in Table 1. A decrease in performance is denoted by (\(\downarrow\)), while an increase is denoted by (\(\uparrow\)).
terms in the model's objective leads to a decrease in performance, emphasizing the importance of incorporating these regularizations. Furthermore, experimentation with \(\beta\) and \(\alpha\) values within the range [10,000, 50,000, 100,000] reveals that excessively high or low values are suboptimal.
## 4 Conclusion
In this paper, we explored the potential of classical variational autoencoders for domain generalization in Diabetic Retinopathy classification. We demonstrate that this simple approach provides effective results and outperforms contemporary state-of-the-art methods. By strictly following the established evaluation protocols of DG, we also addressed the important limitations in the evaluations of the existing method. Our study encourages the medical imaging community to consider simpler methods in order to realize robust models.
|
2309.12608 | SPGM: Prioritizing Local Features for enhanced speech separation
performance | Dual-path is a popular architecture for speech separation models (e.g.
Sepformer) which splits long sequences into overlapping chunks for its intra-
and inter-blocks that separately model intra-chunk local features and
inter-chunk global relationships. However, it has been found that inter-blocks,
which comprise half a dual-path model's parameters, contribute minimally to
performance. Thus, we propose the Single-Path Global Modulation (SPGM) block to
replace inter-blocks. SPGM is named after its structure consisting of a
parameter-free global pooling module followed by a modulation module comprising
only 2% of the model's total parameters. The SPGM block allows all transformer
layers in the model to be dedicated to local feature modelling, making the
overall model single-path. SPGM achieves 22.1 dB SI-SDRi on WSJ0-2Mix and 20.4
dB SI-SDRi on Libri2Mix, exceeding the performance of Sepformer by 0.5 dB and
0.3 dB respectively and matches the performance of recent SOTA models with up
to 8 times fewer parameters. Model and weights are available at
huggingface.co/yipjiaqi/spgm | Jia Qi Yip, Shengkui Zhao, Yukun Ma, Chongjia Ni, Chong Zhang, Hao Wang, Trung Hieu Nguyen, Kun Zhou, Dianwen Ng, Eng Siong Chng, Bin Ma | 2023-09-22T03:48:50Z | http://arxiv.org/abs/2309.12608v2 | # SpGM: Prioritizing Local Features for Enhanced Speech Separation Performance
###### Abstract
Dual-path is a popular architecture for speech separation models (e.g. Sepformer) which splits long sequences into overlapping chunks for its intra- and inter-blocks that separately model intra-chunk local features and inter-chunk global relationships. However, it has been found that inter-blocks, which comprise half a dual-path model's parameters, contribute minimally to performance. Thus, we propose the Single-Path Global Modulation (SPGM) block to replace inter-blocks. SPGM is named after its structure consisting of a parameter-free global pooling module followed by a modulation module comprising only 2% of the model's total parameters. The SPGM block allows all transformer layers in the model to be dedicated to local feature modelling, making the overall model single-path. SPGM achieves 22.1 dB SI-SDRi on WSJ0-2Mix and 20.4 dB SI-SDRi on Libri2Mix, exceeding the performance of Sepformer by 0.5 dB and 0.3 dB respectively and matches the performance of recent SOTA models with up to 8 times fewer parameters.
Jia Qi Yip\({}^{12}\), Shengkui Zhao\({}^{1}\), Yukun Ma\({}^{1}\), Chongjia Ni\({}^{1}\), Chong Zhang\({}^{1}\), Hao Wang\({}^{1}\),
Trung Hieu Nguyen\({}^{1}\), Kun Zhou\({}^{1}\), Dianwen Ng\({}^{12}\), Eng Siong Chng\({}^{2}\), Bin Ma\({}^{1}\)
\({}^{1}\)Speech Lab of DAMO Academy, Alibaba Group
\({}^{2}\)Nanyang Technological University (NTU), Singapore
Keywords: speech separation, transformer, attentive pooling, feature modulation
## 1 Introduction
Single-channel Speech Separation (SS) is the task of obtaining clean, single-speaker speech from a speech mixture of multiple overlapping speakers. To manage the long sequence lengths necessitated by the waveform-to-waveform prediction task of SS, models reduce the sequence using UNet-like or dual-path architectures. UNet-like architectures championed by ConvTasNet [1] consist of a series of layers that model the feature sequence at different time scales before recombining them for the final output. The dual-path architecture championed by DPRNN [2] splits the long input into a series of overlapping chunks and uses an intra-block to model local features within a chunk, followed by an inter-block of the same size to model global features across chunks. Both approaches have remained popular, with UNet-like models [3][4][5][6] generally delivering better efficiency while dual-path models [7][8][9] continue to break performance records.
Given the impressive results achieved by dual-path models, some models have also sought to improve the dual-path architecture [10][11][12]. A known issue with the dual-path architecture is that the inter-block comprises half of the model's total parameters but does not deliver commensurate performance [7]. Recently, [10] validated this by training an auxiliary model to adaptively prune tokens from Sepformer that did not contribute to performance. While there was minimal pruning in the intra-block, most of the inter-block features were pruned. This suggests that the detailed modeling by transformers in the inter-block is excessive.
Thus, [10] replaced the inter-block with chunk-averaged memory tokens between each intra-block, creating an efficient model at the cost of poorer performance. Meanwhile, QDPN [11] used deep down- and up-sampling layers in the inter-block to efficiently extract global features, delivering good performance but with significantly increased model size.
In this paper, we address efficient global modeling in dual-path SS models through the Single-Path Global Modulation (SPGM) block. SPGM performs global pooling followed by feature modulation, as shown in Figure 1, requiring only 0.5M trainable parameters in 2 linear layers that scale with embedding size. SPGM's simple and efficient global modeling enables additional intra-block layers, improving performance. SPGM achieves an SI-SDRi of 22.1 dB on the WSJ0-2Mix dataset and 20.4 dB on the Libri2Mix dataset, outperforming the strong baseline of Sepformer by 0.5 dB and 0.3 dB respectively. SPGM also matches the performance of SOTA models, QDPN [11] and SFSRNet [13], using fewer parameters.
Figure 1: Overview of the proposed SPGM model. The SPGM block (yellow box) replaces an inter-block and represents our key contribution.
## 2 Methodology
### Model Architecture
The design of SPGM\({}^{1}\), shown in Figure 1, aims to model global information as efficiently as possible so that more model layers can be dedicated to local feature modeling. This is driven by work [7][10] which showed that the Sepformer inter-blocks contribute minimally to overall performance, from which we infer that the global information the model requires across chunks is relatively simple and does not need to be modeled with transformer layers.
Footnote 1: Code will be released upon publication
To model global information, a block would need to perform 2 tasks. Firstly, the model must be able to model global, time-independent information from across the feature sequence. Secondly, it must pass the global information back to the entire feature sequence. In SPGM, detailed in Figure 2, the former is achieved in the pooling module through a parameter-free pooling operation while the latter is achieved in the modulation module through a mechanism which requires only 2 trainable linear layers.
The rest of the model as shown in Figure 1 consists of standard SS model components. The Encoder and Decoder are a single layer of 1D convolution and 1D transposed convolution respectively. Refer to Section 3.2 for implementation details of the model.
### Single Path Global Modulation
The SPGM pooling module, shown in the orange box in Figure 2, consists of chunk pooling and inter pooling. The overall change in dimensions across the SPGM pooling module is further illustrated in Figure 3. Chunk pooling obtains a single vector of embedding size from each chunk. Inter pooling is a per-channel average and is taken across the chunk dimension, resulting in a single global vector of embedding size. This global vector is then used as the input for the SPGM modulation module. We experiment with different chunk pooling methods, which we discuss in Section 2.3, while inter pooling is always done using simple averaging to give each chunk an equal weight in the final global vector.
The SPGM modulation module, shown in the blue box in Figure 2, accepts the features from the intra-block and the global vector. The modulation aims to condition each feature in the sequence by a time-independent global vector so that subsequent intra-blocks can refine the local features based on the imbued global information. The modulation block also has to do this with minimal computation and parameters.
The SPGM modulation module is designed based on feature-wise linear modulation [14] which has been used to fuse multi-scale information in recent SS models like TDANet [5] and S4M [6] to good effect and is thus highly suitable for its role in SPGM as well.
During the forward pass, the modulation module first refines the global vector from the pooling module. Our method uses a linear path and a non-linear path, each implemented with a single \(N\times N\) linear layer, where \(N\) is the channel size of the model. Modulation is then achieved through element-wise multiplication of the resulting embedding vectors with the feature sequence.
The modulation operation can be expressed as the following equation:
\[x_{o}=\sigma(W_{s}x_{emb}^{T})\otimes x_{f}+W_{g}x_{emb}^{T}\otimes x_{f} \tag{1}\]
where \(x_{emb}\in\mathbb{R}^{N}\) is the [1,1,N] global vector from the global pooling module, \(W_{s}\) and \(W_{g}\) are the weights of linear layers, as shown in Figure 2, each with size \(N\times N\), and \(x_{f},x_{o}\in\mathbb{R}^{K\times S\times N}\) are the feature input and output of the SPGM block respectively. \(\sigma\) represents the sigmoid activation function. K is the number of time steps, S is the number of chunks and N is the embedding size.
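For concreteness, a PyTorch sketch of the modulation module in Equation 1 follows; the module and layer names are ours, and details beyond the equation (e.g., bias terms) are assumptions:

```python
import torch
import torch.nn as nn

class SPGMModulation(nn.Module):
    def __init__(self, n_channels: int = 256):
        super().__init__()
        self.w_s = nn.Linear(n_channels, n_channels)  # sigmoid (non-linear) path
        self.w_g = nn.Linear(n_channels, n_channels)  # linear path

    def forward(self, x_f: torch.Tensor, x_emb: torch.Tensor) -> torch.Tensor:
        # x_f: [K, S, N] feature sequence; x_emb: [1, 1, N] global vector.
        gate = torch.sigmoid(self.w_s(x_emb))   # sigma(W_s x_emb), shape [1, 1, N]
        scale = self.w_g(x_emb)                 # W_g x_emb, shape [1, 1, N]
        # Element-wise modulation broadcasts the global vector over K and S.
        return gate * x_f + scale * x_f
```

Note that the two paths could be algebraically fused into a single element-wise factor \(\sigma(W_{s}x_{emb}^{T})+W_{g}x_{emb}^{T}\); keeping them separate here simply mirrors Equation 1.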
### Pooling Methods
In this work we experiment with two different pooling methods for chunk pooling in the SPGM global pooling module: last element selection and attentive pooling.
Figure 3: Illustration of the change in dimensions across the chunk and inter pooling process in the global pooling module.
Figure 2: The SPGM block consists of the global pooling module (orange) and the modulation module (blue). K is the number of time steps, S is the number of chunks and N is the embedding size. Refer to Equation 1 for the detailed implementation of the modulation module.
Last Element Selection (LE) refers to the selection of the last element of each chunk as the global vector as shown in Figure 4. This works because the model creates overlapping chunks, so the last element of each chunk would be redundant and can be repurposed by the preceding transformer layer as a specialized global vector. Through the attention mechanism, the intra-block transformer layers can adaptively store global information in the last element, which is both initialized by and superimposed on the feature sequence. Furthermore, this method requires no additional trainable parameters. Unlike the recently proposed Papez [10], which utilizes memory tokens in a single-path model, our method does not require an explicit memory token, eliminating the need for additional hyperparameter tuning over the random initialization of additional memory parameters.
Attentive Pooling (AP) seeks to develop an adaptive weight for each feature in a chunk for aggregation. It passes the features through a linear layer which outputs a single value for the weight of the feature in the final aggregated vector. Our implementation follows the attentive pooling method commonly used in speaker verification models [15]. Although this method is not strictly parameter-free, the parameters required for this pooling method are exactly 256 since it only requires a single linear layer to map an input of embedding size (256) to a single value, which is negligible in the context of a 26M parameter model.
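A sketch of both chunk-pooling options and the subsequent inter pooling, with shapes following Figure 3 ([K, S, N] → [1, S, N] → [1, 1, N]), is given below; anything beyond the shapes and the single scoring layer described in the text is an assumption:

```python
import torch
import torch.nn as nn

def le_chunk_pool(x: torch.Tensor) -> torch.Tensor:
    """Last Element selection: take the last time step of each chunk."""
    return x[-1:, :, :]                           # [K, S, N] -> [1, S, N]

class AttentiveChunkPool(nn.Module):
    """Attentive pooling: one linear layer scores each frame within a chunk."""
    def __init__(self, n_channels: int = 256):
        super().__init__()
        self.score = nn.Linear(n_channels, 1)     # the ~256 extra parameters

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.score(x), dim=0)   # weights over the K frames per chunk
        return (w * x).sum(dim=0, keepdim=True)   # [K, S, N] -> [1, S, N]

def inter_pool(chunk_vecs: torch.Tensor) -> torch.Tensor:
    """Inter pooling: per-channel average across chunks."""
    return chunk_vecs.mean(dim=1, keepdim=True)   # [1, S, N] -> [1, 1, N]
```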
## 3 Experiments
### Datasets
The WSJ0-2Mix [16] and Libri2Mix [17] datasets are standard benchmarks for SS. The WSJ0-2Mix dataset consists of mixtures drawn from the WSJ0 corpus. The Libri2Mix dataset is generated from the LibriSpeech train-clean-360 Corpus. In our experiments, both datasets have a sampling rate of 8kHz. Additionally, we use 10s cropping on the WSJ0-2Mix dataset and 5s cropping on the Libri2Mix dataset.
### Model Configuration
The original Sepformer's dual-path architecture consists of intra- and inter-blocks of 8 transformer layers each. Each Sepformer block consists of 1 intra-block and 1 inter-block. The model has 2 Sepformer blocks in total: 16 intra-transformer layers and 16 inter-transformer layers.
To enable direct comparisons, we train as baselines two intra-only Sepformer variants, labeled IntraSepformer, with 16 and 32 intra-transformer layers respectively. Our models, SPGM-x-S and SPGM-x (x denoting the chunk pooling method, either LE or AP), also have 16 and 32 intra-transformer layers respectively, producing the 4 SPGM models reported in Table 1.
The encoder and decoder of all the models each have a kernel size of 16 and stride of 8. In all transformer layers, the number of channels is 256, while the size of the hidden feed-forward network is 1024. The number of heads in each transformer layer is 8.
### Training Parameters
All model training is performed using the SpeechBrain framework [18]. We trained for a maximum of 200 epochs with a starting learning rate of \(1.5\times 10^{-4}\) using the Adam optimizer. The learning rate is halved with a patience of 3. Training is stopped when the maximum number of epochs is reached or when the learning rate reaches a minimum of \(1.0\times 10^{-8}\). During training, data augmentation using speed perturbation with a random factor between 95% and 105% is used. The utterance-level Permutation Invariant Training loss [19] is used to update the weights of the model during training, and the performance of the model is measured using the standard scale-invariant signal-to-distortion ratio improvement (SI-SDRi) [20].
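For reference, SI-SDR and utterance-level PIT can be sketched as below; this is a simplified illustration rather than the exact implementations of [19, 20] (the training loss is the negative of the best-permutation SI-SDR):

```python
import itertools
import torch

def si_sdr(est: torch.Tensor, ref: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Scale-invariant SDR in dB for 1-D signals."""
    ref = ref - ref.mean()
    est = est - est.mean()
    proj = (est @ ref) / (ref @ ref + eps) * ref   # projection of est onto ref
    noise = est - proj
    return 10 * torch.log10((proj @ proj) / (noise @ noise + eps))

def pit_si_sdr(est: torch.Tensor, ref: torch.Tensor) -> torch.Tensor:
    """Best mean SI-SDR over speaker permutations; est/ref: [n_spk, T]."""
    n = est.shape[0]
    scores = [
        torch.stack([si_sdr(est[p[i]], ref[i]) for i in range(n)]).mean()
        for p in itertools.permutations(range(n))
    ]
    return torch.stack(scores).max()
```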
## 4 Results
### Effectiveness of the SPGM block
The results of the experiments reported in Table 1 demonstrate that although local feature modeling is important, some global information modeling is required, which can be provided by the SPGM block. Comparing the SPGM-x-S models, we see that the addition of SPGM blocks significantly improves performance over the IntraSepformer variants, by up to 2.1 dB on WSJ0-2Mix and 1.4 dB on Libri2Mix. The SPGM-x variants also outperform their IntraSepformer counterparts. This shows that naively increasing the number of intra-transformer layers without global modeling does not lead to better performance.
Among the different chunk pooling methods used in the SPGM-x-S models, Attentive Pooling performs best on the Libri2Mix dataset, while Last Element selection performs best on the WSJ0-2Mix dataset. However, in the case of WSJ0-2Mix, the performance gap of 0.1 dB is smaller than the 0.4 dB gap seen on Libri2Mix.
Figure 4: Illustration of the last element selection (LE) pooling method using a chunk size of 4 with a 50% overlap on a single channel. LE selects the last element of each chunk as the global vector while the remaining features are not used to derive the global embedding.
When the number of intra-transformer layers is increased, the effect of the chunk pooling mechanism disappears: SPGM-LE and SPGM-AP achieve identical performance of 22.1 dB on WSJ0-2Mix and 20.4 dB on Libri2Mix. This could suggest that attentive pooling is performed implicitly within the intra-layers themselves and does not need to be done in the SPGM block: with the increased depth of the model, the additional layers allow the last element to gather global information from the chunk on par with the attention mechanism used in attentive pooling.
### Comparison with Recent Models
The results of the SPGM model in comparison with recent and past state-of-the-art models are reported in Table 2. Since SPGM-LE and SPGM-AP have the same performance, they are not differentiated here. On the Libri2Mix dataset, our model achieves the same performance of 20.4 dB SI-SDRi as the much larger SFSRNet [13]. On the WSJ0-2Mix dataset, our model achieves 22.1 dB SI-SDRi, on par with QDPN [11], which is also a very large model with 8 times the number of parameters.
Compared with the original Sepformer model, we achieved a performance improvement of 0.5 dB on WSJ0-2Mix and 0.3 dB on Libri2Mix with only a 0.5M increase in the number of parameters\({}^{2}\). This is a modest increase in the number of parameters given the performance improvement, especially in comparison with SFSRNet and QDPN, which achieve the same performance improvement relative to Sepformer but have at least double the parameters.
Footnote 2: These 0.5M parameters are the parameters of the 4 SPGM blocks which each have two sets of linear weights resulting in \(4\times 2\times 256\times 256=524,288\) parameters. This number of parameters is a function of the embedding size used in the encoder and decoder, which in this case is 256.
Additionally, we compute the MACs (Multiply-Accumulate Operations) using PyTorch-OpCounter3 for the various models trained in this work to assess the additional computation introduced by the SPGM block. The SPGM block has a negligible impact on the overall model cost, which is dominated by the MACs of the transformer layers. For example, on a 1-second sequence, each SPGM block contributes 0.02 GMACs while the overall model requires 77 GMACs.
Footnote 3: [https://github.com/Lyken17/pytorch-OpCounter](https://github.com/Lyken17/pytorch-OpCounter)
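PyTorch-OpCounter is distributed as the `thop` package, and such a measurement can be taken with a few lines. The stand-in network and input length below are placeholders of our own, not the actual SPGM model.

```python
import torch
import torch.nn as nn
from thop import profile  # pip install thop (PyTorch-OpCounter)

# Stand-in network; substitute the actual separation model here.
model = nn.Sequential(nn.Conv1d(1, 256, 16, stride=8), nn.ReLU(),
                      nn.Conv1d(256, 2, 1))
mixture = torch.randn(1, 1, 8000)  # 1-second mono mixture at 8 kHz

macs, params = profile(model, inputs=(mixture,))
print(f"{macs / 1e9:.2f} GMACs, {params / 1e6:.2f} M params")
```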
## 5 Conclusion
We have shown here that local feature modeling is more important than global feature modeling for the speech separation task, such that using a simple and efficient module for global modeling and reallocating parameters to local feature modeling is sufficient to surpass the performance of Sepformer. This allows SPGM to match the performance of large models while using significantly fewer parameters. We achieve SI-SDRi of 22.1 dB on WSJ0-2Mix and 20.4 dB on Libri2Mix with 26.2M parameters, matching the performance of models with 59M and 200M parameters respectively. Furthermore, we exceed the performance of Sepformer by 0.5 dB and 0.3 dB on the respective datasets. By demonstrating the significance of building an efficient global modulation block for speech separation, we hope that SPGM can serve as a template for future speech separation studies.
## 6 Acknowledgements
This work was supported by Alibaba Group through Alibaba Innovative Research (AIR) Program and Alibaba-NTU Singapore Joint Research Institute (JRI), Nanyang Technological University, Singapore. The computational work for this article was partially performed on resources of the National Supercomputing Centre, Singapore ([https://www.nscc.sg](https://www.nscc.sg)).
| Model | Param (M) | Intra | Inter | WSJ0-2Mix SI-SDRi (dB) | Libri2Mix SI-SDRi (dB) |
|---|---|---|---|---|---|
| Sepformer [7] | 25.7 | 16 | 16 | 21.6 | 20.1 |
| IntraSepformer | 13.0 | 16 | 0 | 18.7 | 18.2 |
| SPGM-LE-S | 13.3 | 16 | LE | 20.8 | 19.2 |
| SPGM-AP-S | 13.3 | 16 | AP | 20.7 | 19.6 |
| IntraSepformer | 25.7 | 32 | 0 | 19.6 | 18.8 |
| SPGM-LE | 26.2 | 32 | LE | 22.1 | 20.4 |
| SPGM-AP | 26.2 | 32 | AP | 22.1 | 20.4 |

Table 1: Comparison of Sepformer variants with 16 and 32 intra-layers against SPGM models with the same number of intra-layers and different chunk pooling mechanisms, evaluated on WSJ0-2Mix and Libri2Mix. In the Inter column, a number gives the count of inter-transformer layers, while LE stands for Last Element selection and AP for Attentive Pooling in the pooling module of the SPGM block.
| Model | Params (M) | WSJ0-2Mix | Libri2Mix |
|---|---|---|---|
| TDANet [5] | 2.3 | 10.8 | - |
| ConvTasNet [1] | 5.1 | 15.3 | - |
| SuDoRMRF [3] | 2.7 | 17.0 | - |
| DPRNN [2] | 2.6 | 18.8 | - |
| Papez [11] | 1.47 | 19.2 | 17.2 |
| DPTNet [21] | 2.7 | 20.2 | 16.2 |
| Wavesplit [22][7] | 29 | 21.0 | 19.5 |
| SFSRNet [13] | 59.0 | 22.0 | **20.4** |
| QDPN [11] | 200 | **22.1** | - |
| Sepformer [7] | 25.7 | 21.6 | 20.1 |
| SPGM (Ours) | 26.2 | **22.1** | **20.4** |

Table 2: Performance (SI-SDRi, dB) of SPGM models on the WSJ0-2Mix and Libri2Mix datasets in comparison with Sepformer and other systems.
2309.13516 | InSpaceType: Reconsider Space Type in Indoor Monocular Depth Estimation | Indoor monocular depth estimation has attracted increasing research interest.
Most previous works have been focusing on methodology, primarily experimenting
with NYU-Depth-V2 (NYUv2) Dataset, and only concentrated on the overall
performance over the test set. However, little is known regarding robustness
and generalization when it comes to applying monocular depth estimation methods
to real-world scenarios where highly varying and diverse functional
\textit{space types} are present such as library or kitchen. A study for
performance breakdown into space types is essential to realize a pretrained
model's performance variance. To facilitate our investigation for robustness
and address limitations of previous works, we collect InSpaceType, a
high-quality and high-resolution RGBD dataset for general indoor environments.
We benchmark 12 recent methods on InSpaceType and find they severely suffer
from performance imbalance concerning space types, which reveals their
underlying bias. We extend our analysis to 4 other datasets, 3 mitigation
approaches, and the ability to generalize to unseen space types. Our work marks
the first in-depth investigation of performance imbalance across space types
for indoor monocular depth estimation, drawing attention to potential safety
concerns for model deployment without considering space types, and further
shedding light on potential ways to improve robustness. See
\url{https://depthcomputation.github.io/DepthPublic} for data and the
supplementary document. The benchmark list on the GitHub project page keeps
updates for the latest monocular depth estimation methods. | Cho-Ying Wu, Quankai Gao, Chin-Cheng Hsu, Te-Lin Wu, Jing-Wen Chen, Ulrich Neumann | 2023-09-24T00:39:41Z | http://arxiv.org/abs/2309.13516v2 | # InSpaceType: Reconsider Space Type in Indoor Monocular Depth Estimation
###### Abstract
Indoor monocular depth estimation has attracted increasing research interest. Most previous works have been focusing on methodology, primarily experimenting with NYU-Depth-V2 (NYUv2) Dataset, and only concentrated on the overall performance over the test set. However, little is known regarding robustness and generalization when it comes to applying monocular depth estimation methods to real-world scenarios where highly varying and diverse functional _space types_ are present such as library or kitchen. A study for performance breakdown into space types is essential to realize a pretrained model's performance variance. To facilitate our investigation for robustness and address limitations of previous works, we collect InSpaceType, a high-quality and high-resolution RGBD dataset for general indoor environments. We benchmark 11 recent methods on InSpaceType and find they severely suffer from performance imbalance concerning space types, which reveals their underlying bias. We extend our analysis to 4 other datasets, 3 mitigation approaches, and the ability to generalize to unseen space types. Our work marks the first in-depth investigation of performance imbalance across space types for indoor monocular depth estimation, drawing attention to potential safety concerns for model deployment without considering space types, and further shedding light on potential ways to improve robustness. See [https://depthcomputation.github.io/DepthPublic](https://depthcomputation.github.io/DepthPublic) for data.
## 1 Introduction
Given an image input \(I\), the goal of monocular depth estimation is to predict the pixel-level depth map \(D\) corresponding to \(I\). It is a fundamental task in 3D vision for indoor applications, such as AR/VR gaming systems [38; 60; 56; 57], robot assistance and navigation [15], 3D photo creation [51], and novel view synthesis [13]. Challenges for indoor scenes lie in addressing highly diverse environments and arbitrarily arranged objects cluttered in the near field. In particular, performance optimized for one environment may not transfer to another due to highly varying structures between different space types of indoor environments.
Most monocular depth estimation works begin from algorithmic perspectives, including advances in network architecture [31; 29; 8; 22; 67; 25; 6; 32; 44; 29; 58], loss function [5; 18; 64; 33], and learning paradigms such as self-supervised learning [59; 21; 72; 66]. NYU-Depth-V2 (NYUv2) pioneered the collection of dense indoor depth and has since become the standard benchmark for monocular depth estimation in indoor scenes.
While prominent in many prior studies, using NYUv2 as the only primary indoor depth estimation benchmark brings potential shortcomings: (1) Prior works report error or accuracy numbers on the whole test set and overlook the variance of performance across different indoor _space types_. This becomes a main robustness concern when an edge-user applies pretrained models to uncommon or tailed space types and observes degradation. Objects and textures are highly diverse in indoor scenes and may be specific to some spaces. For example, kitchenware is specific to kitchens and rarely appears in other spaces, and desks and chairs are specific to classrooms. Therefore, when a training set misses some space types, performance easily drops due to unseen objects or arrangements specific to those types. This issue becomes more serious because private
spaces appear more often than other spaces in NYUv2. Without a breakdown across space types, the **practicability** of prior methods remains _unverified_. (2) NYUv2 suffers from relatively low resolution (480\(\times\)640), an older camera imaging model, and high noise levels. These downsides make evaluations on NYUv2 less reliable and unable to meet the needs of real high-quality applications.
To enhance robustness and address the aforementioned problems, we take a deeper look at indoor space types using our novel dataset, _InSpaceType, for benchmark and evaluation_. InSpaceType is collected with an off-the-shelf modern stereo camera system [1] with a high resolution (1242\(\times\)2208), much less noise, high-quality depth maps aligned with images, and performance optimized for near-range depth sensing that is suitable for indoor scenes. We compare with other commonly used evaluation protocols for indoor monocular depth estimation in Table 1. Fig. 1 shows data examples.
InSpaceType captures common space types, including private room, office, hallway, lounge, meeting room, large room, classroom, library, kitchen, playroom, living room, and bathroom. A hierarchical system is designed to describe these indoor space types. As the first step, 11 high-performing methods pretrained on NYUv2 are collected for InSpaceType zero-shot benchmarks, including supervised and self-supervised learning methods. The overall performances and type breakdown are exhibited. We find that those prior methods suffer from severe performance imbalance between space types. They perform well in head types such as private room (\(\delta_{1}=92.05\)) but much worse in tailed types such as large room (\(\delta_{1}=54.93\)). The presented type breakdown is practical as it _goes beyond an average score and reveals performance variances on types_. Our analysis helps us understand strength and weakness of a pretrained model and potentially reveals its underlying biases.
In addition to NYUv2, 3 other training datasets, namely SimSIN (an aggregation of Replica [53], Matterport3D [11], and Habitat-Matterport3D), UniSIN [59], and Hypersim [48], are examined. We dig into the characteristics of performance when trained on these datasets and enumerate certain space types these datasets tend to be biased towards or against. In particular, we find synthetic or simulation datasets cannot accurately capture the intricate complexities of cluttered or small objects in real scenes, which are common in spaces such as kitchen. To mitigate performance imbalance across types, 3 popular strategies are investigated: class re-weighting [12; 20], class-balanced sampling [24], and meta-learning [61]. Dividing the studied 12 types into 3 groups based on their spatial functions and then examining the generalizability between groups, we find generalization to unseen types challenging: the best \(\delta_{1}\) accuracy can be as high as 98.12 for intra-group evaluation but drops to 59.02 in the worst case of inter-group evaluation, owing to the high diversity of objects and mismatched scales across types. Overall, this work serves a practical purpose for robustness and emphasizes the importance of the usually overlooked factor, space type, in indoor environments. We draw attention to potential safety concerns for model deployment without considering performance variance across space types.
Figure 1: **Data samples of our InSpaceType Dataset.**
Our contributions are summarized as follows:
* To our best knowledge, we are the first to present a thorough analysis that considers space type in indoor monocular depth estimation. We benchmark 11 recent methods and reveal that they are biased towards/against certain types. We emphasize the importance of being aware of such bias for real-world applications.
* We collect a dataset, InSpaceType, to facilitate our purpose of benchmarking and analyzing the variance of performances across multiple space types.
* We analyze 4 commonly-used training sets in indoor monocular depth estimation and enumerate their strengths and weaknesses towards certain space types. We further investigate 3 popular methods to mitigate performance imbalance.
## 2 Related Work
### Indoor Monocular Depth Estimation
Monocular depth estimation is a fundamental task in computer vision. Very early methods find cues in similar regions, shades, or motion to assign depth values [14], or use probabilistic models for depth estimation [50]. The task has become especially popular in the deep learning era. NYUv2 pioneered the collection of a dense depth dataset for indoor scenes toward this goal.
**Supervised method.** Most methods operate in supervised learning and directly learn from paired RGBD data in the training set. Earlier deep methods include advances in architecture like fully convolutional neural networks [27] and different learning paradigms such as multi-task learning [16], transfer learning [3], dual-stream networks [30; 70], or multi-scale networks [39]. We organize more recent research directions as follows.
\(\bullet\) Loss design and discretized depth intervals: DORN [18] uses a space-increasing discretization strategy and recasts the depth regression as ordinal regression. Adabins [5] designs adaptive discretization and combines pixel-level regression loss and bin-center density loss. LocalBins [6] learns per-pixel discretization instead of global distribution.
\(\bullet\) Planarity: BTS [28] uses multi-scale planar guidance to estimate depth. P3Depth [42] estimates piecewise plane coefficients and uses them for adaptive fusion.
\(\bullet\) Conditional random fields (CRFs): Early probabilistic approaches build on CRFs [34; 47]. Recently, NeWCRFs [67] applies CRFs within windows to reduce the computational overhead of fully connected CRFs.
\(\bullet\) Normal: Normals are helpful for depth regularization. VNL [64] enforces virtual normal constraints for depth prediction. IronDepth [4] uses the normal map to propagate depth between pixels.
\(\bullet\) Mixed-dataset training: MiDaS [46] pioneered the collection of 11 different mixed data sources, resulting in highly generalizable depth estimation. DPT [45] further improves results using vision transformers. ZoeDepth [7] builds on DPT to further predict metric depth. LeReS [65; 41] also trains on mixed datasets for highly robust depth estimation.
\(\bullet\) Transformer: DepthFormer [31] and PixelFormer [2] both use vision transformers and large models for higher accuracy. GLPDepth [25] extracts global and local features with transformers and combines them with attention. MIM [62] studies masked image modeling built upon large transformer models [35], first learning good representations via visual pretraining and then finetuning on downstream tasks. AiP-T [40] uses VQGAN [17] and represents depth in a unified token space to attain high accuracy.
**Self-Supervised method.** Most methods use NYUv2 and learn from consecutive frames with photometric consistency [9; 69; 72]. DistDepth [59] adopts a distillation loss to learn from relative-depth pretrained models and incorporates left-right stereo consistency to learn metric depth.
| Dataset | Purpose | Resolution | Sensor | Real or Synthetic | RGB Imaging Quality | Scene Diversity (# of scenes, # of RGBD pairs) |
|---|---|---|---|---|---|---|
| Diode [54] | Indoor + Outdoor | 1024×768 | Lidar | Real | High | Very Low (2 scenes, 753 indoor pairs) |
| IBims-1 [26] | Indoor focused | 640×480 | Lidar | Real | Good | Low (20 scenes, 100 pairs) |
| NYUv2 [52] | Indoor focused | 640×480 | Kinect-v1 | Real | Noisy | Medium (private room focused, 654 pairs) |
| VA [59] | Indoor focused | 640×640 | - | Synthetic | High | Very Low (1 scene, 3523 pairs) |
| InSpaceType | Indoor focused | 2208×1242 | Stereo | Real | High | High (68 scenes, 1260 pairs) |

Table 1: **Comparison of popular evaluation protocols for indoor monocular depth estimation.**
### Evaluation on Indoor Monocular Depth Estimation
Diode [54] collects both outdoor and indoor scenes. It contains high-quality data but very low diversity, with only 2 scenes for evaluation. IBims-1 [26] is also limited in the number of scenes. VA [59] renders complex and high-quality images and depth maps but is limited to a single scene. See Table 1 for the organization.
NYUv2 [52] is still popular for the evaluation of indoor monocular depth estimation. However, it was collected by an older Kinect-v1 system [68] with noisy depth measurements and noisy RGB imaging. Its resolution is only 640\(\times\)480, which falls short of the needs of recent applications for robotics or high-resolution synthesis. Besides, NYUv2 mainly focuses on smaller and private rooms for evaluating performance. To overcome these limitations, our InSpaceType adopts a recent high-quality integrated stereo camera [1] to collect high-resolution images and depth. InSpaceType covers general-purpose and highly diverse indoor scenes, including private household spaces, workspaces, and campus scenes. We design a hierarchical system to describe space types and benchmark recent high-performing methods with a detailed performance breakdown. This breakdown helps us gain a better understanding of performance variance across different spaces.
## 3 InSpaceType Dataset
We capture images and depth maps computed from left-right stereo pairs using a high-quality integrated stereo camera system, Zed-2i [1]. Its baseline is 12cm, its field of view (FOV) is \(120^{\circ}\), its working distance is up to 20m, and its backend engine enables dense depth outputs.
It is particularly optimized for ranges within 15m, where the average error is within 5\(\%\) according to its specification. This matches our need to work in indoor environments. We operate in its ULTRA mode to output a high resolution of 2208\(\times\)1242. The stereo camera device is anchored on a hand-held stabilizer during data collection. The wide FOV makes scene images contain more cues for estimating distance. We do not zoom into small objects or flat walls, which would cause ambiguity in the scene scale. The pitch angle is kept within about \(\pm 30^{\circ}\) and the roll angle within \(\pm 10^{\circ}\). This setting enriches scenes captured from different viewing directions without producing strange scenes or reaching angles so large that they would eliminate cues that indicate depth ranges. For better quality, we also avoid non-Lambertian areas such as mirrors or highly reflective surfaces.
Our environments cover household spaces, workspaces, and campus spaces, including private room, office, hallway, lounge, meeting room, large room, classroom, library, kitchen, playroom, living room, and bathroom. 88 different environments are visited in total. We record at 15fps while walking around those spaces. Around 40K images are collected. To create the evaluation set, we manually select 1260 images from all the environments.
Our selection criteria are (1) clear imaging with minimal motion blur, (2) no two selections within 10 frames of each other, and (3) sufficient cues that hint at the depth scale of the scene. Fig. 2 shows the dataset statistics. See Supplementary for more dataset descriptions.
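Criterion (3) requires human judgment, but criteria (1) and (2) can be approximated programmatically. The following is a hypothetical sketch of such a filter (not the actual curation tooling), using the common Laplacian-variance sharpness test; the threshold value is an arbitrary placeholder.

```python
import cv2

def select_eval_frames(paths, min_gap=10, blur_thresh=100.0):
    """Pick candidate evaluation frames: sharp images (Laplacian variance
    above a threshold) that are at least `min_gap` frames apart."""
    selected, last_idx = [], -min_gap
    for idx, path in enumerate(paths):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
        if sharpness >= blur_thresh and idx - last_idx >= min_gap:
            selected.append(path)
            last_idx = idx
    return selected
```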
Figure 2: **Statistics for InSpaceType evaluation set.**
## 4 Cross-Dataset Benchmarks
**[I: Benchmarks]** As the first step, we collect 11 recent high-performing methods and fetch open-sourced models pretrained on NYUv2 to make inferences on InSpaceType. The following methods are included: DPT [45], GLPDepth [25], AdaBins [5], PixelFormer [2], NeWCRFs [67], BTS [28], MIM [63], IronDepth [4], Decomposition [23], and ZoeDepth [7], plus the self-supervised DistDepth [59] discussed below. We adopt error (AbsRel, SqRel, RMSE) and accuracy metrics (\(\delta_{1}\), \(\delta_{2}\), \(\delta_{3}\) with base factor 1.25) commonly used in the monocular depth estimation literature for evaluation. To compensate for different camera intrinsics between NYUv2 and InSpaceType, we follow prior protocols for cross-dataset evaluation [37; 61] and use median-scaling to calibrate the scale between prediction and groundtruth.
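Concretely, the evaluation protocol can be summarized by the following sketch; the function and variable names are ours, but the median-scaling step and the metric definitions are the standard ones used in the monocular depth literature.

```python
import numpy as np

def evaluate(pred, gt):
    """Median-scale `pred` to `gt`, then compute standard depth metrics.
    Both arrays hold valid depths (in meters) at the same pixels."""
    pred = pred * np.median(gt) / np.median(pred)   # median scaling
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    sq_rel = np.mean((pred - gt) ** 2 / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    ratio = np.maximum(pred / gt, gt / pred)
    # delta_k: percentage of pixels with max-ratio below 1.25**k
    deltas = [np.mean(ratio < 1.25 ** k) * 100 for k in (1, 2, 3)]
    return abs_rel, sq_rel, rmse, deltas
```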
Table 2 shows the results. For reference, the performance ranking of these methods on the NYUv2 benchmark (in terms of lower RMSE) is: ZoeDepth > MIM > PixelFormer > NeWCRFs > GLPDepth > IronDepth > Decomposition > DPT > AdaBins > BTS. The ranking on InSpaceType is consistent with NYUv2. ZoeDepth, MIM, and PixelFormer are the top performers among published methods. They use large-size transformers, showing that large models can learn better representations. We also notice that DPT surpasses other methods in some metrics. This is because DPT was first pretrained on a mixture of datasets and then finetuned on NYUv2; the larger amount of data involved during pretraining helps the model learn representations that generalize better. To probe this generalizability, Fig. 3 shows examples where training only on NYUv2 stumbles.
The self-supervised direction has attracted growing attention because no depth groundtruth is involved during training, though performance is generally lower, especially in driving scenarios. Indoor self-supervised methods have received less attention due to the lack of indoor stereo pairs to enable robust monocular
Supervised learning:

| Method | Year | Architecture | MAE | AbsRel | SqRel | RMSE | \(\delta_{1}\) | \(\delta_{2}\) | \(\delta_{3}\) |
|---|---|---|---|---|---|---|---|---|---|
| BTS [28] | arXiv'19 | DenseNet-161 | 0.3602 | 0.1445 | 0.1162 | 0.5222 | 81.65 | 95.57 | 98.54 |
| AdaBins [5] | CVPR'21 | UNet+AdaBins | 0.3341 | 0.1333 | 0.0975 | 0.4922 | 83.64 | 96.36 | 98.92 |
| DPT [45] | ICCV'21 | DPT-Hybrid | 0.3090 | 0.1224 | 0.0773 | 0.4616 | 85.96 | 97.17 | 99.19 |
| GLPDepth [25] | arXiv'22 | MiT-b4 | 0.3068 | 0.1239 | 0.0788 | 0.4527 | 86.05 | 97.36 | 99.16 |
| IronDepth [4] | BMVC'22 | EfficientNet-B5 | 0.3271 | 0.1276 | 0.1022 | 0.4894 | 85.30 | 96.37 | 98.84 |
| Decomposition [23] | ECCV'22 | EfficientNet-B5 | 0.3274 | 0.1278 | 0.1025 | 0.4899 | 85.25 | 96.35 | 98.83 |
| NeWCRFs [67] | CVPR'22 | Swin-Large | 0.3028 | 0.1251 | 0.0823 | 0.4541 | 86.04 | 96.68 | 98.94 |
| PixelFormer [2] | WACV'23 | Swin-Large | 0.2982 | 0.1252 | 0.0761 | 0.4392 | 86.08 | 97.03 | 99.10 |
| MIM [63] | CVPR'22 | Swin-Large | 0.2807 | 0.1100 | 0.0679 | 0.4242 | 85.88 | 97.59 | 99.28 |
| ZoeDepth (NK) [7] | arXiv'23 | BEiT-Large | **0.2469** | *0.0969* | **0.0527** | **0.3834** | *90.76* | *98.19* | *99.50* |
| ZoeDepth (N) [7] | arXiv'23 | BEiT-Large | *0.2484* | **0.0956** | *0.0528* | *0.3887* | **90.81** | **98.22** | **99.52** |

Self-supervised learning:

| Method | Year | Architecture | MAE | AbsRel | SqRel | RMSE | \(\delta_{1}\) | \(\delta_{2}\) | \(\delta_{3}\) |
|---|---|---|---|---|---|---|---|---|---|
| DistDepth (DPT-Hybrid) [59] | CVPR'22 | ResNet152 | 0.4688 | 0.1746 | 0.1718 | 0.6877 | 74.71 | 94.18 | 98.60 |
| DistDepth (DPT-Large) [59] | CVPR'22 | ResNet152 | 0.3817 | 0.1447 | 0.1094 | 0.5758 | 81.05 | 95.46 | 98.69 |

Table 2: **InSpaceType benchmark: overall performance.** The best number is in bold and the second-best is in italics. We include eleven recent high-performing methods, covering supervised and self-supervised learning paradigms.
Figure 3: **Failure regions for models trained only on NYUv2.** InSpaceType contains several object arrangements that NYUv2 does not include; for example, the wall-hanging air-conditioner and phone are mostly exclusive to Asian-style rooms. A tilted viewing direction with a pitch angle is shown in (B), where training on NYUv2 alone cannot give robust results because NYUv2 has only minor viewing pitch angle changes. DPT, which in its setting trains on 10 different datasets + NYUv2 (mixed-set training), attains more robust results. To probe generalizability, InSpaceType serves as a testbed for finding cases that training on the popular NYUv2 alone cannot handle.
depth scales. DistDepth [59] pioneered the creation of synthetic indoor stereo data to enable more stable scale estimation from monocular images, and is the current SOTA. We adopt DistDepth to show the current performance gap between supervised and self-supervised learning. The gap tells us how far apart training with and without depth groundtruth currently are.
**[II: Breakdown by Space Type]** Then, we break down the performance of MIM and PixelFormer by space type and show the results in Table 3. We report performance on each space type, list the top-5 high-/low-performing types, and identify easy/hard types based on the concordance of lower error and higher accuracy (easy) and the opposite (hard). The easy types for MIM and PixelFormer are the same: playroom, private room, and classroom; the common hard types are: large room, lounge, and library. Clear easy and hard types indicate that the strengths and weaknesses of these models are apparent, showing they are biased towards/against some specific types. The easy and hard types highly overlap for MIM and PixelFormer, which further unveils potential bias underlying NYUv2's training data. The most frequent space types in NYUv2 are private room and living room, which are typically small spaces. By contrast, large spaces with farther ranges are the types these models are least capable of.
From the above analysis, we find performance varies considerably across different space types. The identification of easy and hard types provides insights into the suitability of using pretrained models in specific scenarios or the need to avoid them. Furthermore, this analysis highlights that the representations learned from NYUv2 still have a gap when transferring to other space types. See Supplementary for the performance breakdown of the other methods and the hierarchical description of types.
**[III: More Training Dataset Generalization]** Next, we validate performance when trained on other popular training datasets. Specifically, we include the following datasets and models: SimSIN [59] (self-supervised by DistDepth, ResNet152 [19]), UniSIN [59] (self-supervised by DistDepth, ResNet152), and Hypersim [48] (supervised, ConvNeXt-Base [36]). SimSIN and UniSIN were recently introduced along with DistDepth [59], with pretrained models released on both. Both datasets mainly serve self-supervised studies, which have recently become popular; thus we use the DistDepth pretrained models for this analysis. Tables 4 and 5 show the results. We show the depth distributions of these datasets in Supplementary for reference.
\(\bullet\) SimSIN: It contains data from Replica [53], Matterport3D [11], and HM3D [43], which are also focused on household spaces. From Table 4, its easy types are private room and living room, and its hard types are large room, classroom, meeting room, and lounge. Its strengths and weaknesses are also obvious, showing that SimSIN is heavily biased towards household spaces and especially under-performs in workspaces and campus scenes.
\(\bullet\) UniSIN: From Table 4, its easy types are bathroom and hallway, and its only hard type is large room. One can observe that UniSIN has less bias towards space types, having only a few easy and hard types. We assume this is because UniSIN collects data from more diverse environments and thus avoids clear bias.
\(\bullet\) Hypersim: From Table 5, its easy types are private room, classroom, and living room, and its hard types are large room, library, hallway, and lounge. It also has obvious bias, especially towards household spaces and classroom, and against large room and types missing from the dataset such as
MIM results:

| Type | AbsRel | SqRel | RMSE | \(\delta_{1}\) | \(\delta_{2}\) | \(\delta_{3}\) |
|---|---|---|---|---|---|---|
| Private room | 0.0927 | 0.0426 | 0.2556 | 92.05 | – | 99.93 |
| Office | 0.1106 | 0.0532 | 0.3313 | 87.67 | 94.94 | – |
| Hallway | 0.1229 | 0.0805 | 0.5463 | 85.66 | 96.52 | 98.98 |
| Lounge | 0.1316 | 0.1290 | 0.2474 | 84.15 | 96.22 | 98.93 |
| Meeting room | 0.0984 | 0.0483 | 0.3656 | 91.70 | 98.69 | 99.64 |
| Large room | 0.2680 | 0.4499 | 1.9514 | 54.59 | 83.39 | 95.91 |
| Classroom | 0.0781 | 0.0334 | 0.3071 | 45.92 | 94.92 | 99.83 |
| Library | 0.1342 | 0.0978 | 0.6281 | 86.56 | 96.57 | 98.85 |
| Kitchen | 0.1482 | 0.0971 | 0.3374 | – | 95.42 | 98.32 |
| Playroom | 0.0202 | 0.0276 | 0.2466 | 94.54 | 98.26 | 99.79 |
| Living room | 0.1033 | 0.0502 | 0.3448 | 89.07 | 97.88 | 99.52 |
| Bathroom | 0.1456 | 0.0772 | 0.2788 | 83.45 | 96.07 | 98.13 |

PixelFormer results:

| Type | AbsRel | SqRel | RMSE | \(\delta_{1}\) | \(\delta_{2}\) | \(\delta_{3}\) |
|---|---|---|---|---|---|---|
| Private room | – | 0.0372 | 0.2638 | 90.17 | 98.70 | 99.80 |
| Office | – | 0.0669 | 0.3658 | 84.40 | 96.38 | 99.19 |
| Hallway | 0.1418 | 0.0880 | – | 81.46 | 96.40 | 99.05 |
| Lounge | 0.1500 | 0.1624 | 0.8215 | – | 94.79 | 98.41 |
| Meeting room | 0.1103 | 0.0551 | 0.3873 | – | 98.57 | 99.76 |
| Large room | 0.2125 | 0.3119 | 1.1396 | 67.71 | 88.59 | 95.40 |
| Classroom | 0.0912 | 0.0403 | 0.368 | 91.37 | 99.14 | 99.86 |
| Library | 0.150 | 0.1242 | 0.6527 | 82.57 | 94.88 | 97.73 |
| Kitchen | 0.1899 | 0.0978 | 0.3521 | 78.86 | 90.56 | 96.63 |
| Playroom | 0.1004 | 0.0766 | 0.3042 | 92.11 | 97.23 | 99.15 |
| Living room | 0.1132 | 0.0568 | 0.3601 | 87.35 | 97.31 | 99.34 |
| Bathroom | 0.1439 | 0.0570 | 0.2488 | 83.72 | 96.55 | 98.36 |

| | MIM | PixelFormer |
|---|---|---|
| Top-5 Lower RMSE | playroom, private room, bathroom, classroom, office | bathroom, private room, classroom, kitchen, – |
| Top-5 Higher \(\delta_{1}\) | playroom, classroom, private room, meeting room, living room | playroom, private room, classroom, meeting room, living room |
| Easy types | playroom, private room, classroom | playroom, private room, classroom |
| Top-5 Higher RMSE | large room, lounge, library, hallway, meeting room | large room, lounge, library, –, – |
| Top-5 Lower \(\delta_{1}\) | large room, kitchen, bathroom, lounge, library | – |
| Hard types | large room, lounge, library | large room, lounge, library |

Table 3: **Performance breakdown by space type.** We study MIM and PixelFormer, the top-performing published methods. Besides the breakdown, we list the top-5 space types based on lower/higher error (RMSE) and accuracy (\(\delta_{1}\)). Easy and hard types are determined by co-occurrence. '–' marks entries that are not available.
library or hallway. Though Hypersim contains high-quality renderings of synthetic environments, it also focuses on household spaces and is biased against several common space types.
One can find that SimSIN and Hypersim, both rendered from simulation platforms, have more obvious bias types than UniSIN, which is collected in the real world. This indicates that current trends in curating indoor synthetic data focus more on head types, especially private room and living room as the most frequent application scenarios, and may miss tailed types such as library, lounge, or hallway that are common but easily overlooked. Deploying models trained on those datasets may not be robust in the wild. We point out this observation to call for attention when curating synthetic datasets.
\(\bullet\) Special Type: We find kitchen is a special type, with lower RMSE but also a very low accuracy score \(\delta_{1}\) for SimSIN and Hypersim. We assume this is because kitchen contains many cluttered small objects, such as bottles, kitchenware, and utensils in the near field. SimSIN uses the Habitat simulator [49], which renders images from synthetic (Replica [53]) or scanned but incomplete meshes (Matterport3D [11] and HM3D [43]). Hypersim is purely synthetic, rendered from delicately modeled spaces. Those simulation strategies cannot faithfully reflect the high complexity of cluttered and small objects in real scenes. Therefore, they attain lower \(\delta_{1}\), which reflects how correctly object shapes are estimated in the depth domain. This serves as an understanding of simulation versus real data, showing a gap still exists in transferring knowledge to real scenes.
The above studies use InSpaceType as a testing set. To further validate InSpaceType, we also create a training set for it. The training set includes all 40K images except the 1260 evaluation images and their 2 neighboring frames. We experiment with training on InSpaceType, NYUv2 [52], and Hypersim [48] using DPT-Hybrid (initialized from pretrained weights) and test on Replica [10] and VA [59]. Results in Table 6 show better zero-shot cross-dataset generalization for the introduced InSpaceType.
Trained on SimSIN:

| Type | AbsRel | SqRel | RMSE | \(\delta_{1}\) | \(\delta_{2}\) | \(\delta_{3}\) |
|---|---|---|---|---|---|---|
| Private room | 0.1509 | 0.0986 | 0.4472 | 79.36 | 96.48 | 97.53 |
| Office | 0.1812 | 0.1606 | 0.5789 | 74.01 | 94.97 | 97.80 |
| Hallway | 0.1597 | 0.1239 | 0.6324 | 78.12 | 94.92 | 98.64 |
| Lounge | 0.1841 | 0.2148 | 0.9057 | 73.86 | 93.11 | 97.94 |
| Meeting room | 0.1962 | 0.2491 | 0.6491 | 66.91 | 95.98 | 94.95 |
| Large room | 0.1842 | 0.2639 | 1.0727 | 72.79 | 91.87 | 97.83 |
| Classroom | 0.2099 | 0.2388 | 1.0292 | 67.11 | 91.96 | 96.15 |
| Library | 0.1857 | 0.1913 | 0.8307 | 75.14 | 93.11 | 97.44 |
| Kitchen | 0.2524 | 0.2083 | 0.5649 | 59.92 | 87.44 | 96.12 |
| Playroom | 0.1597 | 0.1147 | 0.9496 | 75.59 | 97.95 | 99.67 |
| Living room | 0.1600 | 0.1166 | 0.5248 | 77.05 | 94.90 | 98.98 |
| Bathroom | 0.1751 | 0.1153 | 0.3900 | 42.42 | 90.84 | 96.46 |
| All | 0.1746 | 0.1719 | 0.6877 | 74.72 | 94.18 | 98.61 |

Trained on UniSIN:

| Type | AbsRel | SqRel | RMSE | \(\delta_{1}\) | \(\delta_{2}\) | \(\delta_{3}\) |
|---|---|---|---|---|---|---|
| Private room | 0.173 | 0.102 | 0.4539 | 72.81 | 93.65 | 98.68 |
| Office | 0.1666 | 0.1113 | 0.5020 | 75.04 | 98.46 | 98.98 |
| Hallway | 0.1262 | 0.0765 | 0.4977 | 84.79 | 96.91 | 99.37 |
| Lounge | 0.130 | 0.1428 | 0.7351 | 82.41 | 96.77 | 98.95 |
| Meeting room | 0.130 | 0.1014 | 0.9597 | 83.94 | 97.36 | 96.63 |
| Large room | 0.1680 | 0.2112 | 0.9447 | 75.95 | 94.83 | 98.68 |
| Classroom | 0.1313 | 0.1166 | 0.6077 | 84.12 | 97.96 | 99.74 |
| Library | 0.120 | 0.1146 | 0.6985 | 85.61 | 96.88 | 98.91 |
| Kitchen | 0.7241 | 0.1997 | 0.5740 | 52.30 | 85.15 | 96.28 |
| Playroom | 0.1486 | 0.0822 | 0.4755 | 78.63 | 98.23 | 99.84 |
| Living room | 0.1644 | 0.1513 | 0.5106 | 76.37 | 93.66 | 98.48 |
| Bathroom | 0.1374 | 0.0409 | 0.2168 | 64.87 | 95.28 | 99.68 |
| All | 0.1509 | 0.1143 | 0.5602 | 78.96 | 95.38 | 98.97 |

| | SimSIN | UniSIN |
|---|---|---|
| Top-5 Lower RMSE | bathroom, private room, living room, kitchen, office | bathroom, private room, playroom, hallway, office |
| Top-5 Higher \(\delta_{1}\) | private room, hallway, living room, playroom, library | library, hallway, bathroom, classroom, meeting room |
| Easy types | private room, living room | bathroom, hallway |
| Top-5 Higher RMSE | large room, classroom, meeting room, lounge, library | – |
| Top-5 Lower \(\delta_{1}\) | kitchen, meeting room, classroom, large room, lounge | – |
| Hard types | large room, classroom, meeting room, lounge | large room |

Table 4: **Performance trained on SimSIN and UniSIN.** We leverage pretrained DistDepth (DPT-Large) ResNet152 models [59] in both cases. '–' marks entries that are not available.
| Type | AbsRel | SqRel | RMSE | \(\delta_{1}\) | \(\delta_{2}\) | \(\delta_{3}\) |
|---|---|---|---|---|---|---|
| Private room | 0.1321 | 0.0607 | 0.376 | 84.53 | 97.26 | 99.36 |
| Office | 0.1678 | 0.1085 | 0.4916 | 76.01 | 93.60 | 97.87 |
| Hallway | 0.2126 | 0.1912 | 0.8347 | 65.67 | 89.99 | 97.15 |
| Lounge | 0.1937 | 0.2254 | 0.1943 | 71.31 | 92.58 | 97.58 |
| Meeting room | 0.3135 | 0.0837 | 0.5333 | 81.49 | 98.23 | 99.74 |
| Large room | 0.2525 | 0.4125 | 1.3897 | 65.48 | 85.13 | 94.95 |
| Classroom | 0.1258 | 0.0740 | 0.4700 | 85.07 | 98.50 | 99.80 |
| Library | 0.1766 | 0.1664 | 0.854 | 73.73 | 93.54 | 86.03 |
| Kitchen | 0.2417 | 0.1343 | 0.4347 | – | 90.25 | 96.43 |
| Playroom | 0.1629 | 0.1132 | 0.5663 | 77.09 | 93.87 | 99.15 |
| Living room | 0.1458 | 0.0872 | 0.4519 | 89.01 | 99.54 | 89.95 |
| Bathroom | 0.1648 | 0.0627 | 0.2960 | 76.11 | 98.90 | 98.89 |
| All | 0.192 | 0.1187 | 0.3803 | 77.95 | 94.99 | 98.60 |

| | Hypersim |
|---|---|
| Easy types | private room, classroom, living room |
| Hard types | large room, library, hallway, lounge |

Table 5: **Performance trained on Hypersim** (supervised, ConvNeXt-Base). '–' marks entries that are not available.
## 5 Intra-Dataset Study
In Section 4, we emphasized cross-dataset benchmarks organized by space type: we benchmarked several high-performing methods with a breakdown into space types and unveiled potential biases in several training sets. Next, we focus on InSpaceType itself for deeper analysis.
**[IV: Dataset Fitting]** We use the InSpaceType training set, train a small-size ConvNeXt network using the standard \(L_{2}\) loss supervised by groundtruth depth, and test on the evaluation set. We use ConvNeXt networks for their high performance and as general-purpose networks to investigate data fitting and bias mitigation without loss of generality. Results in Table 7 show that most space types can be fitted well when training on all types together. Large room and lounge are large-size spaces and naturally yield slightly higher RMSE. Kitchen's \(\delta_{1}\) is a bit lower than other types for the reason specified in [III]. It is worth noting an apparent trend: for errors, larger rooms and longer ranges tend to have higher estimation error; for accuracy, arbitrarily arranged small objects in the near field are challenging, a frequent scenario for kitchen.
**[V: Mitigation of Uneven Distribution]** We also experiment with several basic and popular strategies to help mitigate the imbalance across types. Specifically, we examine class re-weighting (CR) [12; 20], class-balanced sampling (CBS) [24], and Reptile-like meta-learning (ML) [61]. CR uses weights inversely proportional to occurrences to compensate for types of higher occurrence. CBS samples all classes with equal probability. Reptile ML uses bi-level optimization for learning to learn across tasks to attain higher generalizability, which may mitigate imbalance. We adopt ConvNeXt-small (Conv-sml) and ConvNeXt-base (Conv-b) as backbones and experiment on the InSpaceType train and evaluation sets. From Table 8, one can find that CBS and ML are the better strategies, attaining lower standard deviation across types (t-STD) together with better overall performance. Though CR also attains lower t-STD, its overall performance drops as well. This is because CR can harm head-class performance, as observed in the literature [55; 71].
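As an illustration, CR and CBS can be realized in a few lines of PyTorch; this is a generic sketch with toy labels of our own, not the exact training code used here.

```python
import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler

# Toy space-type labels for 10 training samples (the paper uses 12 types).
type_ids = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 3])
counts = np.bincount(type_ids, minlength=12).astype(float)

# Class re-weighting (CR): per-sample loss weight inversely
# proportional to the frequency of that sample's space type.
cr_weights = torch.tensor(1.0 / np.maximum(counts[type_ids], 1.0))

# Class-balanced sampling (CBS): drawing samples with probability
# 1/count makes every space type equally likely per draw.
sampler = WeightedRandomSampler(weights=cr_weights.double(),
                                num_samples=len(type_ids), replacement=True)
```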
**[VI: Generalization to Unseen Types]** In addition to generalization to unseen datasets, we are also curious about generalization to unseen types. We next divide the whole InSpaceType training set into different splits, train on each division, and then evaluate on InSpaceType eval split. The whole training set is divided into three groups (G) based on types. G1: private room, kitchen, living room, bathroom; G2: office, hallway, meeting room, classroom; G3: lounge, large room, playroom, library. G1 is for household spaces; G2 is related to work or studies; G3 contains longer-range spaces.
Test on Replica:

| Trained on | AbsRel | RMSE | \(\delta_{1}\) | \(\delta_{2}\) | \(\delta_{3}\) |
|---|---|---|---|---|---|
| Hypersim | 0.1547 | 0.3833 | 79.88 | 92.94 | 97.39 |
| NYUv2 | 0.1524 | 0.3652 | 80.62 | 93.11 | 97.65 |
| InSpaceType | **0.1441** | **0.3347** | **81.82** | **93.51** | **98.12** |

Test on VA:

| Trained on | AbsRel | RMSE | \(\delta_{1}\) | \(\delta_{2}\) | \(\delta_{3}\) |
|---|---|---|---|---|---|
| Hypersim | 0.1620 | 0.2997 | 77.65 | 94.33 | 98.29 |
| NYUv2 | 0.1584 | 0.2650 | 80.19 | 95.21 | 98.78 |
| InSpaceType | **0.1507** | **0.2483** | **81.74** | **95.50** | **99.01** |

Table 6: **Zero-shot generalization.** Hypersim, NYUv2, and InSpaceType are adopted as training sets; Replica (top table) and VA (bottom table) are used as indoor testing sets. Training on InSpaceType yields better results, validating InSpaceType's quality for zero-shot cross-dataset scenarios.
| Type | RMSE | \(\delta_{1}\) |
|---|---|---|
| Private room | 0.1344 | 98.41 |
| Office | 0.1729 | 96.93 |
| Hallway | 0.2354 | 95.81 |
| Lounge | 0.3185 | 96.88 |
| Meeting room | 0.1778 | 98.11 |
| Large room | 0.3153 | 98.22 |
| Classroom | 0.1725 | 98.77 |
| Library | 0.2543 | 97.34 |
| Kitchen | 0.1825 | 94.63 |
| Playroom | 0.1707 | 96.86 |
| Living room | 0.1556 | 98.14 |
| Bathroom | 0.0943 | 96.57 |

Table 7: **Results of training and evaluating on the train/eval splits of InSpaceType.**
| Model | Strategy | AbsRel | SqRel | RMSE | \(\delta_{1}\) | \(\delta_{2}\) | \(\delta_{3}\) | t-STD\({}_{\text{RMSE}}\) | t-STD\({}_{\delta_{1}}\) |
|---|---|---|---|---|---|---|---|---|---|
| Conv-sml | w/o | 0.0542 | 0.0181 | 0.1918 | 97.66 | 99.60 | 99.88 | 0.0642 | 1.4850 |
| Conv-sml | w/ CR | 0.0606 | 0.0202 | 0.2071 | 97.06 | 99.52 | 99.86 | 0.0630 | 1.4371 |
| Conv-sml | w/ CBS | 0.0501 | 0.0166 | 0.1816 | 98.04 | 99.66 | 99.90 | 0.060 | **1.1632** |
| Conv-sml | w/ ML | 0.0482 | 0.0160 | 0.1769 | 98.21 | 99.68 | 99.89 | **0.0580** | 1.3829 |
| Conv-b | w/o | 0.0510 | 0.0174 | 0.1846 | 97.96 | 99.63 | 99.89 | 0.0673 | 1.2236 |
| Conv-b | w/ CR | 0.0567 | 0.0196 | 0.1986 | 97.47 | 99.58 | 99.87 | 0.0619 | 1.1577 |
| Conv-b | w/ CBS | 0.0439 | 0.0146 | 0.1667 | 98.52 | 99.73 | 99.90 | 0.0561 | **1.0990** |
| Conv-b | w/ ML | 0.0451 | 0.0156 | 0.1692 | 98.44 | 99.70 | 99.90 | **0.0540** | 1.1269 |

Table 8: **Comparison of imbalance mitigation strategies.** Class re-weighting (CR), class-balanced sampling (CBS), and meta-learning (ML) are examined. The type standard deviation (t-STD) of RMSE and \(\delta_{1}\) is computed over the twelve types; higher t-STD indicates higher performance variation, i.e., more imbalance across types.
Models only see data from their respective group during training. For instance, a model trained on G1 cannot access RGBD pairs of classroom or hallway during training. The categorization is based on similarity between types and reflects a situation where one collects training data only for the functionality that matches the primary application scenario, without considering different user scenarios. This is frequently encountered in scope-focused applications: VR gaming systems, for example, are primarily used in households, but performance may drop in outlier use cases such as classrooms or workplaces. The left half of Table 9 shows generalization to other type groups, and the right half shows evaluation on different depth ranges. Three depth ranges are defined. Close: a scene whose maximal depth is approximately within 5 meters. Medium: a scene whose maximal depth is approximately within 5-10 meters. Far: a scene whose maximal depth is approximately within 10-20 meters. The average maximal depth value is 3.78m for G1, 5.49m for G2, and 12.08m for G3. We present another training set categorization based on ranges in Supplementary.
Training on a specific group produces good performance on its dedicated types. However, training on only some types encounters severe issues in generalizing to unseen types, which further exhibits the high variation between different indoor environments: knowledge pretrained on some types may not easily transfer to others. For example, training on G1's household spaces does not generalize to large or spacious rooms (G1\(\rightarrow\)G3 or Far), showing higher RMSE and lower \(\delta_{1}\). Most indoor training datasets, such as NYUv2 or simulations from Matterport3D or Replica, are mostly curated for household spaces or smaller rooms. This may serve the needs of applications mainly for private room, but it potentially poses a training-set bias towards close-range estimation, and models trained on these datasets cannot be deployed to address different scenarios. Besides, we also observe that training on large or spacious spaces (G3) attains somewhat better generalization to smaller rooms than the reverse setting, comparing G3\(\rightarrow\)G1 with G1\(\rightarrow\)G3 or G3\(\rightarrow\)Close with G1\(\rightarrow\)Far. We visualize the cross-group generalization result in Fig. 4.
## 6 Conclusion
Unlike previous methods that focus on algorithmic developments, this is the first work to consider space types in indoor monocular depth estimation for robustness and practicability in deployment. We point out limitations in previous evaluations, where performance variance across types is overlooked, and present a novel dataset, InSpaceType, along with a hierarchical space type definition to facilitate our study. We give thorough studies to analyze and benchmark performance based on space types. Eleven high-performing methods are examined, and we find they suffer from severe performance imbalance between space types. We analyze a total of 4 training datasets and enumerate their strong and weak space types. 3 popular strategies, namely class re-weighting, class-balanced sampling, and meta-learning, are studied to mitigate imbalance. Further, we find generalization to unseen space types challenging due to the high diversity of objects and mismatched scales across types. Overall, this work pursues a practical purpose and emphasizes the importance of the usually overlooked factor of space type in indoor environments. We call for attention to safety concerns for model deployment without considering performance variance across space types.
**Limitations**. This work only considers monocular depth estimation. Other popular scopes for depth estimation, such as the outdoor domain, stereo approaches, or multiview scene reconstruction, may also suffer from performance imbalance across different types. We choose to operate on monocular depth estimation since it is the most fundamental task, needing only a single image, and is especially widely useful in many recent popular applications and deployed systems such as indoor AR on smartphones,
| Train\(\rightarrow\)Eval | AbsRel | SqRel | RMSE | \(\delta_{1}\) | \(\delta_{2}\) | \(\delta_{3}\) |
|---|---|---|---|---|---|---|
| G1\(\rightarrow\)G1 | 0.0511 | 0.0134 | 0.1461 | 98.12 | 99.71 | 99.92 |
| G1\(\rightarrow\)G2 | 0.1667 | 0.1092 | 0.5501 | 77.22 | 94.91 | 98.90 |
| G1\(\rightarrow\)G3 | 0.2669 | 0.3851 | 1.1987 | 59.02 | 85.02 | 94.12 |
| G1\(\rightarrow\)Close | 0.0984 | 0.0266 | 0.2877 | 89.48 | 96.08 | 99.60 |
| G1\(\rightarrow\)Medium | 0.1376 | 0.1241 | 0.9938 | 80.47 | 94.59 | 98.37 |
| G1\(\rightarrow\)Far | 0.2897 | 0.4375 | 1.3003 | 55.57 | 82.74 | 93.16 |
| G2\(\rightarrow\)G1 | 0.1418 | 0.0666 | 0.3497 | 82.29 | 96.81 | 93.99 |
| G2\(\rightarrow\)G2 | 0.0673 | 0.0244 | 0.2250 | 96.33 | 94.95 | 98.36 |
| G2\(\rightarrow\)G3 | 0.2139 | 0.2485 | 0.9424 | 68.38 | 90.35 | 96.61 |
| G2\(\rightarrow\)Close | 0.1063 | 0.0464 | 0.2720 | 87.97 | 97.30 | 99.59 |
| G2\(\rightarrow\)Medium | 0.1169 | 0.0728 | 0.2463 | 87.27 | 97.75 | 97.93 |
| G2\(\rightarrow\)Far | 0.2336 | 0.2886 | 1.0028 | 66.01 | 88.76 | 95.50 |
| G3\(\rightarrow\)G1 | 0.1902 | 0.1416 | 0.4776 | 71.55 | 93.10 | 98.09 |
| G3\(\rightarrow\)G2 | 0.1967 | 0.1657 | 0.8116 | 69.16 | 92.38 | 98.04 |
| G3\(\rightarrow\)G3 | 0.0727 | 0.0395 | 0.4734 | 95.79 | 99.14 | 99.71 |
| G3\(\rightarrow\)Close | 0.2184 | 0.2070 | 0.5838 | 66.78 | 90.33 | 97.38 |
| G3\(\rightarrow\)Medium | 0.2129 | 0.2430 | 0.7016 | 68.46 | 98.54 | 96.39 |
| G3\(\rightarrow\)Far | 0.0819 | 0.0491 | 0.3784 | 94.08 | 98.65 | 99.50 |

Table 9: **Performance for training and evaluating on different groups.** The Train\(\rightarrow\)Eval column specifies the training group (G1–G3) and the evaluation group or depth range. Three depth ranges (close, medium, far) evaluate performance on scenes of different scales; see the text for the definitions.
VR gaming, novel view synthesis, or video generation for indoor scenes. This work specifically zooms in on the factor of space type, but it may not be the only important factor hindering generalization.
|
2309.15173 | Observable Statistical Mechanics | Understanding equilibration and thermalization in isolated many-body quantum
systems is a central challenge in quantum physics. The traditional approach
focuses on the study of the full state of the quantum system which, at
equilibrium, is best described by the Diagonal Ensemble. Here, we present
Observable Statistical Mechanics, a novel paradigm that shifts attention from
the full quantum state to the statistics of measurement outcomes. This approach
is grounded in the Maximum Observable Entropy Principle, positing that
equilibrium measurement statistics tend to maximize observable entropy under
conserved average energy. By focusing on accessible measurements, the theory
accurately predicts equilibrium probability distributions without needing
detailed microscopic information like the energy eigenstates. Extensive
numerical experiments on 7 spin-1/2 Hamiltonians demonstrate the broad
applicability and robustness of this framework. | Lodovico Scarpa, Abdulla Alhajri, Vlatko Vedral, Fabio Anza | 2023-09-26T18:18:39Z | http://arxiv.org/abs/2309.15173v2 | # Observable Thermalization: Theory, Numerical and Analytical Evidence
###### Abstract
Predicting whether an observable will dynamically evolve to thermal equilibrium in an isolated quantum system is an important open problem, as it determines the applicability of thermodynamics and statistical mechanics. The Observable Thermalization framework has been proposed as a solution, characterizing observables that thermalize using an observable-specific maximum entropy principle. In this paper, we achieve three results. First, we confirm the dynamical relaxation of local observables towards maximum entropy, in a 1D Ising chain. Second, we provide the most general solution to the maximization problem and numerically verify some general predictions about equilibrium behavior in the same model. Third, we explore the emergence and physical meaning of an observable-specific notion of energy. Our results mark significant progress towards a fully predictive theory of thermalization in isolated quantum systems and open interesting questions about observable-specific thermodynamic quantities.
## I Introduction
The concept of thermal equilibrium is at the heart of thermodynamics and statistical mechanics. Broadly speaking, if a system is at thermal equilibrium, we are rigorously justified to use thermodynamics and statistical physics to investigate its behavior. While this was originally intended to work at the macroscopic scale, i.e. for systems with an Avogadro number of degrees of freedom of \(\mathcal{O}(10^{23})\), we now understand that both theories can work very well in regimes which are much closer to the microscopic scale -- well below the Avogadro scale. Quantum Thermodynamics [1; 2; 3; 4] is a direct example of this. Despite much progress, however, the necessary and sufficient conditions for the emergence of thermal equilibrium in quantum systems evolving unitarily are still not known. This long-standing problem has been open since the foundations of statistical mechanics were laid down at the end of the 19th century. Modern approaches to tackle the problem can be roughly organized in 4 major lines of inquiry: Quantum Chaos [5; 6; 7; 8], Dynamical Equilibration [9; 10; 11; 12], Typicality [13; 14; 15; 16] and the Eigenstate Thermalization Hypothesis (ETH) [7; 17; 18; 19; 20]. While each of them provides crucial inputs on _some_ aspects of the thermalization problem, none of them is complete: they lack predictive power since they do not provide a complete characterization (necessary and sufficient) of the conditions for the dynamical emergence of thermal equilibrium.
The Observable Thermalization approach [21; 22; 23], which we investigate here, emerged as an attempt to address this issue. A full-fledged review of Observable Thermalization and how it relates to the other approaches is beyond the scope of the paper. For our purposes, it suffices to say that it stems from the attempt to incorporate the lessons from all the other lines of research, to draw a coherent picture of thermalization and synthesize it into a framework that can be used to generate new and testable predictions. In fact, Observable Thermalization has already allowed the determination of a large class of observables, called Hamiltonian Unbiased Observables (HUOs), which always exhibit thermal behavior and for which the ETH can be analytically derived [21]. This was the first proof of the predictive power of this framework, as it provides us with a consistent way of finding observables that always thermalize dynamically.
In this work, we test the Observable Thermalization approach and its predictive capabilities. First, we verify the dynamical emergence of the Maximum Observable Entropy principle, which states that the measurement statistics of an observable's eigenvalues, at equilibrium, is the distribution that maximizes its Shannon entropy under some constraints [21; 23]. We find good agreement with the envisioned behavior: the observable entropy exhibits a short transient followed by settling on a constrained maximum value, with small fluctuations around it. Second, by finding the generic solutions to the equilibrium equations of Observable Thermalization, we analytically extract a general prediction for the equilibrium behavior of thermalizing observables. We numerically confirm its validity for one-body observables in a one-dimensional non-integrable transverse-field Ising model, for a one-parameter family of initial states. In all cases considered, the agreement between data and predictions is extremely good. This is our second result. In turn, this leads to the emergence of a new quantity, whose physical role is studied and understood as an observable-specific notion of energy. This is our third result.
This constitutes significant progress towards a predictive theory for the emergence of thermal equilibrium in isolated quantum systems and it provides additional evidence in support of the Maximum Observable Entropy
principle, both as a dynamical mechanism and as a powerful predictive tool.
The paper is organized as follows. Sections II and III develop the theoretical and analytical arguments: Section II provides a summary of the theory of Observable Thermalization, which also serves to set language and notation, and in Section III we give the most general solution to the equilibrium equations, together with a new prediction of the theory, eq. (8). Section IV discusses the numerics and how the data support the theory laid out before. Finally, Section V presents a general discussion of our results, and in Section VI we draw some general conclusions.
## II Observable Thermal Equilibrium
Within classical and quantum statistical mechanics, the notion of thermal equilibrium is deeply intertwined with Gibbs' ensemble: a system is at thermal equilibrium if and only if its equilibrium behavior is accurately described by statistical mechanics, via one of Gibbs' ensembles. As previously argued [21], this condition is experimentally inaccessible, unless we deal with very simple systems. One would have to probe a huge number of observables, each with large statistics, to make a proper statement about the density matrix of the system under study. A more reasonable alternative is to recognize the predominant role of quantum measurements in extracting information from a quantum system. One then ascribes the condition of thermal equilibrium to the observed measurement statistics. This perspective, and its need, was argued for in ref.[21]. Here we provide additional support and strengthen the case for Observable Thermalization. We begin by introducing the relevant quantities and setting up notation.
### Notation
We have a quantum system described with a finite-dimensional Hilbert space \(\mathcal{H}\) and we call \(D=\mathrm{dim}\mathcal{H}\) its dimension. The system is made of \(N\) interacting subsystems, each with Hilbert space \(\mathcal{H}_{1}\) of dimension \(d\). We thus have \(\mathcal{H}=\otimes_{k=1}^{N}\mathcal{H}_{1}\) and \(D=d^{N}\). The system is isolated and its evolution is generated by a time-independent Hamiltonian \(H=\sum_{n=1}^{d^{N}}E_{n}\left|E_{n}\right\rangle\!\!\left\langle E_{n}\right|\), so that \(\left|\psi_{t}\right\rangle=e^{-\frac{i}{\hbar}Ht}\left|\psi_{0}\right\rangle =\sum_{n}c_{n}e^{-\frac{i}{\hbar}E_{n}t}\left|E_{n}\right\rangle\). Following von Neumann [16; 24] and subsequent authors [25; 26; 27; 19; 15; 28; 29; 30], we also make the assumption that the fundamental frequencies \(\omega_{nk}\coloneqq\frac{E_{n}-E_{k}}{\hbar}\) are non-degenerate. This is meant to exclude non-interacting Hamiltonians and it is not a particularly restrictive assumption, as it can be relaxed in several ways without affecting the core argument [27; 31]. Given some initial state \(\left|\psi_{0}\right\rangle=\sum_{n}c_{n}\left|E_{n}\right\rangle\), it is well-known that an isolated quantum system can exhibit equilibration [10; 11; 32], in the following sense.
Consider an observable \(A\coloneqq\sum_{j=1}^{n_{A}}a_{j}A_{j}\) where \(A_{j}\coloneqq\sum_{s=1}^{d_{j}}\left|j,s\right\rangle\!\!\left\langle j,s\right|\), with \(n_{A}\) the number of distinct eigenvalues, \(d_{j}=\mathrm{Tr}\,A_{j}\) the degeneracy of the eigenspace corresponding to the \(j\)-th eigenvalue and \(\left\{\left|j,s\right\rangle\right\}\) one of the eigenbases of \(A\). Then, its eigenvalues' probability distribution is given by \(\left\{p_{j}(t)\coloneqq\left\langle A_{j}\right\rangle(t)=\left\langle\psi_{t}|A_{j}|\psi_{t}\right\rangle\right\}_{j=1}^{n_{A}}\). We say that \(p_{j}(t)\) has equilibrated when, after some characteristic equilibration time-scale \(t>\tau_{eq}\), the time-dependent \(p_{j}(t)\) is close to \(p_{j}^{DE}\coloneqq\left\langle A_{j}\right\rangle_{DE}\), the one computed on the diagonal ensemble \(\rho_{DE}\coloneqq\overline{\left|\psi_{t}\right\rangle\!\!\left\langle\psi_{t}\right|}^{\,\infty}=\sum_{n}\left|c_{n}\right|^{2}\left|E_{n}\right\rangle\!\!\left\langle E_{n}\right|\), where the overbar denotes time averages \(\overline{x_{t}}^{T}\coloneqq\frac{1}{T}\int_{0}^{T}x_{t}dt\) and \(\overline{x_{t}}^{\infty}=\lim_{T\rightarrow\infty}\overline{x_{t}}^{T}\). Thus, to predict the equilibrium behavior of an isolated quantum system, \(\rho_{DE}\) is our best bet. This, however, requires access to the energy eigenstates, and their occupation probabilities \(\left|c_{n}\right|^{2}\). Something we know is practically unfeasible for many-body quantum systems, even for relatively low sizes.
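These definitions translate directly into a few lines of linear algebra for systems small enough to diagonalize exactly. The following sketch (in numpy, with hypothetical small Hermitian matrices `H` and `A_j` and a normalized state vector `psi0`; none of these names come from the original text) computes \(p_{j}(t)\) and its diagonal-ensemble value:

```python
import numpy as np

def diag_ensemble_pj(H, A_j, psi0, times, hbar=1.0):
    """Return p_j(t) = <psi_t|A_j|psi_t> and the diagonal-ensemble value p_j^DE."""
    E, V = np.linalg.eigh(H)            # H = V diag(E) V^dagger
    c = V.conj().T @ psi0               # c_n = <E_n|psi_0>
    A_eig = V.conj().T @ A_j @ V        # matrix elements <E_n|A_j|E_m>
    # p_j(t) = sum_{n,m} c_n^* c_m e^{i(E_n - E_m)t/hbar} <E_n|A_j|E_m>
    p_t = np.array([
        np.real(c.conj() @ (np.exp(1j * np.subtract.outer(E, E) * t / hbar) * A_eig) @ c)
        for t in times
    ])
    p_DE = np.real(np.sum(np.abs(c)**2 * np.diag(A_eig)))   # Tr(rho_DE A_j)
    return p_t, p_DE
```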
### Observable Thermal Equilibrium
The quantum statistical mechanics of isolated systems rely on the microcanonical ensemble \(\rho_{mc}\coloneqq\frac{1}{\mathcal{N}(E,\delta E)}\sum_{E_{n}\in I_{mc}} \left|E_{n}\right\rangle\!\!\left\langle E_{n}\right|\), where \(I_{mc}\coloneqq\left[E-\frac{\delta E}{2},E+\frac{\delta E}{2}\right]\), and \(E\), \(\delta E\) are, respectively, the average and standard deviation of the energy probability distribution. \(\mathcal{N}(E,\delta E)\) is the number of energy eigenstates contained in \(I_{mc}\). \(\rho_{mc}\), however, can never be reached via unitary dynamics from any \(\left|\psi_{0}\right\rangle\): \(\left\{\left|E_{n}\right\rangle\!\!\left\langle E_{n}\right|\right\}_{n=1}^{d^{N}}\) is an exponentially large set of conserved quantities, which retains partial memory of the initial conditions. Thus, such thermal equilibrium is impossible for an isolated system, which evolves with a time-independent Hamiltonian. However, if we focus on sub-systems \(\mathcal{H}_{S}=\otimes_{k=1}^{N_{S}}\mathcal{H}_{1}\), it is possible that \(D_{Tr}\left(\rho_{DE}^{S},\rho_{mc}^{S}\right)\ll 1\), where \(D_{Tr}(\rho,\sigma)\) is the trace distance and \(\rho^{S}\) is the reduced density matrix of the subsystem \(S\). When this is the case, the statistics of _all observables_ with support on \(S\) can be extracted from \(\rho_{mc}\), and we say that \(S\)_is at thermal equilibrium_.
Using such a definition for thermal equilibrium as the basis of quantum statistical mechanics brings two practical issues. First, verifying its validity is exponentially hard in the size of the system. Hence, it is experimentally useless even for systems of modest sizes, say 50 qubits. Second, for such a condition to be proven wrong it is sufficient to have just one (only one) observable that departs from the predictions of statistical mechanics. In that case, even though its predictions might still hold for a large class of other observables, we simply cannot use it. In other words, we might forgo the ability to predict a large class of observables simply because one of them cannot be predicted.
Observable Thermal Equilibrium is an alternative
thermal equilibrium definition that addresses both issues. By reflecting what happens in real experiments, it provides an observable-specific notion of thermal equilibrium which allows us to push the boundaries of statistical mechanics, and better understand its dynamical foundations. Defining the quantities
\[\epsilon_{j}^{DE}(t)\coloneqq \left|p_{j}(t)-p_{j}^{DE}\right|,\] \[\epsilon_{j}^{mc}\coloneqq \left|p_{j}^{DE}-p_{j}^{mc}\right|,\]
we give the following, more practical, notion:
**Definition II.1** (Observable Thermal Equilibrium (OTE)).: We say an observable \(A\) is at thermal equilibrium when
\[\overline{\epsilon_{j}^{DE}(t)}^{\infty},\epsilon_{j}^{mc}\leq\epsilon_{A} \ll 1\quad\forall j \tag{1}\]
with \(\epsilon_{A}>0\) some small observable-specific quantity, which is expected to vanish, \(\epsilon_{A}\to 0\), in the thermodynamic limit.
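As a rough illustration (ours, not part of the original text), Definition II.1 can be checked numerically by reusing `diag_ensemble_pj` from the sketch above: estimate the infinite-time average of \(\epsilon_{j}^{DE}(t)\) by a long finite-time average, and compare \(p_{j}^{DE}\) with the microcanonical value obtained by averaging \(\langle E_{n}|A_{j}|E_{n}\rangle\) over the eigenstates inside \(I_{mc}\).

```python
import numpy as np

def ote_deviations(H, A_j, psi0, times, delta_E):
    """Estimate the two deviations entering eq.(1) for a single projector A_j."""
    E, V = np.linalg.eigh(H)
    c = V.conj().T @ psi0
    p_t, p_DE = diag_ensemble_pj(H, A_j, psi0, times)
    eps_DE = np.mean(np.abs(p_t - p_DE))        # finite-time proxy for the time average
    E_mean = np.real(c.conj() @ (E * c))        # <H> = sum_n |c_n|^2 E_n
    shell = np.abs(E - E_mean) <= delta_E / 2   # eigenstates inside I_mc (assumed non-empty)
    diag_A = np.real(np.diag(V.conj().T @ A_j @ V))
    p_mc = diag_A[shell].mean()                 # Tr(rho_mc A_j)
    return eps_DE, abs(p_DE - p_mc)
```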
We emphasize that we are using a definition of equilibrium that addresses the whole probability distribution. This means that not only \(\left\langle A\right\rangle\) will have a thermal behavior, but also all fluctuations like the standard deviation and all the higher moments \(\left\langle A^{n}\right\rangle\).
### OTE vs Standard Thermal Equilibrium
If OTE is true for all observables with support on some subsystem \(S\), then we return to the previous definition of thermal equilibrium for \(\rho^{S}\). This is proven via the definition of trace distance: \(D_{Tr}(\rho,\sigma)\coloneqq\max_{0\leq O\leq\mathbb{I}}|\operatorname{Tr}(\rho O)-\operatorname{Tr}(\sigma O)|\). Indeed, since the max is taken over all such observables, it also includes all projectors. Calling \(\mathcal{A}_{S}\) the set of observables with support on \(S\), if OTE is verified for all of them, we have \(\epsilon_{A_{S}}\coloneqq\max_{A\in\mathcal{A}_{S}}\epsilon_{A}\ll 1\). We thus conclude that \(D(\rho_{DE}^{S},\rho_{mc}^{S})\leq\epsilon_{A_{S}}\ll 1\). OTE is, therefore, the core concept underlying the standard definition of thermal equilibrium. Indeed, it is quite easy to imagine situations in which the condition on the density matrix is violated because there is one observable that clearly violates OTE, while all the other ones still verify it. An example of a system in which this is true is the XXZ model, in the Many-Body Localized phase [33; 34; 35]. The local magnetization along \(z\) never thermalizes, while the ones along \(x\) and \(y\) do. This was proven both analytically and numerically in ref.[23]. This example allows us to showcase the power of the Theory of Observable Thermalization and of the OTE. Indeed, quantum statistical mechanics breaks down in Many-Body Localized systems, so we cannot use it to study their equilibrium behavior. Observable Thermalization, however, allows us to use its prediction in the MBL phase, for some observables, namely those that satisfy OTE, thus pushing the boundary of statistical mechanics beyond its original domain of applicability.
### The three elements of the characterization problem
Looking at OTE from a mathematical perspective, the problem of finding the necessary and sufficient conditions for the emergence of OTE in an isolated quantum system is well posed if and only if the following three quantities are given: the initial state \(\left|\psi_{0}\right\rangle\), the Hamiltonian \(H\), and the observable \(A\) (or, the set of projectors \(\{A_{j}\}_{j=1}^{n_{A}}\)). This can be easily proven, as follows. Given some \(\left|\psi_{0}\right\rangle\) and some \(H\), we can always find observables that do not satisfy the OTE (e.g., observables with \([A,H]=0\), such as the energy eigenprojectors) and observables that do satisfy the OTE (e.g., Hamiltonian Unbiased Observables, see ref.[21]) [36]. Given some \(H\) and \(A\), we can always find some \(\left|\psi_{0}\right\rangle\) for which the OTE is true (e.g. thermally pure states, see ref.[37]), and some other ones for which the OTE condition is clearly false (superpositions of a few macroscopically different eigenenergies). Finally, given \(A\) and \(\left|\psi_{0}\right\rangle\), we can always find some \(H\) which makes OTE true (e.g. \(H\) such that \(A\) is a HUO) and vice versa (when \([H,A]=0\)).
We conclude that OTE, and any other thermal equilibrium definition that builds on it, is a joint property of the triple \(\mathbb{T}\coloneqq\left\{\left|\psi_{0}\right\rangle,A,H\right\}\), which we call _Thermal Equilibrium Characterization Triple_. Without specifying all three elements, the question is not well-defined and cannot be answered appropriately.
### Equilibrium Equations: A road to a predictive framework
Given a triple \(\mathbb{T}\), when does \(p_{j}(t)\) satisfy OTE? It is straightforward to see that, under experimentally realistic initial conditions [38; 25], the ETH is sufficient for OTE [19; 23]. Indeed, ETH's validity has been numerically checked in many cases [39; 40; 41; 42; 43; 44; 20]. The necessity of the ETH has also been argued for [45; 46; 11], but it is currently less solid. While this clearly establishes the ETH as a core element of thermalization, this is a condition whose validity can be checked only through knowledge of the exact eigenstates. Hence, the problem of predicting which observables will satisfy OTE needs additional input.
Following Jaynes [47; 48], inference principles can help bridge the gap and constitute a principled, reliable, and established way to provide estimates in situations of lack of knowledge. In ref.[21], Anza and Vedral posited that one could characterize observables at equilibrium using the Maximum Observable Entropy principle. They then derived equilibrium equations to predict their distribution. Evidence of the validity of this theoretical framework, and of its practical utility, was given in refs. [21; 49; 22]. Here we give a quick summary of the rationale behind the equilibrium equations, and how to derive them.
Given that our task is to estimate \(p_{j}\), the Maximum Observable Entropy principle prescribes that our best bet is
\(p_{j}^{eq}\), which is the probability distribution that maximizes the Shannon entropy \(S_{A}\coloneqq-\sum_{j=1}^{n_{A}}p_{j}\log p_{j}\), compatibly with the validity of some set of constraints. How does one choose them? Given that the underlying dynamics is unitary, the memory of the initial conditions is encoded in the occupation probabilities of the energy eigenstates, \(|c_{n}|^{2}\). These are constant quantities and one should therefore include all of them or any linear combination thereof. Such a choice guarantees that we always get the exact answer [50, 10, 11], namely \(p_{j}^{DE}\coloneqq\operatorname{Tr}(\rho_{DE}A_{j})\). This, however, requires knowledge of the energy eigenstates. Moreover, due to the locality of physically reasonable observables, we expect many of these constraints to not be relevant. A good approximation scheme is to trade the whole probability distribution \(\{|c_{n}|^{2}\}\) for a suitable set of moments \(\left\{\left\langle H^{k}\right\rangle=\sum_{n}|c_{n}|^{2}E_{n}^{k}\right\}_{k=0}^{M}\). If \(M=D-1\) this is equivalent to giving the full set \(\left\{|c_{n}|^{2}\right\}_{n=0}^{D-1}\). For \(M<D-1\) this provides a hierarchy of reasonable approximations, which was explicitly studied in ref. [50]. We now argue why, for experimentally realistic initial conditions, we believe \(M=1\) to be sufficient. As discussed in [38, 25], in the thermodynamic limit, we expect \(|c_{n}|^{2}\) to be concentrated in an energy window which is small at the macroscopic scale (thermodynamic energy can be estimated with relatively low error), while still hosting a huge number of energy levels (the density of states is fantastically large). \(|c_{n}|^{2}\) is therefore peaked around \(\left\langle H\right\rangle\), and we expect its actual shape to be largely irrelevant. With such a setup, the task is to find the distribution \(p_{j}^{eq}\) which maximizes \(S_{A}\) compatibly with the validity of the constraints fixing normalization and average energy. We reiterate that the standard notion of thermal equilibrium can be recovered from OTE by simply requiring the constrained maximization of the minimum among all possible observable entropies since \(\min_{A}S_{A}=S_{\text{vN}}\), where \(S_{\text{vN}}\coloneqq-\operatorname{Tr}(\rho\log\rho)\) is the von Neumann entropy. Hence, OTE is a generalization of the usual definition of thermal equilibrium.
The technical problem can be tackled using the Lagrange multipliers technique [51]. As discussed above, we keep only the normalization of probabilities and average energy constraints. After some algebraic manipulations, one obtains the following equilibrium equations, whose solution \(p_{j}^{eq}\) is expected to characterize the equilibrium value of \(p_{j}(t)\) when \(t>\tau_{eq}\):
\[\operatorname{Tr}\big{(}[A_{j},H]\rho\big{)}\overset{\text{eq}}{=}0 \tag{2}\] \[-p_{j}\log p_{j}\overset{\text{eq}}{=}(1+\lambda_{N})p_{j}+\lambda_{E}R_{j} \tag{3}\]
where \(\lambda_{E}\) and \(\lambda_{N}\) are, respectively, the Lagrange multipliers for the energy and normalization constraint, and
\[R_{j}\coloneqq\frac{1}{2}\operatorname{Tr}\big{(}\{A_{j},H\}\rho\big{)}. \tag{4}\]
Note that we do not expect these equations to be exactly satisfied for all times \(t>\tau_{eq}\). However, we do expect them to hold, within some accuracy given by dynamical fluctuations, for most times. The first equation is about dynamical equilibration, i.e. it states that the distribution must be invariant under the unitary dynamics generated by \(H\). Indeed, using von Neumann's equation one has \(i\hbar\frac{\partial}{\partial t}p_{j}=\operatorname{Tr}\big{(}[A_{j},H]\rho\big{)}\overset{\text{eq}}{=}0\). The second equation characterizes the shape of \(p_{j}^{eq}\). For example, if we sum over the index \(j\), we can see that the entropy of the equilibrium distribution has a thermodynamic flavor in the sense that it has a linear relation with the average energy:
\[S_{A}^{eq}=\log\mathcal{Z}_{A}+\beta_{A}E, \tag{5}\]
where \(\beta_{A}\coloneqq\lambda_{E}\) plays the role of an observable-specific inverse temperature and, calling \(\log\mathcal{Z}_{A}\coloneqq 1+\lambda_{N}\), \(\mathcal{Z}_{A}\) is an observable-specific partition function. All aspects of the derivation and a more detailed discussion on the equilibrium equations can be found in ref.[21].
Note furthermore that the object \(R_{j}\) is an inner product between the operators \(A_{j}\) and \(H\)[52] and can also be expressed as
\[R_{j}=\operatorname{Cov}(A_{j},H)+p_{j}E \tag{6}\]
by using the definition of the symmetrized covariance between two operators: \(\operatorname{Cov}(X,Y)\coloneqq\left\langle X\circ Y\right\rangle-\left\langle X \right\rangle\left\langle Y\right\rangle\), where \(X\circ Y\coloneqq\frac{XY+YX}{2}\) is the Jordan product between \(X\) and \(Y\).
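The two equivalent expressions for \(R_{j}\), eqs.(4) and (6), are straightforward to check numerically; here is a minimal numpy sketch (the density matrix `rho` and the operators are placeholders of our own):

```python
import numpy as np

def R_j(A_j, H, rho):
    """R_j = (1/2) Tr({A_j, H} rho), as in eq.(4)."""
    return 0.5 * np.real(np.trace((A_j @ H + H @ A_j) @ rho))

def R_j_via_covariance(A_j, H, rho):
    """Equivalent form R_j = Cov(A_j, H) + p_j E of eq.(6)."""
    p_j = np.real(np.trace(A_j @ rho))
    E = np.real(np.trace(H @ rho))
    jordan = 0.5 * (A_j @ H + H @ A_j)                 # Jordan product A_j ∘ H
    cov = np.real(np.trace(jordan @ rho)) - p_j * E    # symmetrized covariance Cov(A_j, H)
    return cov + p_j * E                               # equals R_j(A_j, H, rho)
```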
We now discuss various strategies to solve the equilibrium equations, together with known analytical solutions.
## III Solving the equilibrium equations
Eqs.(2) and (3) are quite involved. First, both contain terms that depend implicitly on \(p_{j}\): they depend explicitly on \(\rho\), of which \(p_{j}\) is simply a projection. Second, eq.(3) is highly nonlinear and implicit, as it contains \(R_{j}\). Nevertheless, there is only one form of the equilibrium probability distribution which is compatible with the form of the Shannon entropy in eq.(5) above. This is the exponential family
\[p_{j}^{eq}=\begin{cases}\frac{1}{\mathcal{Z}_{A}}e^{-\lambda_{E}\varepsilon_{j}^{eq}}&\forall\,a_{j}\in\mathcal{I}\subseteq\sigma_{A}\\ 0&\forall\,a_{j}\in\sigma_{A}\setminus\mathcal{I}\end{cases} \tag{7}\]
where \(\mathcal{Z}_{A}\coloneqq\sum_{j}e^{-\lambda_{E}\varepsilon_{j}^{eq}}\), \(\varepsilon_{j}^{eq}\) is some energetic quantity such that \(\sum_{j}p_{j}^{eq}\varepsilon_{j}^{eq}=E\), and \(\mathcal{I}\) is an (improper) subset of the spectrum of \(A\), \(\sigma_{A}\). This follows straightforwardly from Gibbs' inequality; see [53, 54]. Thus, if a solution exists it must have this form. However, \(p_{j}^{eq}\) must also satisfy eq.(3); therefore, we plug the solution into this equation and obtain
\[R_{j}^{eq}=\varepsilon_{j}^{eq}p_{j}^{eq}. \tag{8}\]
This is a highly non-trivial prediction of the equilibrium equations and is required to hold for the solution to exist.
Thus, according to Observable Thermalization, at equilibrium, we must have \(\left\langle A_{j}\circ H\right\rangle_{eq}=\varepsilon_{j}^{eq}p_{j}^{eq}\) for some \(\varepsilon_{j}^{eq}\) yet to be characterized. To obtain a complete solution one still needs to fix the value of the Lagrange multiplier \(\lambda_{E}\) by using the second constraint equation. That is, we need to find
\[\lambda_{E}\ :\ \sum_{j}p_{j}^{eq}\varepsilon_{j}^{eq}=-\frac{\partial\log \mathcal{Z}_{A}}{\partial\lambda_{E}}=E. \tag{9}\]
For a binary observable (\(j\in\{0,1\}\)) we can analytically solve this non-linear equation:
\[\lambda_{E}=\frac{1}{\delta\varepsilon}\operatorname{arctanh}\bigg{(}\frac{ \bar{\varepsilon}-E}{\delta\varepsilon}\bigg{)}=\frac{1}{\varepsilon_{1}- \varepsilon_{0}}\ln\bigg{(}\frac{\varepsilon_{1}-E}{E-\varepsilon_{0}}\bigg{)}, \tag{10}\]
where \(\bar{\varepsilon}:=\frac{\varepsilon_{1}+\varepsilon_{0}}{2}\) and \(\delta\varepsilon:=\frac{\varepsilon_{1}-\varepsilon_{0}}{2}\).
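For concreteness, eq.(10) and the corresponding equilibrium distribution of eq.(7) can be coded up in a few lines; this is a sketch of ours under the assumption \(\varepsilon_{0}<E<\varepsilon_{1}\) (otherwise the logarithm is undefined):

```python
import numpy as np

def lambda_E_binary(eps0, eps1, E):
    """Closed-form Lagrange multiplier of eq.(10); assumes eps0 < E < eps1."""
    return np.log((eps1 - E) / (E - eps0)) / (eps1 - eps0)

def p_eq_binary(eps0, eps1, E):
    """Equilibrium distribution p_j ∝ exp(-lambda_E eps_j) of eq.(7), j in {0, 1}."""
    lam = lambda_E_binary(eps0, eps1, E)
    w = np.exp(-lam * np.array([eps0, eps1]))
    return w / w.sum()
```

A quick sanity check: `p_eq_binary(0.0, 1.0, 0.25)` gives \((0.75,0.25)\), whose average energy is indeed \(0.25\), so the constraint of eq.(9) is satisfied by construction.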
We now investigate the validity and consequences of eq.(8). In the next section, we will also provide numerical evidence supporting the arguments developed here.
### Hamiltonian Unbiased Observables
To understand eq.(8), we begin by looking at what happens when \(\rho_{DE}\to|E_{n}\rangle\!\langle E_{n}|\), that is, we consider a microcanonical energy window with \(\delta E\to 0\), such that in the limit it contains only one energy eigenstate. Physically, we do so because we believe the argument behind the ETH to be correct: thermalization occurs because thermal properties can emerge already at the level of a single energy eigenstate. In this limiting case, we have that \(R_{j}\to E_{n}p_{j}(E_{n})\). This is indeed the form predicted by the equilibrium equations and, plugging it back into the second equilibrium equation, we get \(\varepsilon_{j}=E_{n}\ \forall j\) and therefore \(p_{j}(E_{n})=p_{j}^{eq}=\frac{1}{n_{A}}\). This happens when the observable \(A\) is a Hamiltonian Unbiased Observable (HUO) [21], i.e., the observable and energy eigenbases are mutually unbiased [55; 56; 57; 58]. In this case, we have \(R_{j}^{HUO}\overset{\text{eq}}{=}p_{j}^{HUO}E=\frac{E}{n_{A}}\), which implies \(\operatorname{Cov}(A_{j}^{HUO},H)\overset{\text{eq}}{=}0\), by eq.(6). These were the solutions to the generic equilibrium equations originally found in [21; 23]. When initialized in out-of-equilibrium configurations, HUOs do exhibit fast equilibration and thermalization to the flat distribution, with truly maximal entropy \(S_{A}=\log n_{A}\). HUOs are a very useful model to understand thermalization, for three reasons.
* First, the ETH holds for all of them, and this has been proven analytically in ref.[21].
* Second, in a statistically precise sense (Haar measure), most observables are expected to be quite close to being HUOs. This was proven in ref.[22], along with other statements clarifying the physical relevance of HUOs.
* Third, for Many-Body Localized systems, extensive sets of HUOs can be found analytically, and they are all quasi-local. This clearly shows the advantage of OTE against standard thermal equilibrium. Indeed, while it is true that MBL systems escape quantum statistical mechanics in the standard sense, there are several local observables that still exhibit OTE, even in the localized phase. They exhibit fast equilibration, and their long-time behavior is thermal. This was shown in ref.[23], and used in ref.[49].
### Beyond HUOs
While HUOs capture core aspects of observable thermalization, they have one major drawback: they are insensitive to the overall energy scale of the system. Technically, this is due to the fact that \(R_{j}^{HUO}/p_{j}=E\), which does not depend on \(j\). When this is plugged into the equilibrium equation, we see that the dependence of \(p_{j}^{eq}\) on \(j\) also disappears, thus leading to a flat distribution. We can also see this from eq.(6). HUOs are observables such that their projectors have no correlations with the Hamiltonian at equilibrium: \(\operatorname{Cov}(A_{j}^{HUO},H)\stackrel{{ eq}}{{=}}0\). We do not expect this to be true exactly for physical observables which, at equilibrium, do exhibit a smooth dependence on the energy scale of the system. Nevertheless, we believe the core mechanism to be approximately correct. In other words, while small, we expect a non-vanishing covariance between the observable projector \(A_{j}\) and the Hamiltonian operator at equilibrium. The question thus becomes, is it possible to solve the equilibrium equations and use them to predict the equilibrium values of certain observables? Via eq.(8), this becomes a question about \(R_{j}\), which we answer now.
Within a microcanonical energy window \(I_{mc}\), \(R_{j}\approx\sum_{E_{n},E_{m}\in I_{mc}}c_{n}^{*}c_{m}\,e^{-i(E_{m}-E_{n})t/\hbar}\left(\frac{E_{n}+E_{m}}{2}\right)\left\langle E_{n}|A_{j}|E_{m}\right\rangle\) and \(\left(\frac{E_{n}+E_{m}}{2}\right)\in I_{mc}\), so we obtain the following straightforward bound
\[\bigg{(}1-\frac{\delta E}{2E}\bigg{)}p_{j}\lesssim\frac{R_{j}}{E}\lesssim \bigg{(}1+\frac{\delta E}{2E}\bigg{)}p_{j} \tag{11}\]
This implies the existence of some \(\tilde{\varepsilon}_{j}(t)\in I_{mc}\) such that \(R_{j}(t)\approx\tilde{\varepsilon}_{j}(t)p_{j}(t)\). In the thermodynamic limit, we have \(\delta E/E\ll 1\). So we expect the bound to become more and more stringent as we increase the size of our system. This means eq.(8) might indeed be reasonably true. We therefore define the ratio \(\tilde{\varepsilon}_{j}(t)\coloneqq R_{j}(t)/p_{j}(t)\) which, in general, exists (unless \(p_{j}(t)=0\)) and has the dimension of energy. While \(\tilde{\varepsilon}_{j}(t)\) exists, in principle it could fluctuate wildly, oscillate permanently, or exhibit other forms of dynamical behavior, rather than settling on a constant, as expected from eq.(8). We now provide numerical evidence that strongly supports \(\tilde{\varepsilon}_{j}=\varepsilon_{j}\) and show that eq.(8) is indeed verified in the class of models we analyze.
This concludes the discussion about the general aspects of the theory and the generic solutions of the equilibrium
equations. We now present a numerical analysis that supports the theoretical picture. The physical interpretation of the quantity \(\varepsilon_{j}\), together with a detailed discussion of the general solution found here, will be given later, in section V.
## IV Predictions in concrete models
The theory laid out in Sections II and III provides a set of principles to estimate the equilibrium probability distribution of observables. Here we test its predictions in a one-dimensional Ising model described by the Hamiltonian
\[H=\sum_{n=0}^{L-1}\big{(}J^{z}Z_{n}Z_{n+1}+B^{z}Z_{n}+B^{x}X_{n}\big{)}, \tag{12}\]
where \(n\) is an index that runs over the \(L\) lattice sites, and \(Z_{n}\equiv\sigma_{z,n}\) etc., with \(\sigma_{\alpha,n}\) (\(\alpha=x,y,z\)) representing the Pauli operator with Pauli matrix \(\sigma_{\alpha}\) acting on the lattice site \(n\), i.e.
\[\sigma_{\alpha,n}\coloneqq\mathbb{I}^{\otimes n}\otimes\sigma_{\alpha}\otimes \mathbb{I}^{\otimes(L-1-n)}. \tag{13}\]
We use periodic boundary conditions, so that \(\sigma_{\alpha,L}=\sigma_{\alpha,0}\). We choose the parameter values \((J^{z},B^{x},B^{z})=(1,0.9045,0.8090)\), since they are guaranteed to give a robustly non-integrable model [42, 59]. We have considered system sizes up to \(L=10\) spins, and a class of initial states given by \(|\psi_{0}(\theta_{m})\rangle=R_{y}(\theta_{m})^{\otimes L}\,|01\ldots 01\rangle\), where \(R_{y}(\theta)=\exp(-iY\theta/2)\) is the rotation operator along the \(y\) axis [60] and \(\theta_{m}=\frac{m-1}{19}\frac{\pi}{2}\). These states interpolate between the antiferromagnetic state along the \(z\) direction (\(\theta_{1}=0\)) and the one along the \(x\) direction (\(\theta_{20}=\frac{\pi}{2}\)). Note that the extremes are states of zero Shannon entropy for the observables \(Z_{n}\) and \(X_{n}\) respectively, thus maximally out of equilibrium for these observables. Throughout this study, we consider only local observables of the form given in eq.(13), whose coarse-grained eigen-projectors are
\[A_{j}^{\alpha,n}=\frac{\mathbb{I}+(-)^{j}\sigma_{\alpha,n}}{2}\quad(j=0,1). \tag{14}\]
In the following, we present results only for the lattice site \(n=0\), as there is no difference between sites due to the translation invariance of the model.
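For modest sizes, the model of eqs.(12)-(14) and the rotated initial states can be built by hand with Kronecker products; the sketch below is ours (exact diagonalization of the resulting \(2^{L}\times 2^{L}\) matrices is feasible only for small \(L\)):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, n, L):
    """sigma_{alpha,n} = I^{(n)} ⊗ sigma_alpha ⊗ I^{(L-1-n)}, eq.(13)."""
    ops = [I2] * L
    ops[n] = op
    return reduce(np.kron, ops)

def ising_hamiltonian(L, Jz=1.0, Bx=0.9045, Bz=0.8090):
    """Non-integrable Ising chain of eq.(12), periodic boundary conditions."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for n in range(L):
        H += Jz * site_op(Z, n, L) @ site_op(Z, (n + 1) % L, L)
        H += Bz * site_op(Z, n, L) + Bx * site_op(X, n, L)
    return H

def initial_state(L, theta):
    """|psi_0(theta)> = R_y(theta)^{⊗L} |0101...01>."""
    Ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                   [np.sin(theta / 2),  np.cos(theta / 2)]], dtype=complex)
    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)
    return reduce(np.kron, [Ry @ (ket0 if n % 2 == 0 else ket1) for n in range(L)])

def projectors(op, n, L):
    """Coarse-grained eigenprojectors A_j = (I + (-1)^j sigma_{alpha,n})/2, eq.(14)."""
    s = site_op(op, n, L)
    return [(np.eye(2**L) + (-1)**j * s) / 2 for j in (0, 1)]
```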
We use exact diagonalization to obtain the state dynamics and then look at the behavior of the Shannon entropy of local observables \(S_{A}(t)=-\sum_{j}p_{j}(t)\log p_{j}(t)\) with \(p_{j}(t)=\left\langle\psi_{t}\right|A_{j}\left|\psi_{t}\right\rangle\). We observe that the entropy rapidly equilibrates to a stationary value \(S_{A}^{max}\), after an initial transient. This supports the use of a maximum entropy principle to determine the equilibrium probability distribution. Plots for observables \(X_{0}\), \(Y_{0}\) and \(Z_{0}\) in a system with \(L=10\) spins and with initial state \(|\psi(0)\rangle=|0101\ldots 01\rangle\) are shown in fig.1. The antiferromagnetic state along \(z\) is an eigenstate of \(Z_{0}\), therefore this observable has zero Shannon entropy at \(t=0\). Its entropy quickly grows to the maximum and then settles on a stationary value. Instead, for \(X_{0}\) and \(Y_{0}\) the initial state chosen corresponds to maximal Shannon entropy \(\log(2)\), and these observables equilibrate to a (slightly) lower value of entropy than their initial one. This is in agreement with the maximum entropy principle, as the presence of the constraints means \(S_{A}=\log(2)\) is an out-of-equilibrium situation and the observables' entropy relaxes to the maximum value allowed, \(S_{A}^{max}\).
We now look at the dynamical emergence of the prediction given by eq.(8). Since we expect \(\varepsilon_{j}\) to be well defined at equilibrium, we look at its dynamic counterpart \(\tilde{\varepsilon}_{j}(t)=R_{j}(t)/p_{j}(t)\). In principle, this can fluctuate wildly in out-of-equilibrium situations, due to the possibility of a vanishing denominator. However,
after some initial transient, we expect it to settle on a fixed value. To see if this is true, we look at the time-implicit plot \(\left\{(x(t),y(t))=\left(p_{j}(t),R_{j}(t)\right)\right\}_{t}\). In fig.2 we show a few examples where eq.(8) is respected. We clearly see that points settle on a stable orbit described by the linear law \(R_{j}(t)=\varepsilon_{j}p_{j}(t)\), within some degree of tolerance. We find this to hold for all observables and initial states analyzed here. This is a highly non-trivial prediction about the dynamical phenomenology of many-body quantum systems, which emerges directly from the equilibrium equations.
Finally, since our goal is to predict the observables' probability distribution at equilibrium, we look at the time evolution of \(p_{j}(t)\) and see that it equilibrates to its time average \(\overline{p_{j}(t)}\) with small fluctuations around it. This means the first equilibrium equation, eq.(2), is satisfied, as the distribution is approximately constant under unitary evolution at equilibrium. We then extract \(\varepsilon_{j}^{eq}\) from time-averages \(\varepsilon_{j}^{eq}=\overline{R_{j}(t)}/\overline{p_{j}(t)}\), and the Lagrange multiplier \(\lambda_{E}\) using eq.(10) (alternatively, one could also do a numerical optimization of the energy constraint equation, eq.(9)). Given these two quantities, we can now calculate \(p_{j}^{eq}\propto e^{-\lambda_{E}\varepsilon_{j}^{eq}}\) and compare it to the exact probability distribution after the transient is removed. The data shows excellent agreement with the theory, with \(|\overline{p_{j}(t)}-p_{j}^{eq}|<10^{-7}\), thus supporting the framework's validity. Fig.3 shows this for observables \(X_{0}\), \(Y_{0}\) and \(Z_{0}\) in a system of size \(L=10\) and with initial state \(|\psi(0)\rangle=|0101\ldots 01\rangle\).
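The analysis pipeline described in this section can be condensed into the following sketch, which reuses the helpers defined above (`ising_hamiltonian`, `initial_state`, `projectors`, `Z`, `p_eq_binary`); the parameter values are illustrative, not those used to produce the figures:

```python
import numpy as np

def equilibrium_prediction(L=6, theta=0.0, t_max=50.0, n_t=500, t_transient=10.0):
    """Evolve |psi_0(theta)>, extract eps_j^eq from time averages, predict p_j^eq."""
    H = ising_hamiltonian(L)
    psi0 = initial_state(L, theta)
    E_vals, V = np.linalg.eigh(H)
    c = V.conj().T @ psi0
    A0, _ = projectors(Z, 0, L)                  # eigenprojectors of Z_0
    times = np.linspace(0.0, t_max, n_t)
    keep = times > t_transient                   # discard the initial transient

    p0_t = np.empty(n_t)
    R0_t = np.empty(n_t)
    for k, t in enumerate(times):
        psi_t = V @ (np.exp(-1j * E_vals * t) * c)        # hbar = 1
        p0_t[k] = np.real(psi_t.conj() @ A0 @ psi_t)
        R0_t[k] = 0.5 * np.real(psi_t.conj() @ (A0 @ H + H @ A0) @ psi_t)

    E = np.real(psi0.conj() @ H @ psi0)          # conserved average energy
    eps0 = R0_t[keep].mean() / p0_t[keep].mean()             # eps_0^eq from time averages
    eps1 = (E - R0_t[keep].mean()) / (1.0 - p0_t[keep].mean())  # uses R_1 = E - R_0
    return p_eq_binary(eps0, eps1, E), p0_t[keep].mean()
```

Comparing the two returned numbers reproduces the kind of agreement reported above, provided \(E\) lies strictly between the two conditional energies so that eq.(10) applies.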
## V Discussion
We now discuss the results obtained in the previous section, with respect to the general theory outlined in section II and III.
### Dynamical Emergence of MaxEnt
Observable Thermalization is founded on the principle of Maximum Observable Entropy. It essentially states that the equilibrium distribution we observe is the one given by a constrained maximum entropy principle, applied to the measurement statistics of the observable's eigenvalues \(p_{j}\).
By looking at the full out-of-equilibrium dynamics of one-body observables for a one-parameter class of initial states, we observe a phenomenology compatible with the Maximum Observable Entropy principle: the entropy \(S_{A}(t)\) rapidly relaxes to a stationary value, and then fluctuates around it. This constitutes good dynamical evidence that, at least for the model studied here, we can indeed use the Maximum Observable Entropy to predict the equilibrium value of one-body observables. This is our first result.
### Solution of the Equilibrium Equations and \(R_{j}=\varepsilon_{j}^{eq}p_{j}\) prediction
Thanks to Gibbs' inequality, it is possible to give the most general solution to the Equilibrium Equations, eqs.(2) and (3), using the exponential family of probability distributions: \(p_{j}^{eq}\propto e^{-\lambda_{E}\varepsilon_{j}}\). However, the quantities \(\varepsilon_{j}\) are unknown and it is not clear what role they play. Using this, in section III we made a general prediction about equilibrium behavior: eq.(8), namely \(R_{j}=\varepsilon_{j}p_{j}\) for some constant \(\varepsilon_{j}\). By studying time-implicit plots, see fig.2, we have observed that the prediction holds remarkably well. The points \((p_{j}(t),R_{j}(t))\) settle on an orbit described by a linear law. The value of \(\varepsilon_{j}\) can then be extracted either by performing a linear fit or by using time-averages, \(\overline{R_{j}(t)}\) and \(\overline{p_{j}(t)}\). Both give essentially the same value. The prediction is verified for all one-body observables and for the whole one-parameter family of initial states \(|\psi_{0}(\theta_{m})\rangle\). Confirming the validity of this prediction from Observable Thermalization is our second result.
While this gives us a dynamical interpretation of \(\varepsilon_{j}\) as the proportionality constant between \(R_{j}\) and \(p_{j}\) at equilibrium, its physical meaning is yet to be understood. Thus, the question now becomes, what is \(\varepsilon_{j}\) and what role does it play? We address this in the next paragraph.

Figure 2: Time-implicit plot of \(R_{0}(t)\) against \(p_{0}(t)\) for observables \(X_{0}\), \(Y_{0}\) and \(Z_{0}\) for \(L=10\) and \(|\psi(0)\rangle=|0101\ldots 01\rangle\). Note that we are considering equilibrium quantities, as the transient has been removed from the data. Note also that the time-implicit plot for \((p_{1}(t),R_{1}(t))\) has the same behavior since \(p_{1}=1-p_{0}\) and \(R_{1}=E-R_{0}\).

Figure 3: Time evolution of the eigenvalue probability distribution \(p_{j}\) of observables \(X_{0}\), \(Y_{0}\) and \(Z_{0}\) for \(L=10\) and \(|\psi(0)\rangle=|0101\ldots 01\rangle\). Note that the transient has been removed from the data.
### The physical interpretation of \(\varepsilon_{j}^{eq}\)
In order to predict the equilibrium distribution \(p_{j}^{eq}\), it is crucial to have knowledge of the object \(\varepsilon_{j}^{eq}\), so we now provide a physical interpretation for it. In fact, this quantity has an important physical meaning at equilibrium: it is the (conditional) expected energy stored in \(\mathcal{H}_{j}\subset\mathcal{H}\), the image of \(A_{j}\). To see this we can compute the value of \(\varepsilon_{j}\) given by the diagonal ensemble, \(\varepsilon_{j}^{DE}=R_{j}^{DE}/p_{j}^{DE}\), since we expect \(\varepsilon_{j}^{eq}\approx\varepsilon_{j}^{DE}\) (see section II):
\[\varepsilon_{j}^{DE} \coloneqq\frac{R_{j}^{DE}}{p_{j}^{DE}}=\sum_{n}\frac{|c_{n}|^{2}p_ {j}(E_{n})}{\sum_{k}|c_{k}|^{2}p_{j}(E_{k})}E_{n}\] \[=:\sum_{n}q_{n|j}E_{n}, \tag{15}\]
with \(R_{j}^{DE}\coloneqq\frac{1}{2}\operatorname{Tr}(\{A_{j},H\}\rho_{DE})\) and \(p_{j}(E_{n})\coloneqq\langle E_{n}|\,A_{j}\,|E_{n}\rangle\). We have also defined \(q_{n|j}\coloneqq\frac{|c_{n}|^{2}p_{j}(E_{n})}{\sum_{k}|c_{k}|^{2}p_{j}(E_{k})}\)--the conditional probability of observing \(E_{n}\), given the knowledge that, at equilibrium, our system inhabits the subspace \(\mathcal{H}_{j}\). Indeed:
\[\operatorname{Prob}\left(E_{n}|a_{j}\right)\coloneqq \frac{\operatorname{Prob}_{DE}(E_{n})\operatorname{Prob}(a_{j}|E_ {n})}{\sum_{k}\operatorname{Prob}_{DE}(E_{k})\operatorname{Prob}(a_{j}|E_{k})} \tag{16}\] \[= \frac{|c_{n}|^{2}p_{j}(E_{n})}{\sum_{k}|c_{k}|^{2}p_{j}(E_{k})}=q_ {n|j} \tag{17}\]
Eventually, we can see \(\varepsilon_{j}^{DE}\) is the expectation value of \(E_{n}\) taken using \(q_{n|j}\) as a probability distribution. We thus have that \(\varepsilon_{j}^{DE}\)_is the conditional expected value of the energy, conditioned on the fact that the system inhabits the subspace \(\mathcal{H}_{j}\)_. In this statistical sense, \(\varepsilon_{j}^{DE}\) is the amount of energy stored in the eigenspace \(\mathcal{H}_{j}\) at fixed eigenvalue \(a_{j}\). This physical interpretation relies on Bayes' theorem and, therefore, on the existence of a joint distribution \(p_{DE}(E_{n},a_{j})\). Since in general \([H,A]\neq 0\), a unique joint distribution does not exist. Or, more accurately, the joint distribution depends on the order in which the measurements are performed: \(p_{DE}(E_{n},a_{j})\neq p_{DE}(a_{j},E_{n})\). However, at equilibrium, \(\left\langle\left[H,A\right]\right\rangle_{eq}=0\), due to the first equilibrium equation eq.(2). So, while not generically correct, we expect our interpretation to hold, in a weak sense, at equilibrium. Moreover, while a unique joint distribution does not exist, once we've specified the order of the measurements, the joint distribution for the order "energy first, observable second" obviously exists. Thus, all manipulations required to interpret \(q_{n|j}\) as a conditional probability distribution are allowed and follow from the existence of \(p_{DE}(E_{n},a_{j})\). This is indeed confirmed by the fact that \(q_{n|j}\) satisfies all the properties required by a conditional probability distribution \(\operatorname{Prob}(E_{n}|a_{j})\). These are \(\sum_{n}\operatorname{Prob}(E_{n}|a_{j})=1\) and \(\sum_{j}\operatorname{Prob}_{DE}(a_{j})\mathbb{E}\left[E_{n}|a_{j}\right]= \langle H\rangle\). Here \(\mathbb{E}[E_{n}|a_{j}]=\varepsilon_{j}^{DE}\) is the conditional expectation of the energy, given that we know the system to be in a state at given eigenvalue \(a_{j}\). These are both satisfied by \(q_{n|j}\):
\[\sum_{n}q_{n|j}=1 \tag{18}\] \[\sum_{j}p_{j}^{DE}\varepsilon_{j}^{DE}=\sum_{j}R_{j}^{DE}=E \tag{19}\]
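A short numerical check of this interpretation (our own sketch) computes \(q_{n|j}\) and \(\varepsilon_{j}^{DE}\) directly from the eigendecomposition; properties (18) and (19) then hold by construction:

```python
import numpy as np

def eps_j_DE(H, A_j, psi0):
    """Conditional expected energy of eq.(15): eps_j^DE = sum_n q_{n|j} E_n."""
    E_vals, V = np.linalg.eigh(H)
    c2 = np.abs(V.conj().T @ psi0)**2                            # |c_n|^2
    pj_En = np.real(np.einsum('in,ij,jn->n', V.conj(), A_j, V))  # p_j(E_n) = <E_n|A_j|E_n>
    q = c2 * pj_En / np.sum(c2 * pj_En)   # conditional distribution q_{n|j}; sums to 1, eq.(18)
    return np.sum(q * E_vals)             # conditional expectation of the energy
```

Summing \(p_{j}^{DE}\varepsilon_{j}^{DE}\) over a complete set of projectors \(\{A_{j}\}\) then returns \(\langle H\rangle\), which is eq.(19).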
Note that for HUOs \(\varepsilon_{j}^{eq}=E\,\forall j\), which then turns \(p_{j}^{eq}\) into a flat distribution. This goes well with the physical interpretation of \(\varepsilon_{j}\) given above, and we can use it to understand why this happens, from a statistical mechanics perspective. Since \(\sum_{j}p_{j}\varepsilon_{j}=E\), having a fixed \(\varepsilon_{j}=E\) means that each subspace \(\mathcal{H}_{j}\) hosts the same amount of average energy \(\varepsilon_{j}=E\). The equilibrium distribution is flat because the various subspaces \(\mathcal{H}_{j}\) are energetically equivalent and the dynamics of the system make no distinction between them. If, however, the conditional energies are not the same, the equilibrium distribution will be biased, favoring the ones with smaller conditional energy.
This is our third and most important result. Indeed, understanding \(\varepsilon_{j}\) as a notion of energy for the observable under study paints a rather compelling picture for the emergence of statistical mechanics and thermodynamics.
To improve the predictivity of the theory, a necessary next step is to provide ways of computing \(\varepsilon_{j}^{eq}\) from first principles rather than extracting it from the dynamics, for instance by using a perturbative approach. In fact, if an observable is a HUO with respect to an unperturbed Hamiltonian whose eigenvalues and eigenvectors are known, it will be close to a HUO when the perturbation is small, and so one could use perturbation theory to compute its equilibrium probability distribution. Work is currently ongoing in this direction.
### Observable Thermodynamics
Finally, we would like to discuss the inherently thermodynamic character of our equilibrium solutions, eq.(7). Since we obtain \(S_{A}^{eq}=\log\mathcal{Z}_{A}+\beta_{A}E\), it is tempting to look at \(\mathcal{Z}_{A}\) as an observable-specific partition function. Indeed, eq.(9) has a distinctively thermodynamic flavor: it is the well-known relation among partition function, inverse temperature and internal energy. This suggests defining the _observable free energy_ \(F_{A}\) via \(\mathcal{Z}_{A}\coloneqq e^{-\lambda_{E}F_{A}}\). We then have
\[F_{A}=E-T_{A}S_{A}\,, \tag{20}\]
where we have defined the observable temperature \(T_{A}\coloneqq\lambda_{E}^{-1}\), and re-introduced the index \(A\) for clarity. This relation is essentially a form of energy conservation in the isolated system which, at equilibrium, turns into a relation between the observable entropy \(S_{A}\), the observable
free energy \(F_{A}\), and the average energy \(E\). While the dynamic emergence of this relation is highly non-trivial, observing equilibration to the predictions of Observable Thermalization guarantees it. Again, this strengthens the case for observable-specific thermodynamics and statistical mechanics. However, in order to have a complete picture we would need a kinetic, or operational, interpretation of the observable-specific thermodynamic quantities \(T_{A}\) and \(F_{A}\). This is currently being developed and will be reported in future work.
## VI Conclusions
Over 100 years after the foundations of statistical mechanics and thermodynamics were laid down, we still do not know the necessary and sufficient conditions that guarantee the dynamical emergence of thermal equilibrium.
The theory of Observable Thermalization provides a predictive framework, based on the Maximum Observable Entropy principle, to tackle this issue. In this paper, we have investigated this theory's assumptions and predictions in a one-dimensional spin-\(1/2\) model. After giving the most general form of the solution of the equilibrium equations, we numerically confirmed the dynamical emergence of the Maximum Observable Entropy principle and confirmed the validity of a general prediction of the equilibrium equations, namely eq.(8). This is found to be true in all cases considered. The prediction led us to the remarkable emergence of the quantity \(\varepsilon_{j}=R_{j}/p_{j}\). We have understood its physical meaning and its role in dynamical thermalization. \(\varepsilon_{j}\) is the average amount of energy contained in \(\mathcal{H}_{j}\), the subspace at fixed eigenvalue \(a_{j}\). Therefore, it plays the role of energy for the observable under study, \(A\). Once the \(\varepsilon_{j}\) are correctly evaluated, the equilibrium distribution can be found by solving the optimization problem over the Lagrange multiplier \(\lambda_{E}\), which establishes compatibility with the constraint of fixed average energy. Predictions arising from this method give remarkably good results for the equilibrium distribution, in all cases studied.
While we have made significant progress towards a fully predictive theory of thermalization, several avenues for future work remain open. We now mention a few of them. First, confirming the validity of the theoretical framework in other Hamiltonian models and for other observables would certainly bring additional support, and strengthen the case for the maximum observable entropy principle. Second, finding a generic way to compute the \(\varepsilon_{j}\) from first principles, rather than from dynamics, is the most important step to achieve full predictive power. Third, the operational and kinetic interpretations of observable-specific thermodynamic quantities such as \(T_{A}\) are certainly of interest. They would broaden our understanding of thermodynamics and expand the domain of applicability of statistical mechanics.
## VII Acknowledgments
L.S. thanks the "Angelo Della Riccia" Foundation for their continued support, and is grateful to Luis Pedro Garcia Pintos, Samuel Slezak and Zoe Holmes for useful discussions. F.A. acknowledges support from the Templeton World Charity Foundation under grant TWCF0336. F.A. would like to thank J.P. Crutchfield and C. Jarzynski for discussions about the dynamical emergence of thermal equilibrium. V.V. is grateful to the Moore Foundation and the Templeton Foundation for supporting his research.
|
2309.13502 | Globally Solving a Class of Bilevel Programs with Spatial Price
Equilibrium Constraints | Bilevel programs with spatial price equilibrium constraints are strategic
models that consider a price competition at the lower level. These models find
application in facility location-price models, optimal bidding in power
networks, and integration of renewable energy sources in distribution networks.
In this paper, for the case where the equilibrium at the lower level can be
formulated as an optimization problem, we introduce an enhanced single-level
formulation based on duality and show that its relaxation is stronger than the
single-level formulation obtained using KKT conditions. Compared to the
literature [1, 2], this new formulation (i) is computationally friendly to
global solution strategies using branch-and-bound, and (ii) can tackle
instances of larger size. Further, we develop a heuristic procedure to find
feasible solutions inside of the branch-and-bound tree that is effective on
instances of large size and produces solutions whose objective values are close
to the relaxation bound. We demonstrate the benefits of this formulation and
heuristic through an extensive numerical study on synthetic instances of
Equilibrium Facility Location [3] and on standard IEEE bus networks for
planning renewable generation capacity under uncertainty. | Akshit Goyal, Jean-Philippe P. Richard | 2023-09-23T23:53:25Z | http://arxiv.org/abs/2309.13502v3 | # Globally Solving a Class of Bilevel Programs with Spatial Price Equilibrium Constraints
###### Abstract
Bilevel programs with spatial price equilibrium constraints are strategic models that consider a price competition at the lower-level. These models find application in facility location-price models, optimal bidding in power networks, and integration of renewable energy sources in distribution networks. In this paper, for the case where the equilibrium at the lower level can be formulated as an optimization problem, we introduce an enhanced single-level formulation based on duality and show that its relaxation is stronger than the usual single-level formulation obtained using KKT conditions. Compared to the literature [1, 2], this new formulation is (_i_) computationally friendly to global solution strategies using branch-and-bound, and (_ii_) able to handle larger instance sizes. Further, we develop a heuristic procedure to find feasible solutions inside of the branch-and-bound tree that is effective on large-sized instances and produces solutions whose objective values are close to the relaxation bound. We demonstrate the benefits of this formulation and heuristic through an extensive numerical study on synthetic instances of Equilibrium Facility Location [3] and on standard IEEE bus networks for planning renewable generation capacity under uncertainty.
**Keywords:** bilevel optimization, spatial price equilibrium, facility location, renewable generation unit
## 1 Introduction
Bilevel programs incorporating spatial price equilibrium (SPE) constraints at the lower level are used to model competitive facility location on networks [1, 3, 4] and to analyze the
bidding decision of a generating firm in an electric power network [5]. At the core of these bilevel programs lies the concept of SPE, which involves computing the supply price, demand price, and commodity flow in a network while satisfying the equilibrium condition that the demand price equals the supply price plus the transportation cost if there is a non-zero flow between a pair of supply and demand markets. In the literature, the general SPE problem has been formulated as a variational inequality (VI) problem [6] and several VI-based iterative solution procedures, such as the Frank-Wolfe method and projection methods, have been proposed [7; 8]. Algorithms based on complementarity formulations of the SPE problem have also been developed [9; 10]. We refer to [11] for a detailed review of general network equilibrium models and of spatial price equilibria in networks.
The models we study in this paper utilize SPE as the lower-level problem within a bilevel program. This approach is necessary to accurately model how the upper-level decisions affect the equilibrium market price of a commodity, taking into account market competition. The resulting equilibrium price then directly impacts the objective function at the upper level. In [4], the authors formulate a bilevel model of this type to locate a firm's production facilities and to determine production decisions at the upper level in order to maximize the firm's profit, which depends on the prices arising from the resulting equilibrium at the lower level. In [1], the authors argue for the existence of solutions to this equilibrium facility location (EFL) model, whereas [1, 4] provide heuristic approaches which involve successive linearization of the nonlinear upper-level objective based on the sensitivity analysis results for VIs discussed in [12]. The authors of [2] extend the work of [4] by allowing for additional shipping decisions to be made at the upper level. All of these articles, however, focus on heuristic solution methods, which aim at finding feasible solutions of good quality, but do not provide guarantees on the quality of solutions obtained. Further, these heuristics are tested on small-sized problem instances.
In this paper, we present an approach to obtain globally optimal solutions to a class of bilevel programs with SPE constraints which encompasses the bilevel application models described above. Our approach relies on identifying a single-level reformulation of the original bilevel program that is well-solved by branch-and-bound and permits the global solution of larger EFL instances than those reported to date. Compared to the classical single-level reformulation of the original bilevel problem, this new reformulation allows substantial computational speed ups. To illustrate the advantages and generality of this formulation we consider, in addition to EFL, an application to a stochastic variant of a bilevel SPE model that optimizes the deployment of renewable generation units (RGUs) in power distribution networks under uncertainty [13]. For larger-sized EFL and RGU planning instances, which cannot be solved quickly to optimality with the new reformulation, we develop a heuristic procedure to aid the branch-and-bound search. The addition of this heuristic helps solve most instances within an optimality gap of less than 1% in a reasonable amount of time.
This paper makes the following contributions:
1. We derive a new stronger single-level reformulation for bilevel programs with SPE constraints at the lower-level, when the variational inequality of the lower-level can be cast as an optimization problem. This reformulation has the advantage of
having a provably bounded root node relaxation. This is in contrast with the usual single-level reformulation of the bilevel program, which often has an unbounded root relaxation (as we argue theoretically and show computationally in this paper).
2. We conduct extensive numerical experiments on randomly generated instances of EFL [4] of varying sizes. To the best of our knowledge, this is the first extensive computational study for this class of bilevel programs.
3. We introduce a stochastic version of bilevel programs with SPE constraints that provides a novel approach to optimally locating RGUs in power distribution networks. Further, we perform numerical experiments on standard IEEE bus networks that show the numerical potential of the approach.
4. We develop a generic rounding heuristic procedure for these problems. For larger-sized instances, this heuristic helps to significantly reduce the time required by branch-and-bound solvers to obtain high quality solutions.
The remainder of the paper is structured as follows. In Section 2, we introduce the problem, notations, and assumptions that are used throughout the paper. We derive a single-level formulation based on KKT conditions in Section 3.1 that we further reformulate using Lagrangian duality in Section 3.2. The theoretical properties of the relaxations of these two formulations are discussed in Section 3.3. Lastly, in Section 4, we conduct an extensive computational study on two applications. In Section 4.1, we consider EFL on networks. In Section 4.2, we study the problem of planning the location of RGUs under uncertainty.
## 2 Problem description and preliminaries
We study bilevel programs with equilibrium price constraints where the leader problem is
\[\max_{\mathbf{x},\mathbf{z}} \mathbf{\pi_{0}^{\intercal}}\mathbf{x}-\mathbf{c_{x}^{\intercal}}\mathbf{x}-\mathbf{c_{z}^{\intercal}}\mathbf{z}\] (1a) s.t. \[A_{x}\mathbf{x}+A_{z}\mathbf{z}\leq\mathbf{b} \tag{1b}\] \[0\leq\mathbf{x}\leq\mathbf{\overline{x}}\] (1c) \[\mathbf{x}\in\mathbb{Z}^{d_{\mathbf{x}}^{I}}\times\mathbb{R}^{d_{\mathbf{x}}^{C}},\ \ \mathbf{z}\in\{0,1\}^{d_{\mathbf{z}}}\] (1d) \[\mathbf{\pi}=\begin{pmatrix}\mathbf{\pi_{0}}\\ \mathbf{\pi_{1}}\end{pmatrix}:=\mathbf{\psi}(\mathbf{x}), \tag{1e}\]
in which \(\mathbf{z}\) and \(\mathbf{x}\) are the binary and mixed-integer decisions of the leader, respectively, \(\mathbf{\overline{x}}\) is a parameter assumed to be finite, \(\mathbf{\pi}\) is the equilibrium price vector defined as a function \(\mathbf{\psi}(\cdot)\) of leader's mixed-integer decision vector \(\mathbf{x}\). The constraints of the model are linear and the objective involves a bilinear product term between price variables \(\mathbf{\pi_{0}}\) and mixed-integer decision variables \(\mathbf{x}\). The form of function \(\mathbf{\psi}(\mathbf{x})\) is not given explicitly. Instead, \(\mathbf{\pi}\) is the dual solution corresponding to the equality constraints of a variational inequality (whose feasible region depends on \(\mathbf{x}\)) which we refer to as the _follower problem_. More specifically, for a given \(\mathbf{x}\), the follower problem is to determine
a vector \(\mathbf{y}^{*}\) such that
\[\text{VI}(\Phi,\mathcal{Y}(\mathbf{x})):\ \ \ \langle\Phi(\mathbf{y}^{*}),\mathbf{y}^{ \prime}-\mathbf{y}^{*}\rangle\geq 0\ \ \forall\mathbf{y}^{\prime}\in\text{proj}_{\mathbf{y}}\mathcal{Y}(\mathbf{x}), \tag{2}\]
where \(\Phi(\mathbf{y})=\left(\Phi_{i}(\mathbf{y}),\ i\in[d_{\mathbf{y}}]\right)\in\mathbb{R}^{d_ {\mathbf{y}}}\) is the cost vector of the follower and where we use the notation \([d_{\mathbf{y}}]\) for the set \(\{1,\ldots,d_{\mathbf{y}}\}\). In (2), \(\mathcal{Y}(\mathbf{x})\) is the set of feasible follower solutions, which we assume takes the form
\[\mathcal{Y}(\mathbf{x})=\left\{(\mathbf{y},\mathbf{w})\in\mathbb{R}^{d_{\mathbf{y}}+d_{\mathbf{w}}}\ \left|\ \begin{array}{ll}G_{0}\mathbf{y}+H_{0}\mathbf{w}-\mathbf{x}=\mathbf{h_{0}},&[\mathbf{\pi_{0}}]\\ G_{1}\mathbf{y}+H_{1}\mathbf{w}=\mathbf{h_{1}},&[\mathbf{\pi_{1}}]\\ 0\leq\mathbf{y}\leq\mathbf{\overline{y}},\ \ 0\leq\mathbf{w}\leq\mathbf{\overline{w}}&\end{array}\right.\right\},\]
where the bracketed vectors indicate the dual multipliers associated with the corresponding equality constraints.
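For intuition, \(\text{VI}(\Phi,\mathcal{Y}(\mathbf{x}))\) could in principle be attacked with one of the iterative projection methods mentioned in the introduction; a generic sketch (ours, not the method of this paper) is shown below, where `Phi` returns the cost vector for the stacked variable \(\mathbf{v}=(\mathbf{y},\mathbf{w})\) (with zeros for the \(\mathbf{w}\)-block) and `project` computes the Euclidean projection onto \(\mathcal{Y}(\mathbf{x})\), itself a convex QP that any QP solver can handle:

```python
import numpy as np

def projection_method(Phi, project, v0, tau=0.1, tol=1e-8, max_iter=10_000):
    """Fixed-point iteration v <- Proj_{Y(x)}(v - tau * Phi(v)) for the VI.

    Convergence is guaranteed for strongly monotone, Lipschitz-continuous Phi
    and sufficiently small tau; here it serves only as an illustrative sketch.
    """
    v = np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        v_next = project(v - tau * Phi(v))
        if np.linalg.norm(v_next - v) <= tol:
            return v_next
        v = v_next
    return v
```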
**Remark 2**.: _Note that all equality and inequality constraints in \(\mathcal{Y}(\mathbf{x})\) are linear. Further, Assumption 4 ensures feasibility of the follower problem. In conjunction with Assumption 3, this implies that the refined Slater's condition holds [15, pg.227]. Since the follower problem is convex (attaining a finite primal optimal value by Remark 1), the refined Slater's condition further implies that (\(i\)) strong duality holds, and (\(ii\)) the dual optimal value is attained, i.e., there exists a dual optimal solution; see [15, pg.227]._
## 3 Single-level reformulations
In this section, we describe two formulations for the problem. The first one, described in Section 3.1, is obtained by reformulating the equilibrium problem using the KKT conditions of its equivalent optimization formulation. The second one, described in Section 3.2, which is new to this work, is obtained by using strong duality of the follower problem to rewrite the objective function of the first reformulation. Finally, in Section 3.3, we compare the strength of the relaxations of these two formulations, and in Sections 3.3.1 and 3.3.2 we develop additional insights into the special case where the cost vector is affine.
### KKT-based reformulation
To obtain the first formulation, we replace (1e) in (1) with the KKT conditions of Problem (4). We obtain the following MINLP formulation, which has complementarity constraints and a bilinear objective function,
\[\vartheta_{\text{KKT}}=\max_{\begin{subarray}{c}\mathbf{z},\mathbf{x}, \mathbf{y},\mathbf{w}\\ \mathbf{\pi},\mathbf{\mu},\mathbf{\theta}\end{subarray}} \ \ \mathbf{\pi}_{\mathbf{0}}^{\intercal}\mathbf{x}-\mathbf{c}_{\mathbf{x}}^{ \intercal}\mathbf{x}-\mathbf{c}_{\mathbf{z}}^{\intercal}\mathbf{z}\] (6a) s.t. \[A_{x}\mathbf{x}+A_{z}\mathbf{z}\leq\mathbf{b} \tag{6b}\] \[0\leq\mathbf{x}\leq\overline{\mathbf{x}}\] (6c) \[\mathbf{x}\in\mathbb{Z}^{d^{I}}\times\mathbb{R}^{d^{C}},\ \ z\in\{0,1\}^{d_{\text{z}}}\] (6d) \[G\mathbf{y}+H\mathbf{w}-\begin{pmatrix}\mathbf{x}\\ 0\end{pmatrix}=\mathbf{h}\] (6e) \[0\leq\mathbf{y}\leq\overline{\mathbf{y}},\ \ 0\leq\mathbf{w}\leq\overline{\mathbf{w}}\] (6f) \[\mathbf{\mu}\geq 0,\ \mathbf{\theta}\geq 0\] (6g) \[\Phi(\mathbf{y})+G^{\intercal}\mathbf{\pi}+\mathbf{\theta}^{\mathbf{y}}-\mathbf{\mu}^ {\mathbf{y}}=0\] (6h) \[H^{\intercal}\mathbf{\pi}+\mathbf{\theta}^{\mathbf{w}}-\mathbf{\mu}^{\mathbf{w}}=0\] (6i) \[\mathbf{y}^{\intercal}\mathbf{\mu}^{\mathbf{y}}=0,\ \ (\overline{\mathbf{y}}-\mathbf{y})^{ \intercal}\mathbf{\theta}^{\mathbf{y}}=0\] (6j) \[\mathbf{w}^{\intercal}\mathbf{\mu}^{\mathbf{w}}=0,\ \ (\overline{\mathbf{w}}-\mathbf{w})^{ \intercal}\mathbf{\theta}^{\mathbf{w}}=0, \tag{6k}\]
where \(G=\begin{pmatrix}G_{0}\\ G_{1}\end{pmatrix}\), \(\ H=\begin{pmatrix}H_{0}\\ H_{1}\end{pmatrix}\), \(\ \mathbf{h}=\begin{pmatrix}\mathbf{h_{0}}\\ \mathbf{h_{1}}\end{pmatrix}\), and \(\mathbf{\pi}=\begin{pmatrix}\mathbf{\pi_{0}}\\ \mathbf{\pi_{1}}\end{pmatrix}\). In this formulation, constraints (6e)-(6f) are the primal feasibility conditions of the KKT system of Problem (4), (6g) are its dual feasibility conditions, (6h)-(6i) are its stationarity conditions, and (6j)-(6k) are its complementarity slackness conditions. This reformulation is exact because the KKT conditions are necessary and sufficient for Problem (4), which is a convex optimization problem satisfying Abadie's constraint qualification.
The complementarity constraints (6j)-(6k) can be reformulated as big-M constraints. This approach, however, can lead to sub-optimal or erroneous solutions when the choice of value for \(M\) is not appropriate, as discussed in [16]. Instead, when solving this model with commercial software, we use the SOS1 reformulation of (6j)-(6k):
\[\begin{split}\{\mathbf{y}_{i},\mathbf{\mu_{i}^{y}}\}\ \ \text{is}\ \ \text{SOS1},\ \ \{\mathbf{\overline{y}}_{i}-\mathbf{y}_{i},\mathbf{\theta_{i}^{y}}\}\ \ \text{is}\ \ \text{SOS1}\ \ \ \forall i\in[d_{\mathbf{y}}]\\ \{\mathbf{w}_{i},\mathbf{\mu_{i}^{w}}\}\ \ \text{is}\ \ \text{SOS1},\ \ \{\mathbf{\overline{w}}_{i}-\mathbf{w}_{i},\mathbf{\theta_{i}^{w}}\}\ \ \text{is}\ \ \text{SOS1}\ \ \ \forall i\in[d_{\mathbf{w}}]\end{split} \tag{7}\]
as is recommended in [16].
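To make the SOS1 encoding concrete, the following is a minimal gurobipy sketch of (7); the helper name and the auxiliary slack variables are our own illustration rather than part of the reference implementation.

```python
import gurobipy as gp
from gurobipy import GRB

def add_sos1_complementarity(model, y, mu_y, theta_y, y_bar):
    """Encode y_i * mu_i = 0 and (ybar_i - y_i) * theta_i = 0 as SOS1 sets."""
    for i in range(len(y)):
        # at most one of {y_i, mu_i} can be nonzero
        model.addSOS(GRB.SOS_TYPE1, [y[i], mu_y[i]])
        # SOS1 sets must contain variables, so introduce s_i = ybar_i - y_i
        s = model.addVar(lb=0.0, name=f"slack_y[{i}]")
        model.addConstr(s == y_bar[i] - y[i])
        model.addSOS(GRB.SOS_TYPE1, [s, theta_y[i]])
```

The sets for the \(\mathbf{w}\) variables are added analogously.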
A natural relaxation of (6) is obtained after relaxing complementarity constraints (6j)-(6k) and integrality constraints (6d). It is described as
\[\vartheta^{\text{relax}}_{\text{KKT}}=\max_{\begin{subarray}{c}\mathbf{z},\mathbf{x},\mathbf{y},\mathbf{w}\\ \mathbf{\pi},\mathbf{\mu},\mathbf{\theta}\end{subarray}}\ \mathbf{\pi_{0}^{\intercal}}\mathbf{x}-\mathbf{c_{x}^{\intercal}}\mathbf{x}-\mathbf{c_{z}^{\intercal}}\mathbf{z}\] (8a) \[\text{s.t.}\ \ \text{(6b)--(6c)},\ \ \mathbf{x}\in\mathbb{R}^{d^{I}}\times\mathbb{R}^{d^{C}},\ \ \mathbf{z}\in[0,1]^{d_{\mathbf{z}}},\ \ \text{(6e)--(6i)}.\] (8b)
### Duality-based reformulation

To obtain the second formulation, we rewrite the revenue term \(\mathbf{\pi_{0}^{\intercal}}\mathbf{x}\) in (6a) using the Lagrangian dual of Problem (4). The Lagrangian of Problem (4) separates into a component \(\mathcal{L}^{1}_{\mathbf{x}}(\mathbf{w};\mathbf{\pi},\mathbf{\mu}^{\mathbf{w}},\mathbf{\theta}^{\mathbf{w}})=(H^{\intercal}\mathbf{\pi}+\mathbf{\theta}^{\mathbf{w}}-\mathbf{\mu}^{\mathbf{w}})^{\intercal}\mathbf{w}\) collecting the \(\mathbf{w}\)-terms, a component \(\mathcal{L}^{2}_{\mathbf{x}}(\mathbf{y};\mathbf{\pi},\mathbf{\mu}^{\mathbf{y}},\mathbf{\theta}^{\mathbf{y}})\) collecting the follower cost and the remaining \(\mathbf{y}\)-terms, and terms that do not involve \((\mathbf{y},\mathbf{w})\). First, for fixed \(\mathbf{\mu}\geq 0\), \(\mathbf{\theta}\geq 0\), and \(\mathbf{\pi}\), observe that
\[\min_{\mathbf{w}}\ \mathcal{L}^{1}_{\mathbf{x}}(\mathbf{w};\mathbf{\pi},\mathbf{\mu}^{\mathbf{w}},\mathbf{ \theta}^{\mathbf{w}})=\begin{cases}0&\text{if }H^{\intercal}\mathbf{\pi}+\mathbf{\theta}^{\mathbf{w}}-\mathbf{\mu}^{\mathbf{w}}=0\\ -\infty&\text{o.w.}\end{cases}\]
Second, for fixed \(\mathbf{\mu}\geq 0\), \(\mathbf{\theta}\geq 0\), and \(\mathbf{\pi}\), consider the minimization problem
\[\min_{\mathbf{y}}\ \mathcal{L}^{2}_{\mathbf{x}}(\mathbf{y};\mathbf{\pi},\mathbf{\mu}^{\mathbf{y}}, \mathbf{\theta}^{\mathbf{y}}).\]
Assumption 2 implies that \(\mathcal{L}^{2}_{\mathbf{x}}(\cdot;\mathbf{\pi},\mathbf{\mu}^{\mathbf{y}},\mathbf{\theta}^{\mathbf{y}})\) is convex in \(\mathbf{y}\) for given \(\mathbf{\pi},\mathbf{\mu},\mathbf{\theta}\), so the following first-order condition is necessary and sufficient for \(\tilde{\mathbf{y}}^{*}\) to be a minimizer:
\[\nabla_{\mathbf{y}}\mathcal{L}^{2}_{\mathbf{x}}(\tilde{\mathbf{y}}^{*};\mathbf{\pi},\mathbf{\mu} ^{\mathbf{y}},\mathbf{\theta}^{\mathbf{y}})=\Phi(\tilde{\mathbf{y}}^{*})+G^{\intercal}\mathbf{\pi }+\mathbf{\theta}^{\mathbf{y}}-\mathbf{\mu}^{\mathbf{y}}=0.\]
In fact, Assumption 2 implies strict convexity of \(\mathcal{L}^{2}_{\mathbf{x}}(\mathbf{y};\mathbf{\pi},\mathbf{\mu}^{\mathbf{y}},\mathbf{\theta}^{\mathbf{y}})\) and invertibility of \(\Phi(\cdot)\). If \((\mathbf{\pi},\mathbf{\mu},\mathbf{\theta})\) is such that \(\mathbf{\mu}^{\mathbf{y}}-\mathbf{\theta}^{\mathbf{y}}-G^{\intercal}\mathbf{\pi}\in\text{dom}\left(\Phi^{-1}\right)\), the minimizer \(\tilde{\mathbf{y}}^{*}\) exists and is uniquely given by \(\tilde{\mathbf{y}}^{*}=\Phi^{-1}(\mathbf{\mu}^{\mathbf{y}}-\mathbf{\theta}^{\mathbf{y}}-G^{\intercal}\mathbf{\pi})\). By Remark 2, the dual optimal value is attained, so there always exists \((\mathbf{\pi},\mathbf{\mu},\mathbf{\theta})\) such that \(\mathbf{\mu}^{\mathbf{y}}-\mathbf{\theta}^{\mathbf{y}}-G^{\intercal}\mathbf{\pi}\in\text{dom}\left(\Phi^{-1}\right)\). As a result, substituting \(\tilde{\mathbf{y}}^{*}=\Phi^{-1}(\mathbf{\mu}^{\mathbf{y}}-\mathbf{\theta}^{\mathbf{y}}-G^{\intercal}\mathbf{\pi})\) gives the well-defined dual problem (9).
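For a concrete instance of this inversion, consider the affine case \(\Phi(\mathbf{y})=\mathcal{Q}\mathbf{y}+q\) with \(\mathcal{Q}\succ 0\) studied in Section 3.3.1: there \(\Phi^{-1}(u)=\mathcal{Q}^{-1}(u-q)\) has full domain, so \(\tilde{\mathbf{y}}^{*}=\mathcal{Q}^{-1}(\mathbf{\mu}^{\mathbf{y}}-\mathbf{\theta}^{\mathbf{y}}-G^{\intercal}\mathbf{\pi}-q)\) exists for every choice of multipliers.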
**Theorem 1**.: _Model_
\[\vartheta_{\text{dual}}=\max_{\begin{subarray}{c}\mathbf{z},\mathbf{x},\mathbf{y},\mathbf{w}\\ \mathbf{\pi},\mathbf{\mu},\mathbf{\theta}\end{subarray}}\ -\left\langle\Phi(\mathbf{y}),\ \mathbf{y}\right\rangle-\overline{\mathbf{w}}^{\intercal}\mathbf{\theta}^{\mathbf{w}}-\overline{\mathbf{y}}^{\intercal}\mathbf{\theta}^{\mathbf{y}}-\mathbf{h}^{\intercal}\mathbf{\pi}-\mathbf{c}_{\mathbf{x}}^{\intercal}\mathbf{x}-\mathbf{c}_{\mathbf{z}}^{\intercal}\mathbf{z}\] (11a) \[\text{s.t.}\ \ \text{(6b)--(6k)}\] (11b)

_satisfies \(\vartheta_{\text{dual}}=\vartheta_{\text{KKT}}\), i.e., (11) is an exact single-level reformulation._
**Remark 4**.: _Formulations (6) and (11) share the same variables and constraints. They differ only in their objective functions._
### Comparing the strength of the relaxations

Consider the following relaxation of (11) obtained after relaxing its complementarity constraints (6j)-(6k) and integrality constraints (6d):
\[\vartheta^{\mathrm{relax}}_{\mathrm{dual}}=\max_{\begin{subarray}{c}\mathbf{z},\mathbf{x},\mathbf{y},\mathbf{w}\\ \mathbf{\pi},\mathbf{\mu},\mathbf{\theta}\end{subarray}}\ -\left\langle\Phi(\mathbf{y}),\ \mathbf{y}\right\rangle-\overline{\mathbf{w}}^{\intercal}\mathbf{\theta}^{\mathbf{w}}-\overline{\mathbf{y}}^{\intercal}\mathbf{\theta}^{\mathbf{y}}-\mathbf{h}^{\intercal}\mathbf{\pi}-\mathbf{c}_{\mathbf{x}}^{\intercal}\mathbf{x}-\mathbf{c}_{\mathbf{z}}^{\intercal}\mathbf{z}\] (14a) \[\mathrm{s.t.}\ \ \text{(6b)--(6c)},\ \ \mathbf{x}\in\mathbb{R}^{d^{I}}\times\mathbb{R}^{d^{C}},\ \ \mathbf{z}\in[0,1]^{d_{\mathbf{z}}},\ \ \text{(6e)--(6i)}.\] (14b)
#### 3.3.1 Special case: affine cost vector

Suppose now that the cost vector is affine, \(\Phi(\mathbf{y})=\mathcal{Q}\mathbf{y}+q\) with \(\mathcal{Q}\succ 0\), and partition the follower indices into \(\mathrm{y}\cup\mathrm{y^{\prime}}=[d_{\mathbf{y}}]\) and \(\mathrm{w}\cup\mathrm{w^{\prime}}=[d_{\mathbf{w}}]\), where \(\mathrm{y}\) and \(\mathrm{w}\) collect the components with infinite upper bounds and \(\mathrm{y^{\prime}}\) and \(\mathrm{w^{\prime}}\) those with finite bounds. The upper-bound multipliers of the unbounded components vanish, i.e., \(\mathbf{\theta}_{\mathrm{y}}^{\mathbf{y}}=(\mathbf{\theta}_{i}^{\mathbf{y}},\ i\in\mathrm{y})=0\) and \(\mathbf{\theta}_{\mathrm{w}}^{\mathbf{w}}=(\mathbf{\theta}_{i}^{\mathbf{w}},\ i\in\mathrm{w})=0\). Hence, the constraint set (6e)-(6i) in relaxations (8) and (14) becomes
\[G_{\mathrm{y}}\mathbf{y}_{\mathrm{y}}+G_{\mathrm{y^{\prime}}}\mathbf{y}_{ \mathrm{y^{\prime}}}+H_{\mathrm{w}}\mathbf{w}_{\mathrm{w}}+H_{\mathrm{w^{\prime}}} \mathbf{w}_{\mathrm{w^{\prime}}}=\mathbf{h}+\begin{pmatrix}\mathbf{x}\\ 0\end{pmatrix} \tag{16a}\] \[\mathcal{Q}_{\mathrm{y}}\mathbf{y}_{\mathrm{y}}+\mathcal{Q}_{\mathrm{ y^{\prime}}}\mathbf{y}_{\mathrm{y^{\prime}}}+G^{\intercal}\mathbf{\pi}+I_{\mathrm{y^{ \prime}}}\mathbf{\theta}_{\mathrm{y^{\prime}}}^{\mathbf{y}}-\mathbf{\mu}^{\mathbf{y}}=-q\] (16b) \[H^{\intercal}\mathbf{\pi}+I_{\mathrm{w^{\prime}}}\mathbf{\theta}_{ \mathrm{w^{\prime}}}^{\mathbf{w}}-\mathbf{\mu}^{\mathbf{w}}=0\] (16c) \[0\leq\mathbf{y}\leq\overline{\mathbf{y}},\ \ 0\leq\mathbf{w}\leq \overline{\mathbf{w}}\] (16d) \[\mathbf{\mu}\geq 0,\ \ \mathbf{\theta}\geq 0. \tag{16e}\]
The relaxations (8) and (14) can thus be written as
\[\vartheta_{\mathrm{KKT}}^{\mathrm{relax}}=\max_{\begin{subarray}{c}\mathbf{z},\mathbf{x},\mathbf{y},\mathbf{w}\\ \mathbf{\pi},\mathbf{\mu},\mathbf{\theta}\end{subarray}}\ \ \mathbf{\pi}_{\mathbf{0}}^{\intercal}\mathbf{x}-\mathbf{c}_{\mathbf{x}}^{\intercal}\mathbf{x}-\mathbf{c}_{\mathbf{z}}^{\intercal}\mathbf{z}\] (17a) \[\text{s.t.}\ \ \text{(6b)--(6c)},\ \ \mathbf{x}\in\mathbb{R}^{d^{I}}\times\mathbb{R}^{d^{C}},\ \ \mathbf{z}\in[0,1]^{d_{\mathbf{z}}},\ \ \text{(16)}\] (17b)

and

\[\vartheta_{\mathrm{dual}}^{\mathrm{relax}}=\max_{\begin{subarray}{c}\mathbf{z},\mathbf{x},\mathbf{y},\mathbf{w}\\ \mathbf{\pi},\mathbf{\mu},\mathbf{\theta}\end{subarray}}\ -\mathbf{y}^{\intercal}\mathcal{Q}\mathbf{y}-q^{\intercal}\mathbf{y}-\overline{\mathbf{w}}^{\intercal}\mathbf{\theta}^{\mathbf{w}}-\overline{\mathbf{y}}^{\intercal}\mathbf{\theta}^{\mathbf{y}}-\mathbf{h}^{\intercal}\mathbf{\pi}-\mathbf{c}_{\mathbf{x}}^{\intercal}\mathbf{x}-\mathbf{c}_{\mathbf{z}}^{\intercal}\mathbf{z}\] (18a) \[\text{s.t.}\ \ \text{(6b)--(6c)},\ \ \mathbf{x}\in\mathbb{R}^{d^{I}}\times\mathbb{R}^{d^{C}},\ \ \mathbf{z}\in[0,1]^{d_{\mathbf{z}}},\ \ \text{(16)}.\] (18b)

**Lemma 3**.: _Suppose there exist \((\widetilde{\mathbf{y}},\widetilde{\mathbf{w}},\widetilde{\mathbf{\pi}},\widetilde{\mathbf{\mu}},\widetilde{\mathbf{\theta}})\geq 0\) and a leader decision \(\dot{\mathbf{x}}\) satisfying the system (19) consisting of (16a) and the homogeneous counterparts of the stationarity conditions (16b)-(16c), with \(\widetilde{\mathbf{\pi}}_{0}^{\intercal}\dot{\mathbf{x}}>0\). Then \(\vartheta_{\mathrm{KKT}}^{\mathrm{relax}}=\infty\), i.e., relaxation (17) has unbounded objective._
Proof.: Fix a feasible solution of (17) with objective value \(\vartheta_{\mathrm{KKT}}^{0}\) whose \(\mathbf{x}\)-component is \(\dot{\mathbf{x}}\), and add \(\rho\geq 0\) times the ray \((\widetilde{\mathbf{\pi}},\widetilde{\mathbf{\mu}},\widetilde{\mathbf{\theta}})\) to its multipliers; the objective value of the solution associated with \(\rho\) can then be verified to be \(\vartheta_{\mathrm{KKT}}^{0}+\rho\widetilde{\pi}_{0}^{\intercal}\dot{\mathbf{x}}\). Since \(\widetilde{\pi}_{0}^{\intercal}\dot{\mathbf{x}}>0\), the optimal value grows without bound as \(\rho\to\infty\), _i.e._, \(\vartheta_{\mathrm{KKT}}^{\mathrm{relax}}=\infty\).
**Remark 5**.: _When its conditions are satisfied, Lemma 3 suggests that branch-and-bound might struggle to solve Formulation (6), as the problem relaxation at the root node will be unbounded (barring success from generic cuts or pre-processing routines at bounding the objective). This has the potential to significantly slow down the search, as branching decisions will be harder to make and many nodes will need to be explored before a reasonable upper bound is obtained._
**Lemma 4**.: _Assume \(\mathbf{h}=\mathbf{0}\). Then \(\vartheta_{\mathrm{dual}}^{\mathrm{relax}}<\infty\), i.e., relaxation (18) has bounded objective._
Proof.: The following terms in objective function (18a) are bounded from above over the constraint set (18b):
\[-\overline{\mathbf{w}}^{\intercal}\mathbf{\theta}^{\mathbf{w}}\leq 0,\quad-\overline{\mathbf{y}} ^{\intercal}\mathbf{\theta}^{\mathbf{y}}\leq 0,\quad-\mathbf{c}_{\mathbf{x}}^{\intercal}\mathbf{x} \leq-\min_{0\leq\mathbf{x}\leq\overline{\mathbf{x}}}\mathbf{c}_{\mathbf{x}}^{\intercal}\mathbf{x},\quad-\mathbf{c}_{\mathbf{z}}^{\intercal}\mathbf{z}\leq-\min_{\mathbf{z}\in[0,1]^{d_{\mathbf{z} }}}\mathbf{c}_{\mathbf{z}}^{\intercal}\mathbf{z}.\]
For \(\mathbf{h}=\mathbf{0}\), we have \(\mathbf{h}^{\intercal}\mathbf{\pi}=0\). Define the function \(\widetilde{\phi}_{\mathrm{p}}(\mathbf{y}):=\mathbf{y}^{\intercal}\mathcal{Q}\mathbf{y}+q^{\intercal}\mathbf{y}\). Since \(\mathcal{Q}\succ 0\), \(\widetilde{\phi}_{\mathrm{p}}\) is coercive, which implies that it attains a global minimum on \(\mathbb{R}^{d_{\mathbf{y}}}\), _i.e._, \(\widetilde{\phi}_{\mathrm{p}}(\mathbf{y})\geq\min_{\mathbf{y}}\widetilde{\phi}_{\mathrm{p}}(\mathbf{y})>-\infty\). Thus, objective function (18a) is bounded above over the constraints (18b) as
\[-\widetilde{\phi}_{\mathrm{p}}(\mathbf{y})-\mathbf{h}^{\intercal}\mathbf{\pi} -\mathbf{c}_{\mathbf{x}}^{\intercal}\mathbf{x}-\mathbf{c}_{\mathbf{z}}^{\intercal}\mathbf{z}\leq -\min_{\mathbf{y}\in\mathbb{R}^{d_{\mathbf{y}}}}\widetilde{\phi}_{\mathrm{ p}}(\mathbf{y})-\min_{0\leq\mathbf{x}\leq\overline{\mathbf{x}}}\mathbf{c}_{\mathbf{x}}^{ \intercal}\mathbf{x}-\min_{\mathbf{z}\in\{0,1\}^{d_{\mathbf{x}}}}\mathbf{c}_{\mathbf{z}}^{ \intercal}\mathbf{z}\] \[<\infty.\]
**Remark 6**.: _When \(\mathbf{h}=\mathbf{0}\), Lemma 4 establishes that even if there exists \((\widetilde{\mathbf{y}},\widetilde{\mathbf{w}},\widetilde{\mathbf{\pi}},\widetilde{\mathbf{\mu}}^{\mathbf{y}},\widetilde{\mathbf{\mu}}^{\mathbf{w}},\widetilde{\mathbf{\theta}}^{\mathbf{y}},\widetilde{\mathbf{\theta}}^{\mathbf{w}})\geq 0\) satisfying (19) with \(\widetilde{\mathbf{\pi}}_{0}^{\intercal}\dot{\mathbf{x}}>0\), relaxation (18) remains bounded, which is a significant advantage over relaxation (17)._
**Remark 7**.: _Lemma 4 generalizes to non-affine vector functions \(\Phi(\mathbf{y})\) (satisfying Assumptions 1-3) for which \(\widetilde{\phi}_{p}(\mathbf{y}):=\langle\Phi(\mathbf{y}),\ \mathbf{y}\rangle\) is coercive. An example is \(\Phi(\mathbf{y})=\big{(}1/\sqrt{1+\mathbf{y}_{i}^{2}}+2\mathbf{y}_{i},\ i\in[d_{\mathbf{y}}]\big{)}\), for which \(\widetilde{\phi}_{p}(\mathbf{y})=\sum_{i=1}^{d_{\mathbf{y}}}(\mathbf{y}_{i}/\sqrt{1+\mathbf{y}_{i}^{2}}+2\mathbf{y}_{i}^{2})\) is coercive._
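To verify that this example behaves as required, note that each component map \(t\mapsto 1/\sqrt{1+t^{2}}+2t\) has derivative \(2-t/(1+t^{2})^{3/2}\geq 2-2/(3\sqrt{3})>1.6>0\), so each component is strictly increasing; coercivity of \(\widetilde{\phi}_{p}\) follows because each bounded term \(\mathbf{y}_{i}/\sqrt{1+\mathbf{y}_{i}^{2}}\in(-1,1)\) is dominated by the quadratic term \(2\mathbf{y}_{i}^{2}\).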
#### 3.3.2 Example
We next illustrate the difference in the strength of the two formulations on an example. This example is a simple instance of the EFL problem that we will discuss in detail in Section 4.1. The instance we consider has variables \(\mathbf{x}\in\mathbb{R},\ \mathbf{z}\in\{0,1\},\ \mathbf{y}\in\mathbb{R}^{4}\) and has no variables \(\mathbf{w}\). Bounds on the variables are chosen so that \(\overline{\mathbf{x}}=7.5\) and \(\overline{\mathbf{y}}=\infty\). The constraints of the follower set are defined by \(G_{0}=\big{(}1\ 0\ -1\ 0\big{)}\), \(\mathbf{h_{0}}=0,\ G_{1}=\big{(}-1\ 1\ 0\ -1\big{)}\), and \(\mathbf{h_{1}}=0\), whereas \(H_{0}\) and \(H_{1}\) are not defined since there are
no \(\mathbf{w}\) variables. The objective functions are defined through \(\Phi(\mathbf{y})=\begin{pmatrix}\mathbf{y}_{1}+10\\ \mathbf{y}_{2}-20\\ \mathbf{y}_{3}+10\\ \mathbf{y}_{4}+20\end{pmatrix}\) and the cost parameters are chosen so that \(\mathbf{c_{x}}=0.5\) and \(\mathbf{c_{z}}=0.5\). Hence, the formulations of the problem described in Sections 3.1 and 3.2 have the constraints
\[0\leq\mathbf{x}\leq 7.5,\ \ \mathbf{z}\in\{0,1\},\ \ \mathbf{x}-10\mathbf{z}\leq 0 \tag{20a}\] \[\mathbf{y}_{1}-\mathbf{y}_{3}-\mathbf{x}=0,\ \ -\mathbf{y}_{1}+\mathbf{y}_{2}-\mathbf{y}_{4}=0,\] \[\mathbf{y}_{1}\geq 0,\ \ \mathbf{y}_{2}\geq 0,\ \ \mathbf{y}_{3}\geq 0,\ \ \mathbf{y}_{4}\geq 0\] (20b) \[\mathbf{\mu}_{1}\geq 0,\ \ \mathbf{\mu}_{2}\geq 0,\ \ \mathbf{\mu}_{3}\geq 0,\ \ \mathbf{\mu}_{4}\geq 0\] (20c) \[\mathbf{y}_{1}+10+\mathbf{\pi}_{1}-\mathbf{\pi}_{2}-\mathbf{\mu}_{1}=0,\] \[\mathbf{y}_{2}-20+\mathbf{\pi}_{2}-\mathbf{\mu}_{2}=0,\] \[\mathbf{y}_{3}+10-\mathbf{\pi}_{1}-\mathbf{\mu}_{3}=0,\] \[\mathbf{y}_{4}+20-\mathbf{\pi}_{2}-\mathbf{\mu}_{4}=0\] (20d) \[\{\mathbf{y}_{1},\mathbf{\mu}_{1}\}\ \ \text{is SOS1},\ \ \{\mathbf{y}_{2},\mathbf{\mu}_{2}\}\ \ \text{is SOS1},\] \[\{\mathbf{y}_{3},\mathbf{\mu}_{3}\}\ \ \text{is SOS1},\ \ \{\mathbf{y}_{4},\mathbf{\mu}_{4}\}\ \ \text{is SOS1}. \tag{20e}\]
The following are the single-level reformulations (6) and (11) for this instance:
\[\vartheta_{\text{KKT}}=\max\ \ (\mathbf{\pi}_{1}-0.5)\mathbf{x}-0.5\mathbf{z}\quad\text{s.t.}\ \ \text{(20)},\]
\[\vartheta_{\text{dual}}=\max\ \ -\mathbf{y}_{1}^{2}-\mathbf{y}_{2}^{2}-\mathbf{y}_{3}^{2}-\mathbf{y}_{4}^{2}-10\mathbf{y}_{1}+20\mathbf{y}_{2}-10\mathbf{y}_{3}-20\mathbf{y}_{4}-0.5\mathbf{x}-0.5\mathbf{z}\quad\text{s.t.}\ \ \text{(20)}.\]
Table 1 reports the GUROBI results for this instance: the root relaxation of the KKT-based formulation is unbounded and 31 nodes are explored, whereas the duality-based formulation has a finite root relaxation bound of 11.12347 and is solved at the root node.
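For readers who wish to reproduce this comparison, the following is a minimal gurobipy sketch of the duality-based model for this instance; the script is our own illustration, with the data taken from (20).

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("dual_example")
x = m.addVar(lb=0.0, ub=7.5, name="x")
z = m.addVar(vtype=GRB.BINARY, name="z")
y = m.addVars(4, lb=0.0, name="y")
pi = m.addVars(2, lb=-GRB.INFINITY, name="pi")  # free duals of the flow equalities
mu = m.addVars(4, lb=0.0, name="mu")

m.addConstr(x - 10 * z <= 0)                         # leader constraint in (20a)
m.addConstr(y[0] - y[2] - x == 0)                    # follower flow constraints (20b)
m.addConstr(-y[0] + y[1] - y[3] == 0)
m.addConstr(y[0] + 10 + pi[0] - pi[1] - mu[0] == 0)  # stationarity (20d)
m.addConstr(y[1] - 20 + pi[1] - mu[1] == 0)
m.addConstr(y[2] + 10 - pi[0] - mu[2] == 0)
m.addConstr(y[3] + 20 - pi[1] - mu[3] == 0)
for i in range(4):                                   # complementarity (20e)
    m.addSOS(GRB.SOS_TYPE1, [y[i], mu[i]])

m.setObjective(-gp.quicksum(y[i] * y[i] for i in range(4))
               - 10 * y[0] + 20 * y[1] - 10 * y[2] - 20 * y[3]
               - 0.5 * x - 0.5 * z, GRB.MAXIMIZE)
m.optimize()
```

Its concave quadratic objective is what keeps the root relaxation bounded, in contrast to the bilinear objective of the KKT-based model.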
## 4 Numerical experiments
In this section, we study the performance of the single-level reformulations derived in Section 3 on two applications. In Section 4.1, we consider an equilibrium facility location (EFL) problem on networks. In Section 4.2, we consider a location planning problem for renewable generation units (RGUs) in distribution networks under uncertainty. The computational details and test instances used for the two applications are described in Sections 4.1.1 and 4.2.1, respectively. The results and insights gained from the various experiments conducted are discussed in Sections 4.1.2, 4.1.3, and 4.2.2.
### Application 1: Equilibrium facility location on networks
Consider a directed network \(G=(\mathsf{N},\mathsf{A})\) (where \(\mathsf{N}\) is the set of nodes and \(\mathsf{A}\) is the set of arcs) with existing demand and supply nodes for a single commodity denoted as \(\mathsf{N}_{\boldsymbol{D}}\subseteq\mathsf{N}\) and \(\mathsf{N}_{\boldsymbol{S}}\subseteq\mathsf{N}\), respectively. A leader firm wishes to locate production facilities at a subset of a set of potential nodes, say \(\mathsf{N}_{0}\subseteq\mathsf{N}\), of the network and determine their production levels subject to capacity constraints with the goal of maximizing profit. Let \(\boldsymbol{z}=(\boldsymbol{z}_{i},\ i\in\mathsf{N}_{0})\) denote the vector of binary decisions of locating a production facility at \(i\) and let \(\boldsymbol{Q}=(\boldsymbol{Q}_{i},\ i\in\mathsf{N}_{0})\) denote the vector of production quantities at facility \(i\) with per unit production cost of \(v_{i}\) and production capacity of \(\overline{\boldsymbol{Q}}_{i}\) for \(i\in\mathsf{N}_{0}\). The total capacity budget over all opened production facilities is \(Q_{\max}\).
At the lower level, we let \(\boldsymbol{f}=(\boldsymbol{f}_{ij},\ (i,j)\in\mathsf{A})\), \(\boldsymbol{D}=(\boldsymbol{D}_{i},\ i\in\mathsf{N}_{\boldsymbol{D}})\), \(\boldsymbol{S}=(\boldsymbol{S}_{j},\ j\in\mathsf{N}_{\boldsymbol{S}})\) be the vectors of flow, demand, and supply in the network, respectively. Assuming competition between new firms and those already in place, the production decisions \(\boldsymbol{Q}\) change the total supply of commodity in the market and impact equilibrium prices and flow in the network. The resulting commodity flows and their corresponding demand and supply quantities correspond to a new competitive equilibrium which is obtained by solving the variational inequality [4]
\[\langle\boldsymbol{\alpha}(\boldsymbol{f}^{*}),\boldsymbol{f}-\boldsymbol{f}^ {*}\rangle-\langle\boldsymbol{\beta}(\boldsymbol{D}^{*}),\boldsymbol{D}- \boldsymbol{D}^{*}\rangle+\langle\boldsymbol{\gamma}(\boldsymbol{S}^{*}), \boldsymbol{S}-\boldsymbol{S}^{*}\rangle\geq 0,\quad\forall(\boldsymbol{f}, \boldsymbol{D},\boldsymbol{S})\in\Omega(\boldsymbol{Q}),\]
where \(\boldsymbol{\alpha}(\boldsymbol{f})\), \(\boldsymbol{\beta}(\boldsymbol{D})\), \(\boldsymbol{\gamma}(\boldsymbol{S})\) are the inverse flow, inverse demand, and inverse supply cost vector functions, respectively. The set \(\Omega(\boldsymbol{Q})\) consists of the network flow balance and non-negativity constraints
\[\Omega(\boldsymbol{Q})=\left\{(\boldsymbol{f},\boldsymbol{D},\boldsymbol{S})\ \middle|\ \begin{array}{l}\mathbb{I}_{0\boldsymbol{f}}\boldsymbol{f}+\mathbb{I}_{0\boldsymbol{D}}\boldsymbol{D}-\mathbb{I}_{0\boldsymbol{S}}\boldsymbol{S}-\boldsymbol{Q}=0,\\ \mathbb{I}_{1\boldsymbol{f}}\boldsymbol{f}+\mathbb{I}_{1\boldsymbol{D}}\boldsymbol{D}-\mathbb{I}_{1\boldsymbol{S}}\boldsymbol{S}=0,\\ \boldsymbol{f}\geq 0,\ \boldsymbol{D}\geq 0,\ \boldsymbol{S}\geq 0\end{array}\right\},\]
| | **RootRelax** | **# Nodes** | **ObjVal** | **ObjBnd** |
| --- | --- | --- | --- | --- |
| KKT-based formulation | \(\infty\) | 31 | 10.78124986 | 10.78125191 |
| Duality-based formulation | 11.12347 | 1 | 10.78125 | 10.78125 |

Table 1: GUROBI results for the Section 3.3.2 example.
where \((\mathbb{I}_{0\boldsymbol{f}})_{\mathsf{N}_{0}\times\mathsf{A}}\), \((\mathbb{I}_{0\boldsymbol{D}})_{\mathsf{N}_{0}\times\mathsf{N}_{\boldsymbol{D}}}\), \((\mathbb{I}_{0\boldsymbol{S}})_{\mathsf{N}_{0}\times\mathsf{N}_{\boldsymbol{S}}}\) are the node-arc incidence, demand node incidence, and supply node incidence matrices corresponding to the nodes in \(\mathsf{N}_{0}\), respectively, and \(\mathbb{I}_{1\boldsymbol{f}}\), \(\mathbb{I}_{1\boldsymbol{D}}\), \(\mathbb{I}_{1\boldsymbol{S}}\) are similar matrices corresponding to the nodes in \(\mathsf{N}_{1}=\mathsf{N}\setminus\mathsf{N}_{0}\). Define \(\mathbb{I}_{\boldsymbol{f}}=\begin{pmatrix}\mathbb{I}_{0\boldsymbol{f}}\\ \mathbb{I}_{1\boldsymbol{f}}\end{pmatrix}\), \(\mathbb{I}_{\boldsymbol{D}}=\begin{pmatrix}\mathbb{I}_{0\boldsymbol{D}}\\ \mathbb{I}_{1\boldsymbol{D}}\end{pmatrix}\), \(\mathbb{I}_{\boldsymbol{S}}=\begin{pmatrix}\mathbb{I}_{0\boldsymbol{S}}\\ \mathbb{I}_{1\boldsymbol{S}}\end{pmatrix}\). The constraints of the single-level reformulations of EFL are:
\[\boldsymbol{1}^{\intercal}\boldsymbol{Q}\leq Q_{\max},\ \ \boldsymbol{z}\in\{0,1\}^{| \mathsf{N}_{0}|} \tag{21a}\] \[0\leq\boldsymbol{Q}_{i}\leq\overline{\boldsymbol{Q}}_{i} \boldsymbol{z}_{i}\ \ \ \forall i\in\mathsf{N}_{0}\] (21b) \[\mathbb{I}_{\boldsymbol{f}}\boldsymbol{f}+\mathbb{I}_{ \boldsymbol{D}}\boldsymbol{D}-\mathbb{I}_{\boldsymbol{S}}\boldsymbol{S}- \begin{pmatrix}\boldsymbol{Q}\\ 0\end{pmatrix}=0\] (21c) \[\boldsymbol{f}\geq 0,\ \boldsymbol{D}\geq 0,\ \boldsymbol{S}\geq 0,\] (21d) \[\boldsymbol{\mu}^{\boldsymbol{f}}\geq 0,\ \ \boldsymbol{\mu}^{ \boldsymbol{D}}\geq 0,\ \ \boldsymbol{\mu}^{\boldsymbol{S}}\geq 0,\] (21e) \[\boldsymbol{\alpha}(\boldsymbol{f})+\mathbb{I}_{\boldsymbol{f}}^{ \intercal}\boldsymbol{\pi}-\boldsymbol{\mu}^{\boldsymbol{f}}=0\] (21f) \[-\boldsymbol{\beta}(\boldsymbol{D})+\mathbb{I}_{\boldsymbol{D}}^ {\intercal}\boldsymbol{\pi}-\boldsymbol{\mu}^{\boldsymbol{D}}=0\] (21g) \[\boldsymbol{\gamma}(\boldsymbol{S})-\mathbb{I}_{\boldsymbol{S}}^ {\intercal}\boldsymbol{\pi}-\boldsymbol{\mu}^{\boldsymbol{S}}=0\] (21h) \[\boldsymbol{f}^{\intercal}\boldsymbol{\mu}^{\boldsymbol{f}}=0,\ \ \boldsymbol{D}^{ \intercal}\boldsymbol{\mu}^{\boldsymbol{D}}=0,\ \ \boldsymbol{S}^{\intercal}\boldsymbol{\mu}^{\boldsymbol{S}}=0. \tag{21i}\]
The KKT-based reformulation of EFL is
\[\vartheta_{\text{KKT}}=\max\ \ (\boldsymbol{\pi_{0}}-v)^{\intercal}\boldsymbol{Q}-c^{\intercal}\boldsymbol{z}\quad\text{s.t.}\ \ \text{(21a)--(21i)}, \tag{22}\]
and, rewriting the revenue term \(\boldsymbol{\pi_{0}}^{\intercal}\boldsymbol{Q}\) as in Theorem 1 (here \(\mathbf{h}=\mathbf{0}\) and the follower variables have no finite upper bounds), the duality-based reformulation of EFL is
\[\vartheta_{\text{dual}}=\max\ \ -\left\langle\boldsymbol{\alpha}(\boldsymbol{f}),\boldsymbol{f}\right\rangle+\left\langle\boldsymbol{\beta}(\boldsymbol{D}),\boldsymbol{D}\right\rangle-\left\langle\boldsymbol{\gamma}(\boldsymbol{S}),\boldsymbol{S}\right\rangle-v^{\intercal}\boldsymbol{Q}-c^{\intercal}\boldsymbol{z}\quad\text{s.t.}\ \ \text{(21a)--(21i)}. \tag{23}\]

#### 4.1.1 Test instances

The lower-level cost vectors are chosen to be affine of the form:
\[[\boldsymbol{\alpha}(\boldsymbol{f})]_{ij}=\alpha_{ij}^{1}\cdot\boldsymbol{f}_{ij}+\alpha_{ij}^{0},\qquad\forall(i,j)\in\mathsf{A},\]
\[[\mathbf{\beta}(\mathbf{D})]_{i}=-\beta_{i}^{1}\cdot\mathbf{D}_{i}+\beta_{i}^{0}, \forall i\in\mathsf{N}_{\mathbf{D}},\] \[[\mathbf{\gamma}(\mathbf{S})]_{j}=\gamma_{j}^{1}\cdot\mathbf{S}_{j}+\gamma_{j}^ {0},\qquad\forall j\in\mathsf{N}_{\mathbf{S}},\]
where \(\alpha_{ij}^{1}>0\), \(\beta_{i}^{1}>0\), and \(\gamma_{j}^{1}>0\) so that Assumptions 1-4 are satisfied. Since \(\mathbf{h}=\mathbf{0}\) in EFL, we have from Lemma 4 that the root relaxation of Formulation (23) is bounded and amounts to solving a strictly convex quadratic program. Denote \(\mathrm{Unif}(a,b)\) as the continuous uniform distribution over the interval \((a,b)\). For each instance, the lower-level cost parameters are generated as \(\alpha_{ij}^{0}\sim\mathrm{Unif}(0,3),\ \ \alpha_{ij}^{1}\sim\mathrm{Unif}(0,2),\ \ \beta_{i}^{0}\sim \mathrm{Unif}(1300,1500),\ \ \beta_{i}^{1}\sim\mathrm{Unif}(3,4),\ \ \gamma_{j}^{0}\sim \mathrm{Unif}(1,2),\ \ \mathrm{and}\ \gamma_{j}^{1}\sim\mathrm{Unif}(0,1)\). The upper-level cost parameters are generated as \(c_{i}\sim\mathrm{Unif}(150,200)\) and \(v_{i}\sim\mathrm{Unif}(3,5)\) whereas the capacity parameters are generated as \(\overline{\mathbf{Q}}_{i}\sim\mathrm{Unif}(100,200)\) and \(Q_{\mathrm{max}}=350\cdot|\mathsf{N}_{0}|/4\).
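As an illustration of this instance-generation scheme, here is a small Python sketch; the function name and the dictionary layout are ours, not part of the reference implementation.

```python
import numpy as np

def generate_efl_instance(n_arcs, n_demand, n_supply, n_candidates, seed=0):
    """Draw EFL cost and capacity parameters as described above (seed illustrative)."""
    rng = np.random.default_rng(seed)
    return {
        "alpha0": rng.uniform(0, 3, n_arcs), "alpha1": rng.uniform(0, 2, n_arcs),
        "beta0": rng.uniform(1300, 1500, n_demand), "beta1": rng.uniform(3, 4, n_demand),
        "gamma0": rng.uniform(1, 2, n_supply), "gamma1": rng.uniform(0, 1, n_supply),
        "c": rng.uniform(150, 200, n_candidates), "v": rng.uniform(3, 5, n_candidates),
        "Qbar": rng.uniform(100, 200, n_candidates),
        "Qmax": 350 * n_candidates / 4,
    }
```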
The performance of the two single-level formulations of EFL is evaluated on medium-sized and large-sized networks in Section 4.1.2 and 4.1.3, respectively. A heuristic approach is described in Section 4.1.3 to aid branch-and-bound in solving large-sized network instances. All models are written in Python 3.7 and solved using GUROBI (v9.5.2) with parameters: TimeLimit=600 (for medium-sized networks), TimeLimit=1200 (for large-sized networks), MIPGap=0.01%, IntFeasTol=1e-9, DualReductions=0, and NonConvex=2. The experiments are run on an Intel(R) Core(TM) i7-10510U CPU @ 1.80GHz machine with 16GB RAM.
#### 4.1.2 Medium-sized networks
In this section, we compare model formulations (22) and (23) on networks with \(|\mathsf{N}|\in\{10,20,30,40\}\) and \(|\mathsf{A}|\in\{15,25,35,45,55,65\}\). For each \((|\mathsf{N}|,|\mathsf{A}|)\) pair, five random instances of network, cost, and capacity parameters are generated. Tables 2-3 summarize the results, where \(\mathbf{I}\) is the instance number, \(\mathbf{T}\) is the solution time (in seconds), **#N** is the number of nodes explored during branch-and-bound, **ObjVal** is the best objective value found, **ObjBnd** is the best relaxation bound, and **%Gap** is the optimality gap at termination. The entries marked as “–” denote cases where GUROBI fails to find a finite relaxation bound. We make the following observations:
1. Within the time limit of 600 seconds, Formulation (22) can only solve instances with approximately 10 to 20 nodes and 35 to 45 arcs, whereas Formulation (23) solves all instances within the optimality gap tolerance of 0.01%.
2. When considering only the instances successfully solved by Formulation (22), its runtime and explored node count are several orders of magnitude higher than those of Formulation (23). In fact, Formulation (23) solves most instances at the root node and all of them within 1 second of computation time.
From the above observations, we conjecture that the result of Lemma 3 holds for the EFL constraints (21c), (21f)-(21h), which would imply that the root relaxation of Formulation (22) is unbounded and impose severe computational limitations, even for medium-sized networks. Hence, we restrict our attention to Formulation (23) for large-sized networks.
\begin{table}
\begin{tabular}{c c c|c c c c|c c c c c c} & \multicolumn{8}{c|}{**FORMULATION (22)**} & \multicolumn{8}{c}{**FORMULATION (23)**} \\ \(|\)**N**\(|\) & \(|\)**A**\(|\) & **I** & **T** & **ObjVal** & **ObjBnd** & \%Gap** & **\#N** & **T** & **ObjVal** & **ObjBnd** & \%Gap** & **\#N** \\ \hline \multirow{8}{*}{10} & & 1 & 0.95 & 30,196.00 & 30,196.28 & 0.001 & 9.1E+03 & 0.03 & 30,196.00 & 30,198.82 & 0.009 & 1 \\ & & 2 & 0.20 & 189,463.31 & 189,463.31 & 0.000 & 1.5E+03 & 0.02 & 189,463.31 & 189,463.31 & 0.000 & 1 \\ & 15 & 3 & 1.09 & 128,572.14 & 128,572.49 & 0.000 & 6.5E+03 & 0.03 & 128,572.14 & 128,572.14 & 0.000 & 1 \\ & 4 & 1.56 & 196,712.65 & 196,712.85 & 0.000 & 8.7E+03 & 0.01 & 196,712.65 & 196,712.90 & 0.000 & 1 \\ & 5 & 2.63 & 164,950.80 & 164,951.00 & 0.000 & 1.7E+04 & 0.02 & 164,950.70 & 164,950.80 & 0.000 & 1 \\ \hline \multirow{8}{*}{10} & & 1 & 10.62 & 38,675.81 & 38,675.81 & 0.000 & 5.4E+04 & 0.04 & 38,675.81 & 38,676.11 & 0.001 & 1 \\ & 2 & 2.75 & 123,011.06 & 123,011.04 & 0.000 & 2.0E+04 & 0.04 & 123,010.65 & 123,019.24 & 0.007 & 1 \\ & 25 & 3 & 15.39 & 97,375.16 & 97,379.06 & 0.004 & 4.5E+04 & 0.03 & 97,375.18 & 97,375.18 & 0.000 & 1 \\ & 4 & 10.52 & 104,015.06 & 120,415.72 & 0.001 & 3.5E+04 & 0.02 & 120,415.06 & 120,416.22 & 0.001 & 1 \\ & 5 & 9.86 & 111,571.93 & 111,574.53 & 0.002 & 4.7E+04 & 0.11 & 111,571.93 & 111,571.93 & 0.000 & 1 \\ \hline \multirow{8}{*}{10} & & 1 & 205.47 & 37,636.65 & 37,636.65 & 0.000 & 5.3E+05 & 0.08 & 37,636.65 & 37,636.65 & 0.000 & 1 \\ & 2 & 24.85 & 95,544.40 & 95,549.08 & 0.005 & 1.2E+05 & 0.12 & 95,544.56 & 95,544.56 & 0.000 & 47 \\ & 35 & 215.47 & 88,376.93 & 88,378.22 & 0.001 & 6.1E+05 & 0.09 & 88,376.94 & 88,377.46 & 0.001 & 1 \\ & 4 & 195.94 & 93,342.01 & 93,342.01 & 0.000 & 6.2E+05 & 0.11 & 93,342.01 & 93,342.29 & 0.000 & 1 \\ & 5 & 121.67 & 107,445.38 & 107,445.47 & 0.000 & 3.6E+05 & 0.07 & 107,445.38 & 107,447.96 & 0.002 & 1 \\ \hline \multirow{8}{*}{20} & & 1 & 19.54 & 498,845.56 & 498,845.68 & 0.000 & 1.1E+05 & 0.17 & 498,845.56 & 498,845.56 & 0.000 & 47 \\ & 2 & 0.22 & 433,849.50 & 433,850.56 & 0.000 & 2.3E+03 & 0.07 & 433,849.50 & 433,852.40 & 0.001 & 1 \\ & 25 & 3 & 2.45 & 504,330.80 & 504,332.56 & 0.000 & 2.0E+04 & 0.03 & 504,330.80 & 504,341.46 & 0.002 & 1 \\ & 4 & 11.47 & 509,104.85 & 509,105.03 & 0.000 & 5.8E+04 & 0.11 & 509,104.85 & 509,131.45 & 0.005 & 1 \\ & 5 & 6.35 & 499,875.99 & 499,885.33 & 0.002 & 4.7E+04 & 0.02 & 499,875.99 & 499,920.72 & 0.009 & 1 \\ \hline \multirow{8}{*}{20} & & 1 & 17.23 & 203,934.06 & 203,936.43 & 0.001 & 8.1E+04 & 0.08 & 203,934.06 & 203,934.06 & 0.000 & 1 \\ & 2 & 7.86 & 487,560.62 & 487,560.84 & 0.000 & 3.5E+04 & 0.05 & 487,514.05 & 487,560.62 & 0.010 & 1 \\ & 35 & 3 & 13.87 & 306,256.69 & 306,260.87 & 0.001 & 6.9E+04 & 0.05 & 306,256.70 & 306,284.36 & 0.009 & 1 \\ & 4 & 368.39 & 407,975.87 & 407,999.58 & 0.006 & 1.2E+06 & 0.06 & 407,975.87 & 407,978.52 & 0.001 & 1 \\ & 5 & 66.09 & 327,594.00 & 327,615.84 & 0.007 & 3.0E+05 & 0.27 & 327,594.29 & 327,594.29 & 0.000 & 50 \\ \hline \multirow{8}{*}{20} & & 1 & 311.67 & 142,070.13 & 142,074.87 & 0.003 & 1.2E+06 & 0.12 & 142,070.14 & 142,070.14 & 0.000 & 1 \\ & 2 & 176.98 & 449,096.58 & 449,096.58 & 0.000 & 3.9E+05 & 0.07 & 449,096.58 & 449,096.58 & 0.000 & 1 \\ & 3 & 600.02 & 282,641.07 & – & – & 1.2E+06 & 0.02 & 282,641.07 & 282,663.94 & 0.008 & 1 \\ & 4 & 600.01 & 406,302.66 & – & – & 1.6E+06 & 0.34 & 407,886.81 & 407,906.37 & 0.005 & 1 \\ & 5 & 381.94 & 318,349.93 & 318,379.28 & 0.009 & 1.1E+06 & 0.39 & 318,349.99 & 318,351.48 & 0.000 & 1 \\ 
\hline \multirow{8}{*}{30} & & 1 & 600.01 & 710,038.90 & – & – & 2.5E+06 & 0.12 & 714,821.96 & 714,870.07 & 0.007 & 1 \\ & & 2 & 600.02 & 68,847.05 & – & – & 2.3E+06 & 0.28 & 201,915.23 & 201,915.50 & 0.000 & 1 \\ \hline \end{tabular}
\end{table}
Table 2: **FORMULATION** (22) vs (23) on medium-sized EFL instances (continued in Table 3).
#### 4.1.3 Large-sized networks
We conduct another set of experiments on networks with 100 nodes, _i.e._, \(|\mathsf{N}|=100\), and with a varying number \(|\mathsf{A}|\) of arcs. The aim is to investigate the performance of the stronger formulation (23) on large-sized networks. Only one instance is considered for each value of \(|\mathsf{A}|\). Specifically, \(|\mathsf{A}|\) is gradually increased in steps of 470 by randomly adding new arcs to the previous set of arcs while keeping all other parameters unchanged. The results are summarized in the first half of Table 4.
Formulation (23) is able to handle instances with up to approximately 3400 arcs but struggles to find a lower bound (_i.e._, to find a feasible solution) for instances having 3800 or more arcs. The bounded objective of Formulation (23) at the root node ensures that an upper bound is found for all instances.
We develop a simple rounding heuristic procedure (**RH**) that can be called during branch-and-bound for finding good quality feasible solutions. A pseudo-code of this procedure is given in Algorithm 1. Given a fractional solution \((\widehat{\mathbf{z}},\widehat{\mathbf{Q}})\) available at any point of the branch-and-bound procedure, this heuristic rounds the components of \(\widehat{\mathbf{z}}\) larger than a rounding threshold \(RndTh\_RH\) up to 1, sets the others to zero, and decreases the production quantities of the facilities that were just closed to 0. Clearly, the solution \((\widehat{\mathbf{z}},\widehat{\mathbf{Q}})\) so obtained satisfies the upper-level constraints (21a). For this vector of leader variables, the heuristic then solves the follower problem. Because we selected affine lower-level costs in our instances, the primal and dual follower problems are convex quadratic programs that can be efficiently solved using GUROBI; see (A1) and (A2) in Appendix A.
Inside the branch-and-bound process, we employ the heuristic with probability \(prob\_RH\), meaning that Algorithm 1 is run at a node only if there is a successful Bernoulli trial with success probability \(prob\_RH\).
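A minimal Python sketch of the rounding step together with the Bernoulli gating is shown below; `solve_follower` stands in for the convex-QP primal/dual solve of the follower, and all names are illustrative.

```python
import random
import numpy as np

def maybe_run_rounding_heuristic(z_frac, Q_frac, solve_follower,
                                 rnd_th_rh=0.5, prob_rh=0.05):
    """Run RH with probability prob_rh on a fractional leader solution."""
    if random.random() >= prob_rh:      # unsuccessful Bernoulli trial: skip RH
        return None
    z_hat = (np.asarray(z_frac) >= rnd_th_rh).astype(int)  # round large z_i up to 1
    Q_hat = np.asarray(Q_frac) * z_hat                     # closed facilities produce 0
    return z_hat, Q_hat, solve_follower(z_hat, Q_hat)      # feasible leader-follower pair
```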
In our implementation, we select the **RH** parameters \(RndTh\_RH\) to be 0.5 and \(prob\_RH\) to be as follows:
\[prob\_RH=\begin{cases}100\%&\text{ if }\#FeasSoln<2\\ 5\%&\text{ o.w.}\end{cases} \tag{24}\]
\begin{table}
\begin{tabular}{c c|c c c c c c|c c c c} & & \multicolumn{8}{c|}{**FORMULATION** (22)} & \multicolumn{8}{c}{**FORMULATION** (23)} \\ \(|\mathsf{N}|\) & \(|\mathsf{A}|\) & **I** & **T** & **ObjVal** & **ObjBnd** & \%Gap** & **\#N** & **T** & **ObjVal** & **ObjBnd** & \%Gap** & **\#N** \\ \hline \multirow{4}{*}{40} & & 1 & 600.01 & 689,235.43 & – & – & 2.1E+06 & 0.20 & 873,272.97 & 873,310.71 & 0.004 & 1 \\ & & 2 & 600.02 & 354,451.47 & – & – & 1.4E+06 & 0.18 & 899,208.53 & 899,273.88 & 0.007 & 1 \\ & 55 & 3 & 600.02 & 555,781.08 & – & – & 1.0E+06 & 0.21 & 1,059,822.79 & 1,059,897.36 & 0.007 & 1 \\ & & 4 & 600.04 & 465,578.18 & – & – & 1.6E+06 & 0.04 & 1,168,314.25 & 1,168,325.19 & 0.001 & 1 \\ & & 5 & 600.03 & 389,722.74 & – & – & 1.7E+06 & 0.07 & 711,535.60 & 711,537.96 & 0.000 & 36 \\ \hline \multirow{4}{*}{40} & & 1 & 600.02 & 394,876.13 & – & – & 1.6E+06 & 0.15 & 883,248.41 & 883,255.85 & 0.001 & 1 \\ & & 2 & 600.02 & 314,351.29 & – & – & 1.4E+06 & 0.22 & 899,579.67 & 899,607.05 & 0.003 & 57 \\ & & 3 & 600.01 & 566,669.73 & – & – & 1.3E+06 & 0.52 & 924,406.16 & 924,492.16 & 0.009 & 1 \\ & & 4 & 600.03 & 701,145.59 & – & – & 1.9E+06 & 0.11 & 1,139,441.56 & 1,139,441.56 & 0.000 & 1 \\ & & 5 & 600.02 & 92,212.37 & – & – & 9.7E+05 & 0.44 & 570,315.76 & 570,319.21 & 0.001 & 7 \\ \end{tabular}
\end{table}
Table 3: (Cont’d) **FORMULATION** (22) vs (23) on medium-sized EFL instances.
where \(\#FeasSoln\) is the current number of feasible solutions found in the branch-and-bound tree.
The second half of Table 4 summarizes the results for Formulation (23) when **RH** is incorporated in the solution procedure. In this table, **#RH Execs** is the number of times **RH** is called during the branch-and-bound search and **Time/RH Exec** is the average time required per execution of **RH**. We make the following observations:
1. Using **RH**, Formulation (23) solves all instances within an optimality gap of 1%, but most of them reach the time limit of 1200 seconds.
2. As the number of arcs increases, the size of the follower primal-dual problems becomes larger, which is also reflected by the gradual increase in the time spent per **RH** execution and by a decrease in the number of **RH** executions.
3. Fewer nodes are explored when using **RH** than when **RH** is not used. Moreover, as the number of arcs in a network increases for given cost parameters, the optimal value decreases. The reason is the increase in the number of stationarity constraints (21f) on the equilibrium price vector \(\mathbf{\pi}\).
### Application 2: Planning of renewable generation units
Due to rising electricity demand in the past few decades, power grids are often burdened with very large loads, which may result in power outages in the worst case. A possible solution is to integrate renewable generation units (RGUs) into power distribution networks to improve reliability. As a result, the optimal deployment of RGUs in distribution networks has attracted recent attention from the research community; see [13]. In a power distribution network, there are several firms, each controlling a number of generating units. Each generation unit submits a bid to the independent system operator (ISO). This bid defines the supply curve at each of the supply nodes.
Table 4: **FORMULATION** (23) with and without Rounding Heuristic (**RH**) on large-sized EFL instances having \(\lvert\mathbf{N}\rvert=100\). [The table body did not survive extraction; its columns were \(\lvert\mathsf{A}\rvert\), **T**, **ObjVal**, **ObjBnd**, **%Gap**, and **#N** for each variant, plus **# RH Execs** and **Time/RH Exec** for the **RH** variant.]
The ISO then decides how much power to buy from the different units, how much to deliver to consumers, and what prices to charge, based on the solution of an Optimal Power Flow (OPF) problem.
Consider a power distribution system represented as a directed network \(G=(\mathsf{N},\mathsf{A})\) (where \(\mathsf{N}\) is the set of nodes/buses and \(\mathsf{A}\) is the set of arcs/lines) with demand nodes \(\mathsf{N}_{\boldsymbol{D}}\subseteq\mathsf{N}\) and supply nodes \(\mathsf{N}_{\boldsymbol{S}}\subseteq\mathsf{N}\). Define \(\mathsf{N}_{0}\) to be the set of nodes/buses under control of the leader firm where RGUs with capacity \(\boldsymbol{Q}_{i}\) for \(i\in\mathsf{N}_{0}\) can be located. We assume that there is one generation unit per node, which means \(\mathsf{N}_{\boldsymbol{S}}\cap\mathsf{N}_{0}=\emptyset\), and that the ISO accepts all the RGU generation so that there is no bidding for nodes in \(\mathsf{N}_{0}\). The lower level is the OPF problem faced by the ISO, which can be understood as a single commodity SPE problem (with additional constraints due to Kirchhoff's voltage law) [5] where the supply curves are the bids submitted by generation units at nodes \(\mathsf{N}_{\boldsymbol{S}}\). To simplify the derivations, we consider a DC OPF model where resistance is assumed negligible relative to reactance and is ignored. Denote \(\boldsymbol{f}=(\boldsymbol{f}_{ij},\ (i,j)\in\mathsf{A})\), \(\boldsymbol{D}=(\boldsymbol{D}_{i},\ i\in\mathsf{N}_{\boldsymbol{D}})\), \(\boldsymbol{S}=(\boldsymbol{S}_{j},\ j\in\mathsf{N}_{\boldsymbol{S}})\) to be the vectors of power flows, demands, and supplies in the network, respectively. Assuming competition within the network, the installation of RGU capacity \(\boldsymbol{Q}\) increases total power generation capacity and impacts equilibrium prices and power flows in the distribution network. We also consider uncertainty in RGU generation [13] using \(\boldsymbol{\xi}=(\boldsymbol{\xi}_{i},\ i\in\mathsf{N}_{0})\) where each \(0\leq\boldsymbol{\xi}_{i}\leq 1\). Here, \(\boldsymbol{\xi}_{i}\) is the fraction of capacity \(\boldsymbol{Q}_{i}\) that is realized into actual RGU generation. For a given \(\boldsymbol{\xi}\), the resulting power flows, demand, and supply will produce a new competitive equilibrium, which is obtained by solving the variational inequality
\[-\langle\boldsymbol{\beta}(\boldsymbol{D}^{*}),\boldsymbol{D}-\boldsymbol{D}^ {*}\rangle+\langle\boldsymbol{\gamma}(\boldsymbol{S}^{*}),\boldsymbol{S}- \boldsymbol{S}^{*}\rangle\geq 0,\quad\forall(\boldsymbol{D},\boldsymbol{S}) \in\mathrm{proj}_{\boldsymbol{D},\boldsymbol{S}}\Omega(\boldsymbol{Q}, \boldsymbol{\xi}),\]
where \(\boldsymbol{\beta}(\boldsymbol{D})\) and \(\boldsymbol{\gamma}(\boldsymbol{S})\) are the inverse demand cost vector and the supply bid functions, respectively. The set \(\Omega(\boldsymbol{Q},\boldsymbol{\xi})\) corresponds to the power flow balance, Kirchhoff's voltage law [5], and the line and generation capacity constraints, given as
\[\Omega(\boldsymbol{Q},\boldsymbol{\xi})=\left\{(\boldsymbol{f},\boldsymbol{D},\boldsymbol{S})\ \middle|\ \begin{array}{l}\mathbb{I}_{0\boldsymbol{f}}\boldsymbol{f}+\mathbb{I}_{0\boldsymbol{D}}\boldsymbol{D}-\mathsf{diag}(\boldsymbol{\xi})\boldsymbol{Q}=0,\\ \mathbb{I}_{1\boldsymbol{f}}\boldsymbol{f}+\mathbb{I}_{1\boldsymbol{D}}\boldsymbol{D}-\mathbb{I}_{1\boldsymbol{S}}\boldsymbol{S}=0,\\ R\boldsymbol{f}=0,\\ 0\leq\boldsymbol{f}\leq\overline{\boldsymbol{f}},\ 0\leq\boldsymbol{S}\leq\overline{\boldsymbol{S}},\ \boldsymbol{D}\geq 0\end{array}\right\},\]
where \(\mathbb{I}_{1\boldsymbol{f}}\), \(\mathbb{I}_{1\boldsymbol{D}}\), and \(\mathbb{I}_{1\boldsymbol{S}}\) are the node-arc incidence, demand node incidence, supply node incidence matrices corresponding to the set of nodes \(\mathsf{N}_{1}=\mathsf{N}\setminus\mathsf{N}_{0}\), respectively, and matrices \((\mathbb{I}_{0\boldsymbol{f}})_{\mathsf{N}_{0}\times\mathsf{A}}\) and \((\mathbb{I}_{0\boldsymbol{D}})_{\mathsf{N}_{0}\times\mathsf{N}_{\boldsymbol{ D}}}\) are defined similarly for the set \(\mathsf{N}_{0}\). Further, \(\overline{\boldsymbol{f}}\) is the vector of line capacities, \(\overline{\boldsymbol{S}}\) is the vector of generator capacities, and \(R\) is the incidence matrix of signed reactance coefficients [5], _i.e._,
\[R_{m,ij}=\begin{cases}s_{ijm}r_{ij}&\text{if }(i,j)\in L_{m}\\ 0&\text{o.w.}\end{cases}\]
where \(m\) indexes Kirchhoff voltage loops1, \(L_{m}\) is the ordered set of arcs in loop \(m\), \(s_{ijm}=\pm 1\) depending on the orientation of arc \((i,j)\) in loop \(m\), and \(r_{ij}\) is the reactance of line \((i,j)\). Under Assumption 2, the following optimality conditions are necessary and sufficient for the problem faced by the ISO, where dual variables are specified in square brackets:
Footnote 1: The Kirchhoff voltage loops in an undirected network can be determined using the cycle_basis() function from the Python package NetworkX. The direction of arcs can then be used to determine their orientation in a loop.
_Primal Feasibility:_
\[\mathbb{I}_{0\boldsymbol{f}}\boldsymbol{f}+\mathbb{I}_{0 \boldsymbol{D}}\boldsymbol{D}-\mathsf{diag}(\boldsymbol{\xi})\boldsymbol{Q}=0 [\boldsymbol{\lambda_{0}}] \tag{25a}\] \[\mathbb{I}_{1\boldsymbol{f}}\boldsymbol{f}+\mathbb{I}_{1 \boldsymbol{D}}\boldsymbol{D}-\mathbb{I}_{1\boldsymbol{S}}\boldsymbol{S}=0 [\boldsymbol{\lambda_{1}}]\] (25b) \[R\boldsymbol{f}=0 [\boldsymbol{\alpha}]\] (25c) \[\boldsymbol{S}\leq\overline{\boldsymbol{S}},\ \boldsymbol{f}\leq \overline{\boldsymbol{f}}\] (25d) \[\boldsymbol{f}\geq 0,\ \boldsymbol{D}\geq 0,\ \boldsymbol{S}\geq 0 \tag{25e}\]
_Dual Feasibility:_
\[\boldsymbol{\mu^{f}}\geq 0,\ \boldsymbol{\mu^{D}}\geq 0,\ \boldsymbol{\mu^{S}}\geq 0,\ \boldsymbol{\theta^{f}}\geq 0,\ \ \boldsymbol{\theta^{S}}\geq 0 \tag{25f}\]
_Stationarity Conditions:_
\[\mathbb{I}_{0\boldsymbol{f}}^{\intercal}\boldsymbol{\lambda_{0}}+ \mathbb{I}_{1\boldsymbol{f}}^{\intercal}\boldsymbol{\lambda_{1}}+R^{\intercal} \boldsymbol{\alpha}+\boldsymbol{\theta^{f}}-\boldsymbol{\mu^{f}}=0 \tag{25g}\] \[-\boldsymbol{\beta}(\boldsymbol{D})+\mathbb{I}_{0D}^{\intercal} \boldsymbol{\lambda_{0}}+\mathbb{I}_{1\boldsymbol{D}}^{\intercal}\boldsymbol{ \lambda_{1}}-\boldsymbol{\mu^{D}}=0\] (25h) \[\boldsymbol{\gamma}(\boldsymbol{S})-\mathbb{I}_{1\boldsymbol{S}}^ {\intercal}\boldsymbol{\lambda_{1}}+\boldsymbol{\theta^{S}}-\boldsymbol{\mu^{S }}=0 \tag{25i}\]
_Complementarity Slackness:_
\[\boldsymbol{f}^{\intercal}\boldsymbol{\mu^{f}}=0,\ \ \boldsymbol{D}^{ \intercal}\boldsymbol{\mu^{D}}=0,\ \ \boldsymbol{S}^{\intercal}\boldsymbol{\mu^{S}}=0 \tag{25j}\] \[(\overline{\boldsymbol{f}}-\boldsymbol{f})^{\intercal}\boldsymbol{ \theta^{f}}=0,\ \ (\overline{\boldsymbol{S}}-\boldsymbol{S})^{\intercal} \boldsymbol{\theta^{S}}=0. \tag{25k}\]
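The loop matrix \(R\) in (25c) can be assembled from an undirected cycle basis, as the footnote above indicates; the following is a minimal sketch under these conventions, with all function and argument names being illustrative.

```python
import networkx as nx
import numpy as np

def reactance_loop_matrix(arcs, reactance):
    """Build R: one row per Kirchhoff voltage loop, one column per arc."""
    und = nx.Graph()
    und.add_edges_from(arcs)
    loops = nx.cycle_basis(und)                   # voltage loops of the network
    arc_index = {a: k for k, a in enumerate(arcs)}
    R = np.zeros((len(loops), len(arcs)))
    for m, loop in enumerate(loops):
        for (u, v) in zip(loop, loop[1:] + loop[:1]):  # traverse the loop once
            if (u, v) in arc_index:                     # arc oriented with traversal
                R[m, arc_index[(u, v)]] = reactance[(u, v)]
            if (v, u) in arc_index:                     # opposite arc, opposite sign
                R[m, arc_index[(v, u)]] = -reactance[(v, u)]
    return R
```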
Denote by \(\Psi(\boldsymbol{Q},\boldsymbol{\xi})=\{(\boldsymbol{f},\boldsymbol{D},\boldsymbol{S},\boldsymbol{\lambda},\boldsymbol{\alpha},\boldsymbol{\mu},\boldsymbol{\theta}):\text{(25a)--(25k)}\}\) the set of primal-dual solutions of the ISO problem for given \((\boldsymbol{Q},\boldsymbol{\xi})\).
Compared to the general problem formulation of Section 2, in this problem \(\mathbf{y}:=\big{(}\mathbf{D},\,\mathbf{S}\big{)}\), \(\mathbf{\overline{y}}:=(\mathbf{\infty},\mathbf{\overline{S}})\), \(\mathbf{w}:=\mathbf{f}\), \(\mathbf{\overline{w}}:=\mathbf{\overline{f}}\), \(G:=\begin{pmatrix}\mathbb{I}_{0\mathbf{D}}&0\\ \mathbb{I}_{1\mathbf{D}}&-\mathbb{I}_{1\mathbf{S}}\\ 0&0\end{pmatrix}\), \(H:=\begin{pmatrix}\mathbb{I}_{0\mathbf{f}}\\ \mathbb{I}_{1\mathbf{f}}\\ R\end{pmatrix}\), and \(\mathbf{h}:=\begin{pmatrix}0\\ 0\\ 0\end{pmatrix}\).
Assume next that we draw finite samples for the uncertainty \(\{\mathbf{\xi}^{(n)}\}_{n=1}^{N}\). Then using sample average approximation for the expectation, the single-level reformulation becomes
\[\max_{\begin{subarray}{c}\mathbf{z},\mathbf{Q},\mathbf{f}^{(n)},\mathbf{D}^{(n)}, \mathbf{S}^{(n)},\\ \mathbf{\lambda}^{(n)},\mathbf{\alpha}^{(n)},\mathbf{\mu}^{(n)},\mathbf{\theta}^{(n)}\end{subarray}} \frac{1}{N}\sum_{n=1}^{N}\mathbf{\lambda}_{0}^{(n)}{}^{\mathsf{T}} \mathsf{diag}(\mathbf{\xi}^{(n)})\mathbf{Q}-c^{\mathsf{T}}\mathbf{z}-v^{\mathsf{T}}\mathbf{Q} \tag{27}\] \[\text{s.t.}\] \[(\mathbf{f}^{(n)},\mathbf{D}^{(n)},\mathbf{S}^{(n)},\mathbf{\lambda}^{(n)},\mathbf{ \alpha}^{(n)},\mathbf{\mu}^{(n)},\mathbf{\theta}^{(n)})\in\Psi(\mathbf{Q},\mathbf{\xi}^{(n)}), \forall n\in[N],\] \[\mathbf{\lambda}^{(n)}\geq 0,\forall n\in[N].\]
Using Theorem 1 for each scenario of uncertainty, the objective function in (27) can be re-expressed to obtain
\[\max_{\begin{subarray}{c}\mathbf{z},\mathbf{Q},\mathbf{f}^{(n)},\\ \mathbf{D}^{(n)},\mathbf{S}^{(n)},\mathbf{\lambda}^{(n)},\\ \mathbf{\alpha}^{(n)},\mathbf{\mu}^{(n)},\mathbf{\theta}^{(n)}\end{subarray}}\ \ \frac{1}{N}\sum_{n=1}^{N}\left(\left\langle\mathbf{\beta}(\mathbf{D}^{(n)}),\ \mathbf{D}^{(n)}\right\rangle-\left\langle\mathbf{\gamma}(\mathbf{S}^{(n)}),\ \mathbf{S}^{(n)}\right\rangle-\overline{\mathbf{f}}^{\intercal}\mathbf{\theta}^{\mathbf{f}\,(n)}-\overline{\mathbf{S}}^{\intercal}\mathbf{\theta}^{\mathbf{S}\,(n)}\right)-c^{\intercal}\mathbf{z}-v^{\intercal}\mathbf{Q} \tag{28}\] \[\text{s.t.}\quad(\mathbf{f}^{(n)},\mathbf{D}^{(n)},\mathbf{S}^{(n)},\mathbf{\lambda}^{(n)},\mathbf{\alpha}^{(n)},\mathbf{\mu}^{(n)},\mathbf{\theta}^{(n)})\in\Psi(\mathbf{Q},\mathbf{\xi}^{(n)}),\quad\mathbf{\lambda}^{(n)}\geq 0,\qquad\forall n\in[N].\]
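The sampling step itself is straightforward; a minimal sketch (with an illustrative function name and seed, and the uniform distribution used in Section 4.2.1) is:

```python
import numpy as np

def draw_xi_samples(n_scenarios, n_sites, seed=0):
    """Draw RGU availability factors xi_i ~ Unif(0, 1) for the SAA model (28)."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 1.0, size=(n_scenarios, n_sites))

xi_samples = draw_xi_samples(n_scenarios=10, n_sites=5)  # e.g., N = 10, |N0| = 5
```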
#### 4.2.1 Test instances
We select the standard IEEE bus systems summarized in Table 5 as the power distribution networks in our numerical study. As is common in the power generation literature, we combine all lines between each pair of nodes in the data set into an equivalent single line. Each line is then transformed into a pair of opposite arcs in order to obtain a directed network, which allows power flow in either direction between a pair of nodes.
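A small NetworkX sketch of this preprocessing step (with illustrative names) is:

```python
import networkx as nx

def lines_to_arcs(bus_lines):
    """Collapse parallel lines and create a pair of opposite arcs per line."""
    und = nx.Graph()
    und.add_edges_from(bus_lines)   # parallel lines merge into one undirected edge
    return [(i, j) for (i, j) in und.edges] + [(j, i) for (i, j) in und.edges]
```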
The set of potential RGU locations \(\mathsf{N}_{0}\) is randomly selected from set \(\mathsf{N}\setminus\mathsf{N}_{\mathbf{S}}\) where \(|\mathsf{N}_{0}|\) is given in Table 6. The first-stage cost parameters of the upper-level problem are randomly generated as \(c_{i}\sim\text{Unif}(150,200)\) and \(v_{i}\sim\text{Unif}(3,5)\) for each \(i\in\mathsf{N}_{0}\). The lower-level cost vectors are chosen to be affine of the form:
\[[\mathbf{\beta}(\mathbf{D})]_{i}=-\beta_{i}^{1}\cdot\mathbf{D}_{i}+\beta_{i}^{0},\,\,\,\, \,\forall i\in\mathsf{N}_{\mathbf{D}},\]
\[[\mathbf{\gamma}(\mathbf{S})]_{j}=\gamma_{j}^{1}\cdot\mathbf{S}_{j}+\gamma_{j}^{0},\qquad\forall j \in\mathsf{N}_{\mathbf{S}},\]
where \(\beta_{i}^{1}>0\) and \(\gamma_{j}^{1}>0\) so that Assumptions 1-4 are satisfied. Since \(\mathbf{h}=\mathbf{0}\) in this application, we have from Lemma 4 that the root relaxation of Formulation (28) is bounded. For a load bus \(i\), the intercept parameter \(\beta_{i}^{0}\) is set to \(40\) and the slope \(\beta_{i}^{1}\) is determined so that the resulting cost is \(30\) at the rated load in megawatt (MW). More specifically, we set
\[\beta_{i}^{1}=\frac{40-30}{\text{Load MW rating at bus }i}.\]
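For example, a load bus with a rated load of 50 MW receives \(\beta_{i}^{1}=(40-30)/50=0.2\), so the inverse demand at that bus falls linearly from 40 at zero load to 30 at the 50 MW rating.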
For a generator bus \(j\), the intercept and slope parameters \(\gamma_{j}^{0}\) and \(\gamma_{j}^{1}\) are fixed such that \(10<\gamma_{j}^{0}<33\) and \(0.03<\gamma_{j}^{1}<0.70\). The uncertainty samples are drawn according to a uniform distribution \(\mathbf{\xi}_{i}\sim\text{Unif}(0,1)\) for \(i\in\mathsf{N}_{0}\) where the sample size \(N\in\{10,25,50,100\}\). All models are written in Python 3.7 and solved using GUROBI (v9.5.2) with the same parameters and machine settings as described in Section 4.1.1.
#### 4.2.2 Computational results
First, we compare Formulations (27) and (28) in Table 7 on a 3 bus network with 3 lines, 2 generators, and 2 load buses. We fix the cost parameters \(c=0\) and \(v=0\) and vary the sample size \(N\) of the uncertainty from 1 to 5. In this case, we again conjecture that the result of Lemma 3 holds for constraints (25a)-(25c), (25g)-(25i), which would imply that the root relaxation of Formulation (27) is unbounded; this limits Formulation (27) to solving at most 4 scenarios within the time limit of 600 seconds (see Table 7). Further, for \(N\in\{1,2,3,4\}\), Formulation (28) is much faster and solves all instances at the root node, whereas Formulation (27) explores a number of nodes that is several orders of magnitude larger. Therefore, for the remainder of this section we focus on Formulation (28).
| **Dataset** | 14 Bus | 30 Bus | 57 Bus | 118 Bus | 300 Bus |
| --- | --- | --- | --- | --- | --- |
| \(\lvert\mathsf{N}_{0}\rvert\) | 5 | 10 | 20 | 40 | 80 |

Table 6: Size of \(\mathsf{N}_{0}\) for IEEE test networks.
| **IEEE Dataset, \(\lvert\mathsf{N}\rvert\)** | **# Lines** | **# Load Buses, \(\lvert\mathsf{N}_{\boldsymbol{D}}\rvert\)** | **# Generator Buses, \(\lvert\mathsf{N}_{\boldsymbol{S}}\rvert\)** |
| --- | --- | --- | --- |
| 14 Bus | 20 | 11 | 2 |
| 30 Bus | 41 | 21 | 2 |
| 57 Bus | 80 | 42 | 4 |
| 118 Bus | 186 | 91 | 19 |
| 300 Bus | 411 | 188 | 56 |

Table 5: IEEE Power Flow Test Cases (https://labs.ece.uw.edu/pstca/).
Second, we study the performance of Formulation (28) on the standard IEEE instances and cost parameters described in Section 4.2.1. Table 8 summarizes the results, where the last column **Avg Time RootRelax** is the average time spent solving the root relaxation. For the 300 bus network and \(N\in\{50,100\}\), the time limit is set to 1200 seconds. For the remaining combinations, the time limit is 600 seconds. We make the following observations from the first half of Table 8:
1. For the 14 and 30 bus networks with sample size \(N\in\{10,25,50\}\), the solver is able to successfully find lower and upper bounds on the optimal value. The instances, however, reach the time limit of 600 seconds and terminate with a gap larger than the tolerance of 0.01%.
2. The 57 bus network with \(N\in\{10,25\}\) and the 118 bus network with \(N=10\) can be handled. For larger sample sizes, however, the solver cannot find a lower bound (_i.e._, a feasible solution) within 600 seconds. In the case of the 300 bus network, no lower bound is found for any value of \(N\).
3. The last column shows that the time spent solving the root relaxation grows roughly fourfold for each twofold increase in sample size \(N\). As the root relaxation becomes computationally expensive with increasing \(N\), fewer branch-and-bound nodes are explored in the given time limit.
The above observations suggest that finding a feasible solution as early as possible in the branch-and-bound tree should help solve the larger-sized instances. Hence, we use a rounding heuristic **RH** similar to that described in Section 4.1.3; see Appendix B.1 for details. We set the **RH** parameter \(RndTh\_RH\) to 0.5 and \(prob\_RH\) as follows:
\[prob\_RH=\begin{cases}100\%&\text{if }\#FeasSoln<1\\ 0\%&\text{o.w.}\end{cases} \tag{29}\]
Thus, **RH** runs only until one feasible solution is found. The results obtained after using the **RH** are given in Table 8, where **Time/RH Exec** is the time spent per **RH** execution. We make significant progress within the time limit on instances previously unsolved (except for the 300 bus case with \(N=100\)) by exploring fewer nodes in most cases. For instances that were solved before, **RH** sometimes improves either the runtime (_e.g._, 57 bus with \(N=10\)) or the optimality gap at termination
\begin{table}
\begin{tabular}{c c c c|c c c c c|c c c c c} \hline & & & & \multicolumn{5}{c|}{**FORMULATION** (27)} & \multicolumn{5}{c}{**FORMULATION** (28)} \\ \(|\textbf{N}|\) & **A** & **\# RGUs, \(|\textbf{N}_{0}|\)** & \(N\) & **T** & **ObjVal** & **ObjBnd** & \%**Gap** & **\#N** & **T** & **ObjVal** & **ObjBnd** & \%**Gap** & **\#N** \\ \hline \multirow{5}{*}{3} & \multirow{5}{*}{6} & \multirow{5}{*}{1} & 1 & 0.08 & 272.39 & 272.39 & 0 & 1.3E+02 & 0.01 & 272.39 & 272.39 & 0 & 1 \\ & & & 2 & 0.50 & 206.75 & 206.75 & 0 & 5.6E+03 & 0.01 & 206.75 & 206.75 & 0 & 1 \\ & & & 3 & 11.01 & 316.89 & 316.89 & 0 & 7.9E+04 & 0.02 & 316.89 & 316.89 & 0 & 1 \\ & & & 4 & 363.72 & 436.66 & 436.66 & 0 & 1.8E+06 & 0.02 & 436.66 & 436.66 & 0 & 1 \\ & & & 5 & 600.01 & 480.63 & – & – & 2.1E+06 & 0.03 & 480.63 & 480.63 & 0 & 1 \\ \hline \end{tabular}
\end{table}
Table 7: **FORMULATION** (27) vs (28) on 3 bus network.
(_e.g._, 14 bus with \(N=50\)). Also for a given bus network, the gaps increase with \(N\), as would be expected.
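To illustrate how a heuristic of this kind can be wired into the solver, the sketch below shows one possible implementation of the **RH** trigger rule of Eq. (29) as a Gurobi MIPNODE callback. The actual rounding procedure is the one described in Appendix B.1, and the variable container `model._binvars` is our assumption:

```python
import gurobipy as gp
from gurobipy import GRB

RND_TH_RH = 0.5  # rounding threshold RndTh_RH

def rounding_heuristic(model, where):
    if where != GRB.Callback.MIPNODE:
        return
    if model.cbGet(GRB.Callback.MIPNODE_STATUS) != GRB.OPTIMAL:
        return
    # prob_RH = 100% only while #FeasSoln < 1, i.e., no incumbent yet (Eq. 29)
    if model.cbGet(GRB.Callback.MIPNODE_SOLCNT) >= 1:
        return
    x = model._binvars                     # binary variables stored on the model
    relax = model.cbGetNodeRel(x)          # node relaxation values
    rounded = [1.0 if v >= RND_TH_RH else 0.0 for v in relax]
    model.cbSetSolution(x, rounded)        # Gurobi keeps the point only if feasible
    model.cbUseSolution()

# Usage: model._binvars = binary_vars; model.optimize(rounding_heuristic)
```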
## 5 Conclusion
In this paper, we study single-level reformulations of bilevel programs with spatial price equilibrium constraints. A new single-level formulation (11) is derived from Lagrangian duality; its relaxation is much stronger and is provably bounded under minor assumptions, which distinguishes it from the usual single-level formulation (6). As a result, the new formulation overcomes computational limitations of the usual single-level formulation. The strength of the model is illustrated through numerical experiments on various small- to medium-sized instances of the EFL problem and of RGU planning under uncertainty. For larger-sized instances, using the new, stronger formulation yields tight upper bounds, but the solver struggles with lower bounds, _i.e._, it struggles to find feasible solutions. To deal with this issue, we develop a rounding heuristic procedure and demonstrate its effectiveness in successfully handling difficult instances.
Our results reveal several computational challenges that need attention in the future. First, for large-sized instances of RGU planning, say 300-bus networks with 50 or 100 scenarios, solving the root relaxation alone takes between 700 and 1200 seconds. To speed up relaxation solution time, one possible direction is to look into generalized Benders decomposition for solving two-stage nonlinear programs with a convex second-stage problem. Second, the gap at termination after 600 seconds for most instances of RGU planning is between 1 and 2% and sometimes reaches 6%. This opens up opportunities for identifying valid inequalities to strengthen the relaxation.
## Data availability statement
[https://t.ly/PC3D6](https://t.ly/PC3D6) contains all the data used in numerical experiments (Section 4).
* [https://t.ly/kHwCP](https://t.ly/kHwCP) for medium-sized networks in Section 4.1.2, and [https://t.ly/QVBSX](https://t.ly/QVBSX) for large-sized networks in Section 4.1.3.
* Links have a folder for each (# Nodes, # Arcs) pair;
* Each folder contains data of different instances of a (# Nodes, # Arcs) pair;
* For each instance, separate .csv files specify upper- and lower-level parameters.
* [https://t.ly/iP6iW](https://t.ly/iP6iW) for IEEE bus networks in Section 4.2.2. The link has a folder for each IEEE bus network in Table 5. Each folder contains:
* A separate .csv file that specifies the upper-level parameters.
* A subfolder with .csv files for the lower-level parameters.
* Separate .csv files for different numbers of uncertain renewable-generation scenarios.
|
2309.07577 | Modeling and Simulating X-ray Spectra | X-ray spectroscopy is a powerful technique for the analysis of the energy
distribution of X-rays from astrophysical sources. It allows for the study of
the properties, composition, and physical processes taking place at the site of
emission. X-ray spectral analysis methods are diverse, as they often need to be
tailored to the specific type of instrument used to collect the data. In
addition, these methods advance together with the improvement of the technology
of the telescopes and detectors. Here, we present a compact overview of the
common procedures currently employed in this field. We describe the fundamental
data structure and the essential auxiliary information required for conducting
spectral analysis and we explore some of the most relevant aspects related to
statistical and computational challenges in X-ray spectroscopy. Furthermore, we
outline some practical scenarios in the context of data reduction, modeling and
fitting of spectra, and spectral simulations. | L. Ducci, C. Malacaria | 2023-09-14T10:21:37Z | http://arxiv.org/abs/2309.07577v2 | # Modeling and Simulating X-ray Spectra
###### Abstract
X-ray spectroscopy is a powerful technique for the analysis of the energy distribution of X-rays from astrophysical sources. It allows for the study of the properties, composition, and physical processes taking place at the site of emission. X-ray spectral analysis methods are diverse, as they often need to be tailored to the specific type of instrument used to collect the data. In addition, these methods advance together with the improvement of the technology of the telescopes and detectors. Here, we present a compact overview of the common procedures currently employed in this field. We describe the fundamental data structure and the essential auxiliary information required for conducting spectral analysis and we explore some of the most relevant aspects related to statistical and computational challenges in X-ray spectroscopy. Furthermore, we outline some practical scenarios in the context of data reduction, modeling and fitting of spectra, and spectral simulations.
Keywords: Methods: data analysis; X-rays: general; Techniques: spectroscopic. |
2308.16753 | Context Aware Query Rewriting for Text Rankers using LLM | Query rewriting refers to an established family of approaches that are
applied to underspecified and ambiguous queries to overcome the vocabulary
mismatch problem in document ranking. Queries are typically rewritten during
query processing time for better query modelling for the downstream ranker.
With the advent of large-language models (LLMs), there have been initial
investigations into using generative approaches to generate pseudo documents to
tackle this inherent vocabulary gap. In this work, we analyze the utility of
LLMs for improved query rewriting for text ranking tasks. We find that there
are two inherent limitations of using LLMs as query re-writers -- concept drift
when using only queries as prompts and large inference costs during query
processing. We adopt a simple, yet surprisingly effective, approach called
context aware query rewriting (CAR) to leverage the benefits of LLMs for query
understanding. Firstly, we rewrite ambiguous training queries by context-aware
prompting of LLMs, where we use only relevant documents as context.Unlike
existing approaches, we use LLM-based query rewriting only during the training
phase. Eventually, a ranker is fine-tuned on the rewritten queries instead of
the original queries during training. In our extensive experiments, we find
that fine-tuning a ranker using re-written queries offers a significant
improvement of up to 33% on the passage ranking task and up to 28% on the
document ranking task when compared to the baseline performance of using
original queries. | Abhijit Anand, Venktesh V, Vinay Setty, Avishek Anand | 2023-08-31T14:19:50Z | http://arxiv.org/abs/2308.16753v1 | # Context Aware Query Rewriting for Text Rankers using LLM
###### Abstract.
Query rewriting refers to an established family of approaches that are applied to underspecified and ambiguous queries to overcome the vocabulary mismatch problem in document ranking. Queries are typically rewritten during query processing time for better query modelling for the downstream ranker. With the advent of large-language models (LLMs), there have been initial investigations into using generative approaches to generate pseudo documents to tackle this inherent vocabulary gap. In this work, we analyze the utility of LLMs for improved query rewriting for text ranking tasks.
We find that there are two inherent limitations of using LLMs as query re-writers - concept drift when using only queries as prompts and large inference costs during query processing. We adopt a simple, yet surprisingly effective, approach called context aware query rewriting (CAR) to leverage the benefits of LLMs for query understanding. Firstly, we rewrite ambiguous training queries by context-aware prompting of LLMs, where we use only relevant documents as context. Unlike existing approaches, we use LLM-based query rewriting only during the training phase. Eventually, a ranker is fine-tuned on the rewritten queries instead of the original queries during training. In our extensive experiments, we find that fine-tuning a ranker using re-written queries offers a significant improvement of up to **33%** on the passage ranking task and up to **28%** on the document ranking task when compared to the baseline performance of using original queries.
query rewriting, rank fusion, ranking performance
## 1. Introduction
The _vocabulary mismatch_ between user queries and documents is a well-known problem in the field of Information Retrieval (IR). User information needs, usually represented in the form of keyword queries, can be ambiguous and may not be lexically similar to the documents. The queries could be under-specified, making it difficult to understand the user intent, which affects the downstream retrieval performance. For instance, the query define sri could refer to "sanskrit word sri" or "Socially Responsible Investment". To disambiguate the users' information need, many approaches have been proposed to address the vocabulary gap.
Classical approaches like the pseudo-relevance feedback mechanism (Vaswani et al., 2017; Vaswani et al., 2017) expand the original query with keywords from its top-ranked results. Alternate term-based approaches represent the queries in a _continuous vector space_, followed by a KNN-search to expand query terms (Vaswani et al., 2017; Vaswani et al., 2017). Other term-based approaches use a seq2seq model (Vaswani et al., 2017) to generate query rewrites. However, all of the above are term-based approaches that improve retrieval but not necessarily document ranking.
Going beyond keywords, researchers have explored the possibility of natural language question generation for clarifying queries (Sutton et al., 2016) or rephrasing original queries using deep generative models for alternative formulations of the original query (Sutton et al., 2016). More recently, document expansion approaches have been adopted to improve retrieval performance. Doc2Query (Sutton et al., 2016) employs neural models to predict queries relevant to documents and enhances document representations. Especially with the advent of large language models or LLMs (Vaswani et al., 2017; Vaswani et al., 2017), recent approaches have started to investigate the utility of LLMs in enhancing query and document representations (Vaswani et al., 2017; Vaswani et al., 2017). LLMs are promising because the world knowledge (Vaswani et al., 2017) they encode from pre-training can be employed to reformulate under-specified queries. In principle, the LLMs can be used to expand, explain, or specify concise and ambiguous queries, thereby disambiguating the user information need. This bridges the gap between the latent user intent in the original queries and the retriever's representation of the user information need. The approach query2doc (Vaswani et al., 2017) generates pseudo-documents by few-shot prompting LLMs and concatenates them with the original query to form a new _expanded query_. However, there are two major limitations to using LLMs for generating plausible query re-writes.
**The problem of misaligned query and documents.** Many queries have multiple aspects, senses, and intent granularities. When using LLM-based query expansions, the expanded query commits to one aspect among the many possible ones. This creates the problem that the intent chosen by the expansion might not align with the ground-truth intent. Such a lack of alignment between the actual intent and the inferred intent is problematic during training. Specifically, this misalignment forces the relevant document to be matched with an erroneously expanded query. For instance, in Table 1, we observe that for the query define sri, Query2Doc drifts away from the intent and generates a query related to _Stanford Research Institute_ instead of denoting the Sanskrit word. In a similar manner, for the query hs worms the generated output reflects multiple intents.
**The problem of efficient inference.** Secondly, a serious limitation (as acknowledged in [44]) is the computational overhead in both the retrieval and re-ranking phases. Expanded query terms in the re-written queries increase index lookups. More acutely, query expansion requires token-wise auto-regressive decoding during inference. However, current infrastructures simply cannot support efficient LLM-based inference during query processing. Therefore, to stay within sub-second ranking requirements without using prohibitively expensive compute, the use of an LLM during query processing should be avoided.
**Contextualized query rewriting.** We make two simple yet important design decisions to overcome the problems of misalignment and efficiency. We first generate query rewrites by additionally
\begin{table}
\begin{tabular}{l l} \hline \hline
**Method** & **Query Text** \\ \hline
**Original query** & **define sri** \\ Query2Doc & SRI stands for “Stanford Research Institute,” which is a nonprofit research organization that was founded in 1946 as part of Stanford University. Over the years, SRI has conducted groundbreaking research in a wide range [...] \\ Davinci-003 & Sri is a Sanskrit word meaning “holy” or “auspicious.” It is often used in Hinduism as an honorific title for a deity or spiritual teacher. It is also used in Buddhist and Jain contexts, and is an important concept in South Asian culture. \\ ChatGPT & Find the definition and meaning of the acronym SRI (Socially Responsible Investing). \\ CAR(Davinci-003) & What is the meaning of Sanskrit word “Sri” and how is it used as a title of veneration? \\ CAR(ChatGPT) & What is the meaning of the Sanskrit word “Sri” and how is it used in Indian subcontinent? \\ CAR(Davinci-003)(doc) & What is the origin and meaning of the term “Sri”? \\ CAR(ChatGPT)(doc) & What is the origin of the word “Sri” used in India and other South Asian countries? \\ \hline \hline
**Original query** & **hs worms** \\ Query2Doc & HS Worms can refer to several different things depending on the context. If you are referring to HS Worms as an educational institution, then it stands for Hochschule Worms which is a university of applied sciences or it can refer to heartworms [...] \\ Davinci-003 & Find information about heartworms in dogs \\ ChatGPT & Find information about the symptoms, causes, and treatment options for heartworm disease in dogs (also known as HS worms). \\ CAR(Davinci-003) & What are the courses offered by Hochschule Worms? \\ CAR(ChatGPT) & What are the programs offered by Hochschule Worms? \\ CAR(Davinci-003)(doc) & What is the country rank and world rank of HS Worms University? \\ CAR(ChatGPT)(doc) & What is Hochschule Worms (HS Worms) and where is it located? \\ \hline \hline
**Original query** & **ephesians 6:5-9** \\ Query2Doc & Ephesians 6:5-9 is a passage from the New Testament that speaks to the relationship between slaves and masters in ancient times. It begins by instructing slaves to obey their earthly masters with respect [...] \\ Davinci-003 & Find information about Ephesians 6:5-9. \\ ChatGPT & What is the meaning and interpretation of Ephesians 6:5-9 in the Bible? \\ CAR(Davinci-003) & 7 With good will render service, as to the Lord, and not to men, 8 knowing that whatever good thing each one does, this he will receive back from the Lord \\ CAR(ChatGPT) & How does the passage in Ephesians 6:5-9 instruct slaves to behave towards their masters? \\ CAR(Davinci-003)(doc) & too have a Master in heaven. What does Ephesians 6:5-9 teach about the relationship between slaves and masters? \\ CAR(ChatGPT)(doc) & What is the meaning of Ephesians 6:5-9 in relation to submission and slavery? \\ \hline \hline
**Original query** & **price nrz** \\ Query2Doc & Price NRZ stands for “Non-Return-to-Zero” and refers to a digital signal encoding technique used in telecommunications. In this technique, the voltage level of the signal remains constant during each bit interval [...] \\ Davinci-003 & Find information on the Pricing of Non-Recourse Z-Bonds. \\ ChatGPT & Find information on the pricing strategy for non-return-to-zero (NRZ) encoding. \\ CAR(Davinci-003) & What is the pricing of New Residential Investment Corp. common stock on Jan 30, 2017 7:16 PM EST? \\ CAR(ChatGPT) & What is the public offering price of New Residential Investment Corp.’s common stock? \\ CAR(Davinci-003)(doc) & Announced today that it has entered into definitive agreements to acquire Shellpoint Partners, LLC (”Shellpoint”), a leading mortgage servicer and originator. What is the price of New Residential Investment Corp (NRZ) stock \\ CAR(ChatGPT)(doc) & What is the current share price and financial information of New Residential Investment Corp (NRZ)? \\ \hline \hline \end{tabular}
\end{table}
Table 1. Comparing original ambiguous queries with their rewrites using the Query2Doc, Davinci-003, and ChatGPT approaches with in-context and context-aware rewriter (CAR) techniques. Approaches marked (doc) are rewrites using the MS MARCO document corpus; the rest use the MS MARCO passage corpus.
providing the relevant document as context during training. Consequently, the generated query rewrite is fully aligned with the context, improving the training of text rankers. Secondly, and unlike existing works, we fully avoid any query rewriting using LLMs **during inference**. In other words, we assume that training a ranker to match LLM-generated queries with relevant documents results in learning a generalized ranking model.
Methodologically, we propose an LLM-based context-aware query rewriting framework, CAR, which replaces the traditional query rewriter with an LLM for rewriting ambiguous queries. We employ context-aware prompting to test the effectiveness of LLMs as a disambiguation engine, using the ambiguous query and the relevant document as context in the prompts. We also include a context selection mechanism to select relevant sections when a context is long and spans multiple topics, to handle _topic drift_. The outputs of our query rewriter are used to fine-tune a ranking model to transfer the knowledge of user information needs to the ranking model. At **inference time**, the ranker, equipped with knowledge of the query disambiguation mechanism, improves document ranking performance for subsequent ambiguous queries without the rewriter component. The proposed framework can be used with any off-the-shelf ranker. During inference, the ranking model fine-tuned on re-written queries yields much better ranking performance than one trained on the original queries. Our approach _obviates the need for LLM prompting during inference_ because we consider efficiency and latency requirements during inference _as a non-negotiable constraint_. From Table 1 we observe that the proposed context-aware query reformulation approach, CAR, can generate concise rewrites that reflect the intent when compared to Query2Doc. We also observe that it performs better than pure in-context learning based approaches, as seen in Table 1 and through our empirical analysis of ranking results.
We perform extensive experiments using LLMs of different parameter scales and demonstrate that the proposed approach results in better ranking performance. Specifically, we find that fine-tuning a ranker using re-written queries offers a significant improvement of up to **33%** on the passage ranking task and up to **28%** on the document ranking task when compared to the baseline performance of using original queries.
### Research Questions
We address the following research questions:
**RQ1**: Can we employ LLMs to generate fluent natural language rewrites of ambiguous original queries?
**RQ2**: How effective is a ranker fine-tuned on rewritten queries for the downstream document ranking task?
Towards answering these research questions, we conduct extensive experiments on TREC Web 2012 using LLMs (Section 4.2) for evaluating the quality of rewrites, and on the TREC-DL-19 and TREC-DL-20 passage and document datasets for re-ranking.
### Contributions
In summary, here is a list of our contributions:
1. We propose a Context Aware Rewriter (CAR) for query reformulation, based on context-aware prompting of Large Language Models (LLMs) which generate natural language rewrites for _ambiguous queries_. The natural language rewrites are used to fine-tune the ranker for encoding the ability to disambiguate the user information need.
2. We experiment with LLMs of different parameter scales and with different pre-training objectives to demonstrate the difference in quality of rewrites.
3. We show that the ranking model fine-tuned on generated rewrites delivers substantial performance improvement on the ambiguous user queries at inference time.
## 2. Related Work
In this section, we discuss two aspects of related work, namely query expansion and generative capabilities of Large Language Models. We further discuss how LLMs are leveraged in addressing the vocabulary mismatch problem in information retrieval.
### Query rewriting
One of the central problems in IR is bridging the lexical gap between the user specified query and the documents. This can also be seen as reconciling the difference between user intent and the intent of the system (_machine intent_). The common approaches of query reformulation involve query expansion, synonym substitution and paraphrasing.
Query expansion approaches have been adopted to bridge this gap. These approaches typically involve the addition of terms to the original query based on relevance feedback (Zhou et al., 2017; Wang et al., 2018). When user relevance feedback is unavailable, a pseudo-relevance feedback mechanism is applied (Zhou et al., 2017; Wang et al., 2018). Here, the top-ranked results to the original query are used to expand the query with additional terms. However, the performance of the rewritten query is severely limited by the quality of the top-ranked results (Chen et al., 2018), and such expansion is rarely used in dense retrieval (Chen et al., 2018). Alternatively, researchers have proposed rephrasing the original queries to tackle the "lexical chasm" problem. In the work (Wang et al., 2018), the authors rephrase the user-specified query iteratively by mining similar phrases from WordNet. Researchers have also explored substituting terms in the input query with synonymous terms based on user query logs (Zhou et al., 2017). However, these approaches are term-based and expand queries from static sources, making it difficult to adapt to changes in the retrieval system.
The generative approaches to query rewriting involve generating paraphrases of the original queries. In the work (Wang et al., 2018), the authors employ statistical approaches to rephrase terms of the query and add the equivalent terms to the original query. However, this could lead to ill-formed queries, as term-level paraphrasing does not consider the surrounding context. It also heavily relies on user feedback to select the best paraphrased version, which is cumbersome. More promising approaches in query expansion involve paraphrasing queries using a generative model (Wang et al., 2018) all at once instead of producing phrase-level rewrites. For instance, the query "average tesla cost" is rephrased to "what is the cost of the new tesla". Other generative approaches use chain-of-thought and relevance feedback (Zhou et al., 2017) to expand query terms, or use pseudo-relevance feedback along with prompting or fine-tuning (Wang et al., 2018) for query reformulation. However, this approach just provides alternative formulations of the query by leveraging equivalent-query data from the MS MARCO dataset
[3]. They do not focus on disambiguating the user intent in ambiguous queries. They also do not consider performance prediction on downstream retrieval and are susceptible to exposure bias. To enhance the performance on downstream retrieval, DRQR [46] leverages deep reinforcement learning to train recurrent models by incorporating query performance predictors as reward signals. However, the authors mention that the proposed approach does not improve downstream retrieval by a significant margin.
More recently, natural language question generation approaches have been adopted for query reformulation. [35] introduced a model to generate clarifying questions to compensate for the missing information. They used a reinforcement learning model to maximize a utility function based on the value added by the potential response to the question. [43], on the other hand, focused on identifying unclear posts in a community question answering setting that require further clarification. Query rewriting approaches have been of huge interest in the e-commerce domain and mostly require relevance feedback from the user for personalization [24, 26, 59]. While these methods are domain-specific, [52] propose a template-based weak supervised learning approach for open domain IR.
_Document expansion_ approaches have also been proposed to tackle the vocabulary mismatch issue in IR. Doc2query [34] and docTTTTTquery [8] generate pseudo queries from the document context using a seq2seq model and append them to documents to enhance the ranking results. Works like InPars [5] and PromptAgator [12] focus on using large generative models to generate queries from sampled documents to increase training data for dense retrieval tasks. However, these approaches are prone to hallucination and may drift away from the original intent of the query [17]. In this paper, we take a step in the direction of rewriting ambiguous queries by incorporating relevant context to transfer the knowledge of the disambiguation mechanism to a ranker for improved re-ranking performance. Our approach also departs from traditional approaches by employing a monolithic LLM for query rewriting rather than multiple components that could lead to error propagation.
### Large Language Models
Recent advances in generative modeling have led to large language models that apply to a wide range of tasks [1, 6, 47]. These models trained in a self-supervised fashion demonstrate emergent capabilities where they can extrapolate to new tasks with few demonstration samples [14, 27, 48]. These advances have also led to researchers rethinking parts of retrieval systems [57]. Generative retrieval is one such emerging paradigm [4, 23, 30, 42] where neural language models are used as indices for retrieval. The model employs conditional decoding approaches to generate document identifiers that directly map to ground truth documents. This is accomplished by training the language models on relevant data.
More recently, researchers have tested the effectiveness of LLMs for document expansion [16, 44] in zero-shot and few-shot settings for dense retrieval. However, these approaches have serious limitations at inference time in terms of efficiency due to the autoregressive decoding of LLMs. These approaches are also prone to hallucination due to the generation of long contexts for ranking. Query rewriting using LLMs has also found interest in QA tasks with the new rewrite-retrieve-read paradigm [29]. In this work, the authors fine-tune a small LLM for query rewriting by employing reward signals from the reader. However, this approach is specific to QA tasks, and the retriever is a frozen model that relies on the reader for fine-tuning the rewriter. The approach also requires the expensive query rewriting mechanism during inference. Researchers have also explored task-aware tuning of dense retrievers [2] by appending task-specific instructions to queries as input to the dense encoder. The instructions equip the retriever with the ability to decipher the user's information needs. However, the instructions require manual annotation and are limited to specific tasks. In this paper, we make the ranking model aware of the disambiguation mechanism for under-specified queries by fine-tuning it on rewritten queries generated by an LLM.
## 3. Method
In this section, we describe the proposed Context Aware Rewriter (CAR) framework for ambiguous query reformulation. Figure 1 shows the working of the proposed framework during the training and inference stages. During training, the framework is composed of a query reformulation phase and a document ranking phase. In the query reformulation phase, we employ context-aware prompting of an LLM (Section 3.1). During inference, the ranker, which was fine-tuned on disambiguated queries, is directly employed to rank
Figure 1. Overview of the proposed CAR framework for contextual query rewriting
documents for new queries without rewriting them, as shown in Figure 1.
### Query Rewriter
Algorithm 1 details the query reformulation process. Given an ambiguous query \(q\), our goal is to generate a rewrite \(q^{*}\) that helps disambiguate the intent of the original query. We generate query rewrites by conditioning the LLM (\(RE\)) through few-shot prompting on the relevant document \(d^{+}\) for the query, to avoid topic drift. An example of the few-shot prompting employed is shown in Figure 2.
\[q^{*}=RE(q,d^{+})\]
We first instruct the LLM (Figure 2) to perform the task of query reformulation in the context of the given document. We then provide an ambiguous query (\(q\)) concatenated with the corresponding relevant document as part of the prompt. This _context-aware prompting_ of LLMs results in better rewrites without topic drift, as the LLM is conditioned on the intents conveyed in the document. We also introduce a constraint on the maximum length of the generated output sequence. This constraint, coupled with grounding the generation on the relevant document context through prompting, helps prevent hallucination and topic drift.
Note that the intent conveyed for the rewrite can be controlled by using a different document context in the prompt. We experiment with different prompting approaches by varying the document context. We discuss the different variations of few-shot prompting and the corresponding results in Section 4.2 and Section 5 respectively.
#### 3.1.1. Hyperparameters
We use a temperature of 0.5 to balance exploration and deterministic generation. We set presence penalty and frequency penalty to 0.6 and 0.8 respectively to minimize redundancy in generated reformulations of the original query.
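Putting Section 3.1 and these hyperparameters together, the following is a minimal sketch (the model name, message layout, and prompt wording are illustrative assumptions, not the exact prompt of Figure 2) of a context-aware rewrite call through the OpenAI chat API:

```python
import openai

SYSTEM = ("You are a system that rewrites ambiguous queries into clear, "
          "specific questions, using only the given document as context.")

def car_rewrite(query, relevant_doc, few_shot_pairs):
    messages = [{"role": "system", "content": SYSTEM}]
    for q, d, rewrite in few_shot_pairs:   # demonstration samples
        messages.append({"role": "user",
                         "content": f"Query: {q}\nDocument: {d}\nRewrite:"})
        messages.append({"role": "assistant", "content": rewrite})
    messages.append({"role": "user",
                     "content": f"Query: {query}\nDocument: {relevant_doc}\nRewrite:"})
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=0.5,        # balance exploration and determinism
        presence_penalty=0.6,   # discourage repeated concepts
        frequency_penalty=0.8,  # discourage repeated tokens
        max_tokens=35,          # cap on the length of the generated rewrite
    )
    return resp["choices"][0]["message"]["content"].strip()
```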
```
// Training Phase
Input:  training batch D
Output: training batch with rewritten queries D'
1: D' <- empty list
2: foreach (q, d+) in D do
3:     q* <- RE(q, d+)          // create rewritten query
4:     append (q*, d+) to D'
5: end foreach
6: fine-tune ranker R on D'
7: return D'

// Inference Phase
Input:  test set D_t
Output: ranked documents
8: foreach (q_t, d) in D_t do
9:     y_hat <- R(q_t, d)
10: end foreach
```
**Algorithm 1** Query Rewriting, Fine-tuning and Ranking
The resulting rewritten query is used as input to fine-tune the ranker. Note that we rely on LLMs only to rewrite ambiguous queries for fine-tuning the ranker. During inference, the ranker directly ranks relevant documents for new queries without the LLM-based rewriting component.
### Ranking Phase
Our goal is to train a model for _re-ranking_ documents on the query rewrites of under-specified queries. Given a query-document pair \((q^{*},d)\) as input, the ranking model outputs a relevance score. This score can then be used to rank documents based on their relevance to the given query.
Formally, the training set comprises pairs \((q^{*}_{i},d_{i})\), where \(q^{*}_{i}\) is a rewritten (disambiguated) query produced by an LLM and \(d_{i}\) is a document relevant or irrelevant to the query. The aim is to fine-tune a ranker \(R\) that predicts a relevance score \(\hat{y}\in[0;1]\) given a reformulated query \(q^{*}\) and a document \(d\):
\[R:(q^{*},d)\mapsto\hat{y} \tag{1}\]
The fine-tuned ranking model (\(R\)) can be employed to re-rank a set of documents obtained from a first-stage lightweight frequency-based retriever. Recent studies have indicated that pre-trained language models that jointly encode queries and documents (Golovolov et al., 2016; Golovolov et al., 2016; Golovolov et al., 2016) demonstrate significant performance on ranking tasks. In this work, we employ the BERT (Golovolov et al., 2016) model for ranking. The input to the ranker is of the format:
\[\texttt{[CLS]}\ q\ \texttt{[SEP]}\ d\ \texttt{[SEP]}. \tag{2}\]
We employ a pointwise loss to train the ranker. Assume a mini-batch of \(N\) training examples \(\{x_{i},y_{i}\}_{i=1,\dots,N}\). The ranking task is cast as a binary classification problem, where each training instance \(x_{i}=(q^{*}_{i},d_{i})\) is a query-document pair and \(y_{i}\in\{0,1\}\) is a relevance label. The predicted score of \(x_{i}\) is denoted as \(\hat{y}_{i}\). The cross-entropy loss function is defined as follows:
\[\mathcal{L}_{\texttt{Point}}=-\frac{1}{N}\sum_{i=1}^{N}\left(y_{i}\cdot\log \hat{y}_{i}+(1-y_{i})\cdot\log(1-\hat{y}_{i})\right) \tag{3}\]
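A minimal sketch of one training step implementing Eqs. (1)-(3) with a cross-attention BERT ranker follows; the library choice and the learning rate are illustrative assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
ranker = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1)      # one relevance logit per pair
opt = torch.optim.AdamW(ranker.parameters(), lr=2e-5)
bce = torch.nn.BCEWithLogitsLoss()          # Eq. (3), applied on the logit scale

def train_step(rewritten_queries, documents, labels):
    # Encodes [CLS] q* [SEP] d [SEP], truncated to BERT's 512-token limit (Eq. 2)
    enc = tok(rewritten_queries, documents, padding=True,
              truncation=True, max_length=512, return_tensors="pt")
    logits = ranker(**enc).logits.squeeze(-1)   # \hat{y} before the sigmoid
    loss = bce(logits, torch.tensor(labels, dtype=torch.float))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```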
Note that we only fine-tune the ranker on query reformulations of ambiguous or under-specified queries. At inference time, the
Figure 2. An example of prompting LLM for query rewriting using CAR.
ranker equipped with the knowledge of user information needs is directly deployed to rank relevant documents for new queries as shown in **Algorithm 1**.
#### 3.2.1. Passage selector for Documents
The phenomenon of _topic drift_ arises when the CAR (Context Aware Rewriter) approach is applied to documents. **Topic drift** refers to a rewritten query losing specificity: because a document typically encompasses multiple topics, the rewrite may relate to aspects of the document that do not match the query. To mitigate topic drift, we employ supervised passage selection techniques (Attention, Linear) as proposed in (Zhou et al., 2017). These techniques select the most relevant passage from a document, aligning the context more closely with the given query.
* **Linear Selection:** This method transforms both the query and the sentence into their average embedding representations, processes them through a single feed-forward layer, and computes their similarity via a dot product. Writing \(\text{Enc}(\cdot)\) for this encoder, \[\text{Enc}(x)=\text{FF}(\text{avg-emb}(x)), \tag{4}\] the score of sentence \(s_{ij}\) with respect to the query \(q\) is \[\text{score}_{\text{Lin}}(q,s_{ij})=\langle\text{Enc}(q),\text{Enc}(s_{ij})\rangle, \tag{5}\] where \(\langle\cdot,\cdot\rangle\) is the dot product (a minimal sketch of this selector is given after this list).
* **Attention Selection:** The Attention-based selector derives passage-level representations using the QA-LSTM model (Wang et al., 2017). First, both the query and the document are contextualized by a shared bi-directional Long Short-Term Memory (Bi-LSTM) over their token embeddings. The query representation \(\hat{q}\) is then obtained by element-wise max-pooling over these contextualized embeddings: \[\hat{q}=\text{Max-Pool}(\text{Bi-LSTM}(q)), \tag{6}\] \[d^{\text{LSTM}}=\text{Bi-LSTM}(d). \tag{7}\] For each hidden representation \(d^{\text{LSTM}}_{i}\), attention to the query is computed as \[m_{i}=W_{1}d^{\text{LSTM}}_{i}+W_{2}\hat{q}, \tag{8}\] \[h_{i}=d^{\text{LSTM}}_{i}\exp\left(W_{3}\tanh\left(m_{i}\right)\right), \tag{9}\] where \(W_{1}\), \(W_{2}\) and \(W_{3}\) are trainable parameters. For \(s_{ij}\), let \(h_{ij}\) denote the corresponding attention outputs. The sentence representation is computed similarly to the query representation, i.e., \[\hat{s}_{ij}=\text{Max-Pool}(h_{ij}). \tag{10}\] The final score of a sentence is the cosine similarity of its representation and the query representation: \[\text{score}_{\text{Att}}(q,s_{ij})=\cos(\hat{q},\hat{s}_{ij}). \tag{11}\]
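Below is a minimal sketch of the Linear selector; the embedding table and its dimensionality are assumptions, and the Attention selector would replace `encode` with the Bi-LSTM/attention pipeline of Eqs. (6)-(11):

```python
import torch
import torch.nn as nn

class LinearSelector(nn.Module):
    def __init__(self, embedding: nn.Embedding, dim: int = 300):
        super().__init__()
        self.embedding = embedding        # pre-trained token embedding table
        self.ff = nn.Linear(dim, dim)     # the single feed-forward layer

    def encode(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Enc(.): average token embedding passed through the feed-forward layer
        return self.ff(self.embedding(token_ids).mean(dim=0))

    def score(self, query_ids, sentence_ids):
        # Dot product of Eq. (5)
        return torch.dot(self.encode(query_ids), self.encode(sentence_ids))

def select_passage(selector, query_ids, passages):
    """Return the index of the passage most similar to the query."""
    scores = torch.stack([selector.score(query_ids, p) for p in passages])
    return int(scores.argmax())
```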
## 4. Experimental Setup
In this section, we describe the setup we used to answer the following research questions:
**RQ1**: Can we employ LLMs to generate fluent natural language rewrites of ambiguous and under-specified queries?
**RQ2**: How effective is a ranker fine-tuned on rewritten queries using LLMs for the downstream document ranking task?
Towards answering these research questions, we employ the following datasets, rankers, and training settings.
### Datasets
Our goal is to have natural language expansions of the query for downstream ranking. Since there are no existing datasets explicitly tackling this problem, we curate queries and their descriptions from several sources, as detailed below.
#### 4.1.1. Datasets for checking effectiveness of the LLM-based re-writer
To tackle **RQ1**, we collect several datasets from the TREC Web track (Wang et al., 2017). In particular, we collect topics and corresponding descriptions from the TREC Web tracks from 2009 to 2011 and use the (topic, topic description) pairs to fine-tune the generative models. We obtain 1143 samples for training and 126 samples for validation. Finally, we use 103 samples of the TREC Web 2012 topics, subtopics, and their descriptions for testing the rewriter.
#### 4.1.2. Google People Also Asked (PAA) Data
Since the TREC topics/queries do not disambiguate the multiple intents a query could have, we propose to collect questions from the Google _people also ask (PAA)_ section as external knowledge for each query. These texts from Google PAA could serve as expanded versions of the ambiguous query that convey different intents. For instance, for a topic _403b_, the proposed pipeline provides different questions like "What are the withdrawal limitations for a 403b retirement plan?" and "What is the difference between a 401k and 403b?". We augment the query topics in the dataset with the corresponding Google PAA questions. We fine-tune the rewriters like BART and GPT-2 with topics concatenated with the corresponding Google PAA question. However, on the test set (TREC web 2012) we leverage only the topics as access to an external knowledge base cannot be assumed during real-world deployments.
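As an illustration, the following is a minimal sketch (field names and the separator token are our assumptions) of how the (topic + Google PAA) fine-tuning pairs described above can be assembled:

```python
def build_topic_paa_pairs(records):
    """records: iterable of dicts with 'topic', 'paa_questions', 'description'."""
    pairs = []
    for r in records:
        for paa in r["paa_questions"]:
            source = f"{r['topic']} </s> {paa}"       # topic + one PAA question
            pairs.append((source, r["description"]))  # target: topic description
    return pairs

# Hypothetical example for the topic "403b":
# build_topic_paa_pairs([{"topic": "403b",
#     "paa_questions": ["What is the difference between a 401k and 403b?"],
#     "description": "..."}])
```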
#### 4.1.3. Datasets for Ranking Experiments
**MS MARCO (Passage dataset)**: We consider the passage dataset from the TREC Deep Learning track (2019) (Zhou et al., 2017). We evaluate our model on the TREC-DL-19 and TREC-DL-20 passage datasets, each containing 200 queries. For training, we select 1200 ambiguous/under-specified queries from the TREC-DL-19 passage training dataset; we then rewrite these queries using our query rewrite models (Subsection 4.2). The ambiguous queries are selected based on heuristics like query length, acronyms, and entities with varied semantics. Each set of rewrites corresponds to one dataset, so we end up with 13 rewrite training datasets and one baseline dataset (original queries).
**MS MARCO (Document dataset)**: Similar to the passage dataset above, we evaluate on the TREC-DL-19 and TREC-DL-20 document collections. For training, we select 1200 ambiguous/under-specified queries from the TREC-DL-19 document training dataset and use exactly the same method as above to create training datasets with the different approaches.
### Rewriter Models
We employ several generative models as the backbone for query rewriting to compare and contrast the quality of query rewrites. We employ two settings where the rewrites are smaller language models fine-tuned on TREC web dataset for generating rewrites or LLMs which are prompted to generate rewrites.
**GPT-2**.: We fine-tune GPT-2, a transformer-decoder-based generative model with 117M parameters and 12 layers, to generate topic descriptions from topics on the TREC Web dataset. We use a learning rate of 3.2e-5, weight decay of 0.01, a batch size of 16, and train the model for 6 epochs.
**BART**.: We employ BART (base version), a denoising autoencoder pre-trained with the objective of reconstructing corrupted text. As a result, BART is able to generate robust, fluent natural language queries from under-specified and ambiguous queries. We fine-tune BART to generate topic descriptions from given input topics on the TREC Web dataset. We use a learning rate of 2e-5, a batch size of 16, and train the model for 8 epochs.
**BART (topic+PAA)**.: We also propose a variation, BART (topic+PAA), where we concatenate the topics with external knowledge in the form of related Google PAA questions (c.f. Section 4.1.2) for fine-tuning the BART base model on the collected TREC Web dataset. However, when generating rewrites for new ambiguous queries in the MS MARCO dataset with the fine-tuned model, we provide only topics as input. We propose this approach to test whether augmenting topic names with Google PAA questions as context during fine-tuning aids smaller models in disambiguating the user information need. This variant also belongs to the context-aware rewriting class of approaches (CAR) proposed in this work, where Google PAA acts as context. We use similar hyperparameters for BART as discussed. This is slightly different from the proposed prompting-based CAR approaches and is offered as an alternative to test the capabilities of smaller models.
**Davinci-003 (prompt1)**.: Since prompting LLMs like GPT-3 have proven to yield more stable outputs with less factual errors, we employ two variants of prompts to Davinci-003. We feed the prompts "Generate short sentence expanding:" or "Generate short sentence question:" followed by the topic name to Davinci-003. We call these two variants Davinci-003 (prompt1) and Davinci-003 (prompt2) respectively. We use a temperature of 0.5, max token length of 35 to generate short natural language rewrites of the original query. For frequency penalty and presence penalty, we use values of 0.8 and 0.6 to avoid redundancy in generated outputs.
**Davinci-003 + in-context examples**.: Since plain prompting might result in topic drift of the generated text and the model might misinterpret the intended task, we also adopt in-context learning (Krizhevsky et al., 2017). In-context learning treats the LM as a black box and instructs the model about the task through examples, without gradient descent. We provide examples in the form of
\[<\text{query}_{1},\text{desc}_{1}>,<\text{query}_{2},\text{desc}_{2}>\dots \text{query}_{\text{test}},\text{[insert]}\]
and instruct the model to fill the description in the placeholder provided. We use the same hyperparameters as discussed for Davinci-003 (prompting).
**Other GPT-3 models**.: We also test models which are variants of GPT-3 at different parameter scales. These models include Ada-001, Davinci-002, Curie-001 and Babbage-001. The models and their numbers of parameters are shown in Table 2. We employ in-context learning by providing demonstration samples. The prompt is as follows:
_Generate short sentence as expansion for the given test query like the following examples,_
\[<\text{input}:\text{query}_{1},\text{output}:\text{desc}_{1}>,\dots\text{ input}:\text{query}_{\text{test}},\text{output}:\]
Note that the prompt is a bit different from Davinci-003's, as we observed that smaller models need detailed instructions for better generation capabilities. This may be due to differences in instruction fine-tuning approaches. We employ the same hyperparameters, queries, and descriptions as discussed earlier.
**ChatGPT + in-context examples**.: We also test the in-context learning capabilities of gpt-3.5-turbo (ChatGPT) for query reformulation. We employ the same prompt as Davinci-003. We only prepend the task instruction to set the role of the system. The task description is : _You are a system that gives an expansion for queries, expanding abbreviations and acronyms when applicable. Some examples are_ followed by the demonstration samples as discussed earlier. We employ the same values for hyperparameters as discussed for other LLM prompting approaches.
**Query2Doc**: We also use the recently proposed document expansion approach, Query2Doc which generates pseudo documents to aid in document ranking, as a baseline. We use the gpt-3.5-turbo model with max tokens of 128 for generation and follow the original hyperparameters used in the work (Zhu et al., 2017) for reproducibility.
### Ranking Models
For experiments on TREC-DL we use BERT-base (He et al., 2017), a pre-trained contextual model based on the transformer architecture. In principle, one can use different transformer architectures, but we focus on BERT as a representative model in our experiments. We use the _base_ version with 12 encoder layers, 12 attention heads and 768-dimensional output representations. The input length is restricted to a maximum of 512 tokens. We use the cross-attention BERT architecture for ranking, sometimes also referred to as MonoBERT. The baseline model is trained on the original set of ambiguous queries using a pointwise ranking loss objective. The improved ranking models are also BERT base models but are trained on re-written queries from LLMs such as GPT-3 variants, BART, ChatGPT, etc. In the experiments, we refer to these improved ranking models after the
\begin{table}
\begin{tabular}{|l|c|} \hline
**Model** & **\# parameters** \\ \hline text-ada-001 (Ada-001) & 350M \\ text-babbage-001 (Babbage-001) & 3B \\ text-curie-001 (Curie-001) & 13B \\ text-davinci-002 (Davinci-002) & 175B \\ text-davinci-003 (Davinci-003) & 175B \\ gpt-3.5-turbo (ChatGPT) & 154B \\ \hline \end{tabular}
\end{table}
Table 2. LLM ( GPT3 models) of different parameter scales
LLM query re-writer model that was used to generate the training data for them.
### Metrics
To evaluate the quality of re-writes, we use the TREC topic descriptions as targets and compute ROUGE-L (Rendle, 2015) scores and BERTScore (Bartart et al., 2015; Dey et al., 2016; Dey et al., 2016), which are commonly used automated evaluation metrics in several text generation tasks. While ROUGE-L is an n-gram overlap metric, BERTScore correlates with human judgments by employing contextualized embeddings to compute word-level matches between the reference sentence and the generated sentence (Dey et al., 2016).
The metrics serve as a proxy to measure the ability of the generative models to expand the query to disambiguate various intents to aid in downstream retrieval tasks. They also measure if the queries are plausible. To evaluate the ranking approach, we employ standard ranking metrics such as MRR and nDCG@10 and evaluate on TREC-DL-19 and TREC-DL-20 test sets for passage and document collection.
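A minimal sketch of this evaluation (the `rouge-score` and `bert-score` packages are our choice of implementation) is given below:

```python
from rouge_score import rouge_scorer
from bert_score import score as bertscore

def evaluate_rewrites(rewrites, references):
    """rewrites: generated queries; references: TREC topic descriptions."""
    scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
    rouge_l = [scorer.score(ref, hyp)["rougeL"].fmeasure
               for hyp, ref in zip(rewrites, references)]
    # BERTScore returns per-sample precision, recall and F1 tensors
    P, R, F1 = bertscore(rewrites, references, lang="en")
    return {"ROUGE-L": sum(rouge_l) / len(rouge_l),
            "BERTScore-P": P.mean().item(),
            "BERTScore-R": R.mean().item(),
            "BERTScore-F1": F1.mean().item()}
```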
## 5. Results
We begin by answering whether the re-writer component is able to generate plausible natural language expansions of queries. We then analyze the impact of the rewritten queries on document ranking performance.
### Query Rewriter Evaluation
To answer **RQ1**, we evaluate the proposed rewriting approaches using metrics like BERTScore and ROUGE-L. The results are reported in Table 3. We report the mean values of precision, recall and F1 scores across all the test samples from TREC Web 2012 as the final scores. We observe that the CAR variants, specifically the Davinci-003 and ChatGPT approaches with context-aware prompting, have the highest BERTScore and ROUGE-L scores. We attribute this performance to the **scale of the GPT-3** language model, instruction fine-tuning, and the **quality of context** provided, which enable the rewriter to generate natural language queries. We observe that CAR provides more fluent and relevant rewrites when compared to vanilla prompt-based methods. The in-context learning based approaches also yield fluent rewrites. However, on analysis, we observe that they sometimes exhibit topic drift due to the lack of relevant context needed to disambiguate the queries. We also observe that LLMs of smaller scales like Ada-001, Babbage-001 and Curie-001 generate rewrites that are factually inconsistent and drift away from the relevant topic. This is evident from the downstream document ranking performance (Table 4) and the qualitative analysis of samples (Table 1). This demonstrates that CAR rewrites are consistent and fluent, and help improve downstream ranking performance.
Among the fine-tuned approaches in Table 3, BART (topic), BART (topic + PAA) and GPT-2 (topic), we observe that BART (topic) generates more fluent query rewrites than GPT-2 (topic), as evident from the BERTScore and ROUGE-L scores. After performing a manual analysis of samples, we observed that the query rewrites generated by GPT-2 were not relevant due to hallucination, and in certain cases they were also grammatically incorrect. Therefore, we omit the GPT-2 (topic) model for the rest of the experiments. The CAR (BART (topic + PAA)) variant further improves the fluency of generated queries, but still falls short of the GPT-3 variants and ChatGPT approaches, even though the latter are not fine-tuned.
Since the metrics above are not true indicators of rewrite quality, we additionally assess the rewrites through their impact on downstream ranking performance.
**Insight 1**: LLMs can produce good rewrites for under-specified and ambiguous queries. Among our methods, CAR with context-aware few-shot prompting produces the best rewrites.
### Ranking Evaluation
To answer **RQ2**, we first train rankers on data from the MS MARCO passage dataset (Section 4.1.3) with rewrites from different LLMs (Section 4.2), and evaluate on the TREC-DL-19 and TREC-DL-20 test sets. Then we choose the best rewriter models, train document ranking models on the MS MARCO document dataset, and evaluate on TREC-DL-19 and TREC-DL-20. All results are shown in Table 4.
In the case of the passage dataset, CAR models perform better than their counterparts, with ChatGPT CAR showing improvements of **30%** and **33%** in nDCG@10 and **22%** and **25%** in MRR over the baseline for TREC-DL-19 and TREC-DL-20 respectively. Apart from CAR, the Davinci-003 in-context and ChatGPT in-context models outperform the baseline across test sets.
The poor performance of models such as Ada-001, Babbage-001, etc. is evident from our observation that they are not able to disambiguate the query, but rather just rewrite it. For example, the query "define sri" is generally rewritten as "Definition of sri", while Davinci-003 and ChatGPT are able to convey the intent better, as shown in Table 1. Our CAR models perform better than non-CAR models, showing that localized context is important along with global knowledge. We also observe that the BART base model performs the worst among all models, but the variation of BART with context
\begin{table}
\begin{tabular}{l r r r|r} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c|}{**BERTScore**} & \multirow{2}{*}{**Rouge-L**} \\ \cline{2-2} \cline{4-4} & **P** & **R** & **F1** & \\ \hline GPT2 (topic)\({}^{\dagger}\) & 0.457 & 0.447 & 0.451 & 7.47 \\ BART (topic)\({}^{\dagger}\) & 0.792 & 0.791 & 0.792 & 28.89 \\ Query2Doc & 0.672 & 0.749 & 0.708 & 6.76 \\ Ada-001 (in-context) & 0.734 & 0.776 & 0.754 & 17.43 \\ Babbage-001 (in-context) & 0.804 & 0.802 & 0.802 & 32.16 \\ Curie-001 (in-context) & 0.812 & 0.800 & 0.805 & 34.25 \\ Davinci-002 (in-context) & 0.818 & 0.823 & 0.820 & 33.81 \\ Davinci-003 (prompt1) & 0.727 & 0.743 & 0.734 & 14.38 \\ Davinci-003 (prompt2) & 0.811 & 0.792 & 0.801 & 32.07 \\ Davinci-003 (in-context) & 0.817 & 0.818 & 0.817 & 34.68 \\ ChatGPT & 0.768 & 0.812 & 0.789 & 25.77 \\ \hline
**CAR** & & & & \\ BART (topic+ PAA)\({}^{\dagger}\) & 0.777 & 0.783 & 0.779 & 25.85 \\ Davinci-003 & 0.821 & 0.834 & 0.827 & 38.57 \\ ChatGPT & **0.825** & **0.847** & **0.836** & **40.51** \\ \hline \hline \end{tabular}
\end{table}
Table 3. BERTScore and ROUGEL scores for different rewrites on TREC web 2012. The best results for each dataset and each model is in bold and second is underlined. \({}^{\dagger}\)indicates that the model has been fine-tuned with topic or topic + PAA data.
awareness, BART (topic+PAA), does better than the baseline model in all evaluations. We posit that the external knowledge provided in the form of _Google PAA_ questions when fine-tuning BART (topic+PAA) helps disambiguate the intent of the query. For instance, the query "403b" in TREC Web track 2012 is ambiguous without context. However, one of the corresponding Google PAA questions, "What are the withdrawal limitations of a 403b retirement plan?", helps indicate that the query refers to a retirement plan. We posit that the BART (topic+PAA) approach encodes this knowledge through fine-tuning. Hence, it has better document ranking performance compared to other approaches. This shows that context matters when it comes to rewriting ambiguous queries.
We also test our CAR approach on the MS MARCO document collection. We consider the baselines and the best-performing approaches on the passage dataset to test on the MS MARCO document collection. Table 4 shows the performance on TREC-DL-19 and TREC-DL-20 of different rankers trained using ambiguous queries from the MS MARCO document collection. In the case of TREC-DL-19, ChatGPT CAR methods perform better than in-context models, and for TREC-DL-20, BART (topic+PAA) performs better. CAR models perform about **12%** and **23%** better than the baseline model on nDCG@10. In the case of the passage dataset, for a given query and a relevant passage the context is specific, whereas this is not true for the document collection. There can be context drift in the case of a document: a document can have generic contexts, which can result in the LLM generating queries of generic intent under the CAR approach. In Table 1 we can see examples where CAR rewrites from different models are more specific than their document-based counterparts for the same queries.
Looking at model performance on the MS MARCO passage and document collections combined (Table 4), we can see that the CAR approach performs better overall. This is because the LLMs employed to expand ambiguous or ill-formed queries encode world knowledge which, when enriched with relevant passage/document context, helps ground the generation. The context serves as grounding, which prevents hallucination and topic drift. In Table 1 we can see how CAR model rewrites are more specific and factual compared to their non-CAR counterparts.
**Insight 2**: The quality of context employed in few-shot prompting of LLMs is crucial when expanding ambiguous or ill-formed queries. The proposed approach, CAR, is able to form concise and relevant query reformulations based on the given context.
**How do we deal with context drift for long documents?**
We previously discussed the phenomenon of _topic drift_ within the scope of document context. To mitigate this phenomenon, we employ supervised selection techniques, namely Attention and Linear selectors (Subsection 3.2.1), as proposed in [25]. In both instances, the primary objective of the selector is
\begin{table}
\begin{tabular}{l c c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{**TREC-DL Passage**} & \multicolumn{4}{c}{**TREC-DL Document**} \\ \hline & \multicolumn{2}{c}{**TREC-DL-19**} & \multicolumn{2}{c|}{**TREC-DL-20**} & \multicolumn{2}{c}{**TREC-DL-19**} & \multicolumn{2}{c}{**TREC-DL-20**} \\ \cline{2-9} _Ranking Models_ & RR & nDCG\({}_{10}\) & RR & nDCG\({}_{10}\) & RR & nDCG\({}_{10}\) & RR & nDCG\({}_{10}\) \\ \hline
**Baseline** & & & & & & & & \\ BERT Baseline & 0.653 & 0.381 & 0.523 & 0.288 & 0.750 & 0.453 & 0.735 & 0.393 \\ Query2Doc [44] & 0.578 & 0.321 & 0.608 & 0.323 & **0.820** & 0.467 & 0.749 & 0.415 \\ \hline \hline \end{tabular}
\end{table}
Table 4. Ranking performance (RR and nDCG\({}_{10}\)) of rankers trained on ambiguous queries from the MS MARCO passage and document collections, evaluated on TREC-DL-19 and TREC-DL-20. [Significance annotations and the remaining rows, including the CAR and selector-based variants discussed in the text, were not recoverable from the source.]
to identify and select the most pertinent passage based on a given query.
In our methodology, given a query and a document relevant to the query, we segment the document into passages, each comprising approximately four sentences. Subsequently, we employ Attention and Linear selectors to identify the passage that bears the highest similarity to the given query within the document. This selected passage, along with the original query, is used to rewrite the query with an LLM; the rewritten query is then used for model training. A simplified sketch of this segment-and-select step is given below.
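The sketch follows the four-sentence segmentation described above, while the cosine-similarity scoring is a simplified stand-in for the trained Attention and Linear selectors of [25]; the function names and embedding inputs are illustrative assumptions.

```python
import re
import numpy as np

def segment(document: str, sents_per_passage: int = 4) -> list:
    """Splits a document into passages of roughly four sentences each."""
    sents = re.split(r"(?<=[.!?])\s+", document.strip())
    return [" ".join(sents[i:i + sents_per_passage])
            for i in range(0, len(sents), sents_per_passage)]

def select_passage(query_vec: np.ndarray, passage_vecs: np.ndarray,
                   passages: list) -> str:
    """Returns the passage most similar to the query by cosine similarity.
    A trained Linear or Attention selector would replace this scoring."""
    q = query_vec / np.linalg.norm(query_vec)
    P = passage_vecs / np.linalg.norm(passage_vecs, axis=1, keepdims=True)
    return passages[int(np.argmax(P @ q))]
```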
The final four models detailed in Table 4 exemplify the application of Attention and Linear selectors on Davinci-003 and ChatGPT, resulting in improved performance compared to their CAR counterparts. Comparing only CAR models, we can clearly see that CAR ChatGPT (Linear) does 2% and 8% better than CAR ChatGPT on nDCG\({}_{10}\) for TREC-DL '19 and TREC-DL '20, respectively. We can see a similar trend for the Davinci-003 and Davinci-003 (Attention) models. Empirical observations demonstrate that models employing CAR with a selector exhibit a performance enhancement ranging from 2% to 15% when compared to their CAR-only counterparts. This proves that if we reduce _topic drift_ for documents, we can improve query rewriting quality, in turn improving downstream task performance. From Table 4 we can conclude that for query rewriting using documents, CAR-based models perform the best; specifically, ChatGPT (Linear) and Davinci-003 (Attention) are the two best-performing models overall.
**Insight 3**: Utilizing Attention and Linear selectors to mitigate document topic drift not only enhances the quality of the rewritten query under the CAR methodology, but also leads to an overall improvement in the ranking model's performance.
**How does performance vary based on LLMs of different parameter scales?**: We experiment with models of different parameter scales (Table 2). The results are shown in Table 4. We observe that as we scale up the rewriter LLM from 350 million parameters to 175 billion or 154 billion parameters, the performance improves significantly (Table 4). This is because larger models are able to characterize world knowledge better, owing to the instruction-based training regime and the scale of the models (Zhu et al., 2019). We observe that smaller models like Ada-001, Babbage-001 and Curie-001 generate factually inconsistent rewrites of the original query and rewrites with significant topic drift from the original query.
**Insight 4**: We observe that when we scale up LLMs it results in better quality rewrites as they encode more world knowledge. This is evident from the significant improvement in ranking performance.
### Qualitative Analysis
We analyze examples of ambiguous queries and their corresponding rewrites obtained from the generative models. Some of these examples are in Table 1. We observe that the generated queries are close to actual intents in the CAR (ChatGPT or Davinci-003) approach. For instance, for the query _hs worms_, the corpus only contains documents pertaining to the educational institution. But it has multiple contexts, such as the _educational institution_ or _heartworms_. This renders the query and underlying intent ambiguous without being contextualized by the documents in the corpus. However, among the other rewriting approaches, CAR is able to decipher the correct intent with fine granularity by also providing an expansion for the query. The advantage of the proposed framework is more evident from the fourth example _urodeum function_. This query is also of ambiguous nature, as it could refer to any anatomy. However, we observe that CAR with document context-aware prompting is able to decipher the correct intent. We attribute this to the large scale pre-training data and the ability of over-parameterized models to serve as knowledge bases (Zhu et al., 2019).
**Insight 5**: CAR with context-aware prompting, is better at generating plausible rewrites with intents that are relevant to the search domain and improve downstream ranking.
### Limitations
In our current approach, we choose ambiguous queries based on heuristics like the query length and specific types of queries, such as acronyms and entities with multiple meanings depending on context (a simplified version of such a filter is sketched below). Though these heuristics reflect the nature of ambiguous queries, a more principled approach would help identify and filter ambiguous queries.
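A minimal sketch of such a heuristic filter, with thresholds chosen purely for illustration rather than the exact values we used, is:

```python
def is_ambiguous(query: str, max_tokens: int = 2) -> bool:
    """Flags very short queries and short all-caps tokens (likely acronyms)
    as ambiguous. Thresholds here are illustrative assumptions."""
    tokens = query.split()
    return (len(tokens) <= max_tokens
            or any(t.isupper() and len(t) <= 5 for t in tokens))
```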
## 6. Conclusion
In this work, we propose a framework for redefining query rewriting approaches using context-aware prompting of LLMs for improving document ranking. Our framework reformulates ambiguous queries into interpretable natural language queries that disambiguate user information needs. Our experiments demonstrate that generating plausible rewrites which disambiguate user intent is possible, which further enhances downstream ranking performance. We posit that joint training of the rewriter and ranker in our framework with feedback signals would yield better rewrites and ranking performance. We also discuss several challenges associated with the development of the framework.
2309.17002 | Understanding and Mitigating the Label Noise in Pre-training on
Downstream Tasks | Pre-training on large-scale datasets and then fine-tuning on downstream tasks
have become a standard practice in deep learning. However, pre-training data
often contain label noise that may adversely affect the generalization of the
model. This paper aims to understand the nature of noise in pre-training
datasets and to mitigate its impact on downstream tasks. More specifically,
through extensive experiments of supervised pre-training models on synthetic
noisy ImageNet-1K and YFCC15M datasets, we demonstrate that while slight noise
in pre-training can benefit in-domain (ID) transfer performance, where the
training and testing data share the same distribution, it always deteriorates
out-of-domain (OOD) performance, where training and testing data distribution
are different. We empirically verify that the reason behind is noise in
pre-training shapes the feature space differently. We then propose a
light-weight black-box tuning method (NMTune) to affine the feature space to
mitigate the malignant effect of noise and improve generalization on both ID
and OOD tasks, considering one may not be able to fully fine-tune or even
access the pre-trained models. We conduct practical experiments on popular
vision and language models that are pre-trained on noisy data for evaluation of
our approach. Our analysis and results show the importance of this interesting
and novel research direction, which we term Noisy Model Learning. | Hao Chen, Jindong Wang, Ankit Shah, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj | 2023-09-29T06:18:15Z | http://arxiv.org/abs/2309.17002v2 | # Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks
###### Abstract
Pre-training on large-scale datasets and then fine-tuning on downstream tasks have become a standard practice in deep learning. However, pre-training data often contain label noise that may adversely affect the generalization of the model. This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks. More specifically, through extensive experiments of supervised pre-training models on synthetic noisy ImageNet-1K and YFCC15M datasets, we demonstrate that while slight noise in pre-training can benefit in-domain (ID) transfer performance, where the training and testing data share the same distribution, it always deteriorates out-of-domain (OOD) performance, where training and testing data distribution are different. We empirically verify that the reason behind is noise in pre-training shapes the feature space differently. We then propose a lightweight black-box tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise and improve generalization on both ID and OOD tasks, considering one may not be able to fully fine-tune or even access the pre-trained models. We conduct practical experiments on popular vision and language models that are pre-trained on noisy data for evaluation of our approach. Our analysis and results show the importance of this interesting and novel research direction, which we term _Noisy Model Learning_.
## 1 Introduction
The transfer learning paradigm of pre-training and fine-tuning (PT-FT) (Kornblith et al., 2019) has become the de facto standard in today's deep learning research and application. Instead of training a neural network from scratch on each individual task, which can be time-consuming, resource-intensive, and less adaptable, the PT-FT paradigm first pre-trains a relatively larger and more general model on huge volumes of data, and then transfers this pre-trained model (or the foundation model (Bommasani et al., 2021)) to various downstream tasks (He et al., 2019; Radford et al., 2021; He et al., 2022; Brown et al., 2020). For instance, ResNet (He et al., 2016) and Vision Transformers (Dosovitskiy et al., 2020) pre-trained on ImageNet (Russakovsky et al., 2015) and larger but potentially noisy datasets (Kolesnikov et al., 2020; Xie et al., 2020; Ridnik et al., 2021) have been widely adopted in computer vision. The PT-FT paradigm has also become predominant in natural language processing (Devlin et al., 2018; Liu et al., 2019; Radford et al., 2018, 2019; Brown et al., 2020; OpenAI, 2023; Touvron et al., 2023) and multi-modality (Radford et al., 2021; Schuhmann et al., 2022), where the pre-training is usually on large datasets scraped from the web.
The generalization and transferability of the pre-trained models are usually not guaranteed to be satisfying on downstream tasks, and the reason can lie in either the pre-training or the fine-tuning. Over the years, there have been tremendous efforts in improving the performance of fine-tuning in various practical downstream scenarios: out-of-distribution generalization (Chen et al., 2021; Kumar et al., 2022), semi-supervised learning (Sohn et al., 2020; Wang et al., 2022), imbalanced learning
(Zhang et al., 2023; Wang et al., 2023b), noisy label learning (Song et al., 2022; Li et al., 2022), to name a few. While it is a common belief that scaling up the size of the pre-training data can benefit the downstream performance (Kaplan et al., 2020), its distribution also plays an essential role (Entezari et al., 2023; Zhang et al., 2023a). Recently, Nguyen et al. (2022) and Lee et al. (2022) found that the _quality_ of the pre-training data is more important for robust generalization compared to the quantity. The bias in pre-training data created by the collection (and annotation) process, e.g., corrupted, poisoned, and false information (Blodgett and O'Connor, 2017; Chang et al., 2020), can also impose malicious and unexpected influence on downstream tasks (Bommasani et al., 2021).
Take label noise as an example. Training CLIP (Radford et al., 2021) on LAION-2B (Schuhmann et al., 2022), which is a billion-scale uncurated image-text pair dataset, can just match the performance of training it on WIT-400M (Radford et al., 2021), which is heavily cleaned and processed by OpenAI. The label noise in large-scale datasets inevitably exists owing to the data collection process by human annotators and web crawlers. It thus can be difficult to avoid or eliminate in pre-training (Ridnik et al., 2021; Vasudevan et al., 2022; Schuhmann et al., 2022). In fact, there are already numerous models pre-trained on large-scale noisy data and have been transferred on downstream tasks, such as Noisy Student (Xie et al., 2020), BiT (Kolesnikov et al., 2020), and Open CLIP (Cherti et al., 2023). Not to mention the enormous but noisy raw text (Yang et al., 2019; Lee et al., 2022) that has been utilized to pre-train language models such as BERT (Devlin et al., 2018) and GPT (Radford et al., 2019; Brown et al., 2020). As the pre-trained models and datasets have been growing significantly, it has become increasingly important and challenging to understand _how the noise in pre-training data affects the performance of pre-trained models on downstream tasks_.
This paper presents the first study on this unexplored problem, demystifying the label noise in pre-training data, understanding its effects on downstream tasks, and then mitigating such (malignant) effects. Notably, there are existing efforts under the name of "noisy label learning" that train a robust model _given_ noisy training data (Ghosh et al., 2017; Li et al., 2020; Northcutt et al., 2021). Our problem is inherently different since the noisy labels exist in the (usually black-box) pre-training data, and we do not make noise assumptions on the downstream data (while they can be used together in Section 4.3; more discussion is in Section 5). Due to the increasing size of pre-trained models and datasets, it becomes notoriously difficult to alter the pre-training process or fine-tune the entire model (black-box, or unable to be updated due to the large parameter size and constrained computation).1 Therefore, given a pre-trained model, we should take special care of the _fine-tuning_ to overcome the influence of noise in pre-training on downstream tasks.
Footnote 1: For instance, the open-source Llama (Touvron et al., 2023a;b) model requires multiple V100 GPUs to fine-tune, which is not affordable to most ordinary researchers; and proprietary models like ChatGPT only provides the API, which cannot be locally fine-tuned.
Our study aims to answer the following key questions: 1) _Influence:_ Does the noise in pre-training data have an influence on downstream performance? 2) _Analysis:_ Why does such influence happen? and 3) _Mitigation:_ How to mitigate such influence in a light-weight and black-box fine-tuning process? We present an in-depth analysis to answer the above questions, based on the popular _supervised_ pre-training paradigm.2
Footnote 2: Supervised and self-supervised learning are the most popular pre-training schemes. The former learns the mapping from inputs to labels (He et al., 2016; Radford et al., 2021), while the latter does not rely on labels, but predicts parts of the data itself, which can also be viewed as supervised learning on data.
Figure 1: In-domain (ID) and out-of-domain (OOD) downstream performance when supervised pre-training the model on synthetic noisy ImageNet-1K (IN-1K) and YFCC15M of various noise ratios. We compare linear probing (LP) and the proposed method on 14 ID and 4 OOD tasks. On ID, \(5\%\) noise in pre-training benefits the LP performance. Our method not only boosts the general performance but also rectifies the model pre-trained on clean data to be comparable to \(5\%\) noise. On OOD, noise in pre-training is detrimental to robustness performance when conducting LP. Our method improves the transferability on OOD tasks significantly compared to LP.
* **Influence: The label noise in pre-training data has both benevolent and malignant influence on downstream tasks.** In Sections 2.1 and 2.2, we conduct realistic experiments with ResNet-50 models (He et al., 2016) fully-supervised and contrastive pre-trained on synthetic noisy ImageNet-1K and YFCC15M (Thomee et al., 2016) with various noise ratios (\(0\%,5\%,10\%,20\%,30\%\)) and then study the generalization performance on downstream in-domain (ID) and out-of-domain (OOD) tasks. We observe that, on ID tasks, slight noise (up to \(5\%\) or \(10\%\)) can benefit generalization performance. In contrast, even \(5\%\) noise can drastically deteriorate robustness and transferability on OOD tasks, as shown in Figure 1 and Figure 2.
* **Analysis: The label noise in pre-training shapes the feature space of the pre-trained model significantly.** In Section 2.3, we conduct empirical analysis from the singular value spectrum on the feature space of the pre-trained models. Noise in pre-training results in a decreasing largest singular value and a flatter singular value distribution with a higher dimension span in the feature space. An initial increase in the spanning dimension of the feature space is beneficial to the discriminability on ID tasks. Still, it then becomes detrimental with the further increase, indicating more feature capacity is learned to fit the noise structure. The decrease in the dominant singular values leads to less transferability for OOD tasks (Chen et al., 2019), as shown in Figure 3.
* **Mitigation: We design a simple black-box fine-tuning algorithm to reshape the pre-trained feature space, reducing the influence of noisy pre-training data and boosting the performance of downstream tasks.** In Section 3, based on the analysis, we propose three regularization objectives on the singular value spectrum that help affine the feature space. We demonstrate the effectiveness of the proposed method on noisy ResNet-50 models with extensive analysis, as shown in Figure 1. In Section 4, we further validate our method on popular noisy pre-trained models (and APIs) and present superior generalization performance for both vision and language tasks.
Beyond our analysis, we view this research as a novel and complementary topic to the classic noisy label learning setting, termed _Noisy Model Learning_ (NML). We think the value of this direction is even more significant in the era of large foundation models (Bommasani et al., 2021), where the downstream users only have access to the model weights or APIs. It would be of particular interest to explore how to eliminate the malignant influence of noise in pre-training on downstream tasks when adapting these models without full fine-tuning, since it may exist in broader applications such as detection and segmentation in medical imaging and autonomous driving. We hope that future research on this topic can facilitate a better understanding and application of large foundation models.
## 2 Understanding the Label Noise in Pre-trained Models
In this section, we empirically and systematically investigate the effect of noisy labels in supervised pre-training on the learned representations. We build our evaluation and analysis on the realistic motivating experiments of training ResNet-50 (He et al., 2016) on synthetic noisy ImageNet-1K (Russakovsky et al., 2015) and YFCC15M (a subset of YFCC100M (Thomee et al., 2016)).
### Experiments Design
**Noisy pre-training datasets**. We assume the supervised pre-training dataset consists of inputs \(\mathbf{x}\sim\mathcal{X}\) and supervisions \(y\sim\mathcal{Y}\). We define a clean dataset \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i\in[N]}\) of size \(N\) with accurate supervisions, where \([N]:=\{1,\dots,N\}\). We assume that \(y\) can exist in different formats in pre-training, e.g., an actual label for the input as in fully-supervised learning (Russakovsky et al., 2015; He et al., 2016; Ridnik et al., 2021) or a text description for an input image as in contrastive learning of CLIP (Thomee et al., 2016; Radford et al., 2021; Jia et al., 2021; Changpinyo et al., 2021; Desai et al., 2021; Schuhmann et al., 2021, 2022). Due to the scale of data collection and the cost of data annotation, the pre-training dataset can usually contain noisy supervision \(\hat{y}\) that does not accurately match the corresponding \(\mathbf{x}\) (Recht et al., 2019; Beyer et al., 2020; Northcutt et al., 2021; Yun et al., 2021; Vasudevan et al., 2022; Schuhmann et al., 2022). We define such a noisy pre-training dataset as \(\hat{\mathcal{D}}=\{(\mathbf{x}_{i},\hat{y}_{i})\}_{i\in[N]}\) and the noise ratio \(\gamma\) as the percentage of noisy supervision in \(\hat{\mathcal{D}}\).
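For concreteness, the synthetic corruption used later (uniform label flipping for IN-1K and caption swapping for YFCC15M) can be sketched as follows; the helper names and the pairwise swapping scheme are our own illustrative choices.

```python
import random

def flip_labels(labels, num_classes, noise_ratio, seed=0):
    """Uniformly flips a `noise_ratio` fraction of class labels (IN-1K)."""
    rng = random.Random(seed)
    noisy = list(labels)
    for i in rng.sample(range(len(noisy)), int(noise_ratio * len(noisy))):
        noisy[i] = rng.choice([c for c in range(num_classes) if c != noisy[i]])
    return noisy

def swap_captions(captions, noise_ratio, seed=0):
    """Swaps text descriptions between randomly chosen pairs (YFCC15M),
    so roughly a `noise_ratio` fraction of pairs become mismatched."""
    rng = random.Random(seed)
    noisy = list(captions)
    idx = rng.sample(range(len(noisy)), int(noise_ratio * len(noisy)))
    for a, b in zip(idx[::2], idx[1::2]):
        noisy[a], noisy[b] = noisy[b], noisy[a]
    return noisy
```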
**Pre-trained models**. The pre-trained models serve as a foundation for downstream tasks and usually can be abstracted as the stack of a feature extractor and a projection head. We define the feature extractor with learned parameters \(\phi\) as a mapping function \(f_{\phi}\) from the input space to feature space of dimension \(D\): \(f_{\phi}:\mathcal{X}\rightarrow\mathcal{F}\). The projection head \(g_{\theta}:\mathcal{F}\rightarrow\mathcal{Y}\) is jointly pre-trained with the feature extractor, but not used when adapting \(f_{\phi}\) on downstream tasks. We consider two types of supervised
pre-training on images for this motivating example: fully supervised pre-training where \(y\) is the actual class label and the projection head is a linear classifier (He et al., 2016), and contrastive pre-training with text supervision (CLIP) where \(y\) is the text and the projection head is a non-linear function that maps the image and text to a common feature space (Radford et al., 2021; Cherti et al., 2023).
**In-domain (ID) and out-of-domain (OOD) evaluation**. To investigate the effect of noisy supervision comprehensively, we leverage both in-domain (ID) and out-of-domain (OOD) evaluation to assess the generalization capability (Djolonga et al., 2021) of the pre-trained feature extractors \(f_{\phi}^{\gamma}\) that are obtained from pre-training data of different noise ratios. To evaluate the pre-trained models on a downstream dataset \(\mathcal{D}^{\prime}=\{(x_{i},y_{i})\}_{i\in[M]}\)3 and measure the quality of the learned representation, we conduct linear probing (LP)4, where only a \(C\)-way linear classification head is re-trained on the downstream dataset and the feature extractor is frozen. Linear probing can be viewed as a simple black-box tuning method for pre-trained models that are typically large and difficult or impossible to fully fine-tune. For ID evaluation, we assume the same marginal distribution over \(\mathcal{X}\) for both training and testing. In contrast, for OOD evaluation, we train the linear classifier on a source distribution and evaluate it on (multiple) different target distributions (Kumar et al., 2022).
Footnote 3: We always treat \(y\in[C]\) as an actual class label on downstream datasets.
Footnote 4: Linear probing is an evaluation protocol accessing feature quality (He et al., 2019; Liu et al., 2021).
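As a concrete reference, linear probing on frozen features can be sketched as below; the use of scikit-learn's logistic regression as the \(C\)-way linear head is an illustrative assumption rather than our exact training recipe.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe(train_feats, train_labels, test_feats, test_labels):
    """Trains only a C-way linear head on frozen features f_phi(x) and
    reports test accuracy; the feature extractor itself is never updated."""
    head = LogisticRegression(max_iter=1000)
    head.fit(train_feats, train_labels)
    return head.score(test_feats, test_labels)
```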
**Experiment setup**. We use ImageNet-1K (IN-1K) (Russakovsky et al., 2015) in fully supervised pre-training and YFCC15M (Thomee et al., 2016) in CLIP pre-training, with ResNet-50 (He et al., 2016). To introduce noisy supervision in the datasets, we uniformly flip the ground truth class label into the other classes in IN-1K and randomly swap the text description from another image-text pair in YFCC15M. We set the noise ratio \(\gamma\) to \(\{0\%,5\%,10\%,20\%,30\%\}\), where \(0\%\) represents the clean dataset. For ID evaluation, we use \(14\) downstream datasets including CIFAR-10/100 (Krizhevsky et al., 2009), Flowers102 (Nilsback and Zisserman, 2008), Food101 (Bossard et al., 2014), OxfordPet (Parkhi et al., 2012), StanfordCars (Krause et al., 2013), FGVCAircraft (Maji et al., 2013), SVHN (Netzer et al., 2011), DTD (Cimpoi et al., 2014), Caltech101 (Fei-Fei et al., 2004), EuroSAT (Helber et al., 2018, 2019), PatchCamelyon (Veeling et al., 2018), RESISC45 (Cheng et al., 2017), and Rendered SST2 (Socher et al., 2013), which cover various visual domains. For OOD evaluation, we use DomainNet (Peng et al., 2019) where we train on either "real" or "sketch" images and test on "real", "sketch", "painting", and "clipart" images. In addition, for the CLIP pre-trained models, we further evaluate on ImageNet-V2 (Recht et al., 2019), ImageNet-R (Hendrycks et al., 2021), ImageNet-Sketch (Wang et al., 2019), ImageNet-A (Hendrycks et al., 2021), ImageNet-Vid (Shankar et al., 2021), and ObjectNet (Barbu et al., 2019), while conducting LP on IN-1K (Russakovsky et al., 2015). We report the LP performance for both ID and OOD evaluation using \(\{10\%,25\%,50\%,75\%,100\%\}\) percentages of downstream datasets. More details of the experiment setup are included in Appendix A.2. While only supervised ResNet-50 is considered, it can be extended to other architectures and pre-training, which we will discuss in Section 4. Our pre-training follows Wightman et al. (2021) and Cherti et al. (2023), with similar performance achieved as shown in Appendix A.1, thus eliminating the possible effect of hyper-parameters on downstream tasks.
Figure 2: Average ID and OOD evaluation results of ImageNet-1K (IN-1K) fully supervised pre-training ((a) and (b)) and YFCC15M CLIP pre-training ((c) and (d)) on downstream tasks with various percentages of data using ResNet-50. On ID evaluation, the transferring performance first increases as noise increases (to \(5\%\) or \(10\%\)) and then decreases with more noise. On OOD evaluation, the robustness performance constantly decreases once noise is introduced in pre-training.
### Results
In Figure 2, we plot the average accuracy for ID and OOD tasks of adapting the IN-1K fully supervised and YFCC15M CLIP pre-trained ResNet-50 models. With the extensive motivating experiments, we empirically find two important and counter-intuitive observations from the results:
* Proper noisy labels in pre-training (e.g., \(5\%\) or \(10\%\)) can benefit the performance on ID downstream tasks, while more noise leads to inferior results;
* The robustness of transferability on OOD downstream tasks constantly deteriorates as the noise increases, even with the improvement in ID tasks on \(5\%\) noise.
While prior arts in noisy label learning mainly aim to correct/eliminate the noise or perform robust learning against noise (Ghosh et al., 2017; Li et al., 2020; Liu et al., 2022; Xue et al., 2022), we show that the noise in pre-training can have both benevolent and malignant effects on downstream tasks. These observations raise a natural and fundamental question: _where do the superior transferability (with slight noise) and the inferior robustness stem from_? We further analyze the feature space to understand the change in the pre-trained feature extractor caused by noise.
### Feature Space Analysis
To understand the noise in pre-training data, we empirically analyze the singular value spectrum of the pre-trained feature space on downstream datasets, which is widely considered to be related to the generalization performance (Oymak et al., 2019; Chen et al., 2019; Xue et al., 2022). More specifically, we perform singular value decomposition (SVD) on the features \(\mathbf{F}\in\mathbb{R}^{M\times D}\) of pre-trained feature extractors on a downstream dataset: \(\mathbf{F}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{\top}\).5 We plot the singular values in Appendix A.4, based on which we define two metrics that can help understand the observations:
Footnote 5: We assume \(D\leq M\)(Kumar et al., 2022). \(\mathbf{U}\) and \(\mathbf{V}\) denotes the left and right singular vector matrices, respectively, and \(\mathbf{\Sigma}\) denoting the diagonal singular value matrix \(\{\sigma_{1},\dots,\sigma_{D}\}\).
**Definition 2.1** (Singular Value Entropy).: The singular value entropy (SVE) is defined as the entropy of normalized singular values. SVE measures the flatness of the singular value distribution.
\[\mathrm{SVE}=-\sum_{i=1}^{D}\frac{\sigma_{i}}{\sum_{j=1}^{D}\sigma_{j}}\log \frac{\sigma_{i}}{\sum_{j=1}^{D}\sigma_{j}} \tag{1}\]
Larger SVE values indicate that the feature space captures more structure in the data and thus spans more dimensions either due to more discriminated features are learned or memorization of the noise.
**Definition 2.2** (Largest Singular Value Ratio).: The largest singular value ratio (LSVR) is defined as the logarithm of the ratio of the largest singular value \(\sigma_{1}\) to the summation of all singular values:
\[\mathrm{LSVR}=-\log\frac{\sigma_{1}}{\sum_{i=1}^{D}\sigma_{i}}. \tag{2}\]
Figure 3: Feature SVD analysis. We compute the singular value entropy (SVE) for in-domain (ID) tasks and the largest singular value ratio (LSVR) for out-of-domain (OOD) tasks. Both metrics are computed for ImageNet-1K fully supervised pre-trained ((a) and (b)) and YFCC15M CLIP pre-trained ((c) and (d)) models. The SVE first slightly improves as the noise ratio increases to \(5\%\) or \(10\%\), indicating better generalization. As the noise ratio increases, the SVE further improves, and the LSVR drops significantly, corresponding to worse generalization on ID and OOD tasks, as more noise structure is learned. The dominant singular components become less transferable.
LSVR measures the variations in data captured by the singular vector corresponding to the largest singular value \(\sigma_{1}\), which relates to the transferability of a model (Chen et al., 2019).
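Both metrics are straightforward to compute from the singular values of the feature matrix; a sketch (with a small constant added inside the logarithm purely for numerical stability) is:

```python
import numpy as np

def sve_lsvr(features: np.ndarray):
    """Computes SVE (Definition 2.1) and LSVR (Definition 2.2) from the
    singular values of the feature matrix F of shape (M, D)."""
    s = np.linalg.svd(features, compute_uv=False)  # descending order
    p = s / s.sum()
    sve = float(-(p * np.log(p + 1e-12)).sum())
    lsvr = float(-np.log(s[0] / s.sum()))
    return sve, lsvr
```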
**Analysis.** We plot the SVE for ID tasks and LSVR for OOD tasks, as shown in Figure 3. For ID tasks, as the noise ratio slightly increases, the learned representation usually presents slightly higher SVE, which indicates the pre-trained feature extractor captures more structure in data. Specifically, more capabilities of the feature space are assigned to fit the noise in data, resulting in a feature space spanning more dimensions, which provides better-initialized features on downstream tasks and facilitates generalization. Similar observations have also been found and explored in Wu et al. (2022). However, as the noise ratio further increases, the increased SVE indicates that a more noisy data structure is captured and memorized, thus leading to deteriorated generalization performance. When the labels in pre-training are random, the SVE of the feature extractor would further increase by memorizing all the noise, but the model would not generalize on downstream tasks, similar to Zhang et al. (2021). For OOD tasks, the robustness performance is _negatively correlated_ with the LSVR. As the noise ratio increases, the LSVR consistently increases with the decreasing largest singular value. A less transferable component is learned, thus resulting in worse generalization on unseen OOD tasks.
## 3 Mitigating the Noise with Regularization on Singular Values
In this section, we propose a black-box fine-tuning method, which we call "Noisy Model Tuning" (NMTune, Figure 4) in response to the noisy model learning setting. We demonstrate that NMTune can boost the generalization on downstream tasks and provide the analysis for the reasons behind.
### Method
Per the analysis above, noise in pre-training can shape the feature space differently from pre-training on clean data, reducing the top dominant singular values with dampened transferability while increasing the spanning dimensions of the feature space to fit the noise structure. Since the large pre-trained models are usually difficult to fully fine-tune due to the enormous parameter size and limited computation resources, we propose to alter the pre-trained feature space \(\mathcal{F}\) in a light-weight and black-box fashion. More specifically, we introduce a multi-layer perceptron (MLP) \(h_{\omega}\) transforming the pre-trained features into a new feature space \(\mathcal{Z}\). We propose three regularization terms on \(\mathbf{Z}\) to encourage the pre-trained knowledge to be maintained and to improve the SVE and LSVR of the new feature space.
**Consistency regularization**. To encourage the consistency of the pre-trained knowledge, we adopt a mean-square-error (MSE) loss between the normalized features \(\mathbf{F}\) and \(\mathbf{Z}\):
\[\mathcal{L}_{\mathrm{MSE}}=\bigg{\|}\frac{\mathbf{F}}{\|\mathbf{F}\|_{2}}- \frac{\mathbf{Z}}{\|\mathbf{Z}\|_{2}}\bigg{\|}_{2}^{2}. \tag{3}\]
Figure 4: Illustration of noisy label learning (left) and the proposed _Noisy Model Learning_ (right). Noisy label learning mainly focuses on robustly training a model from scratch or fine-tuning a model from pre-training on a noisy dataset. Noisy model learning focuses on robustly adapting the black-box noisy pre-trained models to downstream datasets with no assumption on the downstream dataset.
This objective facilitates inheriting the pre-trained knowledge in the transformed features \(\mathbf{Z}\).
**Covariance regularization.** We define the covariance loss to encourage the off-diagonal elements in the covariance matrix of the transformed feature \(C(\mathbf{Z})\) to be close to \(\mathbf{0}\):
\[\mathcal{L}_{\mathrm{COV}}=\frac{1}{D}\sum_{i\neq j}[C(\mathbf{Z})]_{i,j}^{2}, \text{ where }C(\mathbf{Z})=\frac{1}{M-1}\sum_{i=1}^{M}\left(z_{i}-\bar{z}\right) \left(z_{i}-\bar{z}\right)^{T},\bar{z}=\frac{1}{M}\sum_{i=1}^{M}z_{i}. \tag{4}\]
Inspired by Zbontar et al. (2021) and Bardes et al. (2022), we use the covariance regularization term to improve the SVE of feature space by preventing the different coordinates of the features from encoding similar information. It also encourages more discriminative features to be learned.
**Dominant singular value regularization**. To help transferability, we use a more specific regularization to improve the LSVR by directly maximizing the ratio of the largest singular value:
\[\mathcal{L}_{\mathrm{SVD}}=-\frac{\sigma_{1}}{\sum_{j=1}^{D}\sigma_{j}}. \tag{5}\]
In summary, the total objective on a downstream task becomes:
\[\mathcal{L}=\mathcal{L}_{\mathrm{CE}}+\lambda\mathcal{L}_{\mathrm{NMTune}}= \mathcal{L}_{\mathrm{CE}}+\lambda\left(\mathcal{L}_{\mathrm{MSE}}+\mathcal{L }_{\mathrm{COV}}+\mathcal{L}_{\mathrm{SVD}}\right), \tag{6}\]
where \(\mathcal{L}_{\mathrm{CE}}\) is the cross-entropy loss for downstream classification. We set \(\lambda=0.01\) and use a 2-layer MLP for all our experiments. Ablation studies on the MLP architecture and \(\lambda\) are in Appendix B.7. A sketch of the combined objective is given below.
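The following PyTorch-style sketch assembles Eqs. (3)-(6); we read the normalization in Eq. (3) row-wise, and the function signature is an illustrative assumption rather than our released code.

```python
import torch
import torch.nn.functional as F

def nmtune_loss(feats, z, logits, labels, lam=0.01):
    """L_CE + lam * (L_MSE + L_COV + L_SVD) of Eq. (6).
    feats: frozen pre-trained features F (M x D); z: MLP output Z (M x D)."""
    l_ce = F.cross_entropy(logits, labels)
    # Eq. (3): consistency between (row-wise) normalized F and Z.
    l_mse = (F.normalize(feats, dim=1)
             - F.normalize(z, dim=1)).pow(2).sum(dim=1).mean()
    # Eq. (4): penalize off-diagonal entries of the covariance of Z.
    zc = z - z.mean(dim=0)
    cov = (zc.T @ zc) / (z.shape[0] - 1)
    l_cov = (cov.pow(2).sum() - cov.diagonal().pow(2).sum()) / z.shape[1]
    # Eq. (5): maximize the largest singular value ratio of Z.
    s = torch.linalg.svdvals(z)
    l_svd = -s[0] / s.sum()
    return l_ce + lam * (l_mse + l_cov + l_svd)
```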
### Evaluation on Noisy ImageNet-1K and YFCC15M
Here, we evaluate the proposed NMTune on the noisy models and analyze the reason for its effectiveness. We compare against solely training the MLP without the regularization, termed as MLP tuning, to show the effectiveness stems from the regularization rather than the extra parameters.
For ID tasks, we plot the average F1 score and SVE in Figures 5(a) and 5(b), respectively. The F1 score of linear probing (LP) on different pre-training noise ratios follows the same trend as the accuracy: it first increases as the noise ratio goes up to \(5\%\) and then decreases. While adding an MLP can improve the F1 score in general, we find that it cannot mitigate the effect of noise, i.e., the clean pre-trained model underperforms the \(5\%\) noisy pre-trained models. Further introducing our method can rectify the effect of noise on ID tasks, leading the clean pre-trained feature extractor to achieve the best results. More interestingly, only adding an MLP to LP can result in a smaller SVE, especially on ImageNet-1K, corresponding to a much sparser feature structure. In contrast, our method provides a larger and flatter SVE. It indicates the transformed feature space not only maintains the pre-trained knowledge but also spans more dimensions. For OOD tasks, the F1 score and LSVR are shown in Figure 5(c) and 5(d), respectively. Similarly, one can observe significantly better generalization performance when deploying NMTune, compared to the MLP and LP. We also notice
Figure 5: Evaluation of our method on ID and OOD downstream tasks, compared to MLP tuning and LP on ResNet-50 models pre-trained on ImageNet-1K (IN-1K) and YFCC15M. (a) Average F1 score on ID tasks; (b) SVE on ID tasks; (c) Average F1 score on OOD tasks; (d) LSVR on OOD tasks. Our method presents better SVE and LSVR on both ID and OOD tasks with better generalization performance. Our method also rectifies the malignant noise effect: the feature extractor pre-trained on clean data now exhibits better performance than others on noisy data on ID tasks; and the performance gap between the clean one and the one with \(5\%\) noise gets smaller on OOD tasks.
a smaller performance gap between the clean pre-trained feature extractor and the \(5\%\) noisy one, especially on YFCC15M. On LSVR, MLP tuning usually yields a larger LSVR compared to LP, corresponding to smaller dominant singular values. Considering that MLP tuning also presents a smaller SVE, its resulting feature space is expected to have a more long-tailed spectrum than the original feature space. Maximizing the dominant singular values results in better transferability for OOD tasks.
## 4 Experiments
We further validate NMTune on practical large-scale vision and language models that are pre-trained on noisy data, and discuss the noisy label learning and running time analysis in this section.
### Vision Models and Datasets
**Setup**. For vision models, we use ResNet152 (He et al., 2016a) with dimensions widened by a factor of two (ResNet152x2) fully supervised pre-trained on ImageNet-21K (Kolesnikov et al., 2020), Swin-L (Liu et al., 2021c) fully supervised pre-trained on ImageNet-21K, EfficientNet-B3 semi-supervised pre-trained on noisy JFT-300M (Hinton et al., 2015; Chollet, 2017) and ImageNet-1K, and ViT-L (Dosovitskiy et al., 2020) and ConvNext-L (Liu et al., 2022c) contrastive pre-trained on noisy Laion-2B (Cherti et al., 2023). All pre-trained models are adapted from TIMM (Wightman, 2019). We evaluate the models on the 14 downstream ID and 4 OOD vision datasets as in Section 2. The details of hyper-parameters are shown in Appendix B.1 due to space limit.
**Results**. We present the average accuracy and F1 score across different datasets with three runs on vision models in Table 1. Our method improves the quality of the noisy pre-trained features with better accuracy and F1 score on both ID and OOD vision tasks. NMTune presents a large margin on downstream tasks across different pre-training architectures and datasets, demonstrating that better features are learned. Noteworthy is that, although the MLP tuning also improves the performance in general, its performance gain is much smaller compared to our method, showing the effectiveness of the proposed regularization terms in mitigating the malicious effect of noise and improving generalization. More detailed results with error bars for each dataset are shown in Appendix B.2.
### Language Models and Datasets
**Setup**. We evaluate BERT-L (Devlin et al., 2018), RoBERTa-L (Liu et al., 2019), and GPT-2 (Radford et al., 2019) on the GLUE (Wang et al., 2018) and GLUE-X (Yang et al., 2023) benchmarks for ID and OOD performance. BERT-L and RoBERTa-L are pre-trained on the combination of the BooksCorpus data (Zhu et al., 2015) and English Wikipedia with uncompressed raw text. It is found that the raw pre-training data of BERT can be reduced from 16GB to 12GB with data cleaning (Yang et al., 2019). GPT-2 is pre-trained on WebText (Radford et al., 2019), a scraped web dataset
\begin{table}
\begin{tabular}{l|l|c c|c c} \hline
Pre-trained & Tuning & \multicolumn{2}{c|}{In-Domain} & \multicolumn{2}{c}{Out-of-Domain} \\ \cline{3-6}
Model & Method & Acc. & F1 & Acc. & F1 \\ \hline
JFT-300M & LP & 76.72 & 0.3815 & 44.13 & 0.3594 \\
Semi-Supervised & MLP & 78.67 & 0.3333 & 45.95 & 0.3624 \\
EfficientNet-B3 & Ours & **77.63** & **0.3874** & **46.36** & **0.3654** \\ \hline
ImageNet-21K & LP & 77.51 & 0.3718 & 40.82 & 0.3062 \\
Fully Supervised & MLP & 77.55 & 0.3726 & 41.73 & 0.3053 \\
ResNet-152x2 & Ours & **78.43** & **0.3624** & **0.3102** & **0.3053** \\ \hline
ImageNet-21K & LP & 81.91 & 0.4092 & 50.88 & 0.3838 \\
Fully Supervised & MLP & 82.51 & 0.4125 & 51.21 & 0.3811 \\
Swin-L & Ours & **84.16** & **0.4177** & **52.35** & **0.3901** \\ \hline \hline
Laion-2B & LP & 88.86 & 0.432 & 66.86 & 0.4253 \\
CLIP & MLP & 85.83 & 0.4417 & 68.43 & 0.4304 \\
ConvNext-L & Ours & **98.457** & **0.340** & & **0.4367** \\ \hline \hline
Laion-2B & LP & 86.85 & 0.4328 & 68.9 & 0.4208 \\
CLIP & MLP & 87.23 & 0.4375 & 69.50 & 0.4221 \\
ViT-L & Ours & **88.57** & **0.4414** & **70.47** & **0.4246** \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Results on popular vision models that are pre-trained on noisy datasets. We use 14 in-domain (ID) and 4 out-of-domain (OOD) tasks.
\begin{table}
\begin{tabular}{l|l|c c} \hline
Pre-trained & Tuning & \multirow{2}{*}{In-Domain} & \multirow{2}{*}{Out-of-Domain} \\
Model & Method & & \\ \hline
\multirow{3}{*}{BERT-L} & LP & 69.44 & 50.65 \\
& MLP & 69.78 & 50.62 \\
& Ours & **70.26** & **51.63** \\ \hline
\multirow{3}{*}{RoBERTa-L} & LP & 69.75 & 44.55 \\
& MLP & 70.27 & 45.22 \\
& Ours & **70.97** & **47.01** \\ \hline
\multirow{3}{*}{GPT-2} & LP & 58.67 & 36.68 \\
& MLP & 58.44 & 37.24 \\
& Ours & **59.34** & **39.07** \\ \hline
\multirow{3}{*}{text-ada-002} & LP & 56.96 & 44.06 \\
& MLP & 63.89 & 51.30 \\
& Ours & **65.99** & **53.48** \\ \hline
\end{tabular}
\end{table}
Table 2: Evaluation of our method on language models in practice that are pre-trained on noisy datasets. We use GLUE for in-domain (ID) tasks and GLUE-\(X\) for out-of-domain (OOD) tasks.
from Common Crawl that contains low-quality raw texts. We also leverage OpenAI's API service "text-ada-002"6. Details of the hyper-parameters and evaluation metrics are in Appendix B.3.
Footnote 6: We cannot use larger and more recent language models such as LLaMA (Touvron et al., 2023a), since they are unable to fit in a single V100 GPU and we are unsure whether GLUE is in their training data.
**Results**. In Table 2, NMTune consistently achieves the best generalization performance. It presents a superior performance gain, especially on the OOD tasks of GLUE-X. On the "text-ada-002" model with only API access, it also outperforms LP significantly, demonstrating the necessity of mitigating the effect of noise for better generalization. Interestingly, on the ID tasks of GLUE, we also observe a smaller gap between the MLP tuning method and LP even with more parameters, showing that the MLP alone may not mitigate the influence of noisy data in pre-training. Full results are in Appendix B.4.
### Discussion
**Noisy model learning with noisy label learning**. We explore another setting, where these two paradigms occur together with both the pre-training and fine-tuning containing label noise, as shown in Appendix B.5. Our exploration on synthetic noisy CIFAR-10/100 presents similar observations for LP and NMTune as on clean downstream datasets, and they can work closely to achieve better performance on downstream datasets with slight noise. **Running time analysis**. We present the average GPU hours of NMTune, MLP tuning, and LP in Appendix B.6, showing that our method introduces negligible computation. The ablation study and architecture of the MLP are shown in Appendix B.7. Finally, our results may not be comparable to white-box full fine-tuning results, which is acceptable since we perform black-box tuning and the feature extractors are frozen. Our goal is not to pursue the best but to offer insights and discuss new research possibilities in the era of foundation models.
## 5 Related Work
**Noisy label learning.** Prior arts on noisy label learning mainly focus on how to train robust models from scratch or how to adapt clean pre-trained models on noisy (downstream) datasets, including robust loss functions (Ghosh et al., 2017; Zhang and Sabuncu, 2018; Wang et al., 2019; Ma et al., 2020), noise estimation (Xiao et al., 2015; Goldberger and Ben-Reuven, 2016; Liu et al., 2020; Northcutt et al., 2021; Li et al., 2021), and noise correction (Han et al., 2018; Li et al., 2020; Zhang et al., 2021c; Liu et al., 2022; Chen et al., 2023). Perhaps closest to our work is the line of understanding noisy label learning. Ghosh et al. (2017) looked at theoretical conditions for a loss function to be noise-tolerant. CIFAR-N (Wei et al., 2022b) was built to understand the real-world instance-dependent label noise. Cheng et al. (2023) proposed to mitigate the memorization of noisy labels by analyzing the regularization between representations. Wen et al. (2022) provably verified the failure of benign overfitting with label noise. Xue et al. (2022) investigated the robustness of contrastive pre-training with noisy labels on downstream tasks. Our noisy model learning differs from the noisy label learning paradigm by focusing on the effect of noise in pre-training on downstream tasks.
**Pre-training and fine-tuning.** Pre-training and fine-tuning is the dominant transfer learning paradigm that allows a pre-trained model to adapt to a new, but similar, dataset. Many techniques are proposed for better transfer performance on the new dataset when it contains distribution shift (Cheng et al., 2023), unlabeled data (Sohn et al., 2020; Zhang et al., 2021a; Wang et al., 2023a), imbalanced data (Kang et al., 2019; Wang et al., 2023c), and noisy data (Wei et al., 2022a; Xue et al., 2022). There is also much relevant work studying and processing the pre-training data for better transfer performance by diversity trade-off (Kaplan et al., 2020; Zhang et al., 2023a), data selection (Entezari et al., 2023), quality-quantity trade-off (Magar and Schwartz, 2022; Nguyen et al., 2022; Lee et al., 2022; Carlini et al., 2022; Gadre et al., 2023), and specified fine-tuning methods (Tsai et al., 2020; Kumar et al., 2022; Wortsman et al., 2022; Goyal et al., 2023; Xu et al., 2023). Parameter-efficient transfer learning (He et al., 2021; Oh et al., 2023) is a lightweight paradigm that adds adapters (Houlsby et al., 2019), low-rank approximation (Hu et al., 2021), or prompt tuning (Liu et al., 2022b; 2021b). However, they all assume the availability of pre-trained models while we deal with black-box models. They also do not consider the noise in pre-training data.
## 6 Conclusion
We presented _Noisy Model Learning_, a new research direction for understanding and mitigating the effect of label noise in pre-training on downstream tasks. Extensive experiments demonstrate
that proper noise in pre-training can benefit in-domain tasks and hurt out-of-domain tasks. We then proposed NMTune to mitigate the malignant effect of noise and improve the generalization performance of various noisy pre-trained models and APIs. While being the first study in this area, the explored models are still relatively small-scale in terms of pre-training, and we only use ResNet-50 for analytical experiments, due to the limited computing resources. We hope our work can inspire more researchers on this important and challenging topic in more practical settings.
## Disclaimer
In this paper, we generated some noisy pre-training images using ImageNet-1K to thoroughly study the noisy pre-training data. Such noisy data indeed could have malignant influence on downstream tasks, according to our findings. The only purpose of conducting this research is to study the noisy pre-training data, but not to claim their instability in real applications. Additionally, all the generated noisy images and our pre-trained models based on these data are for research purpose only, and will be released per request.
|
2309.04718 | Minimizing transients via the Kreiss system norm | We consider norms which assess transient behavior of stable LTI systems.
Minimizing such a norm in closed loop may enhance stability and performance of
a nonlinear system by mitigating transients and enlarging its region of
attraction around a locally stable steady state. | Pierre Apkarian, Dominikus Noll | 2023-09-09T08:15:08Z | http://arxiv.org/abs/2309.04718v1 | # Minimizing transients via the Kreiss system norm
###### Abstract.
We consider norms which assess transient behavior of stable LTI systems. Minimizing such a norm in closed loop may enhance stability and performance of a nonlinear system by mitigating transients and enlarging its region of attraction around a locally stable steady state.
Key Words. transient mitigation, \(L_{1}\) disturbance, Kreiss constant, structured controller, suppression of attractors, non-smooth optimization, LMI design techniques.
\({}^{1}\)ONERA, Department of System Dynamics, Toulouse, France.
\({}^{2}\)Institut de Mathematiques, Universite de Toulouse, France.
## 1. Introduction
It has been observed in the literature that the size of the region of attraction of a locally stable nonlinear system
\[\dot{x}=Ax+\phi(x),\;\;x(0)=x_{0} \tag{1}\]
with \(\phi(0)=0\), \(\phi^{\prime}(0)=0\), may strongly depend on the degree of normality of \(A\). When \(A\) is far from normal, the linearization \(\dot{x}=Ax\), \(x(0)=x_{0}\), may have large transient peaks, which may incite trajectories of (1) to leave the region of attraction. This is known as _peaking_, [43, 29, 19], and considered a major obstacle to global stability.
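A classic two-state example (our own toy illustration, not taken from the references) makes the phenomenon concrete: both eigenvalues of the matrix \(A\) below equal \(-1\), yet the trajectory norm transiently grows by a factor of about \(50/e\approx 18.4\) before decaying.

```python
import numpy as np
from scipy.linalg import expm

# Toy non-normal matrix: both eigenvalues equal -1, so the system is stable,
# yet the off-diagonal coupling produces a large transient peak.
A = np.array([[-1.0, 50.0],
              [0.0, -1.0]])
ts = np.linspace(0.0, 5.0, 200)
peaks = [np.linalg.norm(expm(A * t) @ np.array([0.0, 1.0])) for t in ts]
print(max(peaks))  # about 50/e ~ 18.4, reached near t = 1
```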
The tendency of a stable \(A\) to produce large transients or peaking may be assessed by its worst-case transient growth
\[M_{0}(A)=\max_{t\geq 0}\max_{\|x_{0}\|_{2}=1}\|e^{At}x_{0}\|_{2}=\max_{t\geq 0 }\overline{\sigma}(e^{At}), \tag{2}\]
and in closed loop, when \(A\) depends on tunable parameters, one may attempt to minimize \(M_{0}(A_{\text{cl}})\) in order to enlarge the region of local stability of (1). This has been studied in [8] for structured controllers, and previously in [21] using the controller Q-parametrization.
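Numerically, a coarse estimate of \(M_{0}(A)\) may be obtained by gridding \(t\); the sketch below assumes \(A\) is stable, so that the supremum in (2) is attained on a finite interval, and the grid parameters are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def transient_growth(A: np.ndarray, t_max: float = 50.0, n: int = 500) -> float:
    """Coarse grid estimate of M0(A) = max_{t >= 0} sigma_max(exp(A t)) in (2).
    Assumes A is stable, so the maximum is attained on a finite interval."""
    ts = np.linspace(0.0, t_max, n)
    return max(np.linalg.svd(expm(A * t), compute_uv=False)[0] for t in ts)
```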
The rationale underlying this approach should be further deepened by taking two more facts into account. Firstly, in a continuous operating process the effect of initial values is not the appropriate lever, as instability is rather caused by noise, persistent perturbations, or finite-consumption disturbances. Secondly, nonlinearity often arises only in some of the states \(z\), and likewise may affect only parts of the dynamics, and the heuristic should be adaptable to such cases. We address those issues by considering a nonlinear controlled system of the form
\[\begin{split}\dot{x}&=Ax+B\phi(z)+Bw+B_{u}u\\ z&=Cx\\ y&=C_{y}x\end{split} \tag{3}\]
where the nonlinearity satisfies \(B\phi(0)=0\) and \(B\phi^{\prime}(0)C=0\), and where a tunable feedback controller \(u=K(\mathbf{x})y\), with \(\mathbf{x}\) as decision variables, is sought which stabilizes the system locally, rendering it as resilient as possible with regard to these disturbances. The latter is aimed at indirectly by tuning the closed loop channel \(w\to z\) to remain small with regard to a system norm assessing transients, the idea being that disturbances \(w\) cause the partial state \(z\) to have unduly large transients.
More formally, we close the loop with respect to the controller \(K(\mathbf{x})\) in (3), and consider the linear closed-loop channel \(T_{wz}(\mathbf{x},s)=C(sI-A_{\mathrm{cl}}(\mathbf{x}))^{-1}B\), which we now tune in such a way that transients in \(z(t)\) due to disturbances \(w(t)\) remain small. Expanding on (2), we may assess transients of \(G(s)=C(sI-A)^{-1}B\) via
\[\mathcal{M}_{0}(G)=\sup_{t\geq 0}\overline{\sigma}\left(Ce^{At}B\right)=\sup_{ \|w\|_{1}\leq 1}\|G*w\|_{\infty}, \tag{4}\]
which turns out to be a time-domain \(L^{1}\to L^{\infty}\) induced system norm with suitably chosen vector norms. It measures the time-domain peak of the response \(z=G*w\) of \(G\) to a finite consumption input \(w\). For \(G(s)=(sI-A)^{-1}\) we recover \(\mathcal{M}_{0}(G)=M_{0}(A)\).
Along with disturbances of finite consumption, \(w\in L^{1}\), it also makes sense to consider finite energy perturbations \(w\in L^{2}\), which may be thought of as representing noise, or time-domain bounded \(w\in L^{\infty}\), which stand for persistent perturbations, as naturally all those could be the reason why trajectories of (1) or (3) get outside the region of attraction. Ability of a system to withstand destabilizing disturbances is referred to as _resilience_, and along with \(\mathcal{M}_{0}(G)\) other ways to quantify it have been discussed, see e.g. [25]. Mitigating large transients is a general concern in control design, and has been addressed in various ways. LMI approaches are discussed in [12, 48], and a comparison between minimization of (4) and LMI techniques is given in [39], suggesting that, in the case study of plane Poiseuille flow, minimizing (4) may be less conservative.
The remainder of this article is organized as follows. In Section 2, we introduce a frequency-domain approximation of \(\mathcal{M}_{0}(G)\), better suited for the purpose of optimization, called the Kreiss system norm \(\mathcal{K}(G)\). In a technical Section 3, estimates between various system norms related to \(L_{1}\)-disturbances are obtained, while Section 4 addresses the case of persistent perturbations. Experiments in Sections 5 and 6 focus on \(L_{1}\)-disturbances, where we apply Kreiss norm minimization to control nonlinear dynamics involving limit cycles, chaos or multiple fixed points, with the goal to increase the region of local stability or to even achieve global stability in closed loop. Conclusions are given in Section 7.
## 2. Kreiss system norm
A difficulty already mentioned in [28] is that \(M_{0}(A)\), and similarly \(\mathcal{M}_{0}(G)\), is hard to compute, let alone optimize. In response, the authors of [28] propose to use the _Kreiss constant_\(K(A)\) of a matrix \(A\in\mathbb{R}^{n\times n}\) as an alternative measure of normality. The latter is defined as
\[K(A)=\max_{\mathrm{Re}(s)>0}\mathrm{Re}(s)\overline{\sigma}\left((sI-A)^{-1} \right), \tag{5}\]
and its computation was investigated in [34, 35, 8]. By the famous Kreiss Matrix Theorem [45, p. 151, p.183] the estimate
\[K(A)\leq M_{0}(A)\leq enK(A)\]
is satisfied, where the constant is generally pessimistic, but sharp as shown in [28].
It turns out that minimizing \(K(A)\) has an effect similar to minimizing \(M_{0}(A)\), and this is in line with the observation that the global minimum \(K(A)=M_{0}(A)=1\) is the same for both criteria and occurs for normal \(A\), and more generally, for matrices \(A\) where \(e^{At}\) is a contraction in the spectral norm. In [8] we have shown that optimizing \(K(A_{\mathrm{cl}})\) is numerically possible, and that it has indeed the desired effect of driving \(A_{\mathrm{cl}}\) closer to normal behavior.
Computing and optimizing \(\mathcal{M}_{0}(G)\) in closed loop encounters the same difficulties as \(M_{0}(A)\), and it is therefore tempting to consider the Kreiss system norm
\[\mathcal{K}(G):=\sup_{\mathrm{Re}(s)>0}\mathrm{Re}(s)\overline{\sigma}\left(C( sI-A)^{-1}B\right),\]
as it generalizes \(K(A)\) in a natural way and satisfies the same estimate
\[\mathcal{K}(G)\leq\mathcal{M}_{0}(G)\leq en\,\mathcal{K}(G), \tag{6}\]
as we shall prove in Section 3.1. The principled reason to use \(\mathcal{K}(G)\) is that its computation, and for that matter, optimization, may be based on a robust control technique, first proposed in [8, Thm. 2.1] for the case \(B=C=I_{n}\):
**Theorem 1**.: _Suppose \(A\) is stable. Then the Kreiss system norm \(\mathcal{K}(G)\) can be computed through the robust \(H_{\infty}\)-performance analysis program_
\[\mathcal{K}(G)=\max_{\delta\in[-1,1]}\left\|C\left(sI-\left(\tfrac{1-\delta}{ 1+\delta}A-I\right)\right)^{-1}B\right\|_{\infty}, \tag{7}\]
_where \(\|G\|_{\infty}\) denotes the \(H_{\infty}\)-system norm. _
The Kreiss norm can be computed either by solving a non-smooth max-max program, or by a convex SDP; see [8, Theorems 2.1 and 2.4]. The SDP provides a certified accuracy, but the non-smooth technique is considerably faster. In numerical testing, we therefore use the SDP only for the final certification.
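As a rough sanity check complementing these certified methods, \(\mathcal{K}(G)\) can also be estimated directly from its definition by gridding the open right half-plane; for real \((A,B,C)\), conjugate symmetry allows restricting to \(\omega\geq 0\). The sketch below is a coarse illustration (the grid choices are ours) and does not replace the SDP certification.

```python
import numpy as np

def kreiss_norm_grid(A, B, C) -> float:
    """Coarse lower estimate of K(G) = sup_{Re(s) > 0} Re(s) sigma_max(G(s))
    by gridding the right half-plane; grid choices are illustrative and this
    check does not replace the certified SDP computation."""
    I = np.eye(A.shape[0])
    best = 0.0
    for a in np.logspace(-3, 2, 60):                                # Re(s) grid
        for w in np.concatenate(([0.0], np.logspace(-3, 3, 120))):  # Im(s) grid
            G = C @ np.linalg.solve((a + 1j * w) * I - A, B)
            best = max(best, a * np.linalg.svd(G, compute_uv=False)[0])
    return best
```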
This leads us now to the following synthesis program:
\[\begin{array}{ll}\text{minimize}&\mathcal{K}(T_{wz}(\mathbf{x}))\\ \text{subject to}&K(\mathbf{x})\text{ stabilizes }G\\ &\mathbf{x}\in\mathbb{R}^{n}\end{array} \tag{8}\]
where \(\mathbf{x}\in\mathbb{R}^{n}\) are the finitely many tunable parameters of the structured controller \(K(\mathbf{x})\), and where \(T_{wz}(\mathbf{x},s)\) is the closed-loop channel of (3) by which we assess transients. Program (8) may at leisure be complemented by adding standard \(H_{\infty}\)- or \(H_{2}\)-loop-shaping requirements to further improve performances and robustness.
## 3. Norm estimates
In this section, we obtain basic estimates relating the Kreiss system norm \(\mathcal{K}(G)\) to the \(L^{1}\to L^{\infty}\) induced norm \(\mathcal{M}_{0}(G)\). We recall Young's inequality:
**Lemma 1**.: (Young's inequality; see [13])_. Let \(1/p+1/q+1/r=2\), \(p,q,r\geq 1\). Then_
\[\left|\iint f(x)g(x-y)h(y)dydx\right|\leq C_{p}C_{q}C_{r}\|f\|_{p}\|g\|_{q}\|h \|_{r},\]
_where_
\[C_{p}=\left(p^{1/p}/{p^{\prime}}^{1/p^{\prime}}\right)^{1/2},\quad\frac{1}{p}+\frac{1}{p^{\prime}}=1,\quad C_{1}=C_{\infty}=1.\]
Let \(\xi,\eta\) be test vectors of appropriate dimensions and consider a one-dimensional signal \(u(t)\); then Lemma 1 gives

\[\begin{split}|\xi^{T}C(sI-A)^{-1}B\eta\,u(s)|&=\left|\int_{0}^{\infty}e^{-st}(\xi^{T}Ce^{At}B\eta*u)(t)dt\right|\\ &\leq C_{p}C_{q}C_{r}\|e^{-st}\|_{p}\|\xi^{T}Ce^{At}B\eta\|_{q}\|u\|_{r}\\ &=C_{p}C_{q}C_{r}\,\mathrm{Re}(s)^{-1/p}p^{-1/p}\|\xi^{T}Ce^{At}B\eta\|_{q}\|u\|_{r},\end{split} \tag{9}\]
where \(f(t)=e^{-st}\), \(g(t)=\xi^{T}Ce^{At}B\eta\), and \(h(t)=u(t)\) are understood to take values \(0\) for \(t<0\). In the sequel we consider various choices of \(p,q,r\).
### Kreiss system norm
We apply Young's inequality with \(r=1\), \(q=\infty\), \(p=1\), where \(C_{p}C_{q}C_{r}=1\). This leads to the following
**Theorem 2**.: _For a stable system \(G(s)=C(sI-A)^{-1}B\) we have the estimate_
\[\mathcal{K}(G):=\sup_{\mathrm{Re}(s)>0}\mathrm{Re}(s)\overline{\sigma}\left(C (sI-A)^{-1}B\right)\leq\sup_{t\geq 0}\overline{\sigma}\left(Ce^{At}B\right)=: \mathcal{M}_{0}(G). \tag{10}\]
**Proof:** From (9) with \(r=1\), \(q=\infty\), \(p=1\), we get
\[\mathrm{Re}(s)|\xi^{T}C(sI-A)^{-1}B\eta\,u(s)|\leq\|\xi^{T}Ce^{At}B\eta\|_{ \infty}\|u\|_{1}.\]
Now take \(u_{\epsilon}(t)=\epsilon^{-1}\) on \([0,\epsilon]\), \(u_{\epsilon}(t)=0\) else. Then \(\|u_{\epsilon}\|_{1}=1\). On the other hand, \(u_{\epsilon}(s)\to 1\) as \(\epsilon\to 0\), hence we get
\[\mathrm{Re}(s)|\xi^{T}C(sI-A)^{-1}B\eta|\leq\|\xi^{T}Ce^{At}B\eta\|_{\infty}= \sup_{t\geq 0}|\xi^{T}Ce^{At}B\eta|.\]
Now we consider test vectors \(\xi\in\ell_{2}\), \(\eta\in\ell_{2}\). Passing to the supremum over \(\|\xi\|_{2}\leq 1\), \(\|\eta\|_{2}\leq 1\) on the right gives
\[\mathrm{Re}(s)|\xi^{T}C(sI-A)^{-1}B\eta| \leq\sup_{t\geq 0}\,\sup_{\|\xi\|_{2},\|\eta\|_{2}\leq 1}|\xi^{T}Ce^{At }B\eta|\] \[=\sup_{t\geq 0}\overline{\sigma}\left(Ce^{At}B\right).\]
Then taking the supremum over \(\|\xi\|_{2}\leq 1\), \(\|\eta\|_{2}\leq 1\) and \(\mathrm{Re}(s)>0\) on the left gives
\[\mathcal{K}(G)=\sup_{\mathrm{Re}(s)>0}\mathrm{Re}(s)\overline{\sigma}\left(C (sI-A)^{-1}B\right)\leq\sup_{t\geq 0}\overline{\sigma}\left(Ce^{At}B \right)=\mathcal{M}_{0}(G),\]
which is the claimed estimate.
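The quantity on the right of (10) is equally easy to approximate numerically. A minimal sketch (our own helper, complementing `kreiss_norm_grid` above; the time horizon must cover the transient of the stable system, and the grids must be fine enough for the check to be meaningful) evaluates \(\mathcal{M}_{0}(G)\) on a time grid and tests the two-sided estimate (6):

```python
import numpy as np
from scipy.linalg import expm

def transient_peak_norm_grid(A, B, C, t_max=50.0, n_t=2000):
    """Approximate M0(G) = sup_{t>=0} sigma_max(C e^{At} B) on a time grid."""
    ts = np.linspace(0.0, t_max, n_t)
    return max(np.linalg.norm(C @ expm(A * t) @ B, 2) for t in ts)

# Sanity check of (6) on a random stable system:
rng = np.random.default_rng(0)
n, m, p = 5, 2, 2
A = rng.standard_normal((n, n))
A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # shift to make A stable
B = rng.standard_normal((n, p)); C = rng.standard_normal((m, n))
K, M0 = kreiss_norm_grid(A, B, C), transient_peak_norm_grid(A, B, C)
assert K <= M0 * (1 + 1e-6) and M0 <= np.e * n * K * (1 + 1e-6)
```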
In order to interpret the expression \(\mathcal{M}_{0}(G)\) on the right, we consider vector norms on \(L^{p}([0,\infty),\mathbb{R}^{n})\) defined as
\[\|u\|_{p,q}=\left(\int_{0}^{\infty}|u(t)|_{q}^{p}dt\right)^{1/p},\]
where \(|u|_{q}=(\sum_{i=1}^{n}|u_{i}|^{q})^{1/q}\) is the standard vector \(q\)-norm in \(\mathbb{R}^{n}\), and where \(\|u\|_{\infty,q}=\sup_{t\geq 0}|u(t)|_{q}\). Then, with the terminology introduced in [15],
\[\|G\|_{(q,s),(p,r)}=\sup_{u\neq 0}\frac{\|G*u\|_{q,s}}{\|u\|_{p,r}} \tag{11}\]
are induced norms \((L^{p},\|\cdot\|_{p,r})\to(L^{q},\|\cdot\|_{q,s})\). In some cases these admit closed-form expressions, which is a prerequisite to making them amenable to computations, and even more so, optimization. By [15, (25)] one such case is
\[\|G\|_{(\infty,p),(1,r)}=\sup_{t\geq 0}\|G(t)\|_{p,r}, \tag{12}\]
where \(\|A\|_{q,p}=\sup_{x\neq 0}\|Ax\|_{q}/\|x\|_{p}\) are the usual induced matrix norms. Therefore, if we choose \(p=r=2\) in (12), then
\[\|G\|_{(\infty,2),(1,2)}=\sup_{t\geq 0}\|G(t)\|_{2,2}=\sup_{t\geq 0}\overline{ \sigma}(G(t))=\mathcal{M}_{0}(G).\]
We have proved
**Proposition 1**.: \(\mathcal{M}_{0}(G)\) _is an induced system norm. Given the vector input \(w(t)\) satisfying \(\int_{0}^{\infty}|w(t)|_{2}dt=\int_{0}^{\infty}\left(\sum_{k=1}^{p}|w_{k}(t)|^{ 2}\right)^{1/2}dt=1\), it measures the output \(z=G*w\) by the vector signal norm_
\[\sup_{t\geq 0}|z(t)|_{2}=\sup_{t\geq 0}\left(\sum_{i=1}^{m}|z_{i}(t)|^{2}\right)^{1/2}.\]
The norm \(\mathcal{M}_{0}(G)=\|G\|_{(\infty,2),(1,2)}\) will be called the worst case transient peak norm, as it measures the peak of the time-domain response of \(G\) to a signal with finite resource consumption. Here 'response to a signal of finite resource consumption' is terminology adopted from [10].
In consequence, the expression \(\mathcal{K}(G)\) is a frequency domain lower bound of \(\mathcal{M}_{0}(G)\), and it is easy to see that \(\mathcal{K}(G)\) is a norm, which we will call the _Kreiss system norm_.
**Remark 1**.: We do not expect \(\mathcal{K}(G)\) to be an induced system norm, but it does have the property of an operator norm, as follows from Theorem 1. Indeed, let \(G_{\delta}=C(sI-(\frac{1-\delta}{1+\delta}A-I))^{-1}B\); then \(\|G_{\delta}\|_{\infty}\) is the \(L^{2}\to L^{2}\) induced system norm when we take \(\|\cdot\|_{2,2}\) as vector norm. Hence \(\|z\|_{2,2}\leq\max_{\delta\in[-1,1]}\|G_{\delta}\|_{\infty}\|w\|_{2,2}\), which due to (7) gives \(\|G*w\|_{2,2}\leq\mathcal{K}(G)\|w\|_{2,2}\).
**Remark 2**.: Suppose \(G=(A,B,C)\) is output controllable. Then for \(y_{0}\in\operatorname{im}(C)\), \(y_{0}\neq 0\), there exists \(u_{0}\) and \(t_{0}>0\) such that \(Ce^{At_{0}}Bu_{0}=y_{0}\). Then \(\mathcal{M}_{0}(G)\geq\overline{\sigma}(Ce^{At_{0}}B)\geq\|Ce^{At_{0}}Bu_{0} \|_{2}/\|u_{0}\|_{2}=\|y_{0}\|_{2}/\|u_{0}\|_{2}>0\). Some such condition is of course required, because if we take \(C=[1\ 1]\), \(B=\begin{bmatrix}1\\ -1\end{bmatrix}\), \(A=-I_{2}\), then \(Ce^{At}B=0\) for all \(t\).
**Remark 3**.: The famous estimate (upper bound due to Spijker [41])
\[K(A)\leq M_{0}(A)\leq neK(A) \tag{13}\]
holds for matrices \(A\) of size \(n\times n\), and the global minimum \(K(A)=M_{0}(A)=1\) is attained for matrices where \(e^{At}\) is a contraction in the spectral norm, and in particular, for normal matrices. For this reason \(M_{0}(A)\), and \(K(A)\), have been considered as 'measures of non-normality' of a matrix.
The following extends (13), obtained in [26, 28, 41], to system norms:
**Theorem 3**.: _We have_
\[\mathcal{K}(G)\leq\mathcal{M}_{0}(G)\leq en\mathcal{K}(G).\]
**Proof:** We have already shown in Theorem 2 that \(\mathcal{K}(G)\leq\mathcal{M}_{0}(G)\). For the upper bound estimate, take test vectors \(\xi,\eta\), and put \(q(s)=\xi^{T}C(sI-A)^{-1}B\eta\). Then

\[\begin{split}\xi^{T}Ce^{At}B\eta&=\frac{1}{2\pi j}\int_{\operatorname{Re}(s)=\mu}e^{st}\,q(s)\,ds\quad\text{(inverse Laplace)}\\ &=-\frac{1}{2\pi j}\int_{\operatorname{Re}(s)=\mu}\frac{e^{st}}{t}\,q^{\prime}(s)\,ds\quad\text{(integration by parts)}\\ &=-\frac{1}{2\pi j}\,\frac{e^{\mu t}}{t}\int_{-\infty}^{\infty}e^{j\omega t}q^{\prime}(\mu+j\omega)\,j\,d\omega,\end{split}\]

so that, choosing \(\mu=1/t\), i.e., \(\operatorname{Re}(s)=1/t\),

\[|\xi^{T}Ce^{At}B\eta|\leq\frac{e}{2\pi}\,\frac{1}{t}\int_{-\infty}^{\infty}|q^{\prime}(1/t+j\omega)|\,d\omega=\frac{e}{2\pi}\operatorname{Re}(s)\,\|q^{\prime}(\operatorname{Re}(s)+j\cdot)\|_{1}.\]
where in the last line we have chosen \(\operatorname{Re}(s)=\mu=1/t\). Since by [41] and [28] we have \(\|q^{\prime}\|_{1}\leq 2\pi n\|q\|_{\infty}\), we find
\[|\xi^{T}Ce^{At}B\eta| \leq en\operatorname{Re}(s)\sup_{\omega}|\xi^{T}C((\operatorname{ Re}(s)+j\omega)I-A)^{-1}B\eta|\] \[\leq en\sup_{\omega}\operatorname{Re}(s)|\xi^{T}C(sI-A)^{-1}B\eta|,\]
so that taking the supremum over \(\|\xi\|_{2}=1\), \(\|\eta\|_{2}=1\), and then over \(t\geq 0\), gives the right hand estimate.
The question as to whether there exists a global bound attained by both criteria at the same 'normal' \(G(s)\) is more involved. We have the following
**Proposition 2**.: _We have the lower bound \(\overline{\sigma}(CB)\leq\mathcal{K}(G)\leq\mathcal{M}_{0}(G)\)._
**Proof:** For \(x>0\) we have \(\mathcal{K}(G)\geq x\overline{\sigma}(C(xI-A)^{-1}B)=\overline{\sigma}(Cx(xI-A)^{-1}B)\), and since the matrix \(x(xI-A)^{-1}\) approaches \(I\) as \(x\to\infty\), we get the lower bound \(\overline{\sigma}(CB)\) in the limit \(x\to\infty\). \(\square\)
For \(G=(sI-A)^{-1}\) this reproduces the bound \(K(A)\geq 1\), which as we know is attained when \(e^{At}\) is a contraction in the spectral norm, and in particular, for normal matrices. The question is therefore whether, or for which systems \(G=(A,B,C)\), the bound \(\overline{\sigma}(CB)\) is attained. It is clear from Proposition 2 that \(\mathcal{M}_{0}(G)=\overline{\sigma}(CB)\) implies equality \(\overline{\sigma}(CB)=\mathcal{K}(G)=\mathcal{M}_{0}(G)\). However, in the matrix case the reverse argument is also true, i.e., \(K(A)=1\) implies \(M_{0}(A)=1\) as a consequence of the Hille-Yosida theorem [18]. The analogous result for systems is no longer valid.
**Example 1**.: If we consider a stable SISO system
\[G(s)=\frac{c_{n-1}s^{n-1}+\cdots+c_{0}}{s^{n}+a_{n-1}s^{n-1}+\cdots+a_{0}}\]
then in controllable companion form
\[A=\begin{bmatrix}0&1&0&\dots&0\\ 0&0&1&\dots&0\\ \vdots&&&\ddots&\vdots\\ 0&0&\dots&0&1\\ -a_{0}&-a_{1}&\dots&&-a_{n-1}\end{bmatrix},\qquad B=\begin{bmatrix}0\\ \vdots\\ 0\\ 1\end{bmatrix},\]
\(C=[c_{0}\,\dots\,c_{n-1}]\). If the degree of the numerator is \(n-1\), then we can normalize by passing to the system \(G/c_{n-1}\); then \(\overline{\sigma}(CB)=1\), and we may ask whether there are choices of the \(a_{i}\), \(c_{i}\) where this bound is attained. However, if the degree of the numerator is \(\leq n-2\), then always \(CB=0\), so here the lower bound is never attained.
This leaves now two situations. In case \(\overline{\sigma}(CB)=0\) one may wonder under what conditions \(\mathcal{K}(G)=\mathcal{M}_{0}(G)>0\) is satisfied, and whether this holds under normality of \(A\). On the other hand, when \(\overline{\sigma}(CB)>0\) one may ask under what conditions the lower bound is attained, whether attainment \(\overline{\sigma}(CB)=\mathcal{K}(G)\) implies attainment \(\overline{\sigma}(CB)=\mathcal{M}_{0}(G)\), and again, whether this is linked to normality of \(A\).
The following example shows that in the case \(\overline{\sigma}(CB)=0\), normality of \(A\) is no longer the correct answer.
**Example 2**.: Take \(C=[1\ 1]\), \(B=\begin{bmatrix}1\\ -1\end{bmatrix}\), \(A=\begin{bmatrix}-\lambda&0\\ 0&-\mu\end{bmatrix}\) with \(0<\lambda<\mu\). Then \(\overline{\sigma}(CB)=0\), but \(Ce^{At}B=e^{-\lambda t}-e^{-\mu t}\neq 0\) for \(t>0\), so that \(\mathcal{M}_{0}(G)>0\), and by Theorem 3 we also have \(\mathcal{K}(G)>0\). In particular, the transient curve \(t\mapsto\overline{\sigma}(Ce^{At}B)\) is not monotone. For \(\lambda=1\), \(\mu=2\) we obtain \(\mathcal{K}(G)=0.1716<\mathcal{M}_{0}(G)=0.25\).
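The numbers in Example 2 can be checked in closed form. The curve \(Ce^{At}B=e^{-t}-e^{-2t}\) peaks at \(t=\ln 2\), giving \(\mathcal{M}_{0}(G)=\tfrac{1}{2}-\tfrac{1}{4}=0.25\). Moreover \(G(s)=(s+1)^{-1}-(s+2)^{-1}=((s+1)(s+2))^{-1}\), and since \(|(x+j\omega+1)(x+j\omega+2)|\geq(x+1)(x+2)\), the supremum defining \(\mathcal{K}(G)\) is attained on the positive real axis:

\[\mathcal{K}(G)=\sup_{x>0}\frac{x}{(x+1)(x+2)}=\frac{\sqrt{2}}{(\sqrt{2}+1)(\sqrt{2}+2)}=3-2\sqrt{2}\approx 0.1716.\]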
In case \(\overline{\sigma}(CB)>0\), the situation is also fairly unsettled, as the following examples underline.
**Example 3**.: Take \(B=[0\ 0\ 1]^{T}\), \(C=[1\ 1\ 1]\), \(a_{0}=0.9608\), \(a_{1}=1\), \(a_{2}=1\) in the controllable companion form above, which gives \(G=(s^{2}+s+1)/(s^{3}+s^{2}+s+0.9608)\); then \(|CB|=1\) and \(\mathcal{K}(G)=\mathcal{M}_{0}(G)=1\). Here the lower bound is attained, while \(K((sI-A)^{-1})=1.17\), \(M_{0}((sI-A)^{-1})=1.43\), so that \(e^{At}\) is not a contraction, and in particular \(A\) is not normal.
**Example 4**.: We give an example of a normal matrix \(A\), where \(\overline{\sigma}(CB)>0\), but \(\mathcal{K}<\mathcal{M}_{0}\). Change Example 2 by putting \(C=[1,1]\), \(B=[1;-1+\epsilon]\). Then \(\overline{\sigma}(CB)=\epsilon\). We get \(Ce^{At}B=e^{-\lambda t}-(1-\epsilon)e^{-\mu t}\). With \(\mu=2\), \(\lambda=1\), \(\epsilon=0.25\) we get \(0.25=\overline{\sigma}(CB)<\mathcal{K}(G)=0.3006<\mathcal{M}_{0}(G)=0.3333\).
**Example 5**.: Now we give an example where \(\mathcal{K}(G)=\overline{\sigma}(CB)=1\), but \(\mathcal{K}(G)<\mathcal{M}_{0}(G)\). Take \(A=[-q,p;0,-q]\), \(B=[b_{1};b_{2}]\), \(C=[c_{1},c_{2}]\) with \(b_{1}c_{1}+b_{2}c_{2}=1\). Then with the choices \(q=0.6509\), \(p=0.8746\), \(C=[-19.5450,-19.1251]\), \(B=[-0.2592;0.2126]\), we get \(\mathcal{K}(G)=1<\mathcal{M}_{0}(G)=1.72\).
**Example 6**.: The failure in Example 5 is again not related to failure of normality of \(A\), because the same may occur with diagonal \(A\) as seen with \(G(s)=\dfrac{s-2.032}{s^{2}+0.8456s+0.1769}\).
**Example 7**.: Example 5 can be used to analyze the special case considered in [8], where the \(C\)-matrix is \(J=[I_{n},0]\) and the \(B\)-matrix is \(J^{T}\). Starting out from the system in Example 5, we have to find a regular \(2\times 2\) matrix \(T\) such that \(CT^{-1}=[1,0]=J\) and \(TB=[1;0]=J^{T}\). That requires \(t_{11}=c_{1}\), \(t_{12}=c_{2}\) and \(c_{1}b_{1}+c_{2}b_{2}=1\). Moreover, we need to fix \(t_{21},t_{22}\) such that \(t_{21}b_{1}+t_{22}b_{2}=0\). That gives for \(b_{1}\neq 0\):
\[T=\begin{bmatrix}c_{1}&c_{2}\\ -\frac{t_{22}b_{2}}{b_{1}}&t_{22}\end{bmatrix}\]
which is regular for \(t_{22}\neq 0\). Now \(G=Ce^{At}B=CT^{-1}Te^{At}T^{-1}TB=Je^{TAT^{-1}t}J^{T}\), where \(A\) is as in the previous example. Then we have \(1=\mathcal{K}(G)<\mathcal{M}_{0}(G)\), so the special structure \(C=B^{T}=J\) used in [8] does not help.
**Example 8**.: The case \(C=I\) does not help either. Let \(A=[-0.0939,1.0000;0,-0.0939]\), \(B=[0.4722,0.7973;0.0339,0.5553]\), \(C=I_{2}\), then \(\overline{\sigma}(CB)=1.0577<\mathcal{K}(G)=1.9634<\mathcal{M}_{0}(G)=2.5226\).
**Example 9**.: As we have seen, even for SISO systems with normal matrix \(A\) we cannot expect to get equality \(\mathcal{K}(G)=\mathcal{M}_{0}(G)=\overline{\sigma}(CB)\). There is, however, a special case when \(A\) is diagonalizable with real eigenvalues \(\lambda_{i}<0\) and all \(c_{i}b_{i}\) have equal signs, say \(c_{i}b_{i}>0\). Then by Laguerre's theorem the exponential polynomial \(M^{\prime}(t)=\sum_{i=1}^{n}\lambda_{i}c_{i}b_{i}e^{\lambda_{i}t}\) does not change sign, hence the curve \(M(t)=\sum_{i=1}^{n}c_{i}b_{i}e^{\lambda_{i}t}\) has no extrema and is therefore monotone decreasing, in which case the maximum is attained at \(t=0\) with value \(\sum_{i=1}^{n}c_{i}b_{i}\).
**Remark 4**.: In the same vein, we also mention conditions given in [44], under which any induced system norm attains the value \(\overline{\sigma}(CB)\). Since this applies to \(\mathcal{M}_{0}(G)\), this line gives cases of attainment.
**Remark 5**.: Hausdorff's numerical abscissa \(\omega(A)\) satisfies \(\|e^{tA}\|\leq e^{\omega(A)t}\), hence \(e^{tA}\) is a contraction semigroup iff \(\omega(A)\leq 0\). Since \(\omega(A)=\frac{d}{dt}\|e^{tA}\|\big{|}_{t=0}\), the slope of the curve \(t\mapsto\|e^{tA}\|\) at \(t=0\) in that case conveys global information on the entire curve, and on the semigroup \(e^{tA}\). This is why in the fluid flow literature it has been suggested that minimizing \(\omega(A(K))\) in closed loop might be a way to prevent transition to turbulence [47, 46, 45, 22, 40, 32]. Due to \(\omega(A)=\frac{1}{2}\overline{\lambda}(A+A^{T})\) this would have the additional advantage of being an eigenvalue optimization problem, easier to handle than (8). However, in [8] we demonstrated that minimizing \(\omega(A(K))\) in closed loop does not have the desired effect of reducing transients.
One way to extend \(\omega(A)\) to systems is \(\omega(G)=\frac{d}{dt}\|Ce^{tA}B\|\,\Big{|}_{t=0}\), because then \(\omega(G)\leq 0\) continues to be a necessary condition for attainment \(\mathcal{M}_{0}(G)=\overline{\sigma}(CB)\). However, unlike the matrix case, it is no longer sufficient. Before showing this, we address necessity of attainment for the Kreiss norm:
**Proposition 3**.: _A necessary condition for attainment of the lower bound \(\mathcal{K}(G)=\overline{\sigma}(CB)\) is \(\overline{\lambda}(Y+Y^{T})\leq 0\), where \(Y=Q^{T}CABB^{T}C^{T}Q\), and where the columns of \(Q\) form an orthonormal basis of the maximum eigenspace of \(CBB^{T}C^{T}\)._
**Proof:** Let \(A(\eta)=\frac{\eta}{2-\eta}A-I\) and put \(G(\eta,s)=C(sI-A(\eta))^{-1}B\), then (7) can be re-written as \(\mathcal{K}(G)=\max_{\eta\in[0,2]}\|G(\eta,\cdot)\|_{\infty}\). Now \(\eta=0\) contributes the value \(\overline{\sigma}(CB)\) to the maximum over \(\eta\in[0,2]\), because \(A(0)=-I\), and therefore \(G(0,s)=C(sI-A(0))^{-1}B=(s+1)^{-1}CB\), hence \(\|G(0,\cdot)\|_{\infty}=\max_{\omega}|(j\omega+1)^{-1}|\,\overline{\sigma}( CB)=\overline{\sigma}(CB)\), attained at the single frequency \(\omega=0\). In consequence, due to our hypothesis \(\mathcal{K}(G)=\overline{\sigma}(CB)>0\), the slope of \(\phi:\eta\mapsto\|G(\eta,\cdot)\|_{\infty}\) at \(\eta=0\) must be non-positive, as otherwise \(\|G(\eta,\cdot)\|_{\infty}=\|C(sI-A(\eta))^{-1}B\|_{\infty}\) would attain values \(>\overline{\sigma}(CB)\) for some small \(\eta>0\).
To compute \(\phi^{\prime}(0)\), observe that since \(\|G(0,\cdot)\|_{\infty}\) is attained at the single frequency \(\omega=0\), we have
\[\begin{split}\phi^{\prime}(0)&=\|\cdot\|_{\infty}^{\prime}\Big(G(0,\cdot),\tfrac{d}{d\eta}G(\eta,\cdot)\big|_{\eta=0}\Big)=\overline{\sigma}^{\prime}\Big(G(0,j0),\tfrac{d}{d\eta}G(\eta,j0)\big|_{\eta=0}\Big)\\ &=\overline{\sigma}^{\prime}\Big(CB,\,CA(\eta)^{-1}\tfrac{d}{d\eta}A(\eta)\,A(\eta)^{-1}B\big|_{\eta=0}\Big)=\overline{\sigma}^{\prime}(CB,\tfrac{1}{2}CAB)\\ &=\tfrac{1}{4}\,\overline{\lambda}(Q^{H}(CAB)P+P^{H}(B^{T}A^{T}C^{T})Q)\\ &=\frac{1}{4\overline{\sigma}(CB)}\overline{\lambda}(Q^{T}C(ABB^{T}+BB^{T}A^{T})C^{T}Q),\end{split}\]

where the second line uses \(G(\eta,j0)=-CA(\eta)^{-1}B\) together with \(\tfrac{d}{d\eta}\,[A(\eta)^{-1}]=-A(\eta)^{-1}\tfrac{d}{d\eta}A(\eta)\,A(\eta)^{-1}\) and \(\tfrac{d}{d\eta}A(\eta)=\tfrac{2}{(2-\eta)^{2}}A\), which at \(\eta=0\) (where \(A(0)^{-1}=-I\)) gives the direction \(\tfrac{1}{2}CAB\); the third line uses Lemma 3 below, based on an SVD \(G(0,0)=CB=\begin{bmatrix}Q&R\end{bmatrix}\begin{bmatrix}\overline{\sigma}(CB)I&\\ &\Sigma\end{bmatrix}\begin{bmatrix}P^{T}\\ T^{T}\end{bmatrix}\). The last line follows by re-substituting \(Q^{H}CB=\overline{\sigma}(CB)P^{H}\).
Note that this leads back to \(\omega(A)\leq 0\) for \(C=B=I\).
**Lemma 2**.: _The Clarke subdifferential of the maximum singular value function is \(\partial\overline{\sigma}(G)=\{QYP^{H}:Y\succeq 0,\operatorname{Tr}(Y)=1\}\), where \(G=\begin{bmatrix}Q&R\end{bmatrix}\begin{bmatrix}\overline{\sigma}(G)\\ &\Sigma\end{bmatrix}\begin{bmatrix}P^{H}\\ T^{H}\end{bmatrix}\) is a SVD of \(G\)._
**Proof:** From \(\overline{\sigma}(G)^{2}=\overline{\lambda}(GG^{H})\) we get \(2\overline{\sigma}(G)\partial\overline{\sigma}(G)=F^{\prime}(G)^{*}\partial \overline{\lambda}(F(G))\), where \(F:\mathbb{M}^{n,m}\rightarrow\mathbb{S}^{m}\) is the mapping \(F(X)=XX^{H}\). Now \(\partial\overline{\lambda}(GG^{H})=\{QYQ^{H}:Y\succeq 0,\operatorname{Tr}(Y)=1\}\), where the columns of \(Q\) in the SVD form an orthonormal basis of the maximum eigenspace of \(GG^{H}\). Furthermore, \(F^{\prime}(G)D=GD^{H}+DG^{H}\), hence for a test vector \(S\in\mathbb{S}^{m}\) we have by the definition of the adjoint \(\langle D,F^{\prime}(G)^{*}S\rangle=\langle F^{\prime}(G)D,S\rangle=\operatorname {Re}\operatorname{Tr}S(GD^{H}+DG^{H})=2\operatorname{Re}\operatorname{Tr}SDG^{ H}=2\operatorname{Re}\operatorname{Tr}\left(SG\right)^{H}D=\langle D,2SG\rangle\), so that the action of the adjoint is \(F^{\prime}(G)^{*}S=2SG\). On substituting \(S=QYQ^{H}\in\partial\overline{\lambda}(GG^{H})\), we obtain \(\partial\overline{\sigma}(G)=\frac{1}{2\overline{\sigma}(G)}\{2QYQ^{H}G:Y \succeq 0,\operatorname{Tr}(Y)=1\}\). Now since \(Q^{H}G=\overline{\sigma}(G)P^{H}\) from the SVD, we obtain the claimed \(\partial\overline{\sigma}(G)=\{QYP^{H}:Y\succeq 0,\operatorname{Tr}(Y)=1\}\).
**Lemma 3**.: _The Clarke directional derivative is \(\overline{\sigma}^{\prime}(G,D)=\frac{1}{2}\overline{\lambda}(Q^{H}DP+P^{H}D^{H}Q)\)._
**Proof:** We have \(\overline{\sigma}^{\prime}(G,D)=\max\{\langle\Phi,D\rangle:\Phi\in\partial \overline{\sigma}(G)\}=\max\{\operatorname{Re}\operatorname{Tr}\Phi^{H}D: \Phi\in\partial\overline{\sigma}(G)\}=\max\{\operatorname{Re}\operatorname{Tr} PYQ^{H}D:Y\succeq 0,\operatorname{tr}(Y)=1\}=\max\{\frac{1}{2}\operatorname{Re} \operatorname{Tr}Y(Q^{H}DP+P^{H}D^{H}Q):Y\succeq 0,\operatorname{Tr}(Y)=1\}=\frac{1}{2} \overline{\lambda}(Q^{H}DP+P^{H}D^{H}Q)\).
On re-substituting \(Q^{H}G=\overline{\sigma}(G)P^{H}\), we can also write this in the form \(\overline{\sigma}^{\prime}(G,D)=\frac{1}{2\overline{\sigma}(G)}\overline{ \lambda}(Q^{H}DG^{H}Q+Q^{H}GD^{H}Q)=\frac{1}{2\overline{\sigma}(G)}\overline{ \lambda}(Q^{H}\left[DG^{H}+GD^{H}\right]Q)\).
The following is an immediate consequence of the finite maximum rule for the subdifferential.
**Lemma 4**.: _Suppose \(\|G\|_{\infty}\) is attained at the finitely many frequencies \(\omega_{1},\ldots,\omega_{r}\). Then \(\partial\|\cdot\|_{\infty}(G)=\{\sum_{k=1}^{r}Q_{k}Y_{k}P_{k}^{H}:Y_{k}\succeq 0,\sum_{k=1}^{r}\operatorname{Tr}(Y_{k})=1\}\), where for every \(k\) we let \(G(j\omega_{k})=\begin{bmatrix}Q_{k}&R_{k}\end{bmatrix}\operatorname{diag}(\|G\|_{\infty}I,\Sigma_{k})\begin{bmatrix}P_{k}&T_{k}\end{bmatrix}^{H}\) be an SVD of \(G(j\omega_{k})\)._
So far we have worked with the \(\ell_{2}\) vector norm. However, (12) shows that various other choices of vector norms could lead to numerically exploitable expressions \(\mathcal{K},\mathcal{M}\). Choosing test vectors \(\xi\in\ell_{p^{\prime}}\), \(\eta\in\ell_{r}\) gives
\[\sup_{\mathrm{Re}(s)>0}\mathrm{Re}(s)\|C(sI-A)^{-1}B\|_{r,p}\leq\sup_{t\geq 0} \|Ce^{At}B\|_{r,p},\]
where \(\|M\|_{r,p}\) is the \(\ell_{p}\to\ell_{r}\) induced matrix norm. This may lead to other criteria compatible with the goal of sensing \(L^{1}\to L^{\infty}\) amplification. Tractable expressions are obtained e.g. for \(p=1\), \(r=\infty\), which corresponds to taking \(\xi\in\ell_{1}\), \(\eta\in\ell_{1}\). Here we get the estimate
\[\sup_{\mathrm{Re}(s)>0}\max_{ik}\left|c_{i}\mathrm{Re}(s)(sI-A)^{-1}b_{k} \right|\leq\sup_{t\geq 0}\max_{ik}\left|c_{i}e^{At}b_{k}\right|\]
which reads as
\[\max_{ik}\mathcal{K}(c_{i}e^{A\bullet}b_{k})\leq\max_{ik}\mathcal{M}_{0}(c_{i} e^{A\bullet}b_{k})\]
with a finite maximum of SISO Kreiss constants and transient growth norms involved. This practical entry-wise Kreiss norm offers the potential to weight some channels more than others. An upper bound of \(\max_{ik}\mathcal{M}_{0}(c_{i}e^{A\bullet}b_{k})\) is readily obtained as \(en\max_{ik}\mathcal{K}(c_{i}e^{A\bullet}b_{k})\) from Theorem 3.
### Other vector norms: \(\ell_{\infty}\)
Now take (9), but with \(|\xi|_{\infty}\leq 1,|\eta|_{\infty}\leq 1\). We get on the right
\[|\xi^{T}(Ce^{At}B)\eta|\leq\|(Ce^{At}B)\eta\|_{1}\leq\|Ce^{At}B\|_{1,\infty}\]
because the dual norm to \(\ell_{\infty}\) is \(\ell_{1}\). However, this norm is not very helpful when the number of columns is large, because for \(A\in\mathbb{R}^{m\times n}\) we have:

\[\|A\|_{1,\infty}=\max_{r\in\{-1,1\}^{n}}\|Ar\|_{1}.\]
With the above technique, we easily get the following estimate (with \(p\) the number of inputs of \(G\))

\[\max_{r\in\{-1,1\}^{p}}\mathcal{K}(Gr)\leq\max_{r\in\{-1,1\}^{p}}\mathcal{M}_{0}(Gr).\]
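A brute-force sketch of the combinatorial formula above (our own helper; it enumerates all \(2^{n}\) sign vectors and is therefore only usable for small input dimensions) reads:

```python
import itertools
import numpy as np

def norm_1_inf(A):
    """||A||_{1,inf} = max over r in {-1,1}^n of ||A r||_1 for A in R^{m x n};
    the enumeration is exponential in n, which is why this norm is
    impractical beyond small dimensions."""
    n = A.shape[1]
    return max(np.abs(A @ np.array(r)).sum()
               for r in itertools.product((-1.0, 1.0), repeat=n))
```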
## 4. Peak-to-peak norm for persistent perturbations
In this section, we discuss the choice \(p=\infty\), \(q=r=1\) in Young's inequality (9), which will allow us to address the case of persistent perturbations \(w\) in (3), when for an input \(\|w\|_{\infty,\infty}\leq 1\), we measure the response by the same signal norm \(\|G*w\|_{\infty,\infty}\). For test vectors \(\xi,\eta\) and a one-dimensional signal \(u\) we get from (9)
\[|\xi^{T}C(sI-A)^{-1}B\eta\,u(s)|\leq\|e^{-st}\|_{\infty}\|\xi^{T}Ce^{At}B\eta \|_{1}\|u\|_{1}=\|\xi^{T}Ce^{At}B\eta\|_{1}\|u\|_{1}.\]
Letting the scalar signal \(u(t)\) of unit \(L_{1}\)-norm approach the \(\delta\)-distribution, we get
\[|\xi^{T}C(sI-A)^{-1}B\eta|\leq\|\xi^{T}Ce^{At}B\eta\|_{1}=\int_{0}^{\infty}| \xi^{T}Ce^{At}B\eta|dt.\]
Now let \(m\) be the number of outputs, \(p\) the number of inputs, and let \(g_{ik}(t)\) be the entries of the matrix \(Ce^{At}B\), then with \(\|\xi\|_{2}\leq 1\) and \(\|\eta\|_{2}\leq 1\) we get
\[\int_{0}^{\infty}|\xi^{T}Ce^{At}B\eta|dt =\int_{0}^{\infty}\left|\sum_{i=1}^{m}\sum_{k=1}^{p}\xi_{i}g_{ik}( t)\eta_{k}\right|dt\] \[\leq\sum_{i=1}^{m}|\xi_{i}|\sum_{k=1}^{p}|\eta_{k}|\int_{0}^{ \infty}|g_{ik}(t)|dt\] \[=\sum_{i=1}^{m}|\xi_{i}|\sum_{k=1}^{p}\|g_{ik}\|_{1}|\eta_{k}|\] \[\leq\left(\sum_{i=1}^{m}|\xi_{i}|^{2}\right)^{1/2}\left(\sum_{i=1 }^{m}\left(\sum_{k=1}^{p}\|g_{ik}\|_{1}|\eta_{k}|\right)^{2}\right)^{1/2}\] \[\leq\left(m\max_{i=1,\ldots,m}\left(\sum_{k=1}^{p}\|g_{ik}\|_{1} \right)^{2}\right)^{1/2}=\sqrt{m}\max_{i=1,\ldots,m}\sum_{k=1}^{p}\|g_{ik}\|_ {1}.\]
When we recall that the time-domain peak-to-peak, or peak-gain, system norm is defined as
\[\|G\|_{\text{pk\_gn}}=\max_{u\neq 0}\frac{\|G*u\|_{\infty,\infty}}{\|u\|_{ \infty,\infty}}=\max_{i=1,\ldots,m}\sum_{j=1}^{p}\|g_{ij}(t)\|_{1},\]
then we have shown the estimate \(\|G\|_{\infty}\leq\sqrt{m}\|G\|_{\text{pk\_gn}}\) for a system \(G\) with \(m\) outputs. Since in this case input signals do not have to vanish at infinity, this estimate holds also when \(G\) has direct transmission. More generally even, as no structure of the \(g_{ij}\) is used, the estimate remains true when \(g_{ik}\in L^{1}([0,\infty),\mathbb{R}^{n})\), and this can be extended to \((m\times p)\)-valued Radon measures.
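For reference, the peak-gain norm itself is readily approximated by quadrature of the impulse response. The following sketch (our own helper; the horizon `t_max` must cover the decay of the stable impulse response) implements \(\max_{i}\sum_{j}\|g_{ij}\|_{1}\):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

def peak_gain_norm(A, B, C, t_max=100.0, n_t=5000):
    """Approximate ||G||_pk_gn = max_i sum_j int_0^inf |g_ij(t)| dt
    by the trapezoidal rule on [0, t_max]."""
    ts = np.linspace(0.0, t_max, n_t)
    Gabs = np.array([np.abs(C @ expm(A * t) @ B) for t in ts])  # shape (n_t, m, p)
    L1 = trapezoid(Gabs, ts, axis=0)  # entrywise L1-norms ||g_ij||_1
    return L1.sum(axis=1).max()       # max over output rows i of the row sums
```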
Let us now look at the reverse estimate, which is analogous to the right hand estimate in the Kreiss matrix theorem (Theorem 3). Consider a stable finite-dimensional strictly proper system
\[G:\quad\begin{aligned} \dot{x}&=Ax+Bu\\ y&=Cx\end{aligned}\]
where \(G(t)=Ce^{At}B\). Let \(g_{ij}(t)=c_{i}e^{At}b_{j}\), where \(c_{i}\) is the \(i\)th row of \(C\), \(b_{j}\) the \(j\)th column of \(B\), then \(\|G\|_{\text{pk\_gn}}=\max_{i=1,\ldots,m}\sum_{j=1}^{p}\|g_{ij}\|_{1}=\sum_{j= 1}^{p}\|c_{i}e^{At}b_{j}\|_{1}\) for some \(i\).
We now relate the peak-gain norm to the Hankel singular values of \(G\). The following was proved in the SISO case \(p=m=1\) in [11, Thm. 2] for discrete systems, and in [17, pp. 11-12] for continuous SISO systems, where in the latter reference the idea of proof is attributed to I. Gohberg.
**Lemma 6**.: _Let \(G\) be real-rational, strictly proper and stable, with \(p\) outputs and McMillan degree \(n\). Then_
\[\|G\|_{\text{pk\_gn}}\leq 2p^{1/2}\left(\sigma_{H1}+\cdots+\sigma_{Hn}\right), \tag{14}\]
_where \(\sigma_{Hi}\) are the Hankel singular values of \(G\). In particular, \(\|G\|_{\text{pk\_gn}}\leq 2np^{1/2}\|G\|_{\infty}\)._
**Proof:** We have for the \(i\in\{1,\ldots,m\}\) where the maximum is attained
\[\begin{split}\|G\|_{\text{pk\_gn}}&=\sum_{j=1}^{p}\|c_{i}e^{At}b_{j}\|_{1}=2\sum_{j=1}^{p}\int_{0}^{\infty}\left|c_{i}e^{2A\tau}b_{j}\right|d\tau=2\int_{0}^{\infty}\sum_{j=1}^{p}\left|(e^{A^{T}t}c_{i}^{T})^{T}(e^{At}b_{j})\right|dt\\ &=2\int_{0}^{\infty}\sum_{j=1}^{p}\left|\sum_{\ell=1}^{n}(e^{A^{T}t}c_{i}^{T})_{\ell}\,(e^{At}b_{j})_{\ell}\right|dt\\ &\leq 2\left(\int_{0}^{\infty}\sum_{j=1}^{p}\sum_{\ell=1}^{n}|(e^{A^{T}t}c_{i}^{T})_{\ell}|^{2}dt\right)^{1/2}\left(\int_{0}^{\infty}\sum_{j=1}^{p}\sum_{\ell=1}^{n}|(e^{At}b_{j})_{\ell}|^{2}dt\right)^{1/2}\\ &=2\left(\int_{0}^{\infty}\sum_{j=1}^{p}\operatorname{Tr}(e^{A^{T}t}c_{i}^{T}c_{i}e^{At})dt\right)^{1/2}\left(\int_{0}^{\infty}\sum_{j=1}^{p}\operatorname{Tr}(e^{At}b_{j}b_{j}^{T}e^{A^{T}t})dt\right)^{1/2}\\ &=2p^{1/2}\left(\int_{0}^{\infty}\operatorname{Tr}(e^{A^{T}t}c_{i}^{T}c_{i}e^{At})dt\right)^{1/2}\left(\int_{0}^{\infty}\operatorname{Tr}(e^{At}BB^{T}e^{A^{T}t})dt\right)^{1/2}.\end{split}\]
Recall that the observability and controllability Gramians of the system \(G\) are
\[W_{o}=\int_{0}^{\infty}e^{A^{T}t}C^{T}Ce^{At}dt,\quad W_{c}=\int_{0}^{\infty}e^ {At}BB^{T}e^{A^{T}t}dt.\]
Now \(\operatorname{Tr}\!\int_{0}^{\infty}e^{A^{T}t}c_{i}^{T}c_{i}e^{At}dt\leq \operatorname{Tr}\int_{0}^{\infty}e^{A^{T}t}C^{T}Ce^{At}dt\) follows from \(c_{i}^{T}c_{i}\preceq C^{T}C\) by applying a congruence transformation with \(e^{At}\). Hence \(\|G\|_{\text{pk\_gn}}\leq 2p^{1/2}\left[\operatorname{Tr}(W_{o})\operatorname{Tr}(W_{c} )\right]^{1/2}\). Now if we take a balanced realization, then \(W_{o}=W_{c}=\operatorname{diag}(\sigma_{H1},\ldots,\sigma_{Hn})\) for the Hankel singular values \(\sigma_{H_{1}}\geq\cdots\geq\sigma_{Hn}\), hence \(\|G\|_{\text{pk\_gn}}\leq 2\sqrt{p}(\sigma_{H1}+\cdots+\sigma_{Hn})\leq 2p^{1/2}n \sigma_{H1}\leq 2p^{1/2}n\|G\|_{\infty}\) for a system without direct transmission. This uses \(\sigma_{H1}\leq\|G\|_{\infty}\).
Note, however, that by the Enns-Glover bound we have
\[\|G\|_{\infty}\geq\max\{\overline{\sigma}(D),\sigma_{H1}\}\]
for the maximum Hankel singular value \(\sigma_{H1}\) of \(G=(A,B,C,D)\), so this holds also for systems with direct transmission. Indeed, if we define the Hankel semi-norm of a system \(G\) as
\[\|G\|_{H}=\sup\left\{\frac{\|G*u\|_{L^{2}(T,\infty)}}{\|u\|_{L^{2}[0,T]}}:T>0,\ u\in L^{2}[0,\infty)\ \text{supported in}\ [0,T]\right\},\]
then \(\|G\|_{H}=\sigma_{H1}\) for the maximum Hankel singular value. But with this formulation, it is immediate to see that \(\|G\|_{H}\leq\|G\|_{\infty}\), when we recall that \(\|G\|_{\infty}\) is the \(L^{2}\)-operator norm.
Therefore we get for a system with direct transmission
\[\|G\|_{\text{pk\_gn}}\leq\|G-D\|_{\text{pk\_gn}}+\|D\|_{\text{pk\_gn}}\leq 2p^{1/2}n\sigma_{H1}+p^{1/2}\overline{\sigma}(D)\leq(2n+1)p^{1/2}\|G\|_{\infty}\]
using the fact that \(\sigma_{H1}\leq\|G\|_{\infty}\) and \(\overline{\sigma}(D)\leq\|G\|_{\infty}\). Altogether, we have proved the following estimates for the \(H_{\infty}\)- and peak-gain norms stated in [9]:
**Theorem 4**.: _Let \(G\) be a stable real-rational system with \(n\) poles, \(p\) inputs and \(m\) outputs. Then_
\[m^{-1/2}\|G\|_{\infty}\leq\|G\|_{\text{pk\_gn}}\leq(2n+1)p^{1/2}\|G\|_{\infty}.\]
A large variety of synthesis experiments based on the peak-gain norm \(\|G\|_{\text{pk\_gn}}\) has been presented in [9], so that our experiments here may focus on \(L_{1}\)-disturbances.
### Noise as perturbation
In this section we consider the case \(w\in L^{2}\), \(G*w\in L^{\infty}\), where we can rely on [15]. Consider for instance \(\|G\|_{(\infty,2),(2,2)}=\lambda_{\max}(CQC^{T})^{1/2}\), where \(Q\succeq 0\) is the unique solution of the Lyapunov equation \(AQ+QA^{T}+BB^{T}=0\). This norm can be optimized directly using a technique similar to [16].
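In particular, this quantity is computable from a single Lyapunov solve, as in the sketch below (our own helper; we use the square-root convention for the induced \(L^{2}\to L^{\infty}\) norm):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def l2_to_linf_norm(A, B, C):
    """sqrt(lambda_max(C Q C^T)) with Q >= 0 solving A Q + Q A^T + B B^T = 0."""
    Q = solve_continuous_lyapunov(A, -B @ B.T)
    return np.sqrt(np.max(np.linalg.eigvalsh(C @ Q @ C.T)))
```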
## 5. Applications to nonlinear dynamics with limit cycle attractor
In this section and the following, we consider applications illustrating the use of the Kreiss system norm for both analysis and feedback control design. The techniques are general and applicable to a large variety of nonlinear controlled systems. We recall that in all tests the results are certified a posteriori, as Kreiss norm optimization is based on a heuristic.
### Study of 2nd-order dynamics with limit cycle attractor
We start with the model of Brunton and Noack [14], which is a low-order illustration of a dynamic mechanisms known in oscillator flow, observed for instance on a larger scale in Navier-Stokes equations. Examples of this type include fluid flow around a cavity or a cylinder [27, 23]. The model is of the form
\[\left\{\begin{array}{rcl}\dot{x}&=&\begin{bmatrix}\sigma_{u}&-\omega_{u}\\ \omega_{u}&\sigma_{u}\end{bmatrix}x+B_{w}w+Bu\\ w&=&\phi(x)\\ y&=&Cx\end{array}\right.\,, \tag{15}\]

with \(B_{w}:=I\), \(B:=[0\ g]^{T}\), \(C:=[0\ 1]\),

\[\phi(x):=\alpha_{u}\|x\|^{2}\begin{bmatrix}-\beta_{u}&-\gamma_{u}\\ \gamma_{u}&-\beta_{u}\end{bmatrix}x\,,\]
and \(\alpha_{u},\beta_{u}>0\). Signals \(u\) and \(y\) are control input and measured output, respectively. It is easy to verify that the triple \((A,B,C)\) is stabilizable and detectable.
Unlike transitional amplifier flows, oscillator flows are characterized by an unstable fixed point at the origin and a globally attractive limit cycle, here with radius \(\sqrt{\sigma_{u}/\alpha_{u}\beta_{u}}\). This is shown in Fig. 1 for two initial conditions inside and outside the asymptotic limit cycle for data \(\alpha_{u}=1\), \(\beta_{u}=1\), \(\omega_{u}=1\), \(\gamma_{u}=0\), \(\sigma_{u}=0.1\) and \(g=1\).
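The open-loop behavior is easily reproduced numerically; the following sketch (our own script, using the data above) integrates (15) without control and confirms convergence to the limit cycle of radius \(\sqrt{\sigma_{u}/(\alpha_{u}\beta_{u})}=\sqrt{0.1}\):

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma_u, omega_u, alpha_u, beta_u, gamma_u = 0.1, 1.0, 1.0, 1.0, 0.0
A = np.array([[sigma_u, -omega_u], [omega_u, sigma_u]])

def f(t, x):
    # open loop: u = 0, and w = phi(x) enters through B_w = I
    r2 = x @ x
    Aphi = alpha_u * r2 * np.array([[-beta_u, -gamma_u], [gamma_u, -beta_u]])
    return (A + Aphi) @ x

sol = solve_ivp(f, (0.0, 100.0), [0.05, 0.0], max_step=0.01)
r_inf = np.hypot(*sol.y[:, -1])
assert abs(r_inf - np.sqrt(sigma_u / (alpha_u * beta_u))) < 1e-2
```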
The goal is to compute a feedback controller \(u=K(s)y\) with two main design requirements. Firstly, \(K\) has to stabilize the origin, often called the base flow. Secondly, trajectories trapped in the limit cycle should be driven back to the origin with limited oscillations. Additional insight into this model in terms of fluid flow interpretation can be found in [14].
In order to mitigate the effects of nonlinearity, we minimize the Kreiss system norm in closed loop. This leads to the following min-max constrained program
\[\begin{array}{rl}\text{minimize}&\max_{\delta\in[-1,1]}\left\|J^{T}\left(sI-\left(\frac{1-\delta}{1+\delta}A_{cl}(K)-I\right)\right)^{-1}J\right\|_{\infty}\\ \text{subject to}&K\text{ robustly stabilizing, }K\in\mathscr{K}\\ &\alpha(A_{cl}(K))\leq-\eta\\ &\|W(s)GK(I+GK)^{-1}\|_{\infty}\leq 1\,.\end{array} \tag{16}\]
Here \(K\in\mathscr{K}\) means that the controller has a prescribed structure, which could be a PID, observer-based or low-order controller, a decentralized controller, as well as any control architecture assembling simple control components. The robust stability constraint on \(K\) in (16) demands stability of the entire set of matrices \(\left\{\frac{1-\delta}{1+\delta}A_{cl}-I:\ \delta\in[-1,1]\right\}\), and in particular, for \(\delta=0\), that of \(A_{cl}(K)\). Matrix \(J\) is a restriction matrix to the space of physical plant states, since transient amplification of controller states is not relevant.
We have \(J:=I_{n}\) for a static feedback controller and \(J:=[I_{n},\,0_{n\times n_{K}}]^{T}\) for an \(n_{K}\)th-order output-feedback controller (see also Examples 7 and 8).
The notation \(\alpha(\cdot)\) refers to the spectral abscissa, and the constraint \(\alpha(A_{cl})\leq-\eta\) in (16) therefore imposes a convergence rate to the origin for the linear dynamics in closed loop. In our experiments we have chosen \(\eta=0.1\).
**Remark 7**.: Program (16) is a special case of a much wider class of problems with parametric uncertainty discussed in [2, 7], where a successful approach alternating between two non-smooth programs is used. The inner max-max program with controller \(K\) fixed is characterized by a light form of non-smoothness and can be addressed by a first-order non-smooth trust-region technique whose convergence certificates have been established in [7]. The outer min-max program with \(\delta\) fixed corresponds to a more severe form of non-smoothness and should be handled by dedicated bundle or bundle trust-region techniques [7, 5, 6].
These constraints are readily implemented from the closed-loop nonlinear system:
\[\dot{x}_{cl} = A_{cl}x_{cl}+B_{w,cl}\phi_{cl}(x_{cl}),\quad x_{cl}:=[x^{T},x_{K} ^{T}]^{T}\,, \tag{17}\]
where
\[A_{cl}:=\begin{bmatrix}A+BD_{K}C&BC_{K}\\ B_{K}C&A_{K}\end{bmatrix},\,\phi_{cl}(x_{cl}):=\phi(x),\,B_{w,cl}:=J=[I_{2},0 _{2\times n_{K}}]^{T}\,, \tag{18}\]
and where the controller dynamics are
\[\left\{\begin{array}{l}\dot{x}_{K}\!=\ A_{K}x_{K}+B_{K}y,\quad x_{K}\in \mathbb{R}^{n_{K}}\\ u\ =\ C_{K}x_{K}+D_{K}y\end{array}\right.\,. \tag{19}\]
We exclude high-gain feedback in the high-frequency range by adding a constraint on the complementary sensitivity function \(\|WT\|_{\infty}\leq 1\), where \(W\) is a high-pass weighting filter \(W(s):=(1\mathrm{e}06s^{2}+1\mathrm{e}04s+24.99)/(s^{2}+10000s+2.5\mathrm{e}07)\).
Program (16) was solved for controller orders: 0, 1 and 3. All controllers achieve nearly the same Kreiss norm of 1.005, but differ in terms of the remaining performance/robustness constraints. This is seen by plotting transient amplifications versus time in Fig. 2. Peak values \(\mathcal{M}_{0}(J^{T}(sI-A_{cl})^{-1}J)\) are all close to 1.10 with \(\|J^{T}J\|=1\) as lower bound, see Proposition 2.
The static controller \(K=-0.20\) gives a spectral abscissa of \(\alpha(A_{cl})=-1.9899\mathrm{e}\)-04 with badly damped modes and a strong roll-off violation of \(\|WT\|_{\infty}=20.03\). The 1st-order controller \(K(s)=(0.001071s-2.247)/(s+1.483)\) meets the roll-off constraint and achieves a decay rate of \(\alpha(A_{cl})=-0.393\). As expected, the 3rd-order controller \(K(s)=(-0.008068s^{3}-6.391s^{2}+83.2s-1673)/(s^{3}+27.97s^{2}+252.8s+1333)\) provides the best results in terms of decay rate \(\alpha(A_{cl})=-0.811\).
Simulations in closed loop for identical initial conditions are given in Fig. 3. Controllers are switched on at \(t=50\) seconds, when the limit cycle is well engaged. The static controller leads to a spiral trajectory barely converging to the origin, a deficiency which is overcome by increasing the controller order.
Global stability of the origin is established in appendix A.
### Study of fourth-order dynamics with 4D periodic orbit attractor
The fourth-order model of Brunton and Noack is described as
\[\left\{\begin{array}{rcl}\dot{x}&=&Ax+B_{w}w+Bu\\ w&=&\phi(x)\\ y&=&Cx\end{array}\right.\,, \tag{20}\]
with
\[\phi(x):=(\alpha_{u}(x_{1}^{2}+x_{2}^{2})A_{5}+\alpha_{a}(x_{3}^{2}+x_{4}^{2})A_{6})x\]
where
\[A:=\operatorname{diag}\left(\begin{bmatrix}\sigma_{u}&-\omega_{u}\\ \omega_{u}&\sigma_{u}\end{bmatrix},\begin{bmatrix}\sigma_{a}&-\omega_{a}\\ \omega_{a}&\sigma_{a}\end{bmatrix}\right),\;B_{w}:=I,\,B:=[0\;g\;0\;g]^{T},\,C:= [1\;0\;1\;0],\]
\[A_{5}:=\operatorname{diag}\left(\begin{bmatrix}-\beta_{uu}&-\gamma_{uu}\\ \gamma_{uu}&-\beta_{uu}\end{bmatrix},\begin{bmatrix}-\beta_{au}&-\gamma_{au} \\ \gamma_{au}&-\beta_{au}\end{bmatrix}\right),\;A_{6}:=\operatorname{diag}\left( \begin{bmatrix}-\beta_{ua}&-\gamma_{ua}\\ \gamma_{ua}&-\beta_{ua}\end{bmatrix},\begin{bmatrix}-\beta_{aa}&-\gamma_{aa} \\ \gamma_{aa}&-\beta_{aa}\end{bmatrix}\right),\]
with data given in [14]. The open-loop dynamics are characterized by an unstable fixed point at the origin and an attractive \(4D\) periodic orbit. A 1st-order controller is computed to minimize the Kreiss norm as in program (16). The roll-off filter \(W\) is unchanged. The optimal controller \(K(s):=(0.03538s-0.5306)/(s+0.667)\) achieves a Kreiss norm of \(1.004\) with decay rate and roll-off constraints all met.
Figure 1. Limit cycle attractor of the Brunton and Noack model

Figure 2. From left to right, time evolution of transient amplification for the static, 1st-order and 3rd-order controllers

Despite the apparent complexity of the dynamics, experience shows that it is possible to bring points of the periodic orbit back to the origin with a fairly large class of _linear_ controllers. For instance, a controller designed using a mixed-sensitivity approach [51, p. 141] with weight \(W_{1}:=\frac{0.001s+5}{s+0.05}\) for \(S\) and \(W\) as above for \(T\) also drives points from the periodic orbit to the origin. A 1st-order controller of this type was obtained as \(K(s):=(34.31s+168.1)/(s+32.47)\) with a closed-loop Kreiss constant of 1.54. Simulations show that closed-loop trajectories undergo large deviations before heading back to the origin; see Fig. 4. This remains risky, as attractors, when still present, may capture trajectories. The controller based on the Kreiss norm corrects such undesirable transients, as corroborated in Figs. 4 and 5, where worst-case transients have been plotted. Finally, all controllers globally stabilize the origin, and this can be established as was done for the 2nd-order system in appendix A.
## 6. Applications to nonlinear dynamics with chaos and fixed points
In this section, we consider suppression of undesirable nonlinear regimes such as chaos and fixed points for the Lorenz model.
### Study of the Lorenz system with chaotic attractor
The Lorenz system [31] has three coupled first-order nonlinear differential equations
\[\left\{\begin{array}{rcl}\dot{x}_{1}&=&p(x_{2}-x_{1})\\ \dot{x}_{2}&=&Rx_{1}-x_{2}-x_{1}x_{3}\\ \dot{x}_{3}&=&-bx_{3}+x_{1}x_{2}\,,\end{array}\right. \tag{21}\]
where \(p\), \(R\), and \(b\) are given parameters. In this study, we will use \(p=10\) and \(b=1\), while \(R\) will be varied to illustrate different nonlinear asymptotic regimes. To begin with, we take \(R=28\), where the Lorenz model has three unstable fixed points with coordinates
\[(0,0,0),\;(\sqrt{R-1},\sqrt{R-1},R-1),\;(-\sqrt{R-1},-\sqrt{R-1},R-1)\,. \tag{22}\]
For any initial condition \(x(0)=x_{0}\), a repelling effect of these fixed points is observed and trajectories are quickly captured by a chaotic attractor of double-scroll type, shown in Fig. 6.
Figure 3. From left to right, closed-loop free responses for static, 1st-order and 3rd-order controllers
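For reproducibility, a minimal open-loop simulation of (21) with the data above (our own script; a 3D plot of `sol.y` produces the double-scroll picture of Fig. 6) reads:

```python
import numpy as np
from scipy.integrate import solve_ivp

p, R, b = 10.0, 28.0, 1.0   # parameters used in the text

def lorenz(t, x):
    x1, x2, x3 = x
    return [p * (x2 - x1), R * x1 - x2 - x1 * x3, -b * x3 + x1 * x2]

# a near-origin initial condition is repelled and captured by the attractor
sol = solve_ivp(lorenz, (0.0, 50.0), [1e-3, 0.0, 0.0], max_step=0.005)
```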
Our feedback goal is therefore suppression of the chaotic attractor and stabilization of the origin through various feedback control strategies. We complement (21) by adding actuation and sensing, letting \(B=[0,\,1,\,0]^{T}\) and discussing several choices of \(C\) for which \((A,B,C)\) is stabilizable and detectable. The Lorenz model is then rewritten as
\[\left\{\begin{array}{rcl}\dot{x}&=&Ax+B_{w}w+Bu\\ w&=&\phi(x)\\ y&=&Cx\,,\end{array}\right. \tag{23}\]

where \(u\) is the control input, \(y\) the measurement output. Matrix \(A\) collects the linear terms in (21), \(\phi(x):=[-x_{1}x_{3},\,x_{1}x_{2}]^{T}\) the nonlinearity, and \(B_{w}:=[0_{2\times 1},I_{2}]^{T}\). As observed before, the origin is unstable in the absence of feedback.

Figure 4. Brunton and Noack model. Free open- and closed-loop responses projected in \((x_{1},x_{2},x_{3})\)-space with 1st-order controllers. Top: Kreiss controller. Bottom: mixed-sensitivity controller.
When a linear feedback controller \(u=K(s)y\) is used with
\[\left\{\begin{array}{ll}\dot{x}_{K}=\ A_{K}x_{K}+B_{K}y,\quad x_{K}\in \mathbb{R}^{n_{K}}\\ u\ =\ C_{K}x_{K}+D_{K}y\,,\end{array}\right. \tag{24}\]
the Lorenz model in closed loop becomes:
\[\dot{x}_{cl} = A_{cl}x_{cl}+B_{w,cl}\phi_{cl}(x_{cl}),\quad x_{cl}:=[x^{T},x_{ K}^{T}]^{T}\,, \tag{25}\]
where
\[A_{cl}:=\begin{bmatrix}A+BD_{K}C&BC_{K}\\ B_{K}C&A_{K}\end{bmatrix},\,\phi_{cl}(x_{cl}):=\phi(x),\,B_{w,cl}:=[0_{2\times 1 },I_{2},0_{2\times n_{K}}]^{T}\,. \tag{26}\]
#### 6.1.1. Chaos dynamics: design with the QC approach
Here we assess the stability properties of the closed loop (25) using the Lyapunov Quadratic Constraints (QC) approach of [36, 24, 30].
Figure 5. Brunton and Noack model. Transient amplifications. Left: Kreiss controller. Right: mixed-sensitivity controller.
Figure 6. Double-Scroll chaotic attractor of the Lorenz model
A particularity of the Lorenz system is the so-called _lossless property_
\[x_{cl}^{T}B_{w,cl}w=0\,\text{ for all }\,x_{cl},w=\phi(x)\,, \tag{27}\]
which holds globally in state space. The QC approach to stability analysis now relies on the existence of a Lyapunov function \(V(x_{cl})=x_{cl}^{T}X_{cl}x_{cl}\), with \(X_{cl}\) a positive definite matrix, such that
\[\dot{V}(x_{cl})\leq-\epsilon V(x_{cl}),\epsilon>0\]
for all \(x_{cl}\), \(w\) satisfying the quadratic constraint in (27) (with the relation \(w=\phi(x)\) disregarded). This is clearly a sufficient, possibly conservative, condition, because of the chosen quadratic form of \(V(x_{cl})\), and also because the specific dependence of \(w\) on the states \(x\) is ignored. Using an \(S\)-procedure argument [24, 12] to aggregate the lossless constraint (27), this is rewritten as
\[\dot{V}(x_{cl})+\mu_{0}x_{cl}^{T}B_{w,cl}w\leq-\epsilon V(x_{cl})\]
for all \(x_{cl},w\), where \(\mu_{0}\) is an \(S\)-procedure parameter (sometimes called a Lagrange multiplier), which here is unsigned, as the constraint (27) is an equality. The following equivalent matrix inequality constraints are obtained:
\[\begin{bmatrix}A_{cl}^{T}X_{cl}+X_{cl}A_{cl}+\epsilon X_{cl}&X_{cl}B_{w,cl}+\mu _{0}B_{w,cl}\\ B_{w,cl}^{T}X_{cl}+\mu_{0}B_{w,cl}^{T}&0\end{bmatrix}\preceq 0,\,\,X_{cl} \succ 0\,. \tag{28}\]
We have the following:
**Theorem 5**.: _There exists a linear time-invariant controller (24) such that the sufficient global stability conditions (28) hold, if and only if there exist solutions \(X=X^{T}\) and \(Y=Y^{T}\) in \(\mathbb{R}^{(n-n_{\phi})\times(n-n_{\phi})}\) to the following LMIs:_
\[\begin{split}& N_{C}^{T}\left(A^{T}\begin{bmatrix}X&0\\ 0&I\end{bmatrix}+\begin{bmatrix}X&0\\ 0&I\end{bmatrix}A+\epsilon\begin{bmatrix}X&0\\ 0&I\end{bmatrix}\right)N_{C}\prec 0\\ & N_{B}^{T}\left(A\begin{bmatrix}Y&0\\ 0&I\end{bmatrix}+\begin{bmatrix}Y&0\\ 0&I\end{bmatrix}A^{T}+\epsilon\begin{bmatrix}Y&0\\ 0&I\end{bmatrix}\right)N_{B}\prec 0\\ &\begin{bmatrix}X&0&I&0\\ 0&I&0&I\\ I&0&Y&0\\ 0&I&0&I\end{bmatrix}\succeq 0\,,\end{split} \tag{29}\]
_where \(N_{C}\) and \(N_{B}\) are bases of the null space of \(C\) and \(B^{T}\), respectively._
_Moreover, the controller order is determined by the rank of \(I_{n-n_{\phi}}-XY\) with \(n_{\phi}\) the vector dimension of the nonlinearity._
**Proof:** See appendix B.
Note that Theorem 5 applies to any nonlinear system with similar structure for which (27) holds.
For the Lorenz model, we have a loss of rank of at least \(n_{\phi}=2\), the vector dimension of the nonlinearity. For the QC approach this means that controllers can have order at most \(n-n_{\phi}=1\). For problems with nonlinearity of dimension \(n\), the plant order, only static output feedback controllers can be computed. In that case the BMI (37) reduces to the LMI feasibility problem:
\[(A+BKC)^{T}+(A+BKC)\prec 0\,,\]
or equivalently, to minimization of the numerical abscissa \(\omega(A+BKC)\), defined as \(\omega(M):=\frac{1}{2}\lambda_{\max}(M+M^{T})\). This is in line with the results in [24] for transitional flow studies. At the other end, when \(n_{\phi}=0\), the plant is linear and the controller can be of full order. The last step is the construction of the controller given \(X\) and \(Y\) from (29), which is standard and found in [20].
Application to the Lorenz model with \(x\)-measurement yields a 1st-order controller \(K(s)=-(306.5s+2809)/(s+0.1044)\). Simulation in closed loop is shown in Fig. 7 (top left corner). The feedback controller is switched on after 15 seconds, when the chaotic regime is well engaged.
Characterization of state-feedback controllers is easily derived from the second projection LMI in (29), or using \(u=Kx\) and \(C=I\) in the BMI (28):
\[(A+BK)^{T}\operatorname{diag}(X,I)+(.)^{T}\prec-\epsilon\operatorname{diag}( X,I),\ X\succ 0\,, \tag{30}\]
or equivalently, using a congruence transformation \(\operatorname{diag}(Y,I)=\operatorname{diag}(X,I)^{-1}\), on the left- and right-hand sides of the first matrix inequality in (30)
\[(A+BK)\operatorname{diag}(Y,I)+(.)^{T}\prec-\epsilon\operatorname{diag}(Y,I), \ Y\succ 0\,. \tag{31}\]
The constraint (31) is turned into an LMI feasibility program using the standard change of variable \(W:=K\operatorname{diag}(Y,I)\):
\[A\operatorname{diag}(Y,I)+BW+(.)^{T}\prec-\epsilon\operatorname{diag}(Y,I),\ Y \succ 0\,. \tag{32}\]
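As an illustration of how (32) can be set up, consider the following sketch based on cvxpy (assumed available together with an SDP-capable solver such as SCS; variable names and the margin \(\epsilon\) are ours). The free Lyapunov block \(Y\) is scalar here, since \(n-n_{\phi}=1\) for the Lorenz model.

```python
import numpy as np
import cvxpy as cp

p_, R, b = 10.0, 28.0, 1.0
A = np.array([[-p_, p_, 0.0], [R, -1.0, 0.0], [0.0, 0.0, -b]])  # linear part of (21)
B = np.array([[0.0], [1.0], [0.0]])
eps = 0.1

Y = cp.Variable((1, 1), symmetric=True)   # free block of diag(Y, I)
W = cp.Variable((1, 3))                   # change of variable W = K diag(Y, I)
P = cp.bmat([[Y, np.zeros((1, 2))], [np.zeros((2, 1)), np.eye(2)]])
M = A @ P + B @ W
prob = cp.Problem(cp.Minimize(0), [M + M.T << -eps * P, Y >> 1e-6 * np.eye(1)])
prob.solve()
if prob.status == "optimal":
    K = W.value @ np.linalg.inv(P.value)  # recover the state-feedback gain
```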
All LMI characterizations derived so far can be solved by standard convex SDP software such as _LMILab_ [33] or _SeDuMi_ [42]. Solving (32) for the Lorenz model yields a globally stabilizing state-feedback controller \(K=W\operatorname{diag}(Y,I)^{-1}=[-154,400.245,0]\). A simulation is shown in Fig. 7 top right.
The fact that the state-feedback controller does not use the \(z\)-measurement suggests that even simpler controller structures should be satisfactory, e.g. using static output feedback in \(x\) or \(y\). For \(x\)-measurement alone, we have \(C=[1,0,0]\) and the BMI characterization is the same as in (30) with \(A+BKC\) replacing \(A+BK\). For a scalar \(K\) this is easily solved by sweeping an interval of \(K\) values and solving for the resulting LMIs with \(K\) fixed. We obtain \(K=-27.01\) with search interval \([-100,100]\). Simulations are displayed in Fig. 7, bottom left.
Similar results can be obtained with \(y\)-measurement feedback alone. The gain value is \(K=-27.01\), and simulations are given in Fig. 7, bottom right.
#### 6.1.2. Chaos dynamics: Kreiss norm minimization
We now investigate whether similar results can be achieved with controllers minimizing the Kreiss system norm. Here we follow a different strategy which is to decouple the linear dynamics \(\dot{x}=Ax\) from the nonlinearity \(\phi\) by way of mitigating transients due to initial conditions or \(L^{1}\) disturbances. While this is a heuristic in the first place, it can of course in a second step be certified rigorously using the same QC approach, now for analysis. This has the advantage that BMIs are replaced by LMIs. In addition, the technique is applicable in a much more general context beyond the Lorenz model as seen in sections 5.1 and 5.2 when the QC approach turns out too conservative.
Controllers based on minimizing the Kreiss norm alone are computed through the following min-max program
\[\begin{array}{ll}\text{minimize}&\max_{\delta\in[-1,1]}\left\|J^{T}\left(\tfrac{}{}sI-\left(\tfrac{1-\delta}{1+\delta}A_{cl}(K)-I\right)\right)^{-1}J\right\|_{\infty}\\ \text{subject to}&K\text{ robustly stabilizing, }K\in\mathscr{K},\end{array} \tag{33}\]
with the definitions already given for program (16).
Program (33) was solved for four controller structures: \(x\)-measurement dynamic output feedback, full state feedback, static \(x\)-measurement feedback, and static \(y\)-measurement feedback. Controller gains were computed as \(K(s)=-(47.06s+715.7)/(s+17.95)\), \([-41.07,-13.78,0]\), \(-34.70\) and \(-32.55\), respectively. In each case a Kreiss constant of
unit value with \(\mathcal{M}_{0}(G)=1\) was achieved, meaning that the linear dynamics do no longer amplify transients in the Lorenz model. Note that unlike the matrix case, \(\mathcal{M}_{0}(G)=1\) cannot be inferred directly from \(\mathcal{K}(G)=1\), but can be certified a posteriori. Naturally, all controllers have been tested for global stability of the Lorenz model, which for \(K\) fixed uses the characterization in (28) and requires solving a convex SDP. Simulations are given in Fig. 8.
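The a-posteriori certification amounts to checking feasibility of (28) with \(A_{cl}\) fixed, for instance along the lines of the following sketch (again cvxpy-based, with names of our own choosing):

```python
import numpy as np
import cvxpy as cp

def certify_global_stability(Acl, Bw, eps=1e-3):
    """Feasibility of the analysis LMI (28) for a fixed closed loop:
    returns True if a quadratic Lyapunov certificate exists."""
    n, q = Acl.shape[0], Bw.shape[1]
    X = cp.Variable((n, n), symmetric=True)
    mu0 = cp.Variable()                   # unsigned S-procedure multiplier
    off = X @ Bw + mu0 * Bw
    M = cp.bmat([[Acl.T @ X + X @ Acl + eps * X, off],
                 [off.T, np.zeros((q, q))]])
    prob = cp.Problem(cp.Minimize(0), [M << 0, X >> eps * np.eye(n)])
    prob.solve()
    return prob.status in ("optimal", "optimal_inaccurate")
```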
Figure 7. Suppression of Lorenz double-scroll chaotic attractor using QC approach
Top left: \(x\)-measurement dynamic feedback, Top right: state feedback
Bottom left: \(x\)-measurement static feedback, Bottom right: \(y\)-measurement static feedback
Open loop: blue curve, Closed loop: red curve.
### Study of the Lorenz system with fixed points
For \(R<1\), the origin is the only stable equilibrium of (21) and the Lorenz model is then globally stable. When the Lorenz parameter is chosen as \(1<R<17.5\), the chaotic attractor disappears and is replaced with stable fixed points. For instance, when \(R=10\), the Lorenz model has an unstable fixed point at the origin and two stable fixed points given in (22). A typical illustration of that situation is shown in Fig. 9. Trajectories with initial conditions arbitrarily close to \(0\) are quickly captured by one of the fixed points.
Figure 8. Suppression of Lorenz chaotic attractor using Kreiss norm minimization
Top left: \(x\)-measurement dynamic feedback, Top right: state feedback
Bottom left: \(x\)-measurement static feedback, Bottom right: \(y\)-measurement static feedback
Open loop: blue curve, Closed loop: red curve.
Despite this quite different pattern of the attracting regime, synthesis proceeds along similar lines as in section 6.1. We remove the undesirable fixed points and stabilize the origin globally using static state-feedback, and dynamic and static output-feedback, comparing the QC approach and Kreiss norm minimization.
#### 6.2.1. Fixed-point dynamics: design with the QC approach
As before, we start with the QC approach. A state-feedback controller was computed as \(K=[-136.40,0.24,0]\). Again the \(z\) measurement is not used. That leads us to computing static output feedback controllers given as \(K=-9.01\) and \(K=-9.01\) for the \(x\) and \(y\) measurements alone, respectively. A dynamic 1st-order \(x\)-measurement output feedback controller was computed as \(K=-(288.5s+2807)/(s+0.104)\) based on Theorem 5. All computed controllers globally stabilize the origin. This is illustrated in Fig. 10 for two initial conditions.
#### 6.2.2. Fixed-point dynamics: Kreiss system norm
Controllers with identical structure were computed using Kreiss norm minimization. Dynamic 1st-order \(x\)-measurement output feedback, full state, \(x\)-measurement and \(y\)-measurement static feedback were obtained as \(K=-(12.23s+67.63)/(s+5.541)\), \(K=[-4.47,-6.92,0]\), \(K=-26.32\) and \(K=-11.53\), respectively. All controllers were certified to stabilize the origin globally through feasibility of the LMI (28). Simulations are shown in Fig. 11 and should be compared to Fig. 10.
Figure 9. Lorenz model for \(1<R<17.5\): unstable origin and two stable fixed points
Figure 10. Suppression of fixed point attractors using QC approach
Top left: \(x\)-measurement dynamic feedback, Top right: state feedback
Bottom left: \(x\)-measurement static feedback, Bottom right: \(y\)-measurement static feedback
Open loop: blue curve, Closed loop: red curve.
## 7. Conclusion
The idea of stabilizing nonlinear systems in closed loop by mitigating transients of the linearized closed-loop dynamics was investigated; the rationale is that large transients are responsible for driving the nonlinear dynamics outside the region of local stability. Heuristic approaches tailored to transients caused by noise, persistent perturbations, and disturbances of finite resource consumption were obtained, opening up new possibilities for analysis and control of linear and nonlinear systems.
Figure 11. Suppression of fixed point attractors using Kreiss norm minimization
Top left: \(x\)-measurement dynamic feedback, Top right: state feedback
Bottom left: \(x\)-measurement static feedback, Bottom right: \(y\)-measurement static feedback
Open loop: blue curve, Closed loop: red curve.
The time-domain worst case transient peak norm \(\mathcal{M}_{0}(G)\) was identified as suitable to assess transients caused by \(L_{1}\)-disturbances. The Kreiss system norm \(\mathcal{K}(G)\) was introduced and studied as a frequency domain approximation of \(\mathcal{M}_{0}(G)\), better suited for the purpose of optimization due to its representation as a parametric robust control problem. In our numerical testing, Kreiss norm optimization was evaluated by matching it, in small to medium size cases where possible, with a properly extended QC approach.
Future work may strive to enable Kreiss norm minimization for large-dimensional plants, such as realistic fluid flows. This is challenging, but may be within reach when model sparsity is exploited and specialized linear algebra is used. In contrast, LMI techniques and SOS certificates are no longer practical at such large scales.
## Appendix A
We consider the closed-loop system (15) in polar coordinates
\[\begin{split}\dot{r}&=\sigma r-\alpha\beta r^{3}+g Kr\sin^{2}\phi\\ \dot{\phi}&=\omega+\alpha\gamma r^{2}+gK\cos\phi\sin \phi\end{split} \tag{34}\]
First observe that \(r(t)\) must be bounded. Indeed, we have \(\dot{r}\leq 0\) for
\[r^{2}\geq\frac{\sigma+gK\sin^{2}\phi}{\alpha\beta}\]
which due to \(K<0\) means that states \(r\) with
\[r^{2}>\frac{\sigma}{\alpha\beta}=:r_{0}^{2}\]
cannot be reached (from below). Namely, if \(r(0)<r_{0}\), then the trajectory can never reach values \(r(t)>r_{0}\), as this would require derivatives \(\dot{r}>0\) between \(r_{0}\) and \(r(t)>r_{0}\). Even when \(r(0)>r_{0}\), we have \(\dot{r}<0\) on some \([0,\epsilon)\), so the trajectory decreases until \(r(t)=r_{0}\) is reached, and then the previous argument shows that it cannot rebound to values \(>r_{0}\). In conclusion, the trajectories of the system are bounded.
Let us look for steady states \((x^{*},y^{*})\). In the original \((x,y)\)-system we have (with \(r^{2}=x^{2}+y^{2}\))
\[\begin{split} 0&=(\sigma-\alpha\beta r^{2})x- \omega y-\alpha\gamma r^{2}y\\ 0&=\omega x+\alpha\gamma r^{2}x+\sigma y-\alpha \beta r^{2}y+gKy\end{split}\]
and this can be written
\[A(r)\begin{bmatrix}x\\ y\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix},\qquad A(r):=\begin{bmatrix}\sigma-\alpha\beta r^{2}&-\omega-\alpha\gamma r^{2}\\ \omega+\alpha\gamma r^{2}&\sigma-\alpha\beta r^{2}+gK\end{bmatrix}.\]
For this system to have a non-zero solution \((x^{*},y^{*})\neq(0,0)\), the determinant of the system matrix \(A(r)\) must vanish, which leads to
\[(\sigma-\alpha\beta r^{2})^{2}+gK(\sigma-\alpha\beta r^{2})+(\omega+\alpha \gamma r^{2})^{2}=0.\]
This quadratic equation in \(\sigma-\alpha\beta r^{2}\) has no real solution for \(g^{2}K^{2}-4(\omega+\alpha\gamma r^{2})^{2}<0\), which gives the following
**Proposition 4**.: _Suppose \(-K<\frac{2\omega}{g}\). Then the only steady state of the closed-loop system is \((0,0)\)._
The origin is locally exponentially stable, so there exists a largest ball \(B(0,\rho)\) such that all trajectories starting in \(B(0,\rho)\) converge to \((0,0)\). Suppose \(\rho<\infty\), then there exists \((x_{0},y_{0})\not\in B(0,\rho)\) such that the trajectory starting at \((x_{0},y_{0})\) does not enter the ball \(B(0,\rho)\). Since it is a bounded trajectory, the Poincaré-Bendixson theorem implies
that it must approach a limit cycle. For a limit cycle to exist, the system must admit a periodic solution.
We therefore look for conditions which allow us to exclude the existence of a periodic solution. The Bendixson criterion tells us that this is the case when \(P_{x}+Q_{y}\) does not change sign, where \(P,Q\) are the right-hand sides of (15) with the loop \(u=Ky\) closed. We get
\[P_{x}+Q_{y}=2\sigma-4\alpha\beta r^{2}+gK\]
and this is negative for \(K<-\frac{2\sigma}{g}\). We conclude with the following.
**Proposition 5**.: _Suppose \(K\in\mathbb{R}\) satisfies \(K<-\frac{2\sigma}{g}\) and \(-K<\frac{2\omega}{g}\). Then (15) is globally stabilized by the static controller \(u=Ky\)._
### Dynamic controllers
Consider the case of dynamic controllers. Closed-loop dynamics are obtained as follows (\(r^{2}=x^{2}+y^{2}\)):
\[\begin{bmatrix}\dot{x}\\ \dot{y}\\ \dot{x}_{K}\end{bmatrix}=\begin{bmatrix}(\sigma-\alpha\beta r^{2})&-(\omega+ \alpha r^{2}\gamma)&0\\ (\omega+\alpha\gamma r^{2})&(\sigma+gD_{K}-\alpha\beta r^{2})&gC_{K}\\ 0&B_{K}&A_{K}\end{bmatrix}\begin{bmatrix}x\\ y\\ x_{K}\end{bmatrix}\,.\]
The equilibrium equations give \(x_{K}=-A_{K}^{-1}B_{K}y\), assuming that \(A_{K}\) is invertible. This leads to
\[\begin{bmatrix}(\sigma-\alpha\beta r^{2})&-(\omega+\alpha r^{2}\gamma)\\ (\omega+\alpha\gamma r^{2})&(\sigma-\alpha\beta r^{2}+g(D_{K}-C_{K}A_{K}^{-1}B _{K}))\end{bmatrix}\begin{bmatrix}x\\ y\end{bmatrix}=\begin{bmatrix}0\\ 0\end{bmatrix}\,,\]
which, as before, has \((0,0)\) as its unique solution if and only if the system matrix is invertible. The determinant, viewed as a quadratic equation in \(\sigma-\alpha\beta r^{2}\), has no real root and is thus non-zero when
\[(g(D_{K}-C_{K}A_{K}^{-1}B_{K}))^{2}-4(\omega+\alpha\gamma r^{2})^{2}<0\,,\]
which is guaranteed when
\[|D_{K}-C_{K}A_{K}^{-1}B_{K}|<2\omega/g\,.\]
Note the latter involves a constraint on the DC gain of the dynamic controller \(K(s)=C_{K}(sI-A_{K})^{-1}B_{K}+D_{K}\).
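Checking this DC-gain constraint for a candidate controller is straightforward; the sketch below uses placeholder controller data (not the controllers synthesized in the paper):

```python
# Sketch: verify |K(0)| = |D_K - C_K A_K^{-1} B_K| < 2*omega/g.
import numpy as np

A_K = np.array([[-2.0]])   # placeholder 1st-order controller; must be Hurwitz
B_K = np.array([[1.0]])
C_K = np.array([[0.3]])
D_K = np.array([[-0.1]])
omega, g = 1.0, 1.0

dc_gain = (D_K - C_K @ np.linalg.inv(A_K) @ B_K).item()   # K(s) at s = 0
print(f"|K(0)| = {abs(dc_gain):.3f} < {2 * omega / g:.3f}: "
      f"{abs(dc_gain) < 2 * omega / g}")
```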
The polar form of these differential equations for \((x,y)\) is obtained as
\[\dot{r} =\sigma r-\alpha\beta r^{3}+gD_{K}r\sin^{2}\phi+gC_{K}x_{K}\sin\phi \tag{35}\] \[\dot{\phi} =\omega+\alpha\gamma r^{2}+gD_{K}\cos\phi\sin\phi+gC_{K}x_{K}\cos\phi\] \[\dot{x}_{K} =A_{K}x_{K}+B_{K}r\sin\phi\]
Assuming that \(A_{K}\) is Hurwitz as is the case for all controllers based on the Kreiss norm, the third equation in (35) gives us on every finite interval \([0,t_{0}]\) an estimate of the form \(\max_{0\leq t\leq t_{0}}|x_{K}(t)|\leq c\max_{0\leq t\leq t_{0}}r(t)\) for a constant \(c>0\) independent of \(t_{0}\). Indeed, \(x_{K}(t)=\exp(tA_{K})x_{0}+\int_{0}^{t}\exp((s-t)A_{K})B_{K}\sin\phi(s)r(s)ds\), hence from Young's inequality (with \(q=r=\infty\), \(p=1\)), we get
\[\max_{0\leq t\leq t_{0}}|x_{K}(t)|\leq c_{1}+\|B_{K}\|\,\|\exp(tA_{K})\|_{1}\max_{0\leq t\leq t_{0}}r(t)\leq c_{1}+c_{2}\max_{0\leq t\leq t_{0}}r(t)\leq c\max_{0\leq t\leq t_{0}}r(t).\]
Therefore, by the comparison theorem (see Lemma 7 below) applied to the first equation in (35), \(r(t)\) is bounded above by the solution of the equation \(\dot{r}=(\sigma+g|D_{K}|+g\|C_{K}\|c)r-\alpha\beta r^{3}\). The latter, however, is globally bounded, as the negative term \(-\alpha\beta r^{3}\) dominates for large \(r>0\). Having established global boundedness of \(r(t)\), we return to the equation \(\dot{x}_{K}=A_{K}x_{K}+B_{K}r\sin\phi\), from which we now derive global boundedness of \(x_{K}\), so that altogether the trajectories of (35) remain bounded.
**Lemma 7**.: (See e.g. [50, Thm. 2.1, p. 93])_. Suppose \(\phi\) satisfies \(|\phi(t,x)-\phi(t,x^{\prime})|\leq M|x-x^{\prime}|\) for all \(t\in[t_{0},t_{1}]\) and \(x,x^{\prime}\), and is jointly continuous. Let \(v(t)\) be an absolutely continuous function such that \(\dot{v}(t)\leq\phi(t,v(t))\) for almost all \(t\in[t_{0},t_{1}]\). Then \(v(t)\leq u(t)\) on \([t_{0},t_{1}]\), where \(u(t)\) is the solution of \(\dot{u}(t)=\phi(t,u(t))\) with initial value \(u(t_{0})\) satisfying \(v(t_{0})\leq u(t_{0})\). _
We are now in the situation addressed in [49, Corollary], which says that if a \(C^{1}\)-function \(V(x)\) can be found satisfying
\[\dot{V}(x)+\ddot{V}(x)\neq 0\text{ for all }x\neq 0 \tag{36}\]
then trajectories either converge \(x(t)\to 0\), or escape to infinity \(|x(t)|\to\infty\). Since we have already ruled out the latter, we then have a certificate of global asymptotic stability. For this model, we have used the more restrictive condition \(\dot{V}(x)<0\), with \(V(x)=V_{1}(x)+\dot{V}_{2}(x)\), where \(V_{1}\), \(V_{2}\) are chosen as multivariate polynomials. See [1] for details. The polynomials are then sought using _sostools_[38].
For both the 1st- and 3rd-order controllers, a solution was obtained with \(V_{1}\) and \(V_{2}\) sums of monomials of degree 2. For the simpler 1st-order controller, with \(x_{cl}=(x,y,x_{K})\) this reads
\[V_{1}(x_{cl}) =2.556x^{2}-1.389xy-0.02803xx_{K}+2.897y^{2}-3.846e\text{-}5yx_{K }+0.003159x_{K}^{2}\] \[V_{2}(x_{cl}) =-0.2061x^{2}+0.008941xy-1.324e\text{-}6xx_{K}-0.1787y^{2}+1.641e \text{-}5yx_{K}-0.008169x_{K}^{2}\,.\]
We have established that \(x(t)\to 0\).
## Appendix B
Since the first matrix in (28) has a zero principal sub-matrix, the corresponding row and column terms should be zero for this matrix to be negative semi-definite. This leads to \(X_{cl}B_{w,cl}+\mu_{0}B_{w,cl}=0\,\). Also, the \((1,1)\) sub-matrix should be negative semi-definite. Using a partitioning in \(X_{cl}\) conformable to that of \(B_{w,cl}\) in (26), we have
\[X_{cl}=\begin{bmatrix}X&X_{12}&X_{13}\\ X_{12}^{T}&X_{22}&X_{23}\\ X_{13}^{T}&X_{23}^{T}&X_{33}\end{bmatrix},\text{ with }X\in\mathbb{R}^{(n-n_{\phi})\times(n-n_{\phi})}, \,X_{22}\in\mathbb{R}^{n_{\phi}\times n_{\phi}},\,X_{33}\in\mathbb{R}^{n_{K} \times n_{K}}\]
with \(n_{\phi}\) the vector dimension of the nonlinearity \(\phi\). This gives \(X_{12}=0\), \(X_{22}=-\mu_{0}I\) and \(X_{23}=0\). Due to homogeneity of the problem, \(\mu_{0}\) is set to \(-1\), and since \(X_{22}\) should be positive definite, we get \(X_{22}=I\). Also, non-strict feasibility can be replaced with strict feasibility by reducing \(\epsilon\) if necessary. Summing up, assessing global stabilization with a dynamic controller \(K(s)\) reduces to a specially structured Lyapunov inequality
\[A_{cl}^{T}X_{cl}+X_{cl}A_{cl}+\epsilon X_{cl}\prec 0,\ \ X_{cl}=\begin{bmatrix}X&0&X_{13} \\ 0&I&0\\ X_{13}^{T}&0&X_{33}\end{bmatrix},X_{cl}\succ 0\,. \tag{37}\]
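Feasibility of (37) for fixed closed-loop data is a semidefinite program. A minimal CVXPY sketch is given below; the closed-loop matrix and block dimensions are placeholders, strict inequalities are emulated with a small margin, and the expressions are symmetrized explicitly for the solver:

```python
# Feasibility sketch for the structured Lyapunov inequality (37).
import cvxpy as cp
import numpy as np

n_x, n_phi, n_K = 2, 1, 1               # assumed block dimensions
n_cl = n_x + n_phi + n_K
A_cl = -np.eye(n_cl)                    # placeholder stable closed-loop matrix
eps, delta = 0.1, 1e-6

X = cp.Variable((n_x, n_x), symmetric=True)
X13 = cp.Variable((n_x, n_K))
X33 = cp.Variable((n_K, n_K), symmetric=True)
X_cl = cp.bmat([
    [X, np.zeros((n_x, n_phi)), X13],
    [np.zeros((n_phi, n_x)), np.eye(n_phi), np.zeros((n_phi, n_K))],
    [X13.T, np.zeros((n_K, n_phi)), X33],
])
sym = lambda M: 0.5 * (M + M.T)         # enforce symmetry of the expressions
constraints = [
    sym(X_cl) >> delta * np.eye(n_cl),
    sym(A_cl.T @ X_cl + X_cl @ A_cl + eps * X_cl) << -delta * np.eye(n_cl),
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("structured Lyapunov certificate found:", prob.status == cp.OPTIMAL)
```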
Inequality (37) is rewritten in the familiar form:
\[\Psi+P^{T}\Theta Q+Q^{T}\Theta^{T}P\prec 0,\ \ X_{cl}=\begin{bmatrix}X&0&X_{13} \\ 0&I&0\\ X_{13}^{T}&0&X_{33}\end{bmatrix},X_{cl}\succ 0\,, \tag{38}\]
with appropriate matrices \(\Psi\), \(P\), \(Q\) depending on \(X,A,B,C\) and controller data gathered in
\[\Theta:=\begin{bmatrix}A_{K}&B_{K}\\ C_{K}&D_{K}\end{bmatrix}\,.\]
We can then apply the Projection Lemma [20] to eliminate \(\Theta\), which leads to LMI solvability conditions. There exist controllers of order \(n_{K}\) if and only if \(W_{P}^{T}\Psi W_{P}\prec 0\) and \(W_{Q}^{T}\Psi W_{Q}\prec 0\) for some \(X_{cl}\succ 0\), where \(W_{P}\) and \(W_{Q}\) denote bases of the null spaces of \(P\) and \(Q\), respectively. Introducing the inverse of \(X_{cl}\) as
\[Y_{cl}:=X_{cl}^{-1}=\begin{bmatrix}Y&0&Y_{13}\\ 0&I&0\\ Y_{13}^{T}&0&Y_{33}\end{bmatrix}\,,\]
and following [20], the two projection inequalities are computed as
\[\begin{split}& N_{C}^{T}\left(A^{T}\begin{bmatrix}X&0\\ 0&I\end{bmatrix}+\begin{bmatrix}X&0\\ 0&I\end{bmatrix}A+\epsilon\begin{bmatrix}X&0\\ 0&I\end{bmatrix}\right)N_{C}\prec 0\\ & N_{B}^{T}\left(A\begin{bmatrix}Y&0\\ 0&I\end{bmatrix}+\begin{bmatrix}Y&0\\ 0&I\end{bmatrix}A^{T}+\epsilon\begin{bmatrix}Y&0\\ 0&I\end{bmatrix}\right)N_{B}\prec 0\end{split} \tag{39}\]
where \(N_{C}\) and \(N_{B}\) are bases of the null spaces of \(C\) and \(B^{T}\), respectively. Also, the existence of a completion \(X_{cl}=Y_{cl}^{-1}\succ 0\) with \(X_{cl}\in\mathbb{R}^{(n+n_{K})\times(n+n_{K})}\) from the blocks \(X\) and \(Y\) is equivalent to [37, 20]
\[\begin{bmatrix}X&0&I&0\\ 0&I&0&I\\ I&0&Y&0\\ 0&I&0&I\end{bmatrix}\succeq 0,\ \operatorname{rank}\left(I_{n}-\begin{bmatrix}Y&0 \\ 0&I_{n_{\phi}}\end{bmatrix}\begin{bmatrix}X&0\\ 0&I_{n_{\phi}}\end{bmatrix}\right)\leq n_{K}\,. \tag{40}\]
Clearly, \(\operatorname{rank}(I-YX)\leq n-n_{\phi}\) is the maximal rank and determines the controller order. Finally, for \(X\) and \(Y\) solutions to (39) and (40), the full matrix \(X_{cl}\) can be reconstructed, as well as the controller state-space data \((A_{K},B_{K},C_{K},D_{K})\)[20].
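Numerically, the completion test (40) reduces to a PSD check plus a rank computation. The sketch below uses placeholder \(X\) and \(Y\) blocks and exploits that the printed 4x4 block matrix is a permutation of the simpler coupling matrix \([[\operatorname{blkdiag}(X,I),I],[I,\operatorname{blkdiag}(Y,I)]]\):

```python
# Sketch of the coupling/rank test (40) for placeholder X, Y blocks.
import numpy as np
from scipy.linalg import block_diag

n, n_phi = 3, 1
X = np.diag([2.0, 1.5])                 # placeholder (n - n_phi) x (n - n_phi)
Y = np.diag([0.6, 0.8])
Xa = block_diag(X, np.eye(n_phi))
Ya = block_diag(Y, np.eye(n_phi))
coupling = np.block([[Xa, np.eye(n)], [np.eye(n), Ya]])
psd_ok = np.all(np.linalg.eigvalsh(coupling) >= -1e-9)
order = np.linalg.matrix_rank(np.eye(n) - Ya @ Xa)
print(f"coupling PSD: {psd_ok}, required controller order n_K >= {order}")
```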
|
2307.00039 | Towards Brain Inspired Design for Addressing the Shortcomings of ANNs | As our understanding of the mechanisms of brain function is enhanced, the
value of insights gained from neuroscience to the development of AI algorithms
deserves further consideration. Here, we draw parallels with an existing
tree-based ANN architecture and a recent neuroscience study[27] arguing that
the error-based organization of neurons in the cerebellum that share a
preference for a personalized view of the entire error space, may account for
several desirable features of behavior and learning. We then analyze the
learning behavior and characteristics of the model under varying scenarios to
gauge the potential benefits of a similar mechanism in ANN. Our empirical
results suggest that having separate populations of neurons with personalized
error views can enable efficient learning under class imbalance and limited
data, and reduce the susceptibility to unintended shortcut strategies, leading
to improved generalization. This work highlights the potential of translating
the learning machinery of the brain into the design of a new generation of ANNs
and provides further credence to the argument that biologically inspired AI may
hold the key to overcoming the shortcomings of ANNs. | Fahad Sarfraz, Elahe Arani, Bahram Zonooz | 2023-06-30T15:37:38Z | http://arxiv.org/abs/2307.00039v1 | # Towards Brain Inspired Design for Addressing the Shortcomings of ANNs
###### Abstract
As our understanding of the mechanisms of brain function is enhanced, the value of insights gained from neuroscience to the development of AI algorithms deserves further consideration. Here, we draw parallels with an existing tree-based ANN architecture and a recent neuroscience study [27] arguing that the error-based organization of neurons in the cerebellum that share a preference for a personalized view of the entire error space, may account for several desirable features of behavior and learning. We then analyze the learning behavior and characteristics of the model under varying scenarios to gauge the potential benefits of a similar mechanism in ANN. Our empirical results suggest that having separate populations of neurons with personalized error views can enable efficient learning under class imbalance and limited data, and reduce the susceptibility to unintended shortcut strategies, leading to improved generalization. This work highlights the potential of translating the learning machinery of the brain into the design of a new generation of ANNs and provides further credence to the argument that biologically inspired AI may hold the key to overcoming the shortcomings of ANNs.
## 1 Introduction
Artificial neural networks (ANNs) have achieved remarkable performance on many vision tasks which have been enabled by the considerable progress in developing deeper and more complex network architectures. However, despite the performance gains, the existing networks have been shown to be brittle and have several limitations and shortcomings. They require huge amounts of data to train, struggle with noisy and imbalanced datasets, do not generalize well to out-of-distribution data, and are vulnerable to shortcut learning and adversarial attacks [32]. While there have been studies on addressing these challenges individually, the majority of these specialized techniques and regularization approaches for overcoming a specific challenge lead to a trade-off in performance and do not provide a general solution [31].
Humans, on the other hand, excel at learning efficiently even under challenging scenarios with limited data and can generalize well to novel scenarios. Neuroscience has made substantial progress in understanding the mechanisms of brain functions and the design principles it employs to enable efficient learning [6; 7; 20; 23]. It is, therefore, important to further exploit insights from our enhanced understanding of the learning machinery of the brain into the development of AI algorithms.
We consider a recent study [27] which examines the organization of neurons in the cerebellum, an important learning site in the brain that resembles a three-layer feedforward network (see Figure 1). The neurons in the middle layer of the cerebellum are grouped into small populations which receive a personalized view of the entire error space. This is in stark contrast to standard ANNs, which
lack any such organization of neurons and each unit in the network receives the same error signal. We, thus, attempt to study the potential effect of a similar error-dependent organization of neurons in ANNs. To this end, in the object recognition task in ANNs, we consider the classification error associated with a learned semantic grouping of object classes as partial views of the error space and the corresponding set of disjoint subnetworks as populations that share a preference for a particular partial error view. From this perspective, our intended learning paradigm can be more aligned with tree-structured ANNs.
Thus, we consider SplitNet [17], originally proposed to improve inference speed and reduce the number of parameters, as a suitable tree-structured method to assess the potential of population coding in ANNs as we see many similarities between its network design and the error-dependent organization of neurons in the cerebellum (see Figure 1). Analogous to the grouping of neurons in the cerebellum, SplitNet learns to split the network weights into multiple groups that use disjoint sets of features. In particular, since the logit values associated with the semantically disparate class groups only depend on the corresponding branched subtrees of the network and not the other subtrees, each group (subtrees) receives a gradient signal which is biased towards correcting the error associated with their corresponding semantic group (partial error view), similar to how populations in the cerebellum share preference for a biased error view. Finally, similar to the cerebellum which receives a highly processed input, SplitNet has a shared layer that extracts features from the input data before splitting them into separate populations. Therefore, we consider SplitNet to bear some similarities to the population coding in the cerebellum and are therefore suitable for conducting our empirical study.
We assess the potential benefits of the error-based organization of neurons in the design of ANNs under varying training conditions and assess its effect on the learned model. Our empirical evaluation demonstrates the effectiveness of the considered architecture in improving the generalization of the model over standard training under challenging scenarios. It provides considerable performance gains under class imbalance which is inherent in real-world datasets and significantly improves the sample efficiency of the model, enabling the model to generalize better with fewer data. Additionally, our empirical results suggest that error-based organization of neurons can reduce the texture bias and vulnerability to unintended shortcut cues which improve generalization to out-of-distribution data. We attribute these improvements to the flexibility of the subnets to explore the feature space more and learn specialized features for the semantic groups. Furthermore, our analyses of the characteristics of
Figure 1: In _standard ANN_, there is no grouping of neurons within a layer, and the predictions from Layer 3 are compared to the ground truths and then the overall prediction error signal is fed back to every neuron in the network which adjusts their weights to minimize the loss. The _Cerebellum_ in the brain, on the other hand, has a vastly different design and learning mechanism. It resembles a three-layer feedforward network comprising granule cells as the input layer, Purkinje cells in the middle layer, and deep cerebellar nucleus (DCN) cells as the output layer. The predictions of each DCN neuron are compared to the actual observation, resulting in an error signal originating in the inferior olive. Unlike in standard ANN, the olive organizes Purkinje cells in the hidden layer into small populations which receive a limited personalized view of the error space. Similar to the cerebellum, the desired population coding based ANN (_PC-ANN_) would form an error-dependent grouping of neurons into populations that learn from partial error views from the classification error associated with learned semantic groupings of classes. We consider SplitNet as an instance of an architecture that bears some similarities to the desired population coding based architecture.
the model suggest that it compresses more information and converges to flatter minima. We would like to emphasize that all of these benefits come merely from the design of the network rather than explicit regularization or specific techniques for each scenario. Our empirical results highlight the potential of error-based grouping and partial error views based learning mechanisms in ANNs.
Our work aims to bring the perspective of population coding based design in ANNs and presents it as a promising direction for further research. We believe that exploring the design space of population coding based ANNs can lead to more reliable and robust models that may address some of the key challenges and limitations of current AI models.
## 2 Background and Methodology
Here, we provide the required background and the premises of our study. Section 2.1 first provides an overview of population coding in the cerebellum study [27] along with key insights and implications for ANNs, Section 2.2 provides a detailed overview of SplitNet, which is central to our analysis, and finally, Section 2.3 draws parallels between the two and explains how we aim to study the potential benefits of population coding in ANNs.
### Population Coding in the Cerebellum
The cerebellum is an important learning site in the brain [25; 32] and, therefore, several studies in neuroscience have scrutinized how efficient learning is enabled in the cerebellum [11; 12; 18; 19]. It has a relatively simple circuit architecture that resembles a three-layer feedforward network of neurons in which the "hidden layer" consists of Purkinje cells (P-cells), and the output layer consists of deep cerebellar nucleus (DCN) neurons. Our study focuses on the recent work of Shadmehr _et al._[27] which provides an extensive overview of the learning characteristics and organization of neurons in the cerebellum from a machine learning perspective and its implications. Unlike an ANN, P-cells are grouped into small populations that converge on single DCN neurons. Furthermore, the error signal conveyed to the P-Cells, which in turn act as surrogate teachers for the downstream DCN neurons they project to, is not a fair reflection of the entire error space, but is rather biased to provide a limited (personalized) view of the error space. This error-dependent grouping of P-cells into populations is believed to play a crucial role in enabling efficient learning in the cerebellum. Our study aims to bring this perspective to ANNs and to study the potential benefits of such an architecture.
### SplitNet
To this end, we consider an existing ANN architecture that bears some resemblance to a grouping of neurons with personalized error views. SplitNet [17] was originally proposed to optimize the inference speed of the model by learning a tree-structured network architecture that is highly parallelizable. The method involves splitting the network into a set of subnetworks that share a common lower layer and using a disjoint set of features for the specific group of classes associated with the subnetwork. SplitNet employs a two-stage learning scheme whereby in the first stage classes-to-group and features-to-group assignment matrices are learned along with the network weights while regularizing them to be disjoint across groups. The learned assignment matrices are then utilized to obtain a tree-structured network that involves no connection between branched subtrees of semantically disparate class groups, which are subsequently finetuned with the cross-entropy loss.
Concretely, for a given number of groups, \(G\), the feature-group assignment vector and the class-group assignment vector for group \(g\) (\(1\leq g\leq G\)) are given by \(p_{g}\in R^{D}\) and \(q_{g}\in R^{K}\), where \(D\) is the dimension of the features and \(K\) is the number of classes. \(p_{g}\) and \(q_{g}\) together define a group, where \(p_{g}\) represents the features associated with the group and \(q_{g}\) indicates the set of classes assigned to the group. The disjoint sets of classes and features are learned by imposing a constraint on the network weight at each layer \(W^{l}\) to be a block-diagonal matrix, where each block \(W^{l}_{g}\) is associated with a class group \(g\in G\). There is no overlap between groups, either in features or classes, so each disjoint group of classes has exclusive features associated with it. The regularization which assigns features and classes into disjoint groups consists of three objectives:
- _Group Weight Regularization_, \(R_{W}\), prunes out inter-group connections to obtain block-diagonal weight matrices by minimizing the off-block-diagonal entries;
\[R_{W}(W,P,Q)=\sum_{g}\sum_{i}\|((\mathbb{I}-P_{g})WQ_{g})_{i*}\|_{2}+\sum_{g} \sum_{j}\|(P_{g}W(\mathbb{I}-Q_{g}))_{*j}\|_{2} \tag{1}\]
where \(P_{g}=diag(p_{g})\) and \(Q_{g}=diag(q_{g})\) are the feature and class group assignment matrices for group \(g\), and \((M)_{i*}\) and \((M)_{*j}\) denote \(i\)-th row and \(j\)-th column of \(M\). Eq. 1 imposes row/column-wise \(l_{2,1}\)-norm on the inter-group connections.
- _Disjoint Group Assignment_, \(R_{D}\) ensures that the group assignment vectors are mutually exclusive by enforcing orthogonality;
\[R_{D}(P,Q)=\sum_{i<j}p_{i}\cdot p_{j}+\sum_{i<j}q_{i}\cdot q_{j} \tag{2}\]
- _Balanced Group Assignment_, \(R_{E}\) encourages the group assignments to be uniformly distributed by minimizing the squared sum of elements in each group assignment vector.
\[R_{E}(P,Q)=\sum_{g}\Big{(}(\sum_{i}p_{gi})^{2}+(\sum_{j}q_{gj})^{2}\Big{)} \tag{3}\]
Therefore, the overall regularization loss is as follows;
\[\Omega(W,P,Q)=\lambda_{1}R_{W}(W,P,Q)+\lambda_{2}R_{D}(P,Q)+\lambda_{3}R_{E}( P,Q) \tag{4}\]
where \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) control the strength of each regularization. For more details, see [17].
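For concreteness, the three regularizers can be written down directly. The following PyTorch sketch is our own rendering of Eqs. (1)-(3) for a single \(D\times K\) weight matrix with relaxed (soft) assignment matrices; names and shapes are assumptions, and the \(\lambda\) defaults follow the values used later in Section 3:

```python
# Sketch of the SplitNet regularizers (Eqs. 1-3); W: (D, K) weights,
# p: (G, D) feature-group assignments, q: (G, K) class-group assignments.
import torch

def R_W(W, p, q):
    """Group weight regularization: l_{2,1}-norms of inter-group blocks."""
    loss = W.new_zeros(())
    for g in range(p.shape[0]):
        pg, qg = p[g].unsqueeze(1), q[g].unsqueeze(0)       # (D, 1), (1, K)
        off1 = (1 - pg) * W * qg    # rows outside group g hitting its classes
        off2 = pg * W * (1 - qg)    # group-g features hitting other classes
        loss = loss + off1.norm(dim=1).sum() + off2.norm(dim=0).sum()
    return loss

def R_D(p, q):
    """Disjoint group assignment: penalize pairwise overlap (i < j)."""
    mask = torch.triu(torch.ones(p.shape[0], p.shape[0]), diagonal=1)
    return ((p @ p.T) * mask).sum() + ((q @ q.T) * mask).sum()

def R_E(p, q):
    """Balanced group assignment: squared sums per group."""
    return (p.sum(dim=1) ** 2).sum() + (q.sum(dim=1) ** 2).sum()

def omega_reg(W, p, q, lam=(1.0, 2.0, 10.0)):
    return lam[0] * R_W(W, p, q) + lam[1] * R_D(p, q) + lam[2] * R_E(p, q)
```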
### Studying the potential benefits of Population Coding in ANNs
The resemblance of the cerebellum to a feedforward network and a preliminary understanding of the error-driven organization of neurons and the learning mechanisms it employs provide us with an opportunity to study the benefits of such an architecture in ANNs. Standard learning consists of evaluating an overall error term (e.g. mean cross-entropy loss over a training batch) and subsequently updating each neuron's weight in the gradient direction, which minimizes the loss term. As explained in Section 2.1, this is in stark contrast to how the cerebellum learns, and therefore we aim to study the potential impact of a similar error-dependent grouping of neurons into populations and subsequently learning from partial error views in ANNs.
To this end, we first define the framework within which we conduct our study by drawing parallels under the classification task in ANNs. We aim to learn a semantically disparate grouping of object classes which can be represented by a disjoint set of features. Semantically similar classes are likely to share features and meaningfully partition the input space. Therefore, the classification error associated with each semantic group can provide a personalized view of the error space, which can be subsequently utilized to learn specialized features in the associated subnetwork (population of neurons). This learning paradigm naturally lends itself to tree-structured network architectures such as SplitNet. Figure 1 shows the similarities between the cerebellum and the structure and learning dynamics of SplitNet (referred to as PC-ANN for emphasis). Notably, a closer look at the backpropagation of errors in SplitNet reveals an intriguing property that makes it suitable for our study as an instance of an ANN architecture that bears similarity to the population coding in the cerebellum: the logits for each class in a semantic group depend only on the associated subtree (population), which therefore receives an error signal biased towards correcting the error associated with the semantic group (partial error). For instance, consider the scenario where we have two semantic groups: living and non-living, and the input image is of a cat. The logit values for non-living classes are provided by the associated subtree and vice versa. Hence, as the error signal for each unit depends on its involvement in the forward pass, the subtree for the living semantic group will receive an error signal biased towards correcting the error associated with the logits for living classes, and similarly for the non-living subtree. Therefore, we posit that SplitNet implicitly utilizes partial error views to create specialized populations of neurons. Studying the performance and characteristics of such a network enables us to gauge the potential benefits of mimicking population coding in ANNs.
## 3 Experimental Setup
To ensure a fair comparison, we compare the standard training and population coding based training paradigm under uniform experimental settings. Following Kim _et al._[17], we employ WRN-16-8 [35] for both baseline (Standard-ANN) and SplitNet experiments. Unless otherwise stated, we use the following learning scheme: random horizontal flip and random crop data augmentations with reflective padding of 4 and mean standard normalization; Adam optimizer with \(5e^{-4}\) weight decay; 100 epochs; the batch size of 64; and an initial learning rate of \(1e^{-4}\), decayed by a factor of 0.1 at epochs 10, 30 and 50. For SplitNet, we use a 2-way split (i.e. \(G=2\)) at the final linear layer. For all our experiments, we use \(\lambda_{1}=1\), \(\lambda_{2}=2\) and \(\lambda_{3}=10\). For evaluation, we report the mean and one standard deviation of 3 runs with different seeds.
## 4 Empirical Evaluation
To study the potential benefits of incorporating a similar mechanism for population coding in ANN, we evaluate the characteristics and learning behavior of SplitNet in various challenging scenarios. Therefore, we refer to SplitNet as an instance of the desired population coding-based ANN (PC-ANN), the subnetworks as populations, and the classification error associated with the learned semantic grouping of classes as partial error views to emphasize our focus on studying the potential impact of a similar mechanism of population coding in ANNs as the cerebellum.
### Performance
To test the versatility of the models, we consider multiple datasets of varying complexity. Table 1 shows that PC-ANN consistently leads to generalization gains across datasets, especially in more complex datasets where both the number of classes and the interclass similarity are higher. The results suggest that PC-ANN is capable of learning useful semantic groups and learning efficiently with partial error views. We believe that partial error views that provide a signal to the corresponding populations enable the model to explore the feature space more extensively and learn specialized features for each semantic group, which can help the model avoid the pitfalls of narrow learning [30].
### Out-of-Distribution (OOD) Generalization
A long-standing challenge for AI is its inability to generalize well to OOD data, while humans excel at generalizing to novel situations. To test whether population coding enables the model to learn more generalizable features, we consider two challenging scenarios. We first utilize the cleaned version of the DomainNet dataset [24] that consists of data from different domains on 345 object classes. We train the models on the real domain and use the painting, clip art, sketch, and infograph domains for
\begin{table}
\begin{tabular}{l|c c c} \hline \hline & Cifar-10 & Cifar-100 & Tiny-ImageNet \\ \hline Baseline & 92.49 \(\pm\)0.25 & 73.65 \(\pm\)0.18 & 49.14 \(\pm\)0.49 \\ PC-ANN & **93.24**\(\pm\)0.21 & **75.33**\(\pm\)0.47 & **53.02**\(\pm\)0.22 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Test accuracy on different datasets. PC-ANN consistently improves the generalization of the model across datasets of varying complexity, demonstrating its versatility.
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline \hline & \multicolumn{2}{c|}{Cifar-10} & \multicolumn{2}{c|}{Cifar-100} & \multicolumn{2}{c}{Tiny-ImageNet} \\ \hline \(\gamma\) & Baseline & PC-ANN & Baseline & PC-ANN & Baseline & PC-ANN \\ \hline
2 & 78.02 \(\pm\)0.68 & **79.85**\(\pm\)0.61 & 47.42 \(\pm\)0.49 & **51.55**\(\pm\)0.29 & 23.67 \(\pm\)0.08 & **28.46**\(\pm\)0.07 \\
1 & 74.59 \(\pm\)0.42 & **75.84**\(\pm\)0.85 & 36.87 \(\pm\)0.34 & **42.35**\(\pm\)0.45 & 17.74 \(\pm\)0.29 & **21.67**\(\pm\)0.17 \\
0.6 & 72.84 \(\pm\)0.68 & **74.66**\(\pm\)0.51 & 34.02 \(\pm\)0.46 & **38.41**\(\pm\)0.43 & 15.75 \(\pm\)0.79 & **19.54**\(\pm\)0.16 \\
0.2 & 71.68 \(\pm\)0.24 & **73.57**\(\pm\)0.90 & 30.43 \(\pm\)0.81 & **35.10**\(\pm\)0.49 & 13.57 \(\pm\)0.42 & **17.45**\(\pm\)0.12 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of models trained under various levels of class imbalance. Note that the degree of imbalance increases as \(\gamma\) reduces. PC-ANN provides consistent generalization gains over baseline under varying degrees of class imbalance, particularly for higher imbalance on more complex datasets.
our OOD testing. We also consider different variants of the ImageNet dataset [2]. ImageNet-R [8] and ImageNet-B [8] contain, respectively, images from 11 different renditions and real blurry images from a subset of 100 ImageNet classes. ImageNet-A [10], in turn, provides a dataset of naturally occurring adversarial examples. We test the models trained on Tiny-ImageNet on the common subset of classes within each of these ImageNet variants for OOD evaluation.
PC-ANN provides better generalization across all the domains of DomainNet (Figure 2). We observe similar gains on different variants of the ImageNet datasets (Figure 3). Although the difference in ImageNet-A is minor, it provides early evidence that having separate subnetworks may improve adversarial robustness. We hypothesize that the improvement in OOD generalization with PC-ANN may be attributed to learning a specialized set of features for the learned semantic groups.
### Imbalanced Datasets
The majority of the benchmark datasets have a uniform distribution of samples across the object classes. However, class imbalance is naturally inherent in the real world, where some objects are more prevalent than others, or it is relatively easier to obtain more data for certain classes. Standard training exhibits bias towards the prevalent classes at the expense of minority classes [3], leading to a significant drop in generalization performance. While several approaches have been proposed for efficiently training models under class imbalance [15], they employ specialized techniques for tackling class imbalance or make certain assumptions about the distribution of data; we still lack a general method that improves the robustness of the underlying learning paradigm.
To evaluate the robustness of PC-ANN to class imbalance, we simulate varying degrees of class imbalance on different datasets. We follow [9] and employ the power law model in which the number of training samples for a class \(c\) is given by \(n_{c}=\lfloor a/((c-1)^{-\gamma}+b)\rceil\), where \(\lfloor.\rceil\) is the integer rounding function, \(\gamma\) represents an imbalance ratio, \(a\) and \(b\) are offset parameters to specify the largest and smallest class sizes. The training data becomes a power-law class distribution as the imbalance ratio \(\gamma\) decreases. We compare the performance of PC-ANN with the standard ANN on varying degrees of class imbalance \(\gamma\in\{2.0,1.0,0.6,0.20\}\) as the \(\gamma\) value decreases, the class
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline \hline \multirow{2}{*}{Samples (\%)} & \multicolumn{2}{c|}{Cifar-10} & \multicolumn{2}{c|}{Cifar-100} & \multicolumn{2}{c}{Tiny-ImageNet} \\ \cline{2-7} & Baseline & PC-ANN & Baseline & PC-ANN & Baseline & PC-ANN \\ \hline
100 & 92.49 \(\pm\)0.25 & **93.24**\(\pm\)0.21 & 73.67 \(\pm\)0.18 & **75.33**\(\pm\)0.47 & 49.14 \(\pm\)0.49 & **53.02**\(\pm\)0.22 \\
50 & 88.69 \(\pm\) 0.23 & **90.34**\(\pm\)0.16 & 65.22 \(\pm\)0.21 & **68.35**\(\pm\)0.37 & 40.27 \(\pm\)0.30 & **46.07**\(\pm\)0.12 \\
20 & 80.89 \(\pm\)0.24 & **84.02**\(\pm\)0.35 & 48.93 \(\pm\)0.55 & **53.42**\(\pm\)0.54 & 26.04 \(\pm\)1.00 & **32.73**\(\pm\)0.46 \\
10 & 73.13 \(\pm\)0.42 & **76.36**\(\pm\)0.43 & 35.58 \(\pm\)0.39 & **40.95**\(\pm\)0.34 & 18.66 \(\pm\)0.37 & **23.09**\(\pm\)0.13 \\
5 & 63.48 \(\pm\)0.86 & **67.28**\(\pm\)0.09 & 23.77 \(\pm\)0.42 & **28.07**\(\pm\)0.65 & 11.83 \(\pm\)0.45 & **15.21**\(\pm\)0.23 \\
1 & 42.64 \(\pm\)0.20 & **44.03**\(\pm\)0.16 & 8.52 \(\pm\)0.27 & **9.47**\(\pm\)0.10 & 4.35 \(\pm\)0.23 & **4.83**\(\pm\)0.05 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance of the models trained on a different percentage of the training samples. PC-ANN improves the sample efficiency of the model, allowing it to achieve higher performance with less amount of training data.
Figure 2: Generalization of models trained on real photos and tested on various out-of-distribution domains. Population coding enables the model to learn better generalizable features, leading to improved OOD generalization.
imbalance increases. \((a,b)\) are set so that the maximum and minimum class counts are (5000, 250) for CIFAR-10, and (500, 25) for both CIFAR-100 and Tiny-ImageNet.
Table 2 shows that PC-ANN consistently provides a considerable performance improvement over standard ANNs, especially for more complex datasets with high degrees of class imbalance without any explicit regularization. We believe that a major shortcoming of standard ANNs is that there is no division or specialization of neurons as each unit is involved in correcting the prediction for every input. Therefore, an imbalanced batch significantly affects the performance of the model as the entire network is adjusted to reduce the loss on the imbalanced batch, thus preferring the dominant class at the expense of less sampled classes. On the contrary, the partial error views and disjoint subtrees in PC-ANN provide more protection to parts of the network, providing implicit regularization. Furthermore, it can take the prevalence of classes into account while grouping them to mitigate the impact of dominant classes on the performance of minority classes, which builds robustness into the learning framework itself.
### Sample Efficiency
Learning complex concepts from a few examples is a hallmark of human intelligence [34], whereas it remains a challenge for ANNs, which are data-hungry and require an abundant amount of labeled data to generalize well [2]. This limits their application in limited-data regimes [33]. We believe that mimicking the learning machinery of the brain may lead to models that can generalize better under a low data regime. To this end, we compare the performance of the models trained on subsets of different datasets, where we use only \(p\in\{1,5,10,20,50\}\) percent of the training data and test on the full test set. Table 3 shows that PC-ANN consistently provides better generalization compared to standard ANNs, suggesting that it can learn efficiently with limited data. Notably, the performance gains are higher for complex datasets, where both the number of classes and their interclass similarities are higher. We hypothesize that the grouping of neurons into populations allows each population to explore different regions in the feature space, enabling the model to learn more efficiently from partial error views of fewer data.
### Shortcut Learning
Shortcuts are decision rules that perform well on standard benchmarks but fail to transfer to more challenging testing conditions, including real-world scenarios [4]. As the models are typically trained to maximize the training accuracy, they are quite likely to rely on spurious correlations: associations that are predictive of the training data, but not valid for the actual task. A major challenge for enabling efficient learning in ANNs is therefore to control the sensitivity of the training process to such spurious correlations. To evaluate the susceptibility of the model to shortcut learning, we follow the analysis in [14] and consider a gender classification task based on the CelebA dataset [22] (CelebA-Skewed), where the training dataset is biased so that it only contains blond females and non-blond males. Therefore, hair color is highly predictive on the training data but not on the test data, where hair color and gender are independent attributes. Therefore, this may result in a decision rule based
Figure 4: Shortcut learning analysis on CelebA-Skewed. PC-ANN considerably reduces the susceptibility of the model to learn the blonde color shortcut strategy as can be seen in the performance gap on Blonde-M and Non-Blond-F.
Figure 3: Generalization of models trained on Tiny-ImageNet and tested on common classes in different variants of ImageNet dataset. PC-ANN consistently provides better OOD generalization and shows potential for improving the adversarial robustness of the model.
only on hair color. We use the same training scheme and only change the learning rate decay steps to 60 and 90 epochs.
Figure 4 shows that PC-ANN is in fact less vulnerable to shortcut learning and significantly improves generalization compared to standard ANN. Particularly, we see considerable gain in generalization to non-blond females and blond males without any explicit regularization. To better understand how the two models make decisions, we use the Grad-CAM [26] approach to examine the visual explanations of the models. We use the penultimate layer to extract the feature embeddings and use a threshold of 0.4 on the attention maps. Figure 5 shows that population coding remarkably enables the model to attend to the salient features of the face to distinguish between the genders, while standard training relies more on the unintended shortcut cue of the hair color, thus paying more attention to the hair region to inform its decision. The remarkable ability of PC-ANN to avoid the shortcut cue without any explicit regularization can also be attributed to the flexibility of PC-ANN to learn specialized disjoint features for each gender and avoid narrow learning.
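For readers wishing to reproduce such attention maps, a bare-bones Grad-CAM routine in the spirit of [26] is sketched below; `model` and `layer` (a trained CNN and a chosen convolutional layer) are assumed names, and the 0.4 threshold from the text would be applied to the returned map:

```python
# Minimal Grad-CAM sketch: class-specific heat map for one image.
import torch
import torch.nn.functional as F

def grad_cam(model, layer, image, class_idx):
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    logits = model(image.unsqueeze(0))
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()
    w = grads["g"].mean(dim=(2, 3), keepdim=True)   # channel importance
    cam = F.relu((w * acts["a"]).sum(dim=1, keepdim=True)).detach()
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    return (cam / cam.max().clamp(min=1e-8)).squeeze()

# e.g. mask = grad_cam(model, last_conv, img, pred) > 0.4
```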
## 5 Characteristics Analysis
Here, we analyze the characteristics of PC-ANN to gain insight into generalization improvements.
### Texture Bias
Geirhos _et al._[5] conducted a comparative study on convolutional neural networks (CNNs) and human observers on images with a texture-shape cue conflict. Their study revealed that, in sharp contrast to humans, CNNs are strongly biased towards recognizing texture instead of shape, alluding to fundamentally different classification strategies. They further showed that models that learn shape-related features are more robust and generalizable whereas models that rely on texture are susceptible to shortcut learning and result in poor generalization. As PC-ANNs bear some similarities
Figure 5: Visual explanations of the models trained on CelebA-Skewed. Attention maps suggest that PC-ANN relies on the salient features of the face to predict gender, while standard training relies more on the unintended shortcut cue (blond hair color in this case).
Figure 6: Texture bias analysis of models trained on Tiny-ImageNet under varying degrees of stylization. PC-ANN provides higher generalization on the stylized images, indicating lower texture bias compared to standard ANNs. The images on the right show the original images and three style images whereas the images on the graph show stylized test images at different strengths.
Figure 7: Training accuracy of models on Tiny-ImageNet under varying degrees of weight perturbations. PC-ANN is more stable to weight perturbations, indicating convergence to flatter minima.
to population coding in the cerebellum, we aim to investigate whether they exhibit behavior that is closer to humans than standard ANNs.
Following [5], we evaluate the texture bias of the model by applying style transfer [13] to the Tiny-ImageNet test images. We use four different style images and apply style transfer with varying strengths, i.e., style alpha \(\in\{0.2,0.4,0.6\}\), so that only the shape of the image corresponds to the correct label. Figure 6 shows that PC-ANN is able to generalize better under varying stylization strengths, suggesting that it is less biased toward the texture of the image.
### Convergence to Flatter Minima
As the loss landscape of a DNN's optimization objective is non-convex, there can be multiple solutions that fit the training data; some solutions, however, generalize better because they lie in wider valleys, where the model predictions do not change drastically with small perturbations in the parameter space, as opposed to narrow crevices [16; 1; 26]. To assess whether PC-ANN converges to wider minima, we follow the analysis in [36]: we add independent Gaussian noise of increasing strength to the parameters of the trained model and analyze the accuracy of the trained models on the training dataset. Figure 7 shows that the performance of PC-ANN is more stable under the perturbations, suggesting convergence to wider minima.
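The perturbation probe itself is straightforward to reproduce; a sketch of the procedure (our own helper, with assumed names) is:

```python
# Sketch: training accuracy after adding N(0, sigma^2) noise to all weights.
import copy
import torch

@torch.no_grad()
def perturbed_train_accuracy(model, loader, sigma, device="cpu"):
    noisy = copy.deepcopy(model).to(device).eval()
    for param in noisy.parameters():
        param.add_(sigma * torch.randn_like(param))
    correct = total = 0
    for x, y in loader:
        pred = noisy(x.to(device)).argmax(dim=1)
        correct += (pred == y.to(device)).sum().item()
        total += y.numel()
    return correct / total

# sweep, e.g.: [perturbed_train_accuracy(model, train_loader, s)
#               for s in (0.01, 0.02, 0.05, 0.1)]
```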
### Information Compression
A number of studies that view ANNs from an information theory perspective [28; 29] relate the degree to which ANNs compress the information in their hidden states to bounds on generalization, with higher information compression leading to a stronger bound. To evaluate the effect of population coding on the compression of information in the learned representation, we follow the analysis in [21] by freezing the learned representation of the model and measuring how well the frozen representations can fit random labels. We add a 2-layer multi-layer perceptron (MLP) network with 400 and 200 neurons on top of the frozen models trained on the different datasets and fit them on random binary labels. Table 4 shows that PC-ANN enables higher information compression, suggesting that the disjoint set of features in PC-ANN allows the model to learn optimal representations that compress higher semantic information.
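A sketch of this probe, assuming the frozen embeddings of the training set have been precomputed into a tensor `features`, reads:

```python
# Sketch of the random-label probe: fit a 2-layer MLP (400, 200 units)
# on frozen features with random binary labels; lower final accuracy
# indicates higher information compression.
import torch
import torch.nn as nn

def random_label_fit(features, epochs=200, lr=1e-3):
    labels = torch.randint(0, 2, (features.shape[0],))
    head = nn.Sequential(nn.Linear(features.shape[1], 400), nn.ReLU(),
                         nn.Linear(400, 200), nn.ReLU(),
                         nn.Linear(200, 2))
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.cross_entropy(head(features), labels).backward()
        opt.step()
    with torch.no_grad():
        return (head(features).argmax(1) == labels).float().mean().item()
```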
## 6 Conclusion
We conducted an empirical study to explore the potential benefits of drawing insights from neuroscience findings to the development of AI algorithms. Here, we focused on the recent study [27], which explains the error-based organization of neurons in the cerebellum from a machine learning perspective and attempted to draw parallels with an existing tree-structured ANN. Our empirical evaluation of the considered architecture shows improved robustness to class imbalance and shortcut learning, efficient learning under limited data, and reduced texture bias. Furthermore, the characteristic analyses demonstrate that it compresses higher information in the hidden states, and converges to flatter minima. We hypothesize that these benefits are a consequence of the architecture that resembles population coding in the cerebellum, and further work to explicitly mimic the error-based grouping of neurons in ANN is a promising research direction.
**Limitations and Future work:** Our study focused on the object recognition task where a meaningful semantic grouping of the classes is possible and utilizes an existing suitable tree-based architecture. As such, the network does not explicitly mimic population coding in the cerebellum, and it is not straightforward to employ it for other tasks (e.g. regression) or when semantic grouping is not possible. We hope that our study inspires exploration of this idea in different domains.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline & Cifar-10 & Cifar-100 & Tiny-ImageNet \\ \hline Baseline & 51.80 \(\pm\)1.08 & 92.50 \(\pm\)2.57 & 74.37 \(\pm\)3.89 \\ PC-ANN & **51.60**\(\pm\)0.65 & **90.58**\(\pm\)2.21 & **71.33**\(\pm\)0.78 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparative analysis on the degree to which a model with frozen learned representations can fit random binary labels. Lower training accuracy indicates higher information compression.
Potential focus areas for future work include better strategies for forming error-based groupings of neurons and partial error views and aligning them to minimize the global task error, intertwining the population formation and learning from partial views instead of two separate stages of learning, and explicitly biasing the update rule of the populations towards the partial views while also varying the strength of the update in different layers.
|
2309.11498 | Constrained quantization for a uniform distribution with respect to a
family of constraints | In this paper, with respect to a family of constraints for a uniform
probability distribution we determine the optimal sets of $n$-points and the
$n$th constrained quantization errors for all positive integers $n$. We also
calculate the constrained quantization dimension and the constrained
quantization coefficient. The work in this paper shows that the constrained
quantization dimension of an absolutely continuous probability measure depends
on the family of constraints and is not always equal to the Euclidean dimension
of the underlying space where the support of the probability measure is
defined. | Megha Pandey, Mrinal K. Roychowdhury | 2023-09-20T17:56:55Z | http://arxiv.org/abs/2309.11498v2 | # Constrained quantization for a uniform distribution with respect to a family of constraints
###### Abstract.
In this paper, with respect to a family of constraints for a uniform probability distribution we determine the optimal sets of \(n\)-points and the \(n\)th constrained quantization errors for all positive integers \(n\). We also calculate the constrained quantization dimension and the constrained quantization coefficient. The work in this paper shows that the constrained quantization dimension of an absolutely continuous probability measure depends on the family of constraints and is not always equal to the Euclidean dimension of the underlying space where the support of the probability measure is defined.
Key words and phrases:Probability measure, constrained quantization error, optimal sets of \(n\)-points, constrained quantization dimension, constrained quantization coefficient. 2010 Mathematics Subject Classification: 60Exx, 94A34
## 1. Introduction
Let \(P\) be a Borel probability measure on \(\mathbb{R}^{k}\) equipped with a metric \(d\) induced by a norm \(\|\cdot\|\) on \(\mathbb{R}^{k}\), and \(r\in(0,\infty)\). Then, for \(n\in\mathbb{N}\), the _\(n\)th constrained quantization error_ for \(P\), of order \(r\) with respect to a family of constraints \(\{S_{j}:j\in\mathbb{N}\}\) with \(S_{1}\) nonempty, is defined as
\[V_{n,r}:=V_{n,r}(P)=\inf\Big{\{}\int\min_{a\in\alpha}d(x,a)^{r}dP(x):\alpha \subseteq\bigcup_{j=1}^{n}S_{j},\ 1\leq\text{card}(\alpha)\leq n\Big{\}}, \tag{1}\]
where \(\text{card}(A)\) represents the cardinality of the set \(A\). The number
\[V_{r}(P;\alpha):=\int\min_{a\in\alpha}d(x,a)^{r}dP(x)\]
is called the distortion error for \(P\), of order \(r\), with respect to a set \(\alpha\subseteq\mathbb{R}^{k}\). Write \(V_{\infty,r}(P):=\lim\limits_{n\to\infty}V_{n,r}(P)\). Then, the number \(D_{r}(P)\) defined by
\[D_{r}(P):=\lim_{n\to\infty}\frac{r\log n}{-\log(V_{n,r}(P)-V_{\infty,r}(P))},\]
if it exists, is called the _constrained quantization dimension_ of \(P\) of order \(r\) and is denoted by \(D_{r}(P)\). For any \(\kappa>0\), the number
\[\lim_{n\to\infty}n^{\frac{r}{\kappa}}(V_{n,r}(P)-V_{\infty,r}(P)), \tag{2}\]
if it exists, is called the _\(\kappa\)-dimensional constrained quantization coefficient_ for \(P\) of order \(r\). Constrained quantization has recently been introduced, see [PR1, PR2]. Unconstrained quantization, which traditionally in the literature is known as quantization, is a special case of constrained quantization. For unconstrained quantization, one can see [DFG, DR, GG, GL, GL1, GL2, GL3, GN, KNZ, P, P1, R1, R2, R3, Z1, Z2]. If \(\int d(x,0)^{r}dP(x)<\infty\) is satisfied, then the infimum in (1) exists (see [PR1]). A set \(\alpha\subseteq\bigcup\limits_{j=1}^{n}S_{j}\) for which the infimum in (1) exists and does not contain more than \(n\) elements is called an _optimal set of \(n\)-points_ for \(P\). In unconstrained quantization, the optimal sets of \(n\)-points are referred to as _optimal sets of \(n\)-means_. This paper deals with the Euclidean metric induced by the Euclidean norm \(\|\cdot\|\). Thus, instead of writing \(V_{r}(P;\alpha)\) and \(V_{n,r}:=V_{n,r}(P)\) we will write them as \(V(P;\alpha)\) and \(V_{n}:=V_{n}(P)\). Let us take the family \(\{S_{j}:j\in\mathbb{N}\}\) of constraints, that occurs in (1) as follows:
\[S_{j}=\{(x,y):-\frac{1}{j}\leq x\leq 1\text{ and }y=x+\frac{1}{j}\} \tag{3}\]
for all \(j\in\mathbb{N}\). Let \(P\) be a Borel probability measure on \(\mathbb{R}^{2}\) which has support the closed interval \(\{(x,y):0\leq x\leq 1,\,y=0\}\). Moreover, \(P\) is uniform on its support. In this paper we determine the optimal sets \(\alpha_{n}\) of \(n\)-points for \(P\) such that
\[\alpha_{n}\subseteq\bigcup_{j=1}^{n}S_{j}\]
for all \(n\in\mathbb{N}\). We also calculate the constrained quantization dimension and the constrained quantization coefficient for \(P\). As mentioned in Remark 4.3, unlike the unconstrained quantization dimension of an absolutely continuous probability measure, the constrained quantization dimension of an absolutely continuous probability measure depends on the family of constraints and is not always equal to the Euclidean dimension of the space where the support of the probability measure is defined (for some more details, one can see [12]).
## 2. Preliminaries
In this section, we give some basic notations and definitions which we have used throughout this paper. Notice that for any elements \(p,q\in\mathbb{R}^{2}\), if \(e\) is an element on the boundary of their Voronoi regions, then
\[\rho(p,e)-\rho(q,e)=0,\]
where for any two elements \((a,b)\) and \((c,d)\) in \(\mathbb{R}^{2}\), \(\rho((a,b),(c,d))\) represents the squared Euclidean distance between the two elements. Such an equation is known as a _canonical equation_. Let \(P\) be a Borel probability measure on \(\mathbb{R}^{2}\) which is uniform on its support, the line segment given by
\[J:=\{(x,y):0\leq x\leq 1\text{ and }y=0\}.\]
Let \(P_{1},P_{2}\) be the marginal distributions of \(P\), i.e., \(P_{1}(A)=P(A\times\mathbb{R})\) for all \(A\in\mathfrak{B}\), and \(P_{2}(B)=P(\mathbb{R}\times B)\) for all \(B\in\mathfrak{B}\), where \(\mathfrak{B}\) is the Borel \(\sigma\)-algebra on \(\mathbb{R}\). Then, \(P=P_{1}\times P_{2}\). Thus, for any \(A\in\mathfrak{B}\) and \(B\in\mathfrak{B}\), \(P(A\times B)=P_{1}(A)P_{2}(B)\). Since \(P\) has support \([0,1]\times\{0\}\), we have
\[1=P([0,1]\times\{0\})=P_{1}([0,1])P_{2}(\{0\}),\]
i.e., \(P_{1}([0,1])=1\) and \(P_{2}(\{0\})=1\), i.e., \(P_{1}\) and \(P_{2}\) have supports \([0,1]\) and \(\{0\}\), respectively. The probability density function \(f(x,y)\) for \(P\) is given by
\[f(x,y)=\left\{\begin{array}{ll}1&\text{ if }0\leq x\leq 1\text{ and }y=0,\\ 0&\text{ otherwise.}\end{array}\right.\]
Moreover,
\[dP(x,0)=P(dx\times\{0\})=P_{1}(dx)=dP_{1}(x)=f(x,0)dx.\]
Thus, we see that saying the probability distribution \(P\) is uniform on its support \(J\) is equivalent to saying that \(P_{1}\) is the uniform distribution on the closed interval \([0,1]\). Let \(\mathbf{X}=(X,Y)\) be a random vector with probability distribution \(P\). Then, notice that for any \(0\leq a<b\leq 1\), we have
\[E(\mathbf{X}:\mathbf{X}\in[a,b]\times\{0\})=\int_{[a,b]\times\{0\}}(x,0)dP(x,0 )=\Big{(}\int_{a}^{b}x\,dP_{1}(x),0\Big{)}=\Big{(}\frac{a+b}{2},0\Big{)}. \tag{4}\]
With respect to a finite set \(\alpha\subset\mathbb{R}^{2}\), by the _Voronoi region_ of an element \(a\in\alpha\), it is meant the set of all elements in \(\mathbb{R}^{2}\) which are nearest to \(a\) among all the elements in \(\alpha\), and is denoted by \(M(a|\alpha)\).
In this paper, we investigate the constrained quantization for the probability measure \(P\) with respect to the family of constraints given by
\[S_{j}=\{(x,y):-\frac{1}{j}\leq x\leq 1\text{ and }y=x+\frac{1}{j}\}\text{ for all }j\in\mathbb{N}, \tag{5}\]
i.e., the constraints \(S_{j}\) are the line segments joining the points \((-\frac{1}{j},0)\) and \((1,1+\frac{1}{j})\), which are parallel to the line \(y=x\). The perpendicular to a constraint \(S_{j}\) passing through a point \((x,x+\frac{1}{j})\in S_{j}\) intersects
the real line at the point \((2x+\frac{1}{j},0)\) where \(-\frac{1}{j}\leq x\leq 1\); and it intersects \(J\) if \(0\leq 2x+\frac{1}{j}\leq 1\), i.e., if
\[-\frac{1}{2j}\leq x\leq\frac{1}{2}-\frac{1}{2j}. \tag{6}\]
Thus, for all \(j\in\mathbb{N}\), there exists a one-one correspondence between the elements \((x,x+\frac{1}{j})\) on \(S_{j}\) and the elements \((2x+\frac{1}{j},0)\) on the real line if \(-\frac{1}{j}\leq x\leq 1\). Thus, for all \(j\in\mathbb{N}\), there exist bijective functions \(U_{j}\) such that
\[U_{j}(x,x+\frac{1}{j})=(2x+\frac{1}{j},0)\text{ and }U_{j}^{-1}(x,0)=\Big{(} \frac{1}{2}(x-\frac{1}{j}),\frac{1}{2}(x-\frac{1}{j})+\frac{1}{j}\Big{)}, \tag{7}\]
where \(-\frac{1}{j}\leq x\leq 1\).
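These bijections are elementary to implement; the following sketch encodes (7) and checks the round trip on a sample point of \(S_{4}\):

```python
# Sketch of the bijections U_j and U_j^{-1} from (7); points are (x, y) pairs.
def U(j, point):
    x, _ = point                        # point lies on S_j, so y = x + 1/j
    return (2 * x + 1 / j, 0.0)

def U_inv(j, point):
    x, _ = point                        # point lies on the x-axis, so y = 0
    half = (x - 1 / j) / 2
    return (half, half + 1 / j)

j = 4
p = (0.25, 0.25 + 1 / j)
assert U_inv(j, U(j, p)) == p           # round trip on S_4
```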
In the following sections we give the main results of the paper.
## 3. Optimal sets of \(n\)-points and the \(n\)th constrained quantization errors
In this section, we calculate the optimal sets of \(n\)-points and the \(n\)th constrained quantization errors for all \(n\in\mathbb{N}\). Let us first give the following lemma.
**Lemma 3.1**.: _Let \(\alpha_{n}\subseteq\mathop{\cup}\limits_{j=1}^{n}S_{j}\) be an optimal set of \(n\)-points for \(P\) such that_
\[\alpha_{n}:=\{(a_{j},b_{j}):1\leq j\leq n\},\]
_where \(a_{1}<a_{2}<a_{3}<\cdots<a_{n}\). Then, \(\alpha_{n}\subseteq S_{n}\) and \((a_{j},b_{j})=U_{n}^{-1}(E(\textbf{X}:\textbf{X}\in M((a_{j},b_{j})|\alpha_{n })))\), where \(M((a_{j},b_{j})|\alpha_{n})\) are the Voronoi regions of the elements \((a_{j},b_{j})\) with respect to the set \(\alpha_{n}\) for \(1\leq j\leq n\)._
Proof.: Let \(\alpha_{n}:=\{(a_{j},b_{j}):1\leq j\leq n\}\), as given in the statement of the lemma, be an optimal set of \(n\)-points. Take any \((a_{q},b_{q})\in\alpha_{n}\). Since \(\alpha_{n}\subseteq\mathop{\cup}\limits_{j=1}^{n}S_{j}\), we can assume that \((a_{q},b_{q})\in S_{t}\), i.e., \(b_{q}=a_{q}+\frac{1}{t}\) for some \(1\leq t\leq n\). Since the Voronoi region of \((a_{q},b_{q})\), i.e., \(M((a_{q},b_{q})|\alpha_{n})\) has positive probability, we can assume that \(M((a_{q},b_{q})|\alpha_{n})\) intersects the support of \(P\) at the points \((a,0)\) and \((b,0)\), where \(0\leq a<b\leq 1\). Hence, the distortion error contributed by \((a_{q},b_{q})\) in its Voronoi region \(M((a_{q},b_{q})|\alpha_{n})\) is given by
\[\int_{M((a_{q},b_{q})|\alpha_{n})}\rho((x,0),(a_{q},b_{q}))\,dP= \int_{a}^{b}\|(x,0)-(a_{q},b_{q})\|^{2}\,dx\] \[=\frac{1}{3}(b-a)\Big{(}a^{2}-3(a+b)a_{q}+ab+3a_{q}^{2}+b^{2}+3b_ {q}^{2}\Big{)}\] \[=\frac{1}{3}(b-a)\Big{(}a^{2}-3(a+b)a_{q}+ab+3\Big{(}a_{q}+\frac{ 1}{t}\Big{)}^{2}+3a_{q}^{2}+b^{2}\Big{)}.\]
The above expression is minimum if \(a_{q}=\frac{at+bt-2}{4t}\). Now, putting \(a_{q}=\frac{at+bt-2}{4t}\), we have the above distortion error as
\[\frac{(b-a)\left(t^{2}\left(5a^{2}+2ab+5b^{2}\right)+12t(a+b)+12\right)}{24t^ {2}}.\]
Since the above expression is strictly decreasing in \(t\) and \(1\leq t\leq n\), the distortion error is minimum if \(t=n\). Thus, for \(t=n\), we see that \((a_{q},b_{q})\in S_{n}\), and
\[a_{q}=\frac{1}{2}(\frac{a+b}{2}-\frac{1}{n}),\text{ and }b_{q}=a_{q}+\frac{1}{n }=\frac{1}{2}(\frac{a+b}{2}-\frac{1}{n})+\frac{1}{n},\]
which implies
\[(a_{q},b_{q})=U_{n}^{-1}\Big{(}\Big{(}\frac{a+b}{2},0\Big{)}\Big{)},\]
which by (4) yields that
\[(a_{q},b_{q})=U_{n}^{-1}(E(\textbf{X}:\textbf{X}\in M((a_{q},b_{q})|\alpha_{n}))).\]
Since \((a_{q},b_{q})\in\alpha_{n}\) is chosen arbitrarily, the proof of the lemma is complete.
**Remark 3.2**.: By (6) and (7), and Lemma 3.1, we can conclude that all the elements in an optimal set of \(n\)-points must lie on \(S_{n}\) between the two elements \(U_{n}^{-1}(0,0)\) and \(U_{n}^{-1}(1,0)\), i.e., between the two elements \((-\frac{1}{2n},\frac{1}{2n})\) and \((\frac{n-1}{2n},\frac{n+1}{2n})\). If this fact were not true, then the constrained quantization error could be strictly reduced by moving the elements in the optimal set between the elements \((-\frac{1}{2n},\frac{1}{2n})\) and \((\frac{n-1}{2n},\frac{n+1}{2n})\) on \(S_{n}\); in other words, the \(x\)-coordinates of all the elements in an optimal set of \(n\)-points must lie between the two numbers \(-\frac{1}{2n}\) and \(\frac{n-1}{2n}\) (see Figure 1).
**Lemma 3.3**.: _Let \(\alpha_{n}\) be an optimal set of \(n\)-points for \(P\). Then, \(U_{n}(\alpha_{n})\) is an optimal set of \(n\)-means for \(P\)._
Proof.: By Lemma 3.1, \(\alpha_{n}\subseteq S_{n}\) for all \(n\in\mathbb{N}\). Let \(\alpha_{n}:=\{(a_{j},b_{j}):1\leq j\leq n\}\) be an optimal set of \(n\)-points for \(P\) such that \(a_{1}<a_{2}<\cdots<a_{n}\). Then, by Remark 3.2, we have \(-\frac{1}{2n}\leq a_{1}<a_{2}<\cdots<a_{n}\leq\frac{n-1}{2n}\). Moreover, as \((a_{j},b_{j})\in S_{n}\), we have \(b_{j}=a_{j}+\frac{1}{n}\) for all \(1\leq j\leq n\).
Notice that the boundary of the Voronoi region of the element \((a_{1},b_{1})\) intersects the support of \(P\) at the elements \((0,0)\) and \((a_{1}+a_{2}+\frac{1}{n},0)\), the boundaries of the Voronoi regions of \((a_{j},b_{j})\) for \(2\leq j\leq n-1\) intersect the support of \(P\) at the elements \((a_{j-1}+a_{j}+\frac{1}{n},0)\) and \((a_{j}+a_{j+1}+\frac{1}{n},0)\), and the boundary of the Voronoi region of \((a_{n},b_{n})\) intersects the support of \(P\) at the elements \((a_{n-1}+a_{n}+\frac{1}{n},0)\) and \((1,0)\). Thus, the distortion error due to the set \(\alpha_{n}\) is given by
\[V(P;\alpha_{n})=\int_{\mathbb{R}}\min_{a\in\alpha_{n}}\|(x,0)-a \|^{2}dP(x)\] \[=\int_{0}^{a_{1}+a_{2}+\frac{1}{n}}\rho((x,0),(a_{1},a_{1}+\frac{ 1}{n}))\,dx+\sum_{i=2}^{n-1}\int_{a_{i-1}+a_{i}+\frac{1}{n}}^{a_{i}+a_{i+1}+ \frac{1}{n}}\rho((x,0),(a_{i},a_{i}+\frac{1}{n}))\,dx\] \[\qquad+\int_{a_{n-1}+a_{n}+\frac{1}{n}}^{1}\rho((x,0),(a_{n},a_{n }+\frac{1}{n}))\,dx.\]
Since \(V(P;\alpha_{n})\) gives the optimal error and is differentiable with respect to \(a_{i}\) for all \(1\leq i\leq n\), we have \(\frac{\partial}{\partial a_{i}}V(P;\alpha_{n})=0\) implying
\[\frac{1}{n}+2a_{1}=a_{2}-a_{1}=a_{3}-a_{2}=\cdots=a_{n}-a_{n-1}=1-\frac{1}{n} -2a_{n}.\]
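To see this, note that for an interior index \(2\leq i\leq n-1\) the boundary terms vanish, since the integrands of the two adjacent cells agree at the shared Voronoi boundary, and hence

\[\frac{\partial}{\partial a_{i}}V(P;\alpha_{n})=\int_{a_{i-1}+a_{i}+\frac{1}{n}}^{a_{i}+a_{i+1}+\frac{1}{n}}\Big{(}-2(x-a_{i})+2\big{(}a_{i}+\tfrac{1}{n}\big{)}\Big{)}\,dx=\big{(}a_{i+1}-a_{i-1}\big{)}\big{(}2a_{i}-a_{i-1}-a_{i+1}\big{)}=0,\]

which gives \(a_{i+1}-a_{i}=a_{i}-a_{i-1}\); the analogous computations for the two end cells give \(\frac{1}{n}+2a_{1}=a_{2}-a_{1}\) and \(a_{n}-a_{n-1}=1-\frac{1}{n}-2a_{n}\).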
Then, we can assume that there is a constant \(d\) depending on \(n\), such that
\[\frac{1}{n}+2a_{1}=a_{2}-a_{1}=a_{3}-a_{2}=\cdots=a_{n}-a_{n-1}=1-\frac{1}{n} -2a_{n}=d \tag{8}\]
yielding
\[a_{2}=d+a_{1},\,a_{3}=2d+a_{1},\,a_{4}=3d+a_{1},\,\cdots,\,a_{n}=(n-1)d+a_{1}\]
i.e.,
\[a_{j}=(j-1)d+a_{1}\text{ for }2\leq j\leq n. \tag{9}\]
Again, by (8), we have
\[a_{1}=\frac{1}{2}(d-\frac{1}{n})\text{ and }a_{n}=\frac{1}{2}(1-\frac{1}{n}-d). \tag{10}\]
Putting the above values of \(a_{1}\) and \(a_{n}\) in the expression \(a_{n}=(n-1)d+a_{1}\), and then upon simplification, we have \(d=\frac{1}{2n}\). Putting this value of \(d\) into (9) and (10), we have
\[a_{j}=\frac{2j-3}{4n}\text{ for }1\leq j\leq n. \tag{11}\]
Then, we see that the boundary of the Voronoi region of the element \((a_{1},b_{1})\) intersects the support of \(P\) at the elements \((0,0)\) and \((a_{1}+a_{2}+\frac{1}{n},0)\), i.e., at the elements
\[(0,0)\text{ and }(\frac{1}{n},0),\]
the boundaries of the Voronoi regions of \((a_{j},b_{j})\) for \(2\leq j\leq n-1\) intersect the support of \(P\) at the elements \((a_{j-1}+a_{j}+\frac{1}{n},0)\) and \((a_{j}+a_{j+1}+\frac{1}{n},0)\), i.e., at the elements
\[\Big{(}\frac{j-1}{n},0\Big{)}\text{ and }\Big{(}\frac{j}{n},0\Big{)},\]
and the boundary of the Voronoi region of \((a_{n},b_{n})\) intersects the support of \(P\) at the elements \((a_{n-1}+a_{n}+\frac{1}{n},0)\) and \((1,0)\), i.e., at the elements
\[\Big{(}\frac{n-1}{n},0\Big{)}\text{ and }(1,0).\]
Thus, we deduce that for \(1\leq j\leq n\), the boundaries of the Voronoi regions of the elements \((a_{j},b_{j})\) in the optimal set \(\alpha_{n}\) of \(n\)-points intersect the support of \(P\) at the elements \((\frac{j-1}{n},0)\) and \((\frac{j}{n},0)\). Hence, by Lemma 3.1, we have
\[(a_{j},b_{j})=U_{n}^{-1}(E(\mathbf{X}:\mathbf{X}\in[\frac{j-1}{n},\frac{j}{n} ]\times\{0\}))=U_{n}^{-1}((\frac{2j-1}{2n},0))\text{ for }1\leq j\leq n.\]
We know that for the uniform distribution \(P\), the optimal set of \(n\)-means (see [RR]) is given by
\[\left\{\Big{(}\frac{2j-1}{2n},0\Big{)}:1\leq j\leq n\right\}.\]
Since
\[U_{n}(\alpha_{n})=\left\{U_{n}(a_{j},b_{j}):1\leq j\leq n\right\}=\left\{ \Big{(}\frac{2j-1}{2n},0\Big{)}:1\leq j\leq n\right\},\]
the proof of the lemma is complete.
Let us now give the following proposition.
**Proposition 3.4**.: _A set \(\alpha_{n}\subseteq S_{n}\) is an optimal set of \(n\)-points for \(P\) if and only if \(U_{n}(\alpha_{n})\) is an optimal set of \(n\)-means for \(P\)._
Proof.: Let \(\alpha_{n}\subseteq S_{n}\) be an optimal set of \(n\)-points for \(P\). Then, by Lemma 3.3, the set \(U_{n}(\alpha_{n})\) is an optimal set of \(n\)-means for \(P\). Next, assume that for a set \(\beta_{n}\subseteq S_{n}\), the set \(U_{n}(\beta_{n})\) is an optimal set of \(n\)-means for \(P\). We need to show that \(\beta_{n}\) is an optimal set of \(n\)-points for \(P\). For the sake of contradiction, assume that there exists a set \(\gamma_{n}\subseteq S_{n}\) such that \(\gamma_{n}\) is an optimal set of \(n\)-points for \(P\) and \(\gamma_{n}\neq\beta_{n}\). Then, by Lemma 3.3, the set \(U_{n}(\gamma_{n})\) is an optimal set of \(n\)-means for \(P\). Since the optimal set of \(n\)-means for \(P\) is unique, we must have \(U_{n}(\gamma_{n})=U_{n}(\beta_{n})\). Since \(U_{n}\) is an injective function, we have \(\gamma_{n}=\beta_{n}\), which is a contradiction. Thus, the proof of the proposition is complete.
Let us now give the following theorem, which is the main theorem of the paper.
Figure 1. Points in the optimal sets of \(n\)-points for \(1\leq n\leq 4\).
**Theorem 3.5**.: _An optimal set of \(n\)-points for the probability distribution \(P\) is given by_
\[\left\{\left(\frac{1}{2}\left(\frac{2j-1}{2n}-\frac{1}{n}\right),\frac{1}{2} \left(\frac{2j-1}{2n}-\frac{1}{n}\right)+\frac{1}{n}\right):1\leq j\leq n\right\},\]
_with \(n\)th constrained quantization error_
\[V_{n}=\frac{4n^{2}+12n+13}{24n^{2}}.\]
Proof.: Let \(\alpha_{n}:=\{(a_{j},b_{j}):1\leq j\leq n\}\) be an optimal set of \(n\)-points for \(P\) such that \(a_{1}<a_{2}<\cdots<a_{n}\). By Proposition 3.4, we know that \(U_{n}(\alpha_{n})\) is an optimal set of \(n\)-means for \(P\), i.e.,
\[U_{n}(\alpha_{n})=\left\{\left(\frac{2j-1}{2n},0\right):1\leq j\leq n\right\}.\]
Since \(U_{n}\) is an injective function, we have
\[\alpha_{n}=U_{n}^{-1}\Big{\{}\Big{(}\frac{2j-1}{2n},0\Big{)}:1\leq j\leq n \Big{\}}=\Big{\{}U_{n}^{-1}\Big{(}\frac{2j-1}{2n},0\Big{)}:1\leq j\leq n\Big{\}}\]
i.e.,
\[\alpha_{n}=\left\{\left(\frac{1}{2}\left(\frac{2j-1}{2n}-\frac{1}{n}\right), \frac{1}{2}\left(\frac{2j-1}{2n}-\frac{1}{n}\right)+\frac{1}{n}\right):1\leq j \leq n\right\}.\]
Writing
\[(a_{j},b_{j})=\left(\frac{1}{2}\left(\frac{2j-1}{2n}-\frac{1}{n}\right),\frac {1}{2}\left(\frac{2j-1}{2n}-\frac{1}{n}\right)+\frac{1}{n}\right)\!,\]
for \(1\leq j\leq n\), we have the \(n\)th constrained quantization error for \(n\)-points as
\[V_{n}=\int_{\mathbb{R}}\min_{a\in\alpha_{n}}\|(x,0)-a\|^{2}dP(x)\] \[=\int_{0}^{a_{1}+a_{2}+\frac{1}{n}}\rho((x,0),(a_{1},a_{1}+\frac{ 1}{n}))\,dx+\sum_{i=2}^{n-1}\int_{a_{i-1}+a_{i}+\frac{1}{n}}^{a_{i}+a_{i+1}+ \frac{1}{n}}\rho((x,0),(a_{i},a_{i}+\frac{1}{n}))\,dx\] \[\qquad+\int_{a_{n-1}+a_{n}+\frac{1}{n}}^{1}\rho((x,0),(a_{n},a_{n }+\frac{1}{n}))\,dx,\]
which upon simplification yields
\[V_{n}=\frac{4n^{2}+12n+13}{24n^{2}}.\]
Thus, the proof of the theorem is complete (see Figure 1).
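The closed-form expressions in the theorem can also be checked numerically. The following sketch (the function names, grid discretization, and optimizer choice are ours) approximates \(P\) on a fine grid and minimizes the distortion over \(n\) points constrained to \(S_{n}\):

```python
import numpy as np
from scipy.optimize import minimize

def distortion(xs, n, grid):
    """Average squared distance from (x,0), x ~ U[0,1], to the nearest
    candidate point (xs[k], xs[k] + 1/n) lying on the constraint S_n."""
    d2 = (grid[:, None] - xs[None, :])**2 + (xs[None, :] + 1.0 / n)**2
    return d2.min(axis=1).mean()

n = 4
grid = np.linspace(0.0, 1.0, 100001)            # grid approximation of P
x0 = np.linspace(0.0, 0.5, n)                   # arbitrary initial guess
res = minimize(lambda xs: distortion(xs, n, grid), x0, method="Nelder-Mead",
               options={"xatol": 1e-9, "fatol": 1e-12, "maxfev": 50000})

a_closed = (2 * np.arange(1, n + 1) - 3) / (4.0 * n)   # a_j = (2j-3)/(4n)
V_closed = (4 * n**2 + 12 * n + 13) / (24.0 * n**2)    # = 125/384 for n = 4
print(np.sort(res.x), a_closed)   # numerically optimal x-coordinates agree
print(res.fun, V_closed)          # both are approximately 0.32552
```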
## 4. Constrained quantization dimension and constrained quantization coefficient
In this section, we show that the constrained quantization dimension \(D(P)\) exists and equals two. We further show that the \(D(P)\)-dimensional constrained quantization coefficient for \(P\) exists as a finite positive number.
**Theorem 4.1**.: _The constrained quantization dimension \(D(P)\) of the probability measure \(P\) exists, and \(D(P)=2\)._
Proof.: By Theorem 3.5, the \(n\)th constrained quantization error is given by
\[V_{n}=\frac{4n^{2}+12n+13}{24n^{2}}.\]
Notice that \(V_{\infty}=\lim_{n\to\infty}V_{n}=\frac{1}{6}\), so that \(V_{n}-V_{\infty}=\frac{12n+13}{24n^{2}}\). Hence, the constrained quantization dimension is given by
\[D(P)=\lim_{n\to\infty}\frac{2\log n}{-\log(V_{n}-V_{\infty})}=\lim_{n\to \infty}\frac{2\log(n)}{-\log\left(\frac{4n^{2}+12n+13}{24n^{2}}-\frac{1}{6} \right)}=2,\]
which completes the proof of the theorem.
**Theorem 4.2**.: _The \(D(P)\)-dimensional constrained quantization coefficient for \(P\) exists, and equals \(\frac{1}{2}\)._
Proof.: We have
\[V_{n}=\frac{4n^{2}+12n+13}{24n^{2}}\text{ and }V_{\infty}=\lim_{n\to\infty}V_{n}= \frac{1}{6},\]
and hence, using (2), we have the \(D(P)\)-dimensional constrained quantization coefficient as
\[\lim_{n\to\infty}n(V_{n}-V_{\infty})=\lim_{n\to\infty}n\left(\frac{4n^{2}+12n+ 13}{24n^{2}}-\frac{1}{6}\right)=\frac{1}{2}.\]
Thus, the proof of the theorem is complete.
**Remark 4.3**.: For the absolutely continuous probability measure considered in this paper, we have obtained that the constrained quantization dimension is two, which is not equal to the Euclidean dimension of the underlying space where the support of the probability measure is defined. On the other hand, it is well-known that the unconstrained quantization dimension of an absolutely continuous probability measure always equals the Euclidean dimension of the space where the support of the probability measure is defined (see [BW]).
|
2309.06682 | A Novel Low-Cost, Recyclable, Easy-to-Build Robot Blimp For Transporting
Supplies in Hard-to-Reach Locations | Rural communities in remote areas often encounter significant challenges when
it comes to accessing emergency healthcare services and essential supplies due
to a lack of adequate transportation infrastructure. The situation is further
exacerbated by poorly maintained, damaged, or flooded roads, making it arduous
for rural residents to obtain the necessary aid in critical situations. Limited
budgets and technological constraints pose additional obstacles, hindering the
prompt response of local rescue teams during emergencies. The transportation of
crucial resources, such as medical supplies and food, plays a vital role in
saving lives in these situations. In light of these obstacles, our objective is
to improve accessibility and alleviate the suffering of vulnerable populations
by automating transportation tasks using low-cost robotic systems. We propose a
low-cost, easy-to-build blimp robot (UAVs), that can significantly enhance the
efficiency and effectiveness of local emergency responses. | Karen Li, Shuhang Hou, Matyas Negash, Jiawei Xu, Edward Jeffs, Diego S. D'Antonio, David Saldaña | 2023-09-13T02:41:22Z | http://arxiv.org/abs/2309.06682v1 | A Novel Low-Cost, Recyclable, Easy-to-Build Robot Blimp For Transporting Supplies in Hard-to-Reach Locations
###### Abstract
Rural communities in remote areas often encounter significant challenges when it comes to accessing emergency healthcare services and essential supplies due to a lack of adequate transportation infrastructure. The situation is further exacerbated by poorly maintained, damaged, or flooded roads, making it arduous for rural residents to obtain the necessary aid in critical situations. Limited budgets and technological constraints pose additional obstacles, hindering the prompt response of local rescue teams during emergencies. The transportation of crucial resources, such as medical supplies and food, plays a vital role in saving lives in these situations. In light of these obstacles, our objective is to improve accessibility and alleviate the suffering of vulnerable populations by automating transportation tasks using low-cost robotic systems. We propose a low-cost, easy-to-build robot blimp (a UAV) that can significantly enhance the efficiency and effectiveness of local emergency responses.
## I Introduction
In rural communities located in remote areas, the absence of transportation infrastructure presents considerable obstacles to the population's access to emergency healthcare, first aid kits, and vital supplies during times of crisis [1]. This issue is particularly pronounced in countries like the Philippines, which has a unique and challenging geographical layout [2]. The scattered nature of the Philippine islands and the rugged terrain make it exceptionally difficult to establish well-developed road networks, resulting in limited accessibility to emergency services and supplies. Adding to the complexity, the country faces a high frequency and intensity of natural disasters due to its geographical layout, with some estimations placing 60% of its land area and 74% of its population as exposed to numerous hazards [3]. In the aftermath of a disaster, debris, fallen trees, or floodwaters can block roads and hinder transportation. This restriction on road access significantly affects emergency response efforts, particularly the timely delivery of essential supplies. Consequently, addressing these transportation challenges in rural areas becomes crucial. It requires concerted efforts to implement strategies that enhance accessibility to emergency services and supplies.
With the advent of modern technology, there is an increasing recognition of the crucial role science and technology play in humanitarian development [4]. To address the issue of inaccessibility in rural communities, innovative tools and methodologies are needed to effectively tackle these problems. Unmanned Aerial Vehicles (UAVs) have emerged as valuable assets for improving accessibility due to their unique capabilities and versatility [5]. Leveraging the advantages of drone technology can help to overcome the limitations imposed by rugged terrains and the absence of well-developed transportation infrastructure.
The drone delivery space has experienced remarkable growth and innovation in recent years [6], spearheaded by industry leaders such as Zipline, Matternet, and Flytrex. These companies are at the forefront of revolutionizing
Fig. 1: Current prototype capable of lifting objects weighing 50 g and flying, tested in an outdoor environment with a wind speed of up to 2 m/s. The blimp was flying on the campus of the University of the Philippines.
the transportation of goods, particularly in remote or inaccessible areas. Among them, Zipline stands out as the most promising player, having successfully executed over 450,000 deliveries [7]. Zipline has established a proven market model by specializing in the delivery of crucial medical supplies, including blood, ointments, and medicine, in Rwanda. Their choice of operating fixed-wing drones allows them to achieve higher speeds, reaching up to 70 mph, and enables efficient long-range flights [8]. To launch their drones, Zipline utilizes a catapult launcher, ensuring swift takeoff. Their aircraft releases its payload through a hatch, relying on a parachute for the payload to descend to the ground. While this approach works well in areas with ample landing zones and extensive infrastructure such as launchers, it poses challenges when the delivery takes place on the side of a mountain or a small island. Quadcopters offer an alternative solution; they rely solely on motors for lift generation, as opposed to utilizing airflow through the wings. Consequently, quadcopters have significantly shorter flight times, typically less than 50% of what fixed-wing counterparts can achieve. While UAVs have great potential, their high initial cost and ongoing maintenance expenses may limit their widespread adoption, especially in rural communities facing budget constraints. In order to address this issue, cost-effective tools and methodologies are required to better handle these situations.
In contrast to fixed-wing aircraft and quadcopters, blimps utilize lighter-than-air gases within their body to generate the majority of their lifting force. This unique characteristic offers several advantages over their counterparts. By incorporating blimp technology into drone delivery systems, we can overcome the limitations faced by fixed-wing aircraft and quadcopters. However, there is very limited research and development in the field of drone delivery utilizing blimp technology to address challenges including fuel efficiency, noise levels, and flight time. To bridge this gap, we propose a solution that leverages blimp robots for drone applications in realistic outdoor environments. By combining the advantages of blimps with robot capabilities, we aim to create a feasible and sustainable approach to scalable drone delivery with the following advantages:
_Extended Flight Time._ Blimp robots offer the advantage of longer flight durations compared to traditional drones that rely on mechanical components like wings or rotors. Instead of solely relying on batteries with limited capacity, blimps utilize the buoyancy provided by lighter-than-air (LTA) gas to stay afloat for extended periods, which helps them accomplish tasks that require prolonged operation without interruption.
_Maximized Payload._ The design of blimps allows for the carriage of larger and heavier payloads, making them ideal for transporting essential resources such as medical supplies to disaster-affected areas, which contributes to their significance in facilitating deliveries in various applications.
_Ease of Takeoff and Landing._ Blimps have a distinct advantage when it comes to takeoff due to their lighter-than-air nature. Unlike fixed-wing aircraft, blimps require minimal runway or takeoff infrastructure. They have the ability to take off and land vertically, smoothly and steadily [9]. The ease of takeoff adds to the versatility and adaptability of blimps, making them a valuable asset in diverse operational environments.
_Collision Resilience._ The soft and flexible nature of the balloon envelope in blimps allows them to absorb impacts with obstacles or structures, making them well-suited for operating in cluttered or confined environments where the risk of collisions and potential damage is higher [10, 11]. Their collision resilience enhances safety and reliability, enabling them to continue operating and providing valuable support in scenarios such as disaster response missions.
We present in this paper a novel solution that utilizes drone technology to enhance accessibility in areas grappling with transportation barriers, such as the Philippines. We discuss how blimp robots can overcome the limitations of current technology and provide a more feasible and practical solution for emergency response in resource-constrained areas. By using affordable and readily available materials, we introduce a blimp robot model that can be constructed without significant financial burden, making it accessible to communities with limited resources. We also discuss the important challenges that need to be addressed to enable the seamless use and adoption of blimp robots, which include technical considerations and operational limitations.
## II Our Approach
The blimp was meticulously designed to prioritize simplicity and feasibility, and it has three primary components:
#### II-1 Flight Controller
The propulsion system of the blimp uses the low-cost ESP32-S3 microcontroller as the flight control board. Despite its compact size, weighing just \(1.7~{}g\) with dimensions of \(21\times 17.5~{}mm\), the ESP32-S3 provides sufficient processing power to control 2 servos and 2 motors. Each motor is mounted on a servo to assemble a bi-copter mechanism. The simplicity of our design, using minimal actuation support, reduces complexity and the risk of potential points of failure, while providing adequate support for takeoff and landing tasks.
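For illustration, a minimal MicroPython-style actuation sketch is shown below; the pin assignments, PWM frequencies, and servo pulse widths are our assumptions, not the prototype's actual firmware:

```python
from machine import Pin, PWM   # MicroPython on the ESP32-S3

# Hypothetical pin mapping for the two rotors and the two tilt servos.
rotor_l, rotor_r = PWM(Pin(4)), PWM(Pin(5))
servo_l, servo_r = PWM(Pin(6)), PWM(Pin(7))
for p in (rotor_l, rotor_r):
    p.freq(1000)               # brushed-motor PWM carrier
for p in (servo_l, servo_r):
    p.freq(50)                 # standard 50 Hz hobby-servo signal

def set_thrust(motor, frac):
    """Set rotor throttle as a fraction in [0, 1]."""
    motor.duty_u16(int(65535 * min(max(frac, 0.0), 1.0)))

def set_tilt(servo, deg):
    """Map a 0-180 degree tilt to a 0.5-2.5 ms pulse in a 20 ms period."""
    pulse_ms = 0.5 + 2.0 * deg / 180.0
    servo.duty_u16(int(65535 * pulse_ms / 20.0))
```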
#### II-2 Foldable Chassis
We introduce a foldable chassis design utilizing paper materials for the robot blimp's structure, offering an efficient and cost-effective construction solution. Drawing inspiration from principles found in origami and bridges, we have engineered the paper material to exhibit sufficient strength for securely holding all components in place, while also providing the capability to bear additional weight. Crucially, the internal structure can support forces up to \(5~{}kg\). To implement our approach, we employ a laser cutter or a scalpel to precisely cut a foam core board, which
serves as the basis for the chassis. Upon cutting, the chassis can be effortlessly folded, forming the desired framework for the blimp. (See Fig. 2).
#### II-3 Helium Balloon
The blimp's main body consists of a helium-filled balloon made from Mylar material. The balloon takes on the shape of an ellipsoid, providing stability and aerodynamic performance. The total volume of the ellipsoid-shaped balloon is \(0.125\ m^{3}\), and it is filled with industry-grade helium, guaranteeing a minimum helium concentration of \(99\%\). The blimp generates up to \(65\ g\) of buoyancy, providing sufficient lifting force to transport all necessary components as well as any additional payloads, if required.
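As a rough sanity check on the reported lift, assuming sea-level densities \(\rho_{air}\approx 1.225\ kg/m^{3}\) and \(\rho_{He}\approx 0.179\ kg/m^{3}\), the gross buoyant lift of the envelope is

\[V\,(\rho_{air}-\rho_{He})\approx 0.125\times(1.225-0.179)\approx 0.131\ kg,\]

so after subtracting the weight of the Mylar envelope itself, a net buoyancy on the order of \(65\ g\) is plausible.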
The blimp's design features practicality and reliability, incorporating several noteworthy aspects:
_Balloon Attachment._ To ensure a secure connection between the balloon and the foldable chassis, we employ two sewing elastic bands forming an 'X' configuration (See Fig. 1). These elastic bands offer flexibility and can accommodate the expansion and contraction of the balloon as needed. The tension provided by the elastic bands keeps the chassis securely in place, eliminating the need for tape or other adhesives. Moreover, this balloon attachment design facilitates the effortless replacement of balloons. The use of sewing elastic bands exemplifies a practical and adaptable method for firmly attaching the balloon to the structure, allowing for a reliable and adjustable connection.
_Adjustable Fastening._ To further enhance the flexibility and adaptability of the system, plastic clips are utilized to tighten the sewing elastic bands (See Fig. 3). These clips enable fine-tuning of the tightness, providing precise control over the attachment of the balloon to the chassis. This adjustability is particularly beneficial in situations where the helium balloon may lose air or deflect, as it allows for prompt tightening to maintain the structural integrity of the blimp robot. The use of plastic clips offers a convenient and effective means of adjusting the tension in the elastic bands. By easily sliding the clips along the bands and securing them in place, the desired level of tightness can be achieved, thus ensuring that the balloon remains securely fastened to the chassis. The incorporation of plastic clips in the attachment mechanism adds to the overall flexibility and adaptability of the system.
_Foldable Chassis Positioning._ Based on the design of the blimp's components, the vehicle naturally tends to stay horizontal. Rigidly placing the components underneath the top of the chassis body, beneath the intersection of the balloon's major axes, is a key design feature that emphasizes the blimp's low center of mass and enhances its structural integrity. The low center of mass plays a crucial role in ensuring the blimp's natural stability during flight, as it helps to counterbalance any external forces or disturbances that may affect the vehicle. By positioning the motors, servos, and associated components in a rigid configuration inside the chassis, such a design minimizes potential vibrations and stresses, promoting safe and controlled operation.
## III Use Cases
Drone delivery has numerous applications in places that lack connectivity, as it addresses various challenges and improves efficiency across many different sectors. Thus, the range of opportunities for utilizing drones is vast and diverse. Here are some possible use cases:
_Rural Connectivity._ In less developed countries like the Philippines, numerous remote and underserved rural areas suffer from limited access to essential facilities.
| Components | Qty | C/U (USD) |
| --- | --- | --- |
| XIAO ESP32 S3 | 1 | 5.00 |
| Sensor Board | 1 | 4.88 |
| Brushed motors | 2 | 2.40 |
| Servo motor | 2 | 0.80 |
| Motor driver | 1 | 1.49 |
| Paper frame | 1 | 2.00 |
| Balloon | 2 | 2.00 |
| Helium | 124 liter | 3.65 |
| Total | | 27.42 |

TABLE I: Total cost of a blimp robot.
Fig. 3: Plastic clips utilized to adjust tightness by sliding along the bands.
Fig. 2: Foldable chassis used to hold the components.
This lack of physical connectivity creates barriers for rural residents, hindering their ability to reach crucial services such as healthcare facilities, schools, and markets. Without well-maintained roads or reliable public transportation systems, individuals face significant challenges in accessing specialized medical care or attending educational institutions in nearby towns or cities. However, the introduction of drone technology can address these limitations by providing efficient and cost-effective transportation solutions, ultimately generating educational impact and promoting knowledge transfer. By implementing drone technology, not only can essential goods, medical supplies, and agricultural products be transported to remote rural areas in a swift and efficient manner, but the technology itself can also have transformative educational benefits. Drones equipped with user-friendly interfaces and simplified operation mechanisms can empower non-expert individuals to become familiar with cutting-edge technology. This exposure to drones not only facilitates the transport of goods but also serves as a catalyst for knowledge transfer and skill development. As more people become acquainted with drone maneuvering, the technology can be leveraged to address a wider range of challenges faced by rural communities. By gradually expanding the scope of drone applications, rural residents gain exposure to new opportunities and possibilities, fostering a culture of innovation and problem-solving.
_Emergency Response._ One useful application is the delivery of medical supplies during emergency response. In countries like the Philippines, where road access can be limited, delivering medical supplies for emergency response can be challenging and time-consuming. The geographical features of these countries make it difficult to establish and maintain efficient transportation networks, especially in areas with limited road infrastructure. However, drones can quickly transport essential medical supplies, such as medications, vaccines, test strips, or diagnostic samples, to inaccessible areas. They can bypass traffic congestion and rough terrain, ensuring the timely delivery of critical resources to emergency response teams. This can be particularly valuable during emergency situations where immediate medical attention is required. The establishment of a drone-based transportation network can enhance the coordination and connectivity between medical facilities. The integration of blimps into the existing infrastructure enables efficient distribution of resources during emergencies.
_Disaster Relief._ In the aftermath of a disaster, the disruption caused by debris, fallen trees, or floodwaters often obstructs roads and severely hampers transportation. This poses significant challenges for both affected individuals in need of assistance and response teams attempting to coordinate their efforts. To mitigate these difficulties and minimize losses during the crucial post-disaster response phase, the implementation of efficient management practices integrated with digital technologies becomes imperative. In this regard, the utilization of drone delivery approaches emerges as a game-changer in disaster relief efforts. Drones, with their remarkable capabilities, can play a pivotal role in swiftly and effectively transporting essential emergency supplies to impacted communities. These supplies encompass vital resources such as food, water, blankets, and hygiene kits. By leveraging drone delivery, these provisions can be rapidly transported, bypassing disrupted transportation infrastructure and reaching otherwise inaccessible areas in a timely manner.
## IV Model
### _Dynamics_
We define the world reference frame as a fixed frame, denoted by \(\{W\}\). The blimp has a body frame \(\{B\}\) whose origin is at the center of mass (COM). Its \(x\)-axis points toward the front of the blimp, and its \(z\)-axis points upward, as shown in Fig. 6. A pair of vectorized thrusters actuate the blimp to achieve rotational and translational motion. The vectorized thrusters are mounted at both ends of a support arm placed beneath the balloon in the fashion of
Fig. 4: Front view of the vehicle.
Fig. 5: Side view of the vehicle.
a bicopter [12], both keeping a distance of \(d\) from \(z_{B}\). The support arm is parallel to the \(y\)-axis of \(\{B\}\), with a distance of \(l_{b}\) below the blimp COM. Therefore, we denote their mounting positions in \(\{B\}\) as \(\mathbf{p}_{1}=[0,-d,l_{b}]^{\top}\) and \(\mathbf{p}_{2}=[0,d,l_{b}]^{\top}\), respectively. Each vectorized thruster consists of a micro servo and a rotor. The thrust forces of the rotors are \(f_{1}\) and \(f_{2}\), respectively, and the rotation angles of the servos in the direction of \(y_{B}\) are \(\theta_{1}\) and \(\theta_{2}\), respectively. At rest, i.e., \(\theta_{i}=0\), the force vector of the \(i\)-th thruster aligns with \(x_{B}\).
The translation and rotation from \(\{W\}\) to \(\{B\}\), \(\mathbf{r}\in\mathbb{R}^{3}\) and \(\mathbf{R}\), respectively, describe the position and orientation of the blimp. The rotation matrix, \(\mathbf{R}\in\text{SO}(3)\), is in the special orthogonal group of dimension \(3\), which means \(\det\mathbf{R}=1\) and \(\mathbf{R}^{-1}=\mathbf{R}^{\top}\). \(\mathbf{R}\) can be converted from Euler angles with
\[\mathbf{R}=\begin{bmatrix}c\psi c\theta-s\phi s\psi s\theta&-c\phi s\psi&c\psi s \theta+c\theta s\phi s\psi\\ c\theta s\psi+c\psi s\phi s\theta&c\phi c\psi&s\psi s\theta-c\psi c\theta s\phi \\ -c\phi s\theta&s\phi&c\phi c\theta\end{bmatrix} \tag{1}\]
where \(\phi,\theta\), and \(\psi\) represent the corresponding roll-pitch-yaw Euler angles in the direction of \(x_{W},y_{W}\), and \(z_{W}\), respectively, and \(c\theta\) and \(s\theta\) denote \(\cos\theta\) and \(\sin\theta\), respectively, similarly for \(\phi\) and \(\psi\).
We use the Newton-Euler equations to describe the dynamics of the blimp,
\[m\mathbf{\ddot{r}}= \mathbf{R}\mathbf{f}+\mathbf{f}_{e}, \tag{2}\] \[\mathbf{J}\mathbf{\dot{\omega}}+\mathbf{\omega}\times\mathbf{J}\mathbf{\omega}= \mathbf{\tau}+\mathbf{\tau}_{e}, \tag{3}\]
where the net force and torque vectors generated by the thrusters in \(\{B\}\) are
\[\mathbf{f}=\sum_{i=1}^{2}f_{i}\left[\cos\theta_{i},0,\sin\theta_{i} \right]^{\top}, \tag{4}\] \[\mathbf{\tau}=\sum_{i=1}^{2}f_{i}\left(\mathbf{p}_{i}\times\left[\cos \theta_{i},0,\sin\theta_{i}\right]^{\top}\right). \tag{5}\]
The external force from gravity and the balloon buoyancy is \(\mathbf{f}_{e}=\left[0,0,f_{b}-mg\right]^{\top}\), where \(g\) is the gravitational acceleration and \(m\) is the total mass of the blimp. The external torque from the buoyancy is \(\mathbf{\tau}_{e}=\left(\mathbf{R}\left[0,0,l\right]^{\top}\right)\times\left[0,0,f_{b}\right]^{\top}\), and \(\mathbf{J}\) is the matrix of inertia moment. (2) shows that the combined forces of gravity, buoyancy, and rotor thrust produce a linear acceleration of the blimp, causing it to translate. (3) shows that the combined torque of gravity, buoyancy, and rotor thrust produces an angular acceleration of the blimp, causing it to rotate.
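For simulation or controller tuning, (2)-(3) can be integrated directly. The sketch below (an explicit-Euler integrator of our choosing; parameter names follow the text) is one minimal transcription:

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix such that hat(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def euler_step(r, v, R, omega, f, tau, m, J, f_b, l, dt, g=9.81):
    """One explicit-Euler step of the Newton-Euler dynamics (2)-(3)."""
    f_e = np.array([0.0, 0.0, f_b - m * g])                 # gravity + buoyancy
    tau_e = np.cross(R @ np.array([0.0, 0.0, l]),           # buoyancy torque,
                     np.array([0.0, 0.0, f_b]))             # as given in the text
    a = (R @ f + f_e) / m                                   # eq. (2)
    w_dot = np.linalg.solve(J, tau + tau_e - np.cross(omega, J @ omega))  # eq. (3)
    R_next = R @ (np.eye(3) + hat(omega) * dt)              # first-order attitude update
    return r + v * dt, v + a * dt, R_next, omega + w_dot * dt
```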
### _Control_
#### IV-B1 Manual Control
We use a joystick to provide manual control input for the blimp. Under manual control, the motion is egocentric, which means that users only control the force and torque in \(\{B\}\), as shown in (4) and (5). Similarly to controlling a differential drive ground vehicle [13] using a joystick controller, the user determines the linear motion of the blimp in the \(xz\)-plane and the angular motion in yaw, marked by forces \(f_{x}\), \(f_{z}\) which are the first and third elements in (4) and torque \(\tau_{z}\) which is the third element in (5), respectively. Since the blimp has four actuators, namely, two rotors and two servos, we further allow the user to provide a desired \(\tau_{x}\) which is the first element in (5) for stable roll operation. Solving for the actuator inputs \(f_{1},f_{2},\theta_{1}\), and \(\theta_{2}\) with given \(f_{x},f_{z},\tau_{x}\), and \(\tau_{z}\), we obtain
\[f_{1}=\sqrt{f_{1x}^{2}+f_{1z}^{2}},\] \[f_{2}=\sqrt{f_{2x}^{2}+f_{2z}^{2}},\] \[\theta_{1}=\arctan\frac{f_{1z}}{f_{1x}},\] \[\theta_{2}=\arctan\frac{f_{2z}}{f_{2x}},\]
where \(f_{1x}=\frac{1}{2}(f_{x}-\frac{\tau_{z}}{d}),f_{2x}=\frac{1}{2}(f_{x}+\frac{\tau_{z}}{d}),f_{1z}=\frac{1}{2}(f_{z}+\frac{\tau_{x}}{d}),\) and \(f_{2z}=\frac{1}{2}(f_{z}-\frac{\tau_{x}}{d})\).
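In code, this allocation is a one-liner per channel (a sketch; the function and variable names are ours, and the two-argument arctangent is used so that \(f_{ix}=0\) is handled gracefully):

```python
import numpy as np

def allocate(fx, fz, tau_x, tau_z, d):
    """Map desired body-frame force/torque commands to the two rotor
    thrusts and the two servo tilt angles of the bicopter arm."""
    f1x, f2x = 0.5 * (fx - tau_z / d), 0.5 * (fx + tau_z / d)
    f1z, f2z = 0.5 * (fz + tau_x / d), 0.5 * (fz - tau_x / d)
    f1, f2 = np.hypot(f1x, f1z), np.hypot(f2x, f2z)
    th1, th2 = np.arctan2(f1z, f1x), np.arctan2(f2z, f2x)
    return f1, f2, th1, th2
```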
#### IV-B2 Autonomous Control
We control the blimp to go to a desired position \(\mathbf{r}^{d}\) by applying a Proportional-Integral-Derivative (PID) controller [14] to obtain the desired force in \(\{W\}\)
\[\mathbf{f}^{d}=\mathbf{K}_{p}\left(\mathbf{r}^{d}-\mathbf{r}\right)+\mathbf{K}_{d}\left(\mathbf{\dot{r} }^{d}-\dot{\mathbf{r}}\right)+\mathbf{K}_{i}\int\left(\mathbf{r}^{d}-\mathbf{r}\right)dt, \tag{6}\]
where \(\mathbf{K}_{p}\), \(\mathbf{K}_{d}\), and \(\mathbf{K}_{i}\) are the proportional, derivative, and integral gain matrices. To generate the force, the blimp needs to consider its orientation, and transform the desired
Fig. 6: The abstracted model of our blimp, where the marked point represents the center of mass of the blimp.
force into \(\{B\}\), \(\boldsymbol{f}_{b}=\boldsymbol{R}^{\top}\boldsymbol{f}^{d}\). Similarly to manual control, the blimp achieves the desired motion in its \(xz\)-plane by taking the first and third elements of \(\boldsymbol{f}_{b}\) as \(f_{x}\) and \(f_{z}\), then compensates the desired motion in the \(y_{B}\) direction by converting the second element of \(\boldsymbol{f}_{b}\), \(f_{y}\), into the desired torque in yaw, \(\tau_{z}=K_{\psi}\arccos\frac{f_{y}}{\|\boldsymbol{f}^{d}\|}\), where \(K_{\psi}\) is a gain coefficient for yaw.
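Continuing the sketch above, the position loop of (6) plus the yaw compensation can be written as follows; the gains and the clipping guard on the arccosine argument are our additions:

```python
integral = np.zeros(3)                        # integral-error state for eq. (6)

def pid_step(r, r_dot, r_des, rdot_des, R, dt, Kp, Kd, Ki, K_psi, d):
    """Compute actuator commands driving the blimp toward r_des."""
    global integral
    err = r_des - r
    integral = integral + err * dt
    f_w = Kp @ err + Kd @ (rdot_des - r_dot) + Ki @ integral   # eq. (6), in {W}
    f_b = R.T @ f_w                            # desired force in the body frame
    fx, fy, fz = f_b
    c = np.clip(fy / (np.linalg.norm(f_w) + 1e-9), -1.0, 1.0)  # guard arccos
    tau_z = K_psi * np.arccos(c)               # yaw compensation from f_y
    return allocate(fx, fz, 0.0, tau_z, d)     # tau_x = 0 keeps roll level
```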
## V Experiments
To evaluate the performance and limitations of our vehicle design and control system, we conducted a series of experiments using the prototype blimp robot, as described in Section II, with manual control via a joystick. These experiments aim to evaluate the blimp's ability to transport a small box weighing \(50\) grams from point A to point B, thereby validating the effectiveness of the blimp robot design and controller. The experiments involve subjecting the blimp robot to various conditions, including turbulent environments and potential collisions.
### _Flying in environments with low turbulence_
This evaluation involves transporting objects from one point to another, covering a distance of 60 meters in a realistic environment where turbulence is present (See Fig. 7). To simulate turbulent conditions, we introduce random turbulence generated by air conditioners placed in the high-bay area. The velocity of the turbulence reaches up to 0.4 meters per second, providing a realistic scenario to test the blimp robot's ability to navigate and maintain stability in turbulent environments. The total duration of the journey, including takeoff and landing, is 45 seconds. During the flight, the blimp robot reaches and maintains a height of approximately 4 meters, which introduces a slight delay during landing. The turbulence encountered during the flight affects the trajectory of the blimp, causing it to take detours. However, despite the challenging conditions, the blimp successfully overcomes the wind, maintains stability, and ultimately reaches the intended destination point.
### _Flying in a cluttered, narrow space_
In this experiment, our aim is to showcase the collision-tolerant capabilities of our blimp robot. We design the experiment to test the blimp's ability to passively withstand collisions by intentionally directing it towards different obstacles along its path. We conduct the experiment in a narrow hallway with obstacles of various shapes and sizes (See Fig. 8). During the experiment, the blimp robot not only tolerates hard crashes, but it also effectively protects its propellers from hitting the obstacles, preventing damage to the vehicle. Despite the collisions, the blimp robot exhibits the ability to recover quickly, maintain its stability, and continue to proceed. This experiment highlights the robustness and durability of our blimp robot design, demonstrating its potential to operate in cluttered environments where collisions with obstacles are likely to occur.
## VI Conclusions and Future Work
While blimp robots utilizing affordable components offer several advantages over traditional drones, such as extended flight time, improved payload capacity, and improved collision tolerance, and are a more cost-effective option for accomplishing similar tasks, there are still several challenges that must be addressed to enable their seamless use and widespread adoption in projects aimed at generating significant social impact, such as improving accessibility in rural communities. In light of that, the authors of this paper are actively working as part of a team to develop a cost-effective and reliable prototype capable of autonomous flight in outdoor environments.
Fig. 8: A blimp robot carrying an object weighing \(50\)\(g\) flying in a cluttered, narrow hallway with the presence of various obstacles including wall, trash cans, desks, and helmets hanging on the wall.
Fig. 7: A blimp robot carrying an object weighing \(50\)\(g\) flying in a high-bay area with air-conditioners on the side generating random turbulence, with velocities reaching up to \(0.4\)\(m/s\), simulating a realistic environment.
The current design of the blimp robot lacks autonomy or a control strategy beyond the remote controller. When incorporating additional sensors such as geo-localization sensors or cameras into the system, it becomes crucial to address the potential impact of the added weight on the blimp's overall performance. Adding extra weight will necessitate generating more lift force to maintain stable flight. Therefore, it will be essential to make proper adjustments and optimize the blimp's structure and propulsion system to accommodate the increased payload while ensuring effective maneuverability and stability.
Moving forward, we are strongly committed to integrating a perception component into our blimp robot. We are considering the utilization of potential camera modules, such as OpenMV H7 and Nicla Vision, both of which offer open-source capabilities and high versatility. The integration of perception capabilities involves the implementation of visual servoing methodologies, such as path following, which has been widely adopted as a prominent control strategy for robot blimps. With the integration of the path following algorithm, the blimp will be empowered to navigate autonomously and accurately follow a specific trajectory, enhancing its capacity to efficiently and precisely connect two points in a straight line. The inclusion of perception features not only elevates the blimp's autonomy but also expands its potential for sophisticated tasks, showcasing its adaptability and utility in real-world applications.
The current estimated vehicle cost is $27.42 per vehicle, with a target of keeping it below $150.00 after integrating the perception component. While cost minimization is a priority, we maintain a focus on vehicle reliability and usability. One significant challenge we are addressing is the impact of wind on the blimp's safe operation. Wind can affect stability and control, potentially leading to undesired flight behavior. The 9-axis inertial motion sensors, which accurately monitor the vehicle's orientation, are currently incorporated to enable the vehicle's effective response during turbulence and to maintain stable flight. Moreover, we are actively exploring and designing structural elements that enhance wind tolerance. These modifications aim to make the vehicle more resilient to gusts and crosswinds.
In our future endeavors, we aim to improve accessibility and provide timely assistance to communities in need. Our project envisions the establishment of a sustainable and scalable blimp robot that serves as a reliable means of transportation for critical supplies and medical assistance. To ensure the successful implementation, the team has actively sought continuous feedback from partners in the Philippines. During our initial visit to the Philippines in July 2023, we formed essential partner relationships to establish developmental frameworks for the vehicle. Key partnerships were forged with the Red Cross Philippines and the Quezon City Disaster and Risk Reduction Management Office. These collaborations have significantly contributed to narrowing the focus of our project, aligning it with the needs and priorities of the local communities.
Moving forward, our goal is to develop a vehicle that effectively addresses a well-defined problem. We plan to thoroughly test and implement this vehicle on a small scale, ensuring its functionality and adaptability to local conditions. Upon a successful small-scale launch, we aim to pursue widespread implementation of the blimp robot throughout the Philippines. Aligned with our vision, we are committed to generating educational impact and promoting knowledge transfer. We aim to make technology more affordable and accessible. By providing training and educational programs, we can empower local communities to effectively and sustainably utilize and maintain the blimp robot rescue network.
|
2305.19477 | An Insider Threat Mitigation Framework Using Attribute Based Access
Control | Insider Threat is a significant and potentially dangerous security issue in
corporate settings. It is difficult to mitigate because, unlike external
threats, insiders have knowledge of an organization's access policies, access
hierarchy, access protocols, and access scheduling. Several approaches to
reducing insider threat have been proposed in the literature. However, the
integration of access control and moving target defense (MTD) for deceiving
insiders has not been adequately discussed. In this paper, we combine MTD,
deception, and attribute-based access control to make it more difficult and
expensive for an insider to gain unauthorized access. We introduce the concept
of correlated attributes into ABAC and extend the ABAC model with MTD by
generating mutated policy using the correlated attributes for insider threat
mitigation. The evaluation results show that the proposed framework can
effectively identify correlated attributes and produce adequate mutated policy
without affecting the usability of the access control systems. | Olusesi Balogun, Daniel Takabi | 2023-05-31T01:12:13Z | http://arxiv.org/abs/2305.19477v1 | # An Insider Threat Mitigation Framework Using Attribute Based Access Control
###### Abstract.
Insider Threat is a significant and potentially dangerous security issue in corporate settings. It is difficult to mitigate because, unlike external threats, insiders have knowledge of an organization's access policies, access hierarchy, access protocols, and access scheduling. In addition, the complexity, time, and skill required to locate the threat source, model, and timestamp make it more difficult for organizations to combat. Several approaches to reducing insider threat have been proposed in the literature. However, the integration of access control and moving target defense (MTD) for deceiving insiders has not been adequately discussed. In this paper, we combine MTD, deception, and attribute-based access control to make it more difficult and expensive for an insider to gain unauthorized access. We introduce the concept of correlated attributes into ABAC and extend the ABAC model with MTD by generating mutated policy using the correlated attributes for insider threat mitigation. The evaluation results show that the proposed framework can effectively identify correlated attributes and produce adequate mutated policy without affecting the usability of the access control systems.
Cyber Deception, Moving Target Defense, Insider Threat, Attribute based Access Control
Footnote †: Corresponding Author: Department of Computer Science, Georgia State University, Atlanta, GA 30303 USA, Email: [email protected]
## 1. Introduction
Insider Threat is an existent and significant security challenge in corporate organizations. Recently, insider threat has increasingly grown to be a leading threat as more organizations adopt digital tools to manage their data, either locally or in the cloud, resulting in expensive security breaches (Krishnan et al., 2018). According to the Cybersecurity and Infrastructure Security Agency (CISA), "insider threat occurs when an insider uses authorized access, wittingly or unwittingly, to do harm to the Department's mission, resources, personnel, facilities, information, equipment, networks, or systems" (Krishnan et al., 2018). Organizations' employees misuse their assigned privileges to fulfill specific secondary purposes. These purposes include gaining access to either view, edit, delete unauthorized resources, or reveal the resources to the organization's competitors. Unlike external threats, which organizations often guard against mainly at the network level before they enter the perimeter, insider threat lingers for an extended period and mostly goes unnoticed before its effect surfaces. In addition, it is difficult to manage because, unlike external threats, the insiders have some information about an organization's access policies, access hierarchy, access protocols, and access scheduling. In addition, the complexity, time, and expertise required to identify the threat's source, technique, and timeline make it more challenging for organizations to combat. Therefore, insider threat remains a top security priority for private and public organizations. In this light, some works have numerically quantified the severity of the insider threat to organizations. For example, in 2015, a survey on federal cybersecurity of 200 Information Technology managers, conducted by the SolarWinds research team, revealed that one-third of the participants had concerns about insider threats (Krishnan et al., 2018). Likewise, in 2020, the Securonix Threat Research Team examined over 300 security incidents across organizations from 8 different sectors. According to their findings, privilege abuse, a key type of insider threat, accounts for 19% of all incidents and is the second most common type (Krishnan et al., 2018). More recently, the Cybersecurity Insiders Research Team conducted an insider threat study of organizations. Their result revealed that 98% of organizations are vulnerable to insider attacks, while 49% of such organizations only detect insider threats after the data have been copied out of the organization (Krishnan et al., 2018).
In the literature, the proposed approaches to mitigate insider threat can generally be classified into four lines of research which are: psychological-driven, behavioral-driven, content-driven, and deception-driven. The psychological-driven approach tracks insider threats using human signals such as brain signals (Krishnan et al., 2018) and eye movements (Krishnan et al., 2018). The behavioral-driven approach performs data analytics of human behavior using methods such as analysis of access logs (Krishnan et al., 2018) or convolution graph (Krishnan et al., 2018). The content-driven approach builds natural language and machine learning models on texts to detect insider threats (Krishnan et al., 2018; Balogun et al., 2018). Lastly, the deception-driven approach uses honey items such as honey attributes (Bartos et al., 2018), honey photos (Krishnan et al., 2018), honey files (Krishnan et al., 2018), honey permissions (Krishnan et al., 2018), honey words (Krishnan et al., 2018), honey documents (Krishnan et al., 2018), honey tokens (Krishnan et al., 2018; Krishnan et al., 2018), and honey encryption (Krishnan et al., 2018) to deceive insiders. Further, several studies have integrated insider threat mitigation approaches to access control systems, such as Attribute-based Access Control (ABAC), to monitor insiders to achieve better mitigation. ABAC is an access control system that authorizes or denies a user's access request to an object based on the user's attributes, object's attributes, environment's attributes, and policy rules. ABAC has emerged as a promising access control system and is thus commonly used within and across organizations because: (1) it can be extended seamlessly into other frameworks for better security enhancement; (2) its components can flexibly be adjusted or increased; (3) it is granular and can be managed easily; and (4) it can effectively handle concurrency of users. However, a possible security breach to ABAC can occur if an insider has information about the policy rules and changes them to gain unauthorized access. ABAC's entities can be changed randomly to mitigate this incident based on heuristics such as Moving Target Defense (MTD). MTD is a potential security concept in the literature that proactively
changes security systems' configurations to deter potential attacks. This work presents ongoing research integrating ABAC, deception, and MTD. This study aims to increase the effort and cost required for an insider to achieve unauthorized access. The key contributions of our paper are:
1. introduce the concept of correlated attributes in ABAC.
2. integrate deception into ABAC.
3. extend ABAC model with moving target defense for insider threat mitigation and detection.
The remainder of this paper is structured as follows: Section 2 provides the background information. Section 3 summarizes the state of the art and related work. Section 4 describes the threat model. In Section 5, we discuss the proposed framework. Section 6 describes the components of the proposed approach. In Section 7, we discuss the experiment setup and results, while in Section 8, we conclude and discuss future work.
## 2. Background
Access control systems (ACS) are critical components in the security architecture of organizations. Historically, traditional ACS, such as discretionary access control (DAC), mandatory access control (MAC), and role-based access control (RBAC), have successfully been used to protect organizations' resources. However, traditional ACS add more overhead and provide fewer security enhancements as the organization grows, leading to a need for advanced control models. ABAC is a high-level, policy-based access control framework that has drawn considerable attention in industry and academia. This work focuses on the National Institute of Standards and Technology (NIST) ABAC framework (Kumar et al., 2018) for consistency.
NIST describes ABAC as an access control model that authorizes subject's requests to carry out operations on objects based on the subject's attributes, object's attributes, environment attributes, and specified actions. The subjects can be either human or non-human (applications), such as automated software or legitimate bots. The attributes of subjects describe their properties and may include name, department, employee ID, branch, etc. In contrast, objects are mostly non-human, and their attributes may consist of creation date, type, department, and file size. Environment attributes describe the system's current state and may include geolocation, timestamp, weather condition, and threat level.
Intuitively, the NIST ABAC model shown in figure 1 has authorization policy rules which comprise subject attributes, object attributes, environment attributes, set of action(s), and the permission status, such as "grant" or "deny". The policy repository stores the policy rules, while the attribute repository stores the attributes. In addition, the model has four functional points connected in the pattern shown in figure 1. The functional points are: (1) Policy Administration Point (PAP); (2) Policy Information Point (PIP); (3) Policy Decision Point (PDP); and (4) Policy Enforcement Point (PEP). The PAP updates and manages the policy rules. The PIP manages access to the policy repository and attribute repository. The PDP analyses the attributes and policies to make logical access decisions for each access request. Lastly, the PEP enforces access decisions and ensures the subject does not go beyond the granted access decision.
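As a toy illustration of how the PDP evaluates an access request against attribute-based policy rules (the rule format and names below are our own simplification, not the NIST specification):

```python
def pdp_decide(subject, obj, env, action, policy):
    """Toy Policy Decision Point: return the decision of the first rule
    whose subject/object/environment attributes all match; default-deny."""
    for rule in policy:
        matches = (all(subject.get(k) == v for k, v in rule["subject"].items())
                   and all(obj.get(k) == v for k, v in rule["object"].items())
                   and all(env.get(k) == v for k, v in rule["environment"].items()))
        if matches and action in rule["actions"]:
            return rule["decision"]
    return "deny"

policy = [{"subject": {"dept": "HR"}, "object": {"type": "payroll"},
           "environment": {"threat_level": "low"}, "actions": {"open", "edit"},
           "decision": "grant"}]
print(pdp_decide({"dept": "HR"}, {"type": "payroll"},
                 {"threat_level": "low"}, "open", policy))     # -> grant
```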
## 3. State of the art and related work
### Moving Target Defense (MTD)
The US Department of Homeland Security describes MTD as "the concept of controlling change across multiple system dimensions in order to increase uncertainty and apparent complexity for attackers, reduce their window of opportunity and increase the costs of their probing and attack efforts" (Gebel et al., 2018). MTD involves proactively changing the configurations of a protected system to provide security guarantees. Ge et al. (Ge et al., 2018) assert that the configuration elements may change through shuffling, diversification, or distributed methods. In addition, the attack surface, detection surface, or both surfaces can be "moved" to incur more cost to the attackers. Oftentimes, security loopholes go unnoticed in critical infrastructures, which creates avenues for vulnerabilities that attackers can exploit. Although traditional approaches such as firewalls, intrusion detection systems, and vulnerability patching can be used to correct such security anomalies. However, because these approaches are reactive, attackers may gather knowledge about the system through reconnaissance and then launch attacks on the system before necessary corrections are made. MTD helps to resolve these vulnerabilities by limiting exposure to attack. MTD achieves this by: (1) negating any advantages that an attacker may have, in contrast to traditional approaches that assume security configurations remain immutable; (2) increasing the number of system components attackers need to compromise for successful attacks; and (3) increasing the attack's time complexity. While achieving this goal, MTD ensures that the usability of the access system is not affected by minimizing the response time and changes to end-user access patterns that the adoption of MTD may introduce.
### Insider Threat
Insider threats have been studied for years and numerous strategies to reduce them have been put forth, including the use of deception, separation of duties and least privilege, behavioral analysis, and psychological analysis. For instance, Greitzer et al. (Greitzer et al., 2018) investigated the connection between employees' behavioral states and insider
Figure 1. The NIST ABAC Framework
risks. These include disgruntlement, anger management issues, performance, stress, and aggressive behavior. Likewise, Hashem et al. (Hashem et al., 2018) proposed detection of potential insiders using electrical signals generated by human biological activities, such as electroencephalography (EEG), electrocardiogram (ECG), and electromyography (EMG). Furthermore, Jiang et al. (Jiang et al., 2019) proposed a model that combines deep learning and Graph Convolutional Networks (GCN) for detecting insider threats. Theoharidou et al. (Tie et al., 2019) present an analysis of insider behaviour based on social science and criminology theories. Brown et al. (Brown et al., 2020) proposed an insider threat detection framework that investigates the relationship between the use of words and a set of risk factors that are either psychological or behavioral in nature. More recently, Paxton et al. (Paxton et al., 2020) proposed a framework based on natural language processing that uses written and recorded incident notes to model insider attacks.
### Deception Systems
Deception-based insider threat mitigation has garnered a lot of focus among cybersecurity professionals. Several works, both in industry and academia, have focused on using honey or decoys in various forms as the entity in systems to achieve the deception goal. Bercovitch et al. (Bercovitch et al., 2019) suggested a "HoneyGen" system that mines rules to capture real data's features and generates fake data based on the rules. Similarly, Bowen et al. (Bowen et al., 2020) introduced a Decoy Document Distributor (D3) model, which employs a rule-based approach to automatically generate and distribute decoy documents across a file system in order to entice malicious users. Bhagat et al. (Bhagat et al., 2020) proposed using honeypots to solve a network's intrusion detection problem. The authors investigated attacker interactions with honeypots and discovered that TCP 1 could be easily compromised among other network protocols. Srinivasa et al. (Srinivas et al., 2020) proposed using honey token fingerprinting as a decoy in a network system. Yuill et al. (Yuill et al., 2020) proposed honeyfiles, an intrusion detection element that resides in a network server to bait hackers. Kaghazgaran et al. (Kaghazgaran et al., 2020) introduced honey permissions and honey objects as extensions to the role-based access control (RBAC) model for detecting insider threats. Similarly, Juels and Rivest considered various attack scenarios involving password theft. These include easily guessable passwords through brute-force, password compromise, and duplication of passwords in systems. To mitigate such attacks, the authors proposed using honeywords to prevent attackers from guessing real passwords from a list of hashed passwords. Finally, Jaffarian et al. (Jaffarian et al., 2020) proposed using a Multi-dimensional Host Identity Anonymization approach to defeat skilled attackers. The study suggests that reconnaissance attacks can be mitigated by modifying host addresses and fingerprints, anonymizing host fingerprints, and deploying honeypots and context-aware content.
Footnote 1: Transmission Control Protocol
## 4. Problem Statement and Threat Model
ABAC collects and analyzes attributes belonging to objects, subjects, and environment entities involved in a request to make access decisions. These attributes are assigned by dedicated attribute sources, which may be internal or external to an organization. We assume that these attribute sources are primarily outside of organizations' control, and therefore, there is the possibility of errors or vulnerabilities in the attributes. In this case, the incorrect assignment of attributes to access entities may adversely impact the reliability of the access control system. Based on this issue, we consider a threat model in which an insider can compromise the attributes in a given policy, for example, by creating an unauthorized attribute-access entity assignment or intentionally manipulating the attribute:value set or policy rules. An insider may compromise a given attribute through: (1) an unintended software error; (2) attribute forgery; or (3) creating and assigning new attributes to entities. In addition, actors in access control systems, such as subjects and objects, may exhibit new attributes. Here, we extend our previous work by "moving" system configurations to mitigate the effect of attribute vulnerabilities and accommodate attributes' dynamic nature. In our previous work (Bercovitch et al., 2019), we presented a defensive strategy for attribute-based access control that included a deception mechanism. We integrated a sensitivity estimator, which assesses object sensitivities and identifies sensitive objects that should be considered as potential deception targets; a honey attribute generator, which creates honey versions of sensitive objects for deception purposes; and a monitoring unit, which analyzes the PDP's access decision and monitors the use of honey attributes to detect the existence of insiders. To consolidate and extend the deception framework, we introduce an MTD-based concept in this work, in which the entities of the ABAC engine are changed to increase the cost of compromising the ABAC elements. As a result, our approach aims to innovatively mitigate the insider threat and achieve dynamic access control.
## 5. The Proposed Framework
The primary objective of this work is to mitigate insider threats by introducing deception components and MTD into the standard ABAC system. Unlike previous work, which integrates MTD into ABAC, we consider new attributes of access entities and changes in the values of existing attributes. We employ the frequent pattern growth (FPGrowth) association algorithm, a more efficient association analysis algorithm than Apriori. We extend the standard ABAC model with MTD-based components to dynamically mutate the policy rules using attributes that correlate with the original subject and object attributes.
### Integrating Honey Elements into ABAC
In our previous work (Bercovitch et al., 2019), we introduced three additional modules to ABAC: Sensitivity Assessment (SA), Honey Attribute Generator (HAG), and Monitoring Unit (MU) in addition to the standard ABAC components:
**Sensitivity Assessment:** The SA module estimates the sensitivities of every object's attribute in the system. The sensitivity of an object's attribute depends on the type of action that users can perform on the object and the number of authorized users that can perform those actions. Based on this, we estimated the sensitivity of open, edit, and delete actions using a probability measure based on Shannon information content (Shan et al., 2020). Our assessment shows that writable objects have higher sensitivity than read-only objects because they carry stronger confidentiality and integrity requirements.
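The estimator itself is not reproduced here, so the following is a minimal sketch of one plausible Shannon-information-based realization; the function names and the max-aggregation over actions are illustrative assumptions rather than the exact formulation used in our earlier work.

```python
import math

# Hypothetical sketch: treat the information content -log2(p) of the event
# "a randomly chosen user is authorized for this action" as the action's
# sensitivity, so rarely granted permissions (e.g., delete) score higher
# than widely granted ones (e.g., open).
def action_sensitivity(n_authorized: int, n_users: int) -> float:
    p = n_authorized / n_users  # probability a random user may perform the action
    return -math.log2(p) if p > 0 else float("inf")

def object_sensitivity(authorized_counts: dict, n_users: int) -> float:
    # Aggregate over the open/edit/delete actions; taking the max keeps
    # writable objects scoring above read-only ones, matching the text.
    return max(action_sensitivity(c, n_users) for c in authorized_counts.values())

# Example: 100 users; many can open, few can delete -> high sensitivity.
print(object_sensitivity({"open": 80, "edit": 10, "delete": 2}, 100))
```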
**Honey Attribute Generation:** The Honey Attribute Generator module employs the Genetic Algorithm (GA) concept to generate honey attributes. GA is a population-based algorithm that generates a new population from an initial one by initializing population elements and then successively performing crossover and mutation operations to produce the final population. Based on this, we generated non-distinguishable versions of sensitive attributes. We: (1) seeded the algorithm with initial versions of the real attributes; (2) evaluated the fitness score of each individual based on the semantic similarity between real and honey attributes using GLOVE embedding 2; and (3) generated a new population as the honey attributes.
Footnote 2: GloVe embedding uses an unsupervised learning algorithm to generate word vector representations (Han et al., 2017).
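To illustrate the three GA steps above, the sketch below evolves honey values for a single attribute; `semantic_similarity` is a hypothetical stand-in for the GloVe-based fitness score, and the population size, operators, and names are illustrative.

```python
import random

def semantic_similarity(a: str, b: str) -> float:
    # Placeholder fitness: character-set overlap. A real system would compare
    # GloVe word vectors of the attribute values instead.
    return len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)

def mutate(value: str) -> str:
    chars = list(value)
    chars[random.randrange(len(chars))] = random.choice("abcdefghijklmnopqrstuvwxyz0123456789")
    return "".join(chars)

def crossover(a: str, b: str) -> str:
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def generate_honey_values(real_value: str, pop_size: int = 20, generations: int = 10):
    # (1) seed the population with perturbed copies of the real attribute value
    population = [mutate(real_value) for _ in range(pop_size)]
    for _ in range(generations):
        # (2) rank by fitness: candidates should be similar enough to be believable
        population.sort(key=lambda v: semantic_similarity(v, real_value), reverse=True)
        parents = population[: pop_size // 2]
        # (3) produce the next population via crossover and mutation
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return [v for v in population if v != real_value]

print(generate_honey_values("department=finance")[:5])
```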
**Monitoring Unit:** When a user requests access to an object, the PDP evaluates the request, and grants access if the user is authorized. On the other hand, the PDP denies access if the user does not have authorization while the user is presented with honey attributes as bait. Then, the MU monitors every attempt of the unauthorized user to ensure that the honey attributes are accessed.
### Integrating Moving Target Defense (MTD) into ABAC
ABAC policy has several entities that can be used to expand the existing policy set dynamically. Intuitively, entities in an access request exhibit new attributes different from those in the original policy. Hence, in our approach, we use MTD to increase the difficulty and cost of successful insider attacks. Conceptually, the insider attack will be unsuccessful if the insiders do not know the mutated policies. To achieve this, we: (1) identify attributes as the changing components of ABAC; (2) identify correlations among the attribute:value pairs by evaluating the access requests to find new attributes that correlate with the attributes in the original authorization policy set; and (3) gracefully mutate the policy rules with the correlated attributes. With this, the mutated policy set will prevent unauthorized access if the original policy becomes compromised.
## 6. The Extended ABAC With MTD and Deception
In this section we describe our proposed framework. We will: 1) informally discuss our approach by defining the standard ABAC components and the extensions introduced in this work; 2) discuss the FPGrowth algorithm; 3) discuss the attribute correlation module; and 4) discuss the policy mutation engine, an extension to the standard ABAC that uses correlated attributes to mutate the policy rules.
### Informal Description
We added the following entities and functions to achieve an MTD-enhanced deception framework.
**Deception Components**
1. **Sensitive Attributes (\(SA\)):** a subgroup of object attributes that have high secrecy or integrity requirements, making them attractive targets for insider attacks. The set of sensitive attributes \(SA\) is defined as \(SA=\{sa:sa\in OA\wedge Sensitivity(sa)\geq K\}\), where \(K\) is a predetermined threshold and \(Sensitivity\) is a function used to calculate an attribute's sensitivity score. Each attribute \(sa\in SA\) is a tuple: \(sa=(name:value)\)
2. **Honey Attributes (\(HA\)):** a subgroup of attributes produced for every sensitive object as honey entities. Each attribute \(ha\) is a member of the set \(HA\) (\(ha\in HA\)) such that \(ha\) is a tuple: \(ha=(name:value)\)
**MTD Components**
1. **Support Threshold** (\(SP_{\theta}\)): the minimum value of support for association. We assume the heuristics for setting this value vary across organizations.
2. **Confidence Threshold** (\(CT_{\theta}\)): the minimum value of confidence for association. We also assume the heuristics for setting this value vary across organizations.
3. **Correlated Attributes** (\(CA\)): set of attributes generated through the correlation module. The set of correlated attributes \(CA\) is defined as \(CA=\{ca:Correlation(ca)\geq S\}\), where \(Correlation\) is a function that generates the attributes correlated to the original attributes and \(S\) is a predefined support threshold. Each attribute \(ca\in CA\) is a tuple: \(ca=(name:value)\)
4. **Mutated Policies** (\(MP\)): set of mutated rules. Each mutated policy \(mp\) is a member of the set \(MP\) (\(mp\in MP\)).
**Functions**
To support our approach, we added the following extensions to the NIST ABAC model:
1. **Correlation:** Returns a list of attributes correlated to the original attributes of a particular object, as shown in the following: \[Correlation:OA\rightarrow\mathbb{CA}\] (1) Attributes that correlate to the original attributes have support and confidence values that are greater than the threshold values.
2. **MutatedPolicy:** Generates a set of mutated policies using the correlated attributes. The function updates the original policy to the mutated policy set, as shown in the following: \[MutatedPolicy:OP\rightarrow\mathbb{MP}\] (2)
The proposed extended ABAC framework is illustrated in figure 2. It has the main functional points of ABAC: PDP, PEP, PIP, and PAP. In addition, it has the sensitivity estimator and honey attribute generator as deception entities. Further, we introduced the attribute correlation and policy mutation modules as MTD-based extensions. These extensions work together to: (1) identify correlated attributes; (2) mutate the original policy using the correlated attributes; and (3) protect the system against insider attacks using the combination of honey data and mutated policies.
### Frequent Pattern Growth
Frequent Pattern Growth (FPGrowth) is a technique for performing association analysis among elements in huge datasets. Association analysis is an approach for finding correlations or statistical relationships among data elements. Suppose a set of users have the same attributes of age, name, address, etc., but with unique IDs. Also, suppose these users frequently request access to objects which
have the same values of attributes such as resource owner, date created, department, etc., but with unique IDs. Then, we can derive the frequent patterns for the subject and object attributes in the data. Given that \(X\) and \(Y\) are sets of related subject attributes and object attributes, the notation \(X\implies Y\) denotes a statistical correlation between the two frequent sets. A natural question that follows is "How do we measure this association?". Support and confidence are two metrics in the literature for measuring the strength of the association between \(X\) and \(Y\). Consider an object dataset \(D\) with \(N\) records in total. Suppose \(X=\{X_{1},X_{2},\ldots,X_{m}\}\) and \(Y=\{Y_{1},Y_{2},\ldots,Y_{n}\}\) are two non-empty object itemsets in \(D\), such that \(X\neq\emptyset\), \(Y\neq\emptyset\), and \(X\cap Y=\emptyset\). Let \(P(X\cup Y)\) be the probability that a record of \(D\) contains both itemsets \(X\) and \(Y\). Then, the support of the association between \(X\) and \(Y\), given in equation 3, is the percentage of records in \(D\) that contain both itemsets \(X\) and \(Y\).
\[Support(X\to Y)=P(X\cup Y)=\frac{Freq(X,Y)}{N} \tag{3}\]
Now, let \(P(Y|X)\) be the conditional probability that a record of \(D\) containing \(X\) also contains \(Y\). Then, the confidence of the association between \(X\) and \(Y\), given in equation 4, is the conditional probability between itemsets \(X\) and \(Y\); that is, the ratio of the probability of \(X\) and \(Y\) occurring together to the probability of \(X\) occurring alone. An association rule is said to be strong if its support and confidence values are greater than the respective threshold values.
\[Confidence(X\to Y)=P(Y|X)=\frac{Support(X\cup Y)}{Support(X)} \tag{4}\]
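For concreteness, the two metrics can be computed directly from their definitions; the minimal sketch below assumes each record in \(D\) is represented as a set of attribute:value strings.

```python
def support(D, X, Y):
    # Equation 3: fraction of records containing every item of both X and Y.
    return sum(1 for record in D if X <= record and Y <= record) / len(D)

def confidence(D, X, Y):
    # Equation 4: among records containing X, the fraction also containing Y.
    support_x = sum(1 for record in D if X <= record) / len(D)
    return support(D, X, Y) / support_x if support_x else 0.0

D = [{"dept:finance", "role:clerk", "shift:day"},
     {"dept:finance", "role:clerk", "shift:night"},
     {"dept:hr", "role:manager", "shift:day"}]
X, Y = {"dept:finance"}, {"role:clerk"}
print(support(D, X, Y), confidence(D, X, Y))  # 0.666..., 1.0
```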
### Attribute Correlation Module
We used the FPGrowth algorithm described in subsection 6.2 to generate correlated attributes for the subjects and objects. We note that two common association algorithms, Apriori and FPGrowth, can be used for this module. However, we chose the FPGrowth algorithm because it is more efficient than the Apriori algorithm, which has high time complexity for two reasons. First, Apriori generates candidate itemsets, which grow as the dataset's size increases. Second, for each itemset generated, Apriori requires multiple scans of the dataset to verify the support. Therefore, Apriori will be inefficient when memory is low and the number of transactions is high. These limitations lead to increased time complexity.
On the other hand, FPGrowth uses a tree structure and a depth-first search pattern to register and mine all the frequent itemsets in the data. A frequent itemset is a set of items that usually appear together in a dataset. For example, if two or more objects have similar attribute-value pairs, then the set of attributes is a frequent itemset and can thus be regarded as having the same frequent structured pattern. Essentially, the FPGrowth algorithm scans the dataset only twice without generating a high number of candidate sets, thus reducing the search costs compared to the Apriori algorithm. Therefore, we used the FPGrowth algorithm to generate correlated attributes in our approach. We assume that environment attributes do not undergo frequent changes. Hence, we generated correlated attributes for the subject and object datasets only, as described in the following steps: (1) We scanned the attribute records to identify all frequent items, called 1-itemsets, and their frequencies (or support counts). Then, we sorted the items by support value in descending order and eliminated items with a support frequency less than the threshold; (2) We constructed the FP-tree by scanning the dataset a second time. In this step, we created
Figure 2. The Attribute-based Insider Threat Mitigation Framework.
the tree root. Then, we recursively generated a branch for each unique subject and object record in the order of the sorted list; (3) We mined the frequent pattern growth by identifying the lowest node, known as the suffix or frequent length-1 pattern. Then, in the FP-tree, we built its sub-data, a set of prefix paths also known as a conditional pattern base, that co-occur with the suffix pattern. Next, we constructed the conditional FP-tree by counting the itemsets in the sub-data with support greater than the threshold; and (4) We created the FPGrowth by combining the previous step's conditional FP-tree with the suffix pattern. The frequent pattern items in a smaller conditional FP-tree constitute the correlated attributes.
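In practice, this mining can be run with an off-the-shelf implementation. The sketch below assumes the mlxtend library (whose exact function signatures may vary across versions); the records and thresholds are illustrative.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth, association_rules

records = [["dept:finance", "role:clerk", "shift:day"],
           ["dept:finance", "role:clerk", "shift:night"],
           ["dept:finance", "role:clerk", "shift:day"],
           ["dept:hr", "role:manager", "shift:day"]]

# One-hot encode the attribute records before mining.
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(records).transform(records), columns=te.columns_)

# Steps (1)-(4) above are performed internally: fpgrowth needs only two
# passes over the data to build and mine the FP-tree.
frequent = fpgrowth(onehot, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```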
### Policy Mutation Module
We used the correlated attributes to generate a mutated policy set. In algorithm 1, we defined a \(getMutatedPolicy\) function for this purpose, which involves four steps: (1) In lines 2 and 3, we defined the minimum threshold values for support and confidence. Then, we identified the attributes in each access request; this step is shown in line 6; (2) For each attribute in the access request, we evaluated the correlated attributes and identified the ones whose support and confidence values are greater than the threshold values; this step is shown in lines 7 - 11; (3) In lines 14 - 16, we collated the attributes in the original policy; and (4) We chose the correlated attributes to generate a mutated policy; this step is shown in lines 17 - 21.
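Because the listing of algorithm 1 is not reproduced here, the following is a hedged Python reconstruction of its four steps; the `rules` map (attribute to `(correlated_attribute, support, confidence)` triples), thresholds, and names are illustrative assumptions.

```python
SUPPORT_THRESHOLD = 0.1      # SP_theta
CONFIDENCE_THRESHOLD = 0.8   # CT_theta

def get_mutated_policy(access_request, original_policy, rules):
    # Step 1: attributes appearing in the access request.
    request_attrs = set(access_request)

    # Step 2: keep only correlated attributes whose association exceeds
    # both the support and confidence thresholds.
    correlated = {ca for attr in request_attrs
                  for ca, sup, conf in rules.get(attr, [])
                  if sup >= SUPPORT_THRESHOLD and conf >= CONFIDENCE_THRESHOLD}

    # Step 3: collate the attributes of the original policy.
    policy_attrs = set(original_policy)

    # Step 4: mutate the policy by adding the correlated attributes that do
    # not already appear in the original rules.
    return policy_attrs | {ca for ca in correlated if ca not in policy_attrs}

rules = {"dept:finance": [("building:hq", 0.4, 0.9)]}
print(get_mutated_policy(["dept:finance"], ["dept:finance", "role:clerk"], rules))
```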
## 7. Experiment and Evaluation
### Dataset
The effectiveness of an MTD approach depends on the number of components that change and the time of change. Since ABAC involves two actors, the subject and the object, we evaluated our approach on two separate datasets, one for the subjects and another for the objects, while assuming that environment conditions are constant. We also assume that the evaluation will achieve a good result if we obtain both the subject and object datasets from the same sources. We obtained the subject and object attribute data from the California Basic Educational Data System (CBES), which was published by the Department of Education in 2018. The CBES collects data about students and staff across various schools and districts. We built the subjects' data in this study using the tabular staff demographics and staff assignment datasets. The staff demographics data has 364,759 distinct records and 16 attributes, whereas the staff assignment data contains 1,269,836 distinct records with 13 attributes. We integrated the two datasets using overlapping attributes to generate a single dataset with 1,269,836 distinct records and 23 attributes. Similarly, we obtained the object dataset from course enrollment data, which has 3,228,250 distinct object records and 23 attributes. In addition, we performed proportional sampling of 40,000 records from the generated datasets to lower the execution time. We could also evaluate the approach with synthetic datasets. However, the approach may not produce satisfactory results with synthetic datasets because the attributes may be skewed, resulting in low correlations.
### Implementation And Results
**Feature Selection:** We pruned the features with very low correlation using the Dispersion Ratio measure, which is the ratio of the arithmetic mean (AM) to the geometric mean (GM) of a feature. We removed a feature from the dataset if its ratio was lower than a specified threshold value.
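A minimal sketch of this filter, assuming strictly positive feature values and an illustrative threshold:

```python
import numpy as np

def dispersion_ratio(X: np.ndarray) -> np.ndarray:
    X = X + 1e-9                         # guard against log(0)
    am = X.mean(axis=0)                  # arithmetic mean per feature
    gm = np.exp(np.log(X).mean(axis=0))  # geometric mean per feature
    return am / gm                       # >= 1 by the AM-GM inequality; ~1 means low dispersion

X = np.abs(np.random.randn(1000, 23)) + 0.1  # toy data: 23 attributes
keep = dispersion_ratio(X) >= 1.05           # illustrative threshold
print(X.shape, "->", X[:, keep].shape)
```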
**Time Overhead:** Table 1 shows the overhead added by the correlation module for both the Apriori and FPGrowth algorithms.
| S/N | Support Threshold | FPGrowth Time (s) | Apriori Time (s) |
| --- | --- | --- | --- |
| 1 | 0.025 | 20.4622 | 154.4166 |
| 2 | 0.050 | 10.0144 | 132.7004 |
| 3 | 0.075 | 7.0509 | 127.8882 |
| 4 | 0.100 | 5.43514 | 127.2858 |
| 5 | 0.125 | 4.8038 | 127.2233 |
| 6 | 0.150 | 4.13404 | 126.9406 |
| 7 | 0.175 | 3.8763 | 126.5939 |
| 8 | 0.200 | 3.7909 | 125.4295 |
| 9 | 0.225 | 3.5830 | 125.1921 |
| 10 | 0.250 | 3.4153 | 124.7493 |

Table 1. Overhead of Attribute Correlation Module
The table reveals that the time complexity decreases as the support threshold increases. This is because fewer attributes are analysed as the support threshold increases. The table also reveals that FPGrowth has less overhead than the Apriori algorithm. Therefore, we selected the FPGrowth algorithm to generate the correlated attributes.
**Frequent Attributes Itemsets:** Figure 3(a) shows the frequent attribute itemsets generated by the FPGrowth algorithm for the object dataset. Similarly, Figure 3(b) shows the frequent attribute itemsets generated by the FPGrowth algorithm for the subject dataset. The figures reveal that the number of frequent attribute itemsets decreases as the support threshold increases. This is because fewer attribute sets remain frequent as the support threshold increases. Similarly, as the confidence threshold increases, the number of frequent attribute itemsets decreases.
## 8. Conclusion and Future Work
In this paper, we proposed a framework for addressing insider threat challenges by incorporating moving target defense techniques and deception into the ABAC system. To accomplish this, we devised an algorithm that collated the original subject and object attributes in access requests and, using the FPGrowth association algorithm, generated correlated attributes for each subject and object attribute. Then, we mutated the original policy set with the correlated attributes to make it more difficult and costly for an insider to gain unauthorized access. The evaluation results revealed that the proposed approach effectively identified the correlated attributes and adequately generated a mutated policy set without affecting the access control system's usability. In the future, we plan to explore additional methods, such as deep learning-based approaches, to generate the correlated attributes. We will also conduct a comprehensive user study to further evaluate the effectiveness and usability of the proposed approach.
###### Acknowledgements.
This work was partially supported by the National Science Foundation under Grant No. 2006329.
|
2309.14286 | Coherent Spectral Feature Extraction Using Symmetric Autoencoders | Hyperspectral data acquired through remote sensing are invaluable for
environmental and resource studies. While rich in spectral information, various
complexities such as environmental conditions, material properties, and sensor
characteristics can cause significant variability even among pixels belonging
to the same material class. This variability poses nuisance for accurate
land-cover classification and analysis. Focusing on the spectral domain, we
propose an autoencoder architecture called the symmetric autoencoder (SymAE),
which leverages permutation invariant representation and stochastic
regularization in tandem to disentangle class-invariant 'coherent' features
from variability-causing 'nuisance' features on a pixel-by-pixel basis. This
disentanglement is achieved through a purely data-driven process, without the
need for hand-crafted modeling, noise distribution priors, or reference 'clean
signals'. Additionally, SymAE can generate virtual spectra through
manipulations in latent space. Using AVIRIS instrument data, we demonstrate
these virtual spectra, offering insights on the disentanglement. Extensive
experiments across five benchmark hyperspectral datasets show that coherent
features extracted by SymAE can be used to achieve state-of-the-art pixel-based
classification. Furthermore, we leverage these coherent features to enhance the
performance of some leading spectral-spatial HSI classification methods. Our
approach especially shows improvement in scenarios where training and test sets
are disjoint, a common challenge in real-world applications where existing
methods often struggle to maintain relatively high performance. | Archisman Bhattacharjee, Pawan Bharadwaj | 2023-09-25T16:51:26Z | http://arxiv.org/abs/2309.14286v7 | # Virtual Hyperspectral Images Using Symmetric Autoencoders
###### Abstract
Spectral data acquired through remote sensing are invaluable for environmental and resource studies. However, these datasets are often marred by nuisance phenomena such as atmospheric interference and other complexities, which pose significant challenges for accurate analysis. We show that an autoencoder architecture, called the symmetric autoencoder (SymAE), which leverages symmetry under reordering of the pixels, can learn to disentangle the influence of these nuisances from surface reflectance features on a pixel-by-pixel basis. This disentanglement provides an alternative to atmospheric correction, without relying on radiative transfer modeling, through a purely data-driven process. More importantly, SymAE can generate virtual hyperspectral images by manipulating the nuisance effects of each pixel. We demonstrate using AVIRIS instrument data that these virtual images are valuable for subsequent image analysis tasks. We also show SymAE's ability to extract intra-class invariant features, which is very useful in clustering and classification tasks, delivering state-of-the-art classification performance for a purely spectral method.
autoencoders, atmospheric corrections, hyperspectral imaging, nuisances, redatuming, virtual images, hyperspectral image classification
## I Introduction
Complications due to nuisance or uninteresting parameters arise in many inverse problems. In remote sensing of the Earth's surface, nuisance effects could be associated with atmospheric effects, sensor noise, sun-angle variations, topographic effects, spatial intra-class variations, spectral mixing, and instrumental and data artifacts [1, 2]. The presence of such nuisances can make the inference of useful surface reflectance features highly uncertain.
Therefore, dealing with these nuisance effects is seen as a critical pre-processing step. In this note, we demonstrate the utility of an autoencoder [3, 4] architecture, called the symmetric autoencoder [5, SymAE], to _uniformize_ the nuisance effects in hyperspectral images. To illustrate this, envisage a collection of pixels belonging to a specific class. Among these, certain pixels might be obscured by cloud cover, while others enjoy the clarity of an unobstructed sky. Through SymAE's process of _redatuming_, nuisances can be transposed from one pixel to another, effectively generating virtual pixel spectra. These synthesized pixels can be systematically manipulated to simulate either cloudless or cloudy conditions, aligning the pixels in terms of their nuisance effects as shown in Fig. 1.
Our approach holds significant potential for applications such as mineral characterization, land cover identification, and hyperspectral signature analysis. It becomes especially valu
Fig. 1: Example from the Kennedy Space Center dataset. (a) Original spectra in the dataset belonging to Oak Hammock vegetation class. These pixel spectra showcase dissimilarities possibly arising from nuisance effects like atmospheric or ground-based variations. (b) The raw spectra undergo the redatuming process to generate virtual spectra, as depicted here. Through redatuming, the spectra share nuisance effects, leading to uniformity among spectra while preserving vegetation-class-specific reflectance features. In the subsequent examples presented in this paper, we employ band indices to annotate the x-axis of the spectra, consistent with standard remote sensing datasets where the wavelength of radiation is well understood.
able when dealing with classes exhibiting subtle differences that are muddled by the variance introduced by nuisance effects. Ultimately, our experiments demonstrate that employing this architecture enables us to achieve hyperspectral image (HSI) classification performance comparable to that of other state-of-the-art networks, outperforming them among purely spectral methods.
SymAE consists of an encoder that provides a compressed latent representation of the input hyperspectral image on a pixel-by-pixel basis and a decoder that reconstructs the input in a near-lossless manner. The latent representation of this autoencoder is valuable because each at-sensor pixel spectrum is decomposed into two components (dimensions): one correlated to nuisance parameters and another to the surface reflectance features. Specifically, a disentangled representation can be used to manipulate the nuisance information of the pixels, e.g., the swapping of the atmospheric effects of a given pixel with another. As will be discussed later, swapping nuisance effects is equivalent to decoding a hybrid latent code prepared by mixing components of the latent code between the respective pixels. Consequently, the atmospheric effects can be uniformized across the pixels, producing a new virtual hyperspectral image. It should be crucially emphasized that SymAE executes the aforementioned decomposition in a model-free manner, excluding the need for atmospheric radiative transfer models that typically demand prior information about nuisance parameters -- examples being aerosols, gases, clouds, and water vapor distribution. Virtual hyperspectral images can be of several uses:
* as redatuming can be seen as an alternative to atmospheric correction, virtual images can be useful in environments where physical models of the atmosphere have high uncertainties;
* better image processing tasks such as classification, segmentation, etc. can be performed on virtual images with uniformized nuisance effects;
* virtual hyperspectral images enable scenario analysis, i.e., simulating the spectra under different atmospheric or nuisance conditions enabling more accurate surface characterization.
Traditional autoencoding ideas alone will not guarantee that the surface reflectance features and nuisance features of an input spectrum are encoded as separate components in the latent space. To accomplish data-driven representation learning, we harness the property of permutation invariance found in the surface reflectance of pixels with common features. In other words, when examining a group or cluster of pixels within extensive remote-sensing datasets that possess identical surface-reflectance information, this information remains unaffected by the arrangement or ordering of pixels within the group. Specifically, we capitalize on grouping the observed spectra a priori to structure the latent space. The identification of these groups is application-specific, and some ideas are discussed in Section II. Finally, the latent representation of the SymAE is guided by specific constraints linked to these grouped spectra, encompassing:
* Firstly, the surface-reflectance characteristics within a particular group exhibit symmetry concerning the arrangement of its spectra. In essence, the surface reflection information is assumed to be coherently shared across the spectra within the group.
* Secondly, the nuisances across the spectra within a given group are distinct, i.e., each spectrum experiences a distinct unwanted alteration, influenced by corresponding nuisance phenomena. For example, we assume that the pixels within a group contain effects due to diverse atmospheric configurations.
These constraints collectively steer SymAE's learning process, facilitating the creation of a latent representation that captures the essential features while accounting for the complexities introduced by nuisances. SymAE training is a highly generalized form of blind deconvolution that uses the merging of different latent variables in a neural network to replace the notion of convolution. The architectural design choices of SymAE, which are made to ensure that the constraints mentioned above are satisfied, are detailed in [5].
Before we delve further, we wish to underscore the nuanced nature of the term _surface-reflectance_, which we will consistently employ throughout the remainder of this article. Although we assert our intention to disentangle these features from the spectra, we are, in fact, more accurately extracting _coherent features_ from within spectral groups. To illustrate this, consider a collection of spectra obtained through multi-scan hyperspectral imaging of a specific location at various instances, each affected by diverse atmospheric conditions. In this context, the repeated measurement of reflectance from identical surface constituents ensures that surface information remains coherent across pixels. It is worth noting here that while the term 'coherent features' might seemingly overlap with surface-reflectance features, certain atmospheric attributes could exhibit coherency across multiscan measurements. Additionally, surface reflectance might undergo seasonal changes, e.g., due to variations in surface moisture content, etc. However, due to the inherent challenge of physically labeling the coherent information and the fluidity of coherency based on the application context, we will frequently use the term'surface reflectance' when discussing concepts in this paper. It is crucial to recognize that within the framework of the Symmetric Autoencoder (SymAE), the classification of information into coherent information and nuisance information depends on the strategies used for grouping. This paper highlights the advantages of disentangling the coherent information through spectral grouping for classification and surface characterization tasks, in contrast to working with the raw spectra.
Image processing for eliminating the atmospheric effects from hyperspectral images is tedious in settings lacking prior information about the atmosphere. We provide an alternative to traditional atmospheric corrections, avoiding the need for complex radiative transfer modeling and instead using a purely data-driven approach. SymAE does not rely on atmospheric priors, as opposed to radiative transfer approaches [6, 7, 8] that can simulate the absorption and scattering effects of atmospheric gases and aerosols to correct the hyperspectral images. Several scene-based empirical approaches have been developed for atmospheric correction [9, 10]. These
approaches do not rely on radiative transfer modeling of the atmosphere. However, the applicability of these approaches is limited due to unrealistic requirements, e.g., the flat-field correction [11] approach requires the availability of an area in the scene that has flat reflectance spectra; the empirical-line approach [12] requires field-measured spectra to derive atmospheric corrections. Our approach belongs to this class of scene-based approaches since SymAE is trained to automatically learn the nuisance characteristics while processing large volumes of the observed data. However, more importantly, we believe that our requirement of grouping the observed spectra is less restrictive and therefore more practical.
Deep learning algorithms are popular for remote-sensing image analysis tasks such as image fusion, registration, scene classification, semantic segmentation, and pixel-based classification [13]. Like SymAE, some contemporary architectures [14, 15] offer a fundamentally different way of correcting the atmospheric effects in images. The authors of [15] use synthetic spectra from radiative transfer modeling to train an atmospheric-correction network in a supervised setting -- our data-driven approach does not involve radiative transfer modeling. Furthermore, the permutation invariance property used in the architecture of SymAE enables it to extract features common to specific classes which make classification tasks easier. Permutation invariance has been previously used in the context of remote sensing in [16] for multiscan data to obtain super-resolution of images. While generative models like Generative Adversarial Networks (GANs) have found application in remote sensing [17, 18], our approach offers a distinctive interpretational advantage. It enables the differentiation of nuisance features from class-specific surface reflectance features within latent codes. The interpretability of these features depends on the prior grouping step, which holds the potential of enhancing our understanding of the data, setting our approach apart from conventional generative models.
## II Spectral Grouping
Our approach necessitates a priori grouping of spectra to effectively disentangle nuisance effects. All the spectra in a given group are presumed to contain identical coherent information but dissimilar nuisance effects. The number of such identifiable groups is application- and scene-specific. If we intend to disentangle atmospheric and lighting variations from surface reflectance, an ideal scenario for achieving this objective is one in which multi-scan hyperspectral data is available. In this setup, each pixel undergoes multiple scans under diverse atmospheric conditions and varying elevation angles. Here, each group of spectra pertains to the same pixel but exhibits varying atmospheric influences. In this context, SymAE proves invaluable for disentangling atmospheric nuisances and pixel-specific reflectance in an entirely unsupervised manner.
Acknowledging the challenges in acquiring multi-scan data for real-world applications, we concentrate instead on single hyperspectral scenes. The grouping task is straightforward if distinct spatial features (e.g., water bodies, crops, asphalt, etc.) are identifiable in the image. In addition, pixels that are classified with high certainty during a preliminary analysis can also assist the grouping task. Specifically, our approach in this paper involves grouping pixels using a priori information derived from two sources: 1) ground truth labeling and 2) spatial proximity. Leveraging ground truth information entails forming groups based on assigned pixel classes, such as specific vegetation or land types, in hyperspectral images. Note that the variations in textures, spectral mixing, and other atmospheric factors exist even among pixels of the same class, contributing to what we refer to as nuisance effects. During training, we utilized approximately \(10\%\) of the ground truth information. We showcase its performance on the remaining
Fig. 2: The architecture of a symmetric autoencoder disentangles surface reflectance information from the nuisance (for example, atmospheric scattering) effects in its latent space. The surface-reflectance information is assumed to be coherent across the pixels in a group. Therefore, it can only propagate through the network via solid arrows — notice that the dropout masks prevent its propagation. Colored arrows indicate the propagation of the remaining nuisance effects — notice that a symmetric function, i.e., symmetric w.r.t. the ordering of pixels, prevents its propagation.
test pixels of the scene in Section IV. In cases where ground truth labels are limited, we adopt an alternative approach by working with groups of spatially proximate pixels (groups of 9 pixels, present in \(3\times 3\) patches). Here, SymAE is trained to extract spatially-coherent features. Although we acknowledge that this spatial grouping method is less efficient than using ground truth, we analyze its advantages in Section V.
A hyperspectral image is inherently three-dimensional, with the first two dimensions representing the spatial domain, and the third dimension corresponding to the spectral domain. To facilitate our analysis, we group pixels into distinct sets, bundling together all pixels belonging to the same group. These grouped pixels are utilized for training our autoencoder after constructing a set of datapoints denoted as \(\{X_{i}\}_{i=1,\dots,n_{X}}\). Each datapoint, \(X_{i}\), comprises a selection of pixels randomly drawn, with replacement, from the same group. The datapoints are assigned to all the available groups uniformly. In our notation, \([A;B]\) signifies the vertical concatenation of two vectors, \(A\) and \(B\). To access individual pixels within a datapoint, we index them as \(X_{i}[\tau]\), where \(\tau\) ranges from 1 to \(n_{\tau}\). Consequently, \(X_{i}\) is constructed as \([X_{i}[1];\dots;X_{i}[n_{\tau}]]\). Each pixel spectrum, represented by \(X_{i}[\tau]\), is a vector of length equal to the number of frequency bands. These constructed data points serve as the basis for training SymAE, with further details provided in the subsequent section.
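A minimal NumPy sketch of this construction, with illustrative shapes (e.g., 176 bands, matching the AVIRIS data used later):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_datapoints(groups, n_X, n_tau):
    # groups: list of arrays, one per spectral group, each of shape
    # (num_pixels_in_group, num_bands).
    datapoints = []
    for i in range(n_X):
        g = groups[i % len(groups)]               # assign groups uniformly
        idx = rng.integers(0, g.shape[0], n_tau)  # draw n_tau pixels with replacement
        datapoints.append(g[idx])                 # X_i = [X_i[1]; ...; X_i[n_tau]]
    return np.stack(datapoints)                   # (n_X, n_tau, num_bands)

groups = [rng.random((50, 176)) for _ in range(13)]  # e.g., 13 KSC classes
X = make_datapoints(groups, n_X=1024, n_tau=8)
print(X.shape)  # (1024, 8, 176)
```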
## III Symmetric Autoencoder
We constructed the datapoints such that the surface reflectance is _coherent_ across the pixels of each datapoint. The goal of the symmetric autoencoder [5] is to disentangle this coherent reflectance information from the remaining nuisance variations, e.g., atmospheric effects, in its latent space. Autoencoders [19] comprise two components: an encoder Enc that maps each datapoint \(X_{i}\) into a latent code \(H_{i}=\texttt{Enc}(X_{i})\), and a decoder Dec that attempts to reconstruct \(X_{i}\) from the code. We determine both the functions Enc and Dec by minimizing the reconstruction loss
\[\texttt{Enc},\;\texttt{Dec}=\operatorname*{arg\,min}_{\texttt{Enc},\; \texttt{Dec}}\sum_{i}\|X_{i}-\texttt{Dec}(\texttt{Enc}(X_{i}))\|^{2} \tag{1}\]
over the training datapoints. SymAE relies on a unique encoder architecture, as depicted in Fig. 2, to structure the latent space. This architecture can be mathematically described by
\[\texttt{Enc}(X_{i})=[\texttt{REnc}(X_{i});\,\texttt{NEnc}(X_{i}[1]);\,\dots;\,\texttt{NEnc}(X_{i}[n_{\tau}])]. \tag{2}\]
As a result, the latent code \(H_{i}=[R_{i};\,N_{i}[1];\,\dots;\,N_{i}[n_{\tau}]]\) is partitioned into the following interpretable components of each datapoint \(X_{i}=[X_{i}[1];\dots;X_{i}[n_{\tau}]]\):
1. the component \(R_{i}=\texttt{REnc}(X_{i})\) contains the surface-reflectance information as it is coherent across the pixels of \(X_{i}\)
2. the remaining components \(N_{i}[\tau]=\texttt{NEnc}(X_{i}[\tau])\) complement \(R_{i}\) with pixel-specific nuisance information.
Finally, SymAE's decoder Fuse non-linearly combines code \(R_{i}\) with each pixel-specific code \(N_{i}[\cdot]\) to reconstruct the original datapoint pixel-by-pixel
\[\hat{X}_{i}=\texttt{Dec}(H_{i})=\texttt{Dec}([R_{i};\,N_{i}[1];\,\dots;\,N_{i}[n_{\tau}]])=[\texttt{Fuse}([R_{i};\,N_{i}[1]]);\,\dots;\,\texttt{Fuse}([R_{i};\,N_{i}[n_{\tau}]])].\]
Here, no constraints are enforced on functions NEnc and Fuse as they are parametrized using fully-connected layers. On the other hand, to ensure REnc, the _reflectance encoder_, only encodes the coherent reflectance, we constrain it to be invariant under permutations of the pixels in \(X_{i}\). In other words, for all permutations \(\Pi\) along the pixel dimension, we desire that
\[R_{i}=\texttt{REnc}(X_{i})=\texttt{REnc}(X_{i}[\Pi(1{:}n_{\tau})]) \tag{3}\]
purely represents the coherent information since \(R_{i}\) does not depend on the labeling of the pixels in \(X_{i}\). Moreover, it is important to note that the nuisance effects, which are dissimilar across the pixels, cannot be encoded using REnc without significant loss of information.
SymAE's reflectance encoder explicitly achieves the invariance mentioned above using permutation-invariant network architectures following [20], which provide universal approximation guarantees for symmetric functions. These architectures use pooling functions such as the mean or the max across the instances to ensure permutation invariance. In our experiments, the spectrum of each pixel is simply transformed using REnc\({}_{1}\), an unconstrained function parametrized using fully-connected layers, and the mean is taken along the pixel dimension
\[R_{i}=\left(\frac{1}{n_{\tau}}\sum_{\tau=1}^{n_{\tau}}\texttt{REnc}_{1}(X_{i}[ \tau])\right). \tag{4}\]
We emphasize that the key observation in this equation is that the mean of the _transformed instances_ REnc\({}_{1}(X_{i}[\tau])\) is symmetric with respect to the ordering of pixels. This ensures that the desired symmetry (eq. 3) is achieved. SymAE's nuisance encoder NEnc is unconstrained. This aspect is a significant concern as the decoder Fuse might tend to ignore the \(R_{i}\) component in favor of using purely \(N_{i}[\cdot]\) information for reconstruction.
As the purpose of NEnc is exclusively to encode pixel-specific nuisance information while disregarding surface reflectance, SymAE incorporates _dropout masks_ during training via Bernoulli dropout [21] with a probability of \(p=0.5\):
\[N_{i}[\tau]=\texttt{Dropout}(\texttt{NEnc}(X_{i}[\tau])). \tag{5}\]
The dropout introduces random obfuscation to elements of \(N_{i}\), making the decoder Fuse perceive the codes as dissimilar and hindering the reconstruction of coherent surface-reflectance information from \(N_{i}\). While there is a continuous stream of information from REnc, the outputs of NEnc intentionally introduce noise, with certain features being randomly obfuscated. This compels Fuse to extract as much meaningful information as possible from \(R_{i}\), which inherently contains coherent data. Over time, Fuse becomes adept at capturing all coherent information from REnc, with the remaining pixel features learned from \(N_{i}\) encoded by NEnc. At test-time, the entirety of the \(N_{i}\) code is sent unaltered into the decoder. Finally, the functions NEnc, REnc\({}_{1}\) and Fuse are trained
concurrently by minimizing Eq. 1 with the dropout mechanism just described. The success of SymAE requires a sufficiently large number of pixels with _dissimilar_ nuisance variations in each group to achieve the desired structure of the latent space.
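To summarize the architecture in code, the following NumPy sketch implements equations 4 and 5 for a single datapoint. It uses random untrained weights and single-hidden-layer MLPs as simplifications; it is not the released Julia/Flux implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_tau, d_code = 176, 8, 64

def mlp(d_in, d_out):
    # A toy two-layer perceptron with fixed random weights.
    W1, b1 = rng.normal(size=(d_in, 128)) * 0.05, np.zeros(128)
    W2, b2 = rng.normal(size=(128, d_out)) * 0.05, np.zeros(d_out)
    return lambda x: np.maximum(x @ W1 + b1, 0) @ W2 + b2

renc1, nenc = mlp(n_bands, d_code), mlp(n_bands, d_code)
fuse = mlp(2 * d_code, n_bands)

def encode(X_i, training=True):
    R_i = renc1(X_i).mean(axis=0)  # eq. 4: mean pooling => permutation invariance
    N_i = nenc(X_i)                # pixel-specific nuisance codes
    if training:                   # eq. 5: Bernoulli dropout masks (p = 0.5)
        N_i = N_i * rng.binomial(1, 0.5, N_i.shape)
    return R_i, N_i

X_i = rng.random((n_tau, n_bands))
R_i, N_i = encode(X_i)
X_hat = fuse(np.concatenate([np.tile(R_i, (n_tau, 1)), N_i], axis=1))
print(X_hat.shape)  # (8, 176): pixel-by-pixel reconstruction
```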
### _Virtual Hyperspectral Images_
Using a trained SymAE network, we can generate a virtual hyperspectral image by redatuming each of the measured pixel spectra. The redatuming is performed pixel-by-pixel by swapping the nuisance effects in a given spectrum with a reference pixel. Redatuming data is equivalent to manipulations in the latent space. Precisely, to redatum the \(k\)th pixel spectrum \(D[k]\), we first extract its reflectance code using \(\mathtt{REnc}_{1}(D[k])\). We then fuse this code with the nuisance code of the reference spectrum (indexed using \(k_{0}\)) to obtain a virtual spectrum
\[\hat{D}_{k_{0}}[k]=\mathtt{Fuse}([\mathtt{REnc}_{1}(D[k]);\,\mathtt{NEnc}(D[k_ {0}])]), \tag{6}\]
which is not originally measured. Therefore, SymAE allows for _scanning_ the area corresponding to pixel \(k\) with nuisance conditions present during the observation of pixel \(k_{0}\). The virtual image \(\hat{D}_{k_{0}}\) is generated by collecting all the virtual spectra with similar nuisance effects.
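Continuing the NumPy sketch from the previous section (reusing its `renc1`, `nenc`, and `fuse` functions and the `rng` and `n_bands` variables), equation 6 amounts to a latent-code swap; the image `D` and the reference index are illustrative.

```python
def redatum(D, k, k0):
    # eq. 6: fuse the reflectance code of pixel k with the nuisance code of
    # the reference pixel k0 to synthesize a virtual spectrum.
    r_k = renc1(D[k:k + 1]).mean(axis=0)  # reflectance code of pixel k
    n_k0 = nenc(D[k0:k0 + 1])[0]          # nuisance code of the reference pixel
    return fuse(np.concatenate([r_k, n_k0])[None, :])[0]

D = rng.random((100, n_bands))            # image flattened to (num_pixels, n_bands)
virtual_image = np.stack([redatum(D, k, k0=0) for k in range(D.shape[0])])
print(virtual_image.shape)  # (100, 176): all pixels share pixel 0's nuisance
```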
## IV Training With Ground Truth
In this section, we perform spectral grouping using ground truth information to showcase the application of SymAE. We utilize a hyperspectral image acquired by NASA's AVIRIS instrument over the Kennedy Space Center (KSC), Florida, on March 23, 1996. The image has dimensions of \(512\times 614\) pixels and comprises \(176\) spectral bands. While this dataset was corrected using the ATREM [22, Atmosphere Removal Program] method based on radiative transfer modeling, researchers [23] have highlighted the necessity for post-ATREM polishing due to errors in solar irradiance models and atmospheric parameter estimations. The differences in spectral signatures among certain vegetation types may appear subtle. However, due to the presence of nuisance effects, these spectral signatures can exhibit notable discrepancies, even within pixels belonging to the same class. Consequently, the correction of residual nuisance effects, referred to as _polishing_, becomes imperative to ensure accurate discrimination of land cover in this environment.
To train SymAE, we partitioned the ground truth data into separate test and training sets. As mentioned earlier, our training set comprised approximately \(10\%\) of pixels from each class provided as ground truth within the dataset. Subsequently, we organized the training set pixels into groups corresponding to the ground truth categories. Our training process involved \(n_{X}=524288\) data points, utilizing \(n_{\tau}=8\) and a mini-batch size of \(256\). The dimensions of the latent codes, \(R_{i}\) and \(N_{i}\), were both set to \(64\). For further details regarding the configuration of the architecture, a Jupyter notebook written in Julia using the Flux package [24] is shared at [https://github.com/archieb1999/SymAE_KSC](https://github.com/archieb1999/SymAE_KSC).
After completing training, we generated virtual spectra using Equation 6 with randomly selected reference pixels. The resulting vegetation spectra exhibited a notably reduced intra-class variance, suggesting the uniformization of nuisance effects across these spectra. This is visually demonstrated in Fig. 3, where the spectra display a significant decrease in intra-class variance after undergoing the redatuming process. The autoencoder not only exhibits a high degree of proficiency in redatuming within the training set, but the variance is also considerably reduced in the case of test ground truth pixels that were excluded during training. Furthermore, it is noteworthy that subtle inter-class differences are retained throughout the redatuming process. These findings underscore the motivation for utilizing the redatumed pixels in subsequent classification and characterization tasks.
To quantify the extent of nuisance effects among pixels within a specific ground truth class, we employ the metric of _average variance_. Initially, we calculate the variance in spectral reflectance for each band, followed by computing the average of these variance values. This average effectively represents the overall variance among all spectra belonging to a particular ground truth class. A higher average variance indicates that the pixels within the chosen class exhibit significant dissimilarities. Post-redatuming, we anticipate observing a reduction in variance. This reduction is apparent in Table I, which presents the average variance values both before and after redatuming with a random pixel. Notably, following the redatuming procedure, the residual average variance in the test pixels is consistently below \(5\%\) for the majority of the classes.
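For reference, the metric can be computed as follows, where the array shape is illustrative:

```python
import numpy as np

def average_variance(spectra: np.ndarray) -> float:
    # spectra: (num_pixels_in_class, num_bands); per-band variance across
    # pixels, then averaged over the bands.
    return spectra.var(axis=0).mean()

rng = np.random.default_rng(0)
raw = rng.random((200, 176))  # stand-in for one ground-truth class
print(average_variance(raw))
```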
However, we wish to emphasize that the choice of a reference pixel can significantly impact virtual spectra. In our methodology of a priori grouping, which relies on ground truth data, we have observed that nuisance features are not limited to atmospheric and lighting effects; they could also include ground-based factors. These factors encompass spectral mixing, surface moisture content, and texture, among others, and they may add complexity to the virtual spectral analysis. It is noteworthy that these nuisance effects may affect different classes to varying degrees. For instance, marsh classes may be more susceptible to spectral variations due to surface water content than upland vegetation classes, potentially resulting in substantial fluctuations in energy reflected from marsh pixels. Additionally, some nuisance phenomena may pertain to specific classes but might not exist for others. For example, features related to crop ripeness may not be relevant in the context of water bodies. Thus, the degree of correspondence between generated virtual images and real-world scenarios may depend on the choice of reference pixel. Therefore, if the intention is to employ redatuming for detailed spectral and scenario analysis, we suggest selecting reference pixels with relatively similar nuisance feature distributions if prior information is available. Apart from a small example in the upcoming subsection, a detailed study on the interpretability of virtual spectra is beyond the scope of this paper. Instead, our primary focus in this paper is on the advantages of uniformizing nuisance features across spectra and extracting spectral features to enhance clustering and classification performance.
| No. | Class | Training Samples | Test Samples | Average Variance in Raw Spectra (\(\times 10^{-6}\)) | Average Variance After Redatuming (\(\times 10^{-6}\)) | Residual Variance (%) After Redatuming |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Scrub | 77 | 684 | 1173.9 | 24.6 | 2.10% |
| 2 | Willow Swamp | 25 | 216 | 1938.3 | 17.0 | 0.88% |
| 3 | CP Hammock | 26 | 230 | 591.9 | 38.0 | 6.42% |
| 4 | CP/Oak Hammock | 26 | 225 | 1267.6 | 43.5 | 3.43% |
| 5 | Slash Pine | 17 | 144 | 1315.4 | 22.2 | 1.69% |
| 6 | Oak Hammock | 23 | 206 | 1346.5 | 61.5 | 4.57% |
| 7 | Hardwood Swamp | 11 | 94 | 695.3 | 9.4 | 1.35% |
| 8 | Graminoid Marsh | 44 | 387 | 3466.7 | 51.0 | 1.47% |
| 9 | Spartina Marsh | 52 | 468 | 1530.9 | 132.3 | 8.64% |
| 10 | Typha Marsh | 38 | 339 | 3086.0 | 141.1 | 4.57% |
| 11 | Salt Marsh | 42 | 377 | 3986.9 | 415.4 | 10.42% |
| 12 | Mud Flats | 47 | 415 | 1529.5 | 325.1 | 21.26% |
| 13 | Water Body | 91 | 817 | 143.0 | 0.047 | 0.03% |

TABLE I: Redatuming Significantly Reduces the Average Variance in Testing Pixels Across Diverse Ground Truth Classes in the KSC Dataset.
Fig. 3: Ribbon plots illustrate the reduction in intra-class variance post-redatuming. Each ribbon plot represents the spectral distribution of a distinct class, with the central line denoting the mean and the ribbon's width on either side indicating the intra-class standard deviation. (a) Displays train set spectra from four distinct classes, while (b) shows their respective redatumed counterparts, wherein pixels from the same classes almost coincide, and (c) shows the reference pixel used for redatuming. (d)-(f) Show spectra from test set upland vegetation classes, following the pattern observed in (a)-(c). While not as pronounced as in (b), the redatumed test set pixels exhibit a discernible reduction in intra-class variance. (g)-(i) Same as (d)-(f) but for wetland classes.
### _Comparison to Denoising Autoencoders_
Denoising Autoencoders (DAEs) represent a class of neural networks frequently employed in unsupervised learning. They are recognized for their proficiency in recovering underlying data representations by intentionally introducing noise into input data [25]. In the field of remote sensing and hyperspectral image (HSI) analysis, various iterations of Denoising Autoencoders (DAEs) have been applied in previous studies [26]. In this study, we sought to compare the noise reduction achieved by denoising autoencoders with that of SymAE. Our observations indicate that while denoising autoencoders tend to smooth the spectral data, they fall short of significantly mitigating intra-class variance, as depicted in Fig. 4(b). DAEs typically assume a noise distribution, such as Gaussian, for denoising spectra. In contrast, SymAE adopts a different approach, learning the distribution of the underlying nuisance/noise after spectral grouping. Remarkably, utilizing SymAE resulted in a noticeable reduction in intra-class variance, highlighting its efficacy in capturing and discerning the distinctive features among different classes.
As mentioned earlier, it is important to recognize that the virtual spectra are contingent upon the reference pixel chosen during the redatuming process. While our demonstrations illustrate that a task such as classification remains relatively unaffected by the choice of this reference pixel, interpreting the virtual spectra may not be straightforward. These spectra are still influenced by the residual nuisance features present in the reference pixel. To illustrate this, we intentionally selected a Salt Marsh pixel with relatively high reflected energy (as shown in Fig. 4(d)) for redatuming Mud Flats and Typha Marsh pixels, as depicted in Fig. 4(c). Post-redatuming, the virtual spectra manifest noticeably elevated energy levels
Fig. 4: Comparative analysis of application of DAE and SymAE on test data. (a) Raw spectra from two land-cover classes in Kennedy Space Center scene. (b) DAE demonstrates a propensity to smooth spectral data, yet notable within-group variations remain evident. (c) Redatuming, as implemented by SymAE, outperforms denoising by DAE in mitigating intra-class variance. However, it is important to note that redatumed spectra may exhibit significant dissimilarities from the original raw spectra. (d) Shows the reference Salt Marsh pixel used for the redatuming, along with pixels from the respective ground truth classes that are closest to the redatumed spectra.
compared to their raw counterparts. This phenomenon likely stems from our autoencoder's incorporation of overall reflected energy as an element within the nuisance features. Nevertheless, the redatumed spectra still maintain a shape resembling the pixels closest to them in their respective ground truth classes, as referenced in subfigure 4(d).
In essence, the SymAE-generated virtual images are not completely _denoised_; they still retain residual nuisance effects originating from the reference pixels. Nonetheless, due to the uniformity of nuisance features across the entire image, the relative distinctions among redatumed pixels can prove invaluable for subsequent image-processing tasks.
### _Virtual Images: Classification_
SymAE introduces a valuable capability to visualize hyperspectral image locations under varying virtual conditions, which has the potential to enhance pixel classification accuracy and extend the possibilities of hyperspectral image analysis. To exemplify the benefits brought about by redatuming and the uniformization of nuisance effects across pixels, we conducted K-Nearest Neighbor (KNN) classification, with \(K=5\), on both the raw and virtual hyperspectral images. These results are presented in Fig. 5. For the raw image, the overall accuracy for the test pixels aligned with the ground truth stands at \(81.6\%\), a lower figure primarily attributed to the presence of nuisance effects. However, when applied to the virtual images after the uniformization of nuisances, the overall accuracy rises significantly to \(92.8\pm 0.9\%\). In a parallel approach, we assessed the performance of two classical machine learning models on our test set: 1) Random Forests and 2) linear Support Vector Machines (SVM). The outcomes distinctly underscore the significance of SymAE-generated virtual images: an evident enhancement in predictive overall accuracy (OA) for both Random Forests (\(86.2\%\) overall accuracy for raw images versus a noteworthy \(93.0\pm 0.9\%\) overall accuracy for virtual images) and linear SVM classifiers (\(74.0\%\) overall accuracy for raw images compared to a substantial \(85.8\pm 4.9\%\) overall accuracy for virtual images). As a result, we conclusively establish that the application of SymAE redatuming proves beneficial when undertaking land-cover discrimination tasks.
While more complex models, such as the Random Forests and KNN, exhibit relatively consistent performance, the substantial variance in accuracy when employing a linear SVM on different virtual images highlights the decisive role of reference pixel selection in shaping classification outcomes. This influence is not uniform across all cases, with some reference pixels leading to remarkable enhancements while others, albeit rarely, induce performance deterioration. On average, our results demonstrate a significant overall improvement in classification performance. Nonetheless, the selection of an appropriate reference pixel can pose challenges. In subsequent sections, we introduce an alternative method for leveraging SymAE to further enhance clustering and classification.
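The raw-versus-virtual comparison can be reproduced along the following lines, assuming scikit-learn; the arrays below are random placeholders standing in for the KSC pixels and labels, so the printed accuracies are not meaningful in themselves.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train_raw, X_test_raw = rng.random((500, 176)), rng.random((200, 176))
X_train_virt, X_test_virt = rng.random((500, 176)), rng.random((200, 176))  # redatumed pixels
y_train, y_test = rng.integers(0, 13, 500), rng.integers(0, 13, 200)

for name, (Xtr, Xte) in {"raw": (X_train_raw, X_test_raw),
                         "virtual": (X_train_virt, X_test_virt)}.items():
    acc = KNeighborsClassifier(n_neighbors=5).fit(Xtr, y_train).score(Xte, y_test)
    print(f"{name} overall accuracy: {acc:.3f}")
```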
### _Surface-Reflectance Code: Clustering Analysis_
The alternative method mentioned above is to use the reflectance code generated by REnc for clustering and classification tasks. In addition to generating virtual images, a trained SymAE also provides us with a reflectance code, denoted as REnc\({}_{1}(D[\cdot])\), for each pixel. Since this code is intended to remain unaffected by atmospheric distortions and other forms of nuisance variability, the focus of this section is to leverage this code for clustering analysis. Our objective is to investigate whether the reflectance latent space can effectively disentangle classes characterized by subtle differences in reflectivity, such as neighboring vegetation types.
Fig. 5: K-Nearest Neighbors (KNN) pixel classification results on KSC scene maps. (a) ground truth map of the KSC scene, serving as the baseline. (b) Pixel classification utilizing KNN on the raw image, resulting in an overall accuracy of \(81.6\%\) for the test set ground truth. (c) Pixel classification conducted on a virtual image with uniformized nuisance, demonstrating an elevated average overall accuracy of \(92.8\%\). Virtual images, generated through the redatuming process, contribute to enhanced pixel classification accuracy, highlighting their valuable role in advancing hyperspectral image analysis.
In the case of the KSC experiment, we initially sampled 100 pixels from two ground truth classes, Slash Pine and Oak Hammock (Fig. 6(a)). We performed Principal Component Analysis (PCA) on their spectra and observed that they closely overlapped in the first two principal components, making their separation challenging (Fig. 6(b)). Expanding the number of components did not significantly alter the results. Subsequently, we explored an alternative approach by clustering the pixels based on the reflectance code. The 2-D linear subspace of the reflectance code is depicted in Fig. 6(c). To quantitatively assess this improvement, we applied the K-means clustering algorithm to both the raw spectra space and the reflectance code latent space. We repeated this process 100 times with randomly sampled pixels from the classes. The results indicated that, on average, K-means clustering in the raw spectra space achieved an accuracy of \(75.5\%\), while in the latent reflectance code space it achieved \(95.9\%\), a substantial improvement of \(20.4\) percentage points. Even more challenging were the classes CP Hammock and CP/Oak Hammock (Fig. 6(d), 6(e)), which exhibited even closer proximity. In raw spectra space, the average clustering accuracy was \(53.3\%\). In contrast, when we performed clustering in the REnc space, we obtained an accuracy of \(89.9\%\), a notable improvement of \(36.6\) percentage points.
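This clustering comparison can be sketched as follows. The block below is a minimal two-cluster Lloyd's algorithm written only to make the experiment reproducible in outline; `Xraw`, `Zcode`, and `truth` in the usage comments stand in (as assumptions) for the sampled raw spectra, their reflectance codes, and the binary ground truth labels.

```julia
using Statistics, Random

# Lloyd's algorithm for k = 2; columns of X are pixels (raw spectra or
# reflectance codes). Returns a cluster index (1 or 2) per column.
function kmeans2(X::Matrix{Float64}; iters::Int=100, rng=Random.default_rng())
    n = size(X, 2)
    c = X[:, randperm(rng, n)[1:2]]          # two random initial centers
    assign = ones(Int, n)
    for _ in 1:iters
        for j in 1:n
            assign[j] = sum(abs2, X[:, j] .- c[:, 1]) <=
                        sum(abs2, X[:, j] .- c[:, 2]) ? 1 : 2
        end
        for m in 1:2
            cols = findall(==(m), assign)
            isempty(cols) || (c[:, m] = vec(mean(X[:, cols]; dims=2)))
        end
    end
    return assign
end

# Two-cluster accuracy against binary ground truth (labels 1/2), up to swap.
cluster_acc(assign, truth) = max(mean(assign .== truth), mean(assign .== 3 .- truth))

# acc_raw  = cluster_acc(kmeans2(Xraw),  truth)   # ≈ 0.755 on average
# acc_code = cluster_acc(kmeans2(Zcode), truth)   # ≈ 0.959 on average
```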
In our comprehensive pairwise clustering experiment encompassing all ground truth classes within the scene, we observed an average improvement of \(12.0\) percentage points in clustering accuracy across all class pairs. Notably, the most substantial improvements were evident among classes characterized by subtle differences. These pairwise enhancements are graphically illustrated in Fig. 7. These findings underscore the substantial effectiveness of REnc in capturing class-specific features essential for distinguishing between closely related classes.
### _Using Surface-Reflectance Code: Classification_
Building on the insights gained from the previous section, we now turn our focus to the application of SymAE for hyperspectral image (HSI) classification, utilizing the reflectance encoder REnc to extract class-specific information. We trained a feed-forward dense layer neural network to predict the ground truth class label based on a given pixel's reflectance code. The classification was performed on a pixel-by-pixel basis, resulting in an overall test accuracy of \(94.65\%\). To the best of our knowledge, this represents the highest classification accuracy achieved using solely spectral information, without leveraging spatial correlation within the scene for this train-test split ratio. The classification results are detailed in Table II.
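As a rough indication of this pipeline, the sketch below trains a stand-in classifier on the reflectance codes: a single softmax layer fitted by full-batch gradient descent. Our actual model is a deeper feed-forward dense network, so this block illustrates only the interface (codes in, class labels out), not the exact architecture or optimizer.

```julia
using LinearAlgebra, Statistics

# Stand-in for the dense classifier head: one linear layer + softmax trained
# by full-batch gradient descent on reflectance codes Z (one code per column).
function train_softmax(Z::Matrix{Float64}, y::Vector{Int}, nclass::Int;
                       lr::Float64=0.1, epochs::Int=500)
    d, n = size(Z)
    W, b = zeros(nclass, d), zeros(nclass)
    Y = zeros(nclass, n)
    for j in 1:n; Y[y[j], j] = 1.0; end              # one-hot labels
    for _ in 1:epochs
        S = W * Z .+ b
        P = exp.(S .- maximum(S; dims=1))            # numerically stable softmax
        P ./= sum(P; dims=1)
        G = (P .- Y) ./ n                            # dL/dS for cross-entropy
        W .-= lr .* (G * Z')
        b .-= lr .* vec(sum(G; dims=2))
    end
    return W, b
end

predict(W, b, Z) = [argmax(W * Z[:, j] .+ b) for j in axes(Z, 2)]
```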
Fig. 6: SymAE allows for clustering pixels based on reflectance code, i.e., REnc\({}_{1}(D[\cdot])\), that is not affected by the atmospheric variations and other nuisance effects. (a,d) Raw spectra of spectrally close-by classes. (b,e) These classes are hard to separate in 2D raw spectra space. (c,f) Notice that the classes that otherwise have subtle differences in raw spectra are much easier to discriminate in latent coherent code space. The most significant improvement in the K-means clustering experiment we described is observed in classes with subtle differences like CP Hammock and CP/Oak Hammock depicted in (d), (e) and (f).
In our pursuit of comparing SymAE with leading HSI classification methods that adopt a combined spatial-spectral approach, we sought to incorporate spatial information into our experiments. Our approach involved assigning to each pixel the mode of the labels produced by the spectral feed-forward network for the pixel itself and its eight adjacent neighbors. This process was iterated three times, effectively applying a form of spatial smoothing. The results of this approach are presented in Table II, showcasing an overall accuracy of \(99.48\%\). This performance closely aligns with state-of-the-art methods such as SSRPnet [27] and CVSSN [28], which harness spatial information more comprehensively.
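The smoothing rule just described translates directly into code; the following Julia sketch is an illustration (the H×W label map `labels` and the validity `mask`, which marks pixels that actually carry a prediction, are assumed inputs, not names from a released implementation).

```julia
# Majority (mode) smoothing over each pixel and its 8-neighborhood,
# iterated three times by default, as described above.
function spatial_smooth(labels::Matrix{Int}, mask::AbstractMatrix{Bool}; iters::Int=3)
    H, W = size(labels)
    out = copy(labels)
    for _ in 1:iters
        prev = copy(out)
        for i in 1:H, j in 1:W
            mask[i, j] || continue
            votes = Int[]
            for di in -1:1, dj in -1:1              # pixel plus 8 neighbors
                ii, jj = i + di, j + dj
                (1 <= ii <= H && 1 <= jj <= W && mask[ii, jj]) &&
                    push!(votes, prev[ii, jj])
            end
            u = unique(votes)
            out[i, j] = u[argmax([count(==(c), votes) for c in u])]
        end
    end
    return out
end
```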
Furthermore, we extended our assessment to include two widely used HSI datasets: Indian Pines and Pavia University. Employing the same number of training samples as Hong et al. did when introducing SpectralFormer [29], the current state-of-the-art backbone network for extracting spectral features in HSI classification, our results on both datasets demonstrated superior classification performance compared to SpectralFormer when using purely spectral features; the comparison is summarized in Table V, with per-class results in Tables III and IV.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline No. & Class & Training & Test & Pixel-based & Classification Accuracy \\ & & Samples & Samples & Classification Accuracy & After Spatial Smoothing \\ \hline
1 & Scrub & 77 & 684 & 95.61 \% & 97.71 \% \\
2 & Willow Swamp & 25 & 218 & 96.79 \% & 100.0 \% \\
3 & CP Hammock & 26 & 230 & 85.22 \% & 98.26 \% \\
4 & CP/Oak Hammock & 26 & 226 & 80.53 \% & 96.02 \% \\
5 & Slash Pine & 17 & 144 & 77.08 \% & 94.44 \% \\
6 & Oak Hammock & 23 & 206 & 77.67 \% & 100.0 \% \\
7 & Hardwood Swamp & 11 & 94 & 88.23 \% & 100.0 \% \\
8 & Graminoid Marsh & 44 & 387 & 96.90 \% & 100.0 \% \\
9 & Spartina Marsh & 52 & 468 & 97.44 \% & 100.0 \% \\
10 & Typha Marsh & 38 & 366 & 97.27 \% & 100.0 \% \\
11 & Salt Marsh & 42 & 377 & 98.14 \% & 99.73 \% \\
12 & Mud Flats & 47 & 456 & 98.90 \% & 100.0 \% \\
13 & Water Body & 91 & 836 & 100.0 \% & 100.0 \% \\ \hline & Overall Accuracy & & & 94.65 \% & 99.48 \% \\ & Average Accuracy & & & 91.53 \% & 99.09 \% \\ & Kappa \(\times 100\) & & & 94.04 & 99.43 \\ \hline \end{tabular}
\end{table} TABLE II: Classification of Test Pixels from Kennedy Space Center Dataset Using SymAE Generated Reflectance Code.
Fig. 7: Heatmap illustrating improvement in clustering in the KSC dataset. The matrix elements indicate the percentage accuracy difference between K-means clustering in the latent reflectance code, REnc\({}_{1}(D[\cdot])\), and clustering in the raw spectral data while doing pairwise unsupervised clustering between land-cover classes. The numbers on the axes indicate the class indices, following the same ordering as in Table I. This heatmap pertains to the ground truth-based training scenario, and the clustering was done on the test set. Pairs that show minimal improvement are those that already exhibit significant separation in raw spectra.
\begin{table}
\begin{tabular}{c c c c} \hline HSI Scene & Metric & SpectralFormer & SymAE \\ \hline \multirow{3}{*}{Pavia University} & Overall Accuracy & 87.94 \% & **93.90 \%** \\ & Average Accuracy & 87.47 \% & **94.15 \%** \\ & Kappa \(\times 100\) & 83.58 & **91.76** \\ \hline \multirow{3}{*}{Indian Pines} & Overall Accuracy & 78.55 \% & **80.82 \%** \\ & Average Accuracy & 84.68 \% & **88.79 \%** \\ \cline{1-1} & Kappa \(\times 100\) & 75.54 & **78.19** \\ \hline \end{tabular}
\end{table} TABLE V: Comparison Between Classification Accuracies of SymAE and SpectralFormer Using Only Spectral Information.
\begin{table}
\begin{tabular}{c c c c c c} \hline No. & Class & Training & Test & Pixel-based & Classification Accuracy \\ & & Samples & Samples & Classification Accuracy & After Spatial Smoothing \\ \hline
1 & Alfalfa & 15 & 31 & 96.77 \% & 100.0 \% \\
2 & Corn-notill & 50 & 1378 & 74.02 \% & 82.80 \% \\
3 & Corn-mintill & 50 & 780 & 76.28 \% & 88.72 \% \\
4 & Corn & 50 & 187 & 81.82 \% & 99.47 \% \\
5 & Grass-pasture & 50 & 433 & 95.38 \% & 97.69 \% \\
6 & Grass-trees & 50 & 680 & 91.18 \% & 97.50 \% \\
7 & Grass-pasture-mowed & 15 & 13 & 100.0 \% & 92.31 \% \\
8 & Hay-windrowed & 50 & 428 & 98.36 \% & 99.53 \% \\
9 & Oats & 15 & 5 & 100.0 \% & 80.00 \% \\
10 & Soybean-notill & 50 & 922 & 82.65 \% & 95.77 \% \\
11 & Soybean-mintill & 50 & 2405 & 69.90 \% & 87.69 \% \\
12 & Soybean-clean & 50 & 543 & 85.82 \% & 98.16 \% \\
13 & Wheat & 50 & 155 & 99.35 \% & 99.35 \% \\
14 & Woods & 50 & 1215 & 88.48 \% & 95.97 \% \\
15 & Buildings-Grass-Trees-Drives & 50 & 336 & 80.65 \% & 95.54 \% \\
16 & Stone-Steel-Towers & 50 & 43 & 100.0 \% & 100.0 \% \\ \hline & Overall Accuracy & & & 80.82 \% & 91.97 \% \\ & Average Accuracy & & & 88.79 \% & 94.41 \% \\ & Kappa \(\times 100\) & & & 78.19 & 90.82 \\ \hline \end{tabular}
\end{table} TABLE IV: Classification of Test Pixels from Indian Pines Dataset Using SymAE Generated Reflectance Code.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline No. & Class & Training & Test & Pixel-based & Classification Accuracy \\ & & Samples & Samples & Classification Accuracy & After Spatial Smoothing \\ \hline
1 & Asphalt & 548 & 6083 & 93.65 \% & 97.63 \% \\
2 & Meadows & 540 & 18109 & 93.81 \% & 97.91 \% \\
3 & Gravel & 392 & 1707 & 83.42 \% & 91.86 \% \\
4 & Trees & 524 & 2520 & 98.58 \% & 99.37 \% \\
5 & Painted metal sheets & 265 & 1080 & 100.0 \% & 100.0 \% \\
6 & Bare soil & 532 & 4497 & 97.89 \% & 100.0 \% \\
7 & Bitumen & 375 & 955 & 91.62 \% & 95.39 \% \\
8 & Self blocking bricks & 514 & 3168 & 88.35 \% & 97.03 \% \\
9 & Shadows & 94 & 716 & 100.0 \% & 100.0 \% \\ \hline & Overall Accuracy & & & 93.90 \% & 97.90 \% \\ & Average Accuracy & & & 94.15 \% & 97.69 \% \\ & Kappa \(\times 100\) & & & 91.76 & 97.14 \\ \hline \end{tabular}
\end{table} TABLE III: Classification of Test Pixels from Pavia University Dataset Using SymAE Generated Reflectance Code.
This notable improvement in performance emphasizes SymAE's strong proficiency in spectral feature extraction. As Sun et al. pointed out in their study [30], spectral features play a fundamental role in accurately characterizing the distribution of ground objects, serving as crucial discriminative factors in HSIs. Despite these promising results, it is important to note that purely spectral methods are susceptible to scattered noise, which can lead to lower accuracy levels. Even after applying spatial smoothing, our accuracies on these datasets did not match the leading spectral-spatial methods. These findings collectively underscore SymAE's potential for hyperspectral image classification and motivate further exploration into advanced techniques for incorporating spatial information. Such exploration holds the promise of yielding even greater classification performance.
## V Training SymAE Without Ground Truth
Many remote sensing datasets lack ground truth labels for different spectra, making it challenging to group spectra before SymAE training. In such scenarios, we rely on the assumption of spatial correlation in the reflectance information, enabling us to group pixels located nearby within the scene. This assumption implies that spatially neighboring pixels likely belong to the same class.
This approach, which assumes that spatial proximity implies class similarity, provides structural organization to the data even when explicit labels are absent, costly to obtain, or difficult to acquire. This assumption is particularly plausible for datasets characterized by significant spatial correlation, such as the Indian Pines dataset, which contains nearly 10 classes (farmlands) with extensive spatial coverage. Guided by this premise, we partitioned the KSC scene into small \(3\times 3\) pixel groups for SymAE training, as sketched below. Our experiments on the KSC dataset revealed an average enhancement of \(8.7\%\) in pairwise K-means clustering accuracy, akin to the results discussed in subsection IV-C, when utilizing the reflectance code instead of raw spectra. It is crucial to note that we did not observe this improvement when spectra were randomly grouped within the scene. As expected, random spectral grouping led to significantly poorer clustering performance.
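The grouping itself is straightforward; the following sketch partitions a scene into non-overlapping 3×3 patches, each becoming one training group. In practice one may also want to discard patches that straddle unlabeled or background pixels, which we omit here for brevity; `cube` is an assumed H×W×B hyperspectral array, not a name from a released implementation.

```julia
# Partition an H×W scene into non-overlapping s×s pixel groups for SymAE
# training without ground truth; each returned group is a B×(s*s) matrix of
# neighboring spectra, i.e., one datapoint of grouped pixels.
function patch_groups(cube::Array{Float64,3}; s::Int=3)
    H, W, B = size(cube)
    groups = Vector{Matrix{Float64}}()
    for i in 1:s:H-s+1, j in 1:s:W-s+1
        block = reshape(permutedims(cube[i:i+s-1, j:j+s-1, :], (3, 1, 2)), B, s * s)
        push!(groups, block)
    end
    return groups
end
```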
The clustering analysis, as illustrated in Fig. 9, provides valuable insights into the performance of SymAE concerning pair-wise classes. We expect that the pixels situated close to class boundaries might not be well represented in SymAE's latent space due to the simplicity of the grouping approach employed. As evident, the performance of SymAE does exhibit variations across different classes. In fact, there is evidence of performance degradation for certain classes when compared to the use of raw spectra -- most of these classes are not spatially contiguous or have a small extent.
This observation motivated us to verify this unsupervised approach further, in a more concentrated setting. We chose a small patch of land on the Indian Pines dataset where our spatial-proximity assumption would seem to fit well. The patch primarily contains two close-by classes: Soybean-clean and Corn-mintill. The patch and the clustering analysis of the pixels are depicted in Fig. 10. In line with our prior observations from subsection IV-C, the initial representation of raw spectral data using the first two principal components does not reveal clear class separations. However, a substantial enhancement in structure becomes evident when examining the latent reflectance code space, where class distinctions become considerably more discernible. It is worth highlighting that pixels positioned near or along the class boundaries present a challenge in terms of differentiation. This observation aligns with our underlying assumption of spatial proximity, which groups border-adjacent pixels together prior to the SymAE training process, even if they genuinely belong to distinct classes. On a positive note, pixels situated at a greater distance from the class boundaries, or beyond their immediate vicinity, exhibit a distinct separation within the latent space. This facilitates their effective classification using a straightforward decision boundary. The results illustrate improved clustering within the latent space. This unsupervised grouping approach holds potential utility in settings where ground truth information is unavailable, such as remote or extraterrestrial environments. In future research endeavors, we will focus on developing advanced and robust methodologies for prior grouping, especially in unsupervised contexts.
Fig. 8: Classification of SymAE-generated reflectance code for the Kennedy Space Center scene. a) Pixels are labeled using purely spectral information. b) Spatial smoothing applied to (a), which improved the classification accuracy.
Fig. 9: Heatmap illustrating the improvement in K-means clustering achieved by utilizing the latent reflectance code in place of raw spectra, similar to Fig. 7, but without relying on ground truth labels. The heatmap highlights substantial performance enhancements across most classes, while also indicating instances of performance decline among specific class pairs.
Fig. 10: A focused testing of SymAE without ground truth in the Indian Pines scene. (a) A selected sub-region within the scene characterized by favorable spatial conditions to test SymAE without ground truth. (b) A 2D representation of the raw spectral space, utilizing the same color scheme as in (a) to visualize data points. (c) The 2D latent space of the reflectance code. Pixels near class boundaries pose challenges for differentiation, aligning with our spatial proximity assumption that groups border-adjacent pixels together. However, pixels farther from class boundaries exhibit clear separation within this space, aiding straightforward discrimination.
## VI Discussion
In this section, we delve into the distinctive training phenomena observed during SymAE training, consider potential implications and applications of this architecture, and outline areas with room for future improvements.
### _Atypical training nature of SymAE and choice of activation function_
We observed an intriguing phenomenon during the training process of SymAE. Initially, the training loss curve exhibits a declining trend, followed by a subsequent increase in loss, before eventually reaching a state of saturation. Notably, this increase in loss corresponds with an improvement in K-means clustering performance within the latent space. We conjecture that this phenomenon is attributable to the feature transfer dynamics between the encoding modules, specifically from NEnc to REnc, and to how the decoder Fuse attends to them. In the early stages of SymAE training, NEnc inadvertently captures coherent reflectance features. As training progresses, dropout layers intermittently obfuscate these features. Consequently, Fuse works to extract information from REnc, which continually supplies data. Over time, Fuse adjusts to make the most of REnc-sourced data. However, it is important to note that due to REnc's inherent constraints, the quality of reconstruction falls short of what an unconstrained dense layer network can achieve. This discrepancy leads to the observed increase in loss during training.
Having a sufficiently long nuisance code length can mitigate this atypical behavior, but that would significantly increase the number of training updates required to achieve effective disentanglement of nuisance and reflectance features in latent space. Empirically, in our experiments we consistently achieved satisfactory SymAE performance after \(3000\) to \(4000\) training epochs with \(2048\) minibatches, each minibatch containing \(256\) datapoints.
We would also like to highlight our selection of the Leaky ReLU with slope parameter \(0.5\) as activation function. Our decision in this regard was guided by empirical observations from our study. Throughout our investigations, we observed that traditional activation functions, including tanh and ReLU, exhibited susceptibility to vanishing gradient issues and the _dying ReLU_ problem [31]. These challenges are particularly pronounced in SymAE, given its inherent stochastic nature. Our experimental results unequivocally demonstrated that the Leaky ReLU, characterized by its inherent flexibility, effectively mitigates these issues, thus establishing itself as the better choice for our network.
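Concretely, the chosen activation and its gradient are shown below, next to plain ReLU for comparison (the function names are ours, used only for illustration).

```julia
# Leaky ReLU with slope 0.5 (our choice) versus plain ReLU; on the negative
# half-line its gradient stays at 0.5 rather than collapsing to 0, which is
# what mitigates the dying-unit behavior described above.
lrelu(x)  = max(0.5 * x, x)          # x for x ≥ 0, 0.5x for x < 0
dlrelu(x) = x >= 0 ? 1.0 : 0.5       # gradient never vanishes
relu(x)   = max(zero(x), x)
drelu(x)  = x >= 0 ? 1.0 : 0.0       # zero gradient: "dying ReLU" risk
```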
### _Applications and future possibilities with SymAE_
Symmetric Autoencoder (SymAE) introduces a data-driven architecture with significant potential applications in challenging scenarios where physical modeling is impractical. For instance, it can find utility in remote sensing tasks conducted in extraterrestrial environments or locations with limited information available about nuisance factors. SymAE offers an alternative approach that sidesteps the complexity associated with developing intricate physical models.
Some key implications and potential future applications of SymAE encompass:
* Atmospheric Correction Alternative and Data Quality Enhancement: SymAE's capacity to disentangle surface reflectance information from nuisances like atmospheric interference and sensor noise provides an appealing alternative to conventional atmospheric correction methods. This has the potential to significantly enhance data quality, particularly in settings where physical modeling is unfeasible.
* Scenario Exploration and Virtual Imaging: SymAE's ability to generate virtual images under diverse conditions facilitates scenario exploration and hypothesis testing, supporting more informed decision-making.
* Data Augmentation and Model Training: SymAE's capability to generate synthetic data could bolster machine learning model training, enhancing model robustness. This could prove particularly valuable in data-scarce scenarios.
* Enhanced Hyperspectral Signature Analysis: SymAE's capacity to reduce intra-class variance enhances hyperspectral signature analysis, potentially aiding in finer material differentiation and environmental change detection.
* Spatial-Spectral Fusion: Future investigations can delve into the integration of spatial information with spectral data, further extending SymAE's utility in applications such as classification and clustering.
Fig. 11: Atypical training curves we encountered while training SymAE. At the outset, the training loss curve shows a descending trend, which is subsequently followed by a rise in loss, ultimately reaching a state of saturation.
### _Scope for improvement_
While SymAE has shown promising results, there are certain aspects that warrant further improvement. One significant consideration is the time required to achieve effective disentanglement between reflectance and nuisance features. During our experiments, we observed that the inherent stochastic nature of SymAE training demanded a substantial investment of approximately 12 hours to reach a point where reflectance and nuisance features were satisfactorily disentangled, based on the configurations chosen for the datasets in this study. This extended training duration prompts the exploration of more efficient training strategies and architectural refinements, offering a compelling avenue for future research. Another area of improvement relates to the initial grouping method employed in the fully unsupervised scenario presented here. This method, while effective to some extent, is relatively simplistic and susceptible to misclassifying pixels from distinct classes as a single group. In future investigations, we aim to enhance this grouping algorithm to minimize the probability of erroneously assigning pixels from different classes to the same group. This refinement will contribute to the overall robustness of the SymAE approach, particularly in scenarios where ground truth labels are unavailable or challenging to obtain.
## VII Conclusion
In conclusion, this study has introduced the Symmetric Autoencoder (SymAE) architecture in the context of Hyperspectral Imaging (HSI) analysis and demonstrated its capabilities. SymAE's unique approach to disentangling nuisance features from surface reflectance features in a purely data-driven manner presents potential opportunities for advancing HSI data preprocessing and analysis. We have showcased the practicality of SymAE by utilizing it to generate virtual images through redatuming spectra, effectively uniformizing nuisance effects across hyperspectral image spectra and reducing intra-class variance. Compared to denoising autoencoders, SymAE offers superior performance in this regard. Furthermore, the use of virtual images enhances various image analysis tasks, as demonstrated in particular for classification and clustering. Our work has also highlighted the extraction of reflectance codes, which remain independent of nuisance effects. We demonstrated the capability of SymAE to extract spectral features, enhancing clustering and classification performance, outperforming state-of-the-art architectures in the process. To enhance accuracy further, we introduced a spatial smoothing technique, complementing SymAE's spectral capabilities. While this narrows the performance gap between our purely spectral approach and state-of-the-art spectral-spatial classification methods, realizing the full potential of spatial information in classification requires more advanced methodologies. Additionally, we proposed a method for applying SymAE without relying on ground truth information, opening possibilities for extraterrestrial settings or environments where modeling nuisance phenomena is challenging. However, the current grouping method before SymAE training without ground truth requires refinement, offering an avenue for future research. Our architecture has the potential to find applications in various domains, including spectral signature analysis, data augmentation, and scenario analysis, making Symmetric Autoencoders a tool with promise in HSI analysis, inviting further exploration and evaluation within the research community.
## VIII Acknowledgements
All experimental procedures were conducted using the Julia programming language on a local computing system equipped with 128 GB RAM, an AMD Ryzen Threadripper 3960X 24-core processor, and a 24 GB NVIDIA GeForce RTX 3090 GPU. We would also like to acknowledge the use of ChatGPT, an AI language model developed by OpenAI, which played a significant role in refining portions of the text in this paper. The contributions of ChatGPT were valuable in enhancing the clarity and coherence of our writing. |
2309.04095 | Unitary time evolution in quantum mechanics is a stronger physical
postulate than linear time evolution | Discussions of quantum mechanics often loosely claim that time evolution
logically must be unitary, in order for the probabilistic interpretation of the
amplitudes of the state vector to make sense at all times. We discuss from
first principles whether this claim is true: if we assume only that the
time-evolution operator is *linear*, then does the stronger requirement that it
be *unitary* follow from the other axioms of quantum mechanics? The answer is
subtle. We discuss two mathematically distinct but physically equivalent
formulations of the axioms of quantum mechanics, and consider generalizing each
to postulate only that time evolution is linear. Within one formulation, the
unitarity of time evolution follows logically from the other axioms -- but
within the other formulation, it does not. Allowing the time-evolution operator
to be (a priori) an arbitrary linear operator does not change the physical observables in
one formulation of quantum mechanics, but changes the other formulation to a
*distinct* (internally consistent) physical theory that allows new
phenomenology such as faster-than-light communication. Therefore, the
unitarity of time evolution is arguably better thought of as a logically
independent and experimentally falsifiable axiom of quantum mechanics, not as a
tautological consequence of the other axioms. | Edward Parker | 2023-09-08T03:14:32Z | http://arxiv.org/abs/2309.04095v1 | Unitary time evolution in quantum mechanics is a stronger physical postulate than linear time evolution
###### Abstract
Discussions of quantum mechanics often loosely claim that time evolution logically must be unitary, in order for the probabilistic interpretation of the amplitudes of the state vector to make sense at all times. We discuss from first principles whether this claim is true: if we assume only that the time-evolution operator is _linear_, then does the stronger requirement that it be _unitary_ follow from the other axioms of quantum mechanics? The answer is subtle. We discuss two mathematically distinct but physically equivalent formulations of the axioms of quantum mechanics, and consider generalizing each to postulate only that time evolution is linear. Within one formulation, the unitarity of time evolution follows logically from the other axioms - but within the other formulation, it does not. Allowing the time-evolution operator to be (a priori) an arbitrary linear operator does not change the physical observables in one formulation of quantum mechanics, but changes the other formulation to a _distinct_ (internally consistent) physical theory that allows new phenomenology such as faster-than-light communication. Therefore, the unitarity of time evolution is arguably better thought of as a logically independent and experimentally falsifiable axiom of quantum mechanics, not as a tautological consequence of the other axioms.
## I Introduction
Discussions of quantum mechanics (QM) often claim that time evolution logically must be unitary in order to preserve total probability [1; 2; 3]. A more precise version of this claim states that the basic rules of probability require that the norm of a quantum state vector must be preserved over time - so if we postulate only that time evolution is represented by a _linear_ operator \(\hat{U}\), then it must in fact be unitary, because unitary operations are the only linear operators on an inner product space that preserve vector norms.1
Footnote 1: Throughout this paper, we assume the standard norm on an inner product space \(\|\psi\|:=\sqrt{\langle\psi|\psi\rangle}\). Ref. [4] considers other choices of norm for vectors in \(\mathbb{C}^{n}\) and finds that only the standard inner product space 2-norm (and the 1-norm of classical probability theory, if we restrict ourselves to nonnegative entries) permit nontrivial norm-preserving linear maps.
This article attempts to clarify certain implicit assumptions behind this claim. We argue that there are two physically equivalent ways to formulate "textbook" quantum mechanics with unitary time evolution. Under one formulation, the unitarity of time evolution does follow naturally from the other postulates of QM and the assumption of linear time-evolution. But the other formulation permits a natural generalization of the time-evolution rule to allow non-unitary time evolution that is fully compatible with the other postulates, but which is not allowed by textbook QM. Moreover, this generalized time-evolution rule is not just a different mathematical formalism for standard QM, but represents a genuinely different physical theory that allows for physical phenomena that are impossible under the axioms of standard QM. Therefore, under the second formulation of QM, the question whether time evolution is unitary is directly experimentally testable. Of course, all experimental evidence so far indicates that time evolution is indeed unitary, indicating that time evolution in QM must indeed be postulated to be unitary in order to agree with experiment. Our central argument is simply that the unitarity of time evolution needs to be specified as an independent axiom of QM, which (at least under certain formulations of QM) is logically independent of the other axioms.
This second formulation of QM has been discussed before in [4] under the name of "manual normalization." Our goal is not to discuss its phenomenology in detail, but to simply clarify the conceptual point that it is fully compatible with (certain formulations of) all of the axioms of standard QM _except_ for the postulate of unitary time-evolution.
We make the important caveat that we only consider the abstract version of QM that is often used in (e.g.) quantum information science, which allows for arbitrary unitary time evolution and does not explicitly postulate the Schrodinger equation [5]. The unitarity of time evolution does indeed follow automatically from the Schrodinger equation. But it does not (necessarily) follow automatically from the _other_ axioms of QM (like the Born rule), as is sometimes claimed.
This paper assumes familiarity with the basic axioms of quantum mechanics, but no other background (except during one short optional discussion). The footnotes go into a bit more mathematical detail than the main text does, but are not necessary for conveying the main argument.
## II Two physically equivalent formulations of the axioms of quantum mechanics
We will not attempt to be completely rigorous in our statement of the axioms of QM; in the usual tradition of physics, we will be just rigorous enough to get our point across, but no more.
We will follow the general axiom set laid out by Shankar [6], omitting details that will not be necessary for our argument. We will deliberately present them somewhat vaguely at first, and then refine them with more details below. In order to avoid the many mathematical subtleties that arise from infinite-dimensional Hilbert spaces, we will assume that all Hilbert spaces are finite-dimensional.
At a high level, the standard theory of QM can be derived from four basic axioms (with some clarifying details in the footnotes):
1. The state of an isolated quantum system is (non-uniquely) represented by a vector \(|\psi\rangle\) in a complex Hilbert space.2 Footnote 2: “Non-uniquely represented” means that multiple state vectors in the Hilbert space correspond to the same physical state. The meaning of an “isolated” quantum system is rather subtle, but in this article we ignore the possibility of mixed states and assume that all quantum states are pure (except in one short discussion below).
2. Physical observables are represented by Hermitian operators on the Hilbert space.3 Footnote 3: The canonical communication relation \(\left[\hat{X}_{i},\hat{P}_{j}\right]=i\hbar\,\delta_{ij}\,\hat{I}\) is sometimes included within this axiom in the context of nonrelativistic QM, but we will not need it in this article.
3. The rules for measurement: the possible outcomes of a measurement of a physical observable are the eigenvalues of the corresponding Hermitian operator \(\hat{A}\). If a system is in state \(|\psi\rangle\) immediately before the measurement is performed, then the probability of observing each eigenvalue \(\lambda\) is proportional to \(|\langle\lambda|\psi\rangle|^{2}\), where \(|\lambda\rangle\) is an eigenvector of \(\hat{A}\) with eigenvalue \(\lambda\). Immediately after the measurement is performed, the system's state is the eigenvector \(|\lambda\rangle\) corresponding to the observed eigenvalue.4 Footnote 4: If \(\hat{A}\) is degenerate, then we instead use \(P(\lambda)=\langle\psi|\hat{P}_{\lambda}|\psi\rangle\), where \(\hat{P}_{\lambda}\) is the orthogonal projection operator onto the eigenspace of \(\hat{A}\) corresponding to the eigenvalue \(\lambda\). We ignore any interpretational questions about whether the measurement physically changes the system’s ontological state or only updates the experimenter’s epistemic description of the system.
4. Time evolution from time \(t_{i}\) to time \(t_{f}\) is given by a unitary operator \(\hat{U}(t_{f},t_{i})\): a state \(|\psi_{i}\rangle\) at time \(t_{i}\) gets mapped to state \(|\psi_{f}\rangle=\hat{U}(t_{f},t_{i})|\psi_{i}\rangle\) at time \(t_{f}\).5 Footnote 5: A linear operator \(\hat{U}\) is unitary iff \(\hat{U}^{\dagger}\hat{U}=\hat{I}=\hat{U}\hat{U}^{\dagger}\). For a finite-dimensional Hilbert space, either of those equalities automatically implies the other. The initial time \(t_{i}\) and final time \(t_{f}\) are often left implicit, and the term “the time-evolution operator” is used to refer to the entire family of unitary operators \(\hat{U}(t_{f},t_{i})\). If the time-evolution operator is time-translationally invariant, then the family of operators \(\hat{U}(\Delta t)\) parameterized by the time interval \(\Delta t:=t_{f}-t_{i}\) forms a one-parameter Lie group.
This axiom set is not quite precise enough to be operationally useful. For our purposes, we will need to consider two physically equivalent - but slightly formally different - variant formulations A and B.
### Formulation A
Variant A is the formulation that is perhaps more often taught in a first introduction to QM, since it is more convenient for concrete calculations:
1. The state of an isolated quantum system is (non-uniquely) represented by a normalized _unit_ vector \(|\psi\rangle\) in a complex Hilbert space.
2. [Same as #2.]
3. [Same as #3, except:] the probability \(P(\lambda)\) of observing each eigenvalue \(\lambda\) is \[P(\lambda)=|\langle\lambda|\psi\rangle|^{2},\] where \(|\lambda\rangle\) is a _unit_ eigenvector of \(\hat{A}\).
4. [Same as #4.]
In this formulation, the physical state of a quantum system is only specified up to an arbitrary complex _phase factor_\(e^{i\theta},\ \theta\in[0,2\pi)\); the state vectors \(|\psi\rangle\) and \(e^{i\theta}|\psi\rangle\) correspond to the same physical quantum state. Put another way: the physical state of the system is _uniquely_ represented by an _equivalence class_ of unit state vectors with respect to the equivalence relation
\[(|\psi\rangle\sim|\phi\rangle)\ \text{iff}\ \bigl{(}\exists\,\theta\in[0,2\pi) \ \text{such that}\ |\psi\rangle=e^{i\theta}|\phi\rangle\bigr{)}.\]
### Formulation B
Variant B is sometimes used in more theoretical contexts:
1. The state of an isolated quantum system is (non-uniquely) represented by a _nonzero_ vector \(|\psi\rangle\) in a complex Hilbert space.
2. [Same as #2.]
3. [Same as #3, except:] the probability \(P(\lambda)\) of observing each eigenvalue \(\lambda\) is \[P(\lambda)=\frac{\langle\psi|\lambda\rangle\langle\lambda|\psi\rangle}{\langle\psi| \psi\rangle\langle\lambda|\lambda\rangle}.\]
4. [Same as #4.]
In this formulation, the physical state of a quantum system is only specified up to an arbitrary _nonzero_ complex number \(z\); the state vectors \(|\psi\rangle\) and \(z|\psi\rangle\) correspond to the same physical quantum state. Put another way: the physical state of the system is _uniquely_ represented by an equivalence class of state vectors with respect to the equivalence relation
\[(|\psi\rangle\sim|\phi\rangle)\text{ iff }(\exists\,z\in(\mathbb{C}\setminus\{0 \})\text{ such that }|\psi\rangle=z|\phi\rangle).\]
For a given Hilbert space, the set of these equivalence classes is known as the corresponding _projective Hilbert space_.6
Footnote 6: Confusingly, the elements of a projective Hilbert space are sometimes called _points_ and sometimes _rays_ – although either the term “line” or (in the complex case) “plane” might arguably be a better analogy, since each element of a projective Hilbert space is a one-dimensional subspace of the original Hilbert space that is isomorphic to the underlying field. Also confusingly, a projective Hilbert space is not itself a Hilbert space, or even a vector space; there is no way to add together different elements of a projective Hilbert space.
The equivalence classes described above are somewhat abstract and unintuitive. But they have the advantage that each possible physical state of a quantum system corresponds to a _unique_ equivalence class, and the uniqueness of these representations is useful in many advanced theoretical applications. The projective Hilbert space formulation turns out to usually be more mathematically convenient than the equivalence classes within formulation A. So while formulation B may at first seem needlessly complicated, it is often used in mathematical physics [7].
### Equivalence of formulations A and B
Formulations A and B are completely physically equivalent: the only difference is whether things get normalized before or after the inner product is taken. In formulation A, the state vector itself (and the observable eigenvectors) are normalized before any inner products are formed, so that the (norm-squared) inner products are already correctly normalized to sum to 1 and represent direct probabilities. In formulation B, the (norm-squared) inner products are what get normalized to actual probabilities. In particular, both formulations yield the same probabilities \(P(\lambda)\), which are the only quantities in QM that are physically measurable. These \(P(\lambda)\) are guaranteed to lie in \([0,1]\) and to add up to 1 when summed over all possible observable values \(\lambda\), as must be the case by the definition of probability. Therefore, for standard QM, which formulation is more useful is largely a matter of taste (as well as a few practical or conceptual concerns mentioned above). But, as we discuss below, the two formulations admit physically distinct natural _generalizations_.
Of course, we cannot necessarily consistently mix together axioms between the two formulations. If we were to combine together axioms 1B and 3A, then we would get nonsensical "probabilities" that do not sum to 1 as required. But it turns out that we actually can consistently mix together axioms 1A and 3B, because if axiom 1A holds, then axiom 3B becomes equivalent to axiom 3A. The next section discusses in more detail which of the axioms above do or do not logically imply which (under which additional assumptions).
## III Logical implications between axioms and possible generalizations
**Theorem**.: If we assume axioms 1A-3A, but we weaken axiom 4A to only postulate that
4A'. Time evolution is given by a _linear_ (but not a priori unitary) operator \(\hat{U}(t_{f},t_{i})\),
then axioms 1A-3A and 4A' still imply that \(\hat{U}\) must be unitary.
This theorem is the rigorous version of the heuristic claim that "conservation of probability requires that time evolution be unitary."
Proof.: Postulate 1A says that only unit vectors \(|\psi\rangle\) are legitimate state vectors in that formulation, which is necessary in order for rule 3A to always yield valid probabilities that sum to 1. So any time-evolution operator \(\hat{U}\) must necessarily preserve the norm of all unit vectors: if \(\|\psi\|=1\), then \(\left\|\hat{U}|\psi\rangle\right\|=\sqrt{\langle\psi|\hat{U}^{\dagger}\hat{U}|\psi\rangle}\) must equal 1 as well. But then it follows by linearity (axiom 4A') that \(\hat{U}\) must preserve the norm of _all_ vectors in the Hilbert space: if \(|\phi\rangle\) is any nonzero vector, then
\[\left(\frac{\langle\phi|}{\|\phi\|}\right)\hat{U}^{\dagger}\hat{U}\left(\frac{|\phi\rangle}{\|\phi\|}\right)=1,\quad\text{so}\quad\langle\phi|\hat{U}^{\dagger}\hat{U}|\phi\rangle=\|\phi\|^{2},\quad\text{and hence}\quad\left\|\hat{U}|\phi\rangle\right\|=\|\phi\|.\]
Next, we use the result that \(\hat{U}\) preserves norms to show that it in fact preserves _all_ inner products. The proof is almost identical to the derivation of the _polarization identity_ for a complex inner product space, which uses the norm of an arbitrary vector in the space to derive
the form of the space's inner product [8]. Consider the generic vector sum \(|\alpha\rangle+|\beta\rangle\):
\[(\langle\alpha|+\langle\beta|\rangle\hat{U}^{\dagger}\hat{U}(|\alpha\rangle+| \beta\rangle)=(\langle\alpha|+\langle\beta|)(|\alpha\rangle+|\beta\rangle)\]
Expanding out the sums, using \(\langle\alpha|\hat{U}^{\dagger}\hat{U}|\alpha\rangle=\langle\alpha|\alpha\rangle\) and \(\langle\beta|\hat{U}^{\dagger}\hat{U}|\beta\rangle=\langle\beta|\beta\rangle\), and simplifying yields
\[\mathrm{Re}\Big{[}\langle\alpha|\hat{U}^{\dagger}\hat{U}|\beta\rangle\Big{]}= \mathrm{Re}[\langle\alpha|\beta\rangle].\]
Similar manipulations starting from the complex linear combination \(|\alpha\rangle+i|\beta\rangle\) give that \(\mathrm{Im}\Big{[}\langle\alpha|\hat{U}^{\dagger}\hat{U}|\beta\rangle\Big{]}= \mathrm{Im}[\langle\alpha|\beta\rangle]\), so
\[\langle\alpha|\hat{U}^{\dagger}\hat{U}|\beta\rangle=\langle\alpha|\beta\rangle.\]
Since this identity holds for all vectors \(|\alpha\rangle\) and \(|\beta\rangle\) in the Hilbert space, we have that \(\hat{U}^{\dagger}\hat{U}=\hat{I}\). For a finite-dimensional Hilbert space, this implies that \(\hat{U}\) is unitary.
So we can weaken axiom 4A to axiom 4A' without changing the resulting theory: formulations A and A' are equivalent.
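Before turning to formulation B, we note that the theorem is easy to illustrate numerically. The following Julia snippet (an illustration only, not part of the proof) contrasts a randomly generated unitary with a generic linear map on \(\mathbb{C}^{4}\).

```julia
using LinearAlgebra, Random

# A random unitary preserves the norm of every vector and satisfies U'U = I,
# while a generic invertible linear map does neither.
Random.seed!(0)
n = 4
U = Matrix(qr(randn(ComplexF64, n, n)).Q)    # unitary via Householder QR
A = randn(ComplexF64, n, n)                  # generic linear map on C^n

psi = normalize(randn(ComplexF64, n))
@show norm(U * psi)                          # = 1 up to roundoff
@show norm(A * psi)                          # generically ≠ 1
@show opnorm(U' * U - I)                     # ≈ 0
@show opnorm(A' * A - I)                     # O(1)
```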
But the analogous proposition does _not_ hold for formulation B. Suppose we considered weakening axiom 4B to an axiom 4B' that is the same as 4A'; that is, we weaken the postulate that the time-evolution operator \(\hat{U}\) is unitary to merely require \(\hat{U}\) to be linear. Axiom 1B requires that state vectors be nonzero, so the time-evolution operator \(\hat{U}\) cannot map a nonzero vector to 0, so \(\hat{U}\) must be invertible.7 But other than that, its form is not logically constrained by the generalized axioms 1B-4B'.
Footnote 7: In this generalized theory, the set of possible time-evolution operators is expanded from the \(N^{2}\)-dimensional unitary Lie group \(\mathrm{U}(N)\) to the larger \(2N^{2}\)-dimensional Lie group \(\mathrm{GL}(N,\mathbb{C})\), where \(N\) is the dimension of the Hilbert space.
The generalized axiom set B' allows a state vector's norm to change over time. Does this possibility have physically observable consequences? That is, can axiom set B' produce measurement probability distributions \(\{P(\lambda)\}\) that are not possible within the standard axiom set B? The answer is an emphatic yes.
Ref. [4] briefly discusses the physical theory (which it refers to as _global manual normalization_) described by this generalized axiom set B'. The author points out that this generalized theory has physical consequences that are very different from standard quantum mechanics. In particular, it allows entanglement to be used for faster-than-light communication!
To see how, consider two physically separated qubits initially in the entangled Bell state \(|\psi_{i}\rangle=|00\rangle+|11\rangle\).8 The reduced density matrix for Bob's qubit (listed second) is the maximally mixed state \(\hat{\rho}_{B}=\frac{1}{2}\hat{I}\), and Bob initially has an equal probability \(1/2\) of measuring his qubit to have either value 0 or 1. If Alice wants to transmit a 0 bit to Bob, then she can apply the non-unitary qubit gate
\[\left(\begin{array}{cc}1&0\\ 0&\epsilon\end{array}\right)\]
to her qubit, where \(0<\epsilon\ll 1\). The global state becomes \(|\psi_{f}\rangle=|00\rangle+\epsilon|11\rangle\), which has a different norm from \(|\psi_{i}\rangle\). Bob's reduced density matrix is now
\[\hat{\rho}_{B}=\left(\begin{array}{cc}1&0\\ 0&\epsilon^{2}\end{array}\right),\]
and Bob's new normalized measurement probabilities are \(\{P(0)=1/(1+\epsilon^{2}),\ P(1)=\epsilon^{2}/(1+\epsilon^{2})\}\), so Bob is almost guaranteed to measure his qubit to have the transmitted value.9 Similarly, Alice could have chosen to apply a different non-unitary gate to her qubit in order to transmit the bit 1. By contrast, the no-communication theorem gives that if Alice can only apply unitary operators to her qubit, as in axiom 4B, then she cannot use entanglement to change Bob's reduced density matrix at all, nor any of his measurement probabilities.
Footnote 8: This paragraph (alone in this article) assumes some background in quantum information theory.
Footnote 9: Interestingly, Alice still cannot use entanglement to communicate completely deterministically within this protocol, because \(\epsilon=0\) would make \(U\) singular and annihilate the \(|1\rangle_{A}\) state – although she can make the probability of transmission error arbitrarily small.
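The arithmetic of this example is easy to verify numerically. The following Julia sketch (illustrative only) computes Bob's manually normalized marginal probabilities before and after Alice's local gate, with basis order \(|00\rangle,|01\rangle,|10\rangle,|11\rangle\) and Alice's qubit listed first.

```julia
using LinearAlgebra

# Alice's non-unitary local gate acting on the first qubit of a two-qubit state.
alice_gate(eps) = kron([1.0 0.0; 0.0 eps], Matrix{Float64}(I, 2, 2))
psi_i = [1.0, 0.0, 0.0, 1.0]                       # |00> + |11>, unnormalized

function bob_probs(psi)
    p0 = abs2(psi[1]) + abs2(psi[3])               # amplitudes of |00>, |10>
    p1 = abs2(psi[2]) + abs2(psi[4])               # amplitudes of |01>, |11>
    return (p0, p1) ./ (p0 + p1)                   # renormalize as in axiom 3B
end

psi_f = alice_gate(0.1) * psi_i                    # Alice acts locally
@show bob_probs(psi_i)                             # (0.5, 0.5)
@show bob_probs(psi_f)                             # ≈ (0.990, 0.010) = (1, ε²)/(1+ε²)
```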
Ref. [4] also gives a (much more complicated) proof that a hypothetical quantum computer that operated under the generalized axioms B' would be able to solve all problems in the complexity class PP in polynomial time. The complexity class PP is believed to contain a much larger and more difficult set of problems than does the complexity class BQP of problems solvable by a standard quantum computer.10
Footnote 10: PP is also believed to contain a larger and more difficult set of problems than the more famous complexity class NP.
What does global manual normalization look like when framed within formulation A? Since all state vectors are normalized within formulation A, we first map \(|\psi_{i}\rangle\rightarrow\hat{U}|\psi_{i}\rangle\) and then rescale the output back to a unit vector. This composed time-evolution map takes the form
\[|\psi_{i}\rangle\rightarrow|\psi_{f}\rangle=\frac{1}{\sqrt{\langle\psi_{i}| \hat{U}^{\dagger}\hat{U}|\psi_{i}\rangle}}\hat{U}|\psi_{i}\rangle,\]
which is nonlinear because the scalar prefactor depends on the input state \(|\psi_{i}\rangle\). So unless \(\hat{U}^{\dagger}\hat{U}\) is proportional to the identity, a time-evolution map that appears linear within formulation B will appear nonlinear within formulation A! This perspective makes it clear why global manual normalization leads to very different phenomenology than standard QM does. Interestingly, global manual
normalization _does_ reproduce standard QM if the time-evolution operator is _proportional_ to a unitary operator, even if the (nonzero) proportionality constant does not equal 1. Such an operator can be thought of as uniformly dilating the entire Hilbert space. This suggests that the fundamental characteristic of time evolution in QM may be best thought of not as preserving norms per se, but instead as preserving _relative_ norms or angles between states.
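The nonlinearity of this composed map can be checked directly (again, a numerical illustration only):

```julia
using LinearAlgebra

# ψ ↦ Uψ/‖Uψ‖ is nonlinear when U†U is not proportional to the identity:
# the image of a sum is not even proportional to the sum of the images,
# so the two sides represent different physical states (rays).
f(U, psi) = U * psi / norm(U * psi)

U = [1.0 0.0; 0.0 0.1]                       # invertible, far from unitary
a, b = [1.0, 0.0], [0.0, 1.0]
@show f(U, a + b)                            # ≈ [0.995, 0.0995]
@show normalize(f(U, a) + f(U, b))           # ≈ [0.707, 0.707]
```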
Of course, we could also consider other "intermediately strong" generalization of axiom 4B, in which we allow _some_ non-unitary time-evolution operators, but not the full set of invertible operators. Such theories could perhaps be made compatible with existing experimental results, but they would probably need to be quite convoluted and unnatural.
## IV Conclusion
This article attempted to address the question of whether the basic mathematical rules of probability alone require that time evolution in quantum mechanics be unitary. The answer turns out to be rather subtle.
We developed two physically equivalent versions of the basic axioms of QM, which each postulate that time evolution is unitary. But while these two formulations are physically equivalent, they naturally _generalize_ in different ways to two theories that turn out to be physically distinct. In particular, in one formulation (axiom set A), the postulate of unitary time evolution is indeed unnecessary; the unitarity of time evolution follows logically from the assumption of _linearity_ only (and the other axioms). But in the other formulation (axiom set B), the postulate of unitary time evolution is essential. If we weaken that axiom to only postulate linear time evolution, then we end up with a new physical theory that is completely logically self-consistent, but which makes very different experimental predictions than standard QM does.
Of course, all the experimental evidence collected so far overwhelmingly supports the hypothesis that time evolution is indeed unitary and is given by the Schrodinger equation. But, contrary to what is sometimes loosely implied, this hypothesis of unitarity is experimentally falsifiable, and is not merely a tautological claim that follows from the basic rules of probability and from _every_ formulation of the other axioms of QM.
|
2309.03922 | Filtered rays over iterated absolute differences on layers of integers | The dynamical system generated by the iterated calculation of the high order
gaps between neighboring terms of a sequence of natural numbers is remarkable
and only incidentally characterized at the boundary by the notable
Proth-Gilbreath Conjecture for prime numbers.
We introduce a natural extension of the original triangular arrangement,
obtaining a growing hexagonal covering of the plane. This is just the base
level of what further becomes an endless discrete helicoidal surface.
Although the repeated calculation of higher-order gaps causes the numbers that
generate the helicoidal surface to decrease, there is no guarantee, and most
often it does not even happen, that the levels of the helicoid have any
regularity, at least at the bottom levels.
However, we prove that there exists a large and nontrivial class of sequences
with the property that their helicoids have all levels coinciding with their
base levels. This class includes in particular many ultimately binary sequences
with a special header. For almost all of these sequences, we additionally
show that although the patterns generated by them seem to fall somewhere
between ordered and disordered, exhibiting fractal-like and random qualities at
the same time, the distribution of zero and non-zero numbers at the base level
has uniformity characteristics. Thus, we prove that a multitude of straight
lines that traverse the patterns encounter zero and non-zero numbers in almost
equal proportions. | Raghavendra Bhat, Cristian Cobeli, Alexandru Zaharescu | 2023-09-06T19:22:30Z | http://arxiv.org/abs/2309.03922v1 | # Filtered rays over iterated absolute differences on layers of integers
###### Abstract.
The dynamical system generated by the iterated calculation of the high order gaps between neighboring terms of a sequence of natural numbers is remarkable and only incidentally characterized at the boundary by the notable Proth-Gilbreath Conjecture for prime numbers.
We introduce a natural extension of the original triangular arrangement, obtaining a growing hexagonal covering of the plane. This is just the base level of what further becomes an endless discrete helicoidal surface. Although the repeated calculation of higher-order gaps causes the numbers that generate the helicoidal surface to decrease, there is no guarantee, and most often it does not even happen, that the levels of the helicoid have any regularity, at least at the bottom levels.
However, we prove that there exists a large and nontrivial class of sequences with the property that their helicoids have all levels coinciding with their base levels. This class includes in particular many ultimately binary sequences with a special header. For almost all of these sequences, we additionally show that although the patterns generated by them seem to fall somewhere between ordered and disordered, exhibiting fractal-like and random qualities at the same time, the distribution of zero and non-zero numbers at the base level has uniformity characteristics. Thus, we prove that a multitude of straight lines that traverse the patterns encounter zero and non-zero numbers in almost equal proportions.
Key words and phrases:Proth-Gilbreath Conjecture, formal power series, SP-numbers, gaps between primes 2020 Mathematics Subject Classification: Primary 11B37; Secondary 11B39, 11B50
## 1. Introduction
Let \(\mathfrak{u}=\{a_{k}\}_{k\geq 0}\) be a sequence of non-negative integers. We place the sequence \(\mathfrak{u}\) on the top row of a triangle whose subsequent rows are recursively obtained as sequences of numbers given by the absolute values of the differences between neighboring terms on the previous line. The infinite equilateral triangle obtained in this way is defined by
\[\begin{array}{ccccccccc}a_{0}&a_{1}&a_{2}&a_{3}&a_{4}&a_{5}&a_{6}&\ldots\\ d_{0}^{(1)}&d_{1}^{(1)}&d_{2}^{(1)}&d_{3}^{(1)}&d_{4}^{(1)}&d_{5}^{(1)}&\ldots \\ &d_{0}^{(2)}&d_{1}^{(2)}&d_{2}^{(2)}&d_{3}^{(2)}&d_{4}^{(2)}&\ldots&\\ &&d_{0}^{(3)}&d_{1}^{(3)}&d_{2}^{(3)}&d_{3}^{(3)}&\ldots&\\ &&\ldots&\ldots&\ldots&\ldots&\\ \end{array}\] (P-G)
where
\[d_{k}^{(j+1)}:=\left|d_{k+1}^{(j)}-d_{k}^{(j)}\right|\quad\text{ and }\quad d_{k}^{(0)}:=a_{k}\quad\text{ for }j,k\geq 0. \tag{1}\]
Let \(\mathfrak{w}=\{b_{j}\}_{j\geq 0}\) be the sequence of numbers on the left edge of this triangle, that is, \(b_{0}=a_{0}\) and \(b_{j}=d_{0}^{(j)}\) for \(j\geq 1\). We also denote by \(\mathfrak{w}_{k}=\left\{d_{k}^{(j)}\right\}_{j\geq 0}\), for \(k\geq 0\), the column or the _ray_
that passes through the triangle parallel to the edge on the left, with the first component being \(a_{k}\), so that \(\mathfrak{w}=\mathfrak{w}_{0}\).
Taking successively the absolute values of higher-order differences, the resulting numbers on the lower rows become smaller and smaller. It is then natural to observe this comprehensive phenomenon on the left edge. However, the components of \(\mathfrak{w}\) need not all become equal to \(0\), but we should expect that from a certain point onwards they will take at most two values, one being zero and the other an integer different from zero, provided \(\mathfrak{u}\) does not grow too fast. Consider, for example, the simple revealing case where the components of \(\mathfrak{u}\) only take the values \(0\) or \(a\), where \(a\) is a positive integer; then the numbers on the left edge are also only \(0\)'s or \(a\)'s.
The famous case of this type of construction, where the top row is the sequence of prime numbers, is the subject of the Proth-Gilbreath conjecture (see Proth [20], Gilbreath [13], Killgrove and Ralston [16], Odlyzko [18] and the problem sets of Guy [14, Example 12], [15, Problem A10] and Montgomery [17, Appendix Problem 68]). In accordance with common sense and extensive numerical observations (see [16, 18]), the conjecture states that all the entries of \(\mathfrak{w}\) except \(b_{0}\) are equal to \(1\).
**Conjecture 1** (Proth-Gilbreath).: _All the differences on the western edge of the (P-G) triangle generated by the sequence of all primes are equal to \(1\)._
However, to this day it is not even known whether the sequence that Conjecture 1 asserts to be constantly equal to \(1\) contains infinitely many \(1\)'s.
In fact, the reality generated by the primes is even more intriguing beyond the left edge \(\mathfrak{w}_{0}\). Inside the (P-G) triangle, the phenomenon is exactly at the opposite end. First, let us observe that on \(\mathfrak{w}_{1}\), the parallel line immediately adjacent to the left edge, there are only \(0\)'s or \(2\)'s, according to Conjecture 1. Furthermore, numerical evidence shows that these values are present on this line in approximately equal proportions. And likewise, when moving inside towards the right, on the following parallel rays \(\mathfrak{w}_{j}\), \(j\geq 2\), it is very likely that the same fact holds true (see Table 1).
**Conjecture 2**.: _Let \(\mathfrak{w}_{j}\), \(j\geq 1\), be any line parallel to the left edge of the (P-G) triangle generated by the sequence of all primes. Denote by \(\nu_{d}(n)\) the number of \(d\)'s among the first \(n\) elements of \(\mathfrak{w}_{j}\). Then, for \(d\in\{0,2\}\), there exists a constant \(c>0\) and an integer \(n_{j}\), such that_
\[\left|\nu_{d}(n)-\frac{n}{2}\right|<c\sqrt{n},\quad\text{ for }n\geq n_{j}.\]
We expect the same type of uniform distribution to occur on other rays that cross the infinite (P-G) triangle, for example, those that pick the differences at equally spaced intervals in different directions. Similarly, in convex domains that are sufficiently large and situated far enough from the starting row, the number of \(0\)'s and the number of \(2\)'s should be approximately the same with probability one.
The structure of the discrete dynamical system generated by iteratively calculating neighbor gaps recorded in the (P-G) triangle can be better understood when viewed in a broader context (see [6]) that is related to phenomena occurring in Pascal's triangle (see [3, 9] and Prunescu et al. [8, 21]) or the patterns of numbers generated in Ducci type games (see Caragiu, Zaki et al. [12, 11, 10, 5, 7]).
Let \(\mathcal{L}\) denote the set of sequences of non-negative integers, and let \(\mathcal{L}_{0}\subset\mathcal{L}\) be the set of sequences with terms equal to \(0\) or \(1\), only. Similarly, for any integer \(N\geq 0\), let \(\mathcal{L}(N)\) and \(\mathcal{L}_{0}(N)\) be the sets of finite sequences with \(N\) non-negative integer elements, and sequences consisting only of \(0\)'s or \(1\)'s, respectively. If \(\mathfrak{s}\) is a sequence and \(N\geq 0\), we denote by \(\mathfrak{s}(N)\) the partial finite sequence formed by the first \(N\) elements of \(\mathfrak{s}\).
In this paper, we examine the overlying operator \(\Upsilon\) that transforms the top sequence \(\mathfrak{u}\) into the one on the left edge \(\mathfrak{w}\) in the (P-G) triangle. Then \(\Upsilon\) is defined by
\[\Upsilon:\mathcal{L}\to\mathcal{L}\text{ and }\Upsilon(\mathfrak{u}):= \mathfrak{w}.\]
Let \(\Psi:\mathcal{L}\to\mathcal{L}\) be the operator that transforms a horizontal row of the triangle into the immediately following row. Note that the entire triangle (P-G) is composed of the sequence of successive horizontal rows \(\Psi^{(j)}(\mathfrak{u})\), for \(j\geq 0\), where \(\Psi^{(0)}(\mathfrak{u})=\mathfrak{u}\) is the top generating row. Also note that the restrictions of \(\Psi\) and \(\Upsilon\) to \(\mathcal{L}_{0}\) have in their image only sequences in \(\mathcal{L}_{0}\). The same type of property occurs with the action of \(\Psi\) and \(\Upsilon\) on sequences \(\mathfrak{s}\) of \(0\)'s and \(1\)'s, except for a finite number of terms. For these sequences, \(\Psi(\mathfrak{s})\) is also ultimately composed only of \(0\)'s and \(1\)'s, but this is not generally true for \(\Upsilon(\mathfrak{s})\).
Geometrically, each iteration of \(\Upsilon\) applied to \(\mathfrak{u}\) results in the construction of a new equilateral triangle, rotated clockwise around the first component \(a_{0}\) of \(\mathfrak{u}\) by an angle of \(60\) degrees. After six iterations of \(\Upsilon\), the initial sequence \(\mathfrak{u}\) is geometrically reached again. This completes the first _layer_ or _level_ of what can further be seen as a _helicoidal discrete surface_, since in general \(\Upsilon^{(6)}(\mathfrak{u})\neq\mathfrak{u}\) (see Figure 1 for an example of these iterations with a finite sequence of integers). Continuing the iterations produces a discrete _helicoid_ denoted \(\mathcal{H}=\mathcal{H}(\mathfrak{u})\), a “Riemann surface”-like structure of non-negative integers. Let \(\mathcal{H}_{n}=\mathcal{H}_{n}(\mathfrak{u})\), \(n\geq 1\), denote the \(n\)-th level of the helicoid, so that
\[\mathcal{H}=\bigcup_{n\geq 1}\mathcal{H}_{n}. \tag{2}\]
The first level \(\mathcal{H}_{1}\), also called the _base layer_, is the union of six equilateral triangles with a vertex at \(a_{0}\) and edges \(\Upsilon^{(k)}(\mathfrak{u})\) and \(\Upsilon^{(k+1)}(\mathfrak{u})\) for \(k=0,\ldots,5\). The subsequent levels are generated similarly, each by its initial sequence \(\Upsilon^{(6k)}(\mathfrak{u})\), \(k\geq 1\). Note that all layers are geometrically congruent, each of them covering the entire plane with numbers if \(\mathfrak{u}\) is infinite, or having the shape of a regular hexagon with \(N+1\) numbers on each side if \(\mathfrak{u}=(a_{0},\ldots,a_{N})\) (see Figure 2).

Figure 1. The triangle generated by \(\mathfrak{u}\) that contains the first \(30\) square-prime numbers. (See Section 3 for the formal definition and some appealing properties that the sequence of square-primes has.) Then, with \(\Upsilon^{(n)}(\mathfrak{u})\) as initial rows, five more triangles are generated. The six figures represent the intermediate steps in forming the first layer of the helicoid.
The result of this process leads to a simple pattern when \(\mathfrak{u}\) is the sequence of prime numbers, which, at least under the assumption of the Proth-Gilbreath Conjecture, is well understood. In this case \(\Upsilon^{(n)}(\mathfrak{u})=(2,1,1,1,\dots)\) for \(n\geq 1\), so that the corresponding generated equilateral triangles are bounded by sequences of \(1\)'s and have only \(0\)'s in their interior.
However, there is a much more interesting perspective if we mark in the initial sequence only the positions where prime numbers appear, their place being given by the rank of the terms in the sequence. To be precise, let \(\mathfrak{i}=\{\operatorname{ind}_{\mathcal{P}}(j)\}_{j\geq 0}\) be the indicator sequence of prime numbers, where

\[\operatorname{ind}_{\mathcal{P}}(j)=\begin{cases}1&\text{ if $j$ is prime,}\\ 0&\text{ else.}\end{cases}\]

Figure 2. The first seven layers of the helicoid generated by the sequence \((100000,59049,32768,16807,7776,3125,1024,243,32,1,0,1,0,0,0,0,0,1,0,0)\), where the first positive integers are the first ten \(5\)th powers in decreasing order. The four captures are taken in order from bottom, sides, and top. Distinct integers are shown in different colors. The helicoid has seven distinct layers, and starting from the \(8\)th, all layers coincide with the \(7\)th. The vertical strip indicates the places where the last outcome row on one layer transcends, becoming the first generating row on the upper layer.

Figure 3. Left: The base layer of the helicoid generated by the prime-number-indicator function for integers in the interval \([0,50)\). Right: The base layer of the helicoid generated by the sequence: \(3072,1536,768,384,192,96,48,24,12,6,3\) (integers that are \(3\) times a power of \(2\), in decreasing order), followed by a sequence of ‘random bits’ defined by \(\operatorname{ind}_{[0,1/2)}\left(\{k\sqrt{2}\}\right)\) for \(k=1,2,\dots,40\). (Note that by Theorem 4, in both cases, the helicoids have identical layers at each level.)
Among our numerical experiments, we observed with surprise that the 6th iteration of \(\Upsilon\) brings \(\mathfrak{i}\) back to its starting point. Indeed, the helicoid generated by \(\mathfrak{i}\), with its leaves composed of six equilateral triangles each, actually has only one distinct leaf, all subsequent ones being identical to the first modulo a rotation or a reflection.
We will show that this phenomenon is not unique and that there is a large class of generating sequences \(\mathfrak{u}\) with the same property \(\Upsilon^{(6)}(\mathfrak{u})=\mathfrak{u}\). In Theorem 4 below we consider these sequences and note, according to Theorem 5, that their class includes a multitude of sequences that are expected to exhibit random behavior, as the indicator function of primes does.
## 2. The main results
The problem of demonstrating the uniform distribution of zeros and ones on the sides or on certain rays that cross the (P-G) triangle generated by some generic initial sequence is not within our reach. However, beyond the specific results that can be obtained in particular cases, we can prove the following existence result with square-prime numbers.
**Theorem 1**.: _There exists an infinite subsequence of square-primes \(A_{1}<A_{2}<\ldots\) such that the (P-G) triangle generated by \(A_{1},A_{2},\ldots\) on the first row has \(1\) as every other element on the left edge._
Note that Theorem 1 implies that there are (P-G) triangles generated by sequences of square-prime numbers for which at least half of the elements on the left edge are \(1\).
What is actually specific to the (P-G) constructions is not the singular fact that the left edge has a special form, as happens when the generator is the sequence of prime numbers, where the constant shape of the left edge is given by what is almost an arithmetic accident. An impression of what happens most often can be seen in Figure 4, where the triangles generated by prime and square-prime numbers are placed side by side for comparison.

Figure 4. Two cut-offs of the (P-G) triangles with integers taken modulo 4 (left) and modulo 2 (right). In the image on the left, the triangle is generated by the first 78 primes less than 400, and in the image on the right, the top row contains the 75 square-prime numbers less than 400.
Then what is truly remarkable is that the rays \(\mathfrak{w}_{j}\) parallel to the left edge tend to have a random sequence appearance. In particular, the sequences \(\mathfrak{w}_{j}\) seem to become binary, and the statistics that count the number of the two types of elements indicate this. Thus, Table 1 shows the closeness between the number of \(0\)'s and the number of \(2\)'s on rays \(\mathfrak{w}_{1},\dots,\mathfrak{w}_{9}\) for the partial (P-G) triangle generated by prime numbers less than one million and the high order differences taken modulo \(4\).
To compare, the same property is observed in Table 2 when the first row begins with the square-prime numbers less than one million and the differences are taken modulo \(2\). In this case, on each of the columns \(\mathfrak{w}_{0},\dots,\mathfrak{w}_{9}\), the distribution of the number of \(0\)'s and the number of \(1\)'s is also almost evenly split in half. And this is not happening only on the western edge; the same phenomenon occurs inside the (P-G) triangle as well. To quantify the distribution, let us measure the proportions on rays.
The next theorem demonstrates the existence of an underlying link between the horizontal and the vertical/diagonal rows of the (P-G) triangle.
**Theorem 2**.: _Let \(\mathfrak{u}=(a_{0},a_{1},\dots)\in\mathcal{L}_{0}\) be the first row of the (P-G) triangle and let \(\mathfrak{w}=(b_{0},b_{1},\dots)\) be the sequence on its left-edge. Let \(f,g\in\mathbb{F}_{2}[[X]]\) be the formal power series with coefficients in \(\mathfrak{u}\) and \(\mathfrak{w}\), respectively. Let \(T:\mathbb{F}_{2}[[X]]\to\mathbb{F}_{2}[[X]]\) be the operator defined by \(T(f)=g\). Then:_
_1. The operator \(T\) satisfies the following formula_
\[\big{(}T(f)\big{)}(X)=f\Big{(}\frac{X}{1+X}\Big{)}\cdot\frac{1}{1+X}\,. \tag{3}\]
_2. The operator \(T\) satisfies \(T^{(2)}(f)=f\) for any \(f\in\mathbb{F}_{2}[[X]]\); in particular, \(T\) is an involution, hence invertible and bijective._
The phenomenon of involution between \(\mathfrak{u}\) and \(\mathfrak{w}\) shown in Theorem 2 occurs just as neatly for all (P-G) triangles of bounded size. Consequently, an analogous version of Theorem 2 holds true for such triangles of finite size. Here, polynomials play the role of the formal power series, because they can be thought of as formal power series with only a finite number of nonzero coefficients.
| \(r\) | \(N\) | \(z\) | \(t\) | \(z-t\) | \(h\) | \((z-t)/N\) |
|---:|---:|---:|---:|---:|---:|---:|
| 0 | 78497 | 0 | 0 | 0 | 78497 | 0.00000 |
| 1 | 78496 | 39061 | 39435 | -374 | 0 | -0.00476 |
| 2 | 78495 | 39272 | 39223 | 49 | 0 | 0.00062 |
| 3 | 78494 | 39218 | 39275 | -57 | 1 | -0.00073 |
| 4 | 78493 | 39405 | 39088 | 317 | 0 | 0.00404 |
| 5 | 78492 | 39311 | 39180 | 131 | 1 | 0.00167 |
| 6 | 78491 | 39030 | 39461 | -431 | 0 | -0.00549 |
| 7 | 78490 | 39307 | 39182 | 125 | 1 | 0.00159 |
| 8 | 78489 | 39276 | 39211 | 65 | 2 | 0.00083 |
| 9 | 78488 | 39231 | 39256 | -25 | 1 | -0.00032 |

Table 1. The frequencies of the absolute values of the differences on the rays \(\mathfrak{w}_{0},\dots,\mathfrak{w}_{9}\) that cross a cut-off of the (P-G) triangle passing parallel to its left edge. The generating row contains the prime numbers less than one million. Counting is done based on the values of the higher-order differences taken modulo \(4\). The notations are as follows: \(r\) is the rank of the ray, starting with the left edge \(\mathfrak{w}\), which has rank \(r=0\); \(N\) is the number of differences on the ray (note that there are no differences on the first row of (P-G)); \(z\) is the number of \(0\)’s, \(t\) is the number of \(2\)’s, and \(h\) is the number of values that are not equal to \(0\) or \(2\).
Let \(R[X]^{[n]}\) denote the set of polynomials with coefficients in \(R\) and degree at most \(n\).
**Theorem 3**.: _Let \(N\geq 1\) be an integer, let \(\mathfrak{u}=(a_{0},a_{1},\ldots,a_{N-1})\in\mathcal{L}_{0}(N)\) be the top row, and let \(\mathfrak{w}=(b_{0},b_{1},\ldots,b_{N-1})\) be the sequence on the left-edge of the (P-G) triangle of side \(N\). Suppose \(a_{0}=b_{0}\) and let \(f,g\in\mathbb{F}_{2}[X]^{[N-2]}\) be the polynomials whose coefficients are the components of \(\mathfrak{u}\) and \(\mathfrak{w}\), respectively. Let \(T_{N}:\mathbb{F}_{2}[X]^{[N-2]}\to\mathbb{F}_{2}[X]^{[N-2]}\) be the operator defined by \(T_{N}(f)=g\). Then:_
_1. The operator \(T_{N}\) satisfies the following formula_
\[\big{(}T_{N}(f)\big{)}(X)\equiv f\Big{(}\frac{X}{1+X}\Big{)}\cdot\frac{1}{1+X} \left(\mathrm{mod}\;X^{N}\right)\,. \tag{4}\]
_2. The operator \(T_{N}\) satisfies \(T_{N}^{(2)}(f)=f\) for any \(f\in\mathbb{F}_{2}[X]^{[N-2]}\); in particular, \(T_{N}\) is an involution, hence invertible and bijective._
As a follow-up of Theorems 2 and 3, we find that \(\Upsilon^{(6)}(\mathfrak{u})=\mathfrak{u}\) for all binary finite or infinite sequences \(\mathfrak{u}\), which in particular proves the inceptive observation discussed at the end of the Introduction for the indicator function of primes. The above theorems also imply that the helicoids generated by binary sequences have only one distinct layer, which is a three-petal hexagon. The next result also shows the necessity of an additional condition that must be fulfilled by the more general sequences in \(\mathcal{L}\) in order for them to generate helicoids with just a single distinct layer. The general statement includes both the case of infinite sequences and the case of finite sequences, as we consider the ring of polynomials embedded in the ring of formal power series, where polynomials have only a finite number of non-zero coefficients.
First, we define the concept of a champion in a sequence. We say that the term of rank \(n\geq 0\) in the sequence \(\mathfrak{s}=\{a_{k}\}_{k\geq 0}\) of non-negative integers is a _champion of \(\mathfrak{s}\)_, or shortly a _champion_, if \(a_{n}>0\) and \(a_{j}<a_{n}\) for \(0\leq j<n\). Note that an unbounded sequence has infinitely many champions, and in a strictly increasing sequence all its terms are champions. However, our point is at the other end, at sequences with at most one champion.
| \(r\) | \(N\) | \(z\) | \(o\) | \(z-o\) | \(h\) | \((z-o)/N\) |
|---:|---:|---:|---:|---:|---:|---:|
| 0 | 69178 | 34616 | 34559 | 57 | 3 | 0.00082 |
| 1 | 69177 | 34684 | 34485 | 199 | 8 | 0.00288 |
| 2 | 69176 | 34614 | 34556 | 58 | 6 | 0.00084 |
| 3 | 69175 | 34439 | 34727 | -288 | 9 | -0.00416 |
| 4 | 69174 | 34485 | 34681 | -196 | 8 | -0.00283 |
| 5 | 69173 | 34808 | 34357 | 451 | 8 | 0.00652 |
| 6 | 69172 | 34707 | 34458 | 249 | 7 | 0.00360 |
| 7 | 69171 | 34471 | 34694 | -223 | 6 | -0.00322 |
| 8 | 69170 | 34644 | 34522 | 122 | 4 | 0.00176 |
| 9 | 69169 | 34689 | 34472 | 217 | 8 | 0.00314 |

Table 2. The frequencies of the absolute values of the differences on the rays \(\mathfrak{w}_{0},\ldots,\mathfrak{w}_{9}\) that cross a cut-off of the (P-G) triangle passing parallel to its left edge. The generating row contains the 69179 square-primes less than one million. Counting is done based on the values of the higher-order differences taken modulo 2. The notations are as follows: \(r\) is the rank of the ray, starting with the left edge \(\mathfrak{w}\), which has rank \(r=0\); \(N\) is the number of differences on the ray (note that there are no differences on the first row of (P-G)); \(z\) is the number of \(0\)’s, \(o\) is the number of \(1\)’s, and \(h\) is the number of values that are not equal to \(0\) or \(1\).
**Theorem 4**.: _Let \(\mathfrak{u}\in\mathcal{L}\) and let \(\Upsilon:\mathcal{L}\to\mathcal{L}\) be the operator defined by \(\Upsilon(\mathfrak{u})=\mathfrak{w}\), where \(\mathfrak{w}\) is the sequence of numbers obtained as the left edge of the (P-G) triangle generated by \(\mathfrak{u}\). Then:_
1. _Any discrete helicoid, as defined by relation_ (_2_)_, which is generated by a finite or infinite binary sequence_ \(\mathfrak{u}\in\mathcal{L}_{0}(N)\) _or_ \(\mathfrak{u}\in\mathcal{L}_{0}\)_, has all levels equal, that is,_ \(\Upsilon^{(6)}(\mathfrak{u})=\mathfrak{u}\)_._
2. _The base level of a helicoid generated by a binary finite or infinite sequence is composed of three equal diamonds rotated around the origin, and each of these diamonds is the union of two equilateral triangles that mirror each other with respect to the diagonal joining the obtuse-angled corners of the diamond._
3. _Let_ \(\mathfrak{u}\in\mathcal{L}\) _and suppose that_ \(\Upsilon^{(6)}(\mathfrak{u})=\mathfrak{u}\)_. Then the sequence_ \(\mathfrak{u}\) _has at most one champion._
If \(\mathfrak{u}\) is a finite sequence, since the numbers in the helicoid \(\mathcal{H}(\mathfrak{u})\) have a general tendency to decrease, all of them being in any case at most equal to the largest component of \(\mathfrak{u}\), it follows that the sequence \(\{\mathcal{H}_{n}(\mathfrak{u})\}_{n\geq 1}\) of the hexagonal layers of numbers that compose \(\mathcal{H}(\mathfrak{u})\) is eventually periodic. The sequence of these layers is then composed of a precycle followed by a cycle, both of finite length. Note that the length of a precycle can be zero, but the length of a cycle must always be positive. According to the first part of Theorem 4, the length of the precycle is zero and the length of the cycle is one for all binary generators \(\mathfrak{u}\in\mathcal{L}_{0}(N)\), where \(N\geq 1\), as well as for infinite binary sequences in \(\mathcal{L}_{0}\). Further investigation is needed to classify the generating sequences based on the length of the precycle or the cycle of the sequences of layers that their helicoids have. The following is an example of such a comprehensive problem.
**Problem 1**.: Let \(P\geq 0\) and \(C\geq 1\) be integers. Find \(\mathfrak{u}\in\mathcal{L}(N)\), for some \(N\geq 1\), such that the sequence \(\{\mathcal{H}_{n}(\mathfrak{u})\}_{n\geq 1}\) of layers of the helicoid \(\mathcal{H}(\mathfrak{u})\) has exactly \(P+C\) distinct layers, grouped in a precycle of length \(P\), and followed by the endless repetition of a cycle of length \(C\).
For finite sequences with a single champion, we numerically tested various decreasing sequences of integers followed by a sequence of \(0\)'s or \(1\)'s. We found that many such sequences produce helicoids whose sequence of layers is composed of a precycle (possibly empty) and a cycle of length one. Yet this is not the general rule: for instance, the first \(77\) positive seventh powers arranged in decreasing order, followed by the ten bits \(0,1,0,0,0,0,0,1,0,0\), generate a helicoid with \(17\) distinct layers, of which \(9\) are in a precycle, and \(8\) are in the subsequent infinitely repeated cycle.
Figure 3 shows the base layers of two helicoids whose upper layers all coincide with their initial ones. In both cases the generating sequences have exactly one champion. In Section 5 we present several other examples of sequences with just one champion whose helicoids have only one distinct level. Additionally, we show that the property of having a single champion is not sufficient to characterize the one-distinct-level helicoids. Indeed, there are sequences with just one champion whose helicoids have more distinct levels, such as those in Figure 8 (left) and Figure 6 (right), which have four and nine distinct levels, respectively.
For any ray \(\mathfrak{w}_{k}\), \(k\geq 0\), that is parallel to the western edge \(\mathfrak{w}_{0}\), let \(R_{\mathfrak{w}}(k)\) denote the proportion of zeros among the \(N-k\) components of the ray inside the cut-off triangle of size \(N\), that is,

\[R_{\mathfrak{w}}(k):=\frac{1}{N-k}\,\#\Big{\{}j:d_{k}^{(j)}=0,\ 0\leq j<N-k\Big{\}},\quad\text{for}\ 0\leq k<N. \tag{5}\]
Symmetric with respect to the vertical axis, let us consider the eastern edges of the cut-off (P-G) triangle. For a fixed integer \(N\geq 1\) and a top row of \(N\) elements, denote by \(\mathfrak{e}_{0},\mathfrak{e}_{1},\dots\) these edges, seen this time geometrically, in order from right to left. Precisely, \(\mathfrak{e}_{k}=\mathfrak{e}_{k}(N)\), \(0\leq k<N\), is defined by

\[\mathfrak{e}_{k}:=\big{\{}d_{N-1-k-j}^{(j)}:0\leq j\leq N-1-k\big{\}}.\]
Then, just like for the western edges, let us denote by \(R_{\mathfrak{e}}(k)\) the proportion of zeros on \(\mathfrak{e}_{k}\):

\[R_{\mathfrak{e}}(k):=\frac{1}{N-k}\,\#\Big{\{}j:d_{N-1-k-j}^{(j)}=0,\ 0\leq j\leq N-1-k\Big{\}},\quad\text{for }0\leq k<N. \tag{6}\]
The following theorem shows that almost all (P-G) triangles generated by sequences of \(0\)'s and \(1\)'s have nearly equal proportions of \(0\)'s and \(1\)'s on the rays \(\mathfrak{w}_{0},\mathfrak{w}_{1},\dots\) and \(\mathfrak{e}_{0},\mathfrak{e}_{1},\dots\) The more precise result is as follows.
**Theorem 5**.: _For any \(\varepsilon\in(0,1/2)\), there exists \(\delta=\delta_{\varepsilon}>0\) and an integer \(N_{\varepsilon,\delta}\) such that, for any integer \(N\geq N_{\varepsilon,\delta}\), there exists an exceptional subset \(\mathcal{E}(N)\subset\mathcal{L}_{0}(N)\) (possibly empty) having at most \(\varepsilon\cdot\#\mathcal{L}_{0}(N)\) elements, such that, for any sequence \(\mathfrak{u}(N)\in\mathcal{L}_{0}(N)\setminus\mathcal{E}(N)\), all the ratios, defined by (5) and (6), of the number of zeros on the rays \(\mathfrak{w}_{0},\mathfrak{w}_{1},\dots,\mathfrak{w}_{\lfloor\delta N\rfloor}\) and \(\mathfrak{e}_{0},\mathfrak{e}_{1},\dots,\mathfrak{e}_{\lfloor\delta N\rfloor}\) in the corresponding triangle (P-G) satisfy_

\[R_{\mathfrak{w}}(0),R_{\mathfrak{w}}(1),\dots,R_{\mathfrak{w}}\big{(}\lfloor\delta N\rfloor\big{)};\ R_{\mathfrak{e}}(0),R_{\mathfrak{e}}(1),\dots,R_{\mathfrak{e}}\big{(}\lfloor\delta N\rfloor\big{)}\in[1/2-\varepsilon,1/2+\varepsilon]. \tag{7}\]
## 3. Differences with square-primes
### Preliminary notes on SP-numbers
Merging together squares larger than one and primes into the recently introduced sequence of _square-primes_[4, 1, 2, 3] proves to be a bright combination. Formally, the sequence is defined as the ordered union:
\[S\mathcal{P}:= \bigsqcup_{k=2}^{\infty}\{k^{2}p\mid p\text{ prime}\}\] \[= \big{\{}8,12,18,20,27,28,32,44,45,48,50,52,63,68,72,75,76,80,92,98,99,\dots\big{\}}.\]
Note that there are neither primes nor squares in the set \(S\mathcal{P}\). Due to the uniform growth of the gaps between squares, this new sequence, also called _SP-numbers_, has a type of distribution that echoes, from a distance, that of the prime numbers (for the higher-order differences of primes and square-primes, Figure 4 places two triangles generated by them side by side for comparison). However, although the arithmetic nature changes from primes to composite numbers, transferring to SP-numbers some remarkable properties that the prime number sequence has is not as difficult, if we employ what we already know about prime numbers.
Let \(s_{n}\), \(n\geq 1\), denote the \(n\)th square-prime. A few such numbers are \(s_{1}=8\), \(s_{21}=99\), \(s_{76}=404\) and \(s_{1000}=7900\). An asymptotic estimate (see [1, Theorem 4.1]) shows that \(s_{n}\) and \(p_{n}\) have a similar order of magnitude:
An asymptotic estimate (see [1, Theorem 4.1]) shows that \(s_{n}\) and \(p_{n}\) have a similar order of magnitude; the counting function of the square-primes satisfies

\[\#\big{\{}s\in S\mathcal{P}:s\leq x\big{\}}=\big{(}\zeta(2)-1\big{)}\cdot\frac{x}{\log x}+O\Big{(}\frac{x}{\log^{2}x}\Big{)},\]

so that \(s_{n}>p_{n}\) for large \(n\), because \(\zeta(2)-1\approx 0.64493<1\) makes the square-primes slightly sparser than the primes. An analogue of Dirichlet's Theorem on primes in arithmetic progressions also holds for square-primes [1, Theorem 6.1], with a constant that depends on the progression.
In the increasing sequence of square-primes, _twins_ are pairs of neighboring numbers at distance \(1\) apart (such as \(27\) and \(28\), or \(44\) and \(45\)), and we know that there are infinitely many such pairs [1, Theorem 4.3]. Closely related to this is the following lemma, which we need in the proof of Theorem 1. The proof of the lemma appears in [4]; for the sake of completeness, we include it here as well.
**Lemma 3.1** ([4, Theorem 2.1]).: _For any positive integer \(x\), there exist two square-prime numbers \(a,b\) such that \(x=a-b\)._
Proof.: We partition the set of all positive integers into the following five subsets:
1. \(S_{1}=\{1\}\).
2. \(S_{2}=\mathcal{P}\), the set of all primes.
3. \(S_{3}=\{x:x\not\in\mathcal{P},\,2\nmid x\}\), the set of all odd composite numbers.
4. \(S_{4}=\{x:x=2p_{1}\cdots p_{k},\,k\geq 1,\text{ for some distinct odd primes}\}\), the set of all even composite square-free numbers.
5. \(S_{5}=\{x:x=p^{2}d,p\in\mathcal{P},d\geq 1\}\), the set of non-square-free numbers.
Note that \(\mathbb{N}\setminus\{0\}=S_{1}\cup S_{2}\cup S_{3}\cup S_{4}\cup S_{5}\), hence it suffices to prove the existence of a pair of \(SP\)-numbers with difference \(x\), separately, for \(x\in S_{j}\), \(1\leq j\leq 5\).
(i) We have only one candidate, \(x=1\), in this case. There exist infinitely many neighbor SP-numbers [1, Theorem 4.3], but it is enough to consider the first example \((a,b)=(28,27)\), for which \(a-b=1\), which proves the case.
(ii) Let \(x\in\mathcal{P}\) be fixed, and let also \(p\in\mathcal{P}\) be a different prime number. Consider the following Pell equation in variables \(m,n\in\mathbb{Z}\):
\[m^{2}-pxn^{2}=1. \tag{8}\]
Since \(px\) is not a square, we know by Pell's theorem that (8) has at least one solution with \(m,n>1\). Let \((M,N)\) be such a solution. Then, \(M^{2}-pxN^{2}=1\), and multiplying this equality by \(x\), we find that
\[xM^{2}-p(xN)^{2}=x. \tag{9}\]
Observing that \(a:=xM^{2}\) and \(b:=p(xN)^{2}\) are both SP-numbers, the equality (9), which becomes \(a-b=x\), proves the lemma in this case.
(iii) Suppose now that \(x\) is an odd composite square-free number. Then, \(x=py\), where \(p\geq 3\) is prime and \(y\geq 5\) is prime or a product of distinct primes \(\geq 5\). Thus \(y\) is also necessarily odd. Let \(y=2K+1\) for some integer \(K\geq 2\). It then follows that \(y\) is a difference of two squares: \(y=(K+1)^{2}-K^{2}\), which implies
\[x=p\big{(}(K+1)^{2}-K^{2}\big{)}=p(K+1)^{2}-pK^{2}. \tag{10}\]
Let \(a:=p(K+1)^{2}\) and \(b:=pK^{2}\). Since \(K>1\), both \(a\) and \(b\) are square-prime numbers and (10) shows that \(x=a-b\), which concludes the proof in this case as well.
(iv) Now let us assume that \(x\) is an even composite square-free number. Then \(x=2y\), where \(y\geq 3\) is prime or a product of distinct primes \(\geq 3\). Reasoning as in the previous case, we find that \(y=2K+1\), and the analogue of (10) is
\[x=2\big{(}(K+1)^{2}-K^{2}\big{)}=2(K+1)^{2}-2K^{2}, \tag{11}\]
where \(K\) is a positive integer that can also be equal to \(1\) this time.
Let \(a:=2(K+1)^{2}\) and \(b:=2K^{2}\). Note that if \(K>1\), then both \(a\) and \(b\) are SP-numbers and (11) shows that \(x=a-b\).
In the remaining possibility when \(K=1\), we have \(y=3\), so \(x=6\), which can also be written as a difference of square-primes: \(6=2\cdot 3^{2}-3\cdot 2^{2}\), concluding the argument in case (iv).
(v) Let us assume now that \(x>1\) is not square-free, that is, \(x=c^{2}y\) for some integers \(c>1\), \(y\geq 1\), and \(y\) is square-free. Then, from the proved cases (i)-(iv), we know that there exist two square-prime numbers \(a^{\prime}\) and \(b^{\prime}\) such that \(y=a^{\prime}-b^{\prime}\). Let us say that \(a^{\prime}=p^{\prime}s^{2}\) and \(b^{\prime}=p^{\prime\prime}t^{2}\), where \(p^{\prime},p^{\prime\prime}\) are prime numbers and \(s,t>1\) are integers. These yield:
\[x=c^{2}y=c^{2}(a^{\prime}-b^{\prime})=p^{\prime}s^{2}c^{2}-p^{\prime\prime}t^{ 2}c^{2}.\]
Let \(a:=p^{\prime}(sc)^{2}\) and \(b:=p^{\prime\prime}(tc)^{2}\). Since \(p^{\prime},p^{\prime\prime}\) are primes and \(sc,tc>1\), both \(a\) and \(b\) are SP-numbers, and the above shows that \(x=a-b\). This concludes the proof for case (v) and also the entire proof of the lemma.
On combining Lemma 3.1 with the fact that any distance between two square-primes is replicated infinitely often as the difference between other square-primes [1, Theorem 4.3], we find that all positive integers appear infinitely often as differences between square-primes.
**Lemma 3.2**.: _Any positive integer appears infinitely often as a difference between square-primes._
### Proof of Theorem 1
We begin by proving a related result which shows that a (P-G) triangle with 'controlled size' elements on the eastern edge can be enlarged by padding it in such a way that the new southern vertex is a preset number \(Z\).
**Proposition 3.1**.: _Consider a (P-G) triangle with integers \(0\leq B_{1}\leq B_{2}\leq\cdots\leq B_{m}\) on the eastern edge \(\mathfrak{e}_{1}\). Then, there exists an integer \(C_{1}\geq B_{1}\), such that the triangle bordered by a new eastern edge \(\mathfrak{e}_{0}\) obtained by calculating the differences generated by the addition of \(C_{1}\) at the end of the generating row has components \(C_{1},C_{2},\ldots,C_{m},C_{m+1}\) with \(C_{j}\geq B_{j}\) for \(1\leq j\leq m\). Moreover, given an integer \(Z\geq 0\), we can choose \(C_{1}\) such that \(C_{m+1}=Z\)._
Proof.: With some arbitrary integers \(A_{1},\ldots,A_{m-1}\), the triangle in the statement of the proposition is as follows:
\[\begin{array}{ccccccccc}A_{1}&&A_{2}&&\ldots&&A_{m-1}&&B_{1}&&C_{1}\\ &\cdots&&\ldots&&\ldots&&B_{2}&&C_{2}\\ &&\ldots&&\ldots&&B_{3}&&C_{3}\\ &&\ldots&&\ldots&&\ldots&&\\ &&B_{m}&&C_{m}&&\\ &&Z&&\end{array}\]
We proceed backwards, from bottom to top. Let us assume that \(Z\geq 0\) is given and it takes the position of \(C_{m+1}\). Then, according to the definition, we may take \(C_{m}\) such that \(C_{m}-B_{m}=Z=C_{m+1}\), that is, \(C_{m}=C_{m+1}+B_{m}\geq B_{m}\).
Next, take \(C_{m-1}\) such that \(C_{m-1}-B_{m-1}=C_{m}\), so that \(C_{m-1}=C_{m}+B_{m-1}\geq B_{m-1}\).
Likewise, inductively, it follows that we may take \(C_{1}\) such that \(C_{1}-B_{1}=C_{2}\), so that \(C_{1}=C_{2}+B_{1}\geq B_{1}\).
In conclusion, we obtained \(C_{1}\) and the sequence \(C_{1},\ldots,C_{m+1}\), which satisfies the inequalities \(C_{1}\geq B_{1},\ldots,C_{m}\geq B_{m}\), and additionally \(C_{m+1}=Z\), thus proving the proposition.
**Remark 3.1**.: Let us note that the proof of Proposition 3.1 also allows the assumption of a different preset order between \(C_{j}\) and \(B_{j}\) for \(1\leq j\leq m\). Indeed, starting in the same way from \(Z\) and recursively calculating in reverse order the elements \(C_{j}\) from the new equalities
given by the preset order, we obtain \(C_{1}\), the new element of the first row, which ensures that the southern vertex of the (P-G) triangle is \(Z\).
Numerical experiments show that square-prime numbers are very handy for generating (P-G) triangles that have various properties. For example, one that has \(1\) as every other element on the western edge is:
\[\begin{array}{cccccccccccccccc}27&28&44&76&98&112&153&171&180&188&292&316\\ 1&16&32&22&14&41&18&9&8&104&24\\ &15&16&10&8&27&23&9&1&96&80\\ &&1&6&2&19&4&14&8&95&16\\ &&5&4&17&15&10&6&87&79\\ &&1&13&2&5&4&81&8\\ &&12&11&3&1&77&73\\ &&1&8&2&76&4\\ &&7&6&74&72\\ &&1&&68&2\\ &&67&&66\\ &&1&&&&&\\ \end{array} \tag{12}\]
Turning now to the proof of Theorem 1, let us suppose that \(m\geq 2\) is even and that the (P-G) triangle generated by \(\mathfrak{u}=(A_{1}<A_{2}<\cdots<A_{m})\) satisfies the requirements of the theorem. There are many small triangles with only square-prime numbers on the first row and with \(1\) as every other element on the western edge (see the numerical triangle (12), which contains a few such examples inside it). Our objective is to border the triangle of size \(m\) with two additional edges to the east in such a way that the larger triangle, with a side length of \(m+2\), also satisfies the requirements of Theorem 1. Then, by induction, we will conclude that the result holds for any triangle of even size \(m\geq 2\).
Denote by \(\mathfrak{e}^{\prime\prime}=(A_{m},D_{1},\ldots,D_{m-1})\) the eastern edge of the given triangle of size \(m\). Let \(X\) and \(Y\) be the two new numbers that will continue \(\mathfrak{u}\), and let us denote the bordering edges they generate by \(\mathfrak{e}^{\prime}=(X,E_{1},\ldots,E_{m-1},E_{m})\) and \(\mathfrak{e}=(Y,F_{1},\ldots,F_{m-1},F_{m},F_{m+1})\), respectively. Let \(Z\) be the integer at the southern vertex, that is, in the previous notation, \(F_{m+1}=Z\); in the hypothesis of the theorem, \(Z=1\). All these notations can be seen at a glance in the following display:
\[\begin{array}{cccccccccccc}A_{1}&A_{2}&&\ldots&&A_{m}&&X&Y\\ A_{2}-A_{1}&&\ldots&&D_{1}&&E_{1}&&F_{1}\\ &\ldots&&D_{2}&&E_{2}&&F_{2}\\ &&\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots\ldots \ldots\\ &&D_{m-1}&&E_{m-1}&&F_{m-1}\\ &&E_{m}&&F_{m}\\ &&F_{m+1}=Z\end{array}\]
Let \(\Delta\) denote the difference \(\Delta=Y-X\). Proceeding backwards as in the proof of Proposition 3.1, we find out the conditions that \(X\) and \(\Delta\) must meet. This time, we start with the necessary conditions and verify that they actually fulfill their role. The two conditions are:
\[X\geq A_{m}+Z+D_{1}+D_{2}+\cdots+D_{m-1} \tag{13}\]
\[\Delta=Y-X=Z+\big{[}D_{1}+D_{3}+\cdots+D_{m-1}\big{]}, \tag{14}\]
where the square brackets indicate that only the \(D_{j}\)'s with odd indices are added.
Let us first note that both conditions (13) and (14) depend only on the eastern edge \(\mathfrak{e}^{\prime\prime}\) of the initial triangle that we border and the integer \(Z\) that we want as the southern vertex of the bordered triangle. Then we know from Lemma 3.2 that there are pairs of square-primes, no matter how big and how many, that fulfill them.
Taking into account condition (13), the numbers on the first added border layer \(\mathfrak{e}^{\prime}\) are:
\[E_{1} =X-A_{m},\] \[E_{2} =E_{1}-D_{1}=X-A_{m}-D_{1},\] \[E_{3} =E_{2}-D_{2}=X-A_{m}-D_{1}-D_{2},\] \[\cdots\] \[E_{m-1} =E_{m-2}-D_{m-2}=X-A_{m}-D_{1}-D_{2}-\cdots-D_{m-2},\] \[E_{m} =E_{m-1}-D_{m-1}=X-A_{m}-D_{1}-D_{2}-\cdots-D_{m-2}-D_{m-1}.\]
Further, employing condition (14) as well, these show that the differences on \(\mathfrak{e}\), the outer layer of the border, are
\[F_{1} =Y-X=Z+\big{[}D_{1}+D_{3}+\cdots+D_{m-1}\big{]},\] \[F_{2} =E_{1}-F_{1}=X-A_{m}-Z-\big{[}D_{1}+D_{3}+\cdots+D_{m-1}\big{]},\] \[F_{3} =E_{2}-F_{2}=Z+\big{[}D_{3}+\cdots+D_{m-1}\big{]},\] \[F_{4} =E_{3}-F_{3}=X-A_{m}-Z-D_{1}-D_{2}-\big{[}D_{3}+\cdots+D_{m-1} \big{]},\] \[F_{5} =E_{4}-F_{4}=Z+\big{[}D_{5}+\cdots+D_{m-1}\big{]},\] \[F_{6} =E_{5}-F_{5}=X-A_{m}-Z-D_{1}-D_{2}-D_{3}-D_{4}-\big{[}D_{5}+ \cdots+D_{m-1}\big{]},\] \[\cdots\] \[F_{m-1} =E_{m-2}-F_{m-2}=Z+\big{[}D_{m-1}\big{]},\] \[F_{m} =E_{m-1}-F_{m-1}=X-A_{m}-Z-D_{1}-D_{2}-\cdots-D_{m-2}-\big{[}D_{ m-1}\big{]},\] \[F_{m+1} =E_{m}-F_{m}=Z.\]
In conclusion, by extending the initial sequence \(\mathfrak{u}\) with the two square-prime numbers \(X\) and \(Y\), we obtained a triangle whose southern vertex is \(Z=1\), as desired, which concludes the proof of Theorem 1.
It is worth noting that in the above proof, the actual value of the number \(Z\) did not play any special role, as \(Z\) could have taken any value \(Z\geq 0\). Thus, the following more general result holds. Given a sequence of non-negative integers \(\{w_{j}\}_{j\geq 1}\), there exists a (P-G) triangle generated by an increasing sequence of square-primes whose western edge is \((*,w_{1},*,w_{2},*,w_{3},*,\dots)\), where the stars are some unspecified non-negative integers.
**Theorem 6**.: _Let \(\boldsymbol{w}=\{w_{j}\}_{j\geq 1}\) be a sequence of non-negative integers. Then, there exists an increasing sequence of square-primes such that the (P-G) triangle they generate has on the western edge a sequence whose even-indexed elements are the elements of \(\boldsymbol{w}\)._
## 4. (P-G) triangles in the mirror-rays
Suppose the top row of the (P-G) triangle is a sequence from \(\mathcal{L}_{0}\). Then the entire triangle has the same property, also.
Next we show that there is a close, analytically expressible link between the top row \(\mathfrak{u}\) and the left edge \(\mathfrak{w}\) of the triangle. This link actually extends across the entire triangle, because if one cuts a few rows from the top, the link is maintained between the remaining new first row and the remaining elements on the new left edge. On the other hand, if one also cuts a few vertical (geometrically rather oblique) columns on the left side, all parallel to the edge \(\mathfrak{w}\), then the link becomes one between the first remaining row and the new leftmost edge.
Denote the components of the top sequence by \(\mathfrak{u}=(a_{0},a_{1},\dots)\in\mathcal{L}_{0}\), and the components of the left-edge sequence by \(\mathfrak{w}=(b_{0},b_{1},\dots)\in\mathcal{L}_{0}\). Their corresponding formal power series are \(f=f(X)\in\mathbb{F}_{2}[[X]]\) and \(g=g(X)\in\mathbb{F}_{2}[[X]]\), respectively, where
\[f(X)=\sum_{n\geq 0}a_{n}X^{n}\quad\text{and}\quad g(X)=\sum_{n\geq 0}b_{n}X^{n}. \tag{15}\]
Suppose \(a_{0}=b_{0}\) is the element in the upper left corner of (P-G), and that it is also the constant term of both series \(f\) and \(g\).
### Lemmas
The first lemma is the well-known hockey-stick identity: starting somewhere on the right edge of Pascal's triangle and adding the numbers placed on the following rows diagonally to the left, the partial sums that we obtain are each equal to the entry immediately below and to the right of the last summand.
**Lemma 4.1**.: _For any integers \(K,n\geq 0\), we have_
\[\binom{K}{K}+\binom{K+1}{K}+\dots+\binom{K+n}{K}=\binom{K+n+1}{K+1}. \tag{16}\]
Proof.: The proof follows by induction over \(n\) using the recursive formula that generates Pascal's triangle.
If \(n=1\), we have \(\binom{K}{K}+\binom{K+1}{K}=1+(K+1)=K+2=\binom{K+2}{K+1}\).
Suppose (16) holds for some \(n\geq 1\). Then, since
\[\binom{K+n+1}{K+1}+\binom{K+n+1}{K}=\binom{K+n+2}{K+1}\]
it follows that (16) also holds for \(n+1\), which concludes the proof of the lemma.
The next lemma gives the formal power series expression and the rational representation of a power of the fundamental series \(F(X)=1/(1+X)\), whose coefficients are all equal to \(1\) in \(\mathbb{F}_{2}\).
**Lemma 4.2**.: _Let \(F\in\mathbb{F}_{2}[[X]]\), \(F(X)=1+X+X^{2}+\cdots\). Then, for any integer \(N\geq 0\), we have_
\[\big{(}F(X)\big{)}^{N+1}=\sum_{n\geq 0}\binom{N+n}{N}X^{n}=\frac{1}{(1{+}X)^{N+ 1}}. \tag{17}\]
Proof.: The proof is by induction. If \(N=1\), relation (17) holds because \(F(X)\cdot F(X)=\sum_{n\geq 0}(n+1)X^{n}\), the coefficients being given by the equality
\[\sum_{r\geq 0}\sum_{\begin{subarray}{c}s\geq 0\\ r+s=n\end{subarray}}1\cdot 1=n+1.\]
Let \(K\geq 1\) and suppose that the coefficient of \(X^{s}\) in the power series of \(\big{(}F(X)\big{)}^{K}\) is \(\binom{K-1+s}{K-1}\) for all \(s\geq 0\). Then, the coefficient of \(X^{n}\) in the product \(F(X)\cdot F^{K}(X)\) is
\[\sum_{r\geq 0}\sum_{\begin{subarray}{c}s\geq 0\\ r+s=n\end{subarray}}1\cdot\binom{K-1+s}{K-1}=\binom{K-1+0}{K-1}+\binom{K-1+1}{ K-1}+\dots+\binom{K-1+n}{K-1}=\binom{K+n}{K},\]
where the last equality follows from Lemma 4.1. Since these are exactly the coefficients of \(\big{(}F(X)\big{)}^{K+1}\), this concludes the proof of the lemma.
### Proof of Theorem 2
1. Let \(\mathfrak{w}=(b_{0},b_{1},b_{2},\dots)\) be the left edge of (P-G) generated by \(\mathfrak{u}\). Since the absolute difference \(|x-y|\) coincides with the sum \(x+y\) in \(\mathbb{F}_{2}\), we have \(b_{0}=a_{0}\), \(b_{1}=a_{0}+a_{1}\), \(b_{2}=a_{0}+2a_{1}+a_{2}\), and so on. Then, by induction, one finds that the general formula for \(b_{n}\) is
\[b_{n}=\binom{n}{0}a_{0}+\binom{n}{1}a_{1}+\dots+\binom{n}{n}a_{n}\,. \tag{18}\]
The formal power series of \(\big{(}T(f)\big{)}(X)\) is \(b_{0}+b_{1}X+b_{2}X^{2}+\dots\), and to deduce a functional expression for it, we rearrange the terms using formula (18). Collecting together similar terms with the same coefficient \(a_{n}\), we see that
\[\big{(}T(f)\big{)}(X)= a_{0}\big{(}1+X+X^{2}+X^{3}+\dots\big{)}\] \[+a_{1}X\bigg{(}\binom{1+0}{1}+\binom{1+1}{1}X+\binom{1+2}{1}X^{2 }+\binom{1+3}{1}X^{3}+\dots\bigg{)}\] \[+a_{2}X^{2}\bigg{(}\binom{2+0}{2}+\binom{2+1}{2}X+\binom{2+2}{2} X^{2}+\binom{2+3}{2}X^{3}+\dots\bigg{)}\] \[+\dots\] \[+a_{n}X^{n}\bigg{(}\binom{n+0}{n}+\binom{n+1}{n}X+\binom{n+2}{n} X^{2}+\binom{n+3}{n}X^{3}+\dots\bigg{)}\] \[+\dots\]
Using Lemma 4.2 on each of the lines of the relation above we obtain
\[\big{(}T(f)\big{)}(X) =\frac{a_{0}}{1+X}+\frac{a_{1}X}{(1+X)^{2}}+\frac{a_{2}X^{2}}{(1+ X)^{3}}+\dots+\frac{a_{n}X^{n}}{(1+X)^{n+1}}+\dots\] \[=\frac{1}{1+X}\Big{(}a_{0}+a_{1}\frac{X}{1+X}+a_{2}\bigg{(}\frac{ X}{1+X}\bigg{)}^{2}+\dots+a_{n}\bigg{(}\frac{X}{1+X}\bigg{)}^{n}+\dots\Big{)}\] \[=\frac{1}{1+X}f\Big{(}\frac{X}{1+X}\Big{)}\,,\]
which proves the first point of the theorem.
2. We apply formula (3) twice. First we obtain
\[T^{(2)}\big{(}f(X)\big{)}=T\Big{(}T\big{(}f(X)\big{)}\Big{)}=\frac{1}{1+X}T(f) \Big{(}\frac{X}{1+X}\Big{)},\]
and then, continuing with the second application, after reducing the terms in the rational fractions of \(\mathbb{F}_{2}(X)\) and making the necessary cancellations (note that \(1+\frac{X}{1+X}=\frac{1}{1+X}\) in \(\mathbb{F}_{2}(X)\), since \(2X=0\)), we obtain:

\[T^{(2)}\big{(}f(X)\big{)}=\frac{1}{1+X}\cdot\frac{1}{1+\frac{X}{1+X}}\,f\Bigg{(}\frac{\frac{X}{1+X}}{1+\frac{X}{1+X}}\Bigg{)}=f(X).\]
It then follows that \(T\) is invertible, so that it is bijective and \(T^{-1}=T\). This concludes the proof of the theorem.
### Proof of Theorem 3
For any integer \(N\geq 0\), the set of finite sequences \(\mathcal{L}_{0}(N)\) is in one-to-one correspondence with the set of polynomials in \(\mathbb{F}_{2}[X]\) of degree less than \(N\), which in turn is embedded in \(\mathbb{F}_{2}[[X]]\), viewing the polynomials as formal power series with only finitely many non-zero coefficients. In accordance with this, the restriction of the operator \(T\) to these polynomials, and also that of \(\Upsilon\), on the sequences side, to the finite binary sequences \(\mathcal{L}_{0}(N)\), are well-defined. Furthermore, as we have seen in the induction process during the proof of Theorem 2, the transformations through \(\Upsilon\) between the top \(\mathfrak{u}\) and the western edge \(\mathfrak{w}\) occur in an ordered manner: the first \(N\) components of one only affect the first \(N\) components of the other, for any \(N\geq 0\). Therefore Theorem 3, the finite analogue of Theorem 2, holds true for (P-G) triangles of bounded size, with polynomials in place of formal power series. That is, the restriction of \(T\) to polynomials of degree less than \(N\) is still an involution, just like the corresponding restriction of \(\Upsilon\) to \(\mathcal{L}_{0}(N)\) is.
## 5. Binary generators and generators with only one champion
### Proof of Theorem 4
A consequence of the fact that \(T\) is an involution, as proved in Theorems 2 and 3, on the side of the coefficients of the formal power series or of the polynomials, is the fact that the restriction of \(\Upsilon\) to binary sequences is also an involution. This means that, on one hand, \(\Upsilon^{(2)}(\mathfrak{u})=\mathfrak{u}\) for binary sequences \(\mathfrak{u}\) and for any finite initial fragments of these sequences, as well. It follows then that \(\Upsilon^{(6k)}(\mathfrak{u})=\mathfrak{u}\) for \(k\geq 1\), meaning that the helicoid generated by \(\mathfrak{u}\in\mathcal{L}_{0}\) has all layers identical.
On the other hand, it also follows that \(\Upsilon\) is invertible and its inverse satisfies \(\Upsilon^{(-1)}=\Upsilon\). Therefore, if \(\Upsilon(\mathfrak{u})=\mathfrak{w}\), it follows that \(\Upsilon(\mathfrak{w})=\Upsilon^{(2)}(\mathfrak{u})=\mathfrak{u}\) for all finite or infinite binary sequences \(\mathfrak{u},\mathfrak{w}\). Geometrically, this means that the single layer of a helicoid generated by a binary sequence is composed of three identical diamond petals. Moreover, the diamond is the union of two equilateral triangles, positioned symmetrically across the short diagonal \(\mathfrak{w}\). This proves the first two parts of Theorem 4.
Figure 5. Two helicoids with identical layers on all levels. They are generated by \(\mathfrak{u}\) given by the first thirty elements of the Fibonacci sequence \(F_{n}\) (left) and the Bisection of Fibonacci sequence \(F_{2n}\)[19, A001906] (right) in decreasing order, followed by the sequence of ten bits: \(0,1,0,0,0,0,0,1,0,0\). Distinct integers are shown in different colors. Under the helicoids, the corresponding generating sequences \(\Upsilon^{(0)}(\mathfrak{u})\), \(\Upsilon^{(1)}(\mathfrak{u}),\ldots,\Upsilon^{(6)}(\mathfrak{u})\) of the intermediate triangles are shown. In both helicoids, the initial \(\mathfrak{u}=\Upsilon^{(0)}(\mathfrak{u})\) is covered by \(\Upsilon^{(6)}(\mathfrak{u})\), but they can be seen for comparison on the accompanying maps.
In order to prove the third part of Theorem 4, let \(\rho\geq 0\) be integer and let \(\mathscr{C}(\rho)\) be the hexagonal _circle of differences_ of radius \(\rho\) and center \(a_{0}\) on a layer generated by \(\mathfrak{u}=(a_{0},a_{1},a_{2},\dots)\). Explicitly, the numbers on the first edge of the circle are the elements on the eastern edge of the (P-G) triangle:
\[E_{0}=\big{(}a_{\rho}=d_{\rho}^{(0)},d_{\rho-1}^{(1)},d_{\rho-2}^{(2)},\dots,d_ {0}^{(\rho)}\big{)},\]
where the differences \(d_{k}^{(j)}\) are defined by (1). Likewise are obtained all the six edges \(E_{m}\), \(0\leq m\leq 5\), of \(\mathscr{C}(\rho)\), where \(E_{m}\) is the eastern edge of the (P-G) triangle generated by \(\Upsilon^{(m)}(\mathfrak{u})\) instead of \(\mathfrak{u}\), that is,
\[E_{m}=\Big{(}\Upsilon^{(m)}(a_{\rho})=d_{\rho}^{(0)}(m),d_{\rho-1}^{(1)}(m),d_{\rho-2}^{(2)}(m),\dots,d_{0}^{(\rho)}(m)\Big{)},\quad\text{for }m=0,1,\dots,5,\]
where
\[d_{k}^{(j+1)}(m):=\big{|}d_{k+1}^{(j)}(m)-d_{k}^{(j)}(m)\big{|}\quad\text{ and }\quad d_{k}^{(0)}(m):=\Upsilon^{(m)}(a_{k})\quad\text{ for }j,k\geq 0.\]
Then
\[\mathscr{C}(\rho):=E_{0}\cup E_{1}\cup E_{2}\cup E_{3}\cup E_{4}\cup E_{5}\,.\]
Now, suppose that \(\mathfrak{u}\) is the generator of a helicoid that has just one distinct layer, that is, we assume that \(\Upsilon^{(6)}(\mathfrak{u})=\mathfrak{u}\). If \(a\geq 0\) is the \(\rho\)th element of \(\mathfrak{u}\), then the assumption says, in particular, that the \(\rho\)th element of \(\Upsilon^{(6)}(\mathfrak{u})\) is also equal to \(a\). Then, let us analyze the process of generating the layer just on the circle \(\mathscr{C}(\rho)\).

Figure 6. The first level of the helicoid generated by \(\mathfrak{u}^{\prime}\) given by the first twenty \(4\)th positive powers (left) and the fourth level of the helicoid (out of the nine distinct it has) generated by \(\mathfrak{u}^{\prime\prime}\) given by the first twenty \(5\)th positive powers (right), both in decreasing order, and then followed each by the same sequence of ten bits: \(0,1,0,0,0,0,0,1,0,0\). Distinct integers are shown in different colors. Under the represented levels, the corresponding generating sequences of their intermediate triangles are shown: \(\Upsilon^{(0)}(\mathfrak{u}^{\prime})\), \(\Upsilon^{(1)}(\mathfrak{u}^{\prime}),\dots,\Upsilon^{(6)}(\mathfrak{u}^{\prime})\) (left), and \(\Upsilon^{(18)}(\mathfrak{u}^{\prime\prime})\), \(\Upsilon^{(19)}(\mathfrak{u}^{\prime\prime}),\dots,\Upsilon^{(24)}(\mathfrak{u}^{\prime\prime})\) (right, numbered also from \(0\) to \(6\)). In the representation on the left, \(\mathfrak{u}^{\prime}=\Upsilon^{(0)}(\mathfrak{u}^{\prime})\) is covered by \(\Upsilon^{(6)}(\mathfrak{u}^{\prime})\) and in the representation on the right \(\Upsilon^{(18)}(\mathfrak{u}^{\prime\prime})\) is covered by \(\Upsilon^{(24)}(\mathfrak{u}^{\prime\prime})\), but their elements can be seen for comparison on the last two rows of the maps underneath.
**Remark 5.1**.: Suppose that \(\rho>0\) and \(a\) is a champion of the initial sequence \(\mathfrak{u}\), that is, \(a>0\) and \(a\) is strictly larger than all the elements of \(\mathfrak{u}\) of lower indices.
1. As the construction of the (P-G) triangle and its five subsequent continuations involves only taking absolute values of differences, the numbers on \(\mathscr{C}(\rho)\) cannot be larger than \(a\). Moreover, the sequence of numbers on \(\mathscr{C}(\rho)\) cannot increase again if it has dropped at any point to a lower value, because, by assumption, \(a\) is a champion.
2. Since the number that comes to cover the original \(a\) on the upper level after the sixth rotation is still \(a\), while the sequence of numbers along \(\mathscr{C}(\rho)\) cannot increase once it has dropped, it follows that all numbers on the circle \(\mathscr{C}(\rho)\) are equal.
3. Again, since \(a\) is a champion, the only possibility for this to happen is when all the numbers on the smaller adjacent hexagonal circle \(\mathscr{C}(\rho-1)\) are 0's.
4. Further, it follows that all the numbers on \(\mathscr{C}(\rho-2)\) are also only 0's.
By iterating, it follows from the above remark that if \(a\) is a champion, then all the numbers on the circle \(\mathscr{C}(\rho)\) are equal to \(a\) and all the numbers in the interior of \(\mathscr{C}(\rho)\) are 0's. Therefore, there is no other champion in \(\mathfrak{u}\) besides \(a\), and this concludes the proof of Theorem 4.
### Trial of sequences with a single champion
The necessary condition that sequences have at most one champion in order for the helicoids they generate to have just one distinct layer proves to be insufficient. In fact, there may exist one-champion sequences that generate helicoids with a record number of distinct layers. In Figures 5-8, the helicoids generated by different finite integer sequences are shown, each having a single champion, their first element. To use comparable units in reasonably sized images that can be displayed in print, we have chosen decreasing sequences \(\mathfrak{u}\) of \(20\) or \(30\) integers, all followed by the same sequence of \(10\) random bits: \(0,1,0,0,0,0,0,1,0,0\). Distinct numbers are represented by different colors. Under each helicoid, the generating sequences of the partial equilateral triangles, namely \(\Upsilon^{(0)}(\mathfrak{u})\), \(\Upsilon^{(1)}(\mathfrak{u}),\ldots,\Upsilon^{(6)}(\mathfrak{u})\), can be seen stacked on top of each other, making it easier to compare and determine whether the helicoid has multiple distinct sheets on distinct levels.

Figure 7. The basic levels of two helicoids that have identical layers from the second level on. They are generated by \(\mathfrak{u}\) given by the first twenty primes (left) and the first twenty square-primes (right) in decreasing order, followed by the sequence of ten bits: \(0,1,0,0,0,0,0,1,0,0\). Distinct integers are shown in different colors. Under the layers, the corresponding generating sequences \(\Upsilon^{(0)}(\mathfrak{u})\), \(\Upsilon^{(1)}(\mathfrak{u}),\dots,\Upsilon^{(6)}(\mathfrak{u})\) of the intermediate triangles are shown. In both images, the initial \(\mathfrak{u}=\Upsilon^{(0)}(\mathfrak{u})\) is covered by \(\Upsilon^{(6)}(\mathfrak{u})\), but they can be seen for comparison on the accompanying maps.
One finds that the results are mixed. There are sequences that generate helicoids with one distinct level, as the ones in Figure 5, or with exactly two distinct levels, as those in Figure 6 (left), Figure 7 and Figure 8 (right). There are also helicoids with more distinct levels, like the one generated by \(5\)th powers in Figure 6 (right), where level four of the nine distinct ones it has is shown, or the one in Figure 8 (left), which has four distinct levels. The lead sequence, also used in reversed order in Figure 8, is \(0,1,3,4,9,10,12,13,27,28,30,31,\ldots\), the sequence that starts with \(0\) and is generated by the greedy algorithm so that it contains no arithmetic progressions of length \(3\). Its elements are also characterized as being sums of distinct powers of \(3\), or as having only \(0\) and \(1\) in their base-\(3\) representation [19, A005836]. The longer instances with the first \(N=314,315,\ldots,320\) elements in reversed order, followed by the above sequence of ten bits, generate helicoids with \(84\) distinct layers, the cycle having just one element.

Figure 8. The hexagons on the base level of two helicoids generated by the sequence \(\mathfrak{u}\) of non-negative integers whose base-\(3\) representation contains no \(2\) [19, A005836]. The image on the left uses \(20\) and the image on the right \(30\) elements of the sequence in decreasing order, both followed by the same ten bits: \(0,1,0,0,0,0,0,1,0,0\). Distinct integers are shown in different colors. The helicoids have four and two distinct levels, respectively. Under the hexagons, the corresponding generating sequences \(\Upsilon^{(0)}(\mathfrak{u})\), \(\Upsilon^{(1)}(\mathfrak{u}),\ldots,\Upsilon^{(6)}(\mathfrak{u})\) of the intermediate triangles are shown. In both images, the initial generators \(\mathfrak{u}=\Upsilon^{(0)}(\mathfrak{u})\) are covered by \(\Upsilon^{(6)}(\mathfrak{u})\), but they can be seen for comparison in the accompanying maps.
A complete characterization of the integer sequences based on the number of distinct levels that their associated helicoids have requires further investigation. For instance, it would be interesting to know if there are helicoids with an arbitrarily large number of distinct levels. A promising candidate to try seems to be a series of length-balanced decreasing sequences of powers. For instance, to add to the points already mentioned, we note that the sequence of just ten \(9\)th powers in decreasing order followed by \(0,1,0,0,0,0,0,1,0,0\) produces a helicoid with \(262\) distinct layers, of which \(P=198\) are in a precycle and \(C=64\) are in an endless repeated cycle. And the analogues with \(10\)th and \(11\)th powers have \(P+C=140+128=268\) and \(P+C=512+32=544\) distinct layers, respectively.
**Question**.: _Is there a sequence of finite sequences of positive integers that generates helicoids with an unlimited number of distinct levels?_
## 6. Proof of Theorem 5
Let us start by noting that it is sufficient to prove the membership relation (7) for the proportions \(R_{\mathfrak{w}}(k)\), as it will also imply the one for \(R_{\mathfrak{e}}(k)\). This follows from the rotation-reflection symmetry, as we have seen in the discussion in Section 5, and also by running the same reasoning below with the sequences \(\mathfrak{u}=\mathfrak{u}(N)=(a_{0},\ldots,a_{N-1})\in\mathcal{L}_{0}(N)\) indexed from right to left instead of left to right, which obviously leads to the same conclusion, this time for \(R_{\mathfrak{e}}(k)\).
Let \(\varepsilon\in(0,1/2)\) be fixed. We first prove that there exists an integer \(N_{\varepsilon}\) such that if \(N\geq N_{\varepsilon}\) then
\[\frac{1}{2^{N}}\#\Big{\{}\mathfrak{u}\in\mathcal{L}_{0}(N):\frac{1}{N}\# \big{\{}0\leq j\leq N-1:a_{j}=1\big{\}}\in\big{[}1/2-\varepsilon,\,1/2+ \varepsilon\big{]}\Big{\}}\geq 1-\varepsilon. \tag{19}\]
For each \(\mathfrak{u}\in\mathcal{L}_{0}(N)\), denote the set of indices with components equal to \(1\) by
\[S(\mathfrak{u}):=\big{\{}0\leq j\leq N-1:a_{j}=1\big{\}}\subset\{0,1,\ldots,N-1\}.\]
Then the left-hand side of inequality (19) can be rewritten as
\[\frac{1}{2^{N}}\#\Big{\{}\mathfrak{u}\in\mathcal{L}_{0}(N):\frac {1}{N}\#S(\mathfrak{u})\in\big{[}1/2-\varepsilon,\,1/2+\varepsilon\big{]} \Big{\}}\] \[= \frac{1}{2^{N}}\#\Big{\{}\mathfrak{u}\in\mathcal{L}_{0}(N):\frac {N}{2}-N\varepsilon\leq\#S(\mathfrak{u})\leq\frac{N}{2}+N\varepsilon\Big{\}}\] \[= \frac{1}{2^{N}}\sum_{\frac{N}{2}-N\varepsilon\leq l\leq\frac{N}{2 }+N\varepsilon}\#\Big{\{}\mathfrak{u}\in\mathcal{L}_{0}(N):\#S(\mathfrak{u})= l\Big{\}}\,.\]
Since the number of sequences \(\mathfrak{u}\in\mathcal{L}_{0}(N)\) with \(\#S(\mathfrak{u})=l\) is exactly \(\binom{N}{l}\) (so that, in passing, the unrestricted sum of all the cardinalities in the last sum is \(2^{N}\)), the inequality (19) that we want to prove becomes
\[\frac{1}{2^{N}}\sum_{\frac{N}{2}-\varepsilon N\leq l\leq\frac{N}{2}+\varepsilon N }\binom{N}{l}\geq 1-\varepsilon\]
for sufficiently large \(N\). For this, it is sufficient to prove that the two tails missing from the sum above (which are equal, by the symmetry of the binomial coefficients) are small, that is,
\[2\sum_{0\leq l\leq M}\binom{N}{l}\leq\varepsilon 2^{N}\,,\]
where we denoted \(M:=\left\lfloor\frac{N}{2}-\varepsilon N\right\rfloor\). Since the binomial coefficients are increasing in the range of summation above, we may replace the sum by a trivial upper bound, and it then suffices to prove the following convenient statement:
\[(M+1)\binom{N}{M}\leq\varepsilon 2^{N-1}\,, \tag{20}\]
for sufficiently large \(N\). To estimate the binomial coefficients we use Stirling's approximation formula for factorials given by Robbins [22] in the form of two tight upper and lower bounds:
\[\sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}e^{\frac{1}{12n+1}}<n!<\sqrt{2\pi n} \left(\frac{n}{e}\right)^{n}e^{\frac{1}{12n}}.\]
Then the left-hand side of (20) is
\[(M+1)\binom{N}{M}=(M+1)\frac{N!}{M!(N-M)!}<a\frac{N^{N+1/2}}{M^{M-1/2}(N-M)^{N- M+1/2}}, \tag{21}\]
for a positive constant \(a<1/200\) if \(N>100\). Here, we change the variable \(M\) into \(R\), where \(R:=N/2-M\). Note that since \(M=N/2-\varepsilon N+\theta\), where \(|\theta|\leq 1\), it follows that
\[R=\varepsilon N+O(1). \tag{22}\]
Then the new form of the inequality (20) that we want to prove for \(N\) sufficiently large is
\[a\cdot\frac{N^{N+1/2}}{\left(\frac{N}{2}-R\right)^{N/2-R-1/2}\!\left(\frac{N} {2}+R\right)^{N/2+R+1/2}}\leq\varepsilon 2^{N-1},\]
which can still be rearranged further into the more convenient form
\[2a\cdot\frac{N^{1/2}}{\left(1-\frac{2R}{N}\right)^{N/2-R-1/2}\!\left(1+\frac{2 R}{N}\right)^{N/2+R+1/2}}\leq\varepsilon. \tag{23}\]
Let us note that here \(r:=2R/N\) is small because (22) implies
\[r=\frac{2R}{N}=\frac{2\varepsilon N+O(1)}{N}=2\varepsilon+O\Big{(}\frac{1}{N} \Big{)}. \tag{24}\]
Introducing the new variable \(r\) in (23), we find that it is equivalent to
\[2a\cdot\frac{N^{1/2}}{\left(1-r^{2}\right)^{N/2}(1-r)^{-R-1/2}(1+r)^{R+1/2}} \leq\varepsilon. \tag{25}\]
It remains to show that here the denominator has an order of magnitude higher than that of the numerator. To do this, we evaluate the logarithm of the denominator, which is
\[\log\Big{(}\big{(}1-r^{2}\big{)}^{N/2}(1-r)^{-R-1/2}(1+r)^{R+1/2}\Big{)}=\frac{N}{2}\log\big{(}1-r^{2}\big{)}+\Big{(}R+\frac{1}{2}\Big{)}\big{(}\log(1+r)-\log(1-r)\big{)}.\]
Then, by taking into account the sizes of \(R\) and \(r\) from (22) and (24), and in particular the fact that \(r\) is small, we replace the logarithms with their power series approximations and see that the logarithm of the denominator in (25) equals
\[\begin{split}&\frac{N}{2}\big{(}-r^{2}+O(r^{4})\big{)}+\Big{(}R+ \frac{1}{2}\Big{)}\Big{(}2r+\frac{2r^{3}}{3}+O(r^{5})\Big{)}\\ =& N\sum_{k=1}^{\infty}2^{2k}\left(\frac{1}{2k-1}- \frac{1}{2k}\right)\varepsilon^{2k}+O(\varepsilon).\end{split} \tag{26}\]
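As an aside, the leading behaviour claimed in (26) can be double-checked symbolically. The following sketch (ours, assuming SymPy is available) expands the logarithm of the denominator with the leading-order substitutions \(r=2\varepsilon\) and \(R=\varepsilon N\) from (24) and (22):

```python
import sympy as sp

eps, N = sp.symbols('varepsilon N', positive=True)
r, R = 2 * eps, eps * N  # leading-order values from (24) and (22)

log_denominator = sp.Rational(1, 2) * N * sp.log(1 - r**2) \
    + (R + sp.Rational(1, 2)) * (sp.log(1 + r) - sp.log(1 - r))

expansion = sp.series(log_denominator, eps, 0, 5).removeO()
print(sp.collect(sp.expand(expansion), N))
# coefficient of N starts with 2*varepsilon**2 + (4/3)*varepsilon**4,
# matching the k = 1 and k = 2 terms of the series in (26)
```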
It then follows that there exists an absolute constant \(c>0\) such that (25) is satisfied for sufficiently large \(N\) if
\[\frac{\log N}{c\varepsilon^{2}N}<\frac{N^{1/2}}{c\varepsilon^{2}N}=\frac{1}{c \varepsilon^{2}N^{1/2}}<\varepsilon.\]
And, in order for this last requirement to be fulfilled, it is enough to take \(N>N_{\varepsilon}\), with \(N_{\varepsilon}:=\max\left\{100,\,\left\lfloor\frac{1}{c\varepsilon^{6}} \right\rfloor\right\}\).
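As a quick numerical sanity check of (20) (an illustrative sketch of ours, using exact integer arithmetic instead of the Stirling estimates above), one can verify directly that the inequality fails only for small \(N\):

```python
import math
from fractions import Fraction

def tail_bound_holds(N: int, eps: Fraction) -> bool:
    """Check inequality (20): (M + 1) * C(N, M) <= eps * 2**(N - 1),
    with M = floor(N/2 - eps*N), exactly."""
    M = math.floor(Fraction(N, 2) - eps * N)
    return (M + 1) * math.comb(N, M) <= eps * 2 ** (N - 1)

eps = Fraction(1, 10)
fails = [N for N in range(10, 2001) if not tail_bound_holds(N, eps)]
print(max(fails) if fails else "holds on the whole range")  # threshold is in the low hundreds
```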
To conclude the proof of Theorem 5 for just the left edge of the (P-G) triangle, we apply the operator \(\Upsilon\) to the generating sequences \(\mathfrak{u}\). Since we know by Theorems 2 and 3 that \(\Upsilon\) is an involution, and therefore also a bijection, it follows from the above argument that, aside from an exceptional set of the same size as that of the \(\mathfrak{u}\)'s, the sequences \(\mathfrak{w}_{0}\) on the western edge contain approximately the same number of \(0\)'s and \(1\)'s as well.
Then, in the same way, the same conclusion can be drawn for the next segments \(\mathfrak{w}_{1},\mathfrak{w}_{2},\ldots\), which are parallel to \(\mathfrak{w}_{0}\), by reasoning with the partial subsequences of \(\mathfrak{u}\) that ignore a few starting elements. In the following, we quantify precisely for how many segments we can be sure that the almost-equal-proportions result actually holds.
Let \(K\) be the number of the rays \(\mathfrak{w}_{0},\mathfrak{w}_{1},\ldots,\mathfrak{w}_{K-1}\) in question. The size of \(K\) that we can afford will be determined later. These rays are generated by the partial subsequences of \(\mathfrak{u}\) that are given by \(\mathfrak{u}_{k}=\mathfrak{u}_{k}(N):=(a_{k},a_{k+1},\ldots,a_{N-1})\) for \(0\leq k\leq K-1\). Note that the number of the sequences \(\mathfrak{u}_{k}(N)\) is \(2^{N-k}\) for any fixed \(k\) and \(N\).
Let \(\varepsilon\in(0,1/2)\) be fixed and let \(k\) and \(K\) be such that \(0\leq k\leq K-1<N\) for some large \(N\). In order to fulfill the additional requirements on the multiple rays, we choose a larger value for the term on the right-hand side of inequality (19), that is, a narrower target. Thus, since the size of \(\mathfrak{u}_{k}\) is \(N-k\), its counterpart form becomes:
\[\frac{1}{2^{N-k}}\#\Big{\{}\mathfrak{u}\in\mathcal{L}_{0}(N-k):\frac{\# \big{\{}k\leq j\leq N-1:a_{j}=1\big{\}}}{N-k}\in\big{[}1/2-\varepsilon,\,1/2 +\varepsilon\big{]}\Big{\}}\geq 1-\frac{\varepsilon}{K}. \tag{27}\]
Let us note that this would suffice to completely prove Theorem 5. Indeed, let \(\mathcal{E}_{N}(k)\) be the part of the exceptional set \(\mathcal{E}(N)\) in the statement of Theorem 5 that corresponds to the sequences \(\mathfrak{u}\) that do not meet the requirement (7) of approximately-equal-proportion of \(1\)'s and \(0\)'s for ray \(\mathfrak{w}_{k}\). Then, the exceptional sets become smaller and smaller in size as \(k\) increases, and despite the fact that the sets \(\mathcal{E}_{N}(k)\) are not disjoint, we have
\[\mathcal{E}(N)=\bigcup_{k=0}^{K-1}\mathcal{E}_{N}(k)\quad\text{ and }\quad\#\mathcal{E}(N)\leq\sum_{k=0}^{K-1}\#\mathcal{E}_{N}(k). \tag{28}\]
We know, via the one-to-one correspondence \(T\) from Theorem 3, that the generating sequence \(\mathfrak{u}\) and the left edge \(\mathfrak{w}\) of any typical (P-G) triangle of zeros and ones are permutations of one another, so that the cardinality of the analogous exceptional sets of sequences that do not fulfill the approximately-equal-number-of-\(1\)'s-and-\(0\)'s condition is the same.
Therefore, once (27) is proven, using the inequality in (28) for the exceptional set \(\mathcal{E}(N)\) in the statement of Theorem 5, we find that
\[\frac{1}{2^{N}}\#\mathcal{E}(N)\leq\frac{1}{2^{N}}\sum_{k=0}^{K-1}\#\mathcal{ E}_{N}(k)\leq\sum_{k=0}^{K-1}\frac{1}{2^{N-k}}\#\mathcal{E}_{N}(k)\leq K\cdot \frac{\varepsilon}{K}=\varepsilon,\]
that is, \(\#\mathcal{E}(N)\leq\varepsilon 2^{N}\), as needed. Then we only have to prove (27), indicating the range of \(N\) in which it holds, and how large we are allowed to take \(K\).
Following the same steps as in the proof of (19), rewriting the cardinality of the sets in (27) and then simplifying the resulting expression, the inequality on the tails becomes
\[\frac{1}{2^{N-k-1}}\sum_{0\leq l\leq M}\binom{N-k}{l}\leq\frac{\varepsilon}{K}\,,\]
where \(M=\left\lfloor\frac{N-K}{2}-\varepsilon(N-K)\right\rfloor\). It then follows that it suffices to see that for large \(N\) we have
\[(M+1)\binom{N}{M}\leq\frac{\varepsilon}{K}2^{N-1-K}\,, \tag{29}\]
this being the strongest of these conditions: once it is met, the corresponding inequalities for all \(0\leq k\leq K-1\) are also satisfied.
Further, the estimation of the binomial coefficients is done as before, and we arrive at the inequality
\[\frac{2^{K}N^{1/2}}{\left(1-r^{2}\right)^{N/2}(1-r)^{-R-1/2}(1+r)^{R+1/2}}\leq \frac{\varepsilon}{AK}, \tag{30}\]
for some constant \(A>0\) that is independent of \(\varepsilon\), \(N\) and \(K\). Here \(r\) is small and \(R\) captures the new \(M\) from (29). Precisely, as in (22) and (24), we have:
\[r=2\varepsilon+O\Big{(}\frac{1}{N}\Big{)}\quad\text{ and }\quad R=\varepsilon N +O(1). \tag{31}\]
The logarithm of the numerator on the left-hand side of (30) is
\[K\log 2+\frac{1}{2}\log N \tag{32}\]
and, using (31), the logarithm of the denominator is the same as in (26):
\[N\sum_{k=1}^{\infty}\frac{2^{2k-1}}{k(2k-1)}\varepsilon^{2k}+O(1)>c_{0}\varepsilon^{2}N, \tag{33}\]

for some absolute constant \(c_{0}>0\).
As a consequence, using (32) and (33), we find that inequality (30) holds true for sufficiently large \(N\) as long as the following condition is also verified:
\[c_{0}\varepsilon^{2}N-K\log 2-\frac{1}{2}\log N>\log\frac{K}{\varepsilon}.\]

For this to happen, it is enough to have absolute constants \(c_{1},c_{2}>0\) such that
\[c_{1}\varepsilon^{2}N>K\quad\text{ and }\quad c_{2}\varepsilon^{2}N>\log K-\log\varepsilon.\]
Both of these conditions are fulfilled if we take \(K=\delta_{\varepsilon}N\), where \(\delta=\delta_{\varepsilon}>0\) is a suitably small constant that depends only on \(\varepsilon\) and the absolute constants above, and \(N\) is larger than a threshold \(N_{\varepsilon,\delta}\) that depends on the same quantities and, additionally, on the choice of \(\delta\). This concludes the proof of Theorem 5.
**Acknowledgement.** The authors acknowledge the contributions of Sundaraman Madhusudanan, a co-author of [4], in the proof of Lemma 3.1. |
2309.16485 | A remark of Ricci-Bourguignon harmonic soliton | In this paper, we investigate the triviality of Ricci-Bourguignon harmonic
solitons. We also use the results of V-harmonic map to investigate the property
of Ricci harmonic soliton. | Xiangzhi Cao | 2023-09-28T14:50:17Z | http://arxiv.org/abs/2309.16485v1 | # A remark of Ricci-Bourguignon harmonic soliton
###### Abstract
In this paper, we investigate the triviality of Ricci-Bourguignon harmonic solitons. We also use results on \(V\)-harmonic maps to investigate properties of Ricci-harmonic solitons.
## 1 Introduction
Muller [21] introduced the Ricci-harmonic flow, which is defined as follows: for a closed manifold \(M\), given a map \(\phi\) from \(M\) to some closed target manifold \(N\):
\[\frac{\partial}{\partial t}g=-2\mathrm{Rc}+2\alpha\nabla\phi\otimes\nabla\phi,\qquad\frac{\partial}{\partial t}\phi=\tau_{g}\phi\]
where \(g(t)\) is a time-dependent metric on \(M\), \(\mathrm{Rc}\) is the corresponding Ricci curvature, \(\tau_{g}\phi\) is the tension field of \(\phi\) with respect to \(g\), and \(\alpha\) is a positive constant (possibly time dependent).
Later, its long-time existence began to be studied, for instance in the works [6, 19, 18, 8, 17, 11, 25]. Fang and Zheng [13] gave heat kernel estimates along the harmonic-Ricci flow. One can refer to the works [20, 27, 7, 15] for further studies related to the Ricci-harmonic flow.
Azami [4] introduced Ricci-Bourguignon harmonic flow, which is
\[\begin{cases}\frac{\partial}{\partial t}g=-2\mathrm{Rc}-2\rho Rg+2\alpha \nabla\phi\otimes\nabla\phi,\\ \frac{\partial}{\partial t}\phi=\tau_{g}\phi\end{cases}\]
Next, we give a definition.
**Definition 1.1** (Almost Ricci-Bourguignon harmonic solitons).: Let \(u:(M,g)\rightarrow(N,h)\) be a smooth map (not necessarily a harmonic map), where \((M,g)\) and \((N,h)\) are static Riemannian manifolds. \(((M,g),(N,h),V,u,\rho,\lambda)\) is called an almost Ricci-Bourguignon harmonic soliton if
\[\begin{cases}Rc-\rho Rg-\alpha\nabla u\otimes\nabla u-\frac{1}{2}\mathcal{L}_{ V}g=\lambda g\\ \tau_{g}u+\langle\nabla u,V\rangle=0,\end{cases}\]
where \(\alpha>0\) is a positive constant depending on \(m\), \(\rho\) is a real constant and \(\lambda\) is a smooth function.
When \(\lambda\) is a real constant, it is called a Ricci-Bourguignon harmonic soliton. In particular, when \(V=-\nabla f\), \(((M,g),(N,h),f,u,\rho,\lambda)\) is called a gradient Ricci-Bourguignon harmonic soliton if it satisfies the coupled system of elliptic partial differential equations
\[\begin{cases}Rc-\rho Rg-\alpha\nabla u\otimes\nabla u+\operatorname{Hess}f= \lambda g\\ \tau_{g}u-\langle\nabla u,\nabla f\rangle=0,\end{cases} \tag{1.1}\]
where \(f:M\to\mathbb{R}\) is a smooth function, called the potential. One can refer to [26, 2, 14, 16, 3] for studies on Ricci-harmonic solitons.
It is obvious that an almost Ricci-Bourguignon harmonic soliton \(((M,g),(N,h),V,u,\rho,\lambda)\) is an almost Ricci-harmonic soliton if \(\rho=0\). Azami et al. [5] gave a condition under which a complete shrinking Ricci-Bourguignon harmonic soliton must be compact.
The gradient Ricci-harmonic soliton is said to be shrinking, steady or expanding depending on whether \(\lambda>0,\lambda=0\) or \(\lambda<0\).
**Definition 1.2**.: Gradient Ricci-Bourguignon harmonic soliton is called trivial if the potential function \(f\) is constant.
It can be seen from (1.1) that when \(u\) and \(f\) are constants, \((M,g)\) must be an Einstein manifold.
If \(m\) is an integer with \(0\leq m\leq n\) and \(\alpha\) is a scalar, then the \(m\)-th invariant of \(\nabla^{2}f\), denoted by \(S_{m}(f)\), is defined by the condition (see [22], [23, p. 461], or [24])
\[\det\left(I+\alpha\nabla^{2}f\right)=S_{0}(f)+\alpha S_{1}(f)+\cdots+\alpha^{ m}S_{m}(f).\]
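For instance (a standard linear-algebra computation, recorded here because \(S_{2}(f)\) is used repeatedly below), the invariants \(S_{m}(f)\) are the elementary symmetric functions of the eigenvalues of \(\nabla^{2}f\), so that

\[S_{1}(f)=\Delta f,\qquad S_{2}(f)=\frac{1}{2}\Big{(}(\Delta f)^{2}-\big{|}\nabla^{2}f\big{|}^{2}\Big{)}.\]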
Our main motivation comes from the paper [24]. On the one hand, we want to use its methods to explore Ricci-Bourguignon harmonic solitons. On the other hand, we know the relation between \(V\)-harmonic maps and Ricci-harmonic solitons, so we want to use \(V\)-harmonic maps to study Ricci-harmonic solitons.
This paper is organized as follows. In section 2, we will use the quantity \(S_{2}(f)\) to derive the triviality of Ricci-Bourguignon harmonic soliton. In section 3, we will use the results of \(V\)-harmonic map to derive the property of Ricci-harmonic solitons.
## 2 Ricci-Bourguignon harmonic soliton
In the sequel, we use the following conventions and notations: \(Sc=Ric-\alpha\nabla u\otimes\nabla u-\rho Rg\), whose trace is \(S=(1-n\rho)R-\alpha|\nabla u|^{2}\).
**Lemma 2.1**.: _Let \(((M^{m},g),(N,h),f,u,\rho,\lambda)\) be a gradient almost Ricci-Bourguignon harmonic soliton. Then we have the following equations:_
\[\mathrm{div}Sc=\frac{1}{2}\nabla S-\alpha\tau_{g}(u)\nabla u+\Big{(}\frac{m}{2}-1\Big{)}\rho\nabla R, \tag{2.1}\]
\[\langle\nabla S,\nabla f\rangle =2(m-1)\langle\nabla\lambda,\nabla f\rangle+2Sc(\nabla f,\nabla f)+2 \rho R|\nabla f|^{2}+(m-2)\rho\langle\nabla R,\nabla f\rangle, \tag{2.2}\] \[Sc(\nabla f,\cdot) =\frac{1}{2}\nabla S-(m-1)\nabla\lambda-2\rho\nabla f-(m-2)\nabla R,\] (2.3) \[\nabla\left(S+|\nabla f|^{2}\right) =2(m-1)\nabla\lambda+2\lambda\nabla f+2\rho\langle\nabla f,\cdot \rangle+(m-2)\langle\nabla R,\cdot\rangle, \tag{2.4}\]
_and_
\[\frac{1}{2}\Delta|\nabla f|^{2}= |\nabla^{2}f|^{2}-(m-2)\langle\nabla\lambda,\nabla f\rangle-Sc( \nabla f,\nabla f)+\alpha|\langle\nabla u,\nabla f\rangle|^{2} \tag{2.5}\] \[-\rho R|\nabla f|^{2}-(m-2)\rho\langle\nabla R,\nabla f\rangle\]
**Remark 2.1**.: _When \(((M^{m},g),(N,h),f,u,\rho,\lambda)\) is a gradient Ricci-harmonic soliton, the terms involving \(\rho\), \(\lambda\) and \((m-2)\nabla R\) disappear._
Proof.: We modify the proof of [1, Proposition 3.1] and [12, Proposition 2.1]. Using the definition of the tensor \(Sc\), we see that
\[\frac{1}{2}\nabla R=\operatorname{div}Rc =\operatorname{div}Sc+\alpha\operatorname{div}(\nabla u\otimes \nabla u)+\rho g(\nabla R,\cdot)\] \[=\operatorname{div}Sc+\alpha\tau_{g}(u)\nabla u+\frac{\alpha}{2} \nabla|\nabla u|^{2}+\rho g(\nabla R,\cdot),\]
It implies that
\[\operatorname{divSc}=\frac{1}{2}\nabla S-\alpha\tau_{g}(u)\nabla u+(\frac{m}{2 }-1)\rho g(\nabla R,\cdot)\]
By the definition, we have
\[Sc_{ij}+\nabla_{i}\nabla_{j}f=\lambda g_{ij}\]
Taking trace, we get
\[S+\Delta f=m\lambda\]
Next, taking the covariant derivative gives
\[\nabla_{i}S+\nabla_{i}\nabla_{j}\nabla_{j}f=m\nabla_{i}\lambda.\]
Commuting covariant derivatives and using the contracted second Bianchi identity, we obtain
\[\nabla_{i}S= -\nabla_{j}\nabla_{i}\nabla_{j}f+R_{il}\nabla_{l}f+m\nabla_{i}\lambda\] \[= -\nabla_{j}\left(-Sc_{ij}+\lambda g_{ij}\right)+R_{il}\nabla_{l} f+m\nabla_{i}\lambda\] \[= \frac{1}{2}\nabla_{i}S-\alpha\tau_{g}(u)\nabla u+\Big{(}\frac{m}{2}-1\Big{)} \rho\nabla_{i}R-\nabla_{i}\lambda+R_{il}\nabla_{l}f+m\nabla_{i}\lambda\]

Thus, we proved eq. (2.3). In particular,

\[\frac{1}{2}\nabla_{i}S=R_{il}\nabla_{l}f+(m-1)\nabla_{i}\lambda-\alpha\tau_{g} (u)\nabla u+\Big{(}\frac{m}{2}-1\Big{)}\rho\nabla_{i}R, \tag{2.6}\]
which is equivalent to
\[\left(\frac{1}{2}-(m-1)\rho\right)\nabla_{i}R-\frac{1}{2}\alpha\nabla_{i}|\nabla u |^{2}=R_{il}\nabla_{l}f+(m-1)\nabla_{i}\lambda-\alpha\tau_{g}(u)\nabla u,\]
In addition, noticing that \(\tau(u)=\langle\nabla u,\nabla f\rangle\), we also have by eq. (2.6)
\[\langle\nabla S,\nabla f\rangle\] \[= 2(m-1)\langle\nabla\lambda,\nabla f\rangle+2Sc(\nabla f,\nabla f )+2\rho R|\nabla f|^{2}+(m-2)\rho\langle\nabla R,\nabla f\rangle,\]
We proved eq. (2.2).
In addition,
\[\nabla S =2(m-1)\nabla\lambda+2Sc(\nabla f,\cdot)+2\rho\langle\nabla f, \cdot\rangle+(m-2)\langle\nabla R,\cdot\rangle\] \[=2(m-1)\nabla\lambda+2(\lambda g-\operatorname{Hess}f)\nabla f+2 \rho\langle\nabla f,\cdot\rangle+(m-2)\langle\nabla R,\cdot\rangle\] \[=2(m-1)\nabla\lambda+2\lambda\nabla f-2\operatorname{Hess}f( \nabla f)+2\rho\langle\nabla f,\cdot\rangle+(m-2)\langle\nabla R,\cdot\rangle\] \[=2(m-1)\nabla\lambda+2\lambda\nabla f-\nabla|\nabla f|^{2}+2\rho \langle\nabla f,\cdot\rangle+(m-2)\langle\nabla R,\cdot\rangle,\]
which implies
\[\nabla\left(S+|\nabla f|^{2}\right)-2(m-1)\nabla\lambda-2\lambda\nabla f=2\rho \langle\nabla f,\cdot\rangle+(m-2)\langle\nabla R,\cdot\rangle\]
This is eq. (2.4).
In the end, we prove eq. (2.5). By Bochner formula, we have
\[\frac{1}{2}\Delta|\nabla f|^{2} =|\operatorname{Hess}f|^{2}+\langle\nabla\Delta f,\nabla f \rangle+Rc(\nabla f,\nabla f)\] \[=|\operatorname{Hess}f|^{2}+m\langle\nabla\lambda,\nabla f \rangle-\langle\nabla S,\nabla f\rangle+Rc(\nabla f,\nabla f).\]
Using eq. (2.2) of the lemma yields
\[\frac{1}{2}\Delta|\nabla f|^{2}= |\operatorname{Hess}f|^{2}+m\langle\nabla\lambda,\nabla f \rangle-2(m-1)\langle\nabla\lambda,\nabla f\rangle-2Sc(\nabla f,\nabla f)\] \[-2\rho R|\nabla f|^{2}-(m-2)\rho\langle\nabla R,\nabla f \rangle+Rc(\nabla f,\nabla f)\] \[= |\operatorname{Hess}f|^{2}-(m-2)\langle\nabla\lambda,\nabla f \rangle-Sc(\nabla f,\nabla f)+\alpha\nabla u\otimes\nabla u(\nabla f,\nabla f)\] \[-\rho R|\nabla f|^{2}-(m-2)\rho\langle\nabla R,\nabla f\rangle,\]
**Lemma 2.2** (cf.[23] ).: _Let \((M,g)\) be a compact Riemannian manifold. Then the following holds:_
\[2\int_{M}S_{2}(f)=\int_{M}\operatorname{Ric}(\nabla f,\nabla f).\]
**Lemma 2.3** (cf. [23]).: _Let \((M,g)\) be a compact Riemannian manifold. If \(f\) is a smooth function on \(M\), then \(S_{2}(f)\) can't be constant unless it vanishes._
**Theorem 2.1**.: _(1) If \(((M^{m},g),(N,h),f,u,\rho,\lambda)\) is a compact gradient shrinking almost Ricci-Bourguignon harmonic soliton with constant \(S_{2}(f)\), \(\rho R\leq 0\) and_
\[\int_{M}\langle\nabla\lambda,\nabla f\rangle+\int_{M}\rho\langle\nabla R, \nabla f\rangle\leq 0\]
_then the soliton is trivial._
_(2) If \(((M^{2},g),(N,h),f,u,\rho,\lambda)\) is a compact gradient shrinking Ricci-Bourguignon harmonic soliton with constant \(S_{2}(f)\), then the soliton is trivial._
Proof of Theorem 2.1.: As \(S_{2}(f)\) is a constant, by Lemma 2.3, we infer that \(S_{2}(f)\) vanishes. Therefore, we have
\[\int_{M}\mathrm{Ric}(\nabla f,\nabla f)=0. \tag{2.7}\]
Using (2.2) together with (2.7), we obtain

\[\int_{M}\langle\nabla f,\nabla S\rangle=2(m-1)\int_{M}\langle\nabla\lambda, \nabla f\rangle+\int_{M}2\rho R|\nabla f|^{2}+(m-2)\rho\langle\nabla R,\nabla f\rangle.\]
Now, by the divergence theorem, we get
\[-\int_{M}S\Delta f=2(m-1)\int_{M}\langle\nabla\lambda,\nabla f \rangle+\int_{M}2\rho R|\nabla f|^{2}+(m-2)\langle\nabla R,\nabla f\rangle.\]
Again, integrating by parts, we have
\[\int_{M}m\lambda\Delta f+m\int_{M}\langle\nabla\lambda,\nabla f \rangle=0.\]
Thus, combining the last two identities, we obtain
\[\int_{M}(m\lambda-S)\Delta f=(m-2)\int_{M}\langle\nabla\lambda, \nabla f\rangle+\int_{M}2\rho R|\nabla f|^{2}+(m-2)\langle\nabla R,\nabla f\rangle.\]
This, together with the equation
\[m\lambda-S=\Delta f,\]
yields
\[\int_{M}(\Delta f)^{2}=(m-2)\int_{M}\langle\nabla\lambda,\nabla f \rangle+\int_{M}2\rho R|\nabla f|^{2}+\int_{M}(m-2)\rho\langle\nabla R,\nabla f\rangle.\]
It follows that \(\Delta f=0\) in case (1) as well as in case (2), after noticing that in dimension two eq. (2.7) reduces to
\[\int_{M}R|\nabla f|^{2}=0.\]
Since \((M,g)\) is a compact Riemannian manifold and \(\Delta f=0\), the function \(f\) is constant. This completes the proof.
Next, we give an application of the Bochner formula eq. (2.5).
**Theorem 2.2**.: _(1) If \(((M^{m},g),(N,h),f,u,\lambda)\) is a compact gradient Ricci-harmonic soliton with constant \(S_{2}(f)\) and \(\lambda\), then this soliton is trivial._
_(2) If \(((M^{2},g),(N,h),f,u,\rho,\lambda)\) is a two-dimensional compact gradient almost Ricci-Bourguignon harmonic soliton with constant \(S_{2}(f)\), then this soliton is trivial._
Proof.: By eq. (2.5), we obtain
\[\frac{1}{2}\Delta|\nabla f|^{2}= \left|\nabla^{2}f\right|^{2}-(m-2)\langle\nabla\lambda,\nabla f \rangle-Sc(\nabla f,\nabla f)+\alpha\nabla u\otimes\nabla u(\nabla f,\nabla f)\] \[-\rho R|\nabla f|^{2}-(m-2)\rho\langle\nabla R,\nabla f\rangle,\]
Since \(S_{2}(f)\) is constant, by Lemma 2.3 we get
\[\int_{M}\mathrm{Ric}(\nabla f,\nabla f)=0. \tag{2.8}\]
Putting all these together, integrating over \(M\) and using the divergence theorem together with (2.8) yields

\[\int_{M}\bigg{(}\left|\nabla^{2}f\right|^{2}+(2-m)\langle\nabla\lambda, \nabla f\rangle+2\alpha|\langle\nabla u,\nabla f\rangle|^{2}-(m-2)\rho\langle \nabla R,\nabla f\rangle\bigg{)}=0.\]
From this, we can conclude the proof easily. In case (1), since \(\lambda\) is constant and \(\rho=0\), we get \(\nabla^{2}f=0\), and hence \(f\) is constant. In case (2), since \(m=2\), the terms involving \(m-2\) vanish and the same conclusion follows. Note that in dimension two, eq. (2.8) reads

\[\int_{M}R|\nabla f|^{2}=0.\]
## 3 Ricci harmonic soliton
**Theorem 3.1** (cf. [9, Theorem 2]).: _(1) Let \((M,g)\) be a complete noncompact Riemannian manifold with_
\[\mathrm{Ric}_{V}:=\mathrm{Ric}^{M}-\frac{1}{2}L_{V}g\geq-A,\]
_where \(A\geq 0\) is a constant, \(\mathrm{Ric}^{M}\) is the Ricci curvature of \(M\) and \(L_{V}\) is the Lie derivative. Let \((X,h)\) be a complete Riemannian manifold with sectional curvature bounded above by a positive constant \(\kappa\). Let \(u:M\to X\) be a \(V\)-harmonic map such that \(u(M)\subset B_{R}(p)\), where \(B_{R}(p)\) is a regular ball in \(X\), i.e., disjoint from the cut-locus of \(p\) and \(R<\frac{\pi}{2\sqrt{\kappa}}\). If \(V\) satisfies_
\[\langle V,\nabla r\rangle\leq v(r),\]
_for some nondecreasing function \(v(\cdot)\) satisfying \(\lim_{r\to+\infty}\frac{|v(r)|}{r}=0\), where \(r\) denotes the distance function on \(M\) from a fixed point \(\tilde{p}\in M\), then \(e(u)\) is bounded by a constant depending only on \(A,\kappa\) and \(R\). Furthermore, if \(A=0\), namely,_
\[\operatorname{Ric}^{M}\geq\frac{1}{2}L_{V}g\]
_then \(u\) must be a constant map._
_(2) Let \(M^{m}\) be a complete noncompact manifold with \(\operatorname{Ric}_{f}\geq 0\) and \(N\) be a complete Riemannian manifold with nonpositive sectional curvature. If \(u:M\to N\) is an \(f\)-harmonic map with finite weighted energy, then \(u\) must be a constant map._
**Theorem 3.2** (cf. [10, Theorem 12]).: _Let \((M^{m},g)\) be a complete noncompact Riemannian manifold with_
\[\operatorname{Ric}_{V}:=\operatorname{Ric}^{M}-\frac{1}{2}L_{V}g\geq A,\]
_where \(A\geq 0\) is a constant, \(\operatorname{Ric}^{M}\) is the Ricci curvature of \(M\) and \(L_{V}\) is the Lie derivative. Let \((N^{n},h)\) be a complete Riemannian manifold with sectional curvature bounded above by a negative constant \(-\kappa^{2}(\kappa>0)\). Let \(u:M\to N\) be a \(V\)-harmonic map such that \(u(M)\subset B_{c}\), where \(B_{c}\) is a horoball centered at \(c(+\infty)\) with respect to a geodesic \(c(t)\) parametrized by arc length. Suppose that \(\|V\|_{L^{\infty}(M)}<+\infty\). If \(A\geq\frac{\|V\|_{L^{\infty}}^{2}}{m-1}\), then \(u\) must be a constant map. If \(A<\frac{\|V\|_{L^{\infty}}^{2}}{m-1}\), then \(\frac{e(u)}{(Bou)^{2}}\) is bounded by a constant depending only on \(A,m,\kappa\) and \(\|V\|_{L^{\infty}(M)}\)._
Thus, using the above theorems, we can infer the following.
**Theorem 3.3**.: _Let \(u\) be in the same situation as in Theorem 3.1. If \(((M,g),(N,h),V,u,\lambda)\) is a non-compact Ricci-harmonic soliton with constant \(S_{2}(f)\) and constant \(\lambda\), then it is a non-shrinking Ricci soliton, i.e., \(\lambda\leq 0\)._
Proof.: By the assumptions and Theorem 3.1, we know that \(u\) is constant, so the \(\alpha\)-Ricci-harmonic soliton reduces to a Ricci soliton. Suppose, for contradiction, that it is shrinking; then it must be an Einstein manifold (cf. [24, Theorem 1.2]), which is a contradiction.
By a similar argument, we get
**Theorem 3.4**.: _Let \(u\) be in the same situation as in Theorem 3.2. If \(((M,g),(N,h),V,u,\lambda)\) is a compact \(\alpha\)-Ricci-harmonic soliton with constant \(S_{2}(f)\) and \(\lambda\geq\frac{\|V\|_{L^{\infty}}^{2}}{m-1}\), then it is a non-shrinking Ricci soliton, i.e., \(\lambda\leq 0\)._
Recall that
**Theorem 3.5** (cf. [28]).: _Let \(((M^{m},g),(N^{n},h),f,u,\lambda)\) be a shrinking or steady gradient Ricci-harmonic soliton with \(\alpha>0\). If in addition the sectional curvature \(\operatorname{Sect}^{N}\) of \(N\) satisfies \(K=\sup_{N}\operatorname{Sect}^{N}<\frac{\alpha}{m}\), then \(u\) is a constant map._
Combining this with Theorem 2.2 gives the following.
**Corollary 3.1**.: _Let \(((M^{m},g),(N^{n},h),f,u,\lambda)\) be a compact shrinking or steady gradient Ricci-harmonic soliton with \(\alpha>0\), constant \(S_{2}(f)\) and constant \(\lambda\). If in addition the sectional curvature \(\operatorname{Sect}^{N}\) of \(N\) satisfies \(K=\sup_{N}\mathrm{Sect}^{N}<\frac{\alpha}{m}\), then \((M,g)\) must be an Einstein manifold._
|
2309.08863 | Trajectory Tracking Control of Skid-Steering Mobile Robots with Slip and
Skid Compensation using Sliding-Mode Control and Deep Learning | Compensating for slip and skid is crucial for mobile robots navigating
outdoor terrains. In these challenging environments, slipping and skidding
introduce uncertainties into trajectory tracking systems, potentially
compromising the safety of the vehicle. Despite research in this field, having
a real-world feasible online slip and skid compensation remains challenging due
to the complexity of wheel-terrain interaction in outdoor environments. This
paper proposes a novel trajectory tracking technique featuring real-world
feasible online slip and skid compensation at the vehicle level for
skid-steering mobile robots operating outdoors. The approach employs
sliding-mode control to design a robust trajectory tracking system, accounting
for the inherent uncertainties in this type of robot. To estimate the robot's
slipping and undesired skidding and compensate for them in real-time, two
previously developed deep learning models are integrated into the
control-feedback loop. The main advantages of the proposed technique are that
it (1) considers two slip-related parameters for the entire robot, as opposed
to the conventional approach involving two slip components for each wheel along
with the robot's skidding, and (2) has an online real-world feasible slip and
skid compensator, reducing the tracking errors in unforeseen environments.
Experimental results demonstrate a significant improvement, enhancing the
trajectory tracking system's performance by over 27%. | Payam Nourizadeh, Fiona J Stevens McFadden, Will N Browne | 2023-09-16T03:58:03Z | http://arxiv.org/abs/2309.08863v2 | # Trajectory Tracking Control for Skid-Steering Mobile Robots with Slip and Skid Compensation
###### Abstract
Compensating for slip and skid is crucial for mobile robots navigating outdoor terrains. In these challenging environments, slipping and skidding introduce uncertainties into trajectory tracking systems, potentially compromising the safety of the vehicle. Despite research in this field, having a real-world feasible online slip and skid compensation remains challenging due to the complexity of wheel-terrain interaction in outdoor environments. This paper proposes a novel trajectory tracking technique featuring real-world feasible online slip and skid compensation at the vehicle level for skid-steering mobile robots operating outdoors. The approach employs sliding-mode control to design a robust trajectory tracking system, accounting for the inherent uncertainties in this type of robot. To estimate the robot's slipping and undesired skidding and compensate for them in real-time, two previously developed deep learning models are integrated into the control-feedback loop. The main advantages of the proposed technique are that it (1) considers two slip-related parameters for the entire robot, as opposed to the conventional approach involving two slip components for each wheel along with the robot's skidding, and (2) has an online real-world feasible slip and skid compensator, reducing the tracking errors in unforeseen environments. Experimental results demonstrate a significant improvement, enhancing the trajectory tracking system's performance by over 27%.
## I Introduction
Wheeled mobile robots (WMRs) have the capacity to autonomously navigate in on-road and off-road conditions and a wide range of environments such as urban areas, orchards, forests, and planetary exploration. These robots come in various configurations, including independent steering, differential steering, Ackerman steering, and skid-steering, where each category presents its own control issues [1, 2].
Skid-steering mobile robots (SSMRs), unlike other types, lack a dedicated steering mechanism and rely on skidding for steering manoeuvre. This design choice results in lightweight, simplified, and robust robots suitable for off-road terrains and rugged environments. However, the absence of traditional steering mechanisms makes tracking curvilinear trajectories a challenging task for SSMRs [3, 4].
Despite the relatively high traction and robustness, SSMRs still need to deal with terrain-related hazards to be able to operate autonomously in off-road terrains. In off-road environments, the wheel-terrain interaction (WTI) affects the dynamics and controllability of the robot, which cannot be neglected [5]. Consequently, the control system for an autonomous SSMR must account for slipping and skidding when operating outdoors and on uneven terrains. Addressing these issues is vital to minimize tracking errors, prevent immobilization, maintain control, and preserve the robot's mechanical stability.
WTI can cause undesired skidding and slipping for WMRs operating in outdoor environments. Undesired skidding is defined at the vehicle level in the lateral direction and could cause deviation from the desired trajectory [6]. Slippage can be defined at the wheel-level in longitudinal and lateral directions. Hence, a navigation system must determine both longitudinal and lateral slip for each wheel, resulting in two slip parameters per wheel, in addition to accounting for the robot's skidding [7]. Alternatively, the robot's slippage can be assessed at the vehicle-level. In this case, the hypothesis is that the motion control system could only rely on the robot's slipping and undesired skidding (i.e., two parameters in total) for trajectory tracking. This contrasts with the wheel-level perspective, which requires two slip components for each wheel along with the robot's skidding, i.e., commonly nine parameters. Embracing this approach not only reduces the number of necessary slip and skid parameters but also considerably reduces the number of onboard sensors required for their estimation [6, 8].
The main contribution of this paper is the design and real-time implementation of a robust controller for SSMRs with slip and skid compensation at the vehicle-level in outdoor environments and uneven terrains. Initially, the kinematics and dynamics model of an SSMR with slipping and skidding at the vehicle-level is proposed based on the model developed by Pazderski and Kozlowski [9]. Subsequently, a sliding-mode
controller is designed based on this dynamics model, aiming to ensure robust performance against model uncertainties during trajectory tracking.
To account for WTI, two deep learning models (i.e., CNN-LSTM-AE and CNN-LSTM algorithms) developed in our prior works [6, 8] are incorporated into the control-feedback loop. These models facilitate estimation for slip and undesired skid at the vehicle-level, enabling the feedback control loop to compensate for these factors in real-time. Notably, these models deliver real-world feasible estimations without relying on prior knowledge of terrain surfaces, utilizing two proprioceptive sensors, i.e., IMU and wheel encoder.
To mitigate the inherent chattering issue associated with the sliding-mode controller, the conventional sign function is replaced with a saturation function, ensuring smoother control actions. Furthermore, this paper investigates and resolves the singularity problem commonly encountered in sliding-mode controllers. It is worth mentioning that singularity points can cause sudden and unpredicted spikes in the control signal, resulting in navigation inaccuracies, non-smooth behaviour, and stability concerns.
The performance of the proposed trajectory tracking system is evaluated using a 4-wheel SSMR (i.e., Pioneer 3-AT) in an outdoor environment. The performance of the proposed controller is also compared with the controller without slip and skid compensation.
The rest of this paper is organized as follows. In Section II, we review related works in trajectory tracking techniques, and slip and skid compensation strategies for mobile robots. Section III presents the modelling of the SSMR with slipping and undesired skidding. Section IV presents the proposed trajectory tracking technique with slip and undesired skid compensation. Section V describes the experimental setup, and the performance of the proposed controllers is evaluated in Section VI. Finally, the conclusion of this paper is presented in Section VII.
## II Related Works
The problem of modelling and control of SSMRs has been studied in the literature considering the target environment, e.g., indoor or outdoor [4, 10]. In indoor environments, the effect of the WTI is predictable, and as a result, under standard operating conditions, the robot's slipping and skidding will be negligible [9, 11]. However, operating the robot in outdoor environments and uneven terrains requires considering the WTI effects on the kinematics and dynamics of the SSMR as well as on the controller design procedure [12]. Note that regardless of the working environment, the SSMR suffers from parametric uncertainties such as the location of the instantaneous centre of rotation.
Due to the nonlinearity of the SSMR, nonlinear controllers have been utilized extensively in the literature [11]. Some researchers have tried to use the linear PID controller for the trajectory tracking of these robots [13, 14]. However, the stability of the closed-loop system might not be guaranteed using that method. The Lyapunov-based controller design technique was utilized as one of the earliest attempts to design a nonlinear trajectory tracking system for SSMR. Other nonlinear controllers were also applied for the trajectory tracking such as nonlinear model predictive control [15, 5, 16] and backstepping techniques [17]. Having a stability proof for the closed-loop system is the main advantage of these techniques [18]. Kozlowski and Pazderski [9] proposed the kinematics and dynamics equations for SSMR without slip and skid consideration, which ensures that their model is valid for indoor environments. They developed a Lyapunov-based trajectory tracking system and validated their technique through simulation and experimental studies. However, the conventional Lyapunov-based controllers do not have the capacity to consider uncertainty in their design procedure and therefore, the trajectory tracking system could be subject to steady-state error due to the parametric uncertainty of this type of robot.
To be able to rectify the parametric uncertainty of SSMRs, Martins et al. [19] integrated an observer into their control-feedback loop. However, observers are dependent on the initial condition and may not provide accurate estimations under different conditions [3]. Model-free techniques were proposed to provide robust trajectory tracking performance including fuzzy logic [20, 21, 22] and neural networks [23, 24]. Both fuzzy logic and neural network techniques are computationally expensive and might not have stability proof, which could make them difficult to use for real-time experiments.
In outdoor environments, measuring the exact location of the instant centre of rotation of SSMRs is challenging, which causes uncertainty for the control design procedure of these types of robots. Having variable loading is another source of uncertainty that affects the robot's weight and moment of inertia. Therefore, robust controllers such as sliding-mode controllers are common control techniques for this robot as they can guarantee the stability of the closed-loop system considering the uncertainties. Sliding-mode control (SMC) is a well-established robust nonlinear controller that can guarantee the stability of the closed-loop system under disturbances and parametric uncertainty [25]. This controller is designed to force the closed-loop system to a predefined sliding surface (or manifold) considering the uncertainties and varying dynamics of the system. As a result, this controller is robust to system uncertainties and less sensitive to modelling errors. It has been applied to different domains in robotics including holonomic [26, 27] and nonholonomic [3, 28] systems and has shown reliable performance in both simulation and experimental studies. Moreover, this controller is compatible with both single-input-single-output and multi-input-multi-output nonlinear systems, and once the controller is designed, the implementation for experimental studies is simple and computationally efficient in comparison with metaheuristic
techniques. These advantages make the sliding-mode controller a suitable candidate for SSMRs' navigation. However, the SMC technique suffers from chattering in the actuators due to the discontinuous sign function in the control input. To rectify this problem, two approaches have mostly been considered: (1) using a higher-order SMC, or (2) replacing the sign function with a continuous alternative such as the saturation function. Note that in the case of replacing the sign function with a continuous one, the stability of the closed-loop system should be investigated. Another inherent challenge with this controller involves the potential singularity issue in some dynamic systems within the equivalent control term, which needs to be investigated during the stability analysis [29]. Matraji et al. [3] implemented a second-order sliding-mode controller on an SSMR (Pioneer 3-AT) in indoor environments. They experimentally compared their designed controller with the conventional SMC and showed less chattering in the control inputs. However, their technique is feasible only indoors, as it did not consider the WTI.
Recent research has addressed the slip and skid problem for motion control of WMRs in different ways such as considering the WTI as an uncertainty [30, 31, 32], model-based determination of slip and skid [5, 33, 34, 35, 12, 36], or measuring/estimating [37, 38, 39] slip and skid to integrate into a control-feedback loop.
The robust control method, like SMC, has the capacity to consider slip and skid as bounded uncertainty. However, as the range of defined uncertainties grows, the steady-state error might increase. The lack of ability to detect high slip/skid conditions (as it is an offline technique) and thus avoid the aforementioned terrain-related hazards is another disadvantage of this technique. Therefore, it is most suitable for environments with low-slip/skid conditions.
Model-based determination of slip/skid is a classic method that relies on an empirical model of the WTI. The disadvantage of this method is that it requires prior knowledge of soil properties to be able to determine wheel longitudinal and lateral slips, and its accuracy depends significantly on the accuracy of the measured soil properties [33].
Another solution could be measuring or estimating the robot's slip/skid. The slip/skid measurement requires an accurate measurement of the robot's velocity with a sufficient sampling rate for control under different environmental conditions, which in itself is a challenging problem. Alternatively, a slip/skid estimation system can be integrated into the control-feedback loop. This estimator should be real-world feasible, easy to integrate, and be able to operate with a sufficient sampling rate, which has been discussed in the literature [7]. Biswas and Kar [40] proposed a nonlinear observer for slipping and skidding of mobile robots at the vehicle-level based on the kinematics equations of the robot using the Extended Kalman Filter (EKF) technique in indoor environments. They defined the difference between the robot's commanded and observed slip angle as the vehicle skidding and proposed a controller to compensate for the robot's slipping and skidding.
In our previous works [8, 6], we presented novel in-situ slip and undesired skid estimators at the vehicle-level using deep learning and proprioceptive sensors for outdoor environments. In those papers, the deep learning models were trained in outdoor environments for the Pioneer 3-AT robot. Both slip and undesired skid estimators used an IMU and the default wheel encoders, which are low-cost and easy-to-integrate sensors that enabled a real-world feasible estimation system.
In this current paper, we integrate the slip and undesired skid estimators developed in our previous research [8, 6] with a trajectory tracking system to be able to navigate the robot in outdoor environments. The aims of this research are (1) to reformulate the WTI characterization by utilizing two slip and undesired skid parameters at the vehicle-level, as opposed to employing two slip parameters per wheel in addition to the vehicle's skidding, and (2) to design and implement a real-world feasible trajectory tracking system with slip and undesired skid compensation capable of operating in unforeseen outdoor terrains.
## III Modelling
Figure 1 shows an SSMR with a local coordinate frame \((x_{b},y_{b},z_{b})\) in a 2D plane, where \(q=[x,y,\theta]^{T}\) is the location and orientation of the robot's centre of mass (COM) with respect to the global coordinates \((X,Y,Z)\). \(\nu\) is the velocity of the COM and \(\beta\) is the angle of the total velocity of the robot with respect to \(x_{b}\). \(v_{x}\) and \(v_{y}\) are the longitudinal and lateral velocity components and \(\omega\) is the angular velocity of the robot. \(2c\) and \(r\) are the distance between the rear wheels and the effective radius of the wheels, respectively.
If the robot moves with linear velocity \(\nu=\left[v_{x},v_{y},0\right]^{T}\) and angular velocity \(\Omega=[0,0,\omega]^{T}\) expressed in the local frame, then the velocity vector of the robot in the global frame is \(\dot{q}=\left[\dot{x},\dot{y},\dot{\theta}\right]^{T}\), which based on Figure 1 is
\[\dot{x} =v_{x}\cos\theta-v_{y}\sin\theta\] \[\dot{y} =v_{x}\sin\theta+v_{y}\cos\theta. \tag{1}\] \[\dot{\theta} =\omega\]
Figure 1: Schematic model of an SSMR with skidding and slipping.
Eq. (1) provides the relation between the robot's linear and angular velocities and \(\dot{q}\) at the COM. The control inputs of the robot are the angular velocities of the wheels \(\omega_{i}\). Therefore, an equation is needed to map the control inputs to the robot's linear and angular velocities. Figure 1 shows the interaction of wheel \(i\) with the terrain at angular velocity \(\omega_{i}\) (at the wheel-level). Due to this interaction, the wheel moves in both longitudinal and lateral directions, and the slip at the wheel-level causes a relative velocity at the contact surface. Since this is a rotation with slip, the wheel and the terrain interact over a surface instead of at a single point. The linear velocity of the centre of wheel \(i\) is as follows:
\[\begin{split}\mathbf{v}_{ix}=r\omega_{i}(1-s_{i})\\ \mathbf{v}_{iy}=\mathbf{v}_{ix}\tan\alpha_{i}\end{split} \tag{2}\]
In Eq. (2), \(s_{i}\) and \(\alpha_{i}\) are the longitudinal slip ratio and the slip angle of wheel \(i\), respectively.
In Figure 1, _ICR_ is the instantaneous centre of rotation and \(d_{i}=\left[d_{ix},d_{iy}\right]^{T}\) and \(d_{c}=\left[d_{cx},d_{cy}\right]^{T}\) are the radius vectors, and therefore:
\[\begin{split}\begin{bmatrix}d_{1x}\\ d_{2x}\\ d_{1y}\\ d_{3y}\\ \end{bmatrix}=\begin{bmatrix}d_{4x}\\ d_{3x}\\ d_{2y}\\ d_{4y}\\ \end{bmatrix}=\begin{bmatrix}d_{cx}-\alpha\\ d_{cx}+b\\ d_{cy}+c\\ d_{cy}-c\\ \end{bmatrix}\end{split} \tag{3}\]
and
\[\begin{split}\omega=-\frac{v_{ix}}{d_{iy}}=-\frac{v_{x}}{d_{cy} }=\frac{v_{iy}}{d_{ix}}=\frac{v_{y}}{d_{cx}}\\ |\omega|=\frac{\|v_{i}\|}{\|d_{i}\|}=\frac{\|v\|}{\|d_{c}\|} \end{split} \tag{4}\]
where \(\|*\|\) denotes the Euclidean norm, and \(a\), \(b\) and \(c\) are positive geometrical parameters of the robot. The rigidity constraint can be extracted from Eqs. (3) and (4) as follows:
\[\begin{split}\mathbf{v}_{L}=\mathbf{v}_{1x}=\mathbf{v}_{2x},& \mathbf{v}_{R}=\mathbf{v}_{3x}=\mathbf{v}_{4x}\\ \mathbf{v}_{F}=\mathbf{v}_{2y}=\mathbf{v}_{3y},& \mathbf{v}_{B}=\mathbf{v}_{1y}=\mathbf{v}_{4y}\\ \end{split} \tag{5}\]
In Eq. (5), \(\mathbf{v}_{L}\) and \(\mathbf{v}_{R}\) are the \(x\)-components of the left and right wheels' linear velocities, respectively, and \(\mathbf{v}_{F}\) and \(\mathbf{v}_{B}\) are the \(y\)-components of the front and back wheels' linear velocities, respectively. Eq. (5) can be rewritten using Eq. (2).
\[\begin{split}\mathbf{v}_{L}=r\omega_{1}(1-s_{1})=r\omega_{2}(1-s_{2} )\\ \mathbf{v}_{R}=r\omega_{3}(1-s_{3})=r\omega_{4}(1-s_{4})\\ \end{split} \tag{6}\]
The _ICR_ can be defined in the robot's local frame as follows:
\[ICR=[\mathbf{x}_{0},\mathbf{y}_{0}]^{T}=\left[-d_{cx},-d_{cy}\right]^{T} \tag{7}\]
Substituting Eq. (7) into (4) gives us:
\[\omega=\frac{\mathbf{v}_{x}}{\mathbf{y}_{0}}=-\frac{\mathbf{v}_{y}}{\mathbf{x}_{0}}. \tag{8}\]
The relationship between the wheels' linear velocity and the linear and angular velocities of the robot at COM can be derived using Eqs. (3)- (8).
\[\begin{bmatrix}\mathbf{v}_{L}\\ \mathbf{v}_{R}\\ \mathbf{v}_{F}\\ \mathbf{v}_{B}\end{bmatrix}=\begin{bmatrix}1&-c\\ 1&c\\ 0&-x_{0}+b\\ 0&-x_{0}-a\end{bmatrix}\begin{bmatrix}\mathbf{v}_{x}\\ \omega\end{bmatrix} \tag{9}\]
The slip ratio \(s_{i}\) is defined for each specific wheel. Assuming all wheels slip independently and the robot moves with linear velocity \(v_{x}\) and angular velocity \(\omega\), the longitudinal vehicle slip ratio \(s_{v}\) can then be defined at any arbitrary point, e.g., the COM, at the vehicle-level as follows:
\[s_{v}=\frac{\mathbf{v}_{Ix}-\mathbf{v}_{x}}{\mathbf{v}_{Ix}}, \tag{10}\]
where \(\mathbf{v}_{Ix}\) denotes the no-slip (ideal) linear velocity of the robot in the \(x\) direction, which is:
\[\mathbf{v}_{Ix}=r\frac{\omega_{L}+\omega_{R}}{2} \tag{11}\]
and therefore:
\[\mathbf{v}_{x}=\mathbf{v}_{Ix}(1-s_{v}). \tag{12}\]
Note that due to the rigidity constraints (Eq. (5)), the angular velocities of the wheels on each side are equal in the no-slip condition.
As also discussed in [6], skidding is the steering mechanism of the SSMR, meaning that the robot needs skidding in order to steer. However, due to the wheel-terrain interaction, the robot may also experience undesired skidding (\(\sigma_{v}\)), which is defined as follows:
\[\sigma_{v}=\mathbf{v}_{Iy}-\mathbf{v}_{y} \tag{13}\]
In the above equation, \(\mathbf{v}_{Iy}\) denotes the no-slip linear velocity of the robot in the \(y\) direction. To be able to calculate the undesired skidding, \(\mathbf{v}_{Iy}\) needs to be determined. For this reason, the nonholonomic constraint of the SSMR needs to be considered.
\[\mathbf{v}_{Iy}-\mathbf{x}_{0}\omega_{I}=0 \tag{14}\]
In Eq. (14), \(\omega_{I}\) is the no-slip angular velocity of the robot, which is defined as follows:
\[\omega_{I}=r\frac{\omega_{R}-\omega_{L}}{2c} \tag{15}\]
Now the linear and angular velocities of the robot with the slip and undesired skid can be written as:
\[\mu=\begin{bmatrix}v_{x}\\ \omega\end{bmatrix}=\begin{bmatrix}\xi\\ \rho\end{bmatrix}, \tag{16}\] \[\xi=\frac{r}{2}\left(\omega_{R}+\omega_{L}\right)(1-s_{v})\] \[\rho=\frac{r}{2c}(\omega_{R}-\omega_{L})+\sigma_{v}/x_{0}.\]
The nonholonomic constraint in Eq. (14) can be expressed in the following format as well.
\[A(q)q=0,\;\;A(q)=[-\sin\theta\,,\cos\theta\,,\mathbf{x}_{0}] \tag{17}\]
In Eq. (17), \(\dot{q}\) lies in the null space of \(A(q)\), and therefore,
\[q=R(q)\mu=\begin{bmatrix}\cos\theta&x_{0}\sin\theta\\ \sin\theta&-x_{0}\cos\theta\\ 0&1\end{bmatrix}\begin{bmatrix}v_{x}\\ \omega\end{bmatrix}. \tag{18}\]
Finally, substituting Eq. (16) in Eq. (18) gives us the kinematics equation of the SSMR with the slip and undesired skidding at the vehicle-level.
\[\begin{bmatrix}\dot{\mathbf{x}}\\ \dot{\mathbf{y}}\\ \dot{\theta}\end{bmatrix}=R(q)\begin{bmatrix}\xi\\ \rho\end{bmatrix} \tag{19}\]
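As an illustration of how Eqs. (16)-(19) combine in practice, here is a minimal numerical sketch (ours; the geometric constants are placeholder values, not the identified Pioneer 3-AT parameters):

```python
import numpy as np

def ssmr_kinematics(theta, omega_L, omega_R, s_v, sigma_v,
                    r=0.11, c=0.2, x0=0.05):
    """Vehicle-level kinematics (19) of the SSMR with slip and undesired skid.

    Returns qdot = [xdot, ydot, thetadot] in the global frame.
    r, c, x0 are illustrative values, not datasheet parameters.
    """
    xi = 0.5 * r * (omega_R + omega_L) * (1.0 - s_v)          # Eq. (16)
    rho = 0.5 * (r / c) * (omega_R - omega_L) + sigma_v / x0  # Eq. (16)
    R = np.array([[np.cos(theta),  x0 * np.sin(theta)],
                  [np.sin(theta), -x0 * np.cos(theta)],
                  [0.0,            1.0]])                     # Eq. (18)
    return R @ np.array([xi, rho])

# With no slip/skid, a pure forward command yields qdot = [v, 0, 0] at theta = 0:
print(ssmr_kinematics(0.0, 10.0, 10.0, 0.0, 0.0))
```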
_Remark 1_.: Eq. (16) shows that only two slip parameters (\(s_{v}\) and \(\sigma_{v}\)) need to be estimated to be able to control the robot using the vehicle-level slip definition. In contrast, when using the wheel-level slip definition, the slip ratio of each wheel might need to be estimated [33].
_Remark 2_.: It is common for an SSMR to have the same angular speed for each side due to having mechanical coupling, e.g. \(\omega_{1}=\omega_{2}=\omega_{L}\) and \(\omega_{3}=\omega_{4}=\omega_{R}\). Then based on Eq. (6) it can be seen that the longitudinal slip for each side of the robot should also be the same, e.g. \(s_{1}=s_{2}=s_{L}\) and \(s_{3}=s_{4}=s_{R}\), and therefore:
\[\begin{bmatrix}\omega_{L}\\ \omega_{R}\end{bmatrix}=\frac{1}{r}\begin{bmatrix}\frac{v_{L}}{1-s_{L}}\\ \frac{v_{R}}{1-s_{R}}\end{bmatrix}. \tag{20}\]
_Remark 3_.: The robot's undesired skidding was defined in Eq. (13) as the lateral velocity deviation. It also can be defined based on the angle deviation \(\delta_{v}\) as follows:
\[\delta_{v}=\beta_{I}-\beta=\tan^{-1}\left(\frac{v_{Iy}}{v_{Ix}}\right)-\tan^{-1}\left(\frac{v_{y}}{v_{x}}\right),\qquad-\pi<\delta_{v}<\pi \tag{21}\]
In Eq. (21), \(\beta_{I}\) is the ideal skid angle of the robot (i.e., the skid angle with no undesired skidding). Finally, the input vector \(\mu\) in Eq. (16) can be rewritten using Eq. (21).
\[\mu=\begin{bmatrix}v_{x}\\ \omega\end{bmatrix}=\begin{bmatrix}\xi\\ \gamma\xi\end{bmatrix},\qquad\gamma=\frac{\tan\beta}{x_{0}} \tag{22}\]
It is worth mentioning that, as was discussed in [2], the velocity-based undesired skidding in Eq. (13) offers practical advantages over the angle-based definition in Eq. (21). This is particularly evident in low-speed conditions, where the velocity-based definition exhibits a higher signal-to-noise ratio (SNR).
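The low-speed signal-to-noise point can be made concrete with a small simulation (ours; the sensor-noise level and speeds are arbitrary illustrative values): for a fixed lateral-velocity measurement noise, the noise induced in the angle-based deviation (21) grows roughly like \(1/v_{x}\), whereas the velocity-based deviation (13) is unaffected by the forward speed.

```python
import numpy as np

rng = np.random.default_rng(0)
v_y = 0.005 * rng.standard_normal(10_000)  # noisy lateral velocity, true skid is zero

for v_x in (1.0, 0.05):                    # nominal forward speeds [m/s]
    sigma_v = -v_y                         # Eq. (13) with v_Iy = 0
    delta_v = -np.arctan2(v_y, v_x)        # Eq. (21) with beta_I = 0
    print(f"v_x = {v_x}: std(sigma_v) = {sigma_v.std():.4f} m/s, "
          f"std(delta_v) = {delta_v.std():.4f} rad")
```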
## IV Controller Design Procedure
This section presents the design of the sliding mode controller with slip and skid compensation at the vehicle-level.
### _Tracking Error Dynamics_
The dynamics equations of the tracking error, \(e\), are required to be able to design the controller. Therefore, the tracking errors are defined based on the difference between the desired trajectory and the system states in the global coordinate system as follows:
\[\begin{bmatrix}e_{x}\\ e_{y}\\ e_{\theta}\end{bmatrix}=\begin{bmatrix}x^{d}-x\\ y^{d}-y\\ \theta^{d}-\theta\end{bmatrix} \tag{23}\]
In Eq. (23), \(d\) denotes the desired trajectory. To be able to represent the tracking errors in the robot's local frame, the following mapping is applied.
\[\varepsilon=\begin{bmatrix}\varepsilon_{1}\\ \varepsilon_{2}\\ \varepsilon_{3}\end{bmatrix}=\begin{bmatrix}\cos\theta&\sin\theta&0\\ -\sin\theta&\cos\theta&0\\ 0&0&1\end{bmatrix}\begin{bmatrix}e_{x}\\ e_{y}\\ e_{\theta}\end{bmatrix} \tag{24}\]
Taking the time derivative of Eq. (24) gives us the tracking error dynamics.
\[\begin{cases}\dot{\varepsilon}_{1}=\frac{d\varepsilon_{1}}{dt}=-\sin\theta\,\omega e_{x}+\cos\theta\,\dot{e}_{x}+\cos\theta\,\omega e_{y}+\sin\theta\,\dot{e}_{y}\\ \dot{\varepsilon}_{2}=\frac{d\varepsilon_{2}}{dt}=-\cos\theta\,\omega e_{x}-\sin\theta\,\dot{e}_{x}-\sin\theta\,\omega e_{y}+\cos\theta\,\dot{e}_{y}\\ \dot{\varepsilon}_{3}=\frac{d\varepsilon_{3}}{dt}=\dot{\theta}^{d}-\dot{\theta}=\omega^{d}-\omega\end{cases} \tag{25}\]
Substituting Eq. (24) in Eq. (25) leads to the tracking error dynamics as follows:
\[\begin{cases}\dot{\varepsilon}_{1}=\omega e_{2}+v_{x}^{d}\cos \varepsilon_{3}+\omega^{d}x_{0}\sin\varepsilon_{3}-v_{x}\\ \dot{\varepsilon}_{2}=(x_{0}-\varepsilon_{1})\omega+v_{x}^{d}\sin\varepsilon_{3}- \omega^{d}x_{0}\cos\varepsilon_{3}.\end{cases} \tag{26}\] \[\dot{\varepsilon}_{3}=\omega^{d}-\omega\]
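A compact implementation of the error mapping (23)-(24) (a sketch of ours, not the authors' code) reads:

```python
import numpy as np

def tracking_errors(q, q_des):
    """Map the global-frame errors (23) into the robot frame via Eq. (24)."""
    theta = q[2]
    e = np.asarray(q_des) - np.asarray(q)               # Eq. (23)
    T = np.array([[ np.cos(theta), np.sin(theta), 0.0],
                  [-np.sin(theta), np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    return T @ e                                        # [eps1, eps2, eps3]
```

In practice one may additionally wrap the heading error to \((-\pi,\pi]\) before the mapping.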
### _Sliding-mode Controller_
In this section, first, the sliding-mode controller is designed based on the tracking error dynamics and the dynamics model of the SSMR. Then the sign function is replaced with the _saturation_ function to reduce the controller's chattering and the stability of the controller is demonstrated. Finally, the singularity of the controller is investigated.
The objective of the controller is to regulate the tracking errors \(\varepsilon_{1}\) and \(\varepsilon_{2}\) to force the robot to follow the desired time-varying trajectory considering parametric uncertainty. The controller input is \(\mu\) (see Eq. (16)), which requires a multi-input multi-output controller design procedure. Note that due to the robot's physical limitations, the first and second derivatives of the tracking errors are bounded [3].
To design the sliding-mode controller, two sliding manifolds are defined as follows:
\[\begin{cases}s_{1}=\lambda_{1}\varepsilon_{1}+\dot{\varepsilon}_{1}\\ s_{2}=\lambda_{2}\varepsilon_{2}+\dot{\varepsilon}_{2}\end{cases},\qquad\lambda_{1},\lambda_{2}>0 \tag{27}\]
In Eq. (27), \(\lambda_{1}\) and \(\lambda_{2}\) are positive constants to have stable sliding manifolds. Both sliding manifolds in Eq. (27) are defined based on a proportional-derivative controller, which forces the robot to regulate the tracking error as well as the first derivative of it. Taking the first derivative of the sliding manifolds in Eq. (27) leads to:
\[\begin{split}\dot{s}_{1}=\frac{ds_{1}}{dt}=\lambda_{1}\dot{ \varepsilon}_{1}+\ddot{\varepsilon}_{1}=\dot{\omega}\varepsilon_{2}+\\ \omega[-\omega\varepsilon_{1}+v_{x}^{d}\sin\varepsilon_{3}- \omega^{d}x_{0}\cos\varepsilon_{3}+\omega x_{0}+\lambda_{1}\varepsilon_{2}] -\\ \lambda_{1}v_{x}-\dot{v}_{x}+v_{x}^{d}[-(\omega^{d}-\omega)\sin \varepsilon_{3}+\lambda_{1}\cos\varepsilon_{3}]+\\ \dot{v}_{x}^{d}\cos\varepsilon_{3}+\dot{\omega}^{d}x_{0}\sin \varepsilon_{3}+\\ \omega^{d}[x_{0}(\omega^{d}-\omega)\cos\varepsilon_{3}+\lambda_{1}x_{0}\sin \varepsilon_{3}]\end{split} \tag{28}\]
and
\[\begin{split}\dot{s}_{2}=\frac{ds_{2}}{dt}=\lambda_{2}\dot{ \varepsilon}_{2}+\ddot{\varepsilon}_{2}=-\omega\dot{\varepsilon}_{1}+\\ (x_{0}-\varepsilon_{1})\dot{\omega}+\dot{v}_{x}^{d}\sin\varepsilon _{3}+(\omega^{d}-\omega)v_{x}^{d}\cos\varepsilon_{3}-\\ \dot{\omega}^{d}x_{0}\cos\varepsilon_{3}+(\omega^{d}-\omega) \omega^{d}x_{0}\sin\varepsilon_{3}+\\ \lambda_{2}[(x_{0}-\varepsilon_{1})\omega+v_{x}^{d}\sin \varepsilon_{3}-\omega^{d}x_{0}\cos\varepsilon_{3}].\end{split} \tag{29}\]
In Eqs. (28) and (29), we encounter the time derivatives of the controller inputs, i.e., \(\dot{v}_{x}\) and \(\dot{\omega}\). To be able to determine these derivatives, the dynamics equation of the robot is required. Since the experimental studies are performed using a commercial SSMR (Pioneer 3-AT), the low-level controller based on the wheels' rotation speeds is already integrated. Therefore, the following dynamics equation is considered [41].
\[\dot{\mu}=\begin{bmatrix}\dot{v}_{x}\\ \dot{\omega}\end{bmatrix}=\begin{bmatrix}\frac{c_{3}}{c_{1}}\omega^{2}-\frac{c_{4}}{c_{1}}v_{x}\\ -\frac{c_{5}}{c_{2}}v_{x}\omega-\frac{c_{6}}{c_{2}}\omega\end{bmatrix}+\begin{bmatrix}\frac{1}{c_{1}}&0\\ 0&\frac{1}{c_{2}}\end{bmatrix}\begin{bmatrix}v_{r}\\ \omega_{r}\end{bmatrix} \tag{30}\]
In Eq. (30), \(v_{r}\) and \(\omega_{r}\) are the linear and angular velocities commanded to the low-level controller, and \(c_{1}\) to \(c_{6}\) are physical parameters of the robot. Note that since some of these physical parameters may depend on the hardware and the experimental setup, knowing them precisely could be challenging. Therefore, they are considered with \(\pm 25\%\) variation to make the controller robust against parametric uncertainties.
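A minimal sketch of this dynamics model is given below; the numerical values of \(c_{1}\) to \(c_{6}\) are hypothetical placeholders, since the true values are hardware-dependent.

```python
import numpy as np

# Hypothetical nominal values for c1..c6 (the paper only assumes they are
# known up to +/-25%); replace with identified values for a real robot.
c1, c2, c3, c4, c5, c6 = 1.0, 1.0, 0.1, 0.5, 0.1, 0.5

def mu_dot(v_x, omega, v_r, omega_r):
    """Velocity-level dynamics of Eq. (30): returns [dv_x/dt, d_omega/dt]."""
    dv_x = (c3 / c1) * omega**2 - (c4 / c1) * v_x + v_r / c1
    d_omega = -(c5 / c2) * v_x * omega - (c6 / c2) * omega + omega_r / c2
    return np.array([dv_x, d_omega])
```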
Substituting Eq. (30) in Eqs. (28) and (29) gives us
\[\dot{s}_{1}=h_{1}+\frac{1}{c_{2}}\varepsilon_{2}\omega_{r}-\frac{1}{c_{1}}v_{r} \tag{31}\]
\[\dot{s}_{2}=h_{2}+\frac{x_{0}-\varepsilon_{1}}{c_{2}}\omega_{r} \tag{32}\]
where,
\[\begin{split} h_{1}=\left(-\frac{c_{5}}{c_{2}}v_{x}\omega-\frac {c_{6}}{c_{2}}\omega\right)\varepsilon_{2}+\omega(-\omega\varepsilon_{1}+\\ v_{x}^{d}\sin\varepsilon_{3}-\omega^{d}x_{0}\cos\varepsilon_{3}+\omega x_{0}+ \lambda_{1}\varepsilon_{2})-\lambda_{1}v_{x}+\\ \left(-\frac{c_{3}}{c_{1}}\omega^{2}+\frac{c_{4}}{c_{1}}v_{x}\right)+v_{x}^{ d}[-(\omega^{d}-\omega)\sin\varepsilon_{3}+\\ \lambda_{1}\cos\varepsilon_{3}]+\dot{v}_{x}^{d}\cos\varepsilon_{3}+\dot{ \omega}^{d}x_{0}\sin\varepsilon_{3}+\omega^{d}[x_{0}(\omega^{d}-\\ \omega)\cos\varepsilon_{3}+\lambda_{1}x_{0}\sin\varepsilon_{3}],\end{split} \tag{33}\]
\[\begin{split} h_{2}=-\omega(\omega\varepsilon_{2}+v_{x}^{d}\cos \varepsilon_{3}+\omega^{d}x_{0}\sin\varepsilon_{3}-v_{x})+\\ (x_{0}-\varepsilon_{1})\left(-\frac{c_{5}}{c_{2}}v_{x}\omega-\frac{c_{6}}{c_{ 2}}\omega\right)+\dot{v}_{x}^{d}\sin\varepsilon_{3}+\\ (\omega^{d}-\omega)v_{x}^{d}\cos\varepsilon_{3}-\omega^{d}x_{0}\cos \varepsilon_{3}+(\omega^{d}-\\ \omega)\omega^{d}x_{0}\sin\varepsilon_{3}+\lambda_{2}[(x_{0}-\varepsilon_{1}) \omega+v_{x}^{d}\sin\varepsilon_{3}-\\ \omega^{d}x_{0}\cos\varepsilon_{3}].\end{split} \tag{34}\]
The control inputs are given as follows:
\[v_{x}^{r}=\hat{v}_{x}^{r}+\tilde{v}_{x}^{r} \tag{35}\]

\[\omega^{r}=\hat{\omega}^{r}+\tilde{\omega}^{r}, \tag{36}\]
where \(\hat{v}_{x}^{r}\) and \(\hat{\omega}^{r}\) are the equivalent control terms to stay on the sliding manifolds, and \(\tilde{v}_{x}^{r}\) and \(\tilde{\omega}^{r}\) are the complementary terms to reach the sliding manifolds while dealing with uncertainties.
The equivalent control terms are obtained by setting the sliding dynamics in Eqs. (31) and (32) to zero:
\[\dot{s}_{2}=0\;\rightarrow\;\hat{\omega}^{r}=-\frac{c_{2}h_{2}}{\hat{x}_{0}-\varepsilon_{1}} \tag{37}\] \[\dot{s}_{1}=0\;\rightarrow\;\hat{v}_{x}^{r}=c_{1}\left(h_{1}-\frac{\varepsilon_{2}h_{2}}{\hat{x}_{0}-\varepsilon_{1}}\right), \tag{38}\]
where,
\[\hat{x}_{0}=\frac{a+b}{2},\qquad a\leq x_{0}\leq b, \tag{39}\]
and \(a\) and \(b\) are physical characteristics of the robot (see Figure 1).
To reach the sliding manifolds in finite time,
\[\dot{s}_{i}\leq-k_{i}sign(s_{i}),\qquad i=1,2 \tag{40}\]
where \(k_{i}\) are positive constants. Therefore,
\[\widetilde{\omega}^{r}=-\left[\frac{\bar{c}_{2}}{x_{0}^{min}-|\varepsilon_{1}|} \big{(}-\bar{h}_{2}+k_{2}\big{)}\right]sign(s_{2}) \tag{41}\]
\[\tilde{v}_{x}^{r}=-\left[\bar{c}_{1}\left(\bar{h}_{1}+\frac{1}{c_{2}}\varepsilon_{2}\bar{\omega}^{r}-k_{1}\right)\right]sign(s_{1}), \tag{42}\]
where \(\bar{*}\) denotes the maximum value of the corresponding parameter, and \(x_{0}^{min}\) is the minimum value of \(x_{0}\), used to guarantee the robustness of the controller.
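The switching terms above, and the saturated variants introduced next, can be sketched as follows; the worst-case bounds (the barred quantities) are passed in as assumed constants.

```python
import numpy as np

def sat(x):
    """Saturation function: linear inside [-1, 1], clipped outside."""
    return np.clip(x, -1.0, 1.0)

def omega_switching(s2, eps1, c2_bar, h2_bar, k2, x0_min, gamma2=None):
    """Complementary term of Eq. (41); pass gamma2 for the sat version (Eq. (43))."""
    gain = c2_bar / (x0_min - abs(eps1)) * (-h2_bar + k2)
    if gamma2 is None:
        return -gain * np.sign(s2)   # discontinuous sign version
    return -gain * sat(s2 / gamma2)  # boundary-layer (sat) version
```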
The design of the sliding-mode controller is therefore completed and Eqs. (35) and (36) are the control inputs. This controller ensures the robust tracking of the desired trajectory with the defined uncertainties. However, due to having the _sign_ function in Eqs. (41) and (42), the controller suffers from chattering in actuators [3]. To rectify this problem, the _sign_ function is replaced with the _saturation_ (_sat_) function in Eqs. (41) and (42) as follows:
\[\tilde{\omega}^{r}=-\left[\frac{\bar{c}_{2}}{x_{0}^{min}-|\varepsilon_{1}|}\big{(}-\bar{h}_{2}+k_{2}\big{)}\right]sat\Big{(}\frac{s_{2}}{\gamma_{2}}\Big{)} \tag{43}\] \[\tilde{v}_{x}^{r}=-\Big{[}\bar{c}_{1}\left(\bar{h}_{1}+\frac{1}{c_{2}}\varepsilon_{2}\bar{\omega}^{r}-k_{1}\right)\Big{]}sat\Big{(}\frac{s_{1}}{\gamma_{1}}\Big{)}, \tag{44}\]
where \(\gamma_{1}\) and \(\gamma_{2}\) are positive constants that specify the boundary layer of the _sat_ function. The _sat_ function addresses the switching issue in the controller; however, the stability of the controller must be thoroughly re-examined after this modification. For that reason, Eq. (27) is rewritten as follows:
\[\begin{cases}\dot{\varepsilon}_{1}=s_{1}-\lambda_{1}\varepsilon_{1}\\ \dot{\varepsilon}_{2}=s_{2}-\lambda_{2}\varepsilon_{2}\end{cases} \tag{45}\]
which, coupled with the first derivatives of the sliding manifolds, gives:
\[\begin{cases}\dot{\varepsilon}_{1}=s_{1}-\lambda_{1}\varepsilon_{1}\\ \dot{\varepsilon}_{2}=s_{2}-\lambda_{2}\varepsilon_{2}\\ \dot{s}_{1}=h_{1}+\frac{1}{c_{2}}\varepsilon_{2}\omega^{r}-\frac{1}{c_{1}}v_{x}^{r}\\ \dot{s}_{2}=h_{2}+\frac{(\hat{x}_{0}-\varepsilon_{1})}{c_{2}}\omega^{r}\end{cases} \tag{46}\]
First, the errors outside of the boundary layer are considered. The Lyapunov function is defined as follows:
\[\begin{split}& V_{1}=\frac{1}{2}(\varepsilon_{1}^{2}+\varepsilon_ {2}^{2}),\qquad|s_{i}|\leq\gamma_{i},|\varepsilon_{i}|\geq 2\gamma_{i},\\ &\dot{V}_{1}=-(\lambda_{1}\varepsilon_{1}^{2}+\lambda_{2} \varepsilon_{2}^{2})+\varepsilon_{1}s_{1}+\varepsilon_{2}s_{2}\\ &\dot{V}_{1}\leq-(\lambda_{1}\varepsilon_{1}^{2}+\lambda_{2} \varepsilon_{2}^{2})+|\varepsilon_{1}|\gamma_{1}+|\varepsilon_{2}|\gamma_{2} \\ &\dot{V}_{1}\leq(1-\lambda_{1})\varepsilon_{1}^{2}+(1-\lambda_{2} )\varepsilon_{2}^{2}\\ &\lambda_{1},\lambda_{2}>1\,\to\,\dot{V}_{1}\leq 0\end{split} \tag{47}\]
Eq. (47) illustrates that if \(\lambda_{1},\lambda_{2}>1\), then \(\dot{V}_{1}\leq 0\) and, as a result, error trajectories starting outside the boundary layer converge into it.
Now, the stability of the error trajectories needs to be investigated once they get inside the boundary layer. The Lyapunov function is defined as follows:
\[\begin{split}& V_{2}=\frac{1}{2}({s_{2}}^{2}+\varepsilon_{2}^{2 }),\qquad|s_{2}|\leq\gamma_{2},|\varepsilon_{2}|\leq 2\gamma_{2},\\ &\dot{V}_{2}=-\frac{-\overline{h}_{2}+\overline{k}_{2}}{\gamma_{ 2}}{s_{2}}^{2}+\varepsilon_{2}s_{2}-\lambda_{2}\varepsilon_{2}^{2}\end{split} \tag{48}\]
\[\begin{split}&\dot{V}_{2}\leq-\frac{-\overline{h}_{2}+\overline{k }_{2}}{\gamma_{2}}{s_{2}}^{2}+|\varepsilon_{2}||s_{2}|-\lambda_{2}\varepsilon_{ 2}^{2}.\end{split}\]
In Eq. (48) choosing \(\overline{k}_{2}\) big enough to make \(-\overline{h}_{2}+\overline{k}_{2}=\overline{K}_{2}>0\) gives:
\[\begin{split}&\dot{V}_{2}\leq-\frac{\overline{K}_{2}}{\gamma_ {2}}{s_{2}}^{2}+|\varepsilon_{2}||s_{2}|-\lambda_{2}\varepsilon_{2}^{2}\\ &\dot{V}_{2}\leq-[|\varepsilon_{2}|\quad|s_{2}|]\begin{bmatrix} \lambda_{2}&-\frac{1}{2}\\ -\frac{1}{2}&\frac{\overline{K}_{2}}{\gamma_{2}}\end{bmatrix}\begin{bmatrix} |\varepsilon_{2}|\\ |s_{2}|\end{bmatrix}.\end{split} \tag{49}\]
Choosing \(A=\begin{bmatrix}\lambda_{2}&-\frac{1}{2}\\ -\frac{1}{2}&\frac{\overline{K}_{2}}{\gamma_{2}}\end{bmatrix}\) in Eq. (49) leads to \(\dot{V}_{2}\leq 0\) if \(\det(A)\geq 0\). Therefore:
\[\det(A)=\lambda_{2}\left(\frac{\overline{K}_{2}}{\gamma_{2}}\right)-\frac{1}{4} \geq 0\,\to\,\gamma_{2}\leq 4\lambda_{2}\overline{K}_{2} \tag{50}\]
Eq. (50) indicates the stability of \(s_{2}\) and \(\varepsilon_{2}\) inside of the boundary layer. The following Lyapunov function investigates the stability of \(s_{1}\) and \(\varepsilon_{1}\).
\[\begin{split}&V_{3}=\frac{1}{2}({s_{1}}^{2}+\varepsilon_{1}^{2}),\qquad|s_{1}|\leq\gamma_{1},|\varepsilon_{1}|\leq 2\gamma_{1}\\ &\dot{V}_{3}=s_{1}\left[-\frac{\varepsilon_{2}\overline{K}_{2}}{\hat{x}_{0}-\varepsilon_{1}}\Big{(}\frac{s_{2}}{\gamma_{2}}\Big{)}\Big{(}1+\frac{s_{1}}{\gamma_{1}}\Big{)}+\overline{K}_{1}\frac{s_{1}}{\gamma_{1}}\right]+\varepsilon_{1}s_{1}-\lambda_{1}\varepsilon_{1}^{2},\end{split} \tag{51}\]
\[\begin{split}&\dot{V}_{3}\leq\overline{K}_{1}\frac{s_{1}^{2}}{\gamma_{1}}-\frac{\varepsilon_{2}\overline{K}_{2}}{\hat{x}_{0}-\varepsilon_{1}}\Big{(}\frac{s_{2}}{\gamma_{2}}\Big{)}\Big{(}1+\frac{s_{1}}{\gamma_{1}}\Big{)}s_{1}+|\varepsilon_{1}||s_{1}|-\lambda_{1}\varepsilon_{1}^{2}\end{split} \tag{52}\]

\[\begin{split}&\dot{V}_{3}\leq\left(\overline{K}_{1}-\frac{2\overline{K}_{2}\gamma_{2}}{\hat{x}_{0}-\varepsilon_{1}}\right)\Big{(}\frac{s_{1}^{2}}{\gamma_{1}}\Big{)}-\frac{2\overline{K}_{2}\gamma_{2}}{\hat{x}_{0}-\varepsilon_{1}}s_{1}+|\varepsilon_{1}||s_{1}|-\lambda_{1}\varepsilon_{1}^{2}\end{split}\]
In Eq. (51), \(\overline{K}_{1}=-\overline{h}_{1}+\overline{k}_{1}>0\). Substituting Eq. (50) into Eq. (52) gives:
\[\begin{split}&\dot{V}_{3}\leq\left(\overline{K}_{1}+\frac{16\overline{K}_{2}^{2}\lambda_{2}}{|\hat{x}_{0}|+|\varepsilon_{1}|}+|\varepsilon_{1}|\right)\gamma_{1}-\lambda_{1}\varepsilon_{1}^{2}\\ &\text{If}\quad\gamma_{1}\leq\frac{\lambda_{1}\varepsilon_{1}^{2}}{\overline{K}_{1}+\frac{16\overline{K}_{2}^{2}\lambda_{2}}{|\hat{x}_{0}|+|\varepsilon_{1}|}+|\varepsilon_{1}|}\;\to\;\dot{V}_{3}\leq 0.\end{split} \tag{53}\]
The condition on \(\gamma_{1}\) in Eq. (53) ensures the stability of \(s_{1}\) and \(\varepsilon_{1}\) inside the boundary layer.
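As a quick numerical sanity check of the derived conditions, assuming illustrative worst-case margins (\(\overline{K}_{2}\) below is our own placeholder, not a value from the paper):

```python
lam1, lam2 = 1.5, 1.2   # manifold gains (Table 2); Eq. (47) needs both > 1
gamma2 = 0.1            # boundary-layer width of s2 (Table 2)
K2_bar = 2.0            # assumed margin -h2_bar + k2_bar > 0 (illustrative)

assert lam1 > 1 and lam2 > 1        # outer-layer condition, Eq. (47)
assert K2_bar > 0                   # required in Eq. (48)
assert gamma2 <= 4 * lam2 * K2_bar  # boundary-layer condition, Eq. (50)
print("gain set consistent with Eqs. (47), (48), and (50)")
```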
In conclusion, the above proof shows the stability of the closed-loop system with the _sat_ function. Now, the controller inputs in Eqs. (35) and (36) can be updated based on the estimated slip and undesired skid at the vehicle-level provided by the deep learning models as follows [33].
\[v_{x}^{c}=\frac{v_{x}^{r}}{1-s_{v}} \tag{53}\] \[\omega^{c}=\omega^{r}+\sigma_{v}/x_{0} \tag{54}\]
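These compensation laws are simple enough to state directly in code; a minimal sketch follows (the estimator outputs \(s_{v}\) and \(\sigma_{v}\) come from the deep learning models of the next section):

```python
def compensate(v_x_r, omega_r, s_v, sigma_v, x_0):
    """Vehicle-level slip/skid compensation, Eqs. (53)-(54).

    s_v:     estimated slip ratio (must satisfy s_v < 1)
    sigma_v: estimated undesired skid velocity
    x_0:     x-component of the robot's ICR (nonzero here; x_hat_0 = 0 is
             only the nominal value used inside the equivalent control term)
    """
    v_x_c = v_x_r / (1.0 - s_v)        # scale the linear command up to cancel slip
    omega_c = omega_r + sigma_v / x_0  # offset the angular command to cancel skid
    return v_x_c, omega_c
```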
### _Singularity Analysis_
At this stage, to be able to proceed, a singularity of the controller needs to be investigated, which is caused by the \(\hat{x}_{0}-\varepsilon_{1}\) term in the denominator of \(\hat{\omega}^{r}\) in Eq. (37). \(\hat{x}_{0}\) is the nominal value of \(x_{0}\) and was chosen in Eq. (39). A singularity happens when \(\varepsilon_{1}\to\hat{x}_{0}\), which makes the denominator zero. To avoid the singularity at this specific moment, \(\hat{x}_{0}\) should be chosen in a way that makes the numerator of \(\hat{\omega}^{r}\) zero as well. If we consider \(\hat{x}_{0}=0\), then
\[\begin{split}\hat{\omega}^{r}=\frac{c_{2}h_{2}|_{\hat{x}_{0}=0}}{\varepsilon_{1}}=\\ \frac{c_{2}}{\varepsilon_{1}}\biggl{\{}-\omega(\omega\varepsilon_{2}+v_{x}^{d}\cos\varepsilon_{3}-v_{x})\\ -\varepsilon_{1}\Bigl{(}-\frac{c_{5}}{c_{2}}v_{x}\omega-\frac{c_{6}}{c_{2}}\omega\Bigr{)}\\ +\dot{v}_{x}^{d}\sin\varepsilon_{3}+(\omega^{d}-\omega)v_{x}^{d}\cos\varepsilon_{3}\\ +\lambda_{2}(-\varepsilon_{1}\omega+v_{x}^{d}\sin\varepsilon_{3})\biggr{\}}\end{split} \tag{55}\]
In the above equation, if \(\varepsilon_{1}\to 0\), then according to Eq. (24), two situations are expected.
\[\begin{cases}1.\;\varepsilon_{2}\to 0,\;\varepsilon_{3}\to 0\;\Rightarrow\;v_{x}\to v_{x}^{d},\;\omega\to\omega^{d}\\ 2.\;\varepsilon_{2}\to 0,\;\varepsilon_{3}\to\pi\;\Rightarrow\;v_{x}\to-v_{x}^{d},\;\omega\to\omega^{d}\end{cases} \tag{56}\]
In both cases, we have \(\hat{\omega}^{r}=\frac{0}{0}\) and, as a result, L'Hôpital's rule can be applied [42]. For both scenarios,
\[\begin{split}\hat{\omega}^{r}=\lim_{\varepsilon_{1}\to 0}\frac{c_{2}\,\frac{d\,h_{2}|_{\hat{x}_{0}=0}}{d\varepsilon_{1}}}{\frac{d\varepsilon_{1}}{d\varepsilon_{1}}}\\ =\lim_{\varepsilon_{1}\to 0}c_{2}\frac{\Bigl{(}\frac{c_{5}}{c_{2}}v_{x}\omega+\frac{c_{6}}{c_{2}}\omega\Bigr{)}-\lambda_{2}\omega}{1}\\ =\tau\in\mathbb{R}.\end{split} \tag{57}\]
Eq. (57) indicates that the singularity of the designed controller can be avoided if zero is chosen as the nominal value of the \(x\)-component of the robot's _ICR_, i.e., \(\hat{x}_{0}=0\).
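The L'Hôpital argument can be sanity-checked symbolically. The sketch below fixes \(\hat{x}_{0}=0\) and substitutes the case-1 limits of Eq. (56) (\(\varepsilon_{2}\to 0\), \(\varepsilon_{3}\to 0\), \(v_{x}\to v_{x}^{d}\), \(\omega\to\omega^{d}\)) before taking \(\varepsilon_{1}\to 0\); the symbol names are generic placeholders.

```python
import sympy as sp

eps1, w, vx, lam2, c2, c5, c6 = sp.symbols(
    "epsilon_1 omega v_x lambda_2 c_2 c_5 c_6", positive=True)

# h2 at x_hat_0 = 0 with the case-1 limits substituted: the -omega(...) term,
# the (omega^d - omega) term, and the sin(eps3) terms all evaluate to zero.
h2_case1 = -eps1 * (-(c5 / c2) * vx * w - (c6 / c2) * w) + lam2 * (-eps1 * w)
omega_hat = c2 * h2_case1 / eps1
print(sp.limit(omega_hat, eps1, 0))  # finite: omega*(c5*v_x + c6 - c2*lambda_2)
```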
## V Slip and Undesired Skid Estimators
This paper utilizes the two previously developed deep learning models [6, 8] to estimate the robot's slipping (\(s_{v}\)) and undesired skidding (\(\sigma_{v}\)). The structure and details of the recommended models for slip (CNN-LSTM-AE) and undesired skid (CNN-LSTM) estimations are shown in Figure 2 and Table 1. Input sequences for these models are formulated utilizing data derived from the robot's IMU and wheel encoders. The slip estimation model is fed with the robot's commanded linear and angular velocities, roll and pitch angles, as well as angular velocities and linear
| Type | Undesired Skid | Slip |
| --- | --- | --- |
| Batch size | 128 | 64 |
| Conv1D (F, K, S) | 28, 3, 1 | 67, 5, 1 |
| Conv1D (F, K, S) | 32, 3, 1 | 73, 5, 1 |
| Average Pooling | 1 | 1 |
| Dropout | 0.0 | 0.2 |
| LSTM unit | 42 | 44 |
| LSTM unit | 121 | 50 |
| Dropout | 0.4 | 0.4 |
| Dense unit | 131 | 298 |
| Dense unit | 112 | – |
| Dropout | 0.5 | 0.0 |
| Dense unit | 1 | 1 |

Table 1: The hyperparameters of the undesired skid (CNN-LSTM) and slip (CNN-LSTM-AE) estimation models.
Figure 2: Left: Undesired skid estimation model (CNN-LSTM). Right: Slip estimation model (CNN-LSTM-AE).
accelerations in the robot's local frame. Furthermore, the undesired skid estimator receives the same input data along with the change in the yaw angle. The slip and undesired skid estimators have 83,210 and 129,775 trainable parameters, respectively. Importantly, the training and evaluation of these two models took place in the same location where the trajectory tracking controller was tested in the present study.
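As one concrete reading of Table 1, a minimal Keras sketch of the undesired-skid model (the CNN-LSTM column) could look as follows. The input window length and feature count are our assumptions (Table 1 does not specify them), the activations are assumed ReLU, and the autoencoder branch of the slip model is omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers

WINDOW, FEATURES = 50, 9  # assumed input sequence shape, not given in Table 1

skid_model = tf.keras.Sequential([
    layers.Conv1D(28, kernel_size=3, strides=1, activation="relu",
                  input_shape=(WINDOW, FEATURES)),
    layers.Conv1D(32, kernel_size=3, strides=1, activation="relu"),
    layers.AveragePooling1D(pool_size=1),
    layers.Dropout(0.0),
    layers.LSTM(42, return_sequences=True),
    layers.LSTM(121),
    layers.Dropout(0.4),
    layers.Dense(131, activation="relu"),
    layers.Dense(112, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1),  # regression output: undesired skid velocity
])
skid_model.compile(optimizer="adam", loss="mae")  # batch size 128 at fit time
```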
## VI Experimental Setup
The designed sliding-mode controller with slip and skid compensation at the vehicle-level was implemented on a skid-steering Pioneer 3-AT robot as it is a well-established SSMR platform [4]. Testing occurred on outdoor grass terrain characterized by uneven surfaces and varying slope angles, intentionally inducing both slipping and undesired skidding scenarios.
Figure 3 shows the proposed control structure. The Pioneer 3-AT is an all-terrain commercial robot suitable for research purposes. It is equipped with 4 DC motors, high-resolution optical encoders, a microcontroller, and a low-level PID controller. The robot can be controlled using a serial port and a C++ SDK called Advanced Robot Interface for Applications (ARIA), which is developed by the manufacturer [3]. The robot was also equipped with onboard sensors to estimate slip and skidding as well as for localization purposes. An Xsens IMU (MTi 3-series Development Kit, 100Hz) and an RTK-GPS (U-Blox C94-M8P, 5Hz) were mounted above the middle of the front axle to measure the robot's kinematic responses and its location/velocity, respectively. A Dell Latitude 5410 was mounted on the robot to collect the sensory data and control the robot in real-time through the Robot Operating System (ROS) Melodic, Ubuntu 18.04, and Python 3.6. It is worth mentioning that the slip and skid estimators require only the IMU and wheel encoders' raw data as the input dataset.
The controller was manually tuned, where Table 2 shows the hyperparameters of the controller. Due to the physical limitation of the robot, two saturation functions were considered in the control-feedback loop having \(|v_{x}^{r}|\leq 0.5\;m/sec\) and \(|\omega^{r}|\leq 0.3\;rad/sec\).
In addition to tracking errors, the distance tracking error (_dis_) and the root mean square error (RMSE) were considered to compare the SMC and SMC-SS performance as follows:
\[dis=\sqrt{e_{x}^{2}+e_{y}^{2}} \tag{58}\]
\[RMSE=\sqrt{\frac{\sum_{i=1}^{N}e_{i}^{2}}{N}}, \tag{59}\]
where \(N\) is the number of samples and \(e_{i}\) is the \(i\)-th sample of the tracking error under consideration.
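In code, both metrics are one-liners; a minimal sketch:

```python
import numpy as np

def distance_error(e_x, e_y):
    """Per-sample distance tracking error, Eq. (58)."""
    return np.sqrt(np.asarray(e_x) ** 2 + np.asarray(e_y) ** 2)

def rmse(e):
    """Root mean square error over the N recorded samples, Eq. (59)."""
    return np.sqrt(np.mean(np.asarray(e) ** 2))
```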
Finally, the non-parametric Friedman aligned ranking (FAR) test was used to check whether the SMC and SMC-SS controllers performed similarly. When the null hypothesis of the FAR test was rejected, the post hoc Finner test was applied to determine whether the difference between the two controllers was significant. For both tests, a significance level of 0.05 was used [43].
| Parameter | Value |
| --- | --- |
| \(x_{0}^{max}\) (cm) | 0.15 |
| \(x_{0}^{min}\) (cm) | -0.15 |
| \(\lambda_{1}\) | 1.5 |
| \(\lambda_{2}\) | 1.2 |
| \(k_{1}\) | 5.5 |
| \(k_{2}\) | 2.5 |
| \(\gamma_{1}\) | 0.1 |
| \(\gamma_{2}\) | 0.1 |

Table 2: Constant parameters of the controller.
Figure 3: The proposed control block-diagram
## VII Results
In this section, the performance of the proposed control scheme, incorporating slip and undesired skid compensation for SSMRs in outdoor and uneven terrains, is evaluated. To achieve this objective, we defined three specific manoeuvres for the robot to compare the controller's performance with slip and skid compensation (SMC-SS) and without (SMC). The defined manoeuvres are:
\(\bullet\) **Straight-line:** This is a constant trajectory (i.e., \(\dot{v}_{x}^{d}\) and \(\dot{\omega}^{d}\) are zero) that does not involve the robot's steering mechanism. It tests the controller's ability to track a stable trajectory while maintaining the robot's heading, and it yields a high SNR for slip estimation and a low SNR for undesired skid estimation.
\(\bullet\) **Circular:** This trajectory is also constant but requires the robot's steering mechanism. It presents an additional challenge compared to the straight-line manoeuvre as the robot must follow a curvilinear path while compensating for both slip and undesired skidding.
\(\bullet\) **Bow-shape:** This manoeuvre is a time-varying trajectory that demands the use of the steering mechanism. It serves to showcase the controller's capabilities in handling a combination of tasks, including following a straight-line, navigating a curvilinear path, and executing stand-still rotations. Additionally, it introduces varying levels of SNR, ranging from low to high, in both slip and undesired skidding scenarios.
These three manoeuvres were chosen based on their varying difficulty levels, introducing complexity for both the SMC and the slip/skid estimators. The experiments aim to demonstrate the controller's competence in diverse operational scenarios, highlighting its capacity for trajectory tracking, stabilization of the robot's heading, and effective compensation for slip and undesired skidding across different challenging terrains and manoeuvres.
Note that for all three manoeuvres, the robot started from the same location with the following initial errors to assess the controllers' performance across both transient and steady-state phases (\(e_{x}\) and \(e_{y}\) are in \(m\), and \(e_{\theta}\) in \(rad\)).
\[\begin{bmatrix}e_{x}\\ e_{y}\\ e_{\theta}\end{bmatrix}=\begin{bmatrix}0.3\\ 0.1\\ 0.0\end{bmatrix} \tag{60}\]
### _Straight-Line Trajectory_
For this experiment, the robot started from the initial location with the initial errors of Eq. (60). The desired trajectory was defined as a straight-line with constant linear and angular velocities as follows:
\[\begin{split}v_{x}^{d}&=0.3\;m/sec,\qquad\dot{v}_{x}^{d}=0\;m/sec^{2}\\ \omega^{d}&=0\;rad/sec,\qquad\dot{\omega}^{d}=0\;rad/sec^{2}\end{split}\]
Note that the desired trajectory is generated based on a virtual robot operating in an ideal condition without slipping and skidding. The Pioneer robot was driven using the SMC and then SMC-SS controllers.
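A virtual-robot reference of this kind can be generated by integrating ideal (slip- and skid-free) unicycle kinematics; a minimal forward-Euler sketch, with the time step as an assumption:

```python
import numpy as np

def reference_trajectory(v_x_d, omega_d, T, dt=0.05):
    """Integrate ideal kinematics; v_x_d and omega_d map time -> velocity."""
    steps = int(T / dt)
    x = y = theta = 0.0
    poses = np.zeros((steps, 3))
    for k in range(steps):
        t = k * dt
        x += v_x_d(t) * np.cos(theta) * dt
        y += v_x_d(t) * np.sin(theta) * dt
        theta += omega_d(t) * dt
        poses[k] = (x, y, theta)
    return poses

# Straight-line reference of this section: v_x^d = 0.3 m/s, omega^d = 0
straight = reference_trajectory(lambda t: 0.3, lambda t: 0.0, T=30.0)
```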
The robot was driven three times with each controller, and the results are summarized in Table 3. The table shows that the SMC-SS controller consistently improved the robot's tracking of the defined straight line in all three experiments. The results of the first experiment are visualized in this section, and those of the second experiment are given in the Appendix. The average performance of the SMC and SMC-SS controllers is also given in Table 3.
Figure 4 to Figure 6 and Table 3 show the experimental results of the proposed SMC and SMC-SS controllers following the straight line versus time. Figure 4-top shows that with each controller the robot starts from the origin, regulates the initial error, and then stays on the desired trajectory. Figure 4-bottom and Table 3 show that with the SMC-SS controller the robot completes the manoeuvre with less tracking error, achieving on average 27.91% and 21.79% improvement in the mean and RMS of the distance error; this is an important consideration when driving in a narrow passage, for example, an orchard.
Uneven terrain can induce speed differences between the left and right wheels and, as a result, generate undesired skidding and cause fluctuations in the heading angle of the robot. Although both controllers aimed to regulate \(q^{d}=[\varepsilon_{1},\varepsilon_{2}]\), the SMC-SS steers the robot with 47.56% less heading-angle fluctuation in the RMS of \(e_{\theta}\) due to its consideration of the vehicle's slipping and undesired skidding (Table 3).
Figure 7 shows the robot's actual undesired skidding and slipping at the vehicle-level during the straight-line manoeuvre. Figure 7-left explains \(e_{y}\) and \(e_{\theta}\) in Figure 6 for this manoeuvre. For example, the jump of \(e_{y}\) and \(e_{\theta}\) with the SMC controller between 15 and 20 seconds are related to the undesired skidding at this moment for the robot. Figure 7-right shows that the robot experiences respectively about 60% and 80% of slip at the beginning of the manoeuvre with SMC-SS and SMC controllers, where more measurement and estimation error might be expected due to relatively lower SNR at such moments [6, 8]. Then the robots' slipping reduces to about 20% for the remainder of the manoeuvre with both controllers. It is important to emphasize that the aim of the controller was not to reduce the amount of the robot's slip and undesired skid but rather to accurately follow the given trajectory. Consequently, the assessment of the controllers' performance does not centre around the quantification of these two parameters.
### _Circular Trajectory_
The performance of the SMC and SMC-SS was evaluated following a curvilinear trajectory. This is a challenging trajectory for the SSMR due to the skid-based steering mechanism of this robot. The following desired linear and angular velocities for the robot were considered.
\[v_{x}^{d}=0.2\;m/sec,\qquad\dot{v}_{x}^{d}=0\;m/sec^{2},\]

with a constant desired angular velocity \(\omega^{d}\) (\(\dot{\omega}^{d}=0\;rad/sec^{2}\)).
Figure 5: Trajectories of the SMC and SMC-SS controllers following the straight-line trajectory; (top) \(x\), (middle) \(y\), (bottom) \(\theta\).
Figure 6: Tracking errors of the SMC and SMC-SS controllers following the straight-line trajectory; (top) \(e_{x}\), (middle) \(e_{y}\), (bottom) \(e_{\theta}\).
Figure 7: The robot’s actual undesired skidding (left) and slipping (right) following the straight-line trajectory.
Figure 11 shows the robot's undesired skidding and slipping at the vehicle-level on the circular trajectory. Figure 11-left shows that the robot experiences a high amount of undesired skidding with both controllers between 20 and 25 seconds, but the compensation ability of the SMC-SS helps the robot pass this moment with relatively smaller \(\mathbf{e}_{y}\) and \(\mathbf{e}_{\theta}\). In addition, the high slippage between 18 and 20 seconds with the SMC controller causes a tracking error in the \(x\)-direction (Figure 11-right), which takes the controller time to reduce because it has no real-time compensation for it (Figure 10-top).
### _Bow-Shape Trajectory_
The last test was conducted to evaluate the performance of the SMC-SS in comparison with the SMC for following a more complicated trajectory. The bow-shape trajectory, similar to the circular one, requires the steering mechanism of the robot during the manoeuvre. Moreover, this trajectory has cusp points where the robot needs to change its orientation sharply. At the cusp points, the robot needs a pure rotation, which is one of the abilities of the SSMR. The desired trajectory for the robot is defined as follows:
\[\begin{split}v_{x}^{d}&=0.2\sin(0.1t)\;m/sec,\\ \dot{v}_{x}^{d}&=0.02\cos(0.1t)\;m/sec^{2}\\ \omega^{d}&=0.2\cos(0.1t)\;rad/sec,\\ \dot{\omega}^{d}&=-0.02\sin(0.1t)\;rad/sec^{2}\end{split}\]
For this manoeuvre, two experiments were performed with both controllers and the overall performance of the controllers is given in Table 5. The results of the first and second experiments are visualized in this section and the Appendix, respectively.
The experimental results of the robot following the bow-shape trajectory are presented in Figure 12 to Figure 14, Table 6, and Table 7. During this manoeuvre, the robot needs to constantly change its orientation as well as pass two sharp rotations at the cusp points. Figure 12-top shows the trajectory of the robot with both controllers. This figure shows that the robot handles the two sharp rotations and stays on the desired trajectory with less error using the SMC-SS controller. On the other hand, the robot experiences a noticeable overshoot after passing each cusp point with the SMC controller. Figure 12-bottom shows that the robot converges slightly faster with the SMC-SS controller and, according to Table 7, its mean and RMS distance errors are on average 27.41% and 26.36% lower, respectively.
Figure 13 and Figure 14-top and middle show the \(x\) and \(y\)-component of the robot's trajectory and their equivalent tracking errors during this manoeuvre. In these figures, the first and second cusp points respectively happen around 40 and 80 seconds and the tracking error before and after these points illustrates the performance improvement by the SMC-SS controller.
Figure 15 shows the robot's actual undesired skidding and slipping at the vehicle-level following the bow-shape trajectory. Note that in this case, the longitudinal and lateral movements of the robot in the global coordinate system are in the \(y\) and \(x\)-direction, respectively. Therefore, the undesired skidding and slipping of the robot concern mostly \(\mathbf{e}_{x}\) and \(\mathbf{e}_{y}\), respectively. Figure 15-left shows that the robot experiences jumps in undesired skidding with the SMC controller at around 50, 60, and 90 seconds, which correspondingly cause high tracking errors in the \(x\)-direction with this controller (Figure 14-top). Figure 15-right shows two sustained high-slip conditions (\(s_{v}\geq 70\%\)) with both controllers around 40 and 80 seconds as the robot passes the cusp points, where it experiences high slippage at low speed. This figure also shows high slippage with the SMC-SS controller after the first cusp point at around 50 seconds.
Figure 8: Experimental results of the SMC and SMC-SS controllers following the circular trajectory; (top) trajectory, (bottom) distance error.
| Trial | M _dis_ | RMS _dis_ | M \(|e_{\theta}|\) | RMS \(e_{\theta}\) |
| --- | --- | --- | --- | --- |
| 1 | 9.16, 6.83 | 12.96, 9.18 | 20.07, 18.94 | 36.27, 32.42 |
| 2 | 10.32, 7.31 | 14.66, 11.16 | 21.51, 18.10 | 35.13, 31.91 |
| **Average** | 9.74, 7.07 | 13.81, 10.17 | 20.79, 18.52 | 35.70, 32.17 |

Table 5: Performance of the SMC and SMC-SS controllers tracking the bow-shape trajectory under uneven grass terrain conditions. Each cell lists SMC, SMC-SS; M: mean of; _dis_ in cm; \(e_{\theta}\) in degrees.
Figure 11: The robot’s actual undesired skidding (left) and slipping (right) following the circular trajectory.
Figure 10: Tracking errors of the SMC and SMC-SS controllers following the circular trajectory; (top) \(e_{x}\), (middle) \(e_{y}\), (bottom) \(e_{\theta}\).
Figure 9: Trajectories of the SMC and SMC-SS controllers following the circular trajectory; (top) \(x\), (middle) \(y\), (bottom) \(\theta\).
Figure 13 and Figure 14-bottom show the robot's heading angle and \(e_{\theta}\). The desired heading angle has two cusp points around 40 and 80 seconds, where the robot needs to sharply change its direction to be able to follow the desired trajectory. These figures and Table 7 show that the SMC-SS controller performed better than the SMC by 10.92% and 9.89% in the mean of \(|e_{\theta}|\) and the RMS of \(e_{\theta}\), respectively. Overall, the results show that the SMC-SS controller provides a more accurate performance in tracking the bow-shape trajectory due to its compensation for vehicle slipping and skidding.
### _Significance Analysis_
Table 6 and Table 7 summarize the performance of the SMC and SMC-SS controllers across all experiments for all trajectories. Overall, the experimental results in Table 7 show the better performance of the sliding-mode controller with slip and undesired skid compensation at the vehicle-level, with, on average, more than 27% improvement in the distance error over the three manoeuvres. The highest and lowest improvements in the mean distance error are achieved for the circular and bow-shape trajectories, about 38% and 27%, respectively. The circular trajectory is a challenging path for SSMR robots, and the slip and undesired skid compensation reduced the tracking error. On the other hand, the bow-shape trajectory is challenging for the SSMR as well as for the compensation system. During the bow-shape trajectory, the robot experiences two sharp rotations at low speed at the cusp points, which decreases the SNR and causes difficulty for the estimators. However, the SMC-SS still improved the tracking performance on the bow-shape trajectory, similar to the straight-line one.
Figure 13: Trajectories of the SMC and SMC-SS controllers following the bow-shape trajectory; (top) \(x\), (middle) \(y\), (bottom) \(\theta\).
Figure 14: Tracking errors of the SMC and SMC-SS controllers following the bow-shape trajectory; (top) \(e_{x}\), (middle) \(e_{y}\), (bottom) \(e_{\theta}\).
Figure 15: The robot’s actual undesired skidding (left) and slipping (right) following the bow-shape trajectory.
Although both the CNN-LSTM-AE and CNN-LSTM models were tested before [6, 8], their performance was again evaluated during the trajectory tracking experiments of this paper. Note that the performance of the slip and undesired skid estimators was evaluated using the data of all 8 manoeuvres of each controller (i.e., straight-line: 3, circular: 3, and bow-shape: 2 experiments), but only the first experiment of each trajectory is visualized here.
Table 9 and Table 10 show the performance of the slip and undesired skid estimators on grass, respectively. These tables show that the slip estimator achieves an MAE of 7.06% and a SMAPE of 15.74%, while the undesired skid estimator achieves an MAE of 12.35 _mm/sec_ and a SMAPE of 26.43%.
The slip estimator's performance in this paper's experiments differs slightly from our previous slip estimation paper [8]. The first reason for this difference could be that the robot was tested only on grass in these experiments, whereas it was driven on grass, sand, clay, and gravel for the performance analysis in the slip estimation paper [8]. The second reason could be the bow-shape trajectory, where the slip estimator faces higher estimation errors due to the low SNR at the cusp points. It is worth mentioning that the slip estimator performs with a 5.14% MAE when only the straight-line and circular trajectories are considered, which is similar to its performance in the slip estimation paper [8]. Moreover, due to the high initial errors, the robot starts all manoeuvres with the maximum wheel rotation speeds, resulting in high estimation errors at those moments [8].
In contrast, the undesired skid estimator performs almost identically to the model in our skid estimation paper [6]. The reason could be the robustness of the velocity-based undesired skid definition to measurement noise, which prevented the model from being affected by the low SNR during the manoeuvres, especially at the cusp points and at the beginning of the robot's movement.
Figure 16 shows the response of the slip estimator during the first experiment of each trajectory. In this figure, the first 900 samples belong to the straight-line and circular trajectories, and the remainder represents the robot on the bow-shape trajectory. Figure 16-top and middle show higher errors at the four cusps of the bow-shape trajectory (two for the SMC and two for the SMC-SS controller), around samples 1100, 1300, 1650, and 1850. Figure 16-bottom shows the distribution of the estimation error. This distribution, similar to the one presented in the slip estimation paper [8], shows that the residuals are roughly evenly distributed around zero, which is desirable.
Figure 17 shows the performance of the undesired skid estimator during the first experiment of each trajectory. This figure shows that the CNN-LSTM model not only performs consistently well throughout all three trajectories but also follows the right direction of the actual undesired skidding with 97.73% accuracy (Table 10).
## VIII Conclusion
This paper presented a trajectory-tracking controller with slip and undesired skid compensation at the vehicle-level using sliding-mode control and deep learning techniques for outdoor environments. The kinematics model of the SSMR was modified to consider slipping and undesired skidding at the vehicle-level, and a sliding-mode controller was designed to regulate the tracking errors. The robot's slipping and undesired skidding were estimated using two previously validated deep learning models from our other works [6, 8] and then fed to the control-feedback loop to compensate for them. Three desired trajectories were defined for the robot to follow on uneven grass terrain. The results showed that the proposed slip and undesired skid compensation technique improved the mean tracking distance error by more than 27%, highlighting the efficacy of the developed compensation technique in real-world scenarios. It is essential to note that while the evaluation focuses on SSMRs, the proposed slip and undesired skid compensation technique is not limited to this specific type of mobile robot, showcasing its potential applicability across various robotic platforms.
The novel contribution lies in the integration of robust control and deep learning techniques, enabling real-time compensation for slip and undesired skid at the vehicle-level in unforeseen outdoor terrains. By redefining the WTI representation using just two slip and undesired skid parameters at the vehicle-level, rather than the conventional approach requiring two slip parameters for each wheel, this research considerably simplifies the compensation process. This paper demonstrates that the previously developed deep learning slip and undesired skid estimators can be deployed in real time to significantly improve the performance of the trajectory tracking system in outdoor environments. This work not only refines our understanding of the effect of WTI on the dynamics of the robot but also paves the way for more streamlined and effective navigation strategies in real-world, unpredictable outdoor environments.
| Model | MAE (\(\downarrow\)) | SMAPE (\(\downarrow\)) | F1 (\(\uparrow\)) |
| --- | --- | --- | --- |
| CNN-LSTM-AE | 7.06 | 15.74 | 87.01 |

Table 9: Regression and classification results of the slip estimator during the 8 trajectory tracking experiments (\(\uparrow\), \(\downarrow\): higher and lower values desirable, respectively).

| Model | MAE (\(\downarrow\)) | SMAPE (\(\downarrow\)) | Acc (\(\uparrow\)) |
| --- | --- | --- | --- |
| CNN-LSTM | 12.35 | 26.43 | 97.73 |

Table 10: Regression and classification results of the undesired skid estimator during the 8 trajectory tracking experiments (\(\uparrow\), \(\downarrow\): higher and lower values desirable, respectively).
## Acknowledgment
The authors would like to thank Dr. Peter Donelan for his valuable comments on the singularity problem of the sliding-mode controller.
|
2302.14686 | Approximately Stationary Bandits with Knapsacks | Bandits with Knapsacks (BwK), the generalization of the Bandits problem under
global budget constraints, has received a lot of attention in recent years.
Previous work has focused on one of the two extremes: Stochastic BwK where the
rewards and consumptions of the resources of each round are sampled from an
i.i.d. distribution, and Adversarial BwK where these parameters are picked by
an adversary. Achievable guarantees in the two cases exhibit a massive gap:
No-regret learning is achievable in the stochastic case, but in the adversarial
case only competitive ratio style guarantees are achievable, where the
competitive ratio depends either on the budget or on both the time and the
number of resources. What makes this gap so vast is that in Adversarial BwK the
guarantees get worse in the typical case when the budget is more binding. While
``best-of-both-worlds'' type algorithms are known (single algorithms that
provide the best achievable guarantee in each extreme case), their bounds
degrade to the adversarial case as soon as the environment is not fully
stochastic.
Our work aims to bridge this gap, offering guarantees for a workload that is
not exactly stochastic but is also not worst-case. We define a condition,
Approximately Stationary BwK, that parameterizes how close to stochastic or
adversarial an instance is. Based on these parameters, we explore what is the
best competitive ratio attainable in BwK. We explore two algorithms that are
oblivious to the values of the parameters but guarantee competitive ratios that
smoothly transition between the best possible guarantees in the two extreme
cases, depending on the values of the parameters. Our guarantees offer great
improvement over the adversarial guarantee, especially when the available
budget is small. We also prove bounds on the achievable guarantee, showing that
our results are approximately tight when the budget is small. | Giannis Fikioris, Éva Tardos | 2023-02-28T15:55:52Z | http://arxiv.org/abs/2302.14686v2 | # Approximately Stationary Bandits with Knapsacks
###### Abstract
Bandits with Knapsacks (BwK), the generalization of the Multi-Armed Bandit problem under budget constraints, has received a lot of attention in recent years. It has numerous applications, including dynamic pricing, repeated auctions, ad allocation, network scheduling, etc. Previous work has focused on one of the two extremes: Stochastic BwK where the rewards and consumptions of the resources of each round are sampled from an i.i.d. distribution, and Adversarial BwK where these parameters are picked by an adversary. Achievable guarantees in the two cases exhibit a massive gap: No-regret learning is achievable in the stochastic case, but in the adversarial case, only competitive ratio style guarantees are achievable, where the competitive ratio depends either on the budget or on both the time and the number of resources. What makes this gap so vast is that in Adversarial BwK the guarantees get worse in the typical case when the budget is more binding. While "best-of-both-worlds" type algorithms are also known (single algorithms that provide the best achievable guarantee in both extreme cases), their guarantees degrade to the adversarial case as soon as the environment is not fully stochastic.
Our work aims to bridge this gap, offering guarantees for a workload that is not exactly stochastic but is also not worst-case. We define a condition, _Approximately Stationary BwK_, that parameterizes how close to stochastic or adversarial an instance is. Based on these parameters, we explore what is the best competitive ratio attainable in BwK. We explore two algorithms that are oblivious to the values of the parameters but guarantee competitive ratios that smoothly transition between the best possible guarantees in the two extreme cases, depending on the values of the parameters. Our guarantees offer great improvement over the adversarial guarantee, especially when the available budget is small. We also prove bounds on the achievable guarantee, showing that our results are approximately tight when the budget is small.
## 1 Introduction
_Bandits with Knapsacks_ (BwK) was first introduced in [1] and models a natural extension of the _Multi-Armed Bandit_ (MAB) problem. In MAB a player repeatedly chooses one of many actions, each providing an unknown reward. To maximize her total reward, a player needs to balance exploration and exploitation while picking her actions. In the budgeted version of the problem (BwK), the player has the same objective but also needs to be mindful of different resources: each action consumes some amount of each resource; if any resource is depleted the player cannot get any more rewards. BwK was initially formulated inspired by numerous practical problems where a player wants to maximize her reward with constraints: participating in repeated auctions, dynamic pricing, ad allocation, network routing/scheduling, etc.
Previous work on BwK has focused on two extreme cases of the problem. First, in _Stochastic BwK_ the environment (rewards and consumptions) in each round is sampled from a distribution identical and independent
of other rounds. Second, in _Adversarial BwK_ the environment is picked each round by an adversary. Unlike MAB, in BwK there is a clear dichotomy in guarantees between the two cases. No-regret learning, i.e., additive error sublinear in the total number of rounds, is achievable in the stochastic case, but not in the adversarial case. Instead, work in Adversarial BwK focuses on bounding the achievable _competitive ratio_, i.e., the multiplicative error on the achievable reward. A line of work that tries to connect the adversarial and stochastic cases is "best-of-both-worlds" type results, where an algorithm achieves guarantees in both settings, without knowing if the environment is adversarial or stochastic, e.g., see [1]. However, the guarantee offered degrades to the adversarial guarantee as soon as the setting is not fully stochastic.
In this work, we aim to bridge this vast gap between Stochastic and Adversarial BwK and offer performance guarantees that smoothly degrade depending on the deviation from stochasticity, extending the "best-of-both-words" style guarantees to cases between the two extremes. We call our framework _Approximately Stationary BwK_, which offers a natural interpolation between Stochastic and Adversarial BwK. Adversarial BwK is much harder because of the potential for _huge heterogeneity of environments between rounds_. In the Approximately Stationary BwK problem, we limit this heterogeneity by assuming that the change in _expected_ rewards and consumptions of any arm is limited. We do not assume that the player is aware of the parameters limiting the change while running the algorithm. A natural constraint is that if \(x_{t}\) is the expected reward of some action in round \(t\), it must hold that \(\min x_{t}\geq\sigma\max x_{t}\), with \(\sigma\) limiting the variability of the expectation. In Stochastic BwK it must hold that \(\sigma=1\) (actually, distributions are identical across rounds, not just expectations), and in Adversarial BwK it can be that \(\sigma=0\).
There are multiple settings where the value of \(\sigma\) is neither of the two extremes. Consider repeated auctions where every round a budget-limited player bids to win a certain item. If the player's value for the item and its price are independently and identically distributed across rounds, then the setting is completely stochastic, and \(\sigma=1\). However, this is rarely the case in practice. The distribution of values might change across rounds (e.g., seasonal differences) or the price might be controlled by other players' bids who change their strategy or by a central entity that lowers or raises the price. This means that \(\sigma<1\), but the values and prices are not adversarial or arbitrary, i.e., the variability of the price is limited. Our goal is to have an algorithm that will have the best guarantees given the value of \(\sigma\), without knowing its actual value. In this paper, we show that it is possible to achieve performance that degrades gradually depending on the value \(\sigma\). For the most interesting range of the player's budget, we obtain close to optimal performance for all values of \(\sigma\) without assuming that the player is aware of this parameter.
Overview of our results.We introduce our model, _Approximately Stationary BwK_, in Section 3. Our model interpolates between Stochastic and Adversarial BwK by having two parameters that limit how much the expected value of the rewards and consumptions of any action can change across rounds. More specifically, the parameter \(\sigma_{r}\in[0,1]\) limits how much the reward of any action can fluctuate across rounds: if \(r_{t}(a)\) is the _expected reward_ of action \(a\) in round \(t\), we require that \(\min_{t}r_{t}(a)\geq\sigma_{r}\max_{t}r_{t}(a)\) for all actions \(a\). Note that we apply this definition to the expected reward of an arm since even in Stochastic BwK the rewards of a single arm can range from \(0\) to \(1\). Similarly, the parameter \(\sigma_{c}\in[0,1]\) limits the consumptions of any action across rounds: if \(c_{t,i}(a)\) is the _expected consumption_ of resource \(i\) by action \(a\) in round \(t\), we require that \(\min_{t}c_{t,i}(a)\geq\sigma_{c}\max_{t}c_{t,i}(a)\) for all actions \(a\) and resources \(i\). A sequence of rewards and consumptions that satisfies the above constraints is called \((\sigma_{r},\sigma_{c})\)_-stationary_. We assume that rewards and consumptions are \((\sigma_{r},\sigma_{c})\)-stationary but make no assumptions beyond this, so we can think that these values are set by a possibly adaptive adversary. We note that in Stochastic BwK the adversary is \((1,1)\)-stationary and in Adversarial BwK the adversary is \((0,0)\)-stationary. Our framework naturally generalizes "best-of-both-worlds" approaches, offering guarantees not only in fully stochastic and adversarial environments but also in environments between the two extremes.
As is standard we assume without loss of generality that the rewards and consumptions are non-negative and bounded by \(1\) and every resource has budget \(B\). In adversarial BwK the best competitive ratio depends on the
player's average (or per round) budget. We will use \(\rho=B/T\) to denote the player's per-round budget. Given that the consumptions are bounded by \(1\) each round, \(\rho=1\) means that the player is not budget limited; in more typical cases players have budgets only for a small fraction of the items available. Our goal is to design algorithms that guarantee a fraction of the optimal solution when the player's budget per round is \(\rho\) and the adversary is \((\sigma_{r},\sigma_{c})\)-stationary. We denote the best achievable fraction with \(\alpha_{\rho}(\sigma_{r},\sigma_{c})\). We know that there are algorithms that have \(\alpha_{\rho}(1,1)=1\) and \(\alpha_{\rho}(0,0)=\rho\), and these are best possible when \(\rho\) is a constant independent of the time horizon, but nothing is known for intermediate values. This effectively means that previous work can only guarantee \(\alpha_{\rho}(\sigma_{r},\sigma_{c})=\rho\) when \(\sigma_{r}<1\) or \(\sigma_{c}<1\). This is an enormous and unnatural gap, especially in the typical case when \(\rho\) is small. We study algorithms that are oblivious to the values of \(\sigma_{r}\) and \(\sigma_{c}\) and achieve a fraction of the optimal solution \(\alpha_{\rho}(\sigma_{r},\sigma_{c})\) that is continuous and increasing in both arguments and satisfies \(\alpha_{\rho}(1,1)=1\) and \(\alpha_{\rho}(0,0)=\rho\).
All of our guarantees are against an _adaptive adversary_, an adversary that is restricted to be \((\sigma_{r},\sigma_{c})\)-stationary, but beyond this restriction, is allowed to pick the distributions of rewards and consumptions of each round based on outcomes in previous rounds. This is in contrast to an _oblivious adversary_, who picks all the rewards and consumptions upfront. Allowing an adaptive adversary is important: it makes our guarantees apply when the algorithm is used in a multi-agent game setting, e.g. in repeated auctions, where prices depend on other agents' bids, who are all adaptive to the history of play. All the guarantees we present in this paper are against an adaptive adversary, which, to the best of our knowledge, are the first such guarantees for Adversarial BwK.
In Section 5 we present our first guarantee. We show that we can achieve a guarantee of \(\alpha_{\rho}(\sigma_{r},\sigma_{c})\geq\rho+\sigma_{r}(\sigma_{c}-\rho)^{+}\) (Theorem 5.2). The most interesting range of parameters is when \(\rho\) is small (and therefore the gap between Stochastic and Adversarial BwK is biggest) and \(\sigma_{r}\sigma_{c}\) is much bigger than \(\rho\). In this case, our guarantee becomes approximately \(\sigma_{r}\sigma_{c}\). This is in stark contrast to the guarantee that previous work would suggest, \(\rho\). An alternate, naive approach would be to use an algorithm assuming a fully stochastic setting, which would yield a guarantee of \(\approx\sigma_{r}^{2}\sigma_{c}^{2}\) (see Section 5), two orders of magnitude smaller. Our results show that even if the player has a small budget, as long as the environment is not completely adversarial, good guarantees are achievable. We note that small budget is indeed the most common case: Typical budgets are far from sufficient for all items, so \(\rho\) is small, and the environment is often less variable and independent of the player's budget. For example, in repeated auctions, we expect expected item prices to fluctuate across rounds but that fluctuation to not depend on the player's small budget.
In Section 6 we provide a bound on the achievable guarantees any algorithm can get. The upper bound of Theorem 6.1 on \(\alpha_{\rho}(\sigma_{r},\sigma_{c})\) is using only one resource and an oblivious adversary. When \(\rho\) is much smaller than \(\sigma_{r}\sigma_{c}^{2}\), Theorem 6.1 shows that \(\alpha_{\rho}(\sigma_{r},\sigma_{c})\lessapprox\sigma_{r}\sigma_{c}\), making the result of Theorem 5.2 almost tight in that case. The bound uses that any algorithm needs to conserve its budget because of the uncertainty of the future. Without knowing the values of \(\sigma_{r}\) and \(\sigma_{c}\) it is not possible to utilize the reward of the optimal action in the initial rounds, in fear of the adversary being \((0,0)\)-stationary.
Our first guarantee in Theorem 5.2 is close to optimal when both \(\sigma_{r}\) and \(\sigma_{c}\) are much bigger than \(\rho\). When either \(\sigma_{c}\) or \(\sigma_{r}\) are similar to \(\rho\) the guarantee is very small: if \(\sigma_{c}\leq\rho\) the theorem only provides the adversarial guarantee of \(\alpha_{\rho}(\sigma_{r},\sigma_{c})\geq\rho\), and when \(\sigma_{r}\leq\rho\) the guarantee is similarly small. In Section 7 we offer better guarantees for the case when \(\sigma_{c}\) and \(\rho\) are small, but \(\sigma_{r}\) is larger. Our improved guarantee requires an additional condition to approximate stationarity: the total per-round change of the consumption of any action is sublinear in the total number of rounds. Under this assumption and \((\sigma_{r},\sigma_{c})\)-stationarity, we provide improved bounds in Theorem 7.2. The improvement is most impressive when there is only one resource (e.g., money), \(\sigma_{c}\) is close to \(\rho\), and \(\sigma_{r}\) is much larger, in which case the guarantee is close to the best possible (see Theorem 6.1).
Theorem 5.2 is based on a simplified version of the algorithm in [11] (see Algorithm 1 in Section 4), while to achieve the guarantee of Theorem 7.2 we need to break the time horizon into smaller batches and restart the algorithm periodically (see Algorithm 2 in Section 7). The main technical lemmas leading to the guarantee in both cases prove that the rewards of our algorithms have no-regret against the total reward of any action across all rounds but scaled down to observe the average consumption bound in each iteration. This property is not
useful in Stochastic BwK, where one knows that the optimal arm has low consumption, or in Adversarial BwK, where the rewards of the optimal arm can become \(0\) after a certain round. In Approximately Stationary BwK, where the reward of the optimal arm cannot become \(0\), but its consumption can increase, guaranteeing a fraction of the reward from the entire period is a very useful property. To prove the improved guarantee of Theorem 7.2, the auxiliary lemma used scales down the reward of each round based on the consumption of the optimal arm _in that round_. This makes its guarantee stronger when \(\sigma_{c}\) is small, compared to the lemma used for Theorem 5.2 where the reward needs to be scaled down using the maximum consumption across rounds.
Related work.There is a vast amount of literature on online learning and regret minimization; we refer the reader to textbooks like [11] and [12] for background. The most commonly used algorithms are Hedge for full information feedback ([13]) and EXP3 for bandit feedback ([1]). A "best-of-both-worlds" type result providing guarantees for both stochastic and adversarial MAB was first proven in [10].
The BwK framework was first introduced in [10] in the stochastic setting. Following this work, various extensions have been studied including concave rewards and convex consumptions ([1]), combinatorial semi-bandits with knapsacks ([14]), and contextual bandits ([15]).
The first guarantees for Adversarial BwK were by [17] and then by [14]. The first work guarantees a \(1/O(d\log T)\) fraction of the optimal solution and the second a \(1/O(\log d\log T)\) fraction, where \(d\) is the number of resources. The second guarantee is tight when the average budget is \(\rho=O(T^{-\alpha})\) for some constant \(\alpha>0\). [1] study repeated second-price auctions with budgets, a special case of BwK; they assume that the budget grows linearly with time, and get a "best-of-both-worlds" guarantee: a constant fraction of the optimal solution, independent of the number of resources and of \(\rho\), in adversarial settings, and no-regret in stochastic ones. They also prove that even in this restricted second-price setting, when \(\rho\) is a constant their adversarial result is tight. [2] generalize the previous "best-of-both-worlds" guarantee for BwK. [16] study a variant of BwK where there is no time horizon and only one resource whose consumption is strictly positive; the game stops when that resource is depleted. Their variant is much easier than general BwK, as evidenced by their guarantees of \(\text{poly}\log T\) and \(O(\sqrt{T})\) regret bounds for the stochastic and adversarial cases, respectively; in general BwK these types of guarantees are not achievable.
There have been other models that interpolate between Stochastic and Adversarial Multi-Armed Bandits. [1] study Stochastic MAB with Corruptions, where the environment is stationary except for \(C\) rounds that are corrupted by an adversary; their no-regret guarantee interpolates between the stochastic and adversarial guarantees depending on how large \(C\) is. In Restless Bandits, e.g., studied by [11] and [15], the environment of each round is generated by a Markov chain whose state changes from round to round. Algorithms have no-regret when the Markov chain has size much smaller than \(T\), making the environments of two rounds approximately independent if they are far enough apart. Both of these models interpolate between the adversarial and stochastic case but in a fundamentally different way than ours: before round \(1\), both have an "expected environment" (the uncorrupted one in Stochastic MAB with Corruptions and the one generated by the stationary distribution of the Markov chain in Restless Bandits) which the environments of most rounds are "close" to in expectation. Instead, our \((\sigma_{r},\sigma_{c})\)-adversary does not have to conform to this restriction and can vary environments significantly at each time step. [1] study online allocation problems with budgets (a setting similar to BwK) where the player sees the rewards and consumptions of the round before picking an action. They provide guarantees similar to [2] in the stochastic and adversarial cases and study models that interpolate between the two extremes, similar to Stochastic MAB with Corruptions and Restless Bandits. Their results focus on obtaining no-regret guarantees in those models.
Previous work has considered the Multi-Armed Bandit problem with close-to-stationarity constraints. [10] impose a different constraint on the rewards of the actions, which we also consider in Section 7: they bound the total per-round change of the reward of any action by a parameter \(V\). They prove regret bounds of order \(\widetilde{\Theta}(V^{1/3}T^{2/3})\) against an oblivious adversary. [1] generalize these results to general action spaces and convex reward functions. In contrast, imposing such a constraint on both the rewards and consumptions of the actions in BwK, even with \(V=1\), does not improve over Adversarial BwK: the instance used in Theorem 6.1 satisfies the above constraint while having \(\sigma_{r}=\sigma_{c}=0\), which shows that only a \(\rho\) fraction of the optimal solution is achievable.
## 2 Bandits with Knapsacks
In this section, we formally define the Bandits with Knapsacks (BwK) framework. Our notation and definitions are similar to [15, Chapter 10]. We introduce some additional notation to help distinguish between the reward/consumptions of an action and their expectations.
There are \(T\) rounds, \(d\) _resources_ (denoted with \([d]\)), and a _budget_ \(B\) per resource, which w.l.o.g. we assume is the same for every resource \(i\in[d]\). We denote \(\rho=B/T\). The player has a set of \(K\)_actions_, \([K]\). In every round \(t\in[T]\) the adversary chooses \((d+1)K\) random (and potentially dependent on previous rounds) variables: \(R_{t},C_{t,1},\ldots,C_{t,d}:[K]\rightarrow[0,1]\). For an action \(a\in[K]\), \(R_{t}(a)\) is the _reward_ the player receives in round \(t\) if they play action \(a\), and for a resource \(i\in[d]\), \(C_{t,i}(a)\) is the _consumption_ of that resource in round \(t\) by \(a\). As is standard, we assume that there is an action, called the _null action_, with \(0\) reward and \(0\) consumption of every resource.
Every round \(t\) the player chooses a (potentially randomized) action \(A_{t}\in[K]\) without any knowledge of the reward or consumptions of that round or their distribution. The game ends either after round \(T\) or in the round that any resource is depleted. We define \(T_{\text{A}}\) to be the last round the player receives a reward: \(T_{\text{A}}=\max\left\{t\in[T]:\forall i\in[d],\sum_{\tau=1}^{t}C_{\tau,i}(A_{\tau})\leq B\right\}\).
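For concreteness, the interaction protocol and the stopping round \(T_{\text{A}}\) can be simulated as in the following minimal Python sketch (ours, not from the paper; the `policy` and `adversary` callables are hypothetical placeholders, and a run is stopped just before a resource would be exceeded):

```python
import numpy as np

def run_bwk(policy, adversary, T, d, B):
    """Simulate one BwK play-out; returns the total reward and T_A,
    the number of rounds the player collects a reward before depletion."""
    spent = np.zeros(d)                      # cumulative consumption per resource
    total_reward, history = 0.0, []
    for t in range(T):
        a = policy(history)                  # chosen before seeing round t
        R_t, C_t = adversary(t, history)     # rewards in [0,1]^K, consumptions in [0,1]^(d x K)
        if np.any(spent + C_t[:, a] > B):    # playing a would deplete a resource
            return total_reward, t           # the first t rounds were fully paid for
        spent += C_t[:, a]
        total_reward += R_t[a]
        history.append((a, R_t[a], C_t[:, a].copy()))   # bandit feedback only
    return total_reward, T
```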
We denote with \(\mathcal{H}_{t}\) the _history_ up to round \(t\). This includes the realization of the actions of the player up to round \(t\), \(A_{1},\ldots,A_{t}\), as well as the realization of the rewards and consumptions up to round \(t\), \(R_{1},\ldots,R_{t}\) and \(\{C_{1,i}\}_{i},\ldots,\{C_{t,i}\}_{i}\). We assume that the player has bandit knowledge, i.e., only knows the realization of rewards and consumptions of the actions she took in previous rounds.
If for every round \(t\) the distributions of the functions \(R_{t},C_{t,1},\ldots,C_{t,d}\) are independent of \(\mathcal{H}_{t-1}\) (they may depend on the algorithm used by the player but not on the realization of any randomness), then the adversary is called _oblivious_; otherwise, the adversary is called _adaptive_. Additionally, if the aforementioned distributions do not depend on \(t\), the adversary is called _stochastic_.
For any action \(a\in[K]\), we denote with \(r_{t}(a)\) the expected reward of action \(a\) in round \(t\)_conditioned on the history of the previous rounds_: \(r_{t}(a)=\mathbb{E}\left[R_{t}(a)|\mathcal{H}_{t-1}\right]\). Similarly, we define the conditional expected cost of action \(a\) and resource \(i\): \(c_{t,i}(a)=\mathbb{E}\left[C_{t,i}(a)|\mathcal{H}_{t-1}\right]\).
## 3 Approximately Stationary Bandits with Knapsacks
In this section, we present our model, _Approximately Stationary BwK_. Our model interpolates between Stationary and Adversarial BwK, providing guarantees that smoothly improve as expectations change less across time or equivalently as the setting is less adversarial. This generalizes "best-of-both-worlds" results by providing guarantees for the whole spectrum, not just the extremes.
As we mentioned in the introduction, Adversarial BwK is hard because the rewards and consumptions of an arm can oscillate between extreme values. The issue with Adversarial BwK becomes apparent by looking at the impossibility results of [1] and [16]. Both have a similar structure: every round, there is only one action that has positive reward. However, even if the player knows which action it is, she cannot fully utilize it, since some other action may have a much larger reward in a later round. Our model limits these extreme-case examples by constraining the expectation of the rewards and consumptions the adversary can pick. We focus on expected rewards and consumptions since even in Stochastic BwK the range of the realized rewards can be the entire interval \([0,1]\). Our definition uses two parameters. The first, \(\sigma_{r}\), bounds the multiplicative gap between the maximum and minimum expected reward of any action. The second, \(\sigma_{c}\), similarly constrains the expected
consumption of any resource by any action.
**Definition 3.1**.: An adversary in BwK is called _\((\sigma_{r},\sigma_{c})\)-stationary_ if for any action \(a\), resource \(i\), and history \(\mathcal{H}_{T}\) it holds that \(\min_{t}r_{t}(a)\geq\sigma_{r}\max_{t}r_{t}(a)\) and \(\min_{t}c_{t,i}(a)\geq\sigma_{c}\max_{t}c_{t,i}(a)\).
**Remark 3.1**.: _First, note that our definition bounds the relative variation of a sequence rather than the absolute one (e.g., \(\max_{t}r_{t}(a)-\min_{t}r_{t}(a)\leq\epsilon\)). Bounding the relative rather than the absolute difference makes the results invariant to the scale of the rewards. Second, notice that if \(\sigma_{r}=\sigma_{c}=0\) then the setting is completely adversarial, while in Stochastic BwK we get \(\sigma_{r}=\sigma_{c}=1\). Third, since the reward constraint is applied to the expected rewards of an action given the history of the previous rounds, our adversary can be adaptive._
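As an illustration (our sketch, not part of the paper), given one fixed realization of the conditional expectations \(r_{t}(a)\) and \(c_{t,i}(a)\), the largest parameters for which Definition 3.1 holds can be computed as follows:

```python
import numpy as np

def stationarity_parameters(r, c):
    """Largest (sigma_r, sigma_c) satisfying Definition 3.1, given
    expected rewards r[t, a] and expected consumptions c[t, i, a]."""
    with np.errstate(invalid="ignore", divide="ignore"):
        ratio_r = r.min(axis=0) / r.max(axis=0)      # per action a
        ratio_c = c.min(axis=0) / c.max(axis=0)      # per (resource i, action a)
    # an all-zero sequence (0/0 -> nan) imposes no constraint
    sigma_r = float(np.nan_to_num(ratio_r, nan=1.0).min())
    sigma_c = float(np.nan_to_num(ratio_c, nan=1.0).min())
    return sigma_r, sigma_c
```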
## 4 Benchmarks and Algorithm
In this section, we present the benchmark that we use to measure the quality of our algorithm, as well as the algorithm that yields our first guarantee.
Benchmark. The benchmark we use is the standard _best-fixed distribution of actions in hindsight_. Its reward \(\mathtt{OPT_{FD}}\) is equal to the reward of the best distribution of actions \(A^{*}\in\Delta([K])\), up to the round when it runs out of budget. For simplicity of presentation, we define \(\mathtt{OPT_{FD}}\) using the expected rewards and consumptions \(r,c\):
\[\begin{split}\mathtt{OPT_{FD}}=\max_{\begin{subarray}{c}T^{*} \in[T]\\ A^{*}\in\Delta([K])\end{subarray}}&\sum_{t=1}^{T^{*}}\mathbb{E}_{a \sim A^{*}}\left[r_{t}(a)\right]\\ &\text{such that}&\sum_{t=1}^{T^{*}}\mathbb{E}_{a \sim A^{*}}\left[c_{t,i}(a)\right]\leq B,\quad\forall i\in[d]\end{split} \tag{1}\]
We note that in \(\mathbb{E}_{a\sim A^{*}}\left[\cdot\right]\) the expectation is taken only over the action \(a\sim A^{*}\) and not over any choices the player or adversary make, i.e., \(\mathbb{E}_{a\sim A^{*}}\left[r_{t}(a)\right]=\sum_{a}\mathbb{P}\left[A^{*}=a\right]r_{t}(a)\), where \(r_{t}(a)\) is the expected reward of action \(a\) for the actual history of the play, and not for the history of playing action distribution \(A^{*}\) each round. This means that \(\mathtt{OPT_{FD}}\) depends on the realization of the random choices of the game, i.e., the player's and adversary's actions. This is similar to benchmarks in MAB with an adaptive adversary, where the optimal reward used as a comparator for no-regret depends on the actions the player takes. We denote with \((T^{*},A^{*})\) the solution to the above optimization problem.
Optimization problem (1) is simplified when the expectations of the rewards and consumptions are the same every round, i.e., \(r_{t}(\cdot)=r(\cdot)\) and \(c_{t,i}(\cdot)=c_{i}(\cdot)\) for all \(t,i\). In this case, (1) becomes
\[\max_{A^{*}\in\Delta([K])}\quad T\mathbb{E}_{a\sim A^{*}}\left[r(a)\right] \quad\text{such that}\quad T\mathbb{E}_{a\sim A^{*}}\left[c_{i}(a)\right]\leq B,\quad\forall i\in[d] \tag{2}\]
where we can drop the dependence on \(T^{*}\) because of the null action: For every feasible solution \((\hat{T},\hat{A})\) there is a feasible solution \((T,\hat{A}^{\prime})\) with the same reward. More specifically, \(\hat{A}^{\prime}\) is the same distribution as \(\hat{A}\) with probability \(\hat{T}/T\) and the null action otherwise.
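In the stationary case, (2) is a small linear program. A sketch using `scipy.optimize.linprog` (our illustration; `r` has shape \((K,)\) and `cons` shape \((d,K)\)):

```python
import numpy as np
from scipy.optimize import linprog

def opt_fd_stationary(r, cons, T, B):
    """Solve LP (2): max T*E_A[r(a)] s.t. T*E_A[c_i(a)] <= B over distributions A."""
    K = len(r)
    res = linprog(c=-T * np.asarray(r),            # linprog minimizes, so negate
                  A_ub=T * np.asarray(cons),       # one budget constraint per resource
                  b_ub=np.full(len(cons), B),
                  A_eq=np.ones((1, K)), b_eq=[1.0],
                  bounds=[(0.0, 1.0)] * K)
    return -res.fun, res.x                         # OPT_FD value and distribution A*
```

The equality row forces \(A^{*}\) to be a probability distribution; the null action of the model plays the role of the slack that justifies dropping \(T^{*}\) from the formulation.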
We use REW to denote the total reward of the player: \(\mathtt{REW}=\sum_{t=1}^{T_{A}}R_{t}(A_{t})\). We focus on high-probability bounds. The player has competitive ratio \(\gamma\geq 1\) and regret Reg against \(\mathtt{OPT_{FD}}\) with probability \(1-\delta\) if \(\mathbb{P}\left[\mathtt{REW}\geq\frac{\mathtt{OPT_{FD}}-\mathtt{Reg}}{\gamma} \right]\geq 1-\delta\).
As mentioned in the related work, in Adversarial BwK the player can guarantee competitive ratio at most \(\gamma=\min\{1/\rho,O(\log T)\}\) and sublinear regret, where \(\rho=B/T\). Without any additional assumptions, this
result is tight. By assuming that the adversary is \((\sigma_{r},\sigma_{c})\)-stationary, we prove greatly improved guarantees for the competitive ratio \(\gamma\). Our guarantees provide a smooth interpolation between Adversarial and Stochastic BwK.
Algorithm based on Lagrangian maximization/minimization. Next, we present a simplified version of the algorithm of [11] that achieves a "best-of-both-worlds" guarantee: a competitive ratio of \(1\) in stochastic environments and \(1/\rho\) in adversarial ones. Our first guarantee against a \((\sigma_{r},\sigma_{c})\)-stationary adversary in Theorem 5.2 is based on this algorithm and provides a competitive ratio that ranges between \(1\) and \(1/\rho\) depending on the values of \(\sigma_{r}\) and \(\sigma_{c}\).
The algorithm is inspired by the Lagrangian of (2), \(\mathcal{L}(a,\vec{\lambda})=r(a)+\sum_{i\in[d]}\lambda_{i}(\rho-c_{i}(a))\) where \(\lambda\in\mathbb{R}_{\geq 0}^{d}\). The importance of this function can be seen by the fact that in the stochastic case
\[\frac{\mathtt{OPT}_{\mathtt{FD}}}{T}=\max_{A\in\Delta([K])}\min_{\vec{ \lambda}\in\mathbb{R}_{\geq 0}^{d}}\mathbb{E}_{a\sim A}\left[\mathcal{L}(a, \vec{\lambda})\right]=\min_{\vec{\lambda}\in\mathbb{R}_{\geq 0}^{d}}\max_{A\in \Delta([K])}\mathbb{E}_{a\sim A}\left[\mathcal{L}(a,\vec{\lambda})\right] \tag{3}\]
as shown by [10]. [11] improve (3) by restricting the domain of \(\vec{\lambda}\): (3) also holds if the minimum is taken over \(\vec{\lambda}\in\mathcal{D}\), where \(\mathcal{D}=\{\vec{\lambda}\in\mathbb{R}_{\geq 0}^{d}:\sum_{i}\lambda_{i}\leq 1/\rho\}\).
Even though the original inspiration comes from Stochastic BwK, previous work designed Adversarial and Stochastic BwK algorithms that aim to find a saddle point of a Lagrangian. However, the player does not know the expected values of the rewards and consumptions, so their realized values are used instead. Additionally, since the Lagrangian is linear in \(\vec{\lambda}\), similarly to [10], we replace the domain \(\mathcal{D}\) with its \(d+1\) extreme points: instead of \(\vec{\lambda}\) the second argument becomes \(i\in[d]\cup\{0\}\) where \(i=0\) corresponds to the zero vector and \(i>0\) corresponds to the all-zero vector with \(1/\rho\) in its \(i\)-th position. Handling this function is easier since it is defined over a discrete set. Putting all of these together, we define for every \(t\in[T]\), \(a\in[K]\), and \(i\in[d]\cup\{0\}\):
\[\mathcal{L}_{t}(a,i)=R_{t}(a)+\frac{1}{\rho}\mathbbm{1}\left[i\neq 0\right]( \rho-C_{t,i}(a)).\]
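In code, this payoff is a one-liner (our sketch; we encode the zero extreme point as `i = 0` and store the consumption of resource \(i\) in row `i - 1`):

```python
def lagrangian_payoff(R_t, C_t, a, i, rho):
    """L_t(a, i) = R_t(a) + (1/rho) * 1[i != 0] * (rho - C_{t,i}(a))."""
    if i == 0:                       # zero Lagrange-multiplier vector
        return R_t[a]
    return R_t[a] + (rho - C_t[i - 1][a]) / rho
```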
The algorithms of [10] and [11] focus on finding a saddle point of functions similar to the above1. [11] use the initial formulation with \(\vec{\lambda}\in\mathcal{D}\). More specifically, they use two online algorithms, one that tries to maximize \(\mathcal{L}_{t}(a,i)\) over \(a\) and one that tries to minimize it over \(i\). We follow a similar approach and, since the domain of both arguments of \(\mathcal{L}_{t}(a,i)\) is discrete, we use two no-regret online learning algorithms: in round \(t\), \(\mathtt{Alg}_{\max}\) chooses an action \(A_{t}\) and \(\mathtt{Alg}_{\min}\) chooses a resource or the number \(0\), \(I_{t}\). Then, \(\mathtt{Alg}_{\max}\) gets reward \(\mathcal{L}_{t}(A_{t},I_{t})\) and \(\mathtt{Alg}_{\min}\) incurs cost \(\mathcal{L}_{t}(A_{t},I_{t})\). We note that the choices of \(A_{t}\) and \(I_{t}\) are made without knowledge of the rewards and consumptions of round \(t\). We also note that the player can provide \(\mathtt{Alg}_{\max}\) with bandit information only, i.e., it knows only \(\mathcal{L}_{t}(A_{t},I_{t})\). In contrast, the player can give \(\mathtt{Alg}_{\min}\) full information since she knows the value of \(\mathcal{L}_{t}(A_{t},i)\) for all \(i\). Our full algorithm can be found in Algorithm 1.
Footnote 1: In [10] the second argument of the function considers only the \(d\) non-zero extreme points, i.e., the domain of \(i\) is \([d]\) instead; even if every consumption is less than \(\rho\) (in which case the algorithm is not budget constrained), the choice of action \(a\) still takes the consumptions into account, potentially making sub-optimal choices. They fix this by picking a slightly different \(\mathcal{L}_{t}\).
We will use algorithms \(\mathtt{Alg}_{\max}\) and \(\mathtt{Alg}_{\min}\) that guarantee no-regret with high probability. We use EXP3.P from [1] as \(\mathtt{Alg}_{\max}\), which guarantees that for all \(\delta>0\), with probability at least \(1-\delta\), it holds that for all \(T^{\prime}\in[T]\)
\[\max_{a\in[K]}\sum_{t=1}^{T^{\prime}}\mathcal{L}_{t}(a,I_{t})-\sum_{t=1}^{T^{ \prime}}\mathcal{L}_{t}(A_{t},I_{t})\leq\mathtt{Reg}_{\max}(T,\delta):=O\left( \frac{1}{\rho}\sqrt{KT\log(T/\delta)}\right). \tag{4}\]
Using Hedge from [10] as \(\mathtt{Alg}_{\min}\) guarantees that for all \(\delta>0\), with probability at least \(1-\delta\), it holds that for all \(T^{\prime}\in[T]\)
\[\sum_{t=1}^{T^{\prime}}\mathcal{L}_{t}(A_{t},I_{t})-\min_{i\in[d]\cup\{0\}}\sum_ {t=1}^{T^{\prime}}\mathcal{L}_{t}(A_{t},i)\leq\mathtt{Reg}_{\min}(T,\delta):=O \left(\frac{1}{\rho}\sqrt{T\log(Td/\delta)}\right). \tag{5}\]
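Putting the pieces together, the following sketch reflects our reading of Algorithm 1 (it is not the authors' code, and it reuses `lagrangian_payoff` from the sketch above): `alg_max` is any high-probability no-regret bandit learner over \([K]\), such as EXP3.P, `alg_min` any full-information learner over \([d]\cup\{0\}\), such as Hedge, and their `act`/`update` methods are hypothetical interfaces.

```python
def algorithm1(alg_max, alg_min, environment, T, d, B, rho):
    """Adversarial BwK via a repeated max/min game on the Lagrangian payoffs."""
    spent, total_reward = [0.0] * d, 0.0
    for t in range(T):
        a = alg_max.act()                         # action A_t, bandit feedback
        i = alg_min.act()                         # resource I_t in [d], or 0
        R_t, C_t = environment(t)                 # realized rewards/consumptions
        if any(spent[j] + C_t[j][a] > B for j in range(d)):
            break                                 # a resource is depleted: round T_A
        for j in range(d):
            spent[j] += C_t[j][a]
        total_reward += R_t[a]
        alg_max.update(a, lagrangian_payoff(R_t, C_t, a, i, rho))   # reward L_t(A_t, I_t)
        alg_min.update([lagrangian_payoff(R_t, C_t, a, j, rho)      # full information:
                        for j in range(d + 1)])                     # costs L_t(A_t, .)
    return total_reward
```

Note that `alg_max` receives only the scalar payoff of the pair actually played, while `alg_min` receives the payoff of every extreme point, matching the bandit/full-information split described above.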
## 5 Guarantees of Algorithm \(1\) in Approximately Stationary BwK
In this section, we prove our guarantee for Algorithm 1 against a stationary adversary. For any values of \(\sigma_{r}\) and \(\sigma_{c}\) our algorithm achieves the competitive ratio of \(1/\rho\) that is guaranteed in Adversarial BwK. As \(\sigma_{r}\) and \(\sigma_{c}\) increase, the competitive ratio of Algorithm 1 smoothly improves and becomes \(1\) when \(\sigma_{r}=\sigma_{c}=1\). In the most interesting range of parameters, when \(\rho\) is much smaller than \(\sigma_{r}\sigma_{c}\) (which also implies that the gap in guarantees between Stochastic and Adversarial BwK is largest), our algorithm achieves a close to tight \(\sigma_{r}\sigma_{c}\) fraction of the optimal solution. This is a huge improvement over the \(\rho\) fraction that is guaranteed in Adversarial BwK.
We start with a lemma comparing the reward achieved by Algorithm 1 against that of any distribution of actions whose maximum expected consumption of any resource is at most \(\rho\). The lemma holds for any \(\sigma_{r},\sigma_{c}\) and any adaptive adversary. It easily recovers the two guarantees of [14] for Stochastic and Adversarial BwK and extends the second to adaptive adversaries.
**Lemma 5.1**.: _Let \(A\in\Delta([K])\) be a distribution of actions such that \(\max_{i,t}\mathbb{E}_{a\sim A}\left[c_{t,i}(a)\right]\leq\rho\). Then for any adversary and \(\delta>0\), with probability at least \(1-\delta\), Algorithm 1 achieves_
\[\mathtt{REW}\geq\sum_{t=1}^{T}\mathbb{E}_{a\sim A}\left[r_{t}(a)\right]- \mathtt{Reg}_{\max}(T,\delta)-\mathtt{Reg}_{\min}(T,\delta) \tag{6}\]
Using Lemma 5.1 it is easy to prove the two guarantees of [14]. First, if the adversary is stochastic, we can prove a competitive ratio of \(1\) and sublinear regret by noticing that the optimal action distribution \(A^{*}\) in (2) satisfies the conditions of the lemma. Second, against an adaptive adversary, we notice that the action distribution that plays the best-fixed _unbudgeted_ action with probability \(\rho\), and the null action otherwise, satisfies Lemma 5.1. This proves a \(1/\rho\) competitive ratio against \(\max_{a}\sum_{t}r_{t}(a)\) with high probability, which also proves the same guarantee against \(\mathtt{OPT}_{\mathtt{FD}}\), since without a budget the optimal solution is a fixed action.
Proof sketch.: The lemma's proof is based on the guarantees of the two algorithms, \(\mathtt{Alg}_{\max}\) and \(\mathtt{Alg}_{\min}\), found in (4) and (5), respectively, applied up to the stopping round of the algorithm, \(T_{\mathsf{A}}\). On the one hand, we compare \(\sum_{t\leq T_{\mathsf{A}}}\mathcal{L}_{t}(A_{t},I_{t})\) with \(\min_{i}\sum_{t\leq T_{\mathsf{A}}}\mathcal{L}_{t}(A_{t},i)\): this lower bounds the reward of the algorithm, \(\mathtt{REW}\), with an additive term that boosts \(\mathtt{REW}\) as \(T-T_{\mathsf{A}}\) becomes bigger: if the algorithm runs out of budget fast, the \(C_{t,i}(A_{t})\) terms in \(\mathcal{L}_{t}(A_{t},i)\) become larger, making this bound better. On the other hand, we compare \(\sum_{t\leq T_{\text{A}}}\mathcal{L}_{t}(A_{t},I_{t})\) with \(\sum_{t\leq T_{\text{A}}}\mathbb{E}_{a\sim A}\left[\mathcal{L}_{t}(a,I_{t})\right]\) (where \(A\) is the distribution defined in Lemma 5.1): this contains the total reward of distribution \(A\) up to round \(T_{\text{A}}\) and an additive error that is not too high, because in expectation \(C_{t,i}(a)=c_{t,i}(a)\) and the second term is small by the properties of \(A\). We defer the detailed proof to Appendix A.
We now move to the main theorem of this section. Our theorem guarantees a fraction of the optimal solution with high probability against an adaptive \((\sigma_{r},\sigma_{c})\)-stationary adversary.
**Theorem 5.2**.: _Against an adaptive \((\sigma_{r},\sigma_{c})\)-stationary adversary, the reward of Algorithm 1 satisfies for any \(\delta>0\) with probability at least \(1-\delta\)_
\[\mathtt{REW}\geq\left(\rho+\sigma_{r}(\sigma_{c}-\rho)^{+}\right)\mathtt{ OPT}_{\mathtt{FD}}-\mathtt{Reg}_{\max}(T,\delta)-\mathtt{Reg}_{\min}(T,\delta).\]
The proof of the theorem is deferred to Appendix A. The idea is to use Lemma 5.1. To take advantage of (6), we need to lower bound the reward of the optimal distribution \(A^{*}\) after its stopping time \(T^{*}\) and upper bound its maximum consumption. These two quantities depend on \(T^{*}\): as \(T^{*}\) becomes larger, both the reward of \(A^{*}\) after \(T^{*}\) and the maximum consumption become smaller (the first depending on \(\sigma_{r}\) and the second on \(\sigma_{c}\)). Carefully examining these effects and choosing the \(T^{*}\) (as a function of \(\rho,\sigma_{r},\sigma_{c}\)) that yields the worst guarantee for the algorithm yields the theorem.
**Remark 5.3**.: _When \(\sigma_{r}\) and \(\sigma_{c}\) are much larger than \(\rho\), a naive approach would be to use an algorithm designed for a fully stochastic setting. Such an algorithm would lead to much weaker results. An algorithm based on the classic Arm Elimination algorithm would guarantee only a \(\sigma_{r}^{2}\sigma_{c}^{2}\) fraction of \(\mathtt{OPT}_{\mathtt{FD}}\): it might eliminate an action, having identified it as sub-optimal, if it is a bit worse than the one it has identified as optimal. However, after that round, a \((\sigma_{r},\sigma_{c})\)-stationary adversary might make the previously optimal action worse by a factor of \(\sigma_{r}\sigma_{c}\) and the sub-optimal one better by the same factor. This would result in a sub-optimality factor of \(\sigma_{r}^{2}\sigma_{c}^{2}\)._
## 6 Impossibility results for Approximately Stationary BwK
In this section, we show a bound on the guarantee any algorithm can achieve against a \((\sigma_{r},\sigma_{c})\)-stationary adversary. The bound applies to algorithms that are oblivious to the values of \(\sigma_{r}\) and \(\sigma_{c}\) and guarantee a fraction of at least \(\rho\) against an oblivious adversary. Moreover, the bound holds even when there is only one resource. This theorem proves that when \(\rho\) is much smaller than \(\sigma_{r}\sigma_{c}^{2}\), the guarantee of Theorem 5.2 becomes approximately tight (in fact, if \(\rho\leq\sigma_{r}\sigma_{c}^{2}\) the two bounds are within a factor of \(2\) of each other). The following theorem also proves that the achievable fraction is at most \(\sigma_{r}+\rho-\sigma_{r}\rho\), a small improvement over the \(\rho\) guarantee of Adversarial BwK when \(\sigma_{r}\approx\rho\). This makes Theorem 5.2 approximately tight in that case as well.
**Theorem 6.1**.: _Fix any algorithm that achieves sublinear regret and an \(\alpha_{\rho}(\sigma_{r},\sigma_{c})\) fraction of the optimal solution against an oblivious \((\sigma_{r},\sigma_{c})\)-adversary. If the algorithm is oblivious to the values of \(\sigma_{r}\) and \(\sigma_{c}\), \(\rho=\Theta(1)\), and \(\alpha_{\rho}(0,0)\geq\rho\) (i.e., guarantees at least a \(\rho\) fraction of the optimal solution), then_
\[\alpha_{\rho}(\sigma_{r},\sigma_{c})\leq\begin{cases}\sigma_{r}+\rho(1-\sigma _{r}),&\text{if }\sigma_{r}\leq\rho\\ 2\sqrt{\sigma_{r}\rho}-\sigma_{r}\rho,&\text{if }\rho\leq\sigma_{r}\leq \frac{\rho}{\sigma_{c}^{2}}\\ \sigma_{r}\sigma_{c}+\rho(1/\sigma_{c}-\sigma_{r}),&\text{if }\sigma_{r}\geq \frac{\rho}{\sigma_{c}^{2}}\end{cases}\]
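For reference, a direct transcription of this piecewise bound (our sketch; the last branch assumes \(\sigma_{c}>0\)):

```python
def alpha_upper_bound(rho, sigma_r, sigma_c):
    """Upper bound of Theorem 6.1 on the achievable fraction alpha_rho."""
    if sigma_r <= rho:
        return sigma_r + rho * (1 - sigma_r)
    if sigma_c == 0 or sigma_r <= rho / sigma_c ** 2:
        return 2 * (sigma_r * rho) ** 0.5 - sigma_r * rho
    return sigma_r * sigma_c + rho * (1 / sigma_c - sigma_r)
```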
In the example we use to prove the theorem, there are \(K\) actions and \(K+1\) different outcomes (the oblivious adversary selects which outcome the algorithm faces). Additionally, the \(T\) rounds are divided into \(K\) batches. In outcome \(q\in[K]\), the optimal action has positive reward only in the rounds of the \(q\)-th batch. During the \((q-1)\)-th batch, the algorithm cannot distinguish whether it is facing the \(q\)-th outcome, so it needs to act conservatively and not spend its resources in order to guarantee a \(\rho\) fraction of the optimal solution. Because of this behavior, any algorithm is guaranteed to miss out on a large part of the optimal solution in the \((K+1)\)-th outcome, which is the one that is \((\sigma_{r},\sigma_{c})\)-stationary. The proof of the theorem is deferred to Appendix B.
## 7 Improvement for the one resource case
In this section, we provide an algorithm that improves the guarantee of Algorithm 1 when \(\sigma_{c}\) is small, that is, when the variability in expected consumption is high. The improved guarantee requires an additional assumption, namely that the sum of the per-round differences in the expected consumptions of the actions is sublinear, which ensures a sublinear regret term. (The bound in the previous section shows that the guarantee of Theorem 5.2 is close to optimal when \(\rho\) and \(\sigma_{r}\) are small and \(\sigma_{c}\) is significantly larger.)
Our improved algorithm uses Algorithm 1 as a subroutine and a parameter \(T_{res}\). It runs and restarts Algorithm 1 every \(T_{res}\) rounds. We allocate each run of Algorithm 1 a budget of \(\rho T_{res}-1\). This way, the per-round budget in every run is approximately \(\rho\). It also guarantees that every run of the algorithm uses at most \(\rho T_{res}\) of every resource: Algorithm 1 would have terminated when going above budget, not collecting the last item; however, when simulating the algorithm and playing the actions it suggests, the player has to terminate it herself before it uses more than the desired budget, which we achieve by allocating the algorithm one unit less budget than that.
```
Input: inputs needed for Algorithm 1 and parameter \(T_{res}\)
1 Split rounds into \(\lceil\frac{T}{T_{res}}\rceil\) disjoint batches \([T]=\mathcal{T}_{1}\cup\ldots\cup\mathcal{T}_{\lceil T/T_{res}\rceil}\), each batch having \(T_{res}\) rounds (except maybe for the last one).
2 foreach batch \(j=1,\ldots,\lceil\frac{T}{T_{res}}\rceil\) do
3 Independently of previous rounds, run Algorithm 1 on rounds \(\mathcal{T}_{j}\) with budget \(\rho|\mathcal{T}_{j}|-1\).
4 end
```
**Algorithm 2** Restarting BwK Algorithm based on Algorithm 1
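A compact sketch of the restarting wrapper (ours; `run_algorithm1(rounds, budget)` is a hypothetical subroutine that plays a fresh instance of Algorithm 1 on the given rounds with the given budget and returns its reward):

```python
import math

def algorithm2(run_algorithm1, T, T_res, rho):
    """Restart a fresh instance of Algorithm 1 on every batch of T_res rounds."""
    total_reward = 0.0
    for j in range(math.ceil(T / T_res)):
        batch = range(j * T_res, min((j + 1) * T_res, T))
        # one unit less than rho*|T_j|, so a simulated run never overshoots
        total_reward += run_algorithm1(batch, rho * len(batch) - 1)
    return total_reward
```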
We now show a lemma for Algorithm 2, which will lead to the promised improved guarantee. The main ingredient of our guarantee for Algorithm 1 was Lemma 5.1, which bounds the reward against a distribution \(A\) that satisfies \(\max_{i,t}\mathbb{E}_{a\sim A}\left[c_{t,i}(a)\right]\leq\rho\). To ensure this condition for an arbitrary distribution \(A\), we need to scale it down by playing it only with probability \(\rho\) divided by the above maximum over the whole time horizon, and playing the null action with the remaining probability. The new lemma is structured similarly to Lemma 5.1: it shows that the reward of Algorithm 2 is at least the reward of any action distribution across all rounds, scaled down as a function of the consumptions of that distribution. In contrast to Lemma 5.1, however, the deterioration of the reward is much more fine-grained: instead of scaling down the whole reward by the maximum consumption, the reward of each round is scaled down by the _consumptions of that round_.
**Lemma 7.1**.: _For any \(A\in\Delta([K])\), Algorithm 2 guarantees that for every \(\delta>0\),_
\[\mathbb{P}\left[\mathtt{REW}\geq\sum_{t=1}^{T}\mathbb{E}_{a\sim A}\left[r_{t }(a)\right]\min\left\{1,\frac{\rho}{\max_{i}\mathbb{E}_{a\sim A}\left[c_{t,i} (a)\right]}\right\}-\mathtt{Reg}\right]\geq 1-\delta\]
_where using \(\mathcal{E}\geq\sum_{t=1}^{T-1}\max_{i\in[d]}\left|\mathbb{E}_{a\sim A}\left[c _{t,i}(a)\right]-\mathbb{E}_{a\sim A}\left[c_{t+1,i}(a)\right]\right|\) we have_
\[\mathtt{Reg}=\frac{T}{T_{res}}\big{(}\mathtt{Reg}_{\max}(T_{res},\delta T/T_{ res})+\mathtt{Reg}_{\min}(T_{res},\delta T/T_{res})\big{)}+\frac{T_{res}}{\rho} \mathcal{E}\]
_If \(T_{res}=\Theta\left((\rho T/\mathcal{E})^{2/3}\right)\) and we ignore the dependence on \(K,d\), then \(\mathtt{Reg}=O(T^{2/3}\mathcal{E}^{1/3}\log(T/\delta)/\rho^{1/3})\)._
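Given the expected consumptions of the comparator distribution, the variation budget \(\mathcal{E}\) and the batch length \(T_{res}\) can be computed as in this sketch (ours; `c` has shape \((T,d)\) and holds \(\mathbb{E}_{a\sim A}[c_{t,i}(a)]\)):

```python
import numpy as np

def variation_and_batch_size(c, rho, T):
    """E = sum_t max_i |c[t, i] - c[t + 1, i]| and T_res = Theta((rho*T/E)^(2/3))."""
    E = float(np.abs(np.diff(c, axis=0)).max(axis=1).sum())
    T_res = max(1, round((rho * T / max(E, 1e-12)) ** (2 / 3)))
    return E, T_res
```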
The proof of the lemma is based on Lemma 5.1. The reward of Algorithm 2 in each batch \(j\) is at least the reward of distribution \(A\) in that batch scaled down by \(\max_{i,\tau\in\mathcal{T}_{j}}\mathbb{E}_{a\sim A}\left[c_{\tau,i}(a)\right]\). This factor can be improved to \(\max_{i}\mathbb{E}_{a\sim A}\left[c_{t,i}(a)\right]\) for any \(t\in\mathcal{T}_{j}\) by introducing an additive error that depends on the variation of the consumptions within that batch. Using the condition on \(\mathcal{E}\), this additive error summed over all batches is sublinear, which proves the lemma. The full proof can be found in Appendix C.
We now present the main result of this section. Using Lemma 7.1 we can get a strictly better bound. Our bound is parametric: it is expressed as a minimum over a parameter \(x\in[\rho,1]\).
**Theorem 7.2**.: _Against an adaptive \((\sigma_{r},\sigma_{c})\)-stationary adversary, for any \(\delta>0\) with probability at least \(1-\delta\), Algorithm 2 guarantees a fraction \(\alpha_{\rho}(\sigma_{r},\sigma_{c})\) of \(\mathsf{OPT_{FD}}\) that is_
\[\alpha_{\rho}(\sigma_{r},\sigma_{c})=\min_{x\in[\rho,1]}\left(\max\left\{\rho,x\sigma_{c},\sigma_{r}\frac{x}{d+x}\right\}+\max\left\{\rho\sigma_{r}\frac{1- x}{x},\sigma_{r}\sigma_{c}(1-x)\right\}\right)\]
_and regret \(\mathsf{Reg}\), which is sublinear if \(\mathcal{E}/\rho\) is sublinear, where \(\mathsf{Reg}\) and \(\mathcal{E}\) are defined in Lemma 7.1._
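The minimum over \(x\) has no simple closed form, but it is one-dimensional and easy to evaluate numerically, e.g., with the following sketch (ours), which can also be used to reproduce comparisons like those of Figure 1:

```python
import numpy as np

def alpha_theorem_72(rho, sigma_r, sigma_c, d=1, grid=10000):
    """Numerically evaluate the fraction guaranteed by Theorem 7.2."""
    x = np.linspace(max(rho, 1e-9), 1.0, grid)   # grid over the parameter x
    first = np.maximum(np.maximum(np.full_like(x, rho), x * sigma_c),
                       sigma_r * x / (d + x))
    second = np.maximum(rho * sigma_r * (1 - x) / x,
                        sigma_r * sigma_c * (1 - x))
    return float(np.min(first + second))
```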
**Remark 7.3**.: _The bound of Theorem 7.2 improves the bound of Theorem 5.2 significantly when there is only one resource \(d=1\) and \(\sigma_{r}\) is much larger than \(\rho\) and \(\sigma_{c}\). We showcase this in Figure 1 using some numerical examples, where we compare the bounds of Theorems 5.2, 6.1, and 7.2._
|
2309.12185 | Solving linear objective optimization problem subjected to novel max-min
fuzzy relational equalities as a generalization of the vertex cover problem | This paper considers the linear objective function optimization with respect
to a novel system of fuzzy relation equations, where the fuzzy compositions are
defined by the minimum t-norm. It is proved that the feasible solution set is
formed as a union of the finite number of closed convex cells. Some necessary
and sufficient conditions are presented to conceptualize the feasibility of the
problem. Moreover, seven rules are introduced with the aim of simplifying the
original problem, and then an algorithm is accordingly presented to find a
global optimum. It is shown that the original problem in a special case is
reduced to the well-known minimum vertex cover problem. Finally, an example is
described to illustrate the proposed algorithm. | Amin Ghodousian, Mahdi Mollakazemiha | 2023-09-21T15:51:50Z | http://arxiv.org/abs/2309.12185v1 | Solving linear objective optimization problem subjected to novel max-min fuzzy relational equalities as a generalization of the vertex cover problem
###### Abstract
This paper considers the linear objective function optimization with respect to a novel system of fuzzy relation equations, where the fuzzy compositions are defined by the minimum t-norm. It is proved that the feasible solution set is formed as a union of the finite number of closed convex cells. Some necessary and sufficient conditions are presented to conceptualize the feasibility of the problem. Moreover, seven rules are introduced with the aim of simplifying the original problem, and then an algorithm is accordingly presented to find a global optimum. It is shown that the original problem in a special case is reduced to the well-known minimum vertex cover problem. Finally, an example is described to illustrate the proposed algorithm.
keywords: linear optimization, fuzzy relational equations, Minimum Vertex Covering
## 1 Introduction
The theory of fuzzy relational equations (FRE), as a generalized version of Boolean relation equations, was first proposed by Sanchez and applied to problems related to medical diagnosis [40]. Pedrycz categorized and extended two ways of generalizing FRE in terms of the sets under discussion and the various operations that are taken into account [36]. Since then, FRE has been applied in many other fields such as fuzzy control, prediction of fuzzy systems, fuzzy decision making, fuzzy pattern recognition, image compression and reconstruction, fuzzy clustering
2306.17650 | Sensing of Side Lobes Interference for Blockage Prediction in Dense
mmWave Networks | The integration of sensing capability in the design of wireless communication
systems is foreseen as a key enabler for efficient radio resource management in
next-generation networks. This paper focuses on millimeter-wave communications,
which are subject to severe attenuation due to blockages, ultimately
detrimental to system performance. In this context, the sensing functionality
can allow measuring or even imaging the wireless environment allowing
anticipation of possible link failures, thus enabling proactive resource
reallocation such as handover. This work proposes a novel mechanism for
opportunistic environment sensing, which leverages existing network
infrastructure with low complexity. More specifically, our approach exploits
the fluctuations of interference, perceived in antenna side lobes, to detect
local activity due to a moving blocker around the reference communication link.
Numerical evaluations show that the proposed method is promising as it allows
effective assessment of the blocker direction, trajectory and possibly, its
location, speed, and size. | Mohamed Sana, Hiba Dakdouk, Benoit Denis | 2023-06-30T13:36:11Z | http://arxiv.org/abs/2306.17650v1 | # Sensing of Side Lobes Interference for Blockage Prediction in Dense mmWave Networks
###### Abstract
The integration of sensing capability in the design of wireless communication systems is foreseen as a key enabler for efficient radio resource management in next-generation networks. This paper focuses on millimeter-wave communications, which are subject to severe attenuation due to blockages, ultimately detrimental to system performance. In this context, the sensing functionality can allow measuring or even imaging the wireless environment allowing anticipation of possible link failures, thus enabling proactive resource reallocation such as handover. This work proposes a novel mechanism for opportunistic environment sensing, which leverages existing network infrastructure with low complexity. More specifically, our approach exploits the fluctuations of interference, perceived in antenna side lobes, to detect local activity due to a moving blocker around the reference communication link. Numerical evaluations show that the proposed method is promising as it allows effective assessment of the blocker direction, trajectory and possibly, its location, speed, and size.
Sensing, Blockages Prediction, mmWave Communications, Network densification, 6G Networks.
## I Introduction
Millimeter-Wave (mmWave) frequencies (ranging, _e.g._, between \(28\) and \(300\) GHz) have recently attracted great attention for their various advantages over traditional radio frequencies (sub-6 GHz). With the large spectrum available at these frequencies, mmWave technology can effectively boost the network capacity. It also supports advanced beamforming techniques, which allow for highly directional signal transmissions, reducing interference and enhancing network performance. However, these advantages also come along with a critical challenge: mmWave communications suffer from severe path losses and are very sensitive to blockages and attenuation by physical obstacles (_e.g._, buildings, trees, the human body). Penetration losses through the human body can range between \(20-40\)\(\mathrm{dB}\), whereas attenuation through buildings can be as high as \(40-80\)\(\mathrm{dB}\)[1].
Frequent interruptions and long-duration blockages may cause severe degradation in the quality of service (QoS) of end-users, requiring frequent handover procedures that are detrimental to network performance [2]. Therefore, effective blockage prediction mechanisms are needed to enable efficient radio resource management (RRM). Joint communication and sensing has been identified as a key feature of future 6G systems [3]. These sensing capabilities could be used to improve network performance by providing optimization inputs for network steering, including the ability to detect objects that (temporarily) obstruct or block the line of sight (LoS) between two communicating nodes [4]. Sensing the surroundings to detect moving blockages in mmWave communications has become a fundamental research topic [5, 6, 7, 8, 9, 10]. A major axis of research that aims to predict and prevent blockages in mmWave systems considers making use of in-band mmWave signal and data rate observations. The authors in [11] use the fluctuation of the received power level occurring before the shadowing event to predict the future time instance of a blockage with the aid of deep neural networks. However, the prediction accuracy decreases as the blocker gets farther from the mmWave link, which means that it can only be detected accurately when it is close to the mmWave communication beam. In [12], the data rate fluctuation occurring before the shadowing indicates the potential blockage. Using deep reinforcement learning techniques, the authors could predict handover timings, with obstacle-caused data rate degradations predicted before they occur. On the other hand, the authors in [13] propose the use of an additional passive mmWave beam (guard beam) next to the main communication beam that is intended to sense the environment by expanding the field of view of a base station (BS). Thus, a blocker could be detected early by observing the received signal fluctuation resulting from the non line of sight (NLoS) component from the user equipment (UE) due to the blocker's presence within the field of view. Yet, all these approaches are limited in terms of detection range, as the blocker must be close enough to the main communication beam to cause fluctuations in the received signal, and they might fail with fast-moving blockers. In addition, the sensing feature may require an additional and dedicated mechanism (_e.g._, a dedicated beam).
In contrast, we propose a mechanism that exploits side lobe information for the passive and opportunistic sensing of a dense mmWave network. Network densification is a key feature of future networks that will further improve their capacity [14]. At the same time, it may also lead to increased intra- and inter-cell interference, which may impact communication performance. However, in this work, we take advantage of this specific characteristic of dense networks to detect the presence of moving blockers in the surrounding environment of communicating nodes. Our method relies on observing the fluctuations of the interference perceived in antenna side lobes, caused by moving blockers in angular sectors around the communication link of concern. Unlike the aforementioned studies, our approach is capable of detecting and tracking moving objects all around the sensing device, _i.e._, over a range of \(360^{\circ}\). This makes it possible to predict some characteristics of blockers, including their trajectory and velocity, allowing early detection of blockage events and avoiding link outages by triggering, _e.g._, a handover procedure.
including their trajectory and velocity, allowing early detection of blockage events and avoiding link outages by triggering, _e.g._, a handover procedure.
## II System Model
We consider a dense mmWave network composed of a set \(\mathcal{B}=\{b_{1},...,b_{M}\}\) of \(M\) BSs deployed in a bi-dimensional Euclidean space of radius \(R\) to provide service coverage to a set \(\mathcal{U}=\{u_{1},...,u_{K}\}\) of \(K\) UEs. We assume BSs and UEs form two distinct homogeneous Poisson Point Processes (PPP) with densities \(\lambda_{b}\ [\mathrm{m}^{-2}]\) and \(\lambda_{u}\ [\mathrm{m}^{-2}]\) respectively, such that on average \(\mathbb{E}[M]=\lambda_{b}\pi R^{2}\) and \(\mathbb{E}[K]=\lambda_{u}\pi R^{2}\). In this dense network, a mobile and passive object (_e.g._, a robot), modelled as a cylindrical object of radius \(r_{B}\), moves around, causing the blockage of interfering and direct communication paths. Let \(\omega_{B}(t)\) denote its angular velocity and \(d_{B}(t)\) its distance with respect to (_w.r.t._) BS \(b_{0}\), referred to as the _typical BS_ and taken as the reference point in the following. Clearly, \((\omega_{B}(t),d_{B}(t))\) characterizes the trajectory of the blocking object. In this work, we propose a novel approach for the passive sensing of such a moving object, partially identifying its trajectory by leveraging the interference perceived in the side lobes of the antenna radiation patterns.
We focus on an uplink setting with only LoS communications for both direct and interfering links. In this scenario, an initial access phase allows new UEs to perform beam training and alignment mechanisms, configuring the appropriate beams, which exploit the maximum directivity gain _w.r.t._ serving BSs for the service phase. For simplicity, we assume each UE gets associated with the closest BS, as we do not specifically address the user association problem. However, this problem can be efficiently solved using approaches proposed in [15] to optimize service coverage. During the service phase, BSs exploit the fluctuations of interference perceived in their antennas side lobes, resulting from simultaneous communications with UEs, for sensing their nearby environment to detect blockages.
**Antennas.** In our system model, UEs and BSs are equipped with antenna arrays to perform directional beamforming. For ease of analysis, we assume that antenna arrays operate on the same elevation plane, and accordingly, we set the beam elevation angle to zero. Therefore, we denote with \(G_{\theta}^{\mathrm{Tx}}(x)\) and \(G_{\vartheta}^{\mathrm{Rx}}(x)\) the transmitter and the receiver 2D antenna radiation patterns respectively, where \(\theta\) and \(\vartheta\) are the respective beamwidths, and \(x\) is the azimuth angle to the main lobe (either \(\psi\) or \(\phi\) in Fig. 1). For the tractability of analysis, we approximate the actual 2D array patterns with a sectored Gaussian directional antenna model [16] whose beamforming gain is given as follows:
\[G_{z}^{\ell}(x)=\left\{\begin{array}{ll}G_{m}^{\ell}e^{-\rho_{z}x^{2}},& \text{if }|x|\leq\frac{z}{2},\\ G_{s}^{\ell},&\text{otherwise},\end{array}\right.,\ z\in\{\theta,\vartheta\}, \tag{1}\]
where \(\rho_{z}=2.028\dfrac{\ln{(10)}}{z^{2}}\) and \(z\) is the beam width. In addition, \(G_{m}^{\ell}\) and \(G_{s}^{\ell}\) denote the gain of the main lobe and the side lobes as per \(\ell\in\{\mathrm{Tx},\mathrm{Rx}\}\), respectively. Following these definitions, we define the antenna peak-side-lobe (PSL) gain as \(\mathrm{PSL}^{\ell}=G_{m}^{\ell}(G_{s}^{\ell})^{-1}\). In particular, the value of \(\mathrm{PSL}^{\ell}\) depends on the number of antenna elements.
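A direct implementation of this gain model (our sketch; the offset `x` and the beamwidth must be expressed in the same angular unit):

```python
import numpy as np

def sectored_gaussian_gain(x, beamwidth, g_main, g_side):
    """Beamforming gain of Eq. (1) for azimuth offset(s) x from the main lobe."""
    rho_z = 2.028 * np.log(10) / beamwidth ** 2
    return np.where(np.abs(x) <= beamwidth / 2,
                    g_main * np.exp(-rho_z * np.asarray(x) ** 2),
                    g_side)
```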
**Propagation Channel.** We adopt the commonly used Friis propagation loss model [17], where the received power \(P^{\mathrm{Rx}}\) is given as a function of the transmitted power \(P^{\mathrm{Tx}}\) and the distance \(d\) between the transmitter and the receiver:
\[P^{\mathrm{Rx}}(t)=\chi(t)\zeta(t)P^{\mathrm{Tx}}G_{\theta}^{\mathrm{Tx}}( \varphi)G^{\mathrm{H}}(d)G_{\vartheta}^{\mathrm{Rx}}(\psi), \tag{2}\]
where \(\varphi\) and \(\psi\) represent the azimuth angles at the transmitter and receiver respectively. Here, \(G^{\mathrm{H}}(d)\) is the distance-dependent channel gain, which captures the effect of path-loss and large-scale shadowing as follows:
\[G^{\mathrm{H}}(d)|_{\text{dB}}=\mathrm{PL}_{0}+10\eta\log_{10}\left(\dfrac{d }{d_{\mathrm{ref}}}\right)+X_{(\sigma_{s})}, \tag{3}\]
where \(\mathrm{PL}_{0}\) denotes the pathloss constant, \(d_{\mathrm{ref}}\) is a reference distance, \(\eta\) denotes the pathloss exponent and \(X_{(\sigma_{s})}\) represents the static shadowing effect, modeled as a Gaussian variable with zero mean and variance \(\sigma_{s}^{2}\). Also, \(\zeta(t)\) represents the small-scale fading coefficient, which follows an \(m\)-Nakagami distribution. Finally, \(\chi(t)\) denotes the shadowing effect due to the passive object moving around the corresponding link. We adopt the following Gaussian modeling as in [18]:
\[\chi(t)|_{\mathrm{dB}}=-A\exp\left(-\dfrac{|\psi-\psi_{B}(t)|^{2}}{\sigma_{B }^{2}}\right) \tag{4}\]
where \(\psi_{B}(t)\) is the relative angle between the blocker and the receiver main lobe. Accordingly, \(|\psi-\psi_{B}(t)|\) represents the angle between the blocker and the interfering link with relative angle \(\psi\)_w.r.t._ the receiver (see Fig. 1); \(A\) denotes the attenuation (in dB) when the link is fully blocked (_i.e._, \(|\psi-\psi_{B}(t)|=0\)) and \(\sigma_{B}\) is a parameter that depends on the characteristics of the blocker (_e.g._, size) and radio parameters. Although simplistic, this model allows an effective geometric analysis of blocking phenomena.
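Combining Eqs. (2)-(4), the deterministic part of the link budget can be sketched as follows (ours; the fading \(\zeta\) and the shadowing \(X_{(\sigma_{s})}\) are omitted, gains are positive linear-scale pairs \((G_{m},G_{s})\), Eq. (3) is treated as a loss and hence subtracted in dB, and `sectored_gaussian_gain` is the sketch given above):

```python
import numpy as np

def received_power_db(p_tx_db, d, phi, psi, psi_b, theta, vartheta,
                      g_tx, g_rx, pl0, eta, d_ref, A, sigma_b):
    """Mean received power in dB for one link, including blocker attenuation."""
    g_tx_db = 10 * np.log10(sectored_gaussian_gain(phi, theta, *g_tx))
    g_rx_db = 10 * np.log10(sectored_gaussian_gain(psi, vartheta, *g_rx))
    path_loss_db = pl0 + 10 * eta * np.log10(d / d_ref)            # Eq. (3), X = 0
    chi_db = -A * np.exp(-abs(psi - psi_b) ** 2 / sigma_b ** 2)    # Eq. (4)
    return p_tx_db + g_tx_db + g_rx_db - path_loss_db + chi_db
```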
**Cell interference.** In our system model, as we do not specifically optimize beamforming, interference results from the
Fig. 1: Network model with 3 UEs interfering on a communication between a typical UE \(u_{0}\) and serving BS \(b_{0}\). In this network, a mobile robot moves around causing blockages.
overlapping of communication beams between mmWave BSs and UEs. Indeed, let us consider a typical BS \(b_{0}\) placed at a distance \(d_{0}\) from its served UE \(u_{0}\). We refer to \(u_{0}\to b_{0}\) as the reference link. An interfering UE \(u_{i}\), located at a distance \(d_{i}\) with a relative angle of arrival (AoA) \(\psi_{i}\)_w.r.t._\(b_{0}\), is served by another BS \(b_{j}\) with a relative angle of departure (AoD) \(\varphi_{i,j}\) (see Fig. 1). We denote with \(I_{i,j}\) the resulting interference perceived by BS \(b_{0}\):
\[I_{i,j}(t)=\chi_{i}(t)\zeta_{i}(t)P_{i}^{\rm Tx}G_{\theta}^{\rm Tx}(\varphi_{i,j})G^{\rm H}(d_{i})G_{\vartheta}^{\rm Rx}(\psi_{i}) \tag{5}\]
Thus, the total interference perceived by the typical BS as a function of signal angle of arrival (AoA) \(\psi\) reads as:
\[I(\psi,t)=\sum_{u_{i}\in\mathcal{U}\setminus\{u_{0}\}}I_{i,j}(t)\,\mathbb{1}\left[\psi_{i}=\psi\right] \tag{6}\]
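To illustrate the sensing principle numerically (our sketch; the exact per-sector construction used later in the paper is not reproduced here), the interference perceived by \(b_{0}\) can be binned into equal angular sectors; stacking such vectors over an observation window yields a sensing matrix whose fluctuations carry the blocker signature.

```python
import numpy as np

def sector_interference(powers, aoas, n_sectors=36):
    """Aggregate received interference powers (linear scale) per angular sector.

    aoas: azimuth angles of arrival in [-pi, pi) w.r.t. the receiver main lobe."""
    edges = np.linspace(-np.pi, np.pi, n_sectors + 1)
    idx = np.clip(np.digitize(aoas, edges) - 1, 0, n_sectors - 1)
    sectors = np.zeros(n_sectors)
    np.add.at(sectors, idx, powers)      # I(psi, t) summed within each sector
    return sectors
```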
## IV Numerical results
We consider a network of \(M\) BSs and \(K\) UEs distributed in the space of a circular industrial environment of radius \(R=100\ \mathrm{m}\) according to homogeneous PPPs with densities \(\lambda_{b}=6\times 10^{-4}\ \mathrm{m}^{-2}\) and \(\lambda_{u}=1.5\times 10^{-3}\ \mathrm{m}^{-2}\) respectively. Each UE is associated with its closest BS. A BS \(b_{0}\) located at the center of the network is taken as a reference: it performs side lobe sensing and computes the detection matrix on a link with a randomly chosen associated UE \(u_{0}\). We set the width of the angular sectors to \(2\alpha=10^{\circ}\) (_i.e._, \(n+1=36\) sectors overall, covering the 2D space), and the size of the observation window is set to \(\tau=50\ \mathrm{s}\). The sector width is always set equal to the antenna beamwidth \(\vartheta\) of BS \(b_{0}\). When we detect activity in a sector, the estimated angular position \(\hat{\psi}_{B}\) of the moving object is taken equal to the orientation of the sector, resulting in a mean approximation error of \(5^{\circ}\) (if the object is actually there). We consider the pathloss model in Eq. (3), where the values of its parameters have been characterized via in-lab experiments based on mmWave channel sounding in a representative industrial Internet of Things (IoT)-type scenario [22]. Other simulation parameters are presented in Table I.
### _Detection of blocker signature_
For a random network deployment, we consider a mobile object of radius \(r_{B}=1\ \mathrm{m}\) moving along a random trajectory with an arbitrary velocity (\(w_{B}\sim[-15^{\circ},15^{\circ}]s^{-1}\)). Figure 2(a) shows an example of a network deployment, where multiple deployed BSs jointly provide service to UEs. In this figure, a blocker is moving along a trajectory that crosses the communication link \(u_{0}\to b_{0}\) at a certain point in the main sector \(s_{0}\). Figure 2(b) presents the logarithm of the raw sensing matrix, \(\log(\mathbf{\Lambda}_{\tau,n})\) of Eq. (9), before processing. The processing applied to the sensing matrix makes it possible to effectively reveal the signature of the mobile object by empirically setting \(l_{0}=1\) and \(l_{1}=17\) in Figure 2(c), where the color of each pixel quantifies the strength of the signature, _i.e._, the likelihood of the blocker being present in the corresponding sector. Thanks to this processing, it is possible to detect the occurrence of the blockage earlier, _i.e._, as the blocker approaches sector \(s_{0}\), and therefore to be able to cope with it before it happens. Besides, it is also possible to get information on the blocker trajectory, as the bottom of Figure 2(a) reveals. This figure compares the estimated trajectory of the blocker to its actual one in terms of its angular position _w.r.t._ the link \(u_{0}\to b_{0}\). We can observe that the proposed method allows an effective detection of the blocker and a follow-up of its angular trajectory. Note, however, that due to the low number of UEs in some regions of the network, which are the interferers on which this approach depends to estimate the position of the blocker, the detection is not effective at all times (_i.e._, there is a lack of observations regarding the blocker for some time epochs). Nevertheless, the detection is still accurate in the main region of interest, _i.e._, when the blocker is close to the BS, as shown in Figure 2(a), which makes it possible to anticipate a blockage event. Beyond, both the missed detection events and the observation quantization error caused by space sectorization (see the step-wise fluctuations of at most \(5^{\circ}\) around the ground-truth angular trajectory in Figure 2(a)) could be easily overcome by standard Bayesian filtering tools, such as Extended Kalman Filtering (EKF), even if this extra processing step does not fall within the scope of this study. In general, the closer the blocker gets to the BS and the closer it gets to a dense region of the network, the better the detection will be. Obviously, from Figure 2(c), it is also possible to extract additional information on the direction and speed of the blocker. Besides, although in this paper the trajectory is detected in terms of angular position, it is possible to be more precise, and even to locate the blocker, if we consider the sharing of sensed information between different entities in the network, which will be addressed in future work.
### _Detection accuracy vs antenna PSL_
In this section, we assess the performance of the proposed approach for blocker signature detection through \(N=500\) Monte-Carlo simulations. To avoid cumbersome computations, we assume that the circular area around \(b_{0}\) is partitioned into a mesh grid \(\mathcal{G}\) consisting of equal-size cells of angular width \(10^{\circ}\) and radial depth \(5\ \mathrm{m}\). We consider a passive mobile object of radius \(r_{B}=1\ \mathrm{m}\) moving around \(b_{0}\) in a circular motion at different distances, up to a maximum distance of \(50\ \mathrm{m}\). The blocker moves from one cell to another in a sequential manner. For a quantitative evaluation of the detection accuracy, we adopt the following weighted mean absolute error (wMAE):
\[\mathrm{wMAE}=\frac{1}{N}\sum_{k=1}^{N}\mathbb{E}_{(d,\psi)\in\mathcal{G}} \left[w(d_{B})|\psi_{B}-\hat{\psi}_{B}|\right], \tag{14}\]
where \(\hat{\psi}_{B}\) is the estimated angular position of the blocker when its actual location is in cell \((d_{B},\psi_{B})\) of the grid \(\mathcal{G}\). Here, \(w(d_{B})\) is a weighting factor, which depends on the actual distance of the blocker from the sensing BS \(b_{0}\). This allows for a realistic evaluation of the errors since the farther the blocker is, the harder it is to predict its angular position accurately due to path losses. In practice, following the pathloss model, we define \(w(d_{B})=\exp(-\mu d_{B}^{\eta})\), where \(\eta\) is the path loss coefficient and \(\mu\) is a scaling factor. In particular, \(\mu=0\) corresponds to the non-weighted MAE.
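For one Monte-Carlo run, the inner average of Eq. (14) can be estimated as below (our sketch; we additionally wrap the angular error at \(360^{\circ}\), which Eq. (14) leaves implicit):

```python
import numpy as np

def weighted_mae(psi_true_deg, psi_hat_deg, d_true, mu, eta):
    """Empirical weighted MAE of the estimated blocker angular positions."""
    err = np.abs(np.asarray(psi_true_deg) - np.asarray(psi_hat_deg))
    err = np.minimum(err, 360.0 - err)            # wrap-around angular distance
    w = np.exp(-mu * np.asarray(d_true) ** eta)   # distance-dependent weight
    return float(np.mean(w * err))
```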
| Parameters | Values |
| :--- | :--- |
| Carrier frequency | \(28\ \mathrm{GHz}\) |
| Bandwidth \(\mathcal{B}\) | \(400\ \mathrm{MHz}\) |
| Pathloss | \(60.1+14\log(d\ [\mathrm{km}])\) |
| Power \(P^{\mathrm{Tx}}\) (UE) / \(P^{\mathrm{Rx}}\) (BS) | \(19.6/33\ \mathrm{dBm}\) |
| Noise power spectral density \(N_{0}\) | \(-174\ \mathrm{dBm/Hz}\) |
| Small-scale fading \(\sim m\)-Nakagami | \(m=3\) |
| Rx beamwidth \(\vartheta\) | \(10^{\circ}\) |
| Tx beamwidth \(\theta\) | \(135^{\circ}\) |
| \(G_{0}(z)\) | \(\pi(21.32z+\pi)^{-1}\) [16] |
| \(G_{s}^{\mathrm{Rx}}\) | \(G_{0}(\vartheta)\) |
| \(G_{m}^{\mathrm{Rx}}\) | \(G_{0}(\vartheta)\times 10^{2.028}\) |
| \(G_{s}^{\mathrm{Tx}}\) | \(0\) |
| \(G_{m}^{\mathrm{Tx}}\) | \(2G_{0}(\theta)\times 10^{2.028}\) |
| Blockage attenuation \(A\) | \(100\ \mathrm{dB}\) |
| \(\sigma_{B}\) | \(\sqrt{8}\ r_{B}\) |

TABLE I: Simulation parameters
We start by assessing the impact of the antenna PSL (by varying the side lobe gain) on the MAE. The associated results are presented in Fig. 3(a). First, we can notice that for any value of PSL the accuracy degrades as the blocker moves away from the center where \(b_{0}\) is located. This is due to two main reasons: \(1)\) as the blocker moves away from the center, it is less likely to obstruct the LoS between \(b_{0}\) and the interfering UEs; \(2)\) the received power of the interference degrades as the interferer gets farther, so the blockage of a distant interferer is likely to go unnoticed by \(b_{0}\). This result confirms the outcome of the previous experiment. In Fig. 3(b), we show the resulting \(\mathrm{wMAE}(\mu)\) for different values of \(\mu\), along with the corresponding \(95\%\) confidence intervals, for different values of PSL. We can notice that the detection accuracy degrades as the PSL value increases. Indeed, with larger values of PSL, the interference perceived by the side lobes is weaker, and thus the detection of the signature of the blocker is less effective. For low PSL values, _i.e._, when the side lobe gain approaches the main-lobe gain, the detection accuracy decreases as well because of low signal-to-interference-and-noise ratio (SINR) values in the sensing matrix. Thus, a trade-off can be found between the accuracy of the detection (by reducing the PSL) and the quality of the communication (by increasing the PSL to suffer less from interference). Yet, we can observe that the proposed mechanism is still highly accurate, as \(\mathrm{wMAE}(\mu=0.02)<5^{\circ}\) (half the sector width) and \(\mathrm{wMAE}(\mu=0.01)<10^{\circ}\) for most PSL values. Even the non-weighted MAE is less than \(10^{\circ}\) except for very low side lobe gain (_i.e._, very high PSL).
### _Detection accuracy vs beamwidth and blocker size_
Other metrics that impact the performance of the side lobe sensing are the antenna beamwidth and the blocker size. Similar to the previous experiment, Figure 4 presents the blocker angular position MAE in each cell of the network for different combinations of beamwidth and blocker radius \(r_{B}\). As the blocker size increases, it gets more detectable, as it is
Fig. 3: Weighted mean absolute error of blocker angular position vs antenna peak-side-lobe gain.
Fig. 2: Example of blockage detection using side lobe sensing mechanism.
more capable of blocking interference coming from different angles; but this also weakens the accuracy, especially when the blocker is large and close to the center, as it could span multiple sectors simultaneously. Also, as the beamwidth, and hence the sector width, increases, the accuracy degrades since the range of error within a sector increases; however, the blocker is tracked more continuously, as the probability of having interferers in the sector increases (see Eq. (10)). Consequently, the size of the sector should be carefully tuned to find a trade-off between the accuracy and the continuity of detection.
## V Conclusion
In the context of predicting and avoiding blockages in mmWave systems, this paper presents a new mechanism that detects moving objects in the surroundings of a particular communication link. This approach senses the radio environment using side lobe information in order to detect moving objects. It relies on observing the fluctuations in the SINR values caused by the presence of the blocker in angular sectors around the communication link of interest. We show that it is capable of detecting moving objects over a range of \(360^{\circ}\), without requiring any additional system, unlike other reported methods. This further provides information on the position, direction, trajectory, and velocity of a moving object. Using this information, the node would have enough time to hand over to another available base station and avoid a service outage. In order to improve the accuracy of this approach, sharing information between different entities in the network will be studied in future work. We will also exploit this method to predict blockages and manage resource allocation. Finally, this work could be further extended to cover mobile UEs and to consider multiple moving blockers.
## Acknowledgment
This work was supported by the French government under the Recovery Plan (CRIOT project).
|
2309.09599 | MEDL-U: Uncertainty-aware 3D Automatic Annotation based on Evidential
Deep Learning | Advancements in deep learning-based 3D object detection necessitate the
availability of large-scale datasets. However, this requirement introduces the
challenge of manual annotation, which is often both burdensome and
time-consuming. To tackle this issue, the literature has seen the emergence of
several weakly supervised frameworks for 3D object detection which can
automatically generate pseudo labels for unlabeled data. Nevertheless, these
generated pseudo labels contain noise and are not as accurate as those labeled
by humans. In this paper, we present the first approach that addresses the
inherent ambiguities present in pseudo labels by introducing an Evidential Deep
Learning (EDL) based uncertainty estimation framework. Specifically, we propose
MEDL-U, an EDL framework based on MTrans, which not only generates pseudo
labels but also quantifies the associated uncertainties. However, applying EDL
to 3D object detection presents three primary challenges: (1) relatively lower
pseudolabel quality in comparison to other autolabelers; (2) excessively high
evidential uncertainty estimates; and (3) lack of clear interpretability and
effective utilization of uncertainties for downstream tasks. We tackle these
issues through the introduction of an uncertainty-aware IoU-based loss, an
evidence-aware multi-task loss function, and the implementation of a
post-processing stage for uncertainty refinement. Our experimental results
demonstrate that probabilistic detectors trained using the outputs of MEDL-U
surpass deterministic detectors trained using outputs from previous 3D
annotators on the KITTI val set for all difficulty levels. Moreover, MEDL-U
achieves state-of-the-art results on the KITTI official test set compared to
existing 3D automatic annotators. | Helbert Paat, Qing Lian, Weilong Yao, Tong Zhang | 2023-09-18T09:14:03Z | http://arxiv.org/abs/2309.09599v3 | # MEDL-U: Uncertainty-aware 3D Automatic Annotation based on Evidential Deep Learning
###### Abstract
Advancements in deep learning-based 3D object detection necessitate the availability of large-scale datasets. However, this requirement introduces the challenge of manual annotation, which is often both burdensome and time-consuming. To tackle this issue, the literature has seen the emergence of several weakly supervised frameworks for 3D object detection which can automatically generate pseudo labels for unlabeled data. Nevertheless, these generated pseudo labels contain noise and are not as accurate as those labeled by humans. In this paper, we present the first approach that addresses the inherent ambiguities present in pseudo labels by introducing an Evidential Deep Learning (EDL) based uncertainty estimation framework. Specifically, we propose MEDL-U, an EDL framework based on MTrans, which not only generates pseudo labels but also quantifies the associated uncertainties. However, applying EDL to 3D object detection presents three primary challenges: (1) relatively lower pseudo label quality in comparison to other autolabelers; (2) excessively high evidential uncertainty estimates; and (3) lack of clear interpretability and effective utilization of uncertainties for downstream tasks. We tackle these issues through the introduction of an uncertainty-aware IoU-based loss, an evidence-aware multi-task loss, and the implementation of a post-processing stage for uncertainty refinement. Our experimental results demonstrate that probabilistic detectors trained using the outputs of MEDL-U surpass deterministic detectors trained using outputs from previous 3D annotators on the KITTI val set for all difficulty levels. Moreover, MEDL-U achieves state-of-the-art results on the KITTI official _test_ set compared to existing 3D automatic annotators.
## I Introduction
Localizing 3D objects in world coordinates is a fundamental module in many robotics and autonomous driving applications. Recently, with the development of deep neural networks, network-based methods [1] have dominated this field and are capable of classifying, detecting, and reconstructing objects in 3D space.
However, the training of network-based 3D detectors requires a massive amount of data labeled with 3D bounding boxes, which often involves significant costs [2, 3]. To alleviate the heavy annotation burden, one promising direction is weakly supervised training that utilizes LiDAR data, images, and 2D bounding boxes to train a 3D object annotator [4, 5, 6, 7]. The weakly-supervised methods propose frameworks that can automatically annotate objects in 3D, minimizing the reliance on ground truth labels during downstream training of 3D detectors. Although current approaches can achieve good 3D bounding box annotations, the generated 3D bounding boxes are not as accurate as those labeled by humans. In the illustrated pseudo labels from MTrans [6] on the left side of Figure 1, it is evident that pseudo labels 1 and 3 contain imprecise estimates of box parameters. Unfortunately, current approaches neglect this annotation noise, directly utilizing the pseudo labels to train 3D detectors. Clearly, neglecting this noise in the pseudo labels degrades the effectiveness of training downstream 3D detectors. To alleviate this problem, our work considers both the task of annotating the 3D bounding boxes and that of estimating the annotation uncertainty to indicate the annotation inaccuracies. On the right side of Figure 1, we show that our work not only predicts pseudo labels but also determines the uncertainty estimates for each 3D box parameter, which are then utilized to train 3D detectors more effectively.
Evidential deep learning (EDL) has been effectively utilized for uncertainty estimation in regression tasks [8] and has found diverse applications in computer vision tasks [9, 10, 11]. Hence, we propose **MT**rans-based **E**vidential **D**eep **L**earning autolabeler with **U**ncertainty **E**stimation capability (MEDL-U). Our input is similar to that of a typical 3D automatic annotator, comprising a collection of scene frames with the corresponding LiDAR data, 2D images, and 2D bounding boxes for the objects. With these inputs, our goal is to develop a model that produces not only accurate 3D bounding box annotations for the surrounding objects but also a measure of uncertainty for the annotated bounding box parameters (the predicted center, length, width, height, and
Fig. 1: Illustration of the proposed MEDL-U in comparison with current state-of-the-art 3D autolabeler, MTrans [6]. MEDL-U not only generates pseudo labels but also estimates the associated uncertainties to indicate the inaccuracy of the pseudo labels. Ground-truth boxes and pseudo labels are colored red and blue, respectively.
rotation (yaw angle)), all the while avoiding any additional manual annotation costs or huge computational overhead.
However, directly applying the EDL framework for uncertainty estimation in a 3D autolabeler introduces three main challenges: (1) directly incorporating the evidential loss with the evidence regularizer from the EDL framework [8] results in worse performance during inference compared to the IoU-based loss, as the latter unifies all 3D box parameters into one metric and aligns with the evaluation objective; (2) the uncertainties are not well-calibrated and can become unreasonably high during training; and (3) the generated uncertainties lack interpretability and are hard to apply and utilize properly, due to the variations in the magnitude of the evidential parameters for each 3D box.
To address these problems, we introduce an uncertainty-aware IoU loss to help the model regress high-quality box variables. Moreover, we make the multi-task loss functions evidence-regularized, with the intuition that the model's predicted total evidence, as determined by the EDL framework, is inversely related to the losses for multiple tasks. Finally, we propose a post-processing step involving the rescaling of uncertainties to ensure uniformity across the diverse box parameters. Simultaneously, we minimize over a monotonic function \(f\) parameterized by \(\kappa\), choosing the value of \(\kappa\) for which passing the uncertainties through \(f\) yields the lowest Negative Log-Likelihood (NLL) over the same limited training dataset used to train the 3D autolabeler.
With a limited number of annotated frames (e.g. 500 frames), our proposed MEDL-U not only generates 3D box annotations but also measures of uncertainty for each pseudo label box parameter, which can be utilized for loss reweighting during the training of existing 3D object detectors. Extensive experiments demonstrate that our MEDL-U improves the performance of 3D detectors during inference on the KITTI _val_ and _test_ set and outperforms previous 3D autolabelers.
## II Related Literature
### _Automatic 3D Bounding Boxes Annotation_
Recently, the literature has witnessed a rise in 3D automatic annotation frameworks. An example is WSPCD [12], which allows learning 3D object parameters from a few weakly annotated examples. It has a two-stage architecture: the first stage generates cylindrical object proposals and the second stage predicts cuboids and confidence scores. A non-learning-based approach that detects vehicles in point clouds without any 3D annotations is FGR [4], which also consists of two stages: a coarse 3D segmentation stage and a bounding box estimation stage. More recently, Liu _et al.[6]_ proposed a Transformer-based 3D annotator called MTrans, which addresses the prevalent sparsity problem of unstructured point clouds from LiDAR scans by generating extra 3D points through a multimodal self-attention mechanism combined with additional multi-task and self-supervision designs. Different from previous approaches, Qian _et al.[7]_ propose a simplified end-to-end Transformer model, CAT, which captures local and global relationships through an encoder-decoder architecture. The encoder consists of an intra-object encoder (local) and an inter-object encoder (global), which perform self-attention along the sequence and batch dimensions. Through this, it can model the inter-object feature relations that give additional information to hard samples. Additionally, several other approaches (GAL [13], VS3D [14], WS3DPR [15]) have been proposed. However, all these 3D automatic annotators only generate 3D pseudo labels without any estimate of the uncertainty or noise associated with them. In this study, we utilize the generated 3D pseudo labels and the estimated uncertainties to train 3D detectors. This approach mitigates the impact of noisy labels by reducing the influence of inaccurate supervision signals and enabling the model to learn from the more reliable pseudo labels.
### _Uncertainty Estimation and 3D Probabilistic Detection_
Uncertainties in deep learning-based predictions can be categorized into two types: one caused by inherent noise in the data (aleatoric), and the other being model uncertainty due to incomplete training or model design (epistemic). As a tool for uncertainty estimation in regression tasks [8], Evidential Deep Learning (EDL) has found diverse applications in various tasks such as stereo matching [9], open set recognition [10], molecular structure prediction [16], and remote sensing [11]. In this work, we use the EDL framework to estimate prediction uncertainties in 3D object detection. Utilizing these uncertainties to define pseudo label distributions, probabilistic object detectors, which typically adapt the architecture of deterministic detectors, can predict probability distributions for object categories and bounding boxes. A framework that utilizes probabilistic detectors is GLENet [17], where the 3D detector predicts distributions for the 3D box and the ground truth labels are assumed to follow a Gaussian distribution with the uncertainties as the variance. The models are trained with a KL divergence loss to supervise the predicted localization uncertainty. In this work, we follow the same approach but instead incorporate 3D pseudo label uncertainties.
## III Evidential Deep Learning (EDL) for Uncertainty Estimation in 3D Automatic Labelers
### _Automatic Annotation with Pseudo label Uncertainty Estimation_
Given the point cloud data, the 2D image and the 2D bounding boxes of each object, the objective of this work is to generate the 3D bounding box annotation for each object and estimate the corresponding uncertainty for each 3D box parameter. First, the autolabeler is trained with a small set of ground truth 3D bounding boxes (e.g., 500 frames of data); its inputs are the point cloud data, the 2D image and the 2D bounding boxes of each object, and its outputs are the estimated 3D bounding boxes and the corresponding uncertainty estimates. Secondly,
with the trained autolabeler, we employ it to predict the 3D bounding boxes for the remaining data and do the uncertainty estimation for the predicted 3D boxes. Finally, we leverage the predicted 3D bounding boxes and estimated uncertainty to train a downstream probabilistic 3D detector on the massive weakly annotated data. Compared to fully supervised setting, our work only needs a few frames of labeled data to train the 3D autolabeler, which significantly reduces the manual annotation cost.
In this paper, we build the model architecture for the 3D automatic labeler from MTrans [6]. MTrans extracts object features using a multimodal self-attention module that processes point cloud and image inputs fused with point-level embedding vectors. The extracted object features are utilized for various tasks such as foreground segmentation, point generation, and 3D box regression. However, the generated 3D box pseudo labels may contain noise. To account for these inaccuracies, we incorporate an uncertainty estimation task into MTrans by applying EDL, a powerful uncertainty estimation framework. For a straightforward incorporation of uncertainty estimation in MTrans via EDL, we include an evidential box head to regress the parameters of the evidential distribution and the 3D bounding box. For training the model, we replace the dIoU loss with the evidential loss to supervise the model in learning the box parameters. Moreover, the evidence regularizer is added to calibrate the uncertainties. However, there are problems with this approach of directly applying EDL in MTrans, which we discuss in the next sections.
### _Background on Evidential Deep Learning_
In 3D object detection, we define a 3D bounding box by its center coordinates (\(x\), \(y\) and \(z\)), length (\(l\)), width (\(w\)), height (\(h\)), and rotation (yaw angle denoted as \(rot\)). From the viewpoint of EDL, we assume each label \(j\in\mathbb{J}=\{x,y,z,l,w,h,rot\}\) is drawn i.i.d. from a Gaussian distribution where the mean \(\mu_{j}\) and variance \(\sigma_{j}^{2}\) are unknown. EDL framework assumes that \(\mu_{j}\) is drawn from a Gaussian prior and \(\sigma_{j}^{2}\) is drawn from an inverse-gamma prior.
\[j\sim\mathcal{N}(\mu_{j},\sigma_{j}^{2}),\ \ \ \mu_{j}\sim\mathcal{N}(\gamma_{j}, \sigma_{j}^{2}\nu_{j}^{-1}),\ \ \ \sigma_{j}^{2}\sim\Gamma^{-1}(\alpha_{j},\beta_{j})\]
where \(\gamma_{j}\in\mathbb{R}\), \(\nu_{j}>0\), \(\alpha_{j}>1\), \(\beta_{j}>0\), and \(\Gamma(\cdot)\) is the gamma function.
Let \(\Phi_{j}\) and \(\Theta_{j}\) denote the sets of parameters \(\{\mu_{j},\sigma_{j}^{2}\}\) and \(\{\gamma_{j},\nu_{j},\alpha_{j},\beta_{j}\}\), respectively. Assuming independence of the mean and variance, the posterior \(p(\Phi_{j}|\Theta_{j})\) is a normal-inverse gamma (NIG) distribution, the conjugate prior of the Gaussian likelihood.
As presented in [8], the hyperparameters of the evidential distribution can be obtained by training a deep neural network (called evidential head) to output such values. For each 3D bounding box parameter, our model predicts four evidential parameters: \(\gamma_{j},\nu_{j},\alpha_{j},\beta_{j}\).
Through an analytic computation of the maximum-likelihood Gaussian, without the repeated inference required by sampling-based methods [8], EDL provides a framework for estimating uncertainties in regression. We can calculate the prediction, the aleatoric uncertainty, and the epistemic uncertainty for each 3D box parameter as follows:
\[E[\mu_{j}]=\gamma_{j},\ \ \ \ E[\sigma_{j}^{2}]=\frac{\beta_{j}}{ \alpha_{j}-1}, \tag{1}\] \[Var[\mu_{j}]=E[\sigma_{j}^{2}]/\nu_{j}=\frac{\beta_{j}}{\nu_{j}( \alpha_{j}-1)}. \tag{2}\]
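For concreteness, Eqs. (1)-(2) translate directly into code. The following NumPy sketch is our illustration rather than the authors' released implementation; it simply maps the four NIG parameters to the point prediction and its aleatoric and epistemic uncertainties.

```python
import numpy as np

def nig_moments(gamma, nu, alpha, beta):
    """Map NIG parameters to prediction and uncertainties (Eqs. (1)-(2)).

    gamma, nu, alpha, beta: arrays of shape (..., 7), one entry per box
    parameter in J = {x, y, z, l, w, h, rot}; assumes alpha > 1, nu > 0.
    """
    prediction = gamma                          # E[mu_j]
    aleatoric = beta / (alpha - 1.0)            # E[sigma_j^2]
    epistemic = beta / (nu * (alpha - 1.0))     # Var[mu_j]
    return prediction, aleatoric, epistemic
```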
_Maximizing the data fit:_ We are given the hyperparameters \(\Theta_{j}\) of the evidential distribution as outputs of the proposed evidential head for \(j\in\mathbb{J}=\{x,y,z,l,w,h,rot\}\). The likelihood of an observation \(y_{j}\) is computed by marginalising over the likelihood parameters \(\Phi_{j}\). Imposing the NIG prior on the Gaussian likelihood yields an analytical solution:
\[p(y_{j}|\Theta_{j})=St_{2\alpha_{j}}(y_{j}|\gamma_{j},\frac{\beta_{j}(1+\nu _{j})}{\nu_{j}\alpha_{j}}), \tag{3}\]
where \(St_{\nu}(t|r,s)\) corresponds to the evaluation of the Student's t-distribution at the value t, with parameters for location (\(r\)), scale (\(s\)), and degrees of freedom (\(\nu\)).
For training the EDL framework, we define the evidential loss as the mean of the negative log likelihood (NLL) for each 3D box parameter as follows:
\[\mathcal{L}_{evi}=-\frac{1}{|\mathbb{J}|}\sum_{j\in\mathbb{J}}\log\ p(y_{j}|\Theta_{j}). \tag{4}\]
_Uncertainty Calibration:_ Similar to Amini _et al.[8]_, we can scale the total evidence with the prediction error for each 3D box parameter in the following manner:
\[\mathcal{L}_{R}(\theta)=\frac{1}{|\mathbb{J}|}\ \sum_{j\in\mathbb{J}}\phi_{j} \|y_{j}-\gamma_{j}\|, \tag{5}\]
where \(\phi_{j}\) is the total evidence defined as \(\phi_{j}=2\nu_{j}+\alpha_{j}\), and \(||\cdot||\) is L1 norm.
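Expanding the Student-t density in Eq. (3) gives the standard closed form of the NLL from Amini _et al.[8]_ with \(\Omega_{j}=2\beta_{j}(1+\nu_{j})\). The PyTorch sketch below is a minimal illustration of Eqs. (4)-(5), assuming each argument is a tensor holding the per-parameter evidential outputs:

```python
import torch

def evidential_nll(y, gamma, nu, alpha, beta):
    """Eq. (4): mean NLL of the Student-t marginal in Eq. (3).

    Uses the closed form of Amini et al. [8] with omega = 2 * beta * (1 + nu).
    """
    omega = 2.0 * beta * (1.0 + nu)
    nll = (0.5 * torch.log(torch.pi / nu)
           - alpha * torch.log(omega)
           + (alpha + 0.5) * torch.log(nu * (y - gamma) ** 2 + omega)
           + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))
    return nll.mean()

def evidence_regularizer(y, gamma, nu, alpha):
    """Eq. (5): L1 error scaled by the total evidence phi = 2 * nu + alpha."""
    phi = 2.0 * nu + alpha
    return (phi * torch.abs(y - gamma)).mean()
```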
### _Problems with Directly Applying EDL to 3D Bounding Box Regression_
* Significant variability exists in the uncertainty estimates for the box regression parameters, with some reaching excessively high values. As also discussed in [18], the evidential NLL loss suffers from a gradient shrinkage problem: the model can decrease the loss by inflating the uncertainty instead of producing accurate point estimates of the predicted bounding box parameters, since the gradient with respect to the predicted values becomes very small once the uncertainties are large.
* Since the NLL-based evidential loss is not sufficient to optimize the accuracy of the prediction, the generated pseudo labels exhibit lower quality than those of MTrans. Empirical results also demonstrate that an IoU-based loss is better suited than the evidential loss for learning the 3D box parameters, as the latter treats each 3D box parameter independently.
### _Evidence-aware Multi-task Loss_
Intuitively, the loss information obtained from the several multi-task loss functions during training of the same object can help the model gauge the evidence in support of its prediction. In line with prior work on uncertainty estimation [19, 9], we introduce regularized multi-task loss functions whose form follows the NLL minimization used in aleatoric uncertainty estimation and which intuitively implement learned loss attenuation. The main insight behind these loss functions is that the model's predicted evidence is inversely related to the losses for the multiple tasks, effectively serving as an evidence regularizer. Let \(\mathcal{L}_{t}^{\prime}\) be the loss function corresponding to task \(t\), where \(t\in\{\text{seg, depth, conf, dir}\}\). The proposed evidence-aware multi-task loss is
\[\begin{split}\mathcal{L}_{t}&=\frac{\mathcal{L}_{t }^{{}^{\prime}}}{1/(\phi-1)}+\text{ log }\frac{1}{\phi-1}\\ &=(\nu+2\alpha-1)\,\mathcal{L}_{t}^{{}^{\prime}}-\text{ log }(\nu+2\alpha-1),\end{split} \tag{6}\]
where \(\alpha=|\mathbb{J}|^{-1}\sum_{j\in\mathbb{J}}\alpha_{j}\) and \(\nu=|\mathbb{J}|^{-1}\sum_{j\in\mathbb{J}}\nu_{j}\). In Figures C.1 and C.2, we demonstrate the effect of including the evidence-aware multi-task loss in the magnitude of the epistemic uncertainties.
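Eq. (6), and likewise Eq. (7) below with \(\mathcal{L}^{\prime}=\mathcal{R}+(1-IoU)\), can be implemented as one generic wrapper around a base loss. A minimal sketch, mirroring the printed expansion with the factor \(\nu+2\alpha-1\):

```python
import torch

def evidence_aware(base_loss, nu, alpha):
    """Wrap a scalar task loss L' as in Eq. (6): ev * L' - log(ev).

    nu and alpha are the per-object means of nu_j and alpha_j over the
    seven box parameters; larger predicted evidence amplifies the task
    loss, so the model is penalized for being confident when wrong.
    """
    ev = nu + 2.0 * alpha - 1.0
    return ev * base_loss - torch.log(ev)
```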
### _Uncertainty-aware IoU Loss_
While the NLL-based evidential loss can enable the model to learn the 3D box and evidential parameters, it is not sufficient to regress 3D box variables with a quality comparable to those produced by existing autolabelers. Hence, we propose to include an IoU-based loss inspired by the DIoU loss [20], similar in form to Eq. (6). This makes both \(\mathcal{R}\) (the penalty term between the prediction and the ground truth) and the IoU-related term uncertainty-aware.
\[\mathcal{L}_{IoU}=(\nu+2\alpha-1)\cdot(\mathcal{R}+(1-IoU))-\text{ log }(\nu+2\alpha-1). \tag{7}\]
Incorporating this new evidence-aware IoU loss improves the model's ability to handle uncertainty while generating high-quality 3D boxes comparable to those of other 3D autolabelers. In Figure D.3, we demonstrate the effect of including the uncertainty-aware IoU loss on the model performance on the validation set. Note that the original evidence regularizer [8] could also train \(\gamma_{j}\). Our empirical findings suggest that eliminating this regularizer is necessary, as components of the proposed multi-task losses and the uncertainty-aware IoU loss already serve regularization purposes.
### _Training of the 3D Autolabeler_
In summary, the final overall loss function \(\mathcal{L}\) is computed as a weighted combination of the evidential loss, the uncertainty-aware IoU loss, and the multi-task losses with evidence regularizers:
\[\begin{split}\mathcal{L}=\eta_{seg}\mathcal{L}_{seg}+\eta_{depth}\mathcal{L}_{depth}+\eta_{conf}\mathcal{L}_{conf}+\eta_{dir}\mathcal{L}_{dir}\\ +\eta_{evi}\mathcal{L}_{evi}+\eta_{IoU}\mathcal{L}_{IoU},\end{split} \tag{8}\]

where \(\eta_{seg},\eta_{depth},\eta_{conf},\eta_{dir},\eta_{IoU}\) and \(\eta_{evi}\) are hyperparameters. Please refer to Figure 2 for the overall workflow of the model.
### _Pseudo Label Uncertainty Post-processing_
Prior to utilizing the pseudo labels and uncertainties as supervision signals when training existing 3D detectors, a post-processing step is needed to address the variability in the magnitudes of the uncertainties associated with each 3D box parameter and to make the uncertainties more suitable for the downstream task. We propose a post-processing procedure that rescales the uncertainties while ensuring that the resulting values maintain their Spearman's rank correlation with two crucial metrics: the L2 norm of the residuals and the 3D IoU between the predicted box and the ground truth box. Initially, the predicted epistemic uncertainty for each 3D box parameter \(j\), where \(j\in\mathbb{J}=\{x,y,z,l,w,h,rot\}\), undergoes a transformation that constrains it to the range 0 to 1 through a simple monotonic function such as min-max scaling. Moreover, we pass the uncertainty estimate for each box parameter \(x_{j}\) to a function \(f\), formulated as \(f(x_{j})=x_{j}^{1/\kappa_{j}}\), where we select \(\kappa_{j}\in[0,10]\) to minimize the NLL of the uncertainties for each 3D box parameter \(j\) with respect to the same limited training data used to train the 3D autolabeler. The assumption is that a lower NLL means better uncertainty estimates, which in turn improves supervision for the downstream task. Lastly, we generate multiple variations of the uncertainty by passing it through the function \(g(x_{j})=x_{j}^{1/\epsilon_{j}}\), with \(\epsilon_{j}\) acting as a downstream training hyperparameter. The rationale for introducing \(\epsilon_{j}\) is that \(\kappa_{j}\) only minimizes the NLL over the limited training data, so an additional parameter may be essential for adjusting the uncertainties to achieve a lower NLL over the entire training dataset.
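A minimal sketch of this post-processing for a single box parameter \(j\) is given below. Treating the rescaled value as the variance of a Gaussian centered on the pseudo label when evaluating the NLL is an assumption on our part, as are the function and variable names:

```python
import numpy as np

def postprocess_uncertainty(u, residuals, kappas=np.linspace(0.1, 10.0, 100)):
    """Min-max scale uncertainties, then pick kappa minimizing Gaussian NLL.

    Both transforms are monotonic, so the Spearman rank correlation with
    the residuals is preserved; residuals come from the small labeled
    split used to train the autolabeler.
    """
    eps = 1e-6
    u = (u - u.min()) / (u.max() - u.min() + eps) + eps  # into (0, 1]

    def nll(var):
        return np.mean(0.5 * np.log(2 * np.pi * var) + residuals ** 2 / (2 * var))

    best_kappa = min(kappas, key=lambda k: nll(u ** (1.0 / k)))
    return u ** (1.0 / best_kappa), best_kappa
```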
### _Downstream Training via Probabilistic 3D Detector_
A probabilistic object detector enables the inclusion of pseudo-label uncertainties during the training phase, where these uncertainties can be interpreted as factors for reweighting losses. Our work follows [21, 17] in transforming a 3D detector from deterministic to probabilistic, where the detection head is modified to estimate a Gaussian probability distribution over bounding boxes. Let \(\theta\) denote the learnable network weights of a typical detector, \(\hat{y}\) the predicted box parameters, and \(\hat{\sigma}^{2}\) the predicted localization variance. Moreover, the pseudo ground truth bounding boxes are also assumed to be Gaussian distributed with variance \(\sigma^{2}\), where \(\sigma^{2}\) is estimated by MEDL-U. Let the pseudo ground truth bounding box be denoted by \(y_{g}\) and let D refer to the pseudo label distribution. The generated pseudo label uncertainty can then be incorporated in the KL divergence loss between the distributions of the prediction and the pseudo ground truth in the detection head:
\[\begin{split} L_{reg}&=D_{KL}(P_{D}(y)||P_{\theta}( y))\\ &=log\frac{\hat{\sigma}}{\sigma}+\frac{\sigma^{2}}{2\hat{\sigma}^{ 2}}+\frac{(y_{g}-\hat{y})^{2}}{2\hat{\sigma}^{2}}.\end{split} \tag{9}\]
Similar to [17, 22], we also employ 3D Variance Voting which uses the predicted variance \(\hat{\sigma}^{2}\) to combine nearby bounding boxes for better 3D localization prediction.
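Eq. (9) amounts to a few tensor operations. In the sketch below, having the detection head predict a log-variance is a common numerical-stability choice that we assume here; the constant term of the Gaussian KL is dropped, as in Eq. (9):

```python
import torch

def kld_reg_loss(y_pred, log_var_pred, y_pseudo, var_pseudo):
    """Eq. (9): KL divergence between pseudo-label and predicted Gaussians.

    var_pseudo is the (post-processed) MEDL-U uncertainty for each box
    parameter; low-variance pseudo labels are weighted more strongly, so
    noisy labels are down-weighted during training.
    """
    var_pred = torch.exp(log_var_pred)
    return (0.5 * (torch.log(var_pred) - torch.log(var_pseudo))
            + var_pseudo / (2.0 * var_pred)
            + (y_pseudo - y_pred) ** 2 / (2.0 * var_pred)).mean()
```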
## IV Experimental Setup
### _Dataset_
The KITTI Object Detection dataset [23], renowned for 3D detection in autonomous driving, is employed in this study.
The dataset has a total of 7481 frames with labels in 3D. Following the official procedure, the dataset is divided into training and validation sets, consisting of 3,712 and 3,769 frames, respectively. Similar to previous works [4, 5, 6, 7], we concentrate on the Car class and exclude objects with fewer than 5 foreground LiDAR points.
### _Implementation Details and Model Structure_
Our method is implemented in PyTorch [24]. Similar to the original MTrans [6], the MEDL-U architecture incorporates four multimodal self-attention layers, each with a hidden size of 768 and 12 attention heads. Unless otherwise stated, training the autolabeler requires only 500 annotated frames. We employed a dropout rate of 0.4 and utilized the Adam optimizer with a learning rate of \(6.0\times 10^{-5}\). Autolabeler training is conducted for 300 epochs with a batch size of 5. Training of the probabilistic 3D detectors is conducted for 80 epochs. Note that only the epistemic uncertainties from MEDL-U are utilized. Unless specified differently, hyperparameter tuning on the KITTI validation set suggests using \(\epsilon=1\) for PointPillars and CIA-SSD, and \(\epsilon=5\) for other detectors. All training runs are executed on an NVIDIA RTX 2080Ti GPU.
The MEDL-U evidential regression head consists of four output units for each of the seven box attributes. The inputs to this head are the transformed element representations extracted from the self-attention layers. The evidential regression head comprises a sequence of linear layers, followed by LayerNorm, a Dropout layer, and a ReLU activation function. To ensure that certain values are positive, a Softplus activation is applied to \(\nu\), \(\alpha\), and \(\beta\), where \(\alpha\) is then incremented by 1 to ensure \(\alpha>1\). For \(\gamma\), a linear activation is used. Overall, MEDL-U has over 23 million trainable parameters. It is trained to learn seven evidential distributions simultaneously, one for each 3D box parameter, along with the other tasks of segmentation, point generation, direction, and confidence prediction. In the next sections, MEDL refers to utilizing the pseudo labels only, while MEDL-U refers to utilizing both the pseudo labels and the uncertainties in the downstream training of 3D detectors.
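The description above fixes the activations but not every layer, so the PyTorch sketch below fills the gaps with illustrative assumptions (a single hidden layer operating on the stated 768-dimensional features); it is not the released architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Sketch of the evidential regression head: 4 outputs per box attribute."""

    def __init__(self, dim=768, n_attrs=7, p_drop=0.4):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(dim, dim), nn.LayerNorm(dim), nn.Dropout(p_drop), nn.ReLU())
        self.out = nn.Linear(dim, 4 * n_attrs)

    def forward(self, feats):
        gamma, nu, alpha, beta = self.out(self.trunk(feats)).chunk(4, dim=-1)
        nu = F.softplus(nu)              # nu > 0
        alpha = F.softplus(alpha) + 1.0  # alpha > 1
        beta = F.softplus(beta)          # beta > 0
        return gamma, nu, alpha, beta    # gamma keeps a linear activation
```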
### _Evaluation metrics_
#### IV-C1 3D Box Prediction
To assess localization, we measure the Average Precision for 3D objects (\(AP_{3D}\)) and in Bird's Eye View (\(AP_{BEV}\)), with a stringent IoU threshold of 0.70 for positive detections. Average Precision at 40 points (R40) means that precision and recall are calculated at 40 different recall levels.
#### IV-C2 Uncertainty Estimation
Widely employed in previous studies [25], the Negative Log-Likelihood (NLL) is utilized as a metric to evaluate the model's ability to estimate uncertainty. Lower NLL values indicate more accurate and more effective uncertainty estimation. Moreover, we also calculate the Spearman's rank correlation coefficients of the predicted uncertainties to the corresponding L2 norm of the residuals.
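Both metrics are simple to compute once the residuals are in hand. A sketch, assuming per-parameter variances and treating each box parameter as an independent Gaussian:

```python
import numpy as np
from scipy.stats import spearmanr

def eval_uncertainty(pred, gt, variances):
    """NLL plus Spearman correlation between uncertainty and residual norm.

    pred, gt, variances: arrays of shape (N, 7), one column per box parameter.
    """
    resid = pred - gt
    nll = np.mean(0.5 * np.log(2 * np.pi * variances) + resid ** 2 / (2 * variances))
    rho, _ = spearmanr(np.linalg.norm(resid, axis=1), variances.mean(axis=1))
    return nll, rho
```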
### _Experiment on Different Kinds of 3D Detectors_
We evaluate several one-stage and two-stage 3D detectors on the KITTI _val_ set when trained using outputs from various annotators. As seen in Table I, detectors trained on MEDL-U outputs outperform vanilla deterministic 3D detectors trained on MTrans and MEDL pseudo labels, demonstrating the effectiveness of utilizing not only the pseudo labels but also the uncertainty estimates for each 3D box parameter.
### _Comparison with 3D Automatic Annotation Frameworks_
As shown in Table II, the probabilistic PointRCNN trained with MEDL-U outputs yields superior performance relative to all existing and current 3D autolabeling methods in terms of \(AP_{3D}\).

Fig. 2: Architecture of the Training and Automatic Annotation Workflow of MEDL-U. The evidential box head regresses the evidential parameters which can be used to calculate the 3D box parameters and the uncertainties. During the automatic annotation, MEDL-U regresses 3D box parameters and the associated uncertainties for the unlabeled data. In the downstream training of probabilistic 3D detectors, the generated pseudo labels provide supervision during training and the associated box parameter uncertainties serve as factors for reweighting via the KLD loss.
In Table III, the probabilistic PointRCNN trained with outputs of MEDL-U on the entire KITTI training and val sets achieves better performance on the KITTI official _test_ set than PointRCNN trained with vanilla MTrans. Moreover, PointRCNN trained with MEDL-U outputs yields superior performance in terms of \(AP_{3D}\) and \(AP_{BEV}\) for both the Easy and Moderate levels relative to all existing 3D automatic labeling methods. MEDL-U does not outperform CAT across all difficulty levels, which is understandable considering that MEDL-U is built upon MTrans, chosen for its open-source availability. Moreover, CAT is trained for 1000 epochs with a batch size of 24, which differs from the training setting for MTrans and MEDL-U. We argue that the enhancements seen in MEDL-U over MTrans can also be applied to CAT.
We also show evaluation performance on the KITTI _val_ and _test_ set using PointPillars when MTrans and MEDL-U are trained with 500 and 125 annotated frames. MEDL-U significantly improves the baseline as shown in Table IV.
### _Comparison with Other Uncertainty Estimation Methods_
Using the 3D box parameter uncertainties generated by MEDL-U and by other popular uncertainty estimation methods, we evaluate the probabilistic version of PointRCNN on the KITTI _val_ set. Three baseline methods were implemented: (1) a Monte Carlo dropout (MC Dropout) system with a dropout rate of 0.2, forwarded 5 times during inference; (2) a deep ensemble of 5 systems trained with different random seeds; (3) the confidence score predicted by vanilla MTrans, used as a proxy for uncertainty. As shown in Table V, PointRCNN trained with MEDL-U (\(\epsilon=5\)) yields the overall best result in terms of \(AP_{3D}\)\(R\)40. Noticeably, using the other uncertainty estimation methods to generate uncertainties also effectively increases the \(AP_{3D}\)\(R\)40 of the _base_ method, although MC Dropout and Deep Ensemble come at the cost of huge additional computational overhead. While the Deep Ensemble approach can improve the _base_ result, the original pseudo label quality is relatively poor.
in the downstream 3D object detection task on the KITTI _val_ and _test_ set, showing the importance of quantifying uncertainties or noise in pseudo labels.
|
2309.08687 | Speeding up charge exchange recombination spectroscopy analysis in
support of NERSC/DIII-D realtime workflow | We report optimization work made in support of the development of a realtime
Superfacility workflow between DIII-D and NERSC. At DIII-D, the ion properties
measured by charge exchange recombination (CER) spectroscopy are required
inputs for a Superfacility realtime workflow that computes the full plasma
kinetic equilibrium. In this workflow, minutes matter since the results must be
ready during the brief 10-15 minute pause between plasma discharges. Prior to
this work, a sample CERFIT analysis took approximately 15 minutes. Because the
problem consists of many calculations that can be done independently, we were
able to restructure the CERFIT code to leverage this parallelism with Slurm job
arrays. We reduced the runtime to approximately 51 seconds -- a speedup of
roughly 20x, saving valuable time for both the scientists interested in the CER
results and also for the larger equilibrium reconstruction workflow. | Aarushi Jain, Laurie Stephey, Erik Linsenmayer, Colin Chrystal, Jonathan Dursi, Hannah Ross | 2023-09-15T18:27:22Z | http://arxiv.org/abs/2309.08687v2 | # Speeding up charge exchange recombination spectroscopy analysis in support of NERSC/DIII-D
###### Abstract
We report optimization work made in support of the development of a realtime Superfacility workflow between DIII-D and NERSC. At DIII-D, the ion properties measured by charge exchange recombination (CER) spectroscopy are required inputs for a Superfacility realtime workflow that computes the full plasma kinetic equilibrium. In this workflow, minutes matter since the results must be ready during the brief 10-15 minute pause between plasma discharges. Prior to this work, a sample CERFIT analysis took approximately 15 minutes. Because the problem consists of many calculations that can be done independently, we were able to restructure the CERFIT code to leverage this parallelism with Slurm job arrays. We reduced the runtime to approximately 51 seconds- a speedup of roughly 20x, saving valuable time for both the scientists interested in the CER results and also for the larger equilibrium reconstruction workflow.
HPC Optimization Realtime Superfacility Parallelism Slurm Job Arrays NERSC Fusion Plasma DIII-D Charge Exchange
## 1 Introduction
DIII-D is currently the largest operating magnetic confinement fusion experiment in the United States. It operates over campaigns several months long, with 30-40 plasma discharges per experimental runday and discharges occurring approximately every 15 minutes. DIII-D has over 50 diagnostics [1], many of which require moderate to substantial computational resources to convert raw signals into usable physical measurement quantities [2].
The major goal of the DIII-D/NERSC Superfacility effort [3, 4] is to optimize an equilibrium reconstruction workflow at NERSC so that the walltime is fast enough to enable between-discharge analysis. This Superfacility work builds on previous efforts to couple magnetic confinement fusion experiments with realtime HPC resources [5, 6]. CAKE (Consistent Automatic Kinetic Equilibrium) [7] is a module within the larger OMFIT framework [8]. The ion properties measured by the CER (charge exchange recombination spectroscopy) diagnostic are required inputs for the
CAKE workflow, so the reconstruction portion of CAKE cannot start running until CERFIT, the analysis suite for the CER diagnostic, completes.
CAKE provides fully kinetic plasma equilibrium reconstructions, yielding a higher accuracy magnetic topology of the plasma. Although the plasma topology is important to both operators and researchers, it is computationally expensive to obtain and the current CAKE walltime is too long to be run between discharges. As a result, CAKE is often run after the experimental runday has completed; at this time any insights about plasma equilibrium are no longer actionable.
The DIII-D CER diagnostic system (see [9] and the references therein) is comprised of approximately 76 channels, each of which has a different viewing position, or chord, into the DIII-D vacuum vessel. A diagram of the CER system is shown in Figure 1. Visible light from these chords is analyzed by spectrometers, and particular spectral lines emitted by the plasma are used to determine ion velocity, temperature, and density via the measured Doppler shift, Doppler broadening, and radiance. These quantities are derived from fits to the spectra, and the code used to perform these fits is called CERFIT [10; 9]. For standard analysis, CERFIT processes 64 chords.
The duration of a typical DIII-D discharge is between 4 and 10 seconds, and the CER system normally acquires data at 200 Hz. While any one spectral fit is not computationally difficult, complete analysis for a typical discharge requires more than 10,000 fits, and executing the highest quality automatic version of these fits in series previously required approximately 15 minutes running on a node of the DIII-D Omega cluster (node details are discussed in detail in Section 2). Since information from a previous discharge is needed as quickly as possible to inform the setup of the next discharge, automatic CER fits done between discharges are less complex (and correspondingly less accurate) and take approximately 4 minutes to complete. For this work, the goal was to speed up the highest quality fitting (which is required for CAKE) by at least a factor of 10 so that those results could be used between discharges and also be inputs to further computations that also aim to complete significantly before the next discharge.
## 2 Experimental setup
We performed this study using the local DIII-D cluster, Omega. Owing to its nature as a data-analysis code, CERFIT is substantially less portable than a simulation code. CERFIT has complex dependencies that are not easily satisfied on NERSC systems. One of these dependencies is a custom DIII-D data access library, PTDATA [11], which to our knowledge has not yet been installed on a system outside of DIII-D. In addition to the PTDATA library, the DIII-D raw data itself must also be accessible as an input to CERFIT, including the timing and profile of the neutral beams, which requires access to the DIII-D data system. CERFIT has several sub-modules which are tracked via a set of environment variables, which in turn can be set by custom system modules. The CERFIT test suite requires known-good datasets in an expected directory structure. All of these factors make it a challenge to move CERFIT to another system; however, we would like to study the feasibility of running this workflow at NERSC as future work, to leverage the additional computational resources available on Perlmutter [12].
Figure 1: A diagram of the DIII-D Charge Exchange Recombination Spectroscopy diagnostic system (CER). There are approximately 76 total channels in the system; vertical viewing channels (chords) are shown in blue, and tangential viewing channels (chords) are shown in red. 64 chords are analyzed in standard CERFIT processing [9].
The Omega cluster is a heterogeneous Linux cluster situated at DIII-D. It contains two login nodes with 2x Intel Xeon Gold 6252 CPUs and 34 total compute nodes, including: 2 nodes of 2x Intel Xeon Gold 6252 CPUs, each with 1 NVIDIA V100, 1 node of 2x Intel Xeon Gold 6252, 2 nodes of 2x Intel Xeon Platinum 8260, 16 nodes of 2x AMD EPYC 7513, 12 nodes of 2x AMD EPYC 7502, and 1 node of 2x AMD EPYC 7343. Job submission is managed by the Slurm scheduler. For queue permission reasons, we targeted the two Intel Xeon Gold 6252 CPU nodes during our study, as the CERFIT code currently must be compiled and run on Intel nodes due to an apparent outstanding nvfortran compiler issue on AMD nodes. Work is ongoing to address this nvfortran issue. To work around it, the CER team is actively switching to the gfortran compiler. Early results have shown that there are no issues using the AMD hardware with gfortran.
## 3 Determining Optimization Strategy
FITCER, the outer wrapper for CERFIT, performs complete CERFIT analysis of all CER chord data from a discharge. The CERFIT analysis code comprises both Fortran and C code- the main analysis algorithms are written in Fortran, and the main data access components are written in C. CERFIT performs spectral fitting by adding Gaussian functions to produce a synthetic spectrum (_Sum of Gauss_); then, using the Levenberg-Marquardt minimization algorithm, CERFIT iteratively fine-tunes the model to minimize the \(\chi^{2}\) value of the fit, ultimately obtaining the best-fit ion properties based on each fit line's location, width, and amplitude (_Chi-squared minimization_). There are several additional data acquisition and pre-processing steps.
We began this optimization study with application profiling. Understanding the structure of the CERFIT application and where the time was spent was crucial to identifying a strategy to achieve speedup. We used the NVIDIA Nsight system profiler nsys for our CPU code since that was available on the system. We profiled the application running on a single process/single CPU since that was how CERFIT was typically run. Our main interest was how much speedup can be practically achieved for the team given the conditions under which they typically run.
Sample NVIDIA nsys profiling data are shown in Figure 2. Displaying the nsys profiling data in "Bottom-up" view [13] provided useful information for our CPU-only application. It provided information about the structure of the application and the major hotspots in the code. The _Chi-squared minimization_, symbol holger_eval_model, took about 16 percent of total runtime, and the _Sum of Gauss_ function, symbol sum_of_gauss_model_csigma, that constructed synthetic spectra for fitting took about 10 percent of total runtime. Many individual functions performing adjacent tasks accounted for the remainder of the application runtime, which we believe is a result of application branching.
These profiling data indicated that no single function was doing a lot of the heavy lifting in CERFIT. In a best-case scenario in which we ported both _Chi-squared minimization_ and _Sum of Gauss_ to GPU and achieved speedup, we would still be speeding up only about a quarter of the application runtime. Although our initial goal in this study was to try to adapt CERFIT to leverage GPUs, the profiling data made it clear that CERFIT in its present state was not a good candidate for GPU optimization.

Figure 2: NVIDIA nsys profiling results illustrating the runtime distribution of the CERFIT code in “Bottom-up view”. Note that nsys is displaying the symbol names rather than the function names. The _Chi-squared minimization_, symbol holger_eval_model, occupies approximately 16% of the total runtime, while the _Sum of Gauss_ operations, symbol sum_of_gauss_model_csigma, used for preparing synthetic spectra, accounts for around 10%. The absence of a single hotspot suggests that GPU porting of these algorithms may not yield significant overall speedup without a major redesign of the CERFIT code.
Recognizing the limited potential from GPU porting, we redirected our focus towards obtaining CPU speedup through adding parallelism. Both from discussions with the CER team and from our profiling data, we were aware that CERFIT was performing fully independent calculations for each CER chord and could be reconfigured to run in an embarrassingly parallel manner. The independent nature of the calculations made the algorithm an excellent candidate for parallelization, so we decided to pursue this approach.
## 4 Implementing chord-level CPU parallelism
Given the complex nature of the CERFIT code (see Section 3), which involves multiple branching points, our goal was to validate our understanding of chord-level parallelism. We first needed to locate the right place in the code where the chord-splitting takes place. Once located, as a test, we used the Linux utility xargs to divide the CERFIT input file into two pieces and launch two independent CERFIT jobs, each running on a subset of the input file. We observed that CERFIT ran as expected in this mode of operation and determined it was safe to move forward with the strategy depicted in Figure 3.
To obtain results quickly, we began by writing a prototype Python script to break the CERFIT input file into 64 individual input files- one for each viewing chord. This prototype script enabled us to explore large-scale parallelization via Slurm job arrays. Slurm job arrays [14] are a feature in the Slurm workload manager that enables many similar jobs to be launched from a single batch script. We decided to start with Slurm job arrays instead of a solution like MPI largely due to the lower barrier to entry and faster path to implementation. Incorporating MPI into CERFIT, a complex C and Fortran code, would likely be time-consuming and can be considered for future work. One benefit of relying on Slurm to provide parallelism is that the distribution of work is extremely flexible and the size of the job can be quickly adjusted (i.e. using one node instead of two). This is one major advantage compared to a similar MPI implementation.
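A sketch of such a splitter is shown below. The delimiter word "CHORD" and the function name are hypothetical stand-ins (the production code keys on a specific word in the CERFIT input format, as discussed for the Fortran port later in this section); the output names match the chord_<n>.in files read by the Slurm array script that follows.

```python
from pathlib import Path

def split_cerfit_input(master="cerfit.in", keyword="CHORD"):
    """Break the master CERFIT input into one file per viewing chord.

    Splits on a delimiter word rather than a fixed line count, since the
    input structure differs between production runs and the test suite.
    """
    lines = Path(master).read_text().splitlines(keepends=True)
    starts = [i for i, line in enumerate(lines) if keyword in line]
    for n, (a, b) in enumerate(zip(starts, starts[1:] + [len(lines)]), start=1):
        Path(f"chord_{n}.in").write_text("".join(lines[a:b]))
```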
To enable FITCER to launch its own Slurm jobs, we wrote a Slurm job array script designed to launch 64 nearly identical CERFIT jobs, one for each CER chord. The variable SLURM_ARRAY_TASK_ID was used to access the corresponding CERFIT chord input file. The script launched one job per physical CPU core with 24 total jobs per node. Additionally, it was necessary to allocate the memory per CPU correctly so that the resources requested for each job would not exceed the total resources on the node and cause the jobs to block. With this configuration, we observed that our jobs ran in parallel as expected. Our Slurm job array batch script is shown below:
#!/bin/bash
#SBATCH -p gpus
Figure 3: Illustration of serial and parallel execution of CERFIT. In the serial execution, a single input file contains data for all chords, and CERFIT processes each chord sequentially. In contrast, the parallel execution divides the input data into separate files, one for each chord. Multiple instances of CERFIT run in parallel in this embarrassingly parallel model.
#SBATCH --array=1-64
#SBATCH --cpus-per-task=1
#SBATCH -n 1
#SBATCH --ntasks-per-node=24
#SBATCH --mem-per-cpu=1G
srun time cerfit < chord_${SLURM_ARRAY_TASK_ID}.in >& fit_${SLURM_ARRAY_TASK_ID}.out
Once we confirmed that this new chord-parallel structure was suitable, our objective was to implement our changes back into the production version of FITCER, with the goal of making the parallel version of the code look and feel like the original version. The key feature was to re-implement, in Fortran, the chord-splitting function that had been tested via the Python script. A specialized Fortran subroutine was developed to partition the primary input file into individual per-chord input files, named fit_<chord_number>.in. One challenge in translating from a Python prototype to a Fortran subroutine is that handling multi-file I/O requires more care and certainly more lines of code. Another challenge was handling the splitting in a robust manner. During the process, we observed that the structure of the CERFIT input file differs between production runs and runs within the test suite. As a result, we needed to split the file on a certain word rather than after a certain number of lines. The resulting subroutine integrates seamlessly into the existing fitcer_input_file.f90 code.
## 5 CER Regression Testing
Verifying that our modifications to CERFIT did not meaningfully alter the outputs of the code was essential. We used the established CERTEST test script developed at DIII-D to check for correctness of our CERFIT outputs.
CERTEST can be run in two modes- in the first mode, it generates a set of known-good reference files stored in a bespoke directory. In test mode, CERTEST runs CERFIT and generates standard output files, which are then compared to the reference files. We generated the reference files using the original serial code, and we generated the test files using our parallel version of CERFIT. CERTEST needed some modifications to be able to handle the paradigm in which output files are written from Slurm jobs. Once we ran the updated CERTEST, the CER team determined that the differences that were present between the two versions were negligible.
## 6 Results
In this section we discuss the measurements to assess the overall improvements in runtime from our implementation of chord-level parallelism via Slurm job arrays.
We used the following procedure to perform our benchmarking on the Omega cluster and achieve our speedup results shown in Figure 4:
1. We ran CERFIT via FITCER using a standard sample discharge 163100 for analysis. This run was performed on an Omega Intel Xeon Gold 6252 CPU login node since this is the standard procedure for the CER team. We ran 3 trials of the original serial implementation, shown in blue, using the Linux time command to obtain the measurement. The mean runtime of these 3 trials was 1016 seconds.
2. We ran our parallelized version of FITCER using the same standard sample discharge. For this benchmark we ran on two Omega Intel Xeon Gold 6252 compute nodes, each with 24 physical cores, for a total of 48 physical cores. We ran 3 trials of the parallel implementation, shown in orange, using a special bash wrapper script to obtain the full FITCER runtime; a sketch of the wrapper's polling logic is shown below. It was not adequate to time FITCER using Linux time since FITCER currently finishes executing as soon as the job array is submitted via sbatch. In the wrapper script, we included a while loop to query Slurm every 2 seconds to determine if the job array jobs were still running. When the query returned no results, the timer ended and the runtime duration was calculated. Future work could include adding such timing capabilities into FITCER itself. The mean runtime of these 3 trials was 51 seconds. Dividing the original 1016 seconds by 51 seconds, we arrive at a speedup of 19.9, or approximately 20x.
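The production wrapper is a bash script; the Python sketch below expresses the same polling logic. It assumes squeue is on the PATH and that an empty listing for the job ID means every array task has finished:

```python
import subprocess
import time

def time_job_array(job_id, poll_s=2.0):
    """Return elapsed seconds until no tasks of the job array remain queued."""
    t0 = time.time()
    while True:
        out = subprocess.run(["squeue", "-h", "-j", str(job_id)],
                             capture_output=True, text=True).stdout.strip()
        if not out:                 # no pending or running array tasks left
            return time.time() - t0
        time.sleep(poll_s)
```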
We performed a separate study to examine the individual chord processing times. The execution times for each chord are shown in Figure 5. We ran 3 trials of this CERFIT per-chord test and found that some chords are consistently processed more quickly than others. Given this behavior, using a flexible framework like Slurm job arrays affords some load-balancing by allowing any cores that may have finished early the opportunity to process another channel.
It is additionally useful to discuss the speedup in terms of a per-core analysis. The initial benchmark (roughly 1016 seconds) was run in serial on a single core. The final benchmark (roughly 51 seconds) was run on 48 physical cores. With perfect strong scaling one might expect a 48x speedup (roughly 21 seconds)- what accounts for this difference? First, there are 64 total chords being processed, so 16 CPUs processed more than one chord, which reduces the potential strong-scaling speedup. Next, Figure 5 demonstrates that the time to process each chord varies from a few seconds to nearly 30 seconds. In an ideal situation, a CPU that finishes early with a "fast" chord would be available to process another chord. Using Slurm job arrays does inherently provide some load balancing in this regard, although it is not optimized. Finally, there is some overhead in submitting each CERFIT task as a Slurm job. We should note that the target CPU compute nodes were unoccupied, so while there was some overhead to submit and start the jobs, there was no queue wait-time beyond that caused by the application itself. If the test could have been performed on 3 nodes with more than 64 available CPU cores, we might expect to come closer to the ideal strong-scaling speedup, although the runtime is ultimately determined by the "long pole in the tent", which in this case is the chord that takes approximately 30 seconds to process. Since there is so much variation in the per-chord processing time, achieving the ideal strong-scaling runtime of roughly 21 seconds is not possible with this application.
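The bound can be made explicit with a line or two of arithmetic; the values below are the measurements reported above, with the roughly 30-second slowest chord taken from Figure 5:

```python
serial_s, cores, slowest_chord_s = 1016.0, 48, 30.0
ideal_s = serial_s / cores                    # ~21.2 s with perfect scaling
bound_s = max(ideal_s, slowest_chord_s)       # the slowest chord sets the floor
print(f"ideal {ideal_s:.1f} s, achievable floor ~{bound_s:.0f} s, measured 51 s")
```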
Figure 4: Benchmarking results of sequential vs. parallel CERFIT Execution. This plot presents a comparison of execution times for 3 trials of each case. Sequential execution of FITCER, indicated by the blue bars, completes in approximately 15 minutes. In contrast, parallel execution, indicated by the orange bars, completes in approximately 50 seconds. On average, the parallel version yields a 20x speedup relative to the sequential version.
Figure 5: Examination of the individual chord runtimes in CERFIT. We performed 3 trials of this test. The plot shows chord number vs execution time, with each trial shown in a different color. We observed that the differences in processing time between chords are consistent across trials.
## 7 Future Work
The work we have reported here is an initial effort towards speeding up the walltime of CERFIT analysis at DIII-D. However, there are many additional avenues for this work.
First and foremost, the objective of this work is to develop a production quality parallel version of CERFIT that the CER and CAKE teams can use in routine analysis. Work is ongoing to finalize changes, including switching from nvfortran to gfortran and adjusting the Slurm job array configuration to fit Omega queue policies, to enable CERFIT to run routinely between discharges on the AMD compute nodes in the Omega cluster.
One potential next step would be to study replacing the Slurm job array based parallelism with MPI. This would provide a scheduler-independent framework for parallelism. It would also be instructive to evaluate the difference in overhead between starting an independent Slurm job for each chord and MPI process startup within a single job.
The whole of this work was performed on the Omega cluster at DIII-D. However, another major area of study would be to try running this analysis on NERSC's Perlmutter. This would require some additional libraries (like PTDATA, a DIII-D internal data system library) to be locally installed at NERSC. It may also require some examination of the efficiency of external data transfer via PTDATA. The goal would be to determine if the realtime resources NERSC could offer would benefit the overall workflow and overcome the additional overheads of raw data transfer (estimated to be a relatively modest 1 GB per discharge). Running CERFIT locally at NERSC would mean that the ion physics outputs from CERFIT would be located at NERSC and could be used directly in CAKE and other workflows, which could be beneficial. Questions about how to locally store these data for efficient access and for what duration would need to be explored.
## 8 Summary
To summarize, this work describes our efforts to achieve 20x speedup for the high-quality CERFIT CER diagnostic analysis code used at DIII-D which yields plasma ion properties. This speedup is expected to benefit DIII-D scientists and operators working with CER results in the control room since it will provide them this high-quality information substantially faster. We believe this CERFIT speedup will additionally translate into speedup for the CAKE Superfacility workflow project that connects DIII-D to NERSC with the goal of enabling routine between-discharge plasma equilibrium reconstruction.
## Acknowledgments
This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award ASCR-ERCAP0019913.
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences, using the DIII-D National Fusion Facility, a DOE Office of Science user facility, under Award(s) DE-FC02-04ER54698.
Disclaimer: This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.
This work was completed in part at the July 2023 NERSC Open Hackathon, part of the Open Hackathons program. The authors would like to acknowledge OpenACC-Standard.org for their support.
The authors would like to thank David Schissel and 3 anonymous reviewers from the SC23 XLOOP workshop for their suggestions to improve this paper.
Finally, the authors would like to thank the CAKE Superfacility team for useful discussions that contributed to this work. |
2309.06872 | Cyclic 2-Spreads in $V(6,q)$ and Flag-Transitive Affine Linear Spaces | In this paper we completely classify spreads of 2-dimensional subspaces of a
6-dimensional vector space over a finite field of characteristic not two or
three upon which a cyclic group acts transitively. This addresses one of the
remaining open cases in the classification of flag-transitive linear spaces. We
utilise the polynomial approach innovated by Pauley and Bamberg to obtain our
results. | Cian Jameson, John Sheekey | 2023-09-13T10:29:30Z | http://arxiv.org/abs/2309.06872v1 | # Cyclic \(2\)-Spreads in \(V(6,q)\) and Flag-Transitive Affine Linear Spaces
###### Abstract
In this paper we completely classify spreads of \(2\)-dimensional subspaces of a \(6\)-dimensional vector space over a finite field of characteristic not two or three upon which a cyclic group acts transitively. This addresses one of the remaining open cases in the classification of flag-transitive linear spaces. We utilise the polynomial approach innovated by Pauley and Bamberg to obtain our results.
## 1 Introduction
In this paper we aim to construct and classify _spreads_ of a vector space upon which a cyclic group of automorphisms acts transitively. This corresponds to a classification of certain _flag-transitive linear spaces_ with a prescribed automorphism group. The problem of classifying flag-transitive linear spaces has a long history, with a series of celebrated results culminating in [6], which classified most cases, leaving open the case of linear spaces arising from \(t\)-spreads of \(V(tm,q)\) upon which a subgroup of \(\Gamma\mathrm{L}(1,q^{tm})\) acts transitively. This remaining case is, however, a very difficult problem. In [15], Bamberg and Pauley used a polynomial approach to give a new means of attacking this problem in the specific case of a cyclic group acting transitively on a \(2\)-spread in \(V(2m,q)\), including constructing new examples. Recently, in [8], Feng and Lu used this approach and some results on permutation polynomials in order to find further examples.
In this paper we completely solve the case of \(2\)-spreads in a \(6\)-dimensional vector space over any finite field of characteristic not two or three. In particular we construct all possible examples, count the number of equivalence classes, and give canonical representatives for each equivalence class.
## 2 Definitions and background
Throughout the paper we let \(q\) be a power of a prime \(p>3\), \(\mathbb{F}_{q}\) the field with \(q\) elements, and \(\overline{\mathbb{F}_{q}}\) its algebraic closure. We denote by \(V(n,q)\) a vector space of dimension \(n\) over \(\mathbb{F}_{q}\). We will use \(\langle\rangle\) to denote the \(\mathbb{F}_{q}\)-span of a set or list of vectors or elements of an extension field of \(\mathbb{F}_{q}\).
### Spreads
A \(t\)_-spread_ in a vector space \(V=V(n,q)\) is a set \(\mathcal{S}\) of \(t\)-dimensional subspaces such that every nonzero vector of \(V\) is contained in precisely one element of \(\mathcal{S}\). A well-known result of Segre [17] tells us that a \(t\)-spread
exists in \(\mathbb{F}_{q}^{n}\) if and only if \(n=tm\) for some positive integer \(m\). The "only if" part of this statement follows by counting, while the "if" part follows from the so-called _Desarguesian spread_; if we identify \(\mathbb{F}_{q^{tm}}\) and \(V(tm,q)\) as \(\mathbb{F}_{q}\)-vector spaces, then the set
\[\mathcal{D}=\{\langle ax:x\in\mathbb{F}_{q^{t}}\rangle:a\in\mathbb{F}_{q^{tm }}^{\times}\}\]
is a Desarguesian spread.
We say that two \(t\)-spreads \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are _equivalent_ (resp. _projectively equivalent_) if there is an element of \(\Gamma\mathrm{L}(n,q)\) (resp. \(\mathrm{GL}(n,q)\)) mapping \(\mathcal{S}_{1}\) to \(\mathcal{S}_{2}\). The _automorphism group_ of a spread \(\mathcal{S}\) is defined as the setwise stabiliser of \(\mathcal{S}\) in \(\Gamma\mathrm{L}(tm,q)\), and is denoted by \(\mathrm{Aut}(\mathcal{S})\). It is well known that the automorphism group of the Desarguesian spread is isomorphic to \(\Gamma\mathrm{L}(m,q^{t})\). Furthermore this group acts transitively on \(\mathcal{D}\); in fact, it acts transitively on any set of \(m+1\) elements of \(\mathcal{D}\) in general position, where _general position_ means that any \(m\) elements of the set span all of \(V\).
Note that we could equally work in the projective space \(\mathrm{PG}(V)=\mathrm{PG}(tm-1,q)\). In this case for the above we would speak of a \((t-1)\)-spread in a \((tm-1)\)-dimensional projective space, and consider automorphisms of the spread as elements of \(\mathrm{P\Gamma L}(tm,q)\simeq\Gamma\mathrm{L}(tm,q)/\mathbb{F}_{q}^{\times}\). As there is no consensus in the literature regarding whether to use a vector space or projective space setting, we choose to work with the former for convenience but may borrow terminology from the latter. In particular, we will consider \(2\)-spreads in \(V(2m,q)\), but refer to them as _line spreads_ when convenient.
### Linear spaces
A _linear space_ is a point-line incidence geometry \(\mathcal{I}\) in which
1. every pair of points is contained in precisely one common line;
2. every pair of lines meet in at most one common point.
If every pair of lines meet in precisely one common point, it is called a _projective plane_. If for any line \(\ell\) and any point \(p\) not contained in \(\ell\) there exists a unique line containing \(p\) and disjoint from \(\ell\), it is called an _affine space_.
A _flag_ of a point-line incidence geometry is a pair \((p,\ell)\in\mathcal{P}\times\mathcal{L}\) such that \(p\in\ell\). If a point \(p\) is not contained in a line \(\ell\) then \((p,\ell)\) is called an _anti-flag_.
Let \(\mathcal{P}\) and \(\mathcal{L}\) denote the set of points and lines of \(\mathcal{I}\) respectively. A bijective map \(\phi\) from \(\mathcal{P}\) to itself is said to be an _automorphism_ of \(\mathcal{I}\) if the image of the set of points on any line is again the set of points of a line. We denote the group consisting of all automorphisms of \(\mathcal{I}\) as \(\mathrm{Aut}(\mathcal{I})\) and refer to it as _the (full) automorphism group_ of \(\mathcal{I}\). We refer to any subgroup of \(\mathrm{Aut}(\mathcal{I})\) as _a group of automorphisms of \(\mathcal{I}\)_.
We say that a linear space \(\mathcal{I}\) is _point-transitive_ resp. _line-transitive_ resp. _flag-transitive_ if it possesses a group of automorphisms acting transitively on points resp. lines resp. flags. Much work has been done on classifying linear spaces with certain transitivity properties. We refer to [6] for an overview, and summarise the results relevant to this paper in the next section.
### Linear spaces from spreads
From a spread \(\mathcal{S}\) of a vector space \(V\) we can define a point-line incidence structure \(\mathcal{I}(\mathcal{S})\) whose points are the elements of \(V\) and whose lines are cosets of elements of \(\mathcal{S}\); that is, cosets \(u+U\) for \(u\in V\) and \(U\in\mathcal{S}\). It is straightforward to verify that \(\mathcal{I}(\mathcal{S})\) satisfies the axioms of a linear space [2]; indeed, it has the further property of possessing _parallelism_. Such spaces are sometimes referred to as _translation Sperner spaces_. The lines through the point \(u\in V\) are those of the form \(u+U\) for \(U\in\mathcal{S}\), and any vector \(v\neq u\) is contained in \(u+U\) if and only if \(u-v\in U\). Since \(\mathcal{S}\) is a spread, there is a unique spread element \(U\) containing \(u-v\).
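As a toy illustration (a sketch with our own modelling choices, not taken from the literature above), the following verifies these axioms by brute force for \(\mathcal{I}(\mathcal{D})\), where \(\mathcal{D}\) is the Desarguesian \(2\)-spread of \(V(4,2)\), i.e. \(\mathbb{F}_{16}\) over \(\mathbb{F}_{2}\).

```python
# The Desarguesian 2-spread of V(4,2) and the linear space I(S) it defines.
# F_16 is modelled as bitmasks modulo x^4 + x + 1 (a standard but arbitrary
# choice, not one fixed by the paper); addition in F_16 is XOR.
from itertools import combinations

def gmul(a, b):  # carry-less multiplication mod x^4 + x + 1
    r = 0
    for i in range(4):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(7, 3, -1):  # reduce any term of degree >= 4
        if (r >> i) & 1:
            r ^= 0b10011 << (i - 4)
    return r

F4 = {0, 1, 6, 7}  # the subfield F_4 inside F_16 in this model
spread = {frozenset(gmul(a, x) for x in F4) for a in range(1, 16)}
lines = {frozenset(u ^ v for v in L) for u in range(16) for L in spread}
print(len(spread), len(lines))  # expected: 5 spread elements, 20 lines
# every pair of distinct points lies on exactly one line:
print(all(sum(p in L and r in L for L in lines) == 1
          for p, r in combinations(range(16), 2)))  # expected: True
```

The resulting linear space has \(16\) points and \(20\) lines of size \(4\), namely the affine plane \(\mathrm{AG}(2,4)\).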
It is known that the automorphism group of the linear space \(\mathcal{I}(\mathcal{S})\) is equal to \(T.\mathrm{Aut}(\mathcal{S})\), where \(T\) denotes the group of _translations_ (maps of the form \(t_{u}:v\mapsto v+u\) for \(u\in V\)). The subgroup \(T\) clearly acts transitively on points of \(\mathcal{I}(\mathcal{S})\). Then any subgroup of automorphisms which acts transitively on flags of \(\mathcal{I}(\mathcal{S})\) must be of the form \(T.G\), where \(G\) is a subgroup of \(\mathrm{Aut}(\mathcal{S})\) acting transitively on \(\mathcal{S}\). Note that \(\mathrm{Aut}(\mathcal{S})\) acts transitively on \(\mathcal{S}\) if and only if \(\overline{\mathrm{Aut}}(\mathcal{S})\) acts transitively on the induced spread of the projective space, and so for the purposes of studying flag-transitivity, it does not matter whether we consider spreads of a vector space or of the corresponding projective space.
In a series of seminal papers [10, 5, 13, 16], most cases were completely classified.
**Theorem 2.1**.: _In order to classify all linear spaces with a flag-transitive automorphism group \(H\), it remains only to classify the case \(H=TG_{0}\), where \(T\cong(\mathbb{F}_{q^{n}},+)\) is a group of translations and \(G_{0}\leq\Gamma\mathrm{L}(1,q^{n})\)._
For the remaining case of linear spaces with automorphism group contained in \(\mathrm{A}\Gamma\mathrm{L}(1,q^{n})\), full classification remains open. Various constructions were provided by Kantor in [11], leading him to suspect that a full classification may not be feasible. Hence additional restrictions on the linear space and the automorphism group are necessary in order to make headway towards classification; in particular, we seek to classify all \(t\)-spreads in \(V(tm,q)\) possessing a transitive group of automorphisms \(G\) contained in \(\Gamma\mathrm{L}(1,q^{tm})\), regarded as a subgroup of \(\Gamma\mathrm{L}(tm,q)\) in the natural way.
In [15] the authors considered the case of \(t=2\) and \(G\) a cyclic subgroup of \(\mathrm{GL}(1,q^{2m})\simeq\mathbb{F}_{q^{2m}}^{\times}\). In this paper we aim to utilise the techniques developed therein in order to further the constructions and classifications in this case, with particular focus on the case \(m=3\). In this case the associated linear spaces possess \(q^{6}\) points, with each line containing \(q^{2}\) points.
### Transitive \(2\)-spreads
For the remainder of this paper we will work with \(2\)-spreads of \(V(2m,q)\), which one may also view as a line spread in \(\mathrm{PG}(2m-1,q)\). We again identify \(V(2m,q)\) with the elements of \(\mathbb{F}_{q^{2m}}\). We consider \(2\)-spreads whose automorphism group contains the following group \(C\leq\mathrm{GL}(1,q^{2m})\leq\Gamma\mathrm{L}(1,q^{2m})\):
\[C:=\left\{x\mapsto cx:c^{\frac{(q-1)(q^{2m}-1)}{(q^{2}-1)}}=1\right\}.\]
Note that elements of \(\Gamma\mathrm{L}(1,q^{2m})\) are of the form \(x\mapsto ax^{\sigma}\) for some \(\sigma\in\mathrm{Aut}(\mathbb{F}_{q^{2m}})\). Suppose \(\mathcal{S}\) is a \(2\)-spread in \(V(2m,q)\) on which the group \(C\) acts transitively. Then \(\mathcal{S}=\ell^{C}\) for some two-dimensional subspace \(\ell\) of \(V(2m,q)\). Since \(C\) is normal in \(\Gamma\mathrm{L}(1,q^{2m})\), it follows that for any \(\phi\in\Gamma\mathrm{L}(1,q^{2m})\) we have \(\phi(\ell^{C})=\phi(\ell)^{C}\), and so \(\ell^{C}\) and \(\phi(\ell)^{C}\) are equivalent.
It can be shown that \(\ell\) can be mapped by an element of \(\Gamma\mathrm{L}(1,q^{2m})\) to a subspace of the form \(\ell_{\varepsilon}\) for some \(\varepsilon\in\mathbb{F}_{q^{2m}}\), where \(\ell_{\varepsilon}=\langle x-\varepsilon x^{q}:x\in\mathbb{F}_{q^{2}}\rangle\). Thus it suffices to determine when \(\ell_{\varepsilon}^{C}\) is a \(2\)-spread. In [15], these were characterised as follows.
**Theorem 2.2**.: _[_15_, Theorem 1]_ _A \(2\)-spread in \(V(2m,q)\) upon which the group \(C\) acts transitively is equivalent to one of the form \(\ell_{\varepsilon}^{C}\), where \(\varepsilon\) is an element of \(\mathbb{F}_{q^{2m}}\), and_
\[\ell_{\varepsilon}=\langle x-\varepsilon x^{q}:x\in\mathbb{F}_{q^{2}}\rangle.\]
_Moreover if \(P(x)\) is the minimal polynomial of \(\varepsilon\) over \(\mathbb{F}_{q^{2}}\), \(\deg(P)=d\) and \(\varepsilon^{q+1}\neq 1\), then \(\ell_{\varepsilon}^{C}\) is a \(2\)-spread if and only if for all nonzero \(x,y\in\mathbb{F}_{q^{2}}\) it holds that_
\[\left(\frac{x^{d}P(x^{q-1})}{y^{d}P(y^{q-1})}\right)^{m/d}\in\mathbb{F}_{q} \implies\frac{x}{y}\in\mathbb{F}_{q}.\] ( **Condition (1)** )
**Theorem 2.3**.: _[_15_, Proposition 2]_ _Two \(2\)-spreads \(\ell_{\varepsilon}^{C}\) and \(\ell_{\zeta}^{C}\) of \(V(2m,q)\) are equivalent if and only if_
\[\zeta^{\sigma}=\frac{v+u^{q}\varepsilon}{u+v^{q}\varepsilon}\]
_for some \(u,v\in\mathbb{F}_{q^{2}}\) with \(u^{q+1}\neq v^{q+1}\), and some \(\sigma\in\operatorname{Aut}(\mathbb{F}_{q^{2}}:\mathbb{F}_{q})\)._
A straightforward simplification of this theorem gives that \(\ell_{\varepsilon}^{C}\) and \(\ell_{\zeta}^{C}\) are _projectively_ equivalent if and only if \(\zeta=\frac{v+u^{q}\varepsilon}{u+v^{q}\varepsilon}\) for some \(u,v\in\mathbb{F}_{q^{2}}\) with \(u^{q+1}\neq v^{q+1}\); that is, when we require that \(\sigma\) is the identity automorphism.
**Definition 2.4**.: For an irreducible polynomial \(P(x)\) satisfying Condition (1), we will refer to a \(2\)-spread \(\ell_{\varepsilon}^{C}\) defined by a root \(\varepsilon\) of \(P(x)\) as the _\(2\)-spread defined by \(P(x)\)_. If \(P(x)\) and \(Q(x)\) define (projectively) equivalent \(2\)-spreads then we will say that \(P(x)\) and \(Q(x)\) are _(projectively) equivalent_.
Given this definition, the following follows immediately from Theorem 2.3.
**Corollary 2.5**.: _Two irreducible degree \(d\) polynomials \(P(x)\) and \(Q(x)\) satisfying Condition (1) are equivalent if and only if_
\[Q(x)=\lambda(u+v^{q}x)^{d}P^{\sigma}\left(\frac{v+u^{q}x}{u+v^{q}x}\right)\]
_for some \(\lambda,u,v\in\mathbb{F}_{q^{2}}\) with \(\lambda\neq 0,u^{q+1}\neq v^{q+1}\), and some \(\sigma\in\operatorname{Aut}(\mathbb{F}_{q^{2}}:\mathbb{F}_{q})\)._
Again the corresponding statement for projective equivalence can be obtained by omitting the automorphism \(\sigma\).
Note that this equivalence corresponds to equivalence under certain _linear fractional transformations_ (often also called _Möbius transformations_), namely those defined by the group generated by the following subgroup of \(\operatorname{GL}(2,q^{2})\), and field automorphisms.
**Definition 2.6**.: We denote by \(U\) the subgroup of \(\operatorname{GL}(2,q^{2})\) defined as
\[U:=\left\{\phi_{u,v}:=\begin{pmatrix}u^{q}&v\\ v^{q}&u\end{pmatrix}:u,v\in\mathbb{F}_{q^{2}},u^{q+1}\neq v^{q+1}\right\}.\]
Note that \(U\) is isomorphic to \(\operatorname{GL}(2,q)\). In fact, it is equal to the group of invertible _autocirculant matrices_, also known as _Dickson matrices_, in \(\operatorname{GL}(2,q^{2})\).
### Known examples
We briefly summarise the known examples, with particular regard to the case of cubic polynomials, since these will be the main focus of this paper.
In [15] it was shown that the polynomial
\[\operatorname{BP}_{p}(x):=\frac{x^{p+1}-1}{x-1}-2\in\mathbb{F}_{p}[x]\]
is irreducible and satisfies Condition (1). The only cubic polynomial in this family is the polynomial \(x^{3}+x^{2}+x-1\in\mathbb{F}_{3}[x]\). Since in this paper we consider only fields with characteristic greater than three, this example will not appear.
In [11], various examples of transitive \(2\)-spreads were constructed. In [15], it was shown that the only ones amongst these which arise from a \(2\)-spread with a transitive cyclic group of automorphisms are those of _Type 4_, which correspond to binomials, namely polynomials of the form
\[B_{\theta}(x):=x^{n}-\theta,\]
where \(\theta\) is a primitive element of \(\mathbb{F}_{q^{2}}\). We will study the general case of binomials in Section 5. This family contains irreducible cubics satisfying Condition (1) if and only if \(q\equiv 1\mod 3\), since no cubic binomial can be irreducible unless \(q\equiv 1\mod 3\).
In [8], Feng and Lu showed that the polynomials
\[g_{n,\rho}(x)=\frac{(\rho x-1)^{n}-\rho(x-\rho)^{n}}{\rho^{n}-\rho}\in\mathbb{ F}_{q}[x],\]
where \(\rho\in\mathbb{F}_{q^{2}}^{*}\) has order \(q+1\) and \(n=d^{t}u\) for any odd divisor \(d>1\) of \(q+1\), any proper divisor \(u\) of \(d\) and any \(t\in\mathbb{N}^{+}\), have degree \(n\), are irreducible in \(\mathbb{F}_{q^{2}}[x]\), and satisfy Condition (1). For the case \(n=3\), we must have \(d=3\) and \(t=u=1\), and so \(q\equiv 2\mod 3\). Hence the cubics in this family are those of the form
\[g_{3,\rho}(x)=x^{3}-3x+(\rho+\rho^{q}),\]
where \(\rho\) has order \(q+1\).
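Since Condition (1) only quantifies over the finitely many pairs of nonzero elements of \(\mathbb{F}_{q^{2}}\), it can be checked by brute force for small \(q\). The following minimal sketch is our own illustration (the model \(\mathbb{F}_{25}=\mathbb{F}_{5}[s]/(s^{2}-2)\) is an arbitrary choice): it tests the \(q=5\) member \(x^{3}-3x+1\) of the family above (for which \(\rho+\rho^{q}=1\)) against the binomial \(x^{3}-s\), which must fail since \(\gcd(3,q+1)=3\) (see Section 5).

```python
# A minimal brute-force check of Condition (1) for q = 5 (so d = m = 3).
# F_25 is modelled as F_5[s]/(s^2 - 2); an element a0 + a1*s is the pair (a0, a1).
q = 5
NSQ = 2  # 2 is a nonsquare mod 5, so s^2 = 2 defines F_25

def add(u, v):
    return ((u[0] + v[0]) % q, (u[1] + v[1]) % q)

def mul(u, v):
    # (a0 + a1 s)(b0 + b1 s) = a0 b0 + NSQ a1 b1 + (a0 b1 + a1 b0) s
    return ((u[0] * v[0] + NSQ * u[1] * v[1]) % q,
            (u[0] * v[1] + u[1] * v[0]) % q)

def power(u, n):
    r = (1, 0)
    while n:
        if n & 1:
            r = mul(r, u)
        u = mul(u, u)
        n >>= 1
    return r

def poly_eval(coeffs, x):
    # coeffs = [c_0, c_1, ..., c_m]; Horner's rule
    r = (0, 0)
    for c in reversed(coeffs):
        r = add(mul(r, x), c)
    return r

def condition_one(coeffs, m):
    # For all nonzero x, y: x^m P(x^{q-1}) / (y^m P(y^{q-1})) in F_q => x/y in F_q.
    # We test "a/b in F_q" via a^q b == a b^q, which avoids inversions.
    nonzero = [(a, b) for a in range(q) for b in range(q) if (a, b) != (0, 0)]
    for x in nonzero:
        for y in nonzero:
            u = mul(power(x, m), poly_eval(coeffs, power(x, q - 1)))
            v = mul(power(y, m), poly_eval(coeffs, power(y, q - 1)))
            if mul(power(u, q), v) == mul(u, power(v, q)) and \
               mul(power(x, q), y) != mul(x, power(y, q)):
                return False
    return True

# Feng-Lu cubic g_{3,rho}(x) = x^3 - 3x + 1 (rho of order q + 1 = 6):
print(condition_one([(1, 0), ((-3) % q, 0), (0, 0), (1, 0)], 3))  # expected: True
# The binomial x^3 - s fails, since gcd(3, q + 1) = 3 (cf. Section 5):
print(condition_one([(0, (-1) % q), (0, 0), (0, 0), (1, 0)], 3))  # expected: False
```

The helpers defined here are reused in the later sketches below.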
## 3 A curve formulation
We now show an equivalence between Condition (1) and properties of a curve \(H_{P}\) related to \(P(x)\). We introduce some notation which will be of use throughout.
**Definition 3.1**.: Given a polynomial \(P(x)=\sum_{i=0}^{m}a_{i}x^{i}\in\mathbb{F}_{q^{2}}[x]\), we define
\[\tilde{P}(x) :=\sum_{i=0}^{m}a_{m-i}^{q}x^{i}\] \[G_{P}(z,w) :=P(z)\tilde{P}(w)-\tilde{P}(z)P(w),\] \[H_{P}(z,w) :=\frac{P(z)\tilde{P}(w)-\tilde{P}(z)P(w)}{z-w}.\]
We will be concerned with zeroes of these polynomials of a certain form. We introduce the following set for convenience:
\[Z:=\{(z,w)\in\mathbb{F}_{q^{2}}^{2}:z^{q+1}=w^{q+1}=1,z\neq w\}.\]
**Lemma 3.2**.: _An irreducible polynomial \(P(x)\in\mathbb{F}_{q^{2}}[x]\) of degree \(d=m\) satisfies Condition (1) if and only if \(G_{P}\) has no zeroes in \(Z\)._
Proof.: First we note that for any nonzero elements \(a,b\in\overline{\mathbb{F}_{q}}\), we have that \(a/b\in\mathbb{F}_{q}\) if and only if \(ab^{q}-a^{q}b=0\), if and only if \(a^{q-1}=b^{q-1}\). Applying this to the expressions from Theorem 2.2 we get that
\[\frac{x^{m}P(x^{q-1})}{y^{m}P(y^{q-1})}\in\mathbb{F}_{q}\Leftrightarrow x^{ mq}P(x^{q-1})^{q}y^{m}P(y^{q-1})=x^{m}P(x^{q-1})y^{mq}P(y^{q-1})^{q}\]
for all nonzero \(x,y\in\mathbb{F}_{q^{2}}\). Now we define \(z=x^{q-1},w=y^{q-1}\), and divide both sides by \((xy)^{m}\) to get
\[\frac{x^{m}P(x^{q-1})}{y^{m}P(y^{q-1})}\in\mathbb{F}_{q}\Leftrightarrow z^{m }P(z)^{q}P(w)=P(z)w^{m}P(w)^{q}.\]
Now observe that \(z^{m}P(z)^{q}=\tilde{P}(z)\) and \(w^{m}P(w)^{q}=\tilde{P}(w)\) whenever \(z^{q+1}=w^{q+1}=1\). Moreover \(x/y\in\mathbb{F}_{q}\) if and only if \(z=w\), and \(z\) is a \((q-1)\)-st power of a nonzero element of \(\mathbb{F}_{q^{2}}\) if and only if \(z^{q+1}=1\). Thus Condition (1) is equivalent to the claim.
As \(G_{P}(z,w)\) is clearly divisible by \(z-w\), and as dividing by \(z-w\) does not affect the existence of zeroes in \(Z\) (where \(z\neq w\)), the following result in terms of \(H_{P}(z,w)\) follows immediately.
**Lemma 3.3**.: _An irreducible polynomial \(P(x)\in\mathbb{F}_{q^{2}}[x]\) of degree \(d=m\) satisfies Condition (1) if and only if \(H_{P}\) has no zeroes in \(Z\)._
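As a sanity check of this reformulation, the sketch below (reusing the \(\mathbb{F}_{25}\) helpers from the end of Section 2.5) scans \(Z\) directly for zeroes of \(G_{P}\); in that model \(s^{q}=-s\), so the Frobenius is \((e_{0},e_{1})\mapsto(e_{0},-e_{1})\).

```python
# Scan Z for zeroes of G_P (Lemma 3.2), reusing add/mul/power/poly_eval and q
# from the sketch in Section 2.5.
def conj(e):  # Frobenius e -> e^q; here s^q = -s
    return (e[0], (-e[1]) % q)

def tilde(coeffs):  # coefficients of P~ from Definition 3.1
    return [conj(c) for c in reversed(coeffs)]

def has_zero_in_Z(coeffs):
    tp = tilde(coeffs)
    mu6 = [(a, b) for a in range(q) for b in range(q)
           if (a, b) != (0, 0) and power((a, b), q + 1) == (1, 0)]
    for z in mu6:
        for w in mu6:
            if z != w and add(mul(poly_eval(coeffs, z), poly_eval(tp, w)),
                              mul(((-1) % q, 0),
                                  mul(poly_eval(tp, z), poly_eval(coeffs, w)))) == (0, 0):
                return True
    return False

print(has_zero_in_Z([(1, 0), ((-3) % q, 0), (0, 0), (1, 0)]))   # expected: False
print(has_zero_in_Z([(0, (-1) % q), (0, 0), (0, 0), (1, 0)]))   # expected: True
```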
### Two connections to permutation polynomials
A polynomial \(f(x)\in\mathbb{F}_{q}[x]\) is called a _permutation polynomial_ of \(\mathbb{F}_{q}\) if the map \(x\mapsto f(x)\) is a permutation of \(\mathbb{F}_{q}\). In [8], the following connection between certain permutation polynomials and polynomials satisfying Condition (1) was shown.
**Lemma 3.4**.: _[_8_]_ _Suppose \(P(x)\) is a polynomial of degree \(d\), where \(\gcd(d,q-1)=1\). Then \(x^{d}P(x^{q-1})\) is a permutation polynomial of \(\mathbb{F}_{q^{2}}\) if and only if \(P(x)\) satisfies Condition (1)._
Note however that this correspondence is only valid when \(\gcd(d,q-1)=1\); when \(\gcd(d,q-1)>1\), a polynomial of the form \(x^{d}P(x^{q-1})\) can never be a permutation polynomial, whereas there do exist polynomials satisfying Condition (1) in this case.
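For the running \(q=5\) example we have \(\gcd(d,q-1)=\gcd(3,4)=1\), so Lemma 3.4 applies; the following sketch (again reusing the helpers above) confirms the permutation property directly.

```python
# x -> x^d P(x^{q-1}) should permute F_25 exactly when P satisfies Condition (1).
def is_permutation(coeffs, d):
    field = [(a, b) for a in range(q) for b in range(q)]
    image = {mul(power(x, d), poly_eval(coeffs, power(x, q - 1))) for x in field}
    return len(image) == len(field)

print(is_permutation([(1, 0), ((-3) % q, 0), (0, 0), (1, 0)], 3))  # expected: True
```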
In [3], permutation polynomials of \(\mathbb{F}_{q^{2}}\) of the form
\[f_{a,b}(X)=X(1+aX^{q(q-1)}+bX^{2(q-1)})\in\mathbb{F}_{q^{2}}[X],\]
where \(a,b\in\mathbb{F}_{q^{2}}^{*}\), were completely characterised for finite fields of characteristic greater than \(3\). To attain their results, the authors consider the _algebraic plane curve_ \(\mathcal{C}_{a,b}\) with affine equation
\[F_{a,b}(X,Y)=\frac{(a^{q}X^{3}+X^{2}+b^{q})(bY^{3}+Y+a)-(a^{q}Y^{3}+Y^{2}+b^{q} )(bX^{3}+X+a)}{X-Y}=0.\]
It was shown that \(f_{a,b}\) is a permutation polynomial of \(\mathbb{F}_{q^{2}}\) if and only if there is no point in \(Z\) on \(\mathcal{C}_{a,b}\). We observe that
\[F_{a,b}(X,Y)=-b^{q+1}H_{P}(X,Y)\]
where \(P(x)=x^{3}+b^{-1}x+ab^{-1}\). Hence we have the following.
**Lemma 3.5**.: _Let \(P(x)=x^{3}+b^{-1}x+ab^{-1}\) for \(a,b\in\mathbb{F}_{q^{2}}\), \(b\neq 0\). Then \(f_{a,b}(x)\) is a permutation polynomial of \(\mathbb{F}_{q^{2}}\) if and only if \(P(x)\) satisfies Condition (1)._
Note however that it is not necessary for \(P(x)\) to be irreducible in order for \(f_{a,b}(X)\) to be a permutation polynomial, whereas it is required in order for \(P(x)\) to define a cyclic spread.
From the results of [3], we get a full characterisation of the cubics satisfying Condition (1) whose coefficient of \(x^{2}\) is zero. However, we cannot simply assume this, since not every cubic polynomial is equivalent under \(U\) to one with this property. Hence this result is not sufficient to characterise all cubics satisfying Condition (1). Furthermore, [3] does not consider any question of equivalence, and indeed the notion of equivalence of cubic polynomials does not directly correspond to an equivalence amongst permutation polynomials of the form \(f_{a,b}(x)\).
### Determining the reducibility of \(H_{P}\)
In [3], the authors show that for \(q\) sufficiently large, if the curve \(\mathcal{C}_{a,b}\) is absolutely irreducible then it must have points in \(Z\). This was achieved by an application of the Aubry-Perret bound [1]. We will follow this method to generalise the result to the larger family of curves \(\mathcal{H}_{P}\) with affine equation \(H_{P}(X,Y)=0\) for arbitrary degree.
**Lemma 3.6**.: _Let \(P(x)\in\mathbb{F}_{q^{2}}[x]\) have degree \(m\) and let \(q\) be sufficiently large with respect to \(m\). If the polynomial \(H_{P}(z,w)\) is absolutely irreducible and not identically zero, then it has zeroes in \(Z\) and hence \(P\) does not satisfy Condition (1)._
Proof.: First let \(e\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\) such that \(e^{q}=-e\), and define two transformations as in [3] by
\[\psi(X,Y)=\left(\frac{X+e}{X-e},\frac{Y+e}{Y-e}\right)\]
and
\[\phi(X,Y)=\left(e\frac{X+1}{X-1},e\frac{Y+1}{Y-1}\right).\]
Then the curve \(\mathcal{H}_{P}^{*}\) defined by \(K_{P}(X,Y)=(X-e)^{m-1}(Y-e)^{m-1}H_{P}(\psi(X,Y))\) and the curve \(\mathcal{H}_{P}\) are \(\mathbb{F}_{q^{2}}\)-isomorphic since \((X-1)^{m-1}(Y-1)^{m-1}K_{P}(\phi(X,Y))=(2e)^{2(m-1)}H_{P}(X,Y)\). Note that \(K_{P}(X,Y)\in\mathbb{F}_{q}[X,Y]\).
Let \(\partial\) denote the degree of \(K_{P}(X,Y)\) and \(D\) the number of ideal points (i.e. points at infinity) of \(\mathcal{H}_{P}^{*}\). By the Aubry-Perret bound [1, Corollary 2.5], the curve has affine \(\mathbb{F}_{q}\)-rational points \((x,y)\) with \(x\neq y\) provided
\[q+1-(\partial-1)(\partial-2)\sqrt{q}-\partial-D>0\] \[\Longleftrightarrow q>\frac{\left((\partial-1)(\partial-2)+\sqrt{\partial^{4}-6\partial^{3}+13\partial^{2}-8\partial+4D}\right)^{2}}{4}.\tag{$\dagger$}\]
Since \(D\leq\partial\leq 2(m-1)\), \(\mathcal{H}_{P}^{*}\) will have affine \(\mathbb{F}_{q}\)-rational points \((x,y)\) with \(x\neq y\) if
\[q>\left((m-2)(2m-3)+\sqrt{(m-1)(4m^{3}-24m^{2}+49m-31)}\right)^{2}.\]
Thus for such \(q\), there exists a point \(\left(\frac{x+e}{x-e},\frac{y+e}{y-e}\right)\in Z\) that lies on \(\mathcal{H}_{P}\). Therefore there are no degree \(m\) polynomials \(P\) satisfying Condition (1) for which \(H_{P}\) is absolutely irreducible when \(q\) satisfies the above inequality.
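(For example, for cubics, \(m=3\), this threshold reads \(q>\left(1\cdot 3+\sqrt{2\cdot 8}\right)^{2}=49\); the sharper constants obtained in Section 4 come from computing \(\partial\) and \(D\) exactly rather than using the worst-case estimates above.)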
Note that while Lemma 3.2 of [1] may appear to be more directly relevant to the curves considered here, we use instead Corollary 2.5 due to the fact that we will later have more information on the number \(D\), leading to better bounds.
### Preliminary restrictions on the factorisation of \(H_{P}\)
Our strategy for the remainder of the paper will be to consider the possible factorisations of \(H_{P}\). We begin by ruling out certain factors.
**Lemma 3.7**.: _Let \(P(x)\in\mathbb{F}_{q^{2}}[x]\). Then \(P(x)\) and \(\tilde{P}(x)\) each divide both \(G_{P}(x^{q^{2}},x)\) and \(H_{P}(x^{q^{2}},x)\)._
Proof.: We directly calculate that
\[G_{P}(x^{q^{2}},x) =P(x^{q^{2}})\tilde{P}(x)-\tilde{P}(x^{q^{2}})P(x)\] \[=P(x)^{q^{2}}\tilde{P}(x)-\tilde{P}(x)^{q^{2}}P(x)\] \[=P(x)\tilde{P}(x)[P(x)^{q^{2}-1}-\tilde{P}(x)^{q^{2}-1}],\]
proving the first claim.
Now \(P(x)\) and \(\tilde{P}(x)\) divide \(G_{P}(x^{q^{2}},x)=(x^{q^{2}}-x)H_{P}(x^{q^{2}},x)\), but neither divides \(x^{q^{2}}-x\) (as otherwise a root \(\varepsilon\) of either polynomial would satisfy \(\varepsilon^{q^{2}}=\varepsilon\)), so they must both divide \(H_{P}(x^{q^{2}},x)\).
**Lemma 3.8**.: _Let \(P(x)\in\mathbb{F}_{q^{2}}[x]\) be an irreducible polynomial of degree \(m\). Then \(H_{P}(z,w)\) cannot factorize as_
\[\prod_{i=1}^{2(m-1)}\left(c_{i}zw+a_{i}(z+w)+d_{i}\right)\]
_for any \(a_{i},c_{i},d_{i}\in\overline{\mathbb{F}_{q}}\)._
Proof.: Suppose that \(H_{P}(z,w)\) factorizes as
\[\prod_{i=1}^{2(m-1)}\left(c_{i}zw+a_{i}(z+w)+d_{i}\right)\]
for some \(a_{i},c_{i},d_{i}\in\overline{\mathbb{F}_{q}}\) and let \(\{\varepsilon^{q^{2i}}:1\leq i\leq m\}\) be the roots of \(P\). Since \(P(x)\) divides \(H_{P}(x^{q^{2}},x)\), it must divide \(cx^{q^{2}+1}+a(x^{q^{2}}+x)+d\) for some \(a,c,d\in\overline{\mathbb{F}_{q}}\). Thus
\[c\left(\varepsilon^{q^{2(m-1)}}\right)^{q^{2}+1}+a\left(\left( \varepsilon^{q^{2(m-1)}}\right)^{q^{2}}+\varepsilon^{q^{2(m-1)}}\right)+d=0\] \[\Longleftrightarrow c\left(\varepsilon^{q^{2(m-1)}+1}\right)+a \left(\varepsilon+\varepsilon^{q^{2(m-1)}}\right)+d=0\] \[\Longleftrightarrow c\left(\varepsilon^{q^{2(m-1)}+1}\right)+a \varepsilon^{q^{2(m-1)}}-\left(c\varepsilon^{q^{2}+1}+a\varepsilon^{q^{2}} \right)=0\] \[\Longleftrightarrow(\varepsilon^{q^{2(m-1)}}-\varepsilon^{q^{2} })(c\varepsilon+a)=0.\]
If \(\varepsilon^{q^{2}}=\varepsilon^{q^{2(m-1)}}\), then \(\varepsilon=\varepsilon^{q^{2(m-2)}}\) which cannot occur because the smallest field containing \(\varepsilon\) is \(\mathbb{F}_{q^{2m}}\), so \(a=-c\varepsilon\). Then
\[c\varepsilon^{q^{2}+1}+a(\varepsilon^{q^{2}}+\varepsilon)+d=0\iff d=c \varepsilon^{2}.\]
Hence \(P(x)\) divides
\[cx^{q^{2}+1}-c\varepsilon(x^{q^{2}}+x)+c\varepsilon^{2}=c(x^{q^{2}}- \varepsilon)(x-\varepsilon).\]
Since \(P(x)\) cannot divide the linear factor, it must divide \(x^{q^{2}}-\varepsilon\), which gives \(\varepsilon=\varepsilon^{q^{2}}\). This contradiction means that \(H_{P}(z,w)\) cannot factorize in this way.
## 4 Cubic polynomials
We now focus on the case \(m=3\), studying irreducible cubics in \(\mathbb{F}_{q^{2}}[x]\) satisfying Condition (1), and hence cyclic \(2\)-spreads in \(V(6,q)\).
When \(m=3\), we have that
\[-H_{P}(z,w) =(\theta^{q}\delta+\gamma^{q})z^{2}w^{2}+(\theta^{q}\gamma+ \delta^{q})(z^{2}w+zw^{2})+(\theta^{q+1}-1)(z^{2}+zw+w^{2})\] \[\quad+(\gamma^{q+1}-\delta^{q+1})zw+(\theta\gamma^{q}+\delta)(z+w )+(\theta\delta^{q}+\gamma)\]
for \(P(x)=x^{3}-\delta x^{2}-\gamma x-\theta\in\mathbb{F}_{q^{2}}[x]\).
### Proving the reducibility of \(H_{P}\)
In [3] it was shown via Lemma 3.5 that when \(\delta=0\), \(P(x)\) can satisfy Condition (1) only if \(H_{P}(z,w)\) is reducible. We use an identical approach to cover also the case when \(\delta\neq 0\).
**Lemma 4.1**.: _Let \(P(x)=x^{3}-\delta x^{2}-\gamma x-\theta\in\mathbb{F}_{q^{2}}[x]\). If \(H_{P}\) is absolutely irreducible, then \(P(x)\) does not satisfy Condition (1)._
Proof.: First suppose that \(\theta^{q}\delta+\gamma^{q}\neq 0\), which ensures that \(H_{P}(z,w)\) has degree four. We homogenise \(H_{P}(z,w)\) to obtain the polynomial \(\overline{H_{P}}(Z,W,X)\), obtaining \(\overline{H_{P}}(Z,W,0)=-(\theta^{q}\delta+\gamma^{q})Z^{2}W^{2}\). Hence \(\mathcal{H}_{P}\) has precisely two ideal points. Applying inequality \((\dagger)\) from the proof of Lemma 3.6 with \(\partial=4\) and \(D=2\) yields that there are no cubic polynomials \(P\) satisfying Condition (1) for which \(H_{P}\) is absolutely irreducible when \(q\geq 47\).
Finally suppose that \(\theta^{q}\delta+\gamma^{q}=0\), in which case we have \(H_{P}(z,w)=(\theta^{q+1}-1)(\delta^{q}zw(z+w)-(z^{2}+zw+w^{2})-\delta^{q+1}zw+ \delta(z+w))\). If \(\delta=0\), then \(H_{P}=(1-\theta^{q+1})(z^{2}+zw+w^{2})\), which is either identically zero or reducible. If \(\delta\neq 0\), then \(H_{P}\) has degree 3, and homogenising we obtain \(\overline{H_{P}}(Z,W,0)=(\theta^{q+1}-1)\delta^{q}ZW(Z+W)\), and so there are three ideal points. Using again inequality \((\dagger)\) with \(\partial=3\) and \(D=3\) yields that there are no cubic polynomials \(P\) satisfying Condition (1) for which \(H_{P}\) is absolutely irreducible when \(q\geq 13\).
For values of \(q<47\), an exhaustive Magma search confirms that \(H_{P}\) is reducible for any cubic \(P\) satisfying Condition (1).
We now examine the case in which \(H_{P}\) is reducible, and study the possible factorizations of \(H_{P}\).
### Further restrictions on the factorisation of \(H_{P}\)
**Lemma 4.2**.: _Suppose \(H_{P}(z,w)\) is reducible over \(\overline{\mathbb{F}_{q}}\). Then \(H_{P}(z,w)\) is reducible over \(\mathbb{F}_{q^{2}}\), and \(H_{P}(z,w)=\mu(czw+az+bw+d)(czw+bz+aw+d)\) for some \(a,b,c,d,\mu\in\mathbb{F}_{q^{2}}\), where \(a\neq b\)._
Proof.: Since \(H_{P}(z,w)\) has degree at most 4, has degree at most 2 in \(z\) and in \(w\), and is symmetric in \(z\) and \(w\), we must have that either
\[H_{P}(z,w)=\mu(cz^{2}+az+d)(cw^{2}+aw+d)\] (A)
or
\[H_{P}(z,w)=\mu(czw+az+bw+d)(czw+bz+aw+d)\] (B)
or
\[H_{P}(z,w)=(czw+a(z+w)+d)(c^{\prime}zw+b(z+w)+d^{\prime})\] (C)
for some \(a,b,c,c^{\prime},d,d^{\prime},\mu\in\overline{\mathbb{F}_{q}}\).
By Lemma 3.8, case (C) cannot occur, and \(a\neq b\) in case (B). Since the coefficients of \(H_{P}\) lie in \(\mathbb{F}_{q^{2}}\), applying the Frobenius map \(x\mapsto x^{q^{2}}\) to the coefficients of the irreducible factors of \(H_{P}\) must permute these factors up to scalar multiples. In case (A), we can assume without loss of generality that \(a,c,d\in\mathbb{F}_{q^{2}}\). In case (B) we can assume without loss of generality that \(c,d\in\mathbb{F}_{q^{2}}\), and either \(a,b\in\mathbb{F}_{q^{2}}\) or \(a,b\in\mathbb{F}_{q^{4}}\) with \(a^{q^{2}}=b\).
If \(H_{P}(z,w)\) factorizes as in (A), then by Lemma 3.7,
\[P(x)\mid H_{P}(x^{q^{2}},x) =\mu(cx^{2q^{2}}+ax^{q^{2}}+d)(cx^{2}+ax+d)\] \[=\mu(cx^{2}+ax+d)^{q^{2}+1}.\]
As \(P(x)\) is irreducible, it must divide \(cx^{2}+ax+d\). But the degree of \(P(x)\) is 3, so case (A) cannot occur.
Thus \(H_{P}(z,w)\) must factorize as in (B). If \(a,b\not\in\mathbb{F}_{q^{2}}\) then \(b=a^{q^{2}}\) and
\[P(x)\mid H_{P}(x^{q^{2}},x)=\mu(cx^{q^{2}+1}+ax^{q^{2}}+a^{q^{2}}x+d)(cx^{q^{2 }+1}+a^{q^{2}}x^{q^{2}}+ax+d).\]
Let \(\varepsilon\) be a root of \(P\). Then either
\[c\varepsilon^{q^{2}+1}+(a\varepsilon)^{q^{2}}+a\varepsilon+d=0\]
or
\[c\varepsilon^{q^{2}+1}+(b\varepsilon)^{q^{2}}+b\varepsilon+d=0.\]
We can assume without loss of generality that the first equation holds. Then raising both sides to the power of \(q^{2}\) yields
\[c\varepsilon^{q^{4}+q^{2}}+a\varepsilon^{q^{4}}+(a\varepsilon)^{q^ {2}}+d =0\] \[\iff c\varepsilon^{q^{4}+q^{2}}+a\varepsilon^{q^{4}}+(a\varepsilon)^{q^ {2}}-(c\varepsilon^{q^{2}+1}+(a\varepsilon)^{q^{2}}+a\varepsilon) =0\] \[\iff(\varepsilon^{q^{4}}-\varepsilon)(c\varepsilon^{q^{2}}+a) =0.\]
The first factor cannot equal zero since \(\mathbb{F}_{q^{2}}(\varepsilon)=\mathbb{F}_{q^{6}}\). Hence \(c\varepsilon^{q^{2}}=-a\). If \(c=0\) then \(a=0\), so \(d=0\) and \(H_{P}\equiv 0\). Thus \(\varepsilon^{q^{2}}=-ac^{-1}\in\mathbb{F}_{q^{4}}\), which cannot occur since it is also a root of \(P\). Hence \(a,b\in\mathbb{F}_{q^{2}}\).
The following technical lemma will be of use in the subsequent theorem.
**Lemma 4.3**.: _Suppose \(f(x)=ex^{2}+\lambda x+e^{q}\) for some \(0\neq e\in\mathbb{F}_{q^{2}},\lambda\in\mathbb{F}_{q}\). Then \(f(x)\) has a root \(w\) such that \(w^{q+1}=1\) if and only if its discriminant \(\lambda^{2}-4e^{q+1}\) is either \(0\) or a nonsquare in \(\mathbb{F}_{q}\)._
Proof.: Let \(w\) be a root of \(f\). Then \(w=\frac{-\lambda\pm\sqrt{\lambda^{2}-4e^{q+1}}}{2e}\in\mathbb{F}_{q^{2}}\). Let \(\Delta=\lambda^{2}-4e^{q+1}\), which is in \(\mathbb{F}_{q}\).
Suppose \(\Delta\) is a square in \(\mathbb{F}_{q}\). Then \((\sqrt{\Delta})^{q}=\sqrt{\Delta}\), and so
\[w^{q+1} =\left(\frac{-\lambda\pm\sqrt{\Delta}}{2e}\right)\left(\frac{- \lambda\pm\sqrt{\Delta}}{2e^{q}}\right)\] \[=\frac{\lambda^{2}+\Delta\mp 2\lambda\sqrt{\Delta}}{4e^{q+1}}\]
Then \(w^{q+1}=1\) if and only if \(\lambda^{2}+\Delta\mp 2\lambda\sqrt{\Delta}=4e^{q+1}\), if and only if \(2\Delta=\pm 2\lambda\sqrt{\Delta}\), if and only if \(\Delta=0\) or \(\lambda=\pm\sqrt{\Delta}\). But if \(\lambda=\pm\sqrt{\Delta}\) then \(e=0\), and so \(w^{q+1}=1\) if and only if \(\Delta=0\).
Suppose now that \(\Delta\) is not a square in \(\mathbb{F}_{q}\). Then \((\sqrt{\Delta})^{q}=-\sqrt{\Delta}\), and so
\[w^{q+1} =\left(\frac{-\lambda\pm\sqrt{\Delta}}{2e}\right)\left(\frac{- \lambda\mp\sqrt{\Delta}}{2e^{q}}\right)\] \[=\frac{\lambda^{2}-\Delta}{4e^{q+1}}\] \[=1,\]
completing the proof.
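Lemma 4.3 can also be confirmed exhaustively for small \(q\); the following sketch does so for \(q=5\), reusing the \(\mathbb{F}_{25}\) helpers (including `conj`) from the earlier sketches.

```python
# Exhaustive check of Lemma 4.3 for q = 5: f(x) = e x^2 + lam x + e^q has a
# root w with w^{q+1} = 1 iff lam^2 - 4 e^{q+1} is zero or a nonsquare in F_5.
squares = {(i * i) % q for i in range(1, q)}  # nonzero squares mod 5
mu6 = [(a, b) for a in range(q) for b in range(q)
       if (a, b) != (0, 0) and power((a, b), q + 1) == (1, 0)]
ok = True
for e in [(a, b) for a in range(q) for b in range(q) if (a, b) != (0, 0)]:
    for lam in range(q):
        has_root = any(add(add(mul(e, mul(w, w)), mul((lam, 0), w)), conj(e)) == (0, 0)
                       for w in mu6)
        disc = (lam * lam - 4 * mul(e, conj(e))[0]) % q  # e^{q+1} lies in F_5
        ok = ok and (has_root == (disc == 0 or disc not in squares))
print(ok)  # expected: True
```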
By Lemma 4.2, we know the possible factorizations of \(H_{P}\). We now find further restrictions on the possible values of \(a,b,c,d\). Note that the roles of \(a\) and \(b\) are interchangeable, and so whenever we encounter a condition that must be satisfied by either \(a\) or \(b\), we can assume without loss of generality that it is satisfied by \(a\).
**Lemma 4.4**.: _Suppose \(H_{P}(z,w)=(czw+az+bw+d)(czw+bz+aw+d)\) for some \(a,b,c,d\in\mathbb{F}_{q^{2}}\), \(a\neq b\). If \(ab=cd\), then \(P(x)\) is reducible._
Proof.: First suppose \(d\neq 0\). By Lemma 3.7, \(P(x)\) divides
\[G_{P}(x^{q^{2}},x)=(x^{q^{2}}-x)H_{P}(x^{q^{2}},x) =\prod_{\lambda\in\mathbb{F}_{q^{2}}}(x-\lambda)(cx^{q^{2}+1}+ax^{ q^{2}}+bx+d)(cx^{q^{2}+1}+bx^{q^{2}}+ax+d)\] \[=\prod_{\lambda\in\mathbb{F}_{q^{2}}}(x-\lambda)(x+ac^{-1})(x+bc^{ -1})((cx+b)(cx+a))^{q^{2}}.\]
Since \(P(x)\) divides a product of linear factors with coefficients in \(\mathbb{F}_{q^{2}}\), it must be reducible. If \(d=0\), then either \(a=0\) or \(b=0\). Suppose without loss of generality that \(a=0\). Then \(P(x)\) divides
\[G_{P}(x^{q^{2}},x)=(x^{q^{2}}-x)H_{P}(x^{q^{2}},x) =\prod_{\lambda\in\mathbb{F}_{q^{2}}}(x-\lambda)(cx^{q^{2}+1}+bx)( cx^{q^{2}+1}+bx^{q^{2}})\] \[=\prod_{\lambda\in\mathbb{F}_{q^{2}}}(x-\lambda)(x(cx+b))^{q^{2}+ 1},\]
so \(P(x)\) is again reducible.
Hence when considering divisors of \(H_{P}\), we can assume that \(ab\neq cd\). We now find further conditions on the divisors of \(H_{P}\) if \(P\) satisfies Condition (1).
**Theorem 4.5**.: _Let \(H_{\Psi}(z,w)=czw+az+bw+d\), where \(a,b,c,d\in\mathbb{F}_{q^{2}}\), \(ab\neq cd\). Then there exist \(z,w\in\mathbb{F}_{q^{2}}\) such that \(H_{\Psi}(z,w)=0\), \(w\neq z\), and \(z^{q+1}=w^{q+1}=1\) if and only if_
\[\Delta=(a^{q+1}-b^{q+1}+c^{q+1}-d^{q+1})^{2}-4(bd^{q}-a^{q}c)^{q+1},\]
_is zero or a nonsquare in \(\mathbb{F}_{q}\), and the quadratic \((bd^{q}-a^{q}c)x^{2}+(d^{q+1}+b^{q+1}-c^{q+1}-a^{q+1})x+(b^{q}d-ac^{q})\) possesses a root which is not a root of \(cx^{2}+(a+b)x+d\)._
Proof.: Let \(z,w\in\mathbb{F}_{q^{2}}\) be such that \(H_{\Psi}(z,w)=0\) and \(z^{q+1}=w^{q+1}=1\). Then either \(cw+a=bw+d=0\), or \(z=-\left(\frac{bw+d}{cw+a}\right)\). In the first case we have \(ab=-bcw=cd\), so by Lemma 4.4, \(P(x)\) is reducible and does not satisfy Condition (1).
Next we suppose that \(cw+a\neq 0\) and \(z=-\left(\frac{bw+d}{cw+a}\right)\). Raising both sides to the power \(q+1\), imposing \(z^{q+1}=w^{q+1}=1\) and rearranging, we get that
\[(bd^{q}-a^{q}c)w^{2}+(d^{q+1}+b^{q+1}-c^{q+1}-a^{q+1})w+(b^{q}d-ac^{q})=0. \tag{1}\]
If \(bd^{q}-a^{q}c\neq 0\), then this is a quadratic equation in \(w\) with coefficients in \(\mathbb{F}_{q^{2}}\) satisfying the conditions of Lemma 4.3. The discriminant of the quadratic is
\[\Delta=(a^{q+1}-b^{q+1}+c^{q+1}-d^{q+1})^{2}-4(bd^{q}-a^{q}c)^{q+1},\]
and so from Lemma 4.3 we have that \(w^{q+1}=1\) if and only if \(\Delta\) is either zero or a nonsquare in \(\mathbb{F}_{q}\).
Now \(z=w\) if and only if \(w=-\left(\frac{bw+d}{cw+a}\right)\), if and only if \(cw^{2}+(a+b)w+d=0\). Thus we have a solution with \(z\neq w\) if and only if not every solution of equation (1) is also a solution of \(cw^{2}+(a+b)w+d=0\).
We summarise the results of this section with the following statement.
**Corollary 4.6**.: _Let \(P(x)\) be an irreducible cubic in \(\mathbb{F}_{q^{2}}[x]\). Suppose \(H_{P}(z,w)\) is reducible, with \(H_{P}(z,w)=(czw+az+bw+d)(czw+bz+aw+d)=0\) for some \(a,b,c,d\in\mathbb{F}_{q^{2}}\), and let \(\Delta\) be as in Theorem 4.5. Then \(P(x)\) satisfies Condition (1) if and only if one of the following occur:_
* \(\Delta\) _is a nonzero square in_ \(\mathbb{F}_{q}\)_;_
* \(\Delta\) _is a nonsquare in_ \(\mathbb{F}_{q}\) _and the quadratic polynomials_ \((bd^{q}-a^{q}c)x^{2}+(d^{q+1}+b^{q+1}-c^{q+1}-a^{q+1})x+(b^{q}d-ac^{q})\) _and_ \(cx^{2}+(a+b)x+d\) _are nonzero scalar multiples of each other;_
* \(\Delta=0\)_,_ \(bd^{q}-a^{q}c\neq 0\)_, and the unique root of_ \((bd^{q}-a^{q}c)x^{2}+(d^{q+1}+b^{q+1}-c^{q+1}-a^{q+1})x+(b^{q}d-ac^{q})\) _is a root of_ \(cx^{2}+(a+b)x+d\)_._
## 5 Binomials
In this section, we determine exact conditions for when a binomial satisfies Condition (1). Note that we will start in the case of a binomial of arbitrary degree, before stating the consequences for cubics.
**Lemma 5.1**.: _Let \(P(x)=x^{m}-\theta\in\mathbb{F}_{q^{2}}[x]\), where \(m>2\) is an integer. Then \(H_{P}(z,w)\) is not identically zero and reducible if and only if \(\theta^{q+1}\neq 1\). Furthermore, \(P(x)\) satisfies Condition (1) if and only if \(\gcd(m,q+1)=1\)._
Proof.: We calculate that
\[G_{P}(z,w)=(\theta^{q+1}-1)(w^{m}-z^{m}).\]
Hence \(G_{P}\) has a zero in \(Z\) if and only if there exists \((z,w)\in\mathbb{F}_{q^{2}}^{2}\) with \(z^{m}=w^{m}\), \(z^{q+1}=w^{q+1}=1\) and \(z\neq w\). This occurs precisely when \(\gcd(m,q+1)\neq 1\), so \(P(x)\) satisfies Condition (1) if and only if \(m\) and \(q+1\) are coprime. Note that \(G_{P}\) is identically zero if and only if \(\theta^{q+1}=1\).
We can apply the next well-known result to determine when \(P(x)\) is irreducible.
**Lemma 5.2**.: _[_12_, Theorem 3.75]_ _Let \(m\geq 2\) be an integer and let \(\theta\in\mathbb{F}_{q}^{*}\). Then \(x^{m}-\theta\in\mathbb{F}_{q}[x]\) is irreducible if and only if the following hold:_
* \(\operatorname{rad}(m)\mid o(\theta)\)_;_
* \(\gcd\left(m,\frac{q-1}{o(\theta)}\right)=1\)_;_
* _if_ \(m\equiv 0\mod 4\) _then_ \(q\equiv 1\mod 4\)_._
When \(m=3\), we can combine Lemmas 5.1 and 5.2 to give the following.
**Theorem 5.3**.: _A cubic binomial \(x^{3}-\theta\in\mathbb{F}_{q^{2}}[x]\) is irreducible and satisfies Condition (1) if and only if \(q\equiv 1\mod 3\) and \(3\) does not divide \(\frac{q^{2}-1}{o(\theta)}\)._
Proof.: Suppose \(x^{3}-\theta\) is irreducible and satisfies Condition (1). By Lemma 5.1 we must have \(q\equiv 1\mod 3\), and by Lemma 5.2 we have that \(3\) does not divide \(\frac{q^{2}-1}{o(\theta)}\). Thus the two conditions are necessary.
Suppose now that \(q\equiv 1\mod 3\) and \(3\) does not divide \(\frac{q^{2}-1}{o(\theta)}\). Then \(o(\theta)\) does not divide \(\frac{q^{2}-1}{3}=(q+1)\left(\frac{q-1}{3}\right)\), and since \(\frac{q-1}{3}\) is an integer, we get that \(\theta^{q+1}\neq 1\). Finally since \(3\) divides \(q^{2}-1\) we must have that \(3\) divides \(o(\theta)\), and so \(x^{3}-\theta\) is irreducible and satisfies Condition (1), showing that the two conditions are sufficient.
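For instance, take \(q=7\): then \(q\equiv 1\mod 3\) and \(q^{2}-1=48=2^{4}\cdot 3\), so \(3\) fails to divide \(\frac{q^{2}-1}{o(\theta)}\) precisely when \(3\mid o(\theta)\). The admissible orders are \(3,6,12,24,48\), giving \(\varphi(3)+\varphi(6)+\varphi(12)+\varphi(24)+\varphi(48)=2+2+4+8+16=32\) elements \(\theta\in\mathbb{F}_{49}\) for which \(x^{3}-\theta\) is irreducible and satisfies Condition (1).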
**Remark 5.4**.: The case of binomials \(x^{m}-\theta\) with \(\theta\) a primitive element of \(\mathbb{F}_{q^{2}}\) and \(m\) an odd divisor of \(q-1\) corresponds to Kantor's Type 4 construction. Thus we have a generalisation of this family, both in terms of new inequivalent examples when \(m\) divides \(q-1\), and new values of \(m\). For example, this section shows that there exist irreducible binomials of degree \(25\) over \(\mathbb{F}_{11^{2}}\) satisfying Condition (1), and hence new \(2\)-spreads of \(V(50,11)\) with a cyclic transitive group of automorphisms, and new flag-transitive linear spaces.
## 6 Characterisation of cubics
We are now ready to fully characterise the irreducible cubic polynomials satisfying Condition (1). We split them into three (not necessarily non-empty) parameterised families.
**Theorem 6.1**.: _Let \(P(x)=x^{3}-\delta x^{2}-\gamma x-\theta\in\mathbb{F}_{q^{2}}[x]\) be irreducible. Then \(H_{P}(z,w)\) is not identically zero and reducible if and only if one of the following holds:_
\[P(x)=B_{\theta}(x) :=x^{3}-\theta,\ \theta^{q+1}\neq 1;\] \[P(x)=P_{\delta,\alpha}(x) :=x^{3}-\delta x^{2}-(\delta\alpha+3\alpha^{1-q})x-(\delta\alpha^ {2}(1-\alpha^{-(q+1)})/3+\alpha^{2-q}),\ \alpha\neq 0;\] \[P(x)=Q_{\delta,\gamma}(x) :=x^{3}-\delta x^{2}-\gamma x+\delta\gamma/9,\ \gamma^{q+1}=9.\]
_Moreover,_
* _an irreducible of the form_ \(B_{\theta}(x)\) _satisfies Condition (_1_) if and only if_ \(\theta^{q+1}\neq 1\) _and_ \(q\equiv 1\mod 3\)_;_
* _an irreducible of the form_ \(P_{\delta,\alpha}(x)\) _satisfies Condition (_1_) if and only if_ \(\frac{4-\alpha^{q+1}}{3\alpha^{q+1}}\) _is a nonzero square in_ \(\mathbb{F}_{q}\)_, and either_ \(\delta=0\) _or_ \((\alpha+3\delta^{-q})^{q+1}\neq 1\)_;_
* _an irreducible of the form_ \(Q_{\delta,\gamma}(x)\) _satisfies Condition (_1_) if and only if_ \(\gamma^{\frac{q+1}{2}}=3\)_._
Proof.: We first note that the set of polynomials \(\{z^{2}w^{2},z^{2}w+zw^{2},z^{2}+zw+w^{2},zw,z+w,1\}\) is linearly independent in \(\mathbb{F}_{q^{2}}[z,w]\). By Lemmas 4.1 and 4.2 we have that
\[H_{P}(z,w)=\mu(czw+az+bw+d)(czw+bz+aw+d)\]
for some \(a,b,c,d,\mu\in\mathbb{F}_{q^{2}}\). Thus by comparing coefficients (see the beginning of Section 4) we see that
\[\begin{array}{rrl}(1\text{A})&-(\theta^{q}\delta+\gamma^{q})&=\mu c^{2}\\ (1\text{B})&-(\theta\delta^{q}+\gamma)&=\mu d^{2}\\ (2\text{A})&-(\theta^{q}\gamma+\delta^{q})&=\mu c(a+b)\\ (2\text{B})&-(\theta\gamma^{q}+\delta)&=\mu d(a+b)\\ (3)&1-\theta^{q+1}&=\mu ab\\ (4)&\delta^{q+1}-\gamma^{q+1}&=\mu(2cd+a^{2}+b^{2}-ab)\end{array}\]
**Case 1:** Assume \(c=0\). Then \((\theta\delta^{q}+\gamma)^{q}=\theta^{q}\delta+\gamma^{q}=0\), and so \(d=0\). Therefore \(\delta=-\theta\gamma^{q}\), so (1A) and (3) imply that \(ab=0\) or \(\gamma=0\). If either \(a=0\) or \(b=0\), (3) and (4) require that \(a=b=0\), giving \(H_{P}(z,w)\equiv 0\). Thus \(\gamma=0\), which implies that \(\delta=0\) and so \(P(x)=x^{3}-\theta=B_{\theta}(x)\). The binomial case is characterised in Theorem 5.3.
**Case 2:** Assume \(c\neq 0\) and \(a+b\neq 0\). We may assume without loss of generality that \(c=1\). Since \((\theta^{q}\delta+\gamma^{q})^{q}=\theta\delta^{q}+\gamma\), equations (1A) and (1B) tell us that \(\mu^{q-1}=d^{2}\). Since \((\theta^{q}\gamma+\delta^{q})^{q}=\theta\gamma^{q}+\delta\), equations (2A) and (2B) give that \(\mu^{q}(a+b)^{q}=\mu d(a+b)\). Thus we have \(a+b=d(a+b)^{q}\), so \(d=(a+b)^{1-q}\) and the following equations hold:
\[\begin{array}{rrl}(1\text{A})&-(\theta^{q}\delta+\gamma^{q})&=\mu\\ (1\text{B})&-(\theta\delta^{q}+\gamma)&=\mu(a+b)^{2-2q}\\ (2\text{A})&-(\theta^{q}\gamma+\delta^{q})&=\mu(a+b)\\ (2\text{B})&-(\theta\gamma^{q}+\delta)&=\mu(a+b)^{2-q}\\ (3)&1-\theta^{q+1}&=\mu ab\\ (4)&\delta^{q+1}-\gamma^{q+1}&=\mu(2(a+b)^{1-q}+a^{2}+b^{2}-ab)\end{array}\]
To obtain an expression for \(\theta\) in terms of \(\delta,a\) and \(b\), we substitute the expression for \(\gamma^{q}\) from (1A) into (2B) to yield \(\mu(\theta-(a+b)^{2-q})=\delta(1-\theta^{q+1})\). Replacing \(1-\theta^{q+1}\) using (3) and dividing by \(\mu\) we get \(\theta=\delta ab+(a+b)^{2-q}\). To obtain an expression for \(\gamma\), we first multiply (1A) by \(\gamma\), then substitute in the expression for \(\theta^{q}\gamma\) from (2A) to get \(\mu(\gamma-\delta(a+b))=\delta^{q+1}-\gamma^{q+1}\). Replacing the right-hand side using (4) and dividing by \(\mu\), we get \(\gamma=\delta(a+b)+a^{2}-ab+b^{2}+2(a+b)^{1-q}\).
For convenience in the remaining calculations, we define \(\alpha=a+b\), and \(\beta=ab\). Note that we are assuming that \(\alpha\neq 0\). Then our expressions for \(\gamma\) and \(\theta\) become
\[\gamma =\delta\alpha+\alpha^{2}-3\beta+2\alpha^{1-q},\] \[\theta =\delta\beta+\alpha^{2-q}.\]
We substitute these expressions into (1A), obtaining
\[\mu=-\alpha^{q-1}(2+\delta^{q}\alpha+\delta\alpha^{q}+\alpha^{q+1})-\beta^{q}(\delta^{q+1}-3)\]
and hence from (1B) we have that
\[\alpha^{2-2q}(\alpha^{2q-2}\beta-\beta^{q})(\delta^{q+1}-3)=0.\]
Suppose \(\alpha^{2q-2}\beta-\beta^{q}\neq 0\). Then \(\delta^{q+1}=3\). Equation (2A) says that
\[\delta^{q}(1-\alpha^{q+1}+(\alpha^{2}+2\alpha^{1-q})\beta^{q}-3\beta^{q+1})=3 \alpha(\alpha^{2q-2}\beta-\beta^{q}),\]
so multiplying both sides by \(\delta\) and rearranging gives
\[\delta=\frac{1-\alpha^{q+1}+(\alpha^{2}+2\alpha^{1-q})\beta^{q}-3\beta^{q+1}}{ \alpha(\alpha^{2q-2}\beta-\beta^{q})}=:\frac{X}{Y},\]
where \(X\) denotes the displayed numerator and \(Y\) the denominator. Then \(\delta^{q+1}=3\iff X^{q+1}-3Y^{q+1}=0\). Observe that \(X^{q}=X+(\alpha+2\alpha^{-q})Y\) and \(Y^{q}=-\alpha^{1-q}Y\). Hence
\[0 =X^{q+1}-3Y^{q+1}\] \[\iff 0 =X^{2}+(\alpha+2\alpha^{-q})XY+3\alpha^{1-q}Y^{2}\] \[\iff 0 =\alpha^{q}X^{2}+(\alpha^{q+1}+2)XY+3\alpha Y^{2}\] \[\iff 0 =\alpha^{q}\left(\frac{X}{Y}\right)+\alpha^{q+1}+2+3\alpha\left( \frac{Y}{X}\right)\] \[\iff 0 =\alpha^{q}\left(\frac{X}{Y}\right)+\alpha^{q+1}+2+\left(\frac{X^ {q+1}}{Y^{q+1}}\right)\alpha\left(\frac{Y}{X}\right)\] \[\iff 0 =2+\delta^{q}\alpha+\delta\alpha^{q}+\alpha^{q+1},\]
in which case \(\mu=0\), which contradicts \(H_{P}\not\equiv 0\).
Thus we must have \(\alpha^{2q-2}\beta=\beta^{q}\), so \(Y=0\). Equation (3) states that \(X^{q}=\delta^{q}Y^{q}=0\), so \(X=0\) also. Hence
\[0=X =1-\alpha^{q+1}+(\alpha^{2}+2\alpha^{1-q})\beta^{q}-3\beta^{q+1}\] \[=1-\alpha^{q+1}+(\alpha^{2}+2\alpha^{1-q})\alpha^{2q-2}\beta-3 \alpha^{2q-2}\beta^{2}\] \[=(\alpha^{q-1}\beta-1)(\alpha^{q+1}-3\alpha^{q-1}\beta-1).\]
If \(\beta=\alpha^{1-q}\), then \(P(x)\) has \(\delta+\alpha\) as a root and so is reducible. Thus we have
\[\alpha^{q+1}-3\alpha^{q-1}\beta =1\] \[\iff \alpha^{q-1}(\alpha^{2}-3\beta) =1\] \[\iff \alpha^{2}-3\beta=\alpha^{1-q}.\]
This yields the expressions for \(\gamma\) and \(\theta\), which give \(P(x)=P_{\delta,\alpha}(x)\).
We note that without loss of generality, we may assume that
\[a =\frac{\alpha}{2}\left(1+\sqrt{\frac{4-\alpha^{q+1}}{3\alpha^{q+1} }}\right), \tag{2a}\] \[b =\frac{\alpha}{2}\left(1-\sqrt{\frac{4-\alpha^{q+1}}{3\alpha^{q+1 }}}\right). \tag{2b}\]
Now \(H_{P_{\delta,\alpha}}\equiv 0\) if and only if \(\delta^{q+1}\left(\frac{\alpha^{q+1}-1}{3}\right)+\delta^{q}\alpha+\delta \alpha^{q}+3=0\), which occurs if and only if \(\delta\neq 0\) and \((\alpha+3\delta^{-q})^{q+1}=1\). In this case \(P_{\delta,\alpha}(x)\) does not satisfy Condition (1).
If \(H_{P_{\delta,\alpha}}\not\equiv 0\) and \(bd^{q}-a^{q}c\neq 0\), then the quadratic \((bd^{q}-a^{q}c)x^{2}+(d^{q+1}+b^{q+1}-c^{q+1}-a^{q+1})x+(b^{q}d-ac^{q})\) is a nonzero scalar multiple of the quadratic \(cx^{2}+(a+b)x+d\), since \((bd^{q}-a^{q}c)(a+b)=b(a+b)^{q}-a^{q}(a+b)=b^{q+1}-a^{q+1}=d^{q+1}+b^{q+1}-c^{q+ 1}-a^{q+1}\), and \((bd^{q}-a^{q}c)(a+b)^{1-q}=b-a^{q}(a+b)^{1-q}=b+b^{q}(a+b)^{1-q}-(a+b)^{q}(a+b) ^{1-q}=b^{q}(a+b)^{1-q}-a=b^{q}d-ac^{q}\), and so by Theorem 4.5, \(P_{\delta,\alpha}(x)\) satisfies Condition (1).
Now if \(bd^{q}-a^{q}c=0\), then the first quadratic is identically zero, and so \(P_{\delta,\alpha}(x)\) does not satisfy Condition (1). This occurs if and only if \(a^{q+1}=b^{q+1}\), if and only if \(\frac{4-\alpha^{q+1}}{3\alpha^{q+1}}\) is zero or a nonsquare in \(\mathbb{F}_{q}\).
**Case 3:** Assume \(c\neq 0\) and \(a+b=0\). Again we assume without loss of generality that \(H_{P}(z,w)\) factorises as
\[\mu(zw+az+bw+d)(zw+bz+aw+d)\]
for some \(\mu\in\mathbb{F}_{q^{2}}^{*}\). Then the following equations hold:
\[\begin{array}{rrl}\text{(1A)}&-(\theta^{q}\delta+\gamma^{q})&=\mu\\ \text{(1B)}&-(\theta\delta^{q}+\gamma)&=\mu d^{2}\\ \text{(2A)}&-(\theta^{q}\gamma+\delta^{q})&=0\\ \text{(2B)}&-(\theta\gamma^{q}+\delta)&=0\\ \text{(3)}&1-\theta^{q+1}&=-\mu a^{2}\\ \text{(4)}&\delta^{q+1}-\gamma^{q+1}&=\mu(2d+3a^{2})\end{array}\]
From (2B), we have \(\delta=-\theta\gamma^{q}\). Substituting this into (1A) gives \(\gamma^{q}(\theta^{q+1}-1)=\mu\) and so \(\gamma^{q}a^{2}=1\) by (3). Hence \(a^{2}=\gamma^{-q}\). Equation (1B) tells us that
\[-(\theta(-\theta\gamma^{q})^{q}+\gamma) =\mu d^{2}\] \[\iff\gamma(\theta^{q+1}-1) =\mu d^{2}\] \[\iff\gamma a^{2} =d^{2}\] \[\iff\gamma^{1-q} =d^{2}.\]
Substituting the expression for \(\delta\) into (4) gives
\[(-\theta\gamma^{q})^{q+1}-\gamma^{q+1} =\mu(2d+3a^{2})\] \[\iff\gamma^{q+1}(\theta^{q+1}-1) =\mu(2d+3a^{2})\] \[\iff\gamma^{q+1}a^{2} =2d+3a^{2}\] \[\iff\gamma =2d+3\gamma^{-q}\] \[\iff d =\frac{\gamma-3\gamma^{-q}}{2}.\]
Squaring the last equation yields
\[\gamma^{1-q}=d^{2} =\frac{\gamma^{2}-6\gamma^{1-q}+9\gamma^{-2q}}{4}\] \[\iff\gamma^{2}-10\gamma^{1-q}+9\gamma^{-2q} =0\] \[\iff\gamma^{2(q+1)}-10\gamma^{q+1}+9 =0\] \[\iff\gamma^{q+1}=1\text{ or }\gamma^{q+1} =9.\]
If \(\gamma^{q+1}=1\) then \(\gamma=\gamma^{-q}\), so (2B) states that
\[\theta\gamma^{q} =-\delta\] \[\iff\theta\gamma^{-1} =-\delta\] \[\iff\theta =-\delta\gamma.\]
The polynomial \(P(x)=x^{3}-\delta x^{2}-\gamma x+\delta\gamma\) has \(\delta\) as a root and is hence reducible, so we must have \(\gamma^{q+1}=9\).
If \(\gamma^{q+1}=9\) then \(\gamma=9\gamma^{-q}\), so \(d^{2}=\gamma^{2}/9\) and \(d=\pm\gamma/3\). If \(d=\gamma/3\), equations (1A)-(4) hold. If \(d=-\gamma/3\), we arrive at a contradiction in (4) with \(1=-3\). We now have \(P(x)=x^{3}-\delta x^{2}-\gamma x+\delta\gamma/9\), where \(\gamma^{q+1}=9\), \(a=-b\) and \(d=\gamma/3\).
By Theorem 4.5, there exist \(z,w\in\mathbb{F}_{q^{2}}\) such that \(H_{P}(z,w)=0\) and \(z^{q+1}=w^{q+1}=1\) if and only if
\[\Delta =(a^{q+1}-b^{q+1}+c^{q+1}-d^{q+1})^{2}-4(bd^{q}-a^{q}c)^{q+1}\] \[=\left(\frac{-4}{27}\right)(\gamma^{(q+1)/2}+3)^{2}\]
is zero or a nonsquare in \(\mathbb{F}_{q}\). Since \(\gamma^{q+1}=9\), \(\gamma^{(q+1)/2}=\pm 3\). Hence
\[\Delta=\begin{cases}\frac{16}{-3}=\frac{4^{2}}{-3},&\text{if }\gamma^{(q+1)/2}=3\\ 0,&\text{if }\gamma^{(q+1)/2}=-3.\end{cases}\]
When \(\gamma^{(q+1)/2}=-3\), the first quadratic in the statement of Theorem 4.5 is identically zero, and so Condition (1) is never satisfied. When \(\gamma^{(q+1)/2}=3\) and \(q\equiv 2\mod 3\), \(\Delta\) is a nonzero nonsquare. The two quadratics in the statement of Theorem 4.5 are \(-(a\gamma^{q}/3+a^{q})x^{2}-(a\gamma^{q}/3+a^{q})^{q}\) and \(x^{2}+\gamma/3\) respectively. These are scalar multiples of each other, since \((a\gamma^{q}/3+a^{q})\gamma/3=a+a^{q}\gamma/3=(a\gamma^{q}/3+a^{q})^{q}\), and hence Condition (1) is always satisfied.
When \(\gamma^{(q+1)/2}=3\) and \(q\equiv 1\mod 3\), \(\Delta\) is a nonzero square, and hence Condition (1) is satisfied.
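As an illustration outside the proof, the third family can be cross-checked computationally for \(q=5\) using the helpers of Section 2.5: here \((q+1)/2=3\), \(9^{-1}=4\) in \(\mathbb{F}_{5}\), and a cubic over \(\mathbb{F}_{25}\) is irreducible precisely when it has no root in \(\mathbb{F}_{25}\).

```python
# Every irreducible Q_{delta,gamma} with gamma^{(q+1)/2} = 3 should satisfy
# Condition (1); condition_one is the checker from the Section 2.5 sketch.
field = [(a, b) for a in range(q) for b in range(q)]
ninth = pow(9, -1, q)  # 1/9 in F_5
found = 0
for gamma in field:
    if power(gamma, (q + 1) // 2) != (3 % q, 0):
        continue
    for delta in field:
        coeffs = [mul((ninth, 0), mul(delta, gamma)),  # + delta gamma / 9
                  mul(((-1) % q, 0), gamma),           # - gamma x
                  mul(((-1) % q, 0), delta),           # - delta x^2
                  (1, 0)]                              # x^3
        if any(poly_eval(coeffs, x) == (0, 0) for x in field):
            continue  # has a root in F_25, hence reducible: skip
        assert condition_one(coeffs, 3)  # should never fail
        found += 1
print(found)  # number of irreducible members found for q = 5
```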
## 7 Classification of cubics
In this section we determine the number and the nature of the equivalence classes of irreducible cubics satisfying Condition (1). We begin by enumerating the irreducible cubics satisfying Condition (1), and subsequently find representatives for each equivalence class.
### Enumeration
We first need some technical lemmas which will enable us to perform the desired enumeration. To start, we recall the following characterisation of irreducible cubic polynomials due to Dickson [7].
**Lemma 7.1**.: _The cubic \(x^{3}+sx+t\in\mathbb{F}_{q}[x]\) is irreducible over \(\mathbb{F}_{q}\) if and only if the following two conditions hold:_
* \(R:=-4s^{3}-27t^{2}\) _is a nonzero square in_ \(\mathbb{F}_{q}\)_;_
* \(S:=(-t+\mu\sqrt{-3})/2\) _is a noncube in_ \(\mathbb{F}_{q}\left(\sqrt{-3}\right)\)_, where_ \(R=81\mu^{2}\)_._
_Moreover, if \(R\) is a (not necessarily nonzero) square in \(\mathbb{F}_{q}\), then this cubic has either zero or three roots in \(\mathbb{F}_{q}\)._
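For \(q=5\) the criterion is easy to test against direct root counting (a cubic over \(\mathbb{F}_{5}\) is reducible exactly when it has a root there); conveniently \(-3\equiv 2\pmod 5\), so in the \(\mathbb{F}_{25}\) model of our earlier sketches \(\sqrt{-3}=s\) and \(\mathbb{F}_{5}(\sqrt{-3})=\mathbb{F}_{25}\).

```python
# Check Dickson's criterion for all cubics x^3 + s*x + t over F_5, reusing the
# F_25 helpers; "noncube in F_25" means g^((25-1)/3) = g^8 != 1.
ok = True
for sc in range(q):        # the coefficient called s in Lemma 7.1
    for t in range(q):
        has_root = any((x ** 3 + sc * x + t) % q == 0 for x in range(q))
        R = (-4 * sc ** 3 - 27 * t ** 2) % q
        root_R = next((m for m in range(q) if (m * m) % q == R), None)
        irreducible = False
        if R != 0 and root_R is not None:
            # S = (-t + mu*sqrt(-3))/2 with 81 mu^2 = R; note 81 = 1 in F_5
            S = mul(((-t) % q, root_R), (pow(2, -1, q), 0))
            irreducible = power(S, (q * q - 1) // 3) != (1, 0)
        ok = ok and (irreducible == (not has_root))
print(ok)  # expected: True
```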
We apply this result to the polynomials \(P_{\delta,\alpha}(x)\) to obtain useful criteria towards counting irreducible polynomials of this form satisfying Condition (1).
**Lemma 7.2**.: _A polynomial of the form \(P_{\delta,\alpha}(x)\in\mathbb{F}_{q^{2}}[x]\) is either irreducible or has all three of its roots in \(\mathbb{F}_{q^{2}}\). Furthermore, it is reducible if and only if at least one of the following holds:_
* \(\alpha^{q+1}=4;\)__
* \(\delta=\frac{-3\alpha}{2}\left(1+\sqrt{1-4\alpha^{-(q+1)}}\right);\)__
* \(\delta=\frac{-3\alpha}{2}\left(1+\frac{\kappa^{3}+1}{\kappa^{3}-1}\sqrt{1-4 \alpha^{-(q+1)}}\right),\)__
_for some \(\kappa\in\mathbb{F}_{q^{2}}\)._
Proof.: We first perform a change of variables in order to apply Lemma 7.1. Let \(x=y+\delta/3\). Then \(P_{\delta,\alpha}(x)=y^{3}+sy+t\), where
\[s=-(3\alpha^{1-q}+\delta\alpha+\delta^{2}/3);\quad t=-(3\alpha+2\delta)(9 \alpha^{1-q}+3\alpha\delta+\delta^{2})/27.\]
Using the notation of Lemma 7.1,
\[R=\frac{-\alpha^{1-q}}{3}(\alpha^{q+1}-4)(9\alpha^{1-q}+3\alpha\delta+\delta^ {2})^{2}\]
Hence \(R\) is always a square in \(\mathbb{F}_{q^{2}}\), and thus by Lemma 7.1 the first claim holds.
For convenience, define \(r:=\sqrt{1-4\alpha^{-(q+1)}}.\) Then it is clear that \(R\) is zero if and only if \(\alpha^{q+1}=4\) or
\[\delta=\delta_{\pm}:=-\frac{3\alpha}{2}(1\pm r)\]
Now
\[S=\frac{(\delta-\delta_{\pm})^{2}(\delta-\delta_{\mp})}{27}=\frac{\delta- \delta_{\mp}}{\delta-\delta_{\pm}}\left(\frac{\delta-\delta_{\pm}}{3}\right)^ {3}.\]
Hence \(S\) is a cube if and only if
\[\frac{\delta-\delta_{-}}{\delta-\delta_{+}}\]
is a cube. Suppose \(\frac{\delta-\delta_{-}}{\delta-\delta_{+}}=\kappa^{3}\) for some \(\kappa\in\mathbb{F}_{q^{2}}\). If \(\kappa^{3}=1\), then \(r=0\) and so \(\alpha^{q+1}=4\). If \(\kappa^{3}\neq 1\), then
\[\delta=\frac{(\delta_{-})-(\delta_{+})\kappa^{3}}{1-\kappa^{3}}=\frac{-3 \alpha}{2}\left(1+\frac{\kappa^{3}+1}{\kappa^{3}-1}r\right),\]
completing the proof.
We saw in Theorem 6.1 that the case where \((\alpha+3\delta^{-q})^{q+1}=1\) appears to require special attention. We show now that in this case, a polynomial satisfying Condition (1) is reducible, and so can be disregarded.
**Lemma 7.3**.: _If \((\alpha+3\delta^{-q})^{q+1}=1\) and \((4\alpha^{-(q+1)}-1)/3\) is a nonzero square in \(\mathbb{F}_{q}\) then \(P_{\delta,\alpha}(x)\in\mathbb{F}_{q^{2}}[x]\) is reducible._
Proof.: Let \((4\alpha^{-(q+1)}-1)/3=\lambda^{2}\) for some \(\lambda\in\mathbb{F}_{q}^{*}\) and let \(r=\sqrt{1-4\alpha^{-(q+1)}}\). Then \(r=\sqrt{-3}\lambda\in\mathbb{F}_{q}\iff\sqrt{-3}\in\mathbb{F}_{q}\iff q\equiv 1 \mod 3\). We also note that \(r\neq\pm 1\) since \(\alpha\neq 0\). We claim that any \(\delta\) satisfying \((\alpha+3\delta^{-q})^{q+1}=1\) is of the form listed in Lemma 7.2. There are at most \(q+1\) such \(\delta\) when \(\alpha^{q+1}\neq 1\) and at most \(q\) otherwise. Define
\[\delta_{\kappa}:=\frac{-3\alpha}{2}\left(1+\frac{\kappa^{3}+1}{\kappa^{3}-1}r \right),\]
where \(\kappa\in\mathbb{F}_{q^{2}}\) and \(\kappa^{3}\neq 1\).
We first suppose \(q\equiv 1\mod 3\). Then \((\alpha+3\delta_{\kappa}^{-q})^{q+1}=1\iff\kappa^{3(q+1)}(r+1)^{3}+(r-1)^{3}=0\). For each \(r\), there exist \(q+1\) elements \(\kappa\in\mathbb{F}_{q^{2}}\) such that
\[\kappa^{q+1}=\frac{1-r}{1+r}\]
since
\[\left(\kappa^{q+1}\right)^{q-1}=1=\left(\frac{1-r}{1+r}\right)^{q-1}.\]
Note that \(\delta_{\kappa}=\delta_{\iota}\) if and only if \(\kappa^{3}=\iota^{3}\). Since \(\kappa^{q+1}=\iota^{q+1}\) and \(q\equiv 1\mod 3\), the \(q+1\) values of \(\kappa\) such that \(\kappa^{q+1}=\frac{1-r}{1+r}\) give \(q+1\) distinct solutions \(\delta=\delta_{\kappa}\) to \((\alpha+3\delta^{-q})^{q+1}=1\), provided \(\kappa^{3}\neq 1\). If \(\kappa^{3}=1\), then
\[\frac{1-r}{1+r}=\kappa^{q+1}=\kappa^{2}(\kappa^{3})^{(q-1)/3}=\kappa^{2}\]
and so
\[1=\kappa^{3}=\frac{1-r}{1+r}\kappa\implies\kappa=\frac{1+r}{1-r}.\]
It follows that \(r^{2}=-3\), which occurs if and only if \(\alpha^{q+1}=1\), in which case \((\alpha+3\delta_{\kappa}^{-q})^{q+1}=1\iff\kappa^{3(q+1)}=1\). Hence when \(r=\sqrt{-3}\), the \(q\) values of \(\kappa\) such that \(\kappa^{q+1}=1\) and \(\kappa^{3}\neq 1\) give \(q\) distinct solutions \(\delta=\delta_{\kappa}\) to \((\alpha+3\delta^{-q})^{q+1}=1\).
Now suppose \(q\equiv 2\mod 3\). Then \((\alpha+3\delta_{\kappa}^{-q})^{q+1}=1\iff\kappa^{3}(\kappa^{3(q-1)}(r-1)^{3} +(r+1)^{3})=0\). Since \(r^{q}=-r\), we have \(\left(\frac{1+r}{1-r}\right)^{q+1}=1\), and so there exist \(q-1\) elements \(\kappa\in\mathbb{F}_{q^{2}}\) such that \(\kappa^{q-1}=\frac{1+r}{1-r}\). Note again that \(\delta_{\kappa}=\delta_{\iota}\) if and only if \(\kappa^{3}=\iota^{3}\). Since \(\kappa^{q-1}=\iota^{q-1}\) and \(q\equiv 2\mod 3\), the \(q-1\) values of \(\kappa\) such that \(\kappa^{q-1}=\frac{1+r}{1-r}\) give \(q-1\) distinct solutions \(\delta=\delta_{\kappa}\) to \((a+3\delta^{-q})^{q+1}\), provided \(\kappa^{3}\neq 1\). If \(\kappa^{3}=1\), then
\[\frac{1+r}{1-r}=\kappa^{q-1}=\kappa(\kappa^{3})^{(q-2)/3}=\kappa.\]
It follows that \(r^{2}=-3\), which occurs if and only if \(\alpha^{q+1}=1\), in which case \((\alpha+3\delta_{\kappa}^{-q})^{q+1}=1\iff\kappa^{3}(\kappa^{3(q-1)}-1)=0\). Hence when \(r=\sqrt{-3}\), the \(q-2\) values of \(\kappa\) such that \(\kappa^{q-1}=1\) and \(\kappa^{3}\neq 1\) give \(q-2\) distinct solutions \(\delta=\delta_{\kappa}\) to \((\alpha+3\delta^{-q})^{q+1}=1\).
The remaining two solutions to \((\alpha+3\delta^{-q})^{q+1}=1\) for both the case in which \(\alpha^{q+1}\neq 1\) and the case in which \(\alpha^{q+1}=1\) are given by \(\delta=\delta_{0}\) and \(\delta=\frac{-3\alpha}{2}(1+r)\).
Thus the claim holds and hence \(P_{\delta,\alpha}(x)\) is reducible.
Next we determine precisely when different values of \((\delta,\alpha)\) define the same polynomial \(P_{\delta,\alpha}(x)\).
**Lemma 7.4**.: _Suppose \(P_{\delta,\alpha}(x)=P_{\delta^{\prime},A}(x)\) for \((\delta,\alpha)\neq(\delta^{\prime},A)\). Then \(P_{\delta,\alpha}(x)=(x-\delta/3)^{3}\)._
Proof.: By comparing the coefficients of \(P_{\delta,\alpha}(x)\) and \(P_{\delta^{\prime},A}(x)\), we see that \(\delta=\delta^{\prime}\), so \(\alpha\neq A\). Then
\[\delta=\frac{3(A^{1-q}-\alpha^{1-q})}{\alpha-A}\]
and
\[K:=\alpha^{2(1-q)}-\alpha^{2-q}A+\alpha^{1-q}A^{2}+A^{2(1-q)}+(\alpha^{2}-2 \alpha^{1-q})A^{1-q}-\alpha A^{2-q}=0.\]
We calculate that
\[P_{\delta,\alpha}(x)=x^{3}-\frac{3(A^{1-q}-\alpha^{1-q})}{\alpha-A}x^{2}- \frac{3(\alpha A^{1-q}-A\alpha^{1-q})}{\alpha-A}x-\frac{(\alpha^{2}-\alpha^{1- q})A^{1-q}-(\alpha A-\alpha^{1-q})\alpha^{1-q}}{\alpha-A}\]
and
\[\left(x-\frac{\delta}{3}\right)^{3}=x^{3}-\frac{3(A^{1-q}-\alpha^{1-q})}{ \alpha-A}x^{2}+\frac{3(A^{1-q}-\alpha^{1-q})^{2}}{(\alpha-A)^{2}}x-\frac{(A^{ 1-q}-\alpha^{1-q})^{3}}{(\alpha-A)^{3}}.\]
The difference of these two polynomials is
\[-\frac{3K}{(\alpha-A)^{2}}x-\frac{(\alpha^{2}+\alpha^{1-q}-\alpha A-A^{1-q})K} {(\alpha-A)^{3}}=0,\]
and so the result holds.
We are now ready to enumerate the number of irreducible polynomials of the form \(P_{\delta,\alpha}(x)\) which satisfy Condition (1).
**Lemma 7.5**.: _The number of polynomials of the form \(P_{\delta,\alpha}(x)\) which are irreducible and satisfy Condition (1) is \(\frac{(q+1)(q-3)(q^{2}-1)}{3}\) when \(q\equiv 1\mod 3\), and \(\frac{(q+1)(q-1)(q^{2}-1)}{3}\) when \(q\equiv 2\mod 3\)._
_Moreover, the number of polynomials of the form \(P_{\delta,1}(x)\) which are irreducible and satisfy Condition (1) is \(\frac{2(q^{2}-1)}{3}\)._
Proof.: For each \(\alpha\), we wish to determine the number of \(\delta\) such that \(P_{\delta,\alpha}(x)\) is irreducible. If \(\alpha^{q+1}=4\), then \(P_{\delta,\alpha}(x)=(x-(\delta+\alpha))(x+\alpha/2)^{2}\) is reducible. We fix \(\alpha\) such that \(\alpha^{q+1}\neq 4\) and count the number of \(\delta\) for which \(P_{\delta,\alpha}(x)\) is reducible.
Suppose \(P_{\delta,\alpha}(x)\) is reducible. Then \(P_{\delta,\alpha}(x)=(x-\tau)(x-\sigma)(x-\nu)\) for some \(\tau,\sigma,\nu\in\mathbb{F}_{q^{2}}\) by Lemma 7.2. Equating coefficients yields that
\[\tau+\sigma+\nu =\delta,\] (i) \[-(\tau\sigma+\tau\nu+\sigma\nu) =\delta\alpha+3\alpha^{1-q},\] (ii) \[\tau\sigma\nu =\delta\alpha^{2}(1-\alpha^{-(q+1)})/3+\alpha^{2-q}.\] (iii)
We obtain that (up to labelling of \(\sigma\) and \(\nu\))
\[\sigma=-\left(\frac{a\tau+\alpha^{1-q}}{\tau+b}\right)\]
and
\[\nu=-\left(\frac{b\tau+\alpha^{1-q}}{\tau+a}\right),\]
where \(a\) and \(b\) are as in (2), and \(\tau\notin\{-a,-b\}\). Note that if \(\tau\in\{-a,-b\}\), then \(\alpha^{q+1}=4\), contrary to our assumption. Note furthermore that \(a\neq b\) precisely when \(\alpha^{q+1}\neq 4\).
We remark that \(\tau=\sigma\) if and only if \(\tau^{2}+\alpha\tau+\alpha^{1-q}=0\), while \(\tau=\nu\) if and only if \(\tau^{2}+\alpha\tau+\alpha^{1-q}=0\), and \(\sigma=\nu\) if and only if \(\tau^{2}+\alpha\tau+\alpha^{1-q}=0\) or \(a=b\). Hence if any two of \(\tau,\sigma\) and \(\nu\) are equal, then all three are equal and \(P_{\delta,\alpha}(x)=(x-\tau)^{3}\) for some \(\tau\in\mathbb{F}_{q^{2}}\). Equations (i) and (ii) then imply that \(\tau^{2}+\alpha\tau+\alpha^{1-q}=0\), and (iii) is satisfied whenever (i) and (ii) are satisfied, since it can be rearranged to read \((\tau^{2}+\alpha\tau+\alpha^{1-q})(\tau-\alpha)=0\). The discriminant of \(\tau^{2}+\alpha\tau+\alpha^{1-q}\) is \(\alpha^{2}(1-4\alpha^{-(q+1)})\), which is nonzero by assumption and always a square in \(\mathbb{F}_{q^{2}}\), so there are precisely two values of \(\tau\), and hence two values of \(\delta\), for which \(P_{\delta,\alpha}(x)\) has a triple root in \(\mathbb{F}_{q^{2}}\).
Hence for any of the \(q^{2}-4\) values of \(\tau\) such that \((\tau+a)(\tau+b)(\tau^{2}+\alpha\tau+\alpha^{1-q})\neq 0\), there is a unique \(\delta\) for which \(\tau\) is a root of a polynomial \(P_{\delta,\alpha}(x)\) having three distinct roots in \(\mathbb{F}_{q^{2}}\). Therefore there are \(\frac{q^{2}-4}{3}\) values of \(\delta\) for which \(P_{\delta,\alpha}(x)\) has three distinct roots in \(\mathbb{F}_{q^{2}}\).
Hence there are \(q^{2}-2-\frac{q^{2}-4}{3}=\frac{2(q^{2}-1)}{3}\) values of \(\delta\) for which \(P_{\delta,\alpha}(x)\) is irreducible. Recall from Theorem 6.1 that \(P_{\delta,\alpha}(x)\) satisfies Condition (1) if and only if \(\frac{4-\alpha^{q+1}}{3\alpha^{q+1}}\) is a nonzero square in \(\mathbb{F}_{q}\), and \(\delta=0\) or \((\alpha+3\delta^{-q})^{q+1}\neq 1\). By Lemma 7.3, it cannot occur that \(P_{\delta,\alpha}(x)\) is irreducible when \(\frac{4-\alpha^{q+1}}{3\alpha^{q+1}}\) is a nonzero square in \(\mathbb{F}_{q}\) and \((\alpha+3\delta^{-q})^{q+1}=1\), and hence it remains only to count the number of values of \(\alpha\) for which \(\frac{4-\alpha^{q+1}}{3\alpha^{q+1}}\) is a nonzero square in \(\mathbb{F}_{q}\). Each such \(\alpha\) will contribute \(\frac{2(q^{2}-1)}{3}\) irreducibles satisfying Condition (1); in particular for \(\alpha=1\) we get the second claim.
Suppose \(\frac{4-\alpha^{q+1}}{3\alpha^{q+1}}=y^{2}\) for some \(y\in\mathbb{F}_{q}^{*}\). If \(y^{2}\neq-1/3\), then
\[\alpha^{q+1}=\frac{4}{3y^{2}+1}.\]
Since \(-3\) is a square in \(\mathbb{F}_{q}\) if and only if \(q\equiv 1\mod 3\), we have
\[\left|\{y^{2}:y\in\mathbb{F}_{q}\mid y^{2}\neq-1/3\}\right|=\begin{cases}(q-3)/ 2&\text{if }q\equiv 1\mod 3\\ (q-1)/2&\text{if }q\equiv 2\mod 3\end{cases}.\]
The number of such \(\alpha\) is hence \((q+1)(q-3)/2\) when \(q\equiv 1\mod 3\), and \((q+1)(q-1)/2\) when \(q\equiv 2\mod 3\), completing the proof.
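Since a cubic over a field is irreducible precisely when it has no root in that field, these counts can be checked by brute force for small \(q\). The following Python sketch (our own illustration, not part of the original argument) realises \(\mathbb{F}_{25}\) as \(\mathbb{F}_{5}(t)\) with \(t^{2}=2\) and verifies the second claim of Lemma 7.5 for \(q=5\), where \(P_{\delta,1}(x)=x^{3}-\delta x^{2}-(\delta+3)x-1\) and the expected count is \(2(q^{2}-1)/3=16\).

```python
# Brute-force check of the second claim of Lemma 7.5 for q = 5.
# F_25 is realised as F_5(t) with t^2 = 2 (2 is a non-square mod 5);
# the element a + b*t is stored as the pair (a, b).
q, NONSQ = 5, 2
add = lambda u, v: ((u[0] + v[0]) % q, (u[1] + v[1]) % q)
mul = lambda u, v: ((u[0]*v[0] + NONSQ*u[1]*v[1]) % q,
                    (u[0]*v[1] + u[1]*v[0]) % q)
neg = lambda u: mul((q - 1, 0), u)
field = [(i, j) for i in range(q) for j in range(q)]  # all of F_{q^2}

def P(delta, x):
    # evaluate P_{delta,1}(x) = x^3 - delta x^2 - (delta + 3) x - 1
    x2 = mul(x, x)
    val = add(mul(x, x2), neg(mul(delta, x2)))
    val = add(val, neg(mul(add(delta, (3, 0)), x)))
    return add(val, (q - 1, 0))

# a cubic over F_{q^2} is irreducible iff it has no root in F_{q^2}
count = sum(1 for d in field if all(P(d, x) != (0, 0) for x in field))
print(count, 2 * (q**2 - 1) // 3)  # both should print 16
```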
Next we enumerate the number of irreducible polynomials of the form \(Q_{\delta,\gamma}(x)\) which satisfy Condition (1).
**Lemma 7.6**.: _The number of polynomials of the form \(Q_{\delta,\gamma}(x)=x^{3}-\delta x^{2}-\gamma x+\delta\gamma/9\) that are irreducible and satisfy Condition (1) is \(\frac{(q-1)(q+1)^{2}}{3}\)._
Proof.: First note that there are \(q^{2}(q+1)/2\) polynomials of the form \(Q_{\delta,\gamma}(x)\) satisfying Condition (1); there are \(q^{2}\) choices for \(\delta\) and \((q+1)/2\) choices for \(\gamma\), since \(\gamma^{(q+1)/2}=3\). We can transform \(Q_{\delta,\gamma}(x)\) into a cubic
\[Q^{\prime}(y)=y^{3}-(\delta^{2}/3+\gamma)y-2\delta(\delta^{2}+3\gamma)/27\]
whose coefficient of \(y^{2}\) is zero via the change of variable \(y=x-\delta/3\). Then, using the notation in Lemma 7.1, we require
\[R=\frac{4\gamma}{9}\left(\delta^{2}+3\gamma\right)^{2}\]
to be a nonzero square in \(\mathbb{F}_{q^{2}}\) in order for \(Q^{\prime}(y)\) to be irreducible. Since
\[\gamma^{(q^{2}-1)/2}=(\gamma^{(q+1)/2})^{q-1}=3^{q-1}=1,\]
we have that \(\gamma\), and hence \(R\), is a square in \(\mathbb{F}_{q^{2}}\). To ensure \(R\) is nonzero, we need \(\delta^{2}\neq-3\gamma\). We now have
\[\mu=\pm\frac{\sqrt{R}}{9}=\pm 2\sqrt{\gamma}\left(\frac{\delta^{2}+3\gamma}{27}\right)\]
and so for irreducibility of \(Q^{\prime}(y)\) we require
\[S =\frac{1}{27}(\delta\pm\sqrt{-3\gamma})(\delta+\sqrt{-3\gamma})( \delta-\sqrt{-3\gamma})\] \[=\frac{\delta\mp\sqrt{-3\gamma}}{\delta\pm\sqrt{-3\gamma}}\left( \frac{\delta\pm\sqrt{-3\gamma}}{3}\right)^{3}\]
to be a noncube in \(\mathbb{F}_{q^{2}}\). Thus, we need \((\delta\mp\sqrt{-3\gamma})/(\delta\pm\sqrt{-3\gamma})\) to be a noncube. Since
\[C:=\frac{\delta-\sqrt{-3\gamma}}{\delta+\sqrt{-3\gamma}}\]
is a cube if and only if
\[\frac{1}{C}=\frac{\delta+\sqrt{-3\gamma}}{\delta-\sqrt{-3\gamma}}\]
is a cube, we proceed with determining when \(C\) is a cube without loss of generality. Let \(x\in\mathbb{F}_{q^{2}}\). Then
\[C=\frac{\delta-\sqrt{-3\gamma}}{\delta+\sqrt{-3\gamma}}\cdot \frac{\delta-\sqrt{-3\gamma}}{\delta-\sqrt{-3\gamma}}=x^{3}\] \[\iff (x^{3}-1)\delta^{2}+2\sqrt{-3\gamma}\delta+(x^{3}+1)3\gamma=0\] \[\iff \delta=\sqrt{-3\gamma}\text{ or }\delta=-\left(\frac{x^{3}+1}{x ^{3}-1}\right)\sqrt{-3\gamma}.\]
If \(\delta=\sqrt{-3\gamma}\) then \(\delta^{2}=-3\gamma\). Note that
\[-\sqrt{-3\gamma}=-\sqrt{-3\sigma}\iff\gamma=\sigma\]
for \(\gamma,\sigma\in\mathbb{F}_{q^{2}}\) with \(\gamma^{(q+1)/2}=\sigma^{(q+1)/2}=3\) and that
\[\frac{x^{3}+1}{x^{3}-1}\phi=\frac{y^{3}+1}{y^{3}-1}\phi\iff x^{3}=y^{3}\]
for \(x,y,\phi\in\mathbb{F}_{q^{2}}\) with \(\phi\neq 0\). There are \((q^{2}-1)/3\) nonzero cubes in \(\mathbb{F}_{q^{2}}\). When \(x=0\), \(\delta=\sqrt{-3\gamma}\). Hence the number of pairs \((\delta,\gamma)\) that yield a reducible \(Q_{\delta,\gamma}(x)\) is
\[\left|\left\{\left(\left(\frac{x^{3}+1}{x^{3}-1}\right)\sqrt{-3 \gamma},\gamma\right):x,\gamma\in\mathbb{F}_{q^{2}}\,\middle|\,x\neq 0,\gamma^{(q+ 1)/2}=3\right\}\right|+\left|\left\{\left(\sqrt{-3\gamma},\gamma\right):\gamma \in\mathbb{F}_{q^{2}}\,\middle|\,\gamma^{(q+1)/2}=3\right\}\right|\] \[= \left(\frac{q^{2}-1}{3}\right)\left(\frac{q+1}{2}\right)+\frac{q +1}{2}\] \[= \frac{(q+1)(q^{2}+2)}{6}.\]
Since different pairs \((\delta,\gamma)\) clearly define different polynomials \(Q_{\delta,\gamma}(x)\), the number of irreducibles of the form \(Q_{\delta,\gamma}(x)\) is
\[\frac{q^{2}(q+1)}{2}-\frac{(q+1)(q^{2}+2)}{6}=\frac{(q-1)(q+1)^{2}}{3}.\]
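Irreducibility of a cubic over \(\mathbb{F}_{q^{2}}\) amounts to the absence of a root there, so this count too admits a brute-force check for small \(q\). The sketch below (our own illustration, realising \(\mathbb{F}_{25}\) as \(\mathbb{F}_{5}(t)\) with \(t^{2}=2\)) takes \(q=5\), where the constraint on \(\gamma\) reads \(\gamma^{3}=3\) and the expected count is \((q-1)(q+1)^{2}/3=48\).

```python
# Brute-force check of Lemma 7.6 for q = 5; F_25 = F_5(t) with t^2 = 2,
# an element a + b*t being stored as the pair (a, b).
q, NONSQ = 5, 2
add = lambda u, v: ((u[0] + v[0]) % q, (u[1] + v[1]) % q)
mul = lambda u, v: ((u[0]*v[0] + NONSQ*u[1]*v[1]) % q,
                    (u[0]*v[1] + u[1]*v[0]) % q)
neg = lambda u: mul((q - 1, 0), u)
field = [(i, j) for i in range(q) for j in range(q)]
inv9 = (pow(9, -1, q), 0)  # 1/9, computed in the prime field F_5

def Q(delta, gamma, x):
    # evaluate Q_{delta,gamma}(x) = x^3 - delta x^2 - gamma x + delta*gamma/9
    x2 = mul(x, x)
    val = add(mul(x, x2), neg(mul(delta, x2)))
    val = add(val, neg(mul(gamma, x)))
    return add(val, mul(mul(delta, gamma), inv9))

gammas = [g for g in field if mul(g, mul(g, g)) == (3, 0)]  # gamma^3 = 3
count = sum(1 for g in gammas for de in field
            if all(Q(de, g, x) != (0, 0) for x in field))
print(count, (q - 1) * (q + 1)**2 // 3)  # both should print 48
```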
Finally we enumerate the number of irreducible polynomials of the form \(B_{\theta}(x)\) which satisfy Condition (1).
**Lemma 7.7**.: _The number of polynomials of the form \(B_{\theta}(x)=x^{3}-\theta\) that are irreducible and satisfy Condition (1) is \(\frac{2(q^{2}-1)}{3}\) when \(q\equiv 1\mod 3\), and zero otherwise._
Proof.: By Theorem 5.3, it suffices to count the number of elements \(\theta\in\mathbb{F}_{q^{2}}\) such that \(3\) does not divide \(\frac{q^{2}-1}{o(\theta)}\). Let \(\mathbb{F}_{q^{2}}^{*}=\left\langle\sigma\right\rangle\) and suppose that \(3\mid\frac{q^{2}-1}{o(\theta)}\). Then \(\frac{q^{2}-1}{o(\theta)}=3k\) for some \(k\in\mathbb{Z}\), so \(o(\theta)=\frac{q^{2}-1}{3k}\) and thus \(\theta\in\left\langle\sigma^{3}\right\rangle\). Hence there are \(\left|\left\langle\sigma\right\rangle\right|-\left|\left\langle\sigma^{3} \right\rangle\right|=\frac{2(q^{2}-1)}{3}\) elements \(\theta\) such that \(3\nmid\frac{q^{2}-1}{o(\theta)}\).
Combining Lemmas 7.5, 7.6, and 7.7 gives us the following. This enumeration will allow us in the next section to fully count and characterise the equivalence classes.
**Corollary 7.8**.: _The total number of irreducible cubic polynomials in \(\mathbb{F}_{q^{2}}[x]\) satisfying Condition (1) is_
\[\begin{cases}\frac{q(q-1)^{2}(q+1)}{3}&\text{if }q\equiv 1\mod 3\\ \frac{q(q-1)(q+1)^{2}}{3}&\text{if }q\equiv 2\mod 3\end{cases}.\]
### Equivalence representatives
In order to calculate equivalence classes, we need to utilise the theory of _orbit polynomials_. Let \(\Psi=\begin{pmatrix}-b&-d\\ c&a\end{pmatrix}\in\operatorname{GL}(2,q^{2})\), and denote by \([\Psi]\) the corresponding element of \(\operatorname{PGL}(2,q^{2})\). Define a polynomial \(F_{\Psi}(x)\) as follows:
\[F_{\Psi}(x)=cx^{q^{2}+1}+ax^{q^{2}}+bx+d.\]
Polynomials of this form have been studied extensively, for example in [4], [14], [18].
Given \(s=[\Psi]\in\operatorname{PGL}(2,q^{2})\) as above, define \(s(x)=-\left(\frac{bx+d}{cx+a}\right)\). The _orbit polynomial_ of the group \(G\) generated by \(s\) is defined as
\[O_{G}(x)=\prod_{s\in G}(x-s(y))\in\mathbb{F}_{q^{2}}(y)[x].\]
The factorisation of polynomials of the form \(F_{\Psi}(x)\) was determined in [18] and [9].
**Theorem 7.9**.: _Let \(s=[\Psi]=\left[\begin{pmatrix}-b&-d\\ c&a\end{pmatrix}\right]\in\operatorname{PGL}(2,q^{2})\), and suppose \(s\) has order \(r\) dividing \(q^{2}+1\). The irreducible factors of the polynomial \(F_{\Psi}(x)\) of degree greater than two all have degree \(r\), each of which is a specialisation of \(O_{G}(x)\) at some \(y\)._
We consider the case \(\Psi=\begin{pmatrix}-1&-1\\ 1&0\end{pmatrix}\), whence \(F_{1}(x):=F_{\Psi}(x)=x^{q^{2}+1}+x+1\). The order of \(s=[\Psi]\) is three, and
\[O_{G}(x) =(x-y)(x-s(y))(x-s^{2}(y))\] \[=(x-y)\left(x+\frac{y+1}{y}\right)\left(x+\frac{1}{y+1}\right)\] \[=x^{3}+\left(\frac{1+3y-y^{3}}{y(y+1)}\right)x^{2}+\left(\frac{1- 3y^{2}-y^{3}}{y(y+1)}\right)x-1\] \[=P_{\delta,1}(x),\]
where \(\delta=-\frac{1+3y-y^{3}}{y(y+1)}\) (recall that the coefficient of \(x^{2}\) in \(P_{\delta,1}(x)\) is \(-\delta\)). Thus all irreducible cubic factors of \(x^{q^{2}+1}+x+1\) over \(\mathbb{F}_{q^{2}}\) are of the form \(P_{\delta,1}(x)\) for some \(\delta\), and since there are precisely two roots of \(x^{q^{2}+1}+x+1\) in \(\mathbb{F}_{q^{2}}\), we get \(\frac{q^{2}-1}{3}\) such irreducible factors. Similarly, we can calculate that all irreducible cubic factors of \(F_{2}(x):=x^{q^{2}+1}+x^{q^{2}}+1\) over \(\mathbb{F}_{q^{2}}\) are of the form \(P_{\delta,1}(x)\) for some \(\delta\). Since these polynomials cannot have any irreducible cubic factors in common, together with the count of the number of irreducibles of the form \(P_{\delta,1}(x)\) performed in Lemma 7.5, we get the following.
**Theorem 7.10**.: _Every irreducible cubic polynomial of the form \(P_{\delta,1}(x)\) is a factor of \(F_{1}(x)F_{2}(x)=(x^{q^{2}+1}+x+1)(x^{q^{2}+1}+x^{q^{2}}+1)\), and every irreducible cubic factor of \(F_{1}(x)F_{2}(x)\) is of the form \(P_{\delta,1}(x)\)._
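As a symbolic sanity check of the orbit-polynomial expansion above, one can expand \((x-y)(x-s(y))(x-s^{2}(y))\) with a computer algebra system; a short sympy sketch of our own:

```python
import sympy as sp

x, y = sp.symbols('x y')
s = lambda v: -(v + 1) / v  # the Mobius map s(x) = -((x+1)/x) induced by Psi
O = sp.expand((x - y) * (x - s(y)) * (x - s(s(y))))
for k in (3, 2, 1, 0):
    print(k, sp.cancel(O.coeff(x, k)))
# up to sympy's formatting, the coefficients are
# 1, (1 + 3y - y^3)/(y(y+1)), (1 - 3y^2 - y^3)/(y(y+1)), -1
```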
Note that if \([\Psi]\neq[\Phi]\), then \(F_{\Psi}(x)\) and \(F_{\Phi}(x)\) can have at most a quadratic factor in common. Therefore if \(P(x)\) divides \(F_{\Psi}(x)\) and \(Q(x)\) divides \(F_{\Phi}(x)\) where \(P\) and \(Q\) have degree greater than two, then \(P(x)\) and \(Q(x)\) are equivalent if and only if \(F_{\Psi}(x)\) and \(F_{\Phi}(x)\) are equivalent. Moreover, any group element mapping \(P(x)\) to \(Q(x)\) must also map \(F_{\Psi}(x)\) to \(F_{\Phi}(x)\).
The element \(\phi_{0,1}\) maps \(F_{2}(x)\) to \(F_{1}(x)\), and so every irreducible factor of \(F_{2}(x)\) is equivalent to an irreducible factor of \(F_{1}(x)\). Hence to calculate the equivalence classes amongst the polynomials of the form \(P_{\delta,1}(x)\), it suffices to calculate equivalences between the divisors of \(F_{1}(x)\) via elements of the stabiliser of \(F_{1}(x)\) in \(U\).
To this end, we now demonstrate how the action of the group \(U\) manifests on polynomials of the form \(F_{\Psi}(x)\).
**Lemma 7.11**.: _Let \(\phi=\begin{pmatrix}u^{q}&v\\ v^{q}&u\end{pmatrix}\) with \(u^{q+1}-v^{q+1}\neq 0\). Then \(F_{\Psi}^{\phi}(x)=(u^{q+1}-v^{q+1})F_{\phi^{-1}\Psi\phi}(x)\)._
Proof.: We directly compute \(F_{\Psi}^{\phi}\) as follows.
\[F_{\Psi}^{\phi}(x) =(u+v^{q}x)^{q^{2}+1}F_{\Psi}\left(\frac{v+u^{q}x}{u+v^{q}x}\right)\] \[=(cu^{2q}+(a+b)u^{q}v^{q}+dv^{2q})x^{q^{2}+1}+(au^{q+1}+cu^{q}v+ duv^{q}+bv^{q+1})x^{q^{2}}\] \[\quad+(bu^{q+1}+cu^{q}v+duv^{q}+av^{q+1})x+du^{2}+(a+b)uv+cv^{2}\] \[=(u^{q+1}-v^{q+1})F_{\phi^{-1}\Psi\phi}(x),\]
where the final equality holds since
\[\phi^{-1}\Psi\phi =\frac{1}{u^{q+1}-v^{q+1}}\begin{pmatrix}u&-v\\ -v^{q}&u^{q}\end{pmatrix}\begin{pmatrix}-b&-d\\ c&a\end{pmatrix}\begin{pmatrix}u^{q}&v\\ v^{q}&u\end{pmatrix}\] \[=\frac{1}{u^{q+1}-v^{q+1}}\begin{pmatrix}-(bu^{q+1}+cu^{q}v+duv^ {q}+av^{q+1})&-(du^{2}+(a+b)uv+cv^{2})\\ cu^{2q}+(a+b)u^{q}v^{q}+dv^{2q}&au^{q+1}+cu^{q}v+duv^{q}+bv^{q+1}\end{pmatrix}.\]
Next, we apply this to calculate the subgroup of \(U\) stabilising \(F_{1}(x)\), and hence permuting its irreducible cubic factors.
**Lemma 7.12**.: _The stabiliser of \(F_{1}(x)=x^{q^{2}+1}+x+1\) in \(U\) is_
\[\{\phi_{u,u^{q}-u}:u\in\mathbb{F}_{q^{2}}^{\times},u^{q-1}\neq(1\pm\sqrt{-3})/ 2\}.\]
Proof.: Let \(\phi=\phi_{u,v}=\begin{pmatrix}u^{q}&v\\ v^{q}&u\end{pmatrix}\) with \(u^{q+1}\neq v^{q+1}\) and let \(\lambda\in\mathbb{F}_{q^{2}}\). Then the matrix equation
\[\phi^{-1}\Psi\phi=\lambda\Psi\]
holds if and only if
\[\frac{1}{u^{q+1}-v^{q+1}}\begin{pmatrix}-(u^{q+1}+uv^{q}+u^{q}v)&-(u^{2}+uv+v ^{2})\\ (u^{2}+uv+v^{2})^{q}&v^{q+1}+uv^{q}+u^{q}v\end{pmatrix}=\lambda\begin{pmatrix} -1&-1\\ 1&0\end{pmatrix}.\]
This equality holds if and only if \(u^{q+1}\neq v^{q+1}\) and
\[v^{q+1}+uv^{q}+u^{q}v=0\] (I)
\[(u^{2}+uv+v^{2})^{q}=u^{q+1}+uv^{q}+u^{q}v=u^{2}+uv+v^{2}.\] (II)
We now show that these conditions are equivalent to \(v^{q}=u-u^{q}\). First suppose \(v^{q}=u-u^{q}\). Then equations (I) and (II) hold. Furthermore \(u^{q+1}=v^{q+1}\) if and only if \((u-u^{q})(u^{q}-u)=u^{q+1}\). Rearranging, we get \(u^{2}(u^{2(q-1)}-u^{q-1}+1)=0\), which occurs if and only if \(u=0\) or \(u^{q-1}=(1\pm\sqrt{-3})/2\).
Now suppose (I) and (II) hold. If \(v=0\), then (II) gives \(u^{q+1}=u^{2}\), so \(u=u^{q}\) and hence \(v^{q}=-v=0=u-u^{q}\). If \(v\neq 0\), we have \(u^{q}=-(u+v)v^{q-1}\) from (I). Hence
\[u^{q+1}+uv^{q}+u^{q}v=u^{2}+uv+v^{2}\iff(u^{2}+uv+v^{2})(v^{q}+v)=0\]
and
\[(u^{2}+uv+v^{2})^{q}=u^{q+1}+uv^{q}+u^{q}v=u^{2}+uv+v^{2}\iff(u^{2}+uv+v^{2})( v^{q}+v)(v^{q}-v)=0.\]
If \(0=u^{2}+uv+v^{2}=u^{q+1}+uv^{q}+u^{q}v\), then \(u^{q+1}=v^{q+1}\), which is not allowed. Thus \(v^{q}=-v\). It follows from (I) that \(v=u^{q}-u\). As before, the condition \(u^{q+1}\neq v^{q+1}\) gives that \(u\neq 0\) and \(u^{q-1}\neq(1\pm\sqrt{-3})/2\), completing the proof.
This allows us to compute the number of projective equivalence classes amongst the polynomials \(P_{\delta,1}(x)\), as well as the size of the union of these equivalence classes. As we will observe, this matches the total number of irreducible cubics satisfying Condition (1), implying that every equivalence class contains a polynomial of the form \(P_{\delta,1}(x)\).
**Theorem 7.13**.: _The number of projective equivalence classes of irreducible polynomials of the form \(P_{\delta,1}(x)\) is_
\[\left\{\begin{array}{ll}\frac{q-1}{3}&\mbox{if }q\equiv 1\mod 3,\\ \frac{q+1}{3}&\mbox{if }q\equiv 2\mod 3.\end{array}\right.\]
_Moreover the number of monic irreducible polynomials projectively equivalent to some \(P_{\delta,1}(x)\) is_
\[\left\{\begin{array}{ll}\frac{q(q-1)(q^{2}-1)}{3}&\mbox{if }q\equiv 1\mod 3, \\ \frac{q(q+1)(q^{2}-1)}{3}&\mbox{if }q\equiv 2\mod 3.\end{array}\right.\]
Proof.: Recall that in order to calculate the number of equivalence classes of polynomials of the form \(P_{\delta,1}(x)\) satisfying Condition (1), it suffices to calculate the equivalence classes amongst the divisors of \(F_{1}(x)=x^{q^{2}+1}+x+1\) under the stabiliser of \(F_{1}(x)\). As shown in Lemma 7.12, this consists of matrices of the form \(\phi_{u,u^{q}-u}\) where \(u^{2}(u^{q-1}-u^{2(q-1)}-1)\neq 0\).
There are \(q^{2}-1\) such matrices when \(q\equiv 1\mod 3\), and \((q-1)^{2}\) such matrices when \(q\equiv 2\mod 3\), \(q-1\) of which are scalar multiples of the identity. Therefore the divisors of \(F_{1}(x)\) are partitioned into equivalence classes of size \(q+1\) (resp. \(q-1\)) under this action when \(q\equiv 1\mod 3\) (resp. \(q\equiv 2\mod 3\)), and so there are \(\frac{q-1}{3}\) equivalence classes when \(q\equiv 1\mod 3\) and \(\frac{q+1}{3}\) equivalence classes when \(q\equiv 2\mod 3\).
A further application of the Orbit-Stabiliser Theorem returns the claimed number of polynomials equivalent to some \(P_{\delta,1}\).
Choosing canonical representatives for each equivalence class among the \(P_{\delta,1}\) polynomials is not straightforward. The following lemma establishes criteria for equivalence amongst polynomials of this shape.
**Lemma 7.14**.: _The polynomials \(P_{\delta,1}(x)\) and \(P_{\epsilon,1}(x)\) are projectively equivalent if and only if_
\[\epsilon\in \left\{\frac{9w(w-1)+\delta(w^{3}-3w+1)}{w^{3}-3w^{2}+1-\delta w(w -1)}:w^{q+1}=1,w\neq(1\pm\sqrt{-3})/2\right\}\] \[\cup\left\{\frac{-3(w^{3}-3w^{2}+1)-\delta(w^{3}-3w+1)}{w^{3}-3w+ 1+\delta w(w-1)}:w^{q+1}=1,w\neq(1\pm\sqrt{-3})/2\right\}.\]
Proof.: We have determined in this section that two polynomials of the form \(P_{\delta,1}(x)\) are equivalent via \(\phi_{u,u^{q}-u}\) or \(\phi_{0,1}\phi_{u,u^{q}-u}=\phi_{u^{q}-u,u}\), where \(u^{q-1}\neq(1\pm\sqrt{-3})/2\).
First let \(v=u^{q}-u\). Then by Corollary 2.5, \(P_{\delta,1}(x)\) and \(P_{\epsilon,1}(x)\) are equivalent if and only if
\[\lambda P_{\epsilon,1}(x)=(u(x+1)-u^{q}x)^{3}P_{\delta,1}\left(\frac{u^{q}(x+1 )-u}{u(x+1)-u^{q}x}\right).\]
Comparing coefficients of these polynomials yields that
\[\epsilon=\frac{9u^{q-1}(u^{q-1}-1)+\delta(u^{3(q-1)}-3u^{q-1}+1)}{u^{3(q-1)}-3 u^{2(q-1)}+1-\delta u^{q-1}(u^{q-1}-1)}.\]
Now let \(u=v^{q}-v\). Then \(P_{\delta,1}(x)\) and \(P_{\epsilon,1}(x)\) are equivalent if and only if
\[\lambda P_{\epsilon,1}(x)=(v^{q}(x+1)-v)^{3}P_{\delta,1}\left(\frac{v(x+1)-v^ {q}x}{v^{q}(x+1)-v}\right).\]
Comparing coefficients again returns
\[\epsilon=\frac{-3(v^{3(q-1)}-3v^{2(q-1)}+1)-\delta(v^{3(q-1)}-3v^{q-1}+1)}{v^{ 3(q-1)}-3v^{q-1}+1+\delta v^{q-1}(v^{q-1}-1)}.\]
Replacing \(u^{q-1}\) and \(v^{q-1}\) with \(w\) in both expressions for \(\epsilon\) gives the stated result.
We now consider the question of when \(P_{\delta,1}(x)\) is equivalent to \(P_{\delta,1}^{\sigma}(x)=P_{\delta^{q},1}(x)\). This is necessary in order to determine the equivalence classes (rather than projective equivalence classes). Furthermore this demonstrates that all of the \(2\)-spreads obtained have full automorphism group strictly larger than the group \(C\).
**Lemma 7.15**.: _Suppose \(P_{\delta,1}(x)\) and \(P_{\delta^{q},1}(x)\) are irreducible and satisfy Condition (1). Then \(P_{\delta,1}(x)\) and \(P_{\delta^{q},1}(x)\) are projectively equivalent. Hence two irreducible cubics satisfying Condition (1) are equivalent if and only if they are projectively equivalent._
Proof.: By Lemma 7.14, it suffices to show the existence of some \(w\in\mathbb{F}_{q^{2}}\) such that \(w^{q+1}=1\) and
\[\delta^{q}=\frac{-3(w^{3}-3w^{2}+1)-\delta(w^{3}-3w+1)}{w^{3}-3w+1+\delta w(w- 1)}\]
or
\[\delta^{q}=\frac{9w(w-1)+\delta(w^{3}-3w+1)}{w^{3}-3w^{2}+1-\delta w(w-1)}.\]
Suppose the latter equality holds. Then
\[(\delta-\delta^{q})w^{3}+(\delta^{q+1}+3\delta^{q}+9)w^{2}-(\delta^{q+1}+3 \delta+9)w+\delta-\delta^{q}=0.\]
If \(\delta=\delta^{q}\), then clearly \(P_{\delta,1}(x)=P_{\delta^{q},1}(x)\), and so we assume that \(\delta\neq\delta^{q}\). Then we have
\[w^{3}+\frac{\delta^{q+1}+3\delta^{q}+9}{\delta-\delta^{q}}w^{2}-\frac{\delta^{ q+1}+3\delta+9}{\delta-\delta^{q}}w+1=0.\]
The left-hand side of this equation is a cubic polynomial in \(\mathbb{F}_{q^{2}}[w]\). Denote this polynomial by \(f(w)\). Since \(w^{3q}f(w^{-q})=f(w)^{q}\), if \(\tau\) is a root of \(f(w)\) then so is \(\tau^{-q}\). Hence if \(f(w)\) is reducible, it must factorise as
\[(w-\tau)(w-\tau^{-q})(w-\nu),\]
where \(\tau\in\mathbb{F}_{q^{2}}\) and \(\nu\in\mathbb{F}_{q^{6}}\). Since \(-\nu\tau^{1-q}=1\), it follows that \(\nu=-\tau^{q-1}\in\mathbb{F}_{q^{2}}\) and so \(w=\nu\) is a solution to the equation with \(w^{q+1}=1\).
Hence it only remains to show that \(f(w)\) cannot be irreducible. We apply a change of variables, and apply Lemma 7.1. We obtain that \(f(w)\) is irreducible if and only if \(g(w)=w^{3}+sw+t\) is irreducible, where
\[s=-\frac{(\delta^{2}+3\delta+9)^{q+1}}{3(\delta-\delta^{q})^{2}}\]
and
\[t=-\frac{(\delta^{2}+3\delta+9)^{q+1}(2\delta^{q+1}+3\delta^{q}+3\delta+18)}{ 27(\delta^{q}-\delta)^{3}}=\frac{2\delta^{q+1}+3\delta^{q}+3\delta+18}{9( \delta^{q}-\delta)}s.\]
Using the same notation as Lemma 7.1, we calculate that
\[R=\frac{(\delta^{2}+3\delta+9)^{2(q+1)}}{(\delta-\delta^{q})^{4}}.\]
Setting \(\mu=\pm\sqrt{R}/9\), then
\[S=\frac{(\delta^{2}+3\delta+9)^{q+1}(2\delta^{q+1}+3(1\pm\sqrt{-3})\delta^{q} +3(1\mp\sqrt{-3})\delta+18)}{54(\delta^{q}-\delta)^{3}}.\]
If \(q\equiv 2\mod 3\) then
\[S=\left(\frac{((\delta^{2}+3\delta+9)(\delta+3(1\pm\sqrt{-3})/2))^{(q+1)/3}}{3 (\delta^{q}-\delta)}\right)^{3}.\]
If \(q\equiv 1\mod 3\) then
\[S=\left(\frac{(\delta+3(1\mp\sqrt{-3})/2)^{(2q+1)/3}(\delta+3(1\pm\sqrt{-3})/2 )^{(q+2)/3}}{3(\delta^{q}-\delta)}\right)^{3}.\]
Hence \(S\) is always a perfect cube, and so \(f(w)\) cannot be irreducible. Therefore \(P_{\delta,1}(x)\) is always equivalent to \(P_{\delta^{q},1}(x)\).
**Remark 7.16**.: Note that this implies that the full stabiliser of the 2-spread \(\ell_{\epsilon}^{C}\) in \(\Gamma\mathrm{L}(1,q^{6})\) contains elements not in \(C\), namely the map \(x\mapsto x^{q^{3}}\).
However, this does not imply that every irreducible cubic satisfying Condition (1) is equivalent to a polynomial with coefficients in \(\mathbb{F}_{q}\); in fact, counterexamples can be easily found already when \(q=5\).
Finally, we remark that it is not true that all polynomials satisfying Condition (1) are equivalent if and only if they are projectively equivalent. We have counterexamples for polynomials of degree 5; this will be the subject of future work.
We summarise this section with our main result on equivalence classes.
**Corollary 7.17**.: _Every irreducible cubic in \(\mathbb{F}_{q^{2}}[x]\) satisfying Condition (1) is equivalent to one of the form \(P_{\delta,1}\). Furthermore, the number of equivalence classes of irreducible cubics satisfying Condition (1) is_
\[\left\{\begin{array}{ll}\frac{q-1}{3}&\text{if }q\equiv 1\mod 3,\\ \frac{q+1}{3}&\text{if }q\equiv 2\mod 3.\end{array}\right.\]
Proof.: This follows immediately from Corollary 7.8, Theorem 7.13, and Lemma 7.15.
## Comparison with known results
In this section we compare our results to the constructions and partial classifications which follow from the previous work of [3] and [8].
### Results of Bartoli-Timpanella
Recall from Lemma 3.5 that \(f_{a,b}(X)=X(1+aX^{q(q-1)}+bX^{2(q-1)})\) is a permutation polynomial of \(\mathbb{F}_{q^{2}}\) if and only if \(P(x)=x^{3}+b^{-1}x+ab^{-1}\) satisfies Condition (1). In [3] the following was shown.
**Theorem 8.1** ([3], Main Theorem).: _Let \(p>3\) be a prime and \(q=p^{h}\), with \(h\geq 1\). Then \(f_{a,b}(X)\) is a permutation polynomial of \(\mathbb{F}_{q^{2}}\) if and only if either_
\[\begin{cases}a^{q}b^{q}=a(b^{q+1}-a^{q+1})\\ 1-4(ba^{-1})^{q+1}\text{ is a square in }\mathbb{F}_{q}^{*},\end{cases}\] (PP1)
_or_
\[\begin{cases}a^{q-1}+3b=0\\ -3(1-4(ba^{-1})^{q+1})\text{ is a square in }\mathbb{F}_{q}^{*}.\end{cases}\] (PP2)
We now compare the characterization of permutation polynomials of the form \(f_{a,b}(X)\) with our characterization of polynomials satisfying Condition (1). Note that \(P(x)=x^{3}+b^{-1}x+ab^{-1}\) cannot be of the form \(B_{\theta}(x)\) nor \(Q_{\delta,\gamma}(x)\). Hence if \(P(x)\) is irreducible and satisfies (PP1) or (PP2), then it must be of the form \(P_{\delta,\alpha}(x)\). Thus we must have \(\delta=0\), \(a=\alpha/3\) and \(b=-\alpha^{q-1}/3\).
With these parameters, Condition (PP1) becomes
\[\begin{cases}-\alpha/9=\alpha(1-\alpha^{q+1})/27\\ 1-4\alpha^{-(q+1)}\text{ is a square in }\mathbb{F}_{q}^{*}.\end{cases}\]
The equality holds if and only if \(\alpha^{q+1}=4\), in which case \(P_{\delta,\alpha}(x)\) is reducible, contradicting our assumptions. Hence any polynomial satisfying (PP1) must be reducible.
Under the same criteria, Condition (PP2) is now
\[\begin{cases}0=0\\ -3(1-4\alpha^{-(q+1)})\text{ is a square in }\mathbb{F}_{q}^{*}.\end{cases}\] (PP2)
Since \(\delta=0\) and \(-3(1-4\alpha^{-(q+1)})\) is a square in \(\mathbb{F}_{q}^{*}\) if and only if \(-(1-4\alpha^{-(q+1)})/3=\frac{4-\alpha^{q+1}}{3\alpha^{q+1}}\) is a square in \(\mathbb{F}_{q}^{*}\), Condition (PP2) agrees with the conditions in Theorem 6.1 for an irreducible polynomial of the form \(P_{\delta,\alpha}(x)\) to satisfy Condition (1).
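For a concrete instance, take \(q=5\) and \(\alpha=1\), so \(a=1/3=2\) and \(b=-1/3=3\) in \(\mathbb{F}_{5}\): then \(a^{q-1}+3b=1+9\equiv 0\) and \(-3(1-4(ba^{-1})^{q+1})\equiv 4\) is a nonzero square in \(\mathbb{F}_{5}\), so (PP2) holds and \(f_{a,b}\) should permute \(\mathbb{F}_{25}\). The following brute-force sketch (our own illustration, again with \(\mathbb{F}_{25}=\mathbb{F}_{5}(t)\), \(t^{2}=2\)) confirms this.

```python
# Check that f_{a,b}(X) = X(1 + a X^{q(q-1)} + b X^{2(q-1)}) permutes F_25
# for a = 2 and b = 3 (i.e., alpha = 1), which satisfy (PP2) when q = 5.
q, NONSQ = 5, 2
add = lambda u, v: ((u[0] + v[0]) % q, (u[1] + v[1]) % q)
mul = lambda u, v: ((u[0]*v[0] + NONSQ*u[1]*v[1]) % q,
                    (u[0]*v[1] + u[1]*v[0]) % q)

def power(x, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, x)
    return r

a, b = (2, 0), (3, 0)
field = [(i, j) for i in range(q) for j in range(q)]
image = {mul(x, add((1, 0), add(mul(a, power(x, q * (q - 1))),
                                mul(b, power(x, 2 * (q - 1)))))) for x in field}
print(len(image) == len(field))  # True: f_{a,b} is a permutation of F_25
```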
### Results of Feng-Lu
Recall that in [8], the polynomials
\[g_{3,\rho}(x)=x^{3}-3x+(\rho+\rho^{q}),\]
were shown to be irreducible and satisfy Condition (1) when \(\rho\) has order \(q+1\). Such a polynomial lies in \(\mathbb{F}_{q}[x]\). We now show that our classification contains examples not equivalent to any of those constructed in [8].
**Lemma 8.2**.: _Every polynomial of the form \(g_{3,\rho}(x)\) is equivalent to one of the form \(P_{\delta,1}(x)\). Not every irreducible polynomial of the form \(P_{\delta,1}(x)\) is equivalent to one of the form \(g_{3,\rho}(x)\)._
Proof.: It is immediate to verify that \(g_{3,\rho}(x)=x^{3}-3x+(\rho+\rho^{q})=P_{0,-(\rho+\rho^{q})}(x)\). From Corollary 7.17, this is equivalent to some \(P_{\delta,1}(x)\), proving the first claim.
It is straightforward to see that \(g_{3,\rho}(x)=g_{3,\rho^{q}}(x)\), and that \(g_{3,\rho}(x)\) and \(g_{3,-\rho}(x)\) are equivalent via \(\phi_{u,0}\) with \(u^{q-1}=-1\). Hence the number of equivalence classes of polynomials of the form \(g_{3,\rho}(x)\) is at most \(\frac{q+1}{4}\), and by Corollary 7.17, the second claim holds.
### Conclusion
In this paper we have fully characterised and classified cyclic \(2\)-spreads in \(V(6,q)\) up to equivalence, and hence classified a class of flag-transitive linear spaces with assumed automorphism group. The classification includes new examples.
|
2309.04903 | Beyond the mixture of generalized Pauli dephasing channels | In recent times, there has been a growing scholarly focus on investigating
the intricacies of quantum channel mixing. It has been commonly believed, based
on intuition in the literature, that every generalized Pauli channel with
dimensionality $d$ could be represented as a convex combination of $(d+1)$
generalized Pauli dephasing channels (see [Phys. Rev. A 103, 022605 (2021)] as
a reference). To our surprise, our findings indicate the inaccuracy of this
intuitive perspective. This has stimulated our interest in exploring the
properties of convex combinations of generalized Pauli channels, beyond the
restriction to just $(d+1)$ generalized Pauli dephasing channels. We
demonstrate that many previously established properties still hold within this
broader context. For instance, any mixture of invertible generalized Pauli
channels retains its invertibility. It's worth noting that this property
doesn't hold when considering the Weyl channels setting. Additionally, we
demonstrate that every Pauli channel (for the case of $d=2$) can be represented
as a mixture of $(d+1)$ Pauli dephasing channels, but this generalization
doesn't apply to higher dimensions. This highlights a fundamental distinction
between qubit and general qudit cases. In contrast to prior understanding, we
show that non-invertibility of mixed channels is not a prerequisite for the
resulting mapping to constitute a Markovian semigroup. | Mao-Sheng Li, Wen Xu, Yan-Ling Wang, Zhu-Jun Zheng | 2023-09-10T00:50:38Z | http://arxiv.org/abs/2309.04903v1 | # Beyond the mixture of generalized Pauli dephasing channels
###### Abstract
In recent times, there has been a growing scholarly focus on investigating the intricacies of quantum channel mixing. It has been commonly believed, based on intuition in the literature, that every generalized Pauli channel with dimensionality \(d\) could be represented as a convex combination of \((d+1)\) generalized Pauli dephasing channels (see [Phys. Rev. A **103**, 022605 (2021)] as a reference). To our surprise, our findings indicate the inaccuracy of this intuitive perspective. This has stimulated our interest in exploring the properties of convex combinations of generalized Pauli channels, beyond the restriction to just \((d+1)\) generalized Pauli dephasing channels. We demonstrate that many previously established properties still hold within this broader context. For instance, any mixture of invertible generalized Pauli channels retains its invertibility. It's worth noting that this property doesn't hold when considering the Weyl channels setting. Additionally, we demonstrate that every Pauli channel (for the case of \(d=2\)) can be represented as a mixture of \((d+1)\) Pauli dephasing channels, but this generalization doesn't apply to higher dimensions. This highlights a fundamental distinction between qubit and general qudit cases. In contrast to prior understanding, we show that non-invertibility of mixed channels is not a prerequisite for the resulting mapping to constitute a Markovian semigroup.
## I Introduction
The temporal evolution of a quantum system is elegantly described through a family of completely positive and trace-preserving (CPTP) mappings denoted as \(\Lambda_{t}\) (\(t\geq 0\)), satisfying the initial condition \(\Lambda_{0}=1\). These dynamical mappings are commonly referred to as quantum channels. Given an initial state \(\rho\) of the system, the state \(\rho_{t}=\Lambda_{t}(\rho)\) characterizes the evolution of \(\rho\).
In the context of a closed quantum system, the evolution of quantum states is governed by unitary transformations, which can be succinctly expressed as \(\Lambda_{t}(\rho)=U_{t}\rho U_{t}^{\dagger}\), where \(U_{t}=e^{-\mathrm{i}Ht}\) with \(H\) signifying the Hamiltonian of the enclosed system (with \(\hbar=1\) set as a convention). In contrast, when we shift our focus to open quantum systems [1; 2; 3], interactions between the system and an external environment introduce nontrivial effects, leading to phenomena such as dissipation, decay, and decoherence [4; 5; 6]. Consequently, the dynamical evolution no longer strictly follows the principles of unitarity. In such scenarios, it is customary to employ a suitable Born-Markov approximation to describe the evolution. This approximation is governed by the Markovian semigroup master equation
\[\dot{\Lambda}_{t}=\mathcal{L}\circ\Lambda_{t},\]
where \(\mathcal{L}\) is the time-independent generator given by
\[\mathcal{L}(\rho)=-\mathrm{i}[H,\rho]+\sum_{\alpha}\gamma_{\alpha}\left(V_{ \alpha}\rho V_{\alpha}^{\dagger}-\frac{1}{2}\{V_{\alpha}^{\dagger}V_{\alpha}, \rho\}\right), \tag{1}\]
with \(V_{\alpha}\) being the noise operators, and \(\gamma_{\alpha}\geq 0\) representing the local decoherence rates [7; 8]. Solving this master equation yields the dynamical map as \(\Lambda_{t}=e^{t\mathcal{L}}\), ensuring that it remains a legitimate CPTP dynamical map. A crucial feature of Markovian semigroup dynamics is that \(\Lambda_{s+t}=\Lambda_{s}\circ\Lambda_{t}\) for all \(s,t\geq 0\). Going beyond the constraints of the Markovian semigroup, we frequently encounter scenarios where the generator \(\mathcal{L}\) becomes time-dependent \(\mathcal{L}_{t}\). It has exactly the same form as (1) with time-dependent \(H(t)\), \(V_{\alpha}(t)\), and \(\gamma_{\alpha}(t)\). The formal solution for \(\Lambda_{t}\) is
\[\Lambda_{t}=\mathcal{T}\exp\left(\int_{0}^{t}\mathcal{L}_{\tau}\mathrm{d}\tau \right)\!, \tag{2}\]
where \(\mathcal{T}\) is the chronological time-ordering operator [9]. It is crucial to note that such a dynamical map is significantly more complex, and we currently lack a complete understanding of the necessary and sufficient conditions for \(\mathcal{L}_{t}\) that ensure (2) remains a CPTP map for all \(t\geq 0\). Recently, time-dependent generators \(\mathcal{L}_{t}\) have been widely employed to investigate quantum non-Markovian evolutions [4; 5; 6; 10; 11].
Dynamical maps \(\Lambda_{t}\) are said to be divisible if they can be represented as
\[\Lambda_{t}=V_{t,s}\circ\Lambda_{s}, \tag{3}\]
with \(V_{t,s}\) serving as the intermediate map for all \(t\geq s\). Furthermore, these maps are categorized as P-divisible if \(V_{t,s}\) is both positive and trace-preserving (PTP), and CP-divisible if \(V_{t,s}\) is completely positive and trace-preserving (CPTP). CP-divisibility of \(\Lambda_{t}\) is frequently employed as a criterion to define Markovianity [9].
Recently, there has been a surge in interest in exploring the properties arising from the convex combination of quantum channels [12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. Notably, Pauli channels and generalized Pauli channels, two well-established channel
types, have been extensively investigated [22; 23; 24; 25; 26; 27]. Different ways of mixing them may lead to intriguing properties of the resultant maps, such as Markovianity, Markov semigroup structure, and singularity [28]. Several interesting works [23; 27] are predicated on the intuition that every generalized Pauli channel with dimensionality \(d\) can be expressed as a convex combination of \((d+1)\) generalized Pauli dephasing channels. In this work, we challenge this intuition and demonstrate that it does not hold, motivating us to comprehensively investigate the properties arising from the convex combination of general generalized Pauli channels.
The paper is organized as follows. In Sec. II, we fix notation and introduce the definition of generalized Pauli channels. In Sec. III, we study the properties of the resultant maps of a convex combination of generalized Pauli channels, such as the Markovian semigroup property, invertibility, and subset relations. Finally, we conclude our findings in Sec. IV.
## II Preliminaries
In this paper, we adopt the following notations: the set \([n]\) denotes \(\{1,2,\cdots,n\}\) for any positive integer \(n\). The Hilbert space of dimension \(d\) is denoted as \(\mathcal{H}_{d}\). Furthermore, we use \(\mathbb{D}_{d}\) to denote the set of all density matrices associated with the system \(\mathcal{H}_{d}\), while \(\mathbb{L}_{d}\) is employed to represent the set of all linear operations acting from \(\mathcal{H}_{d}\) to itself.
A mixed unitary evolution of a qubit is precisely described by the Pauli channel expressed by
\[\Lambda_{t}^{(\mathbf{p})}[\rho]=\sum_{\alpha=0}^{3}p_{\alpha}(t)\sigma_{ \alpha}\rho\sigma_{\alpha}, \tag{4}\]
where \(\mathbf{p}=(p_{0}(t),p_{1}(t),p_{2}(t),p_{3}(t))\) and \(p_{\alpha}(t)\) denote the probability distribution with \(p_{0}(0)=1\), and \(\sigma_{\alpha}\) are the Pauli matrices. This formulation represents the most general form of a bistochastic quantum channel [29; 30].
A natural extension of Pauli channels to higher dimensions arises through the concept of mutually unbiased bases (MUBs) [31]. Let \(d\geq 2\) be an integer. Two normalized orthogonal bases \(\{|\psi_{i}\rangle\}_{i=1}^{d}\) and \(\{|\phi_{j}\rangle\}_{j=1}^{d}\) of \(\mathcal{H}_{d}\) are called mutually unbiased if \(|\langle\psi_{i}|\phi_{j}\rangle|^{2}=\frac{1}{d}\) for all \(i,j\in[d]\). Suppose that there exist \(d+1\) MUBs \(\mathcal{B}_{\alpha}=\{|\varphi_{k}^{(\alpha)}\rangle\}_{k=0}^{d-1}\), where \(\alpha=1,2,\cdots,d+1\); such a collection is known to exist whenever \(d\) is a power of a prime number [31]. For each \(\alpha\in[d+1]\), we can define a unitary operator
\[U_{\alpha}=\sum_{k=0}^{d-1}\omega_{d}^{k}|\varphi_{k}^{(\alpha)}\rangle\langle \varphi_{k}^{(\alpha)}|,\qquad\omega_{d}=e^{\frac{2\pi\mathrm{i}}{d}}. \tag{5}\]
Given a time-dependent probability distribution \(\mathbf{p}=(p_{0}(t),p_{1}(t),\cdots,p_{d+1}(t))\) with \(p_{0}(0)=1\), where each \(p_{\alpha}(t)\) is also assumed to be a continuous function on \([0,\infty)\), one defines the generalized Pauli channel as
\[\Lambda_{t}^{(\mathbf{p})}[\rho]=p_{0}(t)\rho+\frac{1}{d-1}\sum_{\alpha=1}^{d +1}p_{\alpha}(t)\mathbb{U}_{\alpha}[\rho], \tag{6}\]
where \(\mathbb{U}_{\alpha}[\rho]=\sum_{k=1}^{d-1}U_{\alpha}^{k}\rho\,(U_{\alpha}^{k})^{\dagger}.\) The dynamical map \(\Lambda_{t}^{(\mathbf{p})}\) is also assumed to satisfy the equation \(\dot{\Lambda}_{t}^{(\mathbf{p})}[\rho]=\mathcal{L}_{t}^{(\mathbf{p})}\circ \Lambda_{t}^{(\mathbf{p})}[\rho]\), where
\[\mathcal{L}_{t}^{(\mathbf{p})}[\rho]=\frac{1}{d}\sum_{\alpha=1}^{d+1}\gamma_{ \alpha}(t)\left(\mathbb{U}_{\alpha}[\rho]-(d-1)\rho\right),\]
which is called the time-local generator of \(\Lambda_{t}^{(\mathbf{p})}.\) If all \(\gamma_{\alpha}(t)\) are non-negative constants, \(\Lambda_{t}^{(\mathbf{p})}\) forms a semigroup: \(\Lambda_{s+t}^{(\mathbf{p})}=\Lambda_{s}^{(\mathbf{p})}\circ\Lambda_{t}^{( \mathbf{p})}\) for all \(s,t\geq 0\), and we call it a Markovian semigroup. We denote by \(\mathcal{P}_{d}\) the set of all generalized Pauli channels and by \(\mathcal{S}_{d}\) the set of all Markovian semigroups in \(\mathcal{P}_{d}\).
Now, the eigenvalue equations for \(\Lambda_{t}^{(\mathbf{p})}\) read \(\Lambda_{t}^{(\mathbf{p})}[\mathbb{I}_{d}]=\mathbb{I}_{d}\) and \(\Lambda_{t}^{(\mathbf{p})}[U_{\alpha}^{k}]=\lambda_{\alpha}(t)U_{\alpha}^{k}\) with
\[\lambda_{\alpha}(t)=1-\frac{d}{d-1}\left(\sum_{\beta=1}^{d+1}p_{\beta}(t)-p_ {\alpha}(t)\right) \tag{7}\]
for each \(\alpha\in[d+1]\) and \(k\in[d-1]\).
For each \(\alpha\in[d+1]\) and each differentiable function \(\pi_{\alpha}(t)\) on \([0,\infty)\) with \(0\leq\pi_{\alpha}(t)\leq 1\) and \(\pi_{\alpha}(0)=0\), we can define a probability distribution
\[\mathbf{p}_{\alpha,\pi_{\alpha}}:=(1-\pi_{\alpha}(t),0,\cdots,0,\pi_{\alpha}( t),0,\cdots,0),\]
where \(\pi_{\alpha}(t)\) is at the \((\alpha+1)\)-th coordinate. Set
\[\mathcal{D}_{d,\alpha}:=\{\Lambda_{t}^{(\mathbf{p})}\in\mathcal{P}_{d}\mid \Lambda_{t}^{(\mathbf{p})}=\Lambda_{t}^{(\mathbf{p}_{\alpha,\pi_{\alpha}})} \text{ for some }\pi_{\alpha}(t)\},\]
and we call it the \(\alpha\)-th generalized Pauli dephasing channels. And we define \(\mathcal{D}_{d}\) as the set of convex combinations of elements in \(\cup_{\alpha=1}^{d+1}\mathcal{D}_{d,\alpha}.\) Note that \(\mathcal{D}_{d,\alpha}\) is a convex set for every \(\alpha\in[d+1].\) As a consequence, every element \(\Lambda_{t}^{(\mathbf{p})}\in\mathcal{D}_{d}\) can be written as
\[\Lambda_{t}^{(\mathbf{p})}=\sum_{\alpha=1}^{d+1}x_{\alpha}\Lambda_{t}^{( \mathbf{p}_{\alpha,\pi_{\alpha}})}, \tag{8}\]
where \(x_{\alpha}\geq 0\) and \(\sum_{\alpha=1}^{d+1}x_{\alpha}=1.\) In fact, suppose that
\[\Lambda_{t}^{(\mathbf{p})}=\sum_{\alpha=1}^{d+1}\sum_{j=1}^{N_{\alpha}}x_{ \alpha,j}\Lambda_{t}^{(\mathbf{p}_{\alpha,\pi_{\alpha,j}})}, \tag{9}\]
where \(x_{\alpha,j}\geq 0\) and \(\sum_{\alpha=1}^{d+1}\sum_{j=1}^{N_{\alpha}}x_{\alpha,j}=1.\) Then we define \(x_{\alpha}=\sum_{j=1}^{N_{\alpha}}x_{\alpha,j}.\) If \(x_{\alpha}>0\), we define
\[\pi_{\alpha}(t):=\sum_{j=1}^{N_{\alpha}}\frac{x_{\alpha,j}}{x_{\alpha}}\pi_{ \alpha,j}(t),\]
otherwise, define \(\pi_{\alpha}(t):=0.\) Then expression (9) can be rewritten as Eq. (8) under these definitions.
The complete positivity conditions for \(\Lambda_{t}^{(\mathbf{p})}\) are the generalized Fujiwara-Algoet conditions [32; 33]
\[-\frac{1}{d-1}\leq\sum_{\alpha=1}^{d+1}\lambda_{\alpha}(t)\leq 1+d\min_{\alpha} \lambda_{\alpha}(t). \tag{10}\]
## III Properties of generalized Pauli channels
In this section, we first demonstrate that not every generalized Pauli channel with dimensionality \(d\) can be represented as a convex combination of \((d+1)\) generalized Pauli dephasing channels. Then, we shift our focus to investigating the properties that emerge from the convex combination of general generalized Pauli channels.
**Proposition 1**.: _Given an integer \(d\geq 2\) and a probability distribution \(\mathbf{p}=(p_{0}(t),p_{1}(t),\cdots,p_{d+1}(t)),\) the generalized Pauli channel \(\Lambda_{t}^{(\mathbf{p})}\) belongs to \(\mathcal{D}_{d}\) if and only if_
\[\sum_{\alpha=1}^{d+1}\left(\sup_{t\geq 0}p_{\alpha}(t)\right)\leq 1. \tag{11}\]
_As a consequence, the set \(\mathcal{D}_{d}\) is strictly contained in \(\mathcal{P}_{d}.\) That is, \(\mathcal{D}_{d}\subseteq\mathcal{P}_{d}\) and there exists \(\Lambda_{t}^{(\mathbf{p})}\in\mathcal{P}_{d}\) such that \(\Lambda_{t}^{(\mathbf{p})}\notin\mathcal{D}_{d}.\)_
Proof.: **Necessity.** Suppose that \(\Lambda_{t}^{(\mathbf{p})}\) belongs to \(\mathcal{D}_{d}.\) As mentioned before, there exists \(0\leq\pi_{\alpha}(t)\leq 1\) with \(\pi_{\alpha}(0)=0\) such that
\[\Lambda_{t}^{(\mathbf{p})}=\sum_{\alpha=1}^{d+1}x_{\alpha}\Lambda_{t}^{( \mathbf{p}_{\alpha},\pi_{\alpha})},\]
where \(x_{\alpha}\geq 0\) and \(\sum_{\alpha=1}^{d+1}x_{\alpha}=1.\) So by Corollary 1 in Appendix A, one should have
\[p_{\alpha}(t)=x_{\alpha}\pi_{\alpha}(t),\text{ for }\alpha\in[d+1].\]
Therefore, taking the supremum of both side at each of the above equations, one has
\[\sup_{t\geq 0}p_{\alpha}(t)\leq x_{\alpha},\text{ for }\alpha\in[d+1].\]
Taking the sum of both side, one has
\[\sum_{\alpha=1}^{d+1}\left(\sup_{t\geq 0}p_{\alpha}(t)\right)\leq\sum_{\alpha=1} ^{d+1}x_{\alpha}=1.\]
**Sufficiency.** Denote \(m_{\alpha}:=\sup_{t\geq 0}p_{\alpha}(t)\) for \(\alpha\in[d+1].\) By assumption, we have \(\sum_{\alpha=1}^{d+1}m_{\alpha}\leq 1.\) Therefore, we can let \(m_{0}\geq 0\) be the number satisfying \(m_{0}+\sum_{\alpha=1}^{d+1}m_{\alpha}=1.\) For each \(\alpha\in[d+1],\) we define
\[\pi_{\alpha}(t)=\begin{cases}\dfrac{p_{\alpha}(t)}{m_{\alpha}},&m_{\alpha}>0, \\ 0,&m_{\alpha}=0,\end{cases}\]
and \(\pi_{0}(t)=0.\) One has \(d+2\) generalized Pauli dephasing channels
\[\Lambda_{t}^{(\mathbf{p}_{1},\pi_{0})},\text{ and }\Lambda_{t}^{(\mathbf{p}_{ \alpha},\pi_{\alpha})},\forall\alpha\in[d+1].\]
It is easy to check that
\[\Lambda_{t}^{(\mathbf{p})}=m_{0}\Lambda_{t}^{(\mathbf{p}_{1},\pi_{0})}+\sum_{ \alpha=1}^{d+1}m_{\alpha}\Lambda_{t}^{(\mathbf{p}_{\alpha},\pi_{\alpha})}.\]
For the last statement of the proposition, one notes that it is easy to construct probability distribution functions for which condition (11) is violated.
Here we take \(d=2\) as an example to illustrate that \(\mathcal{D}_{2}\) is strictly contained in \(\mathcal{P}_{2}.\) Suppose that \(p_{1}(t),p_{2}(t),p_{3}(t)\) are defined as
\[p_{\alpha}(t)=\frac{1}{2(\alpha+1)}(\cos[\alpha t+\pi]+1),\alpha=1,2,3. \tag{12}\]
And set \(p_{0}(t)=1-\sum_{\alpha=1}^{3}p_{\alpha}(t)\) and
\[\mathbf{p}:=(p_{0}(t),p_{1}(t),p_{2}(t),p_{3}(t)),\]
see Fig. 1 for an intuition of the \(p_{\alpha}(t).\) As \(p_{\alpha}(t)\geq 0\) and one can check that \(\sum_{\alpha=1}^{3}p_{\alpha}(t)\leq 3/4,\) so that \(p_{0}(t)\geq 1/4>0,\) the map \(\Lambda_{t}^{(\mathbf{p})}\) forms a legitimate Pauli channel. By the first statement of Proposition 1, \(\Lambda_{t}^{(\mathbf{p})}\notin\mathcal{D}_{2}\) as the sum of the suprema of the \(p_{\alpha}(t)\) is
\[\frac{1}{2}+\frac{1}{3}+\frac{1}{4}=\frac{13}{12}>1.\]
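This supremum computation is easy to confirm numerically; a minimal sketch of our own:

```python
import numpy as np

t = np.linspace(0, 20, 200001)  # each p_alpha attains its maximum on [0, 20]
p = [(np.cos(a * t + np.pi) + 1) / (2 * (a + 1)) for a in (1, 2, 3)]
print(sum(pa.max() for pa in p))  # approximately 13/12 > 1
```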
Several works [23; 24] have discussed the inability to generate noninvertible channels through convex combinations of \((d+1)\) invertible generalized Pauli dephasing channels. Nevertheless, that result alone does not settle whether noninvertible channels can arise from convex combinations of general invertible generalized Pauli channels, which leads us to the following proposition.
**Proposition 2**.: _Any mixture of invertible generalized Pauli channels is still invertible. As a consequence, the invertible generalized Pauli channels form a convex set._
Proof.: Suppose that \(\{\Lambda_{t}^{(\mathbf{p}_{k})}\}_{k=1}^{K}\) are all invertible generalized Pauli channels and
\[\Lambda_{t}^{(\mathbf{p})}=\sum\limits_{k=1}^{K}x_{k}\Lambda_{t}^{(\mathbf{p} _{k})},\text{ where }x_{k}\geq 0\text{ and }\sum\limits_{k=1}^{K}x_{k}=1.\]
Suppose that \(\Lambda_{t}^{(\mathbf{p}_{k})}[U_{\alpha}]=\lambda_{\alpha}^{(\mathbf{p}_{k}) }(t)U_{\alpha}\) and \(\Lambda_{t}^{(\mathbf{p})}[U_{\alpha}]=\lambda_{\alpha}^{(\mathbf{p})}(t)U_{\alpha}\), then we have
\[\lambda_{\alpha}^{(\mathbf{p})}(t)=\sum\limits_{k=1}^{K}x_{k}\lambda_{\alpha}^ {(\mathbf{p}_{k})}(t). \tag{13}\]
By Eq. (7), the eigenvalues of \(\Lambda_{t}^{(\mathbf{p}_{k})}\) are
\[\lambda_{\alpha}^{(\mathbf{p}_{k})}(t)=1-\frac{d}{d-1}\left(\sum\limits_{\beta =1}^{d+1}p_{\beta}^{(\mathbf{p}_{k})}(t)-p_{\alpha}^{(\mathbf{p}_{k})}(t) \right),\]
which are real continuous functions on \([0,+\infty)\) with \(\lambda_{\alpha}^{(\mathbf{p}_{k})}(0)=1\). As \(\Lambda_{t}^{(\mathbf{p}_{k})}\) is invertible, we must have \(\lambda_{\alpha}^{(\mathbf{p}_{k})}(t)>0\) for all \(t\geq 0\) and \(\alpha\in[d+1]\); otherwise, by continuity and the intermediate value theorem, some eigenvalue would vanish at a finite time. With these relations and Eq. (13), one deduces that \(\lambda_{\alpha}^{(\mathbf{p})}(t)>0\) for all \(t\geq 0\) and \(\alpha\in[d+1]\). That is, \(\Lambda_{t}^{(\mathbf{p})}\) is invertible.
However, this property does not hold for general channels. In fact, there exist invertible channels whose mixture is non-invertible. For each \((k,l)\in\mathbb{Z}_{d}\times\mathbb{Z}_{d}\), let \(U_{kl}\) denote the unitary Weyl operator,
\[U_{kl}=\sum\limits_{m\in\mathbb{Z}_{d}}\omega_{d}^{km}|m\rangle\langle m+l|, \ \omega_{d}=e^{\frac{2\pi i}{d}}. \tag{14}\]
The time-dependent generalized Weyl channel is defined by
\[\mathcal{E}_{t}^{(\mathbf{p})}(X)=\sum\limits_{(i,j)\in\mathbb{Z}_{d}\times \mathbb{Z}_{d}}p_{ij}(t)U_{ij}XU_{ij}^{\dagger}, \tag{15}\]
where \(\sum\limits_{(i,j)\in\mathbb{Z}_{d}\times\mathbb{Z}_{d}}p_{ij}(t)=1\) and \(p_{ij}(t)\) is a time-dependent probability distribution such that \(p_{00}(0)=1\) and \(p_{ij}(0)=0\) for \((i,j)\neq(0,0)\), which guarantees that \(\mathcal{E}_{0}^{(\mathbf{p})}=1\).
**Example 1**.: _Let \(d=3\) and_
\[\begin{split} p_{1}(t)&=q_{2}(t)=\frac{1-e^{-t}}{2} ;\\ p_{2}(t)&=q_{1}(t)=\frac{1-e^{-t}}{3};\\ p_{0}(t)&=q_{0}(t)=1-p_{1}(t)-p_{2}(t).\end{split} \tag{16}\]
_Define \(\mathbf{p}=(p_{ij}(t))_{(i,j)\in\mathbb{Z}_{3}\times\mathbb{Z}_{3}}\) and \(\mathbf{q}=(q_{ij}(t))_{(i,j)\in\mathbb{Z}_{3}\times\mathbb{Z}_{3}}\), where \(p_{ij}(t)=p_{i}(t)p_{j}(t)\) and \(q_{ij}(t)=q_{i}(t)p_{j}(t)\). The dynamical maps \(\mathcal{E}_{t}^{(\mathbf{p})}\) and \(\mathcal{E}_{t}^{(\mathbf{q})}\) defined by Eq. (15) are invertible, but their mixture \(\frac{1}{2}\mathcal{E}_{t}^{(\mathbf{p})}+\frac{1}{2}\mathcal{E}_{t}^{( \mathbf{q})}\) is not._
The proof of Example 1 is given in Appendix B.
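Although the proof is deferred to Appendix B, the claim is easy to probe numerically. A short computation with the convention (14) gives \(U_{ij}U_{kl}U_{ij}^{\dagger}=\omega_{d}^{kj-il}U_{kl}\), so a Weyl channel acts diagonally on the Weyl basis with eigenvalues \(\lambda_{kl}(t)=\sum_{i,j}p_{ij}(t)\,\omega_{d}^{kj-il}\), and invertibility fails exactly when some eigenvalue vanishes. The sketch below (our own illustration) shows the mixture acquiring a vanishing eigenvalue at \(t=\ln 5\), while the eigenvalues of both constituents stay away from zero.

```python
import numpy as np

d = 3
w = np.exp(2j * np.pi / d)

def eig(pi, pj):
    # lambda_{kl} = (sum_i pi_i w^{-il}) * (sum_j pj_j w^{kj})
    return np.array([[sum(pi[i] * w**(-i * l) for i in range(d)) *
                      sum(pj[j] * w**(k * j) for j in range(d))
                      for l in range(d)] for k in range(d)])

for t in (0.5, np.log(5), 3.0):
    s = 1 - np.exp(-t)
    p = [1 - 5 * s / 6, s / 2, s / 3]   # p_0, p_1, p_2 of Eq. (16)
    qq = [1 - 5 * s / 6, s / 3, s / 2]  # q_0, q_1, q_2 of Eq. (16)
    lam_p, lam_q = eig(p, p), eig(qq, p)
    mins = [float(np.abs(m).min())
            for m in (lam_p, lam_q, (lam_p + lam_q) / 2)]
    print(f"t={t:.3f}  min|eig|: p={mins[0]:.4f}, q={mins[1]:.4f}, "
          f"mix={mins[2]:.4f}")
# the mixture's smallest eigenvalue modulus vanishes at t = ln 5, while those
# of E^(p) and E^(q) remain strictly positive for all t
```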
It is well-established that a generalized Pauli channel constitutes a Markovian semigroup if and only if its local decoherence rates remain nonnegative constants. In this context, we present a necessary and sufficient condition for a generalized Pauli channel to be a Markovian semigroup in terms of the spectrum of the channel.
**Theorem 1**.: _A generalized Pauli channel \(\Lambda_{t}^{(\mathbf{p})}\) is a Markovian semigroup if and only if its spectrum consists of \(1,e^{-c_{1}t},\cdots,e^{-c_{d+1}t}\), where the \(c_{\alpha}\)'s are nonnegative real constants satisfying_
\[\sum\limits_{\alpha=1}^{d+1}c_{\alpha}\geq d\max_{\beta}\{c_{\beta}\}.\]
Proof.: **Necessity.** If the generalized Pauli channel \(\Lambda_{t}^{(\mathbf{p})}\) is a Markovian semigroup, it is clear that its spectra are of the form
\[\lambda_{\alpha}(t)=e^{-c_{\alpha}t},\ \ c_{\alpha}\geq 0,\]
and Eq. (10) holds. Therefore, if for each \(\beta\in[d+1]\) we define
\[F_{\beta}(t):=1+de^{-c_{\beta}t}-\sum\limits_{\alpha=1}^{d+1}e^{-c_{\alpha}t},\]
we should have \(F_{\beta}(t)\geq 0\), for all \(t\geq 0\). As \(F_{\beta}(0)=0\), we must have \(F_{\beta}^{\prime}(0)\geq 0\). Otherwise, \(F_{\beta}^{\prime}(0)<0\) implies there must exist some \(t>0\) such that \(F_{\beta}(t)<0\). Therefore, we have \(F_{\beta}^{\prime}(0)=\sum_{\alpha=1}^{d+1}c_{\alpha}-dc_{\beta}\geq 0\) for each \(\beta\) which yields our conclusion.
**Sufficiency.** Without loss of generality, we assume that \(c_{1}\geq c_{2}\geq\cdots\geq c_{d+1}\). Therefore, \(F_{d+1}(t)\geq F_{d}(t)\geq\cdots\geq F_{1}(t)\). Moreover, by our given condition, we have \(\sum_{\alpha=1}^{d+1}c_{\alpha}-dc_{1}\geq 0\). Therefore,
\[F_{1}^{\prime}(t)=\sum\limits_{\alpha=1}^{d+1}c_{\alpha}e^{-c_{\alpha}t}-dc_{1} e^{-c_{1}t}\geq(\sum\limits_{\alpha=1}^{d+1}c_{\alpha}-dc_{1})e^{-c_{1}t}\geq 0.\]
Therefore, \(F_{1}(t)\) is an increasing monotone function. Hence, \(F_{1}(t)\geq F_{1}(0)=0\) for all \(t\geq 0\). From the above argument we found that \(F_{\beta}(t)\geq 0\) for all \(\beta\). By
defining \(p_{\beta}(t)=\frac{d-1}{d^{2}}F_{\beta}(t)\), one finds that \(\Lambda_{t}^{(\mathbf{p})}\) is a legitimate generalized Pauli channel with spectrum \(1,e^{-c_{1}t},e^{-c_{2}t},\cdots,e^{-c_{d+1}t}\). As the eigenvalues of \(\Lambda_{t}^{(\mathbf{p})}\) are all of the form \(e^{-ct}\) with \(c\) a nonnegative constant, it must be a Markovian semigroup.
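As a numerical illustration of Theorem 1 (a sketch of our own), one can test the nonnegativity of \(F_{\beta}(t)=1+de^{-c_{\beta}t}-\sum_{\alpha=1}^{d+1}e^{-c_{\alpha}t}\) for rates that satisfy or violate the stated inequality; for \(d=3\), the rates \((1,1,1,1)\) satisfy it, while \((4,1,1,1)\) violate it.

```python
import numpy as np

def F(c, beta, t):
    # F_beta(t) = 1 + d*exp(-c_beta t) - sum_alpha exp(-c_alpha t), d = len(c)-1
    d = len(c) - 1
    return 1 + d * np.exp(-c[beta] * t) - sum(np.exp(-ca * t) for ca in c)

t = np.linspace(0, 10, 10001)
good = [1.0, 1.0, 1.0, 1.0]  # sum = 4 >= d*max = 3: all F_beta stay >= 0
bad = [4.0, 1.0, 1.0, 1.0]   # sum = 7 <  d*max = 12: inequality violated
print(min(F(good, b, t).min() for b in range(4)))  # 0.0
print(min(F(bad, b, t).min() for b in range(4)))   # negative: some p_beta < 0
```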
With Theorem 1 at hand, we can generalize the previously known statement that any nontrivial mixture of \((d+1)\) generalized Pauli dephasing channels which are Markovian semigroups is never again a Markovian semigroup.
**Proposition 3**.: _Any nontrivial convex combination of elements in \(\mathcal{S}_{d}\) must be lying outside of \(\mathcal{S}_{d}\)._
Proof.: We will prove the statement by contradiction. Assume that \(\Lambda_{t}^{(\mathbf{p}_{k})}\), \(k=1,\cdots,K\) (where \(K\geq 2\)) are different Markovian semigroups and there exists a Markovian semigroup \(\Lambda_{t}^{(\mathbf{p})}\) and \(x_{k}>0\) with \(\sum_{k=1}^{K}x_{k}=1\) such that
\[\Lambda_{t}^{(\mathbf{p})}=\sum_{k=1}^{K}x_{k}\Lambda_{t}^{(\mathbf{p}_{k})}. \tag{17}\]
Assume that
\[\Lambda_{t}^{(\mathbf{p})}[U_{\alpha}]=e^{-c_{\alpha}^{(\mathbf{p})}t}U_{ \alpha},\ \Lambda_{t}^{(\mathbf{p}_{k})}[U_{\alpha}]=e^{-c_{\alpha}^{(\mathbf{p}_{k})}t}U _{\alpha} \tag{18}\]
for \(k=1,2,\cdots,K\) and \(\alpha=1,2,\cdots,d+1.\) By Eqs. (17) and (18), we should have
\[e^{-c_{\alpha}^{(\mathbf{p})}t}=\sum_{k=1}^{K}x_{k}e^{-c_{\alpha}^{(\mathbf{p }_{k})}t},\]
which implies that
\[1=\sum_{k=1}^{K}x_{k}e^{(c_{\alpha}^{(\mathbf{p})}-c_{\alpha}^{(\mathbf{p}_{k} )})t}. \tag{19}\]
Clearly, there is no \(k\in[K]\) such that \((c_{\alpha}^{(\mathbf{p})}-c_{\alpha}^{(\mathbf{p}_{k})})>0\). Otherwise, letting \(t\) tend to infinity, the right hand side of Eq. (19) would tend to infinity, as \(x_{k}>0\) for all \(k\in[K]\). Therefore, we always have \((c_{\alpha}^{(\mathbf{p})}-c_{\alpha}^{(\mathbf{p}_{k})})\leq 0\), which implies that
\[e^{(c_{\alpha}^{(\mathbf{p})}-c_{\alpha}^{(\mathbf{p}_{k})})t}\leq 1\]
for all \(t\geq 0\). As \(x_{k}>0\) and \(\sum_{k=1}^{K}x_{k}=1\), Eq. (19) holds only if
\[e^{(c_{\alpha}^{(\mathbf{p})}-c_{\alpha}^{(\mathbf{p}_{k})})t}=1\]
for all \(t\geq 0\). Therefore, this forces \(c_{\alpha}^{(\mathbf{p})}=c_{\alpha}^{(\mathbf{p}_{k})}\) for all \(\alpha\in[d+1]\) and \(k\in[K]\) from which one deduces that \(\Lambda_{t}^{(\mathbf{p}_{k})}=\Lambda_{t}^{(\mathbf{p})}\) for all \(k\in[K]\). Therefore, we obtain a contradiction as we have assumed that \(\Lambda_{t}^{(\mathbf{p}_{k})}\) are different.
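The mechanism behind this proof is visible already at the level of a single eigenvalue: a nontrivial convex combination of two distinct decaying exponentials never satisfies the semigroup law \(\mu(s+t)=\mu(s)\mu(t)\). A minimal numerical sketch of our own:

```python
import numpy as np

# mixed eigenvalue mu(t) = x1*exp(-c1*t) + x2*exp(-c2*t) with c1 != c2
mu = lambda t: 0.5 * np.exp(-1.0 * t) + 0.5 * np.exp(-2.0 * t)
s, t = 0.7, 1.3
print(mu(s + t), mu(s) * mu(t))  # 0.0768... vs 0.0644...: no semigroup law
```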
**Proposition 4**.: _We always have the inclusion \(\mathcal{S}_{2}\subseteq\mathcal{D}_{2}\) but \(\mathcal{S}_{d}\not\subseteq\mathcal{D}_{d}\) for all \(d\geq 3\)._
Proof.: First, we consider the qubit case, that is, \(d=2\). By the proof of Theorem 1, we know that the probability distribution functions of any Markovian semigroup \(\Lambda_{t}^{(\mathbf{p})}\) are of the form
\[p_{\beta}(t)=\frac{1}{4}(1+2e^{-c_{\beta}t}-\sum_{\alpha=1}^{3}e^{-c_{\alpha}t }).\]
Without loss of generality, we assume that \(0\leq c_{1}\leq c_{2}\leq c_{3}.\) Therefore,
\[0 \leq 1+e^{-c_{1}t}-e^{-c_{2}t}-e^{-c_{3}t}<1+e^{-c_{1}t}\leq 2,\] \[0 \leq 1+e^{-c_{2}t}-e^{-c_{3}t}-e^{-c_{1}t}\leq 1-e^{-c_{3}t}<1,\] \[0 \leq 1+e^{-c_{3}t}-e^{-c_{1}t}-e^{-c_{2}t}\leq 1-e^{-c_{1}t}<1.\]
Therefore, \(\sup_{t\geq 0}p_{1}(t)+\sup_{t\geq 0}p_{2}(t)+\sup_{t\geq 0}p_{3}(t)\leq 1\). Moreover, if we define
\[\pi_{1}(t) :=(1+e^{-c_{1}t}-e^{-c_{3}t}-e^{-c_{2}t})/2,\] \[\pi_{2}(t) :=1+e^{-c_{2}t}-e^{-c_{3}t}-e^{-c_{1}t},\] \[\pi_{3}(t) :=1+e^{-c_{3}t}-e^{-c_{1}t}-e^{-c_{2}t},\]
then one can check that
\[\Lambda_{t}^{(\mathbf{p})}=\frac{1}{2}\Lambda_{t}^{(\mathbf{p}_{1},\pi_{1})}+ \frac{1}{4}\Lambda_{t}^{(\mathbf{p}_{2},\pi_{2})}+\frac{1}{4}\Lambda_{t}^{( \mathbf{p}_{3},\pi_{3})}.\]
Now we show the surprising relation \(\mathcal{S}_{d}\not\subseteq\mathcal{D}_{d}\) for \(d\geq 3.\) In fact, by Theorem 1, we can define a family of Markovian semigroups of generalized Pauli channels \(\Lambda_{t}^{(\mathbf{p}_{c})}\) arising from the setting \(c_{1}=c\) and \(c_{2}=c_{3}=\cdots=c_{d+1}=1\); here we assume that \(0<c<1\) and \(\mathbf{p}_{c}:=(p_{0}^{(c)}(t),p_{1}^{(c)}(t),\cdots,p_{d+1}^{(c)}(t))\), where
\[p_{\beta}^{(c)}(t)=\frac{d-1}{d^{2}}(1+de^{-c_{\beta}t}-\sum_{\alpha=1}^{d+1}e^ {-c_{\alpha}t}).\]
By letting \(t\) tend to infinity, we obtain that
\[\sup_{t\geq 0}p_{\beta}^{(c)}(t)\geq\frac{d-1}{d^{2}}\]
for each \(\beta\in[d+1].\) Moreover, note that
\[p_{1}^{(c)}(t)=\frac{d-1}{d^{2}}\left(1+(d-1)e^{-ct}-de^{-t}\right).\]
Fix any \(t>\log 2\), equivalently, \(1-e^{-t}>1/2\) and let \(c\) tend to zero, we have
\[\lim_{c\to 0^{+}}p_{1}^{(c)}(t)=\frac{d-1}{d}(1-e^{-t})>\frac{d-1}{2d}\geq\frac{1} {d},\]
where the last inequality holds only if \(d\geq 3.\) Therefore, there exists a small enough \(c>0\) such that \(\sup_{t\geq 0}p_{1}^{(c)}(t)>\frac{1}{d}.\) For this \(c,\) we have
\[\sum_{\alpha=1}^{d+1}\left(\sup_{t\geq 0}p_{\alpha}^{(c)}(t)\right)>\frac{1}{d}+d \times\frac{d-1}{d^{2}}=1.\]
By Proposition 1, the Markovian semigroup \(\Lambda_{t}^{(\mathbf{p}_{c})}\) does not belong to \(\mathcal{D}_{d}.\)
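The counterexample in this proof is easy to evaluate numerically; the sketch below (our own illustration) takes \(d=3\) and \(c=0.05\) and confirms that the suprema of the \(p_{\beta}^{(c)}(t)\) sum to more than \(1\).

```python
import numpy as np

d, c = 3, 0.05
t = np.linspace(0, 300, 300001)
rates = [c] + [1.0] * d            # c_1 = c and c_2 = ... = c_{d+1} = 1
E = [np.exp(-r * t) for r in rates]
S = sum(E)
sups = [((d - 1) / d**2 * (1 + d * E[b] - S)).max() for b in range(d + 1)]
print(sum(sups))  # about 1.24 > 1: by Proposition 1 this map is not in D_d
```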
The condition \(\sum_{\alpha\neq\beta}\gamma_{\alpha}(t)\geq 0\) for all \(\beta\in[d+1]\) is a necessary and sufficient condition for a dynamical map \(\Lambda_{t}^{(\mathbf{p})}\) to be \(P\)-divisible when \(d=2\). However, this does not hold for \(d\geq 3\)[14], which provides a fundamental difference between the qubit and general qudit cases. We have shown above that \(\mathcal{S}_{2}\subseteq\mathcal{D}_{2}\) but \(\mathcal{S}_{d}\not\subseteq\mathcal{D}_{d}\) for all \(d\geq 3,\) which provides another difference between the qubit and general qudit cases. The relations among \(\mathcal{P}_{2},\mathcal{D}_{2}\) and \(\mathcal{S}_{2}\), and among \(\mathcal{P}_{d},\mathcal{D}_{d}\) and \(\mathcal{S}_{d}\) for \(d\geq 3\), are illustrated in Fig. 2.
It is interesting to note that a convex combination of Markovian semigroups in \(\mathcal{S}_{d}\) can yield a map with \((d-1)^{2}\) local decoherence rates that are permanently negative [17]. However, it is not known whether \((d-1)^{2}\) is the largest possible number of permanently negative decoherence rates. In the following, we prove that this is true in the setting of generalized Pauli channels. First, let \(\{\gamma_{\alpha}(t)\}_{\alpha\in[d+1]}\) be the local decoherence rates of the resultant map of a convex combination of Markovian semigroups in \(\mathcal{S}_{d}.\)
**Proposition 5**.: _Let \(d\geq 2\) be an integer and suppose that \(\Lambda_{t}^{(\mathbf{p})}\) is a convex combination of \(K\) Markovian semigroups \(\{\Lambda_{t}^{(\mathbf{p}_{k})}\}_{k=1}^{K}\subseteq\mathcal{S}_{d},\) that is,_
\[\Lambda_{t}^{(\mathbf{p})}=\sum_{k=1}^{K}x_{k}\Lambda_{t}^{(\mathbf{p}_{k})}, \text{ where }x_{k}\geq 0\text{ and }\sum_{k=1}^{K}x_{k}=1.\]
_If the time local generator of \(\Lambda_{t}^{(\mathbf{p})}\) is_
\[\mathcal{L}_{t}^{(\mathbf{p})}[\rho]=\frac{1}{d}\sum_{\alpha=1}^{d+1}\gamma_{ \alpha}(t)\left(\mathbb{U}_{\alpha}[\rho]-(d-1)\rho\right),\]
_then for all \(\beta\in[d+1],\) we have_
\[\sum_{\alpha\neq\beta}\gamma_{\alpha}(t)\geq 0.\]
_In particular, for \(d=2\), the mixture of Pauli channels must be \(P\)-divisible._
Proof.: The eigenvalues of \(\Lambda_{t}^{(\mathbf{p}_{k})}\) can be assumed to be \(e^{-c_{j}^{(k)}t},\) where \((j,k)\in[d+1]\times[K]\) and \(c_{j}^{(k)}\geq 0\). So the eigenvalues of \(\Lambda_{t}^{(\mathbf{p})}\) are
\[\lambda_{j}(t)=\sum_{k=1}^{K}x_{k}e^{-c_{j}^{(k)}t}>0.\]
Therefore, \(\Lambda_{t}^{(\mathbf{p})}\) is invertible and
\[\dot{\Lambda}_{t}^{(\mathbf{p})}(\Lambda_{t}^{(\mathbf{p})})^{-1}[U_{\beta}^{ \ell}]=\mathcal{L}_{t}^{(\mathbf{p})}[U_{\beta}^{\ell}].\]
The left-hand side is \(\dot{\lambda}_{\beta}(t)/\lambda_{\beta}(t)U_{\beta}^{\ell},\) while the right-hand side is \(-\sum_{\alpha\neq\beta}\gamma_{\alpha}(t)U_{\beta}^{\ell}.\) Therefore,
\[\sum_{\alpha\neq\beta}\gamma_{\alpha}(t)=-\dot{\lambda}_{\beta}(t)/\lambda_{ \beta}(t).\]
By the definition of \(\lambda_{\beta}(t),\) we have
\[\dot{\lambda}_{\beta}(t)=-\sum_{k=1}^{K}c_{\beta}^{(k)}x_{k}e^{-c_{\beta}^{( k)}t}\leq 0.\]
Therefore, \(\sum_{\alpha\neq\beta}\gamma_{\alpha}(t)\geq 0.\)
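This positivity is easy to verify numerically; the following short sketch (ours, for illustration only) samples random convex weights and rates and confirms that \(\sum_{\alpha\neq\beta}\gamma_{\alpha}(t)=-\dot{\lambda}_{\beta}(t)/\lambda_{\beta}(t)\) stays nonnegative:

```python
import numpy as np

rng = np.random.default_rng(0)
K, labels = 3, 4                                   # K semigroups, d + 1 = 4 labels
x = rng.dirichlet(np.ones(K))                      # convex weights x_k
c = rng.uniform(0.1, 2.0, (labels, K))             # rates c_beta^{(k)} >= 0
t = np.linspace(0.0, 10.0, 1001)

decay = np.exp(-c[:, :, None] * t)                 # shape (labels, K, len(t))
lam = np.einsum('k,bkt->bt', x, decay)             # lambda_beta(t)
lam_dot = np.einsum('k,bk,bkt->bt', x, -c, decay)  # d/dt lambda_beta(t)
print("min of -lam_dot/lam:", (-lam_dot / lam).min())  # nonnegative
```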
From Proposition 5, one finds that no \(d\) terms among \(\{\gamma_{\alpha}(t)\}_{\alpha\in[d+1]}\) can be simultaneously negative at any time \(t\geq 0\). As a consequence, there are at most \((d-1)^{2}\) local decoherence rates that are permanently negative. Moreover, we have the following statement.
**Proposition 6**.: _Each generalized Pauli channel in \(\mathcal{P}_{d}\) can have at most \((d-1)^{2}\) local decoherence rates that are permanently negative for all \(t\geq 0\)._
Proof.: Let \(\Lambda_{t}^{(\mathbf{p})}\in\mathcal{P}_{d}\) and suppose that the local decoherence rates are \(\{\gamma_{\alpha}(t)\}_{\alpha=1}^{d+1}\) (each \((d-1)\)-fold). It is sufficient to prove that among \(\{\gamma_{\alpha}(t)\}_{\alpha=1}^{d+1},\) at most \((d-1)\) terms can be permanently negative for all \(t\geq 0\). Suppose not; without loss of generality, we may assume that \(\{\gamma_{\alpha}(t)\}_{\alpha=2}^{d+1}\) are all permanently negative for all \(t\geq 0\). From the equation \(\dot{\Lambda}_{t}^{(\mathbf{p})}=\Lambda_{t}^{(\mathbf{p})}\circ\mathcal{L}_{t}^{(\mathbf{p})},\) one finds that
\[\dot{\lambda}_{\alpha}(t)=-\left(\sum_{\beta\neq\alpha}\gamma_{\beta}(t) \right)\lambda_{\alpha}(t),\ \ \forall\alpha\in[d+1].\]
Define \(\gamma(t)=\sum_{\beta=2}^{d+1}\gamma_{\beta}(t).\) By the assumption, we have \(\gamma(t)<0\) for all \(t\geq 0\). Therefore, solving the equation \(\dot{\lambda}_{1}(t)=-\gamma(t)\lambda_{1}(t),\ \lambda_{1}(0)=1,\) one has
\[\lambda_{1}(t)=\exp\left[-\int_{0}^{t}\gamma(\tau)\mathrm{d}\tau\right]>1, \forall t\geq 0\]
which contradicts the well-known constraint \(|\lambda_{\alpha}(t)|\leq 1\). \(\blacksquare\)
In Ref. [25], the authors demonstrated that noninvertibility is a prerequisite for generating a Markovian semigroup when mixing \((d+1)\) generalized dephasing channels. However, in the subsequent discussion, we reveal that this non-invertibility requirement can be eliminated by substituting the previous mixing channels with more general generalized Pauli channels from the set \(\mathcal{P}_{d}\). In fact, for any integer \(n\geq 2\), we can establish the existence of a Markovian semigroup that can be represented as a convex combination of \(n\) distinct invertible generalized Pauli channels within \(\mathcal{P}_{d}\).
We consider the simplest Markovian semigroup \(\Lambda_{t}^{(\mathbf{p})}\): \(c_{1}=c_{2}=\cdots=c_{d+1}=1\) and \(p_{\alpha}(t)=\frac{d-1}{d^{2}}(1-e^{-t})\) for \(\alpha\in[d+1].\) First, we consider the case \(n=2\). For each \(\alpha\in[d+1]\), we define
\[p_{\alpha}^{(1)}(t) =p_{\alpha}(t)(1-e^{-t})=\frac{d-1}{d^{2}}(1-e^{-t})^{2},\] \[p_{\alpha}^{(2)}(t) =p_{\alpha}(t)(1+e^{-t})=\frac{d-1}{d^{2}}(1-e^{-2t}).\]
Therefore, we obtain two generalized Pauli channels \(\Lambda_{t}^{(\mathbf{p}_{1})}\) and \(\Lambda_{t}^{(\mathbf{p}_{2})}\), which satisfy \(\Lambda_{t}^{(\mathbf{p})}=\frac{1}{2}\Lambda_{t}^{(\mathbf{p}_{1})}+\frac{1} {2}\Lambda_{t}^{(\mathbf{p}_{2})}.\) Moreover, the eigenvalues of \(\Lambda_{t}^{(\mathbf{p}_{1})}\) and \(\Lambda_{t}^{(\mathbf{p}_{2})}\) are
\[\lambda_{\alpha}^{(1)}(t) =1-\frac{d^{2}}{d-1}p_{\alpha}^{(1)}(t)=1-(1-e^{-t})^{2},\] \[\lambda_{\alpha}^{(2)}(t) =1-\frac{d^{2}}{d-1}p_{\alpha}^{(2)}(t)=1-(1-e^{-2t})=e^{-2t},\]
respectively, which are both greater than \(0\) for all \(t\geq 0\). Therefore, the Markovian semigroup \(\Lambda_{t}^{(\mathbf{p})}\) is a convex combination of two invertible channels. Note that \(\Lambda_{t}^{(\mathbf{p}_{2})}\) is again a Markovian semigroup.
For the case of \(n=3\), we first show that \(\Lambda_{t}^{(\mathbf{p}_{2})}\) can be decomposed into a convex combination of two invertible generalized Pauli channels. For each \(\alpha\in[d+1]\), we define
\[q_{\alpha}^{(1)}(t) =p_{\alpha}^{(2)}(t)(1-e^{-2t})=\frac{d-1}{d^{2}}(1-e^{-2t})^{2},\] \[q_{\alpha}^{(2)}(t) =p_{\alpha}^{(2)}(t)(1+e^{-2t})=\frac{d-1}{d^{2}}(1-e^{-4t}).\]
Therefore, we obtain two generalized Pauli channels \(\Lambda_{t}^{(\mathbf{q}_{1})}\) and \(\Lambda_{t}^{(\mathbf{q}_{2})}\), which satisfy \(\Lambda_{t}^{(\mathbf{p}_{2})}=\frac{1}{2}\Lambda_{t}^{(\mathbf{q}_{1})}+ \frac{1}{2}\Lambda_{t}^{(\mathbf{q}_{2})}.\) Moreover, the eigenvalues of \(\Lambda_{t}^{(\mathbf{q}_{1})}\) and \(\Lambda_{t}^{(\mathbf{q}_{2})}\) are
\[\mu_{\alpha}^{(1)}(t) =1-\frac{d^{2}}{d-1}q_{\alpha}^{(1)}(t)=1-(1-e^{-2t})^{2},\] \[\mu_{\alpha}^{(2)}(t) =1-\frac{d^{2}}{d-1}q_{\alpha}^{(2)}(t)=1-(1-e^{-4t})=e^{-4t},\]
respectively, which are both greater than \(0\) for all \(t\geq 0\). Therefore, the Markovian semigroup \(\Lambda_{t}^{(\mathbf{p}_{2})}\) is a convex combination of the two invertible channels \(\Lambda_{t}^{(\mathbf{q}_{1})}\) and \(\Lambda_{t}^{(\mathbf{q}_{2})}\). Note that \(\Lambda_{t}^{(\mathbf{q}_{2})}\) is again a Markovian semigroup. Therefore, \(\Lambda_{t}^{(\mathbf{p})}\) can be decomposed as a convex combination of three different invertible generalized Pauli channels
\[\Lambda_{t}^{(\mathbf{p})}=\frac{1}{2}\Lambda_{t}^{(\mathbf{p}_{1})}+\frac{1} {2}\Lambda_{t}^{(\mathbf{p}_{2})}=\frac{1}{2}\Lambda_{t}^{(\mathbf{p}_{1})}+ \frac{1}{4}\Lambda_{t}^{(\mathbf{q}_{1})}+\frac{1}{4}\Lambda_{t}^{(\mathbf{q}_{2 })}.\]
Following the same procedure, one finds that \(\Lambda_{t}^{(\mathbf{p})}\) can be expressed as a mixture of \(n\) distinct invertible generalized Pauli channels for any \(n\geq 2\). This finding diverges from the conventional understanding that noninvertibility is a prerequisite for generating a Markovian semigroup through the convex combination of generalized Pauli dephasing channels.
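The recursive splitting follows a simple pattern: at each step the remaining semigroup piece \(\frac{d-1}{d^{2}}(1-e^{-st})\) is split with the factors \((1\mp e^{-st})\), doubling the rate \(s\). A minimal Python sketch of this construction (our own illustration, not the authors' code) reconstructs the mixture and checks that every component is invertible:

```python
import numpy as np

d, n = 3, 5                              # dimension and number of components
t = np.linspace(0.0, 10.0, 1001)
coef = (d - 1) / d**2

weights, probs, eigs = [], [], []
rate, w = 1.0, 1.0
for _ in range(n - 1):
    x = np.exp(-rate * t)
    weights.append(w / 2.0)
    probs.append(coef * (1.0 - x)**2)    # component (d-1)/d^2 (1 - e^{-rate t})^2
    eigs.append(x * (2.0 - x))           # its eigenvalue 1 - (1 - x)^2, written stably
    rate, w = 2.0 * rate, w / 2.0
x = np.exp(-rate * t)
weights.append(w)                        # the last piece is itself a semigroup
probs.append(coef * (1.0 - x))
eigs.append(x)

mix = sum(wk * pk for wk, pk in zip(weights, probs))
print("reconstruction error:", np.abs(mix - coef * (1.0 - np.exp(-t))).max())
print("all components invertible:", all((e > 0).all() for e in eigs))
```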
## IV Conclusions
In our investigation, we delved into the intricate properties arising from the convex combination of generalized Pauli channels. Our exploration led to several noteworthy findings. First, we made a surprising discovery: certain generalized Pauli channels cannot be represented as a mere convex combination of \((d+1)\) generalized Pauli dephasing channels. This revelation challenged our intuition, which had been shaped by historical literature. Subsequently, we endeavored to broaden our understanding beyond the confines of mixing just \((d+1)\) generalized Pauli dephasing channels and examined the more general scenario of mixing arbitrary generalized Pauli channels. Remarkably, many fundamental properties remained consistent in this expanded framework. For instance, any mixture of invertible generalized Pauli channels is still invertible, and any nontrivial convex combination of generalized Pauli channels that are Markovian semigroups cannot yield a Markovian semigroup.
Moreover, we presented a necessary and sufficient condition, in terms of the spectra of the dynamical map, for a generalized Pauli channel to be a Markovian semigroup. This criterion allowed us to demonstrate that every Pauli channel (for \(d=2\)) can be expressed as a mixture of \((d+1)\) Pauli dephasing channels. However, this does not extend to higher dimensions, highlighting a crucial distinction between qubits and general qudits.
Additionally, we showed that each generalized Pauli channel of dimensionality \(d\) can exhibit at most \((d-1)^{2}\) local decoherence rates that remain permanently negative for all \(t\geq 0\). This finding sheds light on the constraints governing these channels when subjected to decoherence effects. We also discovered that a Markovian semigroup can be created by combining certain invertible generalized Pauli channels. This outcome differs from what was previously reported by Jagadish et al. [25].
It would be interesting to study the Markovian and non-Markovian properties of the resultant map under mixing of generalized Pauli channels. These intriguing questions prompt further exploration in the field. Additionally, we wonder whether it is possible to construct a Weyl channel of dimensionality \(d\) that exhibits more than \((d-1)^{2}\) local decoherence rates that remain permanently negative for all \(t\geq 0\).
## Acknowledgements
This work is supported by National Natural Science Foundation of China (12371458, 11901084), the Key Research and Development Project of Guangdong province under Grant No. 2020B0303300001, the Guangdong Basic and Applied Research Foundation under Grant No. 2023A1515012074 and 2020B1515310016, Key Lab of Guangzhou for Quantum Precision Measurement under Grant No. 202201000010, the Science and Technology Planning Project of Guangzhou under Grants No. 2023A04J1296.
## Appendix A A fundamental lemma
**Lemma 1**.: _Let \(d\geq 2\) be an integer and let \(V_{ij}\in\mathbb{L}_{d}\), \((i,j)\in[d]\times[d]\), be \(d^{2}\) linearly independent operators. For any \(\mathbf{x}:=(x_{ij})_{(i,j)\in[d]\times[d]}\) with \(x_{ij}\in\mathbb{C}\), we can define a map from \(\mathbb{L}_{d}\) to \(\mathbb{L}_{d}\) given by_
\[\Lambda^{(\mathbf{x})}[\sigma]:=\sum_{i=1}^{d}\sum_{j=1}^{d}x_{ij}V_{ij} \sigma V_{ij}^{\dagger},\ \ \forall\sigma\in\mathbb{L}_{d}.\]
_If \(\Lambda^{(\mathbf{x})}\) vanishes on all density matrices \(\mathbb{D}_{d}\), that is, \(\Lambda^{(\mathbf{x})}[\rho]=\mathbf{0}\) for all \(\rho\in\mathbb{D}_{d}\), then \(\mathbf{x}=\mathbf{0}\)._
Proof.: By the linearity of \(\Lambda^{(\mathbf{x})}\) and the well-known fact \(\mathrm{span}_{\mathbb{C}}(\mathbb{D}_{d})=\mathbb{L}_{d}\), the map \(\Lambda^{(\mathbf{x})}\) also vanishes on all matrices in \(\mathbb{L}_{d}\), that is, \(\Lambda^{(\mathbf{x})}\) is the zero map from \(\mathbb{L}_{d}\) to \(\mathbb{L}_{d}\). Denote by \(\{|e_{i}\rangle\}_{i\in[d]}\) the computational basis of \(\mathcal{H}_{d}\). Then we have
\[\mathbf{0}=\Lambda^{(\mathbf{x})}(|e_{k}\rangle\langle e_{l}|)=\sum_{(i,j)\in [d]\times[d]}x_{ij}V_{ij}|e_{k}\rangle\langle e_{l}|V_{ij}^{\dagger},\]
which is equivalent to
\[\sum_{(i,j)\in[d]\times[d]}x_{ij}V_{ij}\otimes V_{ij}^{*}|e_{k}\rangle \otimes|e_{l}\rangle=\mathbf{0}\]
for all \((k,l)\in[d]\times[d]\). Hence \(\sum_{(i,j)\in[d]\times[d]}x_{ij}V_{ij}\otimes V_{ij}^{*}\) is the zero map from \(\mathcal{H}_{d}\otimes\mathcal{H}_{d}\) to itself. By tensor-product theory and the linear independence of \(\{V_{ij}\}\), the set \(\{V_{ij}\otimes V_{kl}^{*}\}_{i,j,k,l=1}^{d}\) forms a basis of \(\mathbb{L}_{d}\otimes\mathbb{L}_{d}\); in particular, it is linearly independent, which implies that \(x_{ij}=0\) for all \((i,j)\in[d]\times[d]\).
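For a concrete check of the basis argument used above, the following sketch (ours; it takes \(d=2\) with the Pauli matrices as the linearly independent operators) verifies numerically that \(\{V_{ij}\otimes V_{kl}^{*}\}\) spans a \(16\)-dimensional space:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0 + 0j])
V = [I, X, Y, Z]                        # d^2 = 4 linearly independent operators

# columns are the vectorized operators V_a (x) V_b^*
M = np.column_stack([np.kron(a, b.conj()).ravel() for a in V for b in V])
print("rank:", np.linalg.matrix_rank(M), "of", M.shape[1])   # 16 of 16
```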
**Corollary 1**.: _Let \(d\geq 2\) be an integer and let \(\mathbf{p}=(p_{0}(t),p_{1}(t),\cdots,p_{d+1}(t))\) and \(\mathbf{q}=(q_{0}(t),q_{1}(t),\cdots,q_{d+1}(t))\) be two probability distribution functions. The two generalized Pauli channels \(\Lambda_{t}^{(\mathbf{p})}\) and \(\Lambda_{t}^{(\mathbf{q})}\) are equal if and only if \(\mathbf{p}=\mathbf{q}\)._
This could be deduced easily from Lemma 1 once one notes that the \(d^{2}\) unitary matrices \(\{\mathbb{I}_{d}\}\cup\{U_{\alpha}^{k}\}_{(\alpha,k)\in[d+1]\times[d-1]}\) (where \(U_{\alpha}\) is defined by Eq. (5)) arising from the \((d+1)\) MUBs are linearly independent.
## Appendix B Proof of Example 1
Note that for any general probability distribution \(\mathbf{p}\), the eigenvalue equations of \(\mathcal{E}_{t}^{(\mathbf{p})}\) defined by Eq. (15) always satisfy \(\mathcal{E}_{t}^{(\mathbf{p})}(U_{kl})=\lambda_{kl}(t)U_{kl},\forall(k,l)\in\mathbb{Z}_{d}\times\mathbb{Z}_{d}\), where the time-dependent eigenvalues \(\lambda_{kl}(t)\) can be expressed as
\[\lambda_{kl}(t)=\sum_{(i,j)\in\mathbb{Z}_{d}\times\mathbb{Z}_{d}}H_{ij,kl}\;p_ {ij}(t), \tag{10}\]
with \(H\) being the \(d^{2}\times d^{2}\) Hermitian complex matrix defined by \(H_{ij,kl}=\omega_{d}^{jk-il}\).
**Proof of Example 1.** Set \(\omega_{3}=e^{\frac{2\pi\mathrm{i}}{3}}\). Denote \(\mathbf{r}=\frac{1}{2}\mathbf{p}+\frac{1}{2}\mathbf{q}\), that is, the probability distribution of the channel \(\frac{1}{2}\mathcal{E}_{t}^{(\mathbf{p})}+\frac{1}{2}\mathcal{E}_{t}^{(\mathbf{q})}\). Suppose their eigenvalue equations are \(\mathcal{E}_{t}^{(\mathbf{x})}(U_{kl})=\lambda_{kl}^{(\mathbf{x})}(t)U_{kl},\forall(k,l)\in\mathbb{Z}_{3}\times\mathbb{Z}_{3}\), for \(\mathbf{x}\in\{\mathbf{p},\mathbf{q},\mathbf{r}\}\). By Eq. (10), we have
\[\lambda_{kl}^{(\mathbf{p})} =\left(\sum_{i=0}^{2}\omega^{-il}p_{i}(t)\right)\left(\sum_{j=0}^{2}\omega^{jk}p_{j}(t)\right),\] \[\lambda_{kl}^{(\mathbf{q})} =\left(\sum_{i=0}^{2}\omega^{-il}q_{i}(t)\right)\left(\sum_{j=0}^{2}\omega^{jk}q_{j}(t)\right),\] \[\lambda_{kl}^{(\mathbf{r})} =\frac{1}{2}\lambda_{kl}^{(\mathbf{p})}+\frac{1}{2}\lambda_{kl}^{(\mathbf{q})},\]
where the last identity follows from the linearity of Eq. (10) in the probability distribution.
Note that \(\sum_{i=0}^{2}\omega^{-il}p_{i}(t)=1\) for \(l=0\) and
\[\sum_{i=0}^{2}\omega^{-i}p_{i}(t)=p_{0}(t)-\frac{p_{1}(t)+p_{2}(t) }{2}+\mathrm{i}\frac{\sqrt{3}}{2}(p_{2}(t)-p_{1}(t)),\] \[\sum_{i=0}^{2}\omega^{-2i}p_{i}(t)=p_{0}(t)-\frac{p_{1}(t)+p_{2}( t)}{2}+\mathrm{i}\frac{\sqrt{3}}{2}(p_{1}(t)-p_{2}(t)),\]
which are all nonzero as \(p_{1}(t)\neq p_{2}(t)\) for all \(t>0\). Similarly, \(\sum_{i=0}^{2}\omega^{-il}q_{i}(t)\neq 0\) for all \(t>0\). Note that \(\sum_{j=0}^{2}\omega^{jk}p_{j}(t)\) is just the complex conjugate of \(\sum_{j=0}^{2}\omega^{-jk}p_{j}(t)\). Therefore, both \(\lambda_{kl}^{(\mathbf{p})}\) and \(\lambda_{kl}^{(\mathbf{q})}\) are
nonzero for all \((k,l)\in\mathbb{Z}_{3}\times\mathbb{Z}_{3}\). That is, both \(\mathcal{E}_{t}^{(\mathbf{p})}\) and \(\mathcal{E}_{t}^{(\mathbf{q})}\) are invertible. Moreover, for \((k,l)=(0,1)\), one finds that \(\lambda_{01}^{(\mathbf{r})}\) equals
\[\sum_{i=0}^{2}\omega^{-i}\;\frac{p_{i}(t)+q_{i}(t)}{2}=p_{0}(t)-\frac{p_{1}(t)+p _{2}(t)}{2}=\frac{5e^{-t}-1}{4},\]
which is equal to zero when \(t=\log 5\). Therefore, the channel \(\frac{1}{2}\mathcal{E}_{t}^{(\mathbf{p})}+\frac{1}{2}\mathcal{E}_{t}^{( \mathbf{q})}\) is non-invertible. \(\blacksquare\)
|
2301.06936 | The use of Octree in point cloud analysis with application to cultural
heritage | In this article we present the effects of our work on the subject of the
technical approach to the 3D point cloud data analysis through the use of the
Octree method to compress, analyse and compute the initial data. | Rafał Bieńkowski, Krzysztof E. Rutkowski | 2022-12-12T21:24:03Z | http://arxiv.org/abs/2301.06936v1 | # The use of Octree in point cloud analysis with application to cultural heritage
###### Abstract.
In this article we present the effects of our work on the subject of the technical approach to the 3D point cloud data analysis through the use of the Octree method to compress, analyse and compute the initial data.
Key words and phrases:octree, 3D point cloud, data classification
## 1. Introduction
3D documentation and renderings are becoming more and more ubiquitous in numerous fields, such as engineering, architecture, urban planning, large-scale landscape analysis, and cultural heritage (Art History, Archaeology, and Museum Studies). With the ongoing improvement of acquisition tools (e.g. laser scanning, photogrammetry, LiDAR) and methods of 3D model generation (3D modelling and prototyping software), the accuracy and resolution of widely available 3D data have greatly improved.
In our article, we address two aspects of handling large 3D point clouds, that is size reduction and point classification. For both of these aspects, we apply the octree approach.
The process of improving the 3D data quality follows a development similar to that of 2D images, from small bitmaps to high-resolution images. As in the 2D case, for the purposes of storage, analysis or transfer, 3D files should be reduced in size without any significant loss of quality.
For a 3D point cloud to be useful, in most applications the points need to be classified first, and in many applications a large number of points can be classified as noise. Below, we propose an approach to size reduction of 3D point clouds coming from the field of Archaeology. We focus on the detection of two types of areas present in point clouds: 1) vegetation, and 2) regions of insufficient point density to produce reliable documentation.
## 2. State of the art
Topography point cloud analysis is a time- and resource-consuming process, especially in terms of manual analysis, such as classification. There are many different methods of point cloud creation, such as laser scanning or Structure from Motion (SfM) [4]. In our experiment, we use data based on the SfM method, which collects 2D images and computes them into a 3D object.
The idea of the Octree was first published by Donald Meagher in 1980 as a method to represent and process 3D objects in computer graphics [5]. In modern scientific work, there are many publications on the application of the Octree in different fields of computer science. Below we mention just a few examples:
1. Octree Grid Topology - used to segment objects that have a known topology [2],
2. Nearest neighbour search [3],
3. Colour Quantization [6].
## 3. Problem statement
In the present contribution, we investigate the use of the octree method for the size reduction and classification of 3D point clouds. In the experiments and analysis, we use numerical data sets representing an area of cultural heritage interest (an archaeological trench, documented during ongoing fieldwork and its surroundings - topographical data). The data sets are given in the form of point clouds based on
photogrammetry. Each point in the point cloud (data set) is represented by its georeferenced position in space and its colour is given in the RGB system.
In our investigation, we use the Octree method to choose points to be merged based on a distance criterion. In the 3D Octree method, a space/object is represented by cuboids of various sizes. If points lie "close enough" within cuboids of a suitable edge length, the points are merged.
## 4 Data sets
Below we present a short description of the data sets used in the investigation. For the preliminary results, presented in Section "Numerical experiment", we used three data sets. All sets come from the photogrammetric documentation (based on image processing) of an archaeological site. Photogrammetric documentation in our data sets has been created in Agisoft Metashape based on the orthogonal photos taken from a drone.
The sets are as follows:
Set 1 - a point cloud documenting a cross-section of an archaeological trench with remains of architecture (stone walls) inside the trench. The point cloud covers an area of around 2.5 by 6 meters in plan and ca. 70-100 cm deep.
Set 2 - similar to Set 1, this set documents part of an archaeological trench, but with a strong focus on its surroundings rather than the contents of the trench. This point cloud covers an area of ca. 3.5 by 6.5 meters. During the acquisition of this data, the vegetation around the trench was also of interest, hence the vertical measurements registered in the cloud range from 2 m above ground (tree height) to 1 m of depth (inside the trench).
Set 3 - a point cloud documenting a part of the archaeological heritage site. The point cloud covers an area of around 10 by 10 meters and ca. 30 meters in height. This data set was chosen because of the high vegetation in the centre of the documented area - a large tree.
All points have location data in a georeferenced coordinate system. In our case, it is the UTM coordinate system, with data represented as latitude, longitude and elevation. The UTM zone codes for our data sets are UTM37T and UTM38T (the documented sites are located in Georgia).
## 5 Data processing
It is characteristic of geographic data to reverse the order of the first two axes: the first value given is the \(y\)-coordinate, the second is the \(x\)-coordinate, and the third is the \(z\)-coordinate. In our application, we decided to work with this geographic order of the axes and therefore used it in our algorithm.
From the data set we extract the following information:
* minimal value on y-axis of vertex,
* maximal value on y-axis of vertex,
* minimal value on x-axis of vertex,
* maximal value on x-axis of vertex,
* minimal value on z-axis of vertex,
* maximal value on z-axis of vertex.
and, due to the very small differences in the \(yx\)-positions of the points (which differ in at most the first 4 positions after the decimal point), we perform the following change of the \(yx\)-data: for the \(y\)-values and \(x\)-values we drop the digits beyond the 4th position after the decimal point and scale the result by \(10^{4}\). The value of \(z\) remains unchanged.
The selected level cuboids, which contain points, are of dimensions
\[\frac{y_{\text{max}}-y_{\text{min}}}{2^{lev}}\times\frac{x_{\text{max}}-x_{\text{min}}}{2^{lev}}\times\frac{z_{\text{max}}-z_{\text{min}}}{2^{lev}},\]
where \(lev\) represents the maximal level of division in the Octree method.
The Octree method is as follows
1. The given data are put into a cuboid of dimensions \((y_{\text{max}}-y_{\text{min}})\times(x_{\text{max}}-x_{\text{min}})\times(z_{\text{max}}-z_{\text{min}})\),
2. If the cuboid contains a vertex, split the cuboid into 8 cuboids of equal dimensions (by dividing each edge by two),
3. For each new cuboid we apply step 2, whenever the level of nesting is less than or equal to \(lev\).
The preview of this procedure is illustrated in the following graph.
The result of this procedure is a set of cuboids up to the desired level of nesting. From the cuboids of maximal depth, we extract the cuboids which contain vertices. In our approach, we have chosen to use cuboids. It is, however, possible to implement a similar method using exclusively cubes by choosing the initial data to be contained in a cube. We found that cuboids are better suited to our application: in a real-life application, the whole process benefits from the use of cuboids instead of cubes, as cuboids fit the investigated shapes better.
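A minimal Python sketch of this subdivision procedure (our own illustration; the original computations were done in MatLab with the OCTree package mentioned below) could look as follows:

```python
import numpy as np

def octree(points, lo, hi, level, max_level, leaves):
    """Recursively split the cuboid [lo, hi) and keep the occupied leaf cuboids."""
    inside = points[np.all((points >= lo) & (points < hi), axis=1)]
    if inside.size == 0:
        return
    if level == max_level:
        leaves.append((lo, hi, inside))
        return
    mid = (lo + hi) / 2.0
    for octant in range(8):                  # split into 8 cuboids of equal size
        pick = np.array([(octant >> k) & 1 for k in range(3)], dtype=bool)
        octree(inside, np.where(pick, mid, lo), np.where(pick, hi, mid),
               level + 1, max_level, leaves)

pts = np.random.default_rng(1).random((10000, 3))   # stand-in for a point cloud
leaves = []
octree(pts, pts.min(axis=0), pts.max(axis=0) + 1e-9, 0, 5, leaves)
print(len(leaves), "cuboids of level 5 contain vertices")
```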
## 6. Algorithm
The algorithm to classify the point cloud first sorts the cuboids of maximal depth which contain vertices with respect to the values \(z_{k}=z_{\text{min}}+k\,\frac{z_{\text{max}}-z_{\text{min}}}{2^{lev}}\), \(k=0,1,\ldots,2^{lev}-1\), for the \(yx\) dimensions \([y_{i},y_{i+1}]\times[x_{j},x_{j+1}]\), \(i,j\in 0,\ldots,2^{lev}-1\). Then, for each coordinate \([y_{i},y_{i+1}]\times[x_{j},x_{j+1}]\), \(i,j\in 0,\ldots,2^{lev}-1\), we find the connected cuboids of minimal height \(z\). We mark these cuboids as "surface" and mark the remaining cuboids at these coordinates as "above".
The whole data-processing workflow is illustrated in the following graph:
The algorithm is presented as follows:
```
for each cuboid of selected depth level containing vertices do
    sort the cuboid data with respect to the \(z\) variable for each \(yx\)-coordinate
end for
for each \(yx\) coordinate do
    for each following pair of cuboids in the \(yx\) coordinate do
        if the distance between the two cuboids in \(z\) is greater than \(0\) then
            mark the first cuboid as "surface"
            break the loop over the cuboids of this \(yx\) coordinate
        else
            mark the first cuboid as "surface"
        end if
    end for
end for
```
**Algorithm 1** Finding surface cuboids
The cost of the algorithm is \(O(lev^{3})\), since in the pessimistic case we need to analyse each of the \(yxz\) cuboids of the selected level depth.
Below we present the second algorithm, which classifies cuboids as "surface" or "above" and also marks the empty cuboids between "surface"/"above" or "above"/"above" pairs as "gap" cuboids.
```
for each cuboid of selected depth level containing vertices do
    sort the cuboid data with respect to the \(z\) variable for each \(yx\)-coordinate
end for
for each \(yx\) coordinate do
    for each following pair of cuboids in the \(yx\) coordinate do
        if the distance between the two cuboids in \(z\) is greater than \(0\) then
            mark the second cuboid, and the following cuboids of the \(yx\) coordinate, as "above"
            fill in the gap cuboid of coordinate \(yx\) and height \([z_{1},z_{2}]\), where \(z_{1}\) is the max height of the first cuboid of the pair and \(z_{2}\) is the min height of the second cuboid of the pair
        end if
    end for
end for
mark the cuboids of selected depth level containing vertices which are not "above" as "surface"
```
**Algorithm 2** Finding surface, above cuboids and gaps
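A simplified Python sketch of the per-column classification performed by Algorithms 1 and 2 (our own illustration; the occupied \(z\)-levels of one \(yx\)-column are represented by sorted integer indices) is given below:

```python
def classify_column(z_indices):
    """Label occupied z-levels of one yx-column as "surface"/"above"; fill "gap"s."""
    labels, on_surface = {}, True
    for prev, cur in zip(z_indices, z_indices[1:]):
        labels[prev] = "surface" if on_surface else "above"
        if cur - prev > 1:                 # a vertical gap between occupied cuboids
            on_surface = False             # everything above the gap is "above"
            for z in range(prev + 1, cur):
                labels[z] = "gap"
    labels[z_indices[-1]] = "surface" if on_surface else "above"
    return labels

print(classify_column([0, 1, 2, 5, 6]))
# {0: 'surface', 1: 'surface', 2: 'surface', 3: 'gap', 4: 'gap', 5: 'above', 6: 'above'}
```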
## 7. Numerical Experiment
We consider data from three sets, as described in Section 4. Sets \(1\), \(2\), \(3\) are made up of \(626831\), \(1219669\) and \(993802\) points respectively. The level of depth of cuboids was set to \(5\) (starting from the initial level \(0\)).
Each of the points has a representation in \(x,y,z\) values of the georeferenced coordinate system and a colour in the RGB system. The numerical experiment was performed on a computer with the following hardware parameters: processor AMD Ryzen 9 3950X 16-Core, 128 GB RAM DDR4. The software used for the calculations was MatLab 2020, with the help of the readObj package (see [1]) and the OCTree package (see [7]). The result of performing the Octree procedure on the data, selecting the most nested cuboids which contain points of the data, is displayed in the following pictures:
For the respective data sets the number of cuboids is as follows:
* For set 1 - 12297 cuboids on levels 0-5, where 5003 cuboids of level 5 contain vertices,
* For set 2 - 8281 cuboids on levels 0-5, where 3479 cuboids of level 5 contain vertices,
* For set 3 - 4792 cuboids on levels 0-5, where 1677 cuboids of level 5 contain vertices.
The results are displayed in the figures in Appendix A. |
2301.13485 | A Tropical Geometric Approach To Exceptional Points | Non-Hermitian systems have been widely explored in platforms ranging from
photonics to electric circuits. A defining feature of non-Hermitian systems is
exceptional points (EPs), where both eigenvalues and eigenvectors coalesce.
Tropical geometry is an emerging field of mathematics at the interface between
algebraic geometry and polyhedral geometry, with diverse applications to
science. Here, we introduce and develop a unified tropical geometric framework
to characterize different facets of non-Hermitian systems. We illustrate the
versatility of our approach using several examples, and demonstrate that it can
be used to select from a spectrum of higher-order EPs in gain and loss models,
predict the skin effect in the non-Hermitian Su-Schrieffer-Heeger model, and
extract universal properties in the presence of disorder in the Hatano-Nelson
model. Our work puts forth a new framework for studying non-Hermitian physics
and unveils a novel connection of tropical geometry to this field. | Ayan Banerjee, Rimika Jaiswal, Madhusudan Manjunath, Awadhesh Narayan | 2023-01-31T09:09:40Z | http://arxiv.org/abs/2301.13485v3 | # A Tropical Geometric Approach To Exceptional Points
###### Abstract
Non-Hermitian systems have been widely explored in platforms ranging from photonics to electric circuits. A defining feature of non-Hermitian systems is exceptional points (EPs), where both eigenvalues and eigenvectors coalesce. Tropical geometry is an emerging field of mathematics at the interface between algebraic geometry and polyhedral geometry, with diverse applications to science. Here, we introduce and develop a unified tropical geometric framework to characterize different facets of non-Hermitian systems. We illustrate the versatility of our approach using several examples, and demonstrate that it can be used to select from a spectrum of higher-order EPs in gain and loss models, predict the skin effect in the non-Hermitian Su-Schrieffer-Heeger model, and extract universal properties in the presence of disorder in the Hatano-Nelson model. Our work puts forth a new framework for studying non-Hermitian physics and unveils a novel connection of tropical geometry to this field.
## I Introduction
Several branches of mathematics show an _unreasonable effectiveness_ in formulating and understanding a myriad of physical phenomena [1]. Striking recent examples include the role of topology in condensed matter systems [2; 3], advent of knot theory in quantum field theory [4], and applications of graph theory in statistical mechanics [5].
Tropical geometry is a branch of modern mathematics at the interface between algebraic geometry and polyhedral geometry [6; 7]. The tropical approach has not only had applications to geometry, but also to areas such as physics, number theory, genetics, economics, optimization theory, and computational biology [8; 9; 10; 11]. Notable has been the role of tropical geometry in understanding physical systems. Deep connections of tropical geometry to string theory have been discovered [12; 13], while tropical algebra has been used to analyze frustrated systems such as spin ice and spin glasses [14]. Another recent successful application of tropical ideas has been in understanding self-organized criticality in dynamical systems [15]. Tropical geometric tools such as the logarithmic transformation offer drastic computational simplification, and, interestingly, the low-temperature limit of statistical physics can be studied in terms of such a tropical mapping [9; 16].
Hermiticity of operators is a central principle in quantum mechanics, ensuring that a system has real eigenenergies and orthogonal eigenstates, and leads to the conservation of probability [17]. In recent decades the notion of non-Hermiticity has been introduced in a variety of physical contexts [18; 19; 20]. A unique feature of non-Hermitian systems are degeneracies called _exceptional points_ (EPs), where both eigenvalues and eigenvectors coalesce [21]. The energy level-splitting, \(\Delta\lambda\), upon moving away from an EP follows a distinctive fractional dependence on the perturbation. An \(N\)-th order EP [EP-\(N\), where two or more eigenvectors (\(N\geq 2\)) coalesce] shows a splitting of the form \(\Delta\lambda\sim\nu^{1/N}\), where \(\nu\) is an external perturbation [22; 23]. Recent advances have led to controllable realization of EPs in a variety of platforms [24; 25; 26; 27; 28; 29; 30]. Their control has enabled the exploration of novel phenomena, such as uni-directional sensitivity [31; 32], laser mode selectivity [33; 34], and non-Hermitian skin effect (NHSE) [35].
In this work, we propose and develop a general tropical geometric framework for understanding and characterizing various facets of non-Hermitian systems. We demonstrate that the tropical geometric information encoded in the characteristic polynomial of the non-Hermitian Hamiltonian can be used to identify and classify EPs using _valuation_ and _tropical roots_ - concepts that naturally emerge in the tropical setting. We show that EPs of different orders and their transitions can be captured in an elegant manner by _amoeba_s and Newton polygons. We illustrate our framework using experimentally-realized gain and loss models, and show how it allows obtaining a higher-order EP or choosing from a spectrum of EPs. Using the paradigmatic non-Hermitian Su-Schrieffer-Heeger (SSH) model, we demonstrate how our tropical geometric approach can be used to predict the NHSE. Our approach naturally allows extracting the universal properties of EPs in the presence of disorder, which we highlight using the celebrated Hatano-Nelson model. Our framework allows a unified approach to different facets of non-Hermitian phenomena, including EPs, NHSE, and holonomy.
### Tropical characterization of exceptional points
**Basics of tropical geometry.** We begin by briefly summarizing the fundamental ideas of tropical geometry (see supplemental material [36] for a detailed discussion). Broadly speaking, tropical geometry studies solutions of systems of polynomials by transforming them into piecewise linear subsets of Euclidean space [37]. The basic algebraic object underlying tropical geometry is the _tropical semiring_, \((\mathbb{R}\cup\{\infty\},\oplus,\odot)\). This denotes a set that is the union of the set of real numbers \(\mathbb{R}\), together with an element "infinity", and two operations on it, namely tropical addition \(\oplus\) and tropical multiplication \(\odot\). The tropical sum of two numbers is their minimum and the tropical product is their usual sum,
\[x\oplus y=\min(x,y),\quad x\odot y=x+y. \tag{1}\]
Many of the usual axioms of arithmetic remain valid in the tropical setting. These operations satisfy all the ring axioms except for the existence of an additive inverse and thus turn \((\mathbb{R}\cup\{\infty\},\oplus,\odot)\) into a semiring.
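As a tiny illustration (ours, not from the paper), the two tropical operations can be written directly in Python:

```python
INF = float("inf")
trop_add = min                       # x (+) y = min(x, y); additive identity: INF
trop_mul = lambda x, y: x + y        # x (.) y = x + y;  multiplicative identity: 0

print(trop_add(3, 5), trop_mul(3, 5))     # 3 8
print(trop_add(7, INF), trop_mul(7, 0))   # 7 7  (the identity elements)
```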
**Defining order of exceptional points.** In the following, we define the notion of the order of an EP of a non-Hermitian system. To the best of our knowledge, this definition is consistent with the literature on this topic. Let \(H(\nu)\) be the Hamiltonian of a non-Hermitian system in one variable \(\nu\) with an EP at \(\nu=0\) and let \(p(\nu,\lambda)\in\mathbb{C}[\nu,\lambda]\) be its characteristic polynomial. Throughout, we regard \(p\) as a polynomial in the single variable \(\lambda\) with coefficients in the field \(\mathbb{C}\{\{\nu\}\}\) of Puiseux series.
**Definition.1**.: _Let \(p\in\mathbb{C}\{\{\nu\}\}[\lambda]\) have at least one non-zero root. Suppose that \(p\) has a non-trivial Puiseux series root, i.e. a root \(s\) such that the least exponent of \(s\) is non-zero. In this case, the order of this EP (at \(\nu=0\)) is the maximum absolute value of the denominator \(n\) of \(m/n\in\mathbb{Q}\) (in reduced form) where \(m/n\) varies over the least exponents in the Puiseux series expansions over all the non-trivial roots of \(p\). Otherwise, if all the roots of \(p\) have zero as their least exponent, then \(\nu=0\) is called a degenerate point._
Consider a system at an EP-\(N\) (at \(\nu=0\)) given by the Hamiltonian \(H_{0}(x_{1},x_{2},...)\) where \(x_{1},x_{2}...\) are system-dependent parameters. When we perturb this system around the EP, the eigenvalues of the perturbed Hamiltonian \(H(\nu)=H_{0}+\nu H_{1}\) follow a Puiseux series in \(\nu\),
\[\lambda(\nu)=\gamma_{1}\nu^{1/N}+\gamma_{2}\nu^{2/N}+..., \tag{2}\]
where \(\nu\) is the perturbation strength. To leading order, the response goes as \(\Delta\lambda_{EP-N}\propto\nu^{1/N}\). Our tropical geometric approach features a characterization of EPs by determining such leading order behavior.
**Characterizing exceptional points using tropical geometry.** Next, we present the tropical geometric framework that can be used to reveal the structure of EPs and characterize as well as tune them in various physical platforms.
For a field \(\mathbb{K}\), a valuation on \(\mathbb{K}\) is defined as a function \(\text{val}:\mathbb{K}\rightarrow\mathbb{R}\cup\{\infty\}\) such that:
* \(\text{val}(a)=\infty\) if and only if \(a=0\);
* \(\text{val}(ab)=\text{val}(a)+\text{val}(b)\) ;
* \(\text{val}(a+b)\geq\min\{\text{val}(a),\text{val}(b)\}\) for all \(a,b\in\mathbb{K}\).
In our framework, we primarily deal with the field of Puiseux series with coefficients in the complex numbers \(\mathbb{C}\). This field has a natural valuation which is given by taking a non-zero Puiseux series to the lowest exponent that appears in its expansion. For example, \(\text{val}(t^{2}-2t+3)=\min\{\text{val}(t^{2}),\text{val}(-2t),\text{val}(3)\}= \min\{2,1,0\}=0\) and \(\text{val}(t^{1/2}-t^{3/4}+t^{1}+t^{2}+\dots)=1/2\).
In its most basic form, tropical geometry gives a method to compute the valuations of the non-zero roots of a non-zero polynomial \(p\in\mathbb{K}[\lambda]\) in terms of the valuations of the coefficients of \(p\). More precisely, given a non-zero polynomial \(p=\sum_{i=0}^{d}a_{i}\lambda^{i}\in\mathbb{K}[\lambda]\), its tropicalization \(\text{trop}(p):\mathbb{R}\rightarrow\mathbb{R}\) is defined as \(\text{trop}(p)(\omega)=\min_{i}\{\text{val}(a_{i})+i\cdot\omega\}\).
A real number \(\omega_{0}\) is called a _tropical root_ of \(\text{trop}(p)\) if the minimum defining \(\text{trop}(p)(\omega_{0})\) is attained by at least two distinct terms \(\text{val}(a_{j})+j\cdot\omega_{0}\) and \(\text{val}(a_{k})+k\cdot\omega_{0}\) for \(j\neq k\). Equivalently, the tropical roots of \(\text{trop}(p)\) precisely
Figure 1: **Illustration of the tropical geometric framework with two site gain and loss model.** (a) Schematic of the two site gain and loss model with \(\gamma\) as the gain and loss parameter and \(\kappa\) as the coupling between the sites. (b) Using tropicalization to find the order of EP. The tropical polynomial contains different linear monomials with integer coefficients (see Eq. 5). The bend locus of the monomials (encircled in red) gives the tropical root \(\omega_{0}=1/2\) implying a second order EP.
are the real numbers where \(\operatorname{trop}(p)\) is not differentiable, called the _bend locus_ of \(\operatorname{trop}(p)\).
The _fundamental theorem of tropical geometry_ asserts that the set of tropical roots of \(\operatorname{trop}(p)\) is precisely the set of valuations of the non-zero roots of \(p\)[37, Chapter 3, Section 2]. This leads us to one of the main propositions of our framework. For a non-Hermitian Hamiltonian, \(H(\nu)\), with a characteristic polynomial \(p(\nu,\lambda)\in\mathbb{C}[\nu,\lambda]\), as described before, \(p\) can be regarded as an element in \(\mathbb{C}\{\{\nu\}\}[\lambda]\), where \(\mathbb{C}\{\{\nu\}\}\) is equipped with its standard valuation that takes a non-zero Puiseux series \(s\) to the exponent of the leading order term of \(s\).
**Proposition.2**.: _Suppose that \(\operatorname{trop}(p(\nu,\lambda))\) has a non-zero tropical root. The order of the EP at \(\nu=0\) of \(H(\nu)\) is the maximum absolute value of the denominator \(n\) of \(m/n\) (in reduced form) where \(m/n\) varies over all the non-zero tropical roots of \(\operatorname{trop}(p(\nu,\lambda))\). Otherwise, if \(\operatorname{trop}(p(\nu,\lambda))\) has no non-zero tropical roots, then \(\nu=0\) is a degenerate point._
Proof.: By the fundamental theorem of tropical geometry [37, Chapter 3, Section 2], the set of tropical roots of \(\operatorname{trop}(p)\) is precisely the set \(\{\operatorname{val}(s)\}_{s}\) where \(s\) varies over all the non-zero Puiseux series solutions of \(p(\nu,\lambda)\in\mathbb{C}\{\{\nu\}\}[\lambda]\). With this information at hand, the statement follows from the definition of the order of an EP.
To illustrate our framework in the simplest setting, we consider an experimentally realizable non-Hermitian system consisting of two coupled sites with gain and loss (see Fig. 1a). The Hamiltonian reads
\[H_{2}=\begin{pmatrix}\alpha+i\gamma&\kappa\\ \kappa&-\alpha-i\gamma\end{pmatrix}. \tag{3}\]
Here \(\alpha\) quantifies the onsite energies, \(\gamma\) is the corresponding gain/loss coefficient, and \(\kappa\) is the coupling between the sites. This system has an EP at \(\alpha=0\) if \(\gamma=\kappa\). The characteristic polynomial and the corresponding tropicalization for \(\gamma=\kappa\) are
\[p(\alpha,\lambda)=-2i\kappa\alpha-\alpha^{2}+\lambda^{2}, \tag{4}\] \[\operatorname{trop}\left(p(\alpha,\lambda)\right)(\omega)=\min \left(1,2\omega\right). \tag{5}\]
The root of the tropical polynomial is given by the bend locus of \(\operatorname{trop}(p(\alpha,\lambda))(\omega)\) which occurs at \(\omega_{0}=1/2\), as shown in Fig. 1b. Using the fundamental theorem of tropical geometry, we then conclude that \(p(\alpha,\lambda)\) has a non-zero root with valuation \(s=1/2\). This implies that the roots of \(p(\alpha,\lambda)\), i.e., the eigenvalues of \(H\) have the form \(\lambda\sim\alpha^{1/2}\) near the EP at \(\alpha=0\). Thus, the EP at \(\alpha=0\) is a second-order EP (see the supplement [36] for a detailed discussion). Further, in the supplement [36], we use tropicalization to illustrate how our framework provides a natural way to characterize and tune to higher-order EPs using companion matrices.
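This prediction is easy to confirm numerically. The following sketch (our own check, not part of the original analysis) diagonalizes \(H_{2}\) for \(\gamma=\kappa\) and shows that the level splitting divided by \(\alpha^{1/2}\) approaches a constant:

```python
import numpy as np

kappa = gamma = 1.0
for alpha in [1e-2, 1e-4, 1e-6]:
    H = np.array([[alpha + 1j * gamma, kappa],
                  [kappa, -alpha - 1j * gamma]])
    lam = np.linalg.eigvals(H)
    ratio = abs(lam[0] - lam[1]) / alpha**0.5
    print(f"alpha = {alpha:.0e}:  splitting / alpha^(1/2) = {ratio:.4f}")
# the ratio tends to 2*sqrt(2*kappa), confirming the tropical root omega_0 = 1/2
```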
**Relation to amoebas and Newton polygons.** Above, we saw how tropical geometry can be used to determine the order of EPs. A precursor to tropical geometry is a construction called the _amoeba of a complex algebraic variety_ due to Gelfand, Kapranov and Zelevinsky [38]. Let \(V\subseteq(\mathbb{C}^{*})^{n}\) be the set of solutions, all of whose coordinates are non-zero, of a finite set of Laurent polynomials in \(n\) variables. Let \(\operatorname{Log}:(\mathbb{C}^{*})^{n}\to\mathbb{R}^{n}\) be the logarithmic map that takes \((z_{1},\ldots,z_{n})\) to \((\log(|z_{1}|),\ldots,\log(|z_{n}|))\). The amoeba of \(V\) is the image of the logarithmic map restricted to \(V\). A related and important notion is the _spine of the amoeba_ and is defined as the limit as \(t\to\infty\) of the parameterized logarithmic map \(\operatorname{Log}_{t}(z_{1},\ldots,z_{n})=(\log_{t}(|z_{1}|),\ldots,\log_{t} (|z_{n}|))\). In the Methods section we present the connection of amoeba to tropicalization.
We are primarily concerned with amoebas of polynomials in two variables with complex coefficients, namely characteristic polynomials of a non-Hermitian Hamiltonian in one variable. Typical examples of such amoebas are shown in Fig. 2, which will be discussed shortly. The amoeba of a typical polynomial contains (unbounded) rays that are called its _tentacles_. We recall that the Newton polygon of \(p\) is the convex hull of the exponents of the monomials in the support of \(p\). The following proposition that relates the edges of the Newton polygon of \(p\) and the amoeba (of the algebraic variety) associated to \(p\) is of fundamental importance to our framework.
**Proposition.3**.: _The set of directions of the tentacles of the amoeba associated to \(p\) is precisely the set of outer normals of the edges of the Newton polygon of \(p\)._
Figure 2: **Characterization of exceptional points through amoebas in a three-site gain and loss model.** Realization of Newton polygon (left), amoeba (center), and the spine of the amoeba for (a) \(\phi=-\pi/6\) and (b) \(\phi=-\pi/4\). The convex slope of the Newton polygon defines the order of EPs. We obtain third-order EPs in (a) and second-order EPs in (b). The interior point in the Newton polygon in (a) results in a vacuole in the amoeba. The amoeba structures abruptly change from (a) to (b) while transitioning from third-order to second-order EPs. We set \(\gamma=\sqrt{2}\kappa\) and \(\kappa=1.0\).
We refer to Proposition 1.9 [38] and Section 1.4 [37] for a more general version of this proposition.
We illustrate this proposition using a three-site non-Hermitian trimer model with balanced gain and loss and an asymmetric onsite potential. The Hamiltonian for the trimer is
\[H_{3}=\begin{pmatrix}\alpha+i\gamma&\kappa&0\\ \kappa&0&\kappa\\ 0&\kappa&\beta-i\gamma\end{pmatrix}, \tag{6}\]
where the different symbols have meanings analogous to the two-site model. We use the transformation \(\beta=\alpha\tan\phi\) to scan all angles in the \(\alpha\)-\(\beta\) parameter plane. Using the formalism developed above, we can find the tropical roots to reveal the nature of the EPs. In Fig. 2 we illustrate the amoebas and the concomitant Newton polygons for various \(\phi\). Note that the steepest slope of the Newton polygon \(\Delta\) determines the order of the EPs. Interestingly, the interior integer points of the Newton polygon correspond to the vacuole in the amoeba (see Fig. 2a). The structure of the amoeba drastically transforms between the two cases, while the tropical roots change from \(1/2\) to \(1/3\) with a transition from second-order to third-order EPs. Therefore, the structure of the amoeba can be directly used to identify the various non-Hermitian phases.
### Tropical analysis of the Su-Schrieffer-Heeger model
Below we apply the complete tropical approach developed in this work to the paradigmatic non-Hermitian SSH Hamiltonian with non-reciprocal hopping [39; 40]. We demonstrate how our tropical geometric approach can detect the NHSE, which is a unique feature of non-Hermitian systems where a large number of states accumulate at boundaries of open systems [35; 41]. The Hamiltonian of the non-Hermitian SSH model reads
\[H_{SSH} = -\sum_{i}[t_{1}(c_{i,A}^{\dagger}c_{i,B}+h.c.)+t_{2}(c_{i+1,A}^{ \dagger}c_{i,B}+h.c.)] \tag{7}\] \[+\sum_{i}\gamma(c_{i,B}^{\dagger}c_{i,A}-c_{i,A}^{\dagger}c_{i,B}),\]
where \(c_{i,\alpha}^{\dagger}(c_{i,\alpha})\) is the fermionic creation (annihilation) operator at site \(i\) for sublattice \(\alpha=A,B\). The intra- and inter-unit cell hopping amplitudes are given by \(t_{1}\) and \(t_{2}\), respectively, and \(\gamma\) introduces a non-reciprocity only in the intra-unit cell hopping, resulting in non-Hermiticity (see Fig. 3a). We introduce a perturbation, \(\sigma t_{2}\), \(\sigma\in[0,1]\), which connects the last and first sites. The Hamiltonian takes the matrix form (\(\epsilon=\sigma t_{2}\))
\[H_{SSH}(\epsilon)=\begin{bmatrix}0&t_{1}-\gamma&0&\cdots&\epsilon\\ t_{1}+\gamma&0&t_{2}&\cdots&0\\ 0&t_{2}&0&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\end{bmatrix}_{N\times N}. \tag{8}\]
The characteristic equation, in turn, is
\[p(\epsilon,\lambda) =\text{mod}(N-1,2)\{\gamma^{2}-t_{1}^{2}\}^{\frac{N}{2}}-t_{2}^{ \frac{N-2}{2}}(t_{1}+\gamma)^{N/2}\epsilon\] \[+\sum_{M=N,N-2,\cdots}[z_{M}\{\gamma^{2}-t_{1}^{2}\}^{\frac{N-M}{ 2}}+t_{2}\mathcal{O}_{M}]\lambda^{M}, \tag{9}\]
where each \(z_{M}\) is a constant, \(z_{M}\in\mathbb{Z}\). The tropical polynomial is calculated to be
Figure 3: **Detecting skin effect in the non-Hermitian SSH model with higher-order EPs via tropical geometry** (a) Schematic of the SSH model with non-reciprocal hopping and a weak link connecting the last site to the first. The inter-unit cell hopping is a constant \(t_{2}\), but the left and right intra-unit cell hoppings are given by \(t_{1}\pm\gamma\), incorporating non-Hermiticity in the system. (b) Newton polygons and the concomitant amoebas for the SSH model with an odd number of sites. Here we choose \(t_{1}=2.0\), \(\gamma=1.0\) and \(t_{2}=1.0\). The structure is similar for all values \(t_{1}\neq\gamma\). (c) At \(t_{1}=\gamma\) the Newton polygon abruptly transforms to a single line with slope \(1/N\), indicative of a higher-order EP and the skin effect. The amoeba collapses to a single line perpendicular to the Newton polygon, characterizing the non-Hermitian phase transition. Panel (d) shows the tropicalization for the general case corresponding to Eq. 10. Each straight line represents a term in Eq. 10. (e) Tropicalization for the case of \(t_{1}=\gamma\), wherein the coefficients of all the points corresponding to \(\lambda^{M}\) vanish, other than \(M=0,N\). The bend locus gives the tropical root \(\omega_{0}=1/N\), which indicates the presence of an \(N\)-th order EP and, correspondingly, the occurrence of a non-Hermitian skin effect. Here we choose \(N=5\).
\[\mathrm{trop}\left(p(\epsilon,\lambda)\right)(\omega)=\min\{m,\cdots(N-2)\omega,N \omega\}, \tag{10}\]
where \(m=0\) (\(1\)) for an even (odd) number of sites. The tropicalization and bend locus for \(p(\epsilon,\lambda)\) are shown in Fig. 3d and 3e. Strikingly, at \(t_{1}=\gamma\) and \(t_{2}\to 0\), the coefficients of all the terms in \(p(\epsilon,\lambda)\) vanish other than those of the \(\epsilon^{1}\) and \(\lambda^{N}\) terms, which leads to the solution \(\lambda=\epsilon^{1/N}\). This fractional exponent, in turn, shows that higher-order EPs appear for \(t_{1}=\pm\gamma\), with an algebraic multiplicity that scales with system size while the geometric multiplicity remains unity. This is a signature of the NHSE, wherein all the bulk modes collapse to one state and are exponentially localized at the edge under open boundary conditions.
This physics is beautifully captured by the amoebas and their corresponding Newton polygons. In Fig. 3b and 3c, we present the Newton polygon and the associated amoeba for this model. The edges of the Newton polygon are perpendicular to the tentacles of the amoeba. The structure of the amoeba remains invariant unless \(t_{1}=\gamma\), where the amoeba and the corresponding Newton polygon strikingly collapse to a single straight line, as shown in Fig. 3c. The Newton polygon in the latter case has a slope of \(1/N\), establishing the presence of an EP-\(N\).
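A numerical check of this scaling (our own sketch; the parameter values are illustrative) fits the exponent of the largest eigenvalue magnitude against the boundary perturbation \(\epsilon\):

```python
import numpy as np

def ssh(N, t1, gamma, t2, eps):
    H = np.zeros((N, N), dtype=complex)
    for i in range(N - 1):
        t = t1 if i % 2 == 0 else t2         # alternating intra/inter-cell bonds
        g = gamma if i % 2 == 0 else 0.0     # non-reciprocity on intra bonds only
        H[i, i + 1], H[i + 1, i] = t - g, t + g
    H[0, N - 1] = eps                        # weak link from last site to first
    return H

N, t1, gamma, t2 = 5, 1.0, 1.0, 1e-8         # the t1 = gamma, t2 -> 0 regime
eps_vals = np.array([1e-3, 1e-5, 1e-7])
lam = [np.abs(np.linalg.eigvals(ssh(N, t1, gamma, t2, e))).max() for e in eps_vals]
slope = np.polyfit(np.log(eps_vals), np.log(lam), 1)[0]
print(f"fitted exponent: {slope:.3f}  (expected 1/N = {1 / N:.3f})")
```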
### Application to disorder and holonomy
Our tropical geometric framework can be used to extract universal properties of EPs even in the presence of disorder as we show next. To illustrate, we consider the celebrated Hatano-Nelson model [42] under open boundary conditions with \(N\) sites, along with upper corner perturbations, i.e., additional couplings in the \((1,j)\)-th entries of the Hamiltonian, where \(j=N,N-2\cdots\). For 4 sites, the Hamiltonian reads
\[H_{4}=\begin{pmatrix}0&\delta&\eta&\Delta\\ (2+\delta)&0&\delta&0\\ 0&(2+\delta)&0&\delta\\ 0&0&(2+\delta)&0\end{pmatrix}. \tag{11}\]
The Hamiltonian can be written as a sum of two companion matrices indicating that it features different exceptional behaviour along different sections of the parameter space. To study the structure of EPs in the parameter space of \(\delta\), \(\Delta\) and \(\eta\), we shift to generalized spherical coordinates \(\delta=r\cos\theta\cos\phi\), \(\Delta=r\cos\theta\sin\phi\), \(\eta=r\sin\theta\) and study the tropicalization of the characteristic equation \(p(r,\lambda)\). We find highly anisotropic behaviour resulting in EP-2, EP-3 or EP-4 along various directions, as summarized in Fig. 4. Please refer to the supplement for
Figure 4: **Holonomy characterization of disordered Hatano-Nelson model.** (a) Riemann surface for a quartic root in the complex plane. (b) The Hatano-Nelson model exhibits anisotropic exceptional behaviour in the parameter space as illustrated by the different projection of eigenbands along different parameter planes (the number of petals representing the order of EP) (c), (d) Swapping of eigenmodes arising from Riemann sheet exchange while tracing a loop in parameter space given by \(R=ce^{i\psi},\psi\in(0,2\pi)\). We show the holonomy properties when the loop (c) critically touches EP and (d) encloses the EP. Note that in (d) the \(N\) eigenmodes undergo a cyclic permutation among themselves while in (c) eigenmode evolution forms \(N\) petals in the complex energy plane where \(N\) is the order of EP. Tropicalization and tropical roots showing (e) fourth-\((\theta=0,\phi=\pi/4)\), (f) second- (\(\theta=0,\phi=0\)), and (g) third-order (\(\theta=\pi/4,\phi=0\)) EPs for different values of \(\phi\) and \(\theta\). The insets show holonomy characterization in the presence (brown) and absence (blue) of disorder. Disorder preserves the stability of EPs but renormalizes the spectral properties.
a more detailed analysis [36].
Here, we will use the \(N=4\) case as an example to show that the exceptional behaviour of the Hatano-Nelson model remains universal even in the presence of certain kinds of disorder, and that the disordered Hamiltonians are homotopic to each other with respect to the tropicalization. The Hamiltonian, \(H_{4}\), in the presence of a general form of scaling disorder reads
\[H_{dis}=\begin{pmatrix}0&a\delta&\eta&\Delta\\ (2+\delta)c&0&\delta b&0\\ 0&(2+\delta)d&0&\delta m\\ 0&0&(2+\delta)n&0\end{pmatrix}, \tag{12}\]
where \(a,b,c,m\) and \(n\) are arbitrary real numbers that introduce disorder in the asymmetric hopping terms. Such models are well-studied and can be experimentally realized in different physical settings [43]. The form of the characteristic equation now changes, but remarkably, its tropicalization remains the same as for \(H_{4}\).
\[\operatorname{trop}\left(p(r,\lambda)(\omega)\right)=\min(4\omega,2\omega+1, \omega+1,1), \tag{13}\]
for \(\cos\theta,\sin\theta,\cos\phi,\sin\phi\neq 0\). The tropical polynomial is invariant under the values of the disorder scaling parameters, suggesting that the exceptional behaviour remains invariant, i.e., is universal in the presence of disorder. Our framework makes this apparent through the tropicalization. A complementary view is to analyze the holonomy around the EPs. Consider varying some system parameters to form a loop in the parameter space while simultaneously tracing the evolution of the complex eigenmodes. If the loop encloses an EP-\(N\), \(N\) eigenmodes undergo a cyclic permutation, which can be understood using holonomy matrices [44; 45]. If instead the loop marginally touches an EP-\(N\), the projection of the eigenmode evolution forms \(N\) petals in the complex energy plane, as shown in Fig. 4c. We used such marginally touching loops to study the holonomy properties of \(H_{dis}\). We find that in the presence of disorder the eigenvalues get scaled; however, their holonomy properties do not change, as shown in the insets of Fig. 4e-g. As the tropicalization remains invariant, the set of disordered Hamiltonians is homotopically connected and the EPs are universal.
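The universality can also be checked numerically. The sketch below (ours; the disorder ranges are illustrative) fits the eigenvalue exponent of \(H_{dis}\) along the direction \(\theta=0,\ \phi=\pi/4\) for several random disorder realizations and recovers \(1/4\) in each case:

```python
import numpy as np

rng = np.random.default_rng(2)

def H_dis(delta, eta, Delta, s):
    a, b, c, dd, m, n = s                    # disorder scalings a, b, c, d, m, n
    return np.array([[0.0, a * delta, eta, Delta],
                     [(2 + delta) * c, 0.0, b * delta, 0.0],
                     [0.0, (2 + delta) * dd, 0.0, m * delta],
                     [0.0, 0.0, (2 + delta) * n, 0.0]], dtype=complex)

theta, phi = 0.0, np.pi / 4                  # a direction hosting a fourth-order EP
r_vals = np.array([1e-4, 1e-6, 1e-8])
for trial in range(3):
    s = rng.uniform(0.5, 2.0, 6)
    lam = [np.abs(np.linalg.eigvals(
               H_dis(r * np.cos(theta) * np.cos(phi), r * np.sin(theta),
                     r * np.cos(theta) * np.sin(phi), s))).max() for r in r_vals]
    slope = np.polyfit(np.log(r_vals), np.log(lam), 1)[0]
    print(f"trial {trial}: fitted exponent {slope:.3f} (expected 0.250)")
```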
## V Outlook
Our work opens up several avenues for exploration. While we have formulated the tropical geometric framework for a single variable, we envisage that it should be possible to generalize this to several variables - this will allow treating multiple perturbations on the same footing. It will be interesting to use our approach to classify the different non-Hermitian symmetry classes, and explore potential connections of tropical geometry to \(K\) theory [46]. Since our approach allows treating disorder in a natural way, it could be interesting to connect tropical geometry and random matrices, which have applications in many different fields of physics [47]. We also expect our analytical approach to be practically useful for tuning to EPs and identifying conditions for NHSE in various experimental arenas. Finally, we note that, very recently, amoebas have been used to determine the generalized Brillouin zone for non-Hermitian systems [48]. In summary, we have introduced and developed a new framework to characterize EPs using tropical geometry. We have illustrated its implications using paradigmatic SSH and Hatano-Nelson models. Our work, bridging the fields of tropical geometry and non-Hermitian phenomena, is particularly timely given the surge of interest in non-Hermitian systems. We hope that our findings motivate further synergy between mathematics and non-Hermitian physics.
###### Acknowledgements.
A.B. thanks the Prime Minister's Research Fellowship. R.J. thanks the Kishore Vaigyanik Protsahan Yojana fellowship. M.M. was supported by a MATRICS grant from the Department of Science and Technology (DST) India. A.N. is supported by the startup grant of the Indian Institute of Science (SG/MHRD-19-0001) and by DST-SERB (project number SRG/2020/000153). A part of this work was carried out during the program "Combinatorial Algebraic Geometry: Tropical and Real" held at the International Centre for Theoretical Sciences, Bangalore (ICTS) in June-July, 2022. We thank ICTS for their warm hospitality. We thank Vamsi P. Pingali for bringing us together on this project. We thank G. Reddy and N. Aetukuri for feedback on the manuscript. M.M. also thanks the Indian Institute of Science, Bangalore and Waseda University, Tokyo for their hospitality during visits in May, 2022 and January, 2023, respectively, in which a part of the work was carried out.
## Author contributions
A.B. and R.J. carried out the calculations in consultation with M.M. and A.N. All authors analyzed the results, developed the theory, and wrote the manuscript.
## Methods
**Fundamentals of tropical geometry.** Here we summarize some of the fundamentals of tropical geometry. The algebraic structure of tropical geometry is also known as the min-plus algebra. Many of the usual axioms of arithmetic remain valid in the tropical setting. For instance, addition and multiplication are commutative
\[x\oplus y=y\oplus x,\quad x\odot y=y\odot x. \tag{14}\]
The associative property also holds, as does the distributive law
\[x\odot(y\oplus z)=x\odot y\oplus x\odot z. \tag{15}\]
Both tropical operations have an identity element: infinity for addition and zero for multiplication.
\[x\oplus\infty=x,\quad x\odot 0=x. \tag{16}\]
A distinct feature of tropical arithmetic is the absence of a subtraction operation. On the other hand, tropical division is classical subtraction. So, \((\mathbb{R}\cup\{\infty\},\oplus,\odot)\) satisfies all the ring axioms except for the existence of additive inverses - such algebraic structures are termed _semirings_.
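As a concrete illustration (our own, not part of the original formalism), the min-plus operations and the axioms in Eqs. (14)-(16) can be checked directly in a few lines of Python:

```python
import math

INF = math.inf  # tropical additive identity

def t_add(x, y):  # tropical addition:       x (+) y = min(x, y)
    return min(x, y)

def t_mul(x, y):  # tropical multiplication: x (.) y = x + y
    return x + y

def t_div(x, y):  # tropical division is classical subtraction
    return x - y

x, y, z = 3.0, 7.0, 1.0
assert t_add(x, y) == t_add(y, x) and t_mul(x, y) == t_mul(y, x)  # Eq. (14)
assert t_mul(x, t_add(y, z)) == t_add(t_mul(x, y), t_mul(x, z))   # Eq. (15)
assert t_add(x, INF) == x and t_mul(x, 0.0) == x                  # Eq. (16)
print("tropical semiring axioms hold on this sample")
```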
**Newton polygon formalism.** We briefly discuss the Newton polygon formalism which is dual to amoebas. Let us start with the Puiseux series solution of the equation \(f(x,y)=0\) in a suitable neighbourhood of the origin (in our case an EP (\(\nu=0\))). Any polynomial \(f(x,y)\) with the form
\[f(x,y)=\sum_{\eta,\zeta}a_{\eta\zeta}x^{\eta}y^{\zeta}, \tag{17}\]
admits a solution \(y=tx^{\mu}\), where \(t\) is a complex number and \(\mu=p/q\) is a positive rational number. One can find a solution by substituting \(y=tx^{\mu}\) in Eq. 17, to obtain
\[f(x,tx^{\mu})=\sum_{\eta,\zeta}a_{\eta\zeta}x^{\eta+\mu\zeta}t^{\zeta}=x^{ \xi}\sum_{\eta,\zeta}a_{\eta\zeta}t^{\zeta}. \tag{18}\]
The above equation imposes the constraint that \(f(x,y)\) contains only monomials \(x^{\eta}y^{\zeta}\) for which \(\eta+\mu\zeta=\xi\), which is the essential feature of the Newton polygon. The geometric interpretation of the Newton polygon is embedded in the following mapping. Each monomial \(x^{\eta}y^{\zeta}\) maps to the pair \((\eta,\zeta)\) of natural numbers, comprising a set of \(\mathbb{N}^{2}\) lattice points with integer coordinates for non-zero coefficients \(a_{\eta\zeta}\neq 0\). This set of lattice points forms the carrier \(\Delta(f)\) of \(f\), thus
\[\Delta(f)=\{(\eta,\zeta)\in\mathbb{N}^{2}|a_{\eta\zeta}\neq 0\}. \tag{19}\]
For a convergent power series \(f(x,y)\) with a carrier \(\Delta\left(f\right)\), one can define a convex hull from the points of the carrier \(\Delta\left(f\right)\). The boundary of the convex hull, delineating a compact polygonal path, gives the Newton polygon of \(f\). The steepest segment of the Newton polygon gives the lowest order term of the Puiseux series solution, thus defining the order of the EP [49; 50]. More concretely, the condition \(\eta+\mu\zeta=\xi\) for all \((\eta,\zeta)\in\Delta(f)\) indicates that all points of \(\Delta(f)\) lie on a line with slope \(-\frac{1}{\mu}\), and the line meets the \(\eta\)-axis at \(\eta=\xi\).
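The Newton-polygon slopes can equivalently be found from the dual tropical criterion: \(\mu\) is a Puiseux exponent exactly when the minimum of \(\eta+\mu\zeta\) over the carrier is attained by at least two monomials. A short sketch (the carrier below is read off from the terms of Eq. (13) and is our illustrative choice):

```python
# Carrier points (eta, zeta) = (perturbation power, lambda power) for the
# monomials behind Eq. (13): lambda^4, eps*lambda^2, eps*lambda, eps.
carrier = [(0, 4), (1, 2), (1, 1), (1, 0)]

def minimizers(mu, tol=1e-12):
    vals = [eta + mu * zeta for eta, zeta in carrier]
    m = min(vals)
    return [p for p, v in zip(carrier, vals) if abs(v - m) < tol]

mus = [i / 10000.0 for i in range(1, 20000)]
exponents = sorted({round(mu, 4) for mu in mus if len(minimizers(mu)) >= 2})
print("Puiseux exponents mu:", exponents)
# Output: [0.25] -- the hull segment from (1, 0) to (0, 4) has slope -4 = -1/mu,
# so the Puiseux solution starts as y ~ t x^(1/4), consistent with an EP-4.
```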
**Amoeba and tropicalization.** We next present the connection between amoeba and tropicalization as used in the main text. The absolute value \(|.|\) over the complex numbers satisfies the archimedean property [51; Chapter 9, page 313]. Any field \(F\) has an absolute value \(|.|_{t}\) that is non-archimedean, i.e., does not satisfy the archimedean property: \(|0_{F}|_{t}=0\) and \(|c|_{t}=1\) for all \(c\neq 0_{F}\) in \(F\), where \(0_{F}\) is the additive identity of \(F\). This is usually called the _trivial absolute value_ on \(F\). Otherwise, the non-archimedean absolute value is called _non-trivial_. Fields such as the rational numbers \(\mathbb{Q}\) and the field \(\mathbb{C}((t))\) of formal Laurent power series in one variable (with complex coefficients) are naturally equipped with non-trivial (non-archimedean) absolute values. More explicitly: (i) \(|n|_{p}:=e^{-\mathrm{val}_{p}(n)}\) for any \(n\in\mathbb{Q}\), where \(p\) is a prime and \(\mathrm{val}_{p}(n)=\mathrm{ord}_{p}(r)-\mathrm{ord}_{p}(s)\) for \(n=r/s\) with \(r,s\in\mathbb{Z}\), \(s\neq 0\), where \(\mathrm{ord}_{p}(i)\), for an integer \(i\), is the largest power of \(p\) that divides \(i\); (ii) \(|\ell(z)|\) for a Laurent power series \(\ell(z)\) is defined as \(e^{-\mathrm{ord}(\ell(z))}\), where \(\mathrm{ord}(\ell(z))\) is the least exponent of \(z\) in the support of \(\ell\). The rational number \(\mathrm{ord}(\ell(z))\) is also called the _valuation_ of \(\ell(z)\) and is denoted by \(\mathrm{val}(\ell(z))\).
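A minimal sketch of the \(p\)-adic case (our own illustration of the definitions above):

```python
import math
from fractions import Fraction

def ord_p(i, p):
    """Largest power of p dividing the nonzero integer i."""
    i, k = abs(i), 0
    while i % p == 0:
        i //= p
        k += 1
    return k

def val_p(q, p):
    """val_p(r/s) = ord_p(r) - ord_p(s) for a nonzero rational q = r/s."""
    q = Fraction(q)
    return ord_p(q.numerator, p) - ord_p(q.denominator, p)

for q in (Fraction(50), Fraction(2, 25), Fraction(7)):
    v = val_p(q, 5)
    print(f"val_5({q}) = {v},  |{q}|_5 = {math.exp(-v):.4f}")
# val_5(50) = 2, val_5(2/25) = -2, val_5(7) = 0: the absolute value
# |n|_p = e^{-val_p(n)} shrinks as n becomes more divisible by p.
```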
Suppose that \(\mathbb{K}\) is an algebraically closed field (every polynomial of degree at least one in \(\mathbb{K}[t]\) has a root) equipped with a non-trivial, non-archimedean absolute value \(|.|_{\mathbb{K}}\). Our primary example of such a field is the field \(\mathbb{C}\{\{t\}\}\) of Puiseux series in one variable. The notion of amoeba that we defined over the complex numbers can also be mimicked over \(\mathbb{K}\) as follows. Suppose that \(V\subseteq(\mathbb{K}^{*})^{n}\) is the set of solutions, all of whose coordinates are non-zero, to a finite set of Laurent polynomials in \(n\) variables with coefficients in \(\mathbb{K}\). Let \(\mathrm{Log}_{\mathbb{K}}:(\mathbb{K}^{*})^{n}\rightarrow\mathbb{R}^{n}\) be the map \(\mathrm{Log}_{\mathbb{K}}(s)=-\log|s|_{\mathbb{K}}\) (note that \(s\neq 0_{\mathbb{K}}\)) [52]. The tropicalization of \(V\) is defined as the image of the map \(\mathrm{Log}_{\mathbb{K}}\) restricted to \(V\). Hence, the tropicalization of \(V\) is a non-archimedean analogue of an amoeba.
|
2306.17644 | Upscaling and Effective Behavior for Two-Phase Porous-Medium Flow using a Diffuse Interface Model |
effective behavior at the larger Darcy scale. The free-boundary problem at the
pore scale is modeled using a diffuse interface approach in the form of a
coupled Allen-Cahn Navier-Stokes system with an additional momentum flux due to
surface tension forces. Using periodic homogenization and formal asymptotic
expansions, a two-scale model with cell problems for phase evolution and
velocity contributions is derived. We investigate the computed effective
parameters and their relation to the saturation for different fluid
distributions, in comparison to commonly used relative permeability saturation
curves. The two-scale model yields non-monotone relations for relative
permeability and saturation. The strong dependence on local fluid distribution
and effects captured by the cell problems highlights the importance of
incorporating pore-scale information into the macro-scale equations. | Mathis Kelm, Carina Bringedal, Bernd Flemisch | 2023-06-30T13:30:54Z | http://arxiv.org/abs/2306.17644v1 | # Upscaling and Effective Behavior for Two-Phase Porous-Medium Flow using a Diffuse Interface Model
###### Abstract
We investigate two-phase flow in porous media and derive a two-scale model, which incorporates pore-scale phase distribution and surface tension into the effective behavior at the larger Darcy scale. The free-boundary problem at the pore scale is modeled using a diffuse interface approach in the form of a coupled Allen-Cahn Navier-Stokes system with an additional momentum flux due to surface tension forces. Using periodic homogenization and formal asymptotic expansions, a two-scale model with cell problems for phase evolution and velocity contributions is derived. We investigate the computed effective parameters and their relation to the saturation for different fluid distributions, in comparison to commonly used relative permeability saturation curves. The two-scale model yields non-monotone relations for relative permeability and saturation. The strong dependence on local fluid distribution and effects captured by the cell problems highlights the importance of incorporating pore-scale information into the macro-scale equations.
## 1 Introduction
Flow through porous media, especially in multi-phase systems, is of interest in a variety of applications from oil recovery and \(CO_{2}\) sequestration to fuel cells and biological systems. Traditionally these are modeled using the extended Darcy's law for two phases, which lacks the mathematical derivation from pore-scale information available for the single-phase Darcy's law. Single-phase Darcy's law can be derived from a pore-scale model of (Navier-)Stokes equations through upscaling procedures. For two phases, Darcy's law has been extended by introducing empirically derived relative permeability saturation curves to capture fluid interactions. The effective behavior then depends only on the averaged saturation, and the model fails to capture different pore-scale effects. Further empirical modifications such as play-type hysteresis have been added to account for missing behavior. In this work we start from a two-phase-flow model at the pore scale and, through appropriate assumptions and a periodic homogenization approach, derive a two-scale model of macroscopic Darcy's law-type equations with effective parameters computed from solutions of the resulting pore-scale cell problems.
The flow of the two fluids is modeled on the pore scale using quasi-compressible Navier-Stokes equations with phase-dependent density and viscosity as well as an additional term capturing surface tension forces at the fluid-fluid interface. For the pore-scale model with resolved phase
distribution we use a diffuse interface approach in the form of an Allen-Cahn phase-field model [5]. Where a sharp-interface model separates the pore space into subdomains for each fluid phase, our approach captures the interface implicitly through a smooth phase-field function. All unknowns are defined, and modified equations hold, on the entire domain of the pore space. This removes the need to track or capture the interface and to decompose the domain in the numerical simulation. To resolve the transition zone between the two bulk phases a fine discretization is required near the interface, demanding high computational effort in the absence of adaptive local mesh refinement. Furthermore, the Allen-Cahn model used in this work is not conservative and includes a curvature-driven motion of the interface. While the conservative Cahn-Hilliard model requires solving a fourth-order partial differential equation, we consider the second-order Allen-Cahn equation, which offers a much simpler numerical implementation, and we investigate the two-scale model derived from it. While there are different approaches to ensuring conservation of the phase field [7] or counteracting this curvature-driven motion entirely [26], we apply the scaled mobility approach presented in [1], which eliminates the curvature-driven motion in the sharp-interface limit. A core advantage of the diffuse interface approach is the ease of upscaling the equations, as they are defined on a stationary domain encompassing both fluid phases. In contrast, upscaling a sharp-interface model requires special attention to the evolving domains [25].
The model is presented for two phases but extends well to more evolving phases [14, 20]; only a non-neutral contact angle at internal triple points between phases requires more attention [21].
To bridge the scale gap between the pore scale and the averaged Darcy scale we employ the method of formal asymptotic homogenization [13]. Representing the porous medium by a periodic arrangement of scaled reference cells, one introduces an additional spatial coordinate for this smaller scale. Here the macroscopic scale corresponds to the Darcy scale and the microscopic scale is the pore scale. These two scales are sufficiently separated, with representative length scales \(L\) and \(\ell\) respectively defining a length scale ratio \(\epsilon=\ell/L\ll 1\). Assuming asymptotic expansions of the unknowns in terms of \(\epsilon\), considering the limit \(\epsilon\to 0\) and grouping by orders of \(\epsilon\), one obtains equations defined on the macro scale as well as micro-scale cell problems. The former contain effective parameters which are computed from the cell problem unknowns, linking the two scales. Through the cell problems, these parameters and the effective behavior depend additionally on the distribution of fluids and are able to capture more detailed effects on the fluid velocities.
The homogenization leads to macroscopic equations similar to the extended Darcy's law but with effective parameters computed from solutions of cell problems instead of being given by empirical relations. These effective parameters can be understood as total phase mobilities, and for isotropic pore geometries a relative permeability can be computed. Through the cell problems these relative permeabilities in the two-scale model depend on the local phase distribution at the pore scale. This motivates a comparison of computed relative permeabilities to commonly used functional relations, enabling us to investigate under which conditions the models agree and which additional pore-scale effects are captured by the two-scale model. We investigate numerically the effective parameters computed for different geometries and fluid properties. Constructing fluid distributions of varying saturations, we solve the cell problems for the velocity contributions to obtain a relative permeability curve. The models are implemented using a staggered finite volume discretization in DuMu\({}^{\text{x}}\) [15], with Dune-SPGrid [19] and Dune-Subgrid [12] used for the grid.
In [10], [18], and [22] a Cahn-Hilliard model was used to derive a similar two-scale model. We instead investigate the applicability of the simpler Allen-Cahn model and the effects of micro-scale information on the effective flow parameters. The derivation of a two-scale model in [18] additionally assumes a separation of time scales and a different asymptotic expansion
to arrive at an instationary problem for the phase distribution. We use the full expansion in the spatial separation \(\epsilon\) instead and afterwards introduce an artificial evolution to obtain a local phase distribution. In [10] three time scales are used to separate interface equilibration, macroscopic flow and saturation changes. In [22] a solute-dependent surface tension is considered and simulations of the coupled two-scale model are presented. We instead focus on a systematic investigation of the effective parameters computed from the cell problems. A similar comparison of effective parameters is included in [16], where a sharp-interface Stokes model for two-phase flow is upscaled using volume averaging. However, [16] does not account for a three-phase contact line or slip velocities at the solid, which are accounted for in the current work.
In this contribution we aim to derive a two-scale model for two-phase flow in porous media, which captures effects of pore-scale fluid distribution and surface tension on the effective behavior at the macro-scale. Our goal is to determine conditions under which the computed effective parameters show significantly different behavior to commonly used saturation-dependent curves and highlight the importance of resolving pore-scale behavior through a two-scale model for two-phase porous-medium flow.
The structure of this paper is as follows. We present our pore-scale model for two-phase flow using an Allen-Cahn formulation coupled to a modified Navier-Stokes system in Section 2. In Section 3 we perform the upscaling and obtain a two-scale model using periodic homogenization. We investigate the dependence of computed effective parameters on pore-scale conditions using DuMu\({}^{\text{x}}\) in Section 4. Section 5 summarizes the derived model and relates it to the extended Darcy's law.
## 2 Pore-scale modeling
We begin by presenting the pore-scale model describing two-phase flow. The model is first presented in a sharp-interface formulation before we introduce the diffuse-interface description for the phase-field model.
The general domain is a stationary void space inside a porous medium. We consider a domain \(\Omega=\mathcal{P}\cup\mathcal{G}\) decomposed into a domain \(\mathcal{P}\) capturing the pore space and the solid matrix \(\mathcal{G}\). Here the distribution of two fluid phases needs to be captured and evolved according to the flow of the fluids, including effects of surface tension at the fluid-fluid interface.
### 2.1 Sharp-interface model
The model is first formulated with a sharp interface separating the fluid phases and corresponding evolving subdomains \(\Omega_{\alpha}(t)\subset\mathcal{P}\), \(\alpha=1,2\), of the stationary pore space \(\mathcal{P}\) (Figure 1). The interfaces between the fluid phases, denoted by \(\Gamma_{f}(t)=\overline{\Omega_{1}(t)}\cap\overline{\Omega_{2}(t)}\), as well as the fluid-solid interfaces \(\Gamma_{\alpha}=\overline{\Omega_{\alpha}}\cap\mathcal{G}\) evolve accordingly. Within each subdomain the flow is modeled using the incompressible Navier-Stokes equations, with constant viscosity \(\mu_{\alpha}\) and density \(\rho_{\alpha}\).

Figure 1: Interfaces and triple point.
\[\nabla\cdot\mathbf{v}_{\alpha} =0\, \text{in }\Omega_{\alpha}\, \tag{1}\] \[\rho_{\alpha}\frac{\partial}{\partial t}\mathbf{v}_{\alpha}+\rho_ {\alpha}\nabla\cdot(\mathbf{v}_{\alpha}\otimes\mathbf{v}_{\alpha}) =-\nabla p_{\alpha}+\mu_{\alpha}\nabla^{2}\mathbf{v}_{\alpha}+\rho_{ \alpha}\mathbf{g}\, \text{in }\Omega_{\alpha}. \tag{2}\]
At the interfaces \(\Gamma_{\alpha}\) with the solid matrix we prescribe a Navier-Slip [23] boundary condition with slip length \(\lambda\geq 0\) for the velocity \(\mathbf{v}_{\alpha}\)
\[\mathbf{v}_{\alpha}=-\lambda(\partial_{\mathbf{n}}\mathbf{v}_{\mathbf{t}}) \mathbf{t}\,\qquad\text{on }\Gamma_{\alpha}\, \tag{3}\]
where \(\mathbf{t}\) is tangential to the fluid-solid interface \(\partial\mathcal{G}\). At the fluid-fluid interface \(\Gamma_{f}\) we prescribe continuity of velocities with a no-slip condition enforcing zero tangential velocity, with interface velocity \(V_{\Gamma_{f}}\) in the direction of the interface normal \(\mathbf{n}_{\Gamma_{f}}\) pointing from fluid 1 into fluid 2,
\[\mathbf{v}_{\alpha}=V_{\Gamma_{f}}\mathbf{n}_{\Gamma_{f}}. \tag{4}\]
The interface itself moves due to the local velocity of the fluids and with surface tension \(\gamma\) and curvature \(H_{\Gamma_{f}}\) we have a jump condition
\[[-p_{\alpha}\mathbf{I}+\mu_{\alpha}(\nabla\mathbf{v}_{\alpha}+(\nabla\mathbf{ v}_{\alpha})^{T}-\frac{2}{3}\nabla\cdot\mathbf{v}_{\alpha}\mathbf{I})]\mathbf{n}_{ \Gamma_{f}}=-\gamma H_{\Gamma_{f}}\mathbf{n}_{\Gamma_{f}}\qquad\text{on }\Gamma_{f}\, \tag{5}\]
where \([\psi]=\psi_{1}-\psi_{2}\) denotes the jump over the interface. At the three-phase contact point the fluid-fluid interface meets the solid with a contact angle \(\theta_{\text{eq}}\) (Figure 1). In a sharp-interface formulation this could be incorporated using a boundary condition for a level set equation, when a level set is used to track the location of the fluid-fluid interface.
### 2.2 Phase-field model
To model two-phase flow at the pore scale we use a diffuse-interface approach, capturing the distribution of phases with a phase-field function \(u\) defined on the total void space \(u:T\times\mathcal{P}\to[0,1]\) with \(u=0\) and \(u=1\) corresponding to the two distinct phases \(\alpha=2\) and \(\alpha=1\) respectively. The phase-field variable evolves according to an advective Allen-Cahn equation, a second order partial differential equation (PDE) with a non-linear source term. The multi-phase system is then modeled as one fluid with varying properties, depending on \(u(t,\mathbf{x})\) to account for phase distribution.
We use the compressible Navier-Stokes equations for non-constant viscosity, derived from conservation laws using Stokes hypothesis \(\zeta=0\). A surface tension flux is added to the momentum equation, coupling it to the Allen-Cahn equation as presented for the stationary Stokes equation in [3]. In [2] the surface tension term is introduced to a Navier-Stokes equation with a phase field evolved according to the Cahn-Hilliard equation.
The resulting Navier-Stokes equations in conservative form are
\[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{v}) =0 \text{in }\mathcal{P}\, \tag{6a}\] \[\frac{\partial}{\partial t}(\rho\mathbf{v})+\nabla\cdot(\rho \mathbf{v}\otimes\mathbf{v}) =-\nabla p+\nabla\cdot\left(\mu\left(\nabla\mathbf{v}+(\nabla \mathbf{v})^{T}-\frac{2}{3}(\nabla\cdot\mathbf{v})\mathbf{I}\right)\right)+ \rho\mathbf{g}\] (6b) \[-\frac{3}{2}\xi\gamma\nabla\cdot(\nabla u\otimes\nabla u) \text{in }\mathcal{P}\.\]
The two-phase system is treated as a single quasi-incompressible fluid with density and viscosity depending on the present phase in a linear manner. Denoting the density and viscosity ratios \(R=\rho_{1}/\rho_{2}\) and \(M=\mu_{1}/\mu_{2}\), we write
\[\rho(u) =\rho_{1}u+\rho_{2}(1-u)=\rho_{2}+u(\rho_{1}-\rho_{2})=\rho_{2} \left(1+u\left(\frac{\rho_{1}}{\rho_{2}}-1\right)\right)=\rho_{2}(1+u(R-1))\, \tag{7a}\] \[\mu(u) =\mu_{1}u+\mu_{2}(1-u)=\mu_{2}(1+u(M-1)). \tag{7b}\]
This is combined with the advective Allen-Cahn equation, additionally coupled via the advecting velocity and the surface tension momentum flux. The phase-field equation is given as
\[\frac{\partial u}{\partial t}+\nabla\cdot(\mathbf{v}u)=\sigma\xi(\nabla^{2}u- \xi^{-2}P^{\prime}(u))\,,\quad\text{in }\mathcal{P}\,,\quad P(u)=8u^{2}(1-u)^{2}\, \tag{8}\]
where we consider the scaled diffusivity \(\sigma\xi\) suggested in [1] in order to obtain a more favorable sharp-interface limit. The diffusivity \(\sigma\) and diffuse-interface width \(\xi\) prescribe how strongly a certain profile is enforced for the transition zone between bulk phases and the width of this zone respectively. The double-well potential, \(P(u)\), encourages a separation of phases through stable minima at \(u=0\) and \(u=1\).
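For this double-well the flat-interface equilibrium profile is the classical tanh shape. The following sketch (our own consistency check, not taken from the paper) verifies numerically that \(u(x)=\frac{1}{2}(1-\tanh(2x/\xi))\) satisfies the stationary one-dimensional form \(\xi^{2}u''=P'(u)\) of (8) without advection:

```python
import numpy as np

xi = 0.1                                      # diffuse-interface width
x = np.linspace(-1.0, 1.0, 2001)
h = x[1] - x[0]

u = 0.5 * (1.0 - np.tanh(2.0 * x / xi))       # candidate equilibrium profile
dP = 16.0 * u * (1.0 - u) * (1.0 - 2.0 * u)   # P'(u) for P(u) = 8 u^2 (1 - u)^2

uxx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2  # central second difference
residual = xi**2 * uxx - dP[1:-1]              # stationary 1D form of Eq. (8)
print("max |xi^2 u'' - P'(u)| =", np.abs(residual).max())
# The residual is at the level of the finite-difference error, confirming the
# tanh transition zone whose width is set by xi.
```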
The (no-)slip boundary conditions on solid boundaries apply also here, with interpolation based on the phase field \(u\) if phase-dependent slip lengths are desired. Additionally, for the phase field a Neumann boundary condition is prescribed, encoding the contact angle \(\theta_{\text{eq}}\) measured through fluid 1, corresponding to \(u=1\) (Figure 2) [11, 17].
\[\mathbf{v} =-\lambda(\partial_{\mathbf{n}}\mathbf{v_{t}})\mathbf{t}\, \text{on }\partial\mathcal{P}\, \tag{9a}\] \[\partial_{\mathbf{n}}u =-\cos(\theta_{\text{eq}})\xi^{-1}\sqrt{2P(u)}\,\text{ on } \partial\mathcal{P}. \tag{9b}\]
Note that for vanishing slip length \(\lambda=0\) and for a neutral contact angle \(\theta_{\text{eq}}=\frac{\pi}{2}\), the boundary conditions reduce to homogeneous Dirichlet and Neumann conditions for velocity and phase field respectively. Appendix A includes the sharp-interface limit of the coupled Navier-Stokes Allen-Cahn system presented in this section, yielding again the sharp-interface model presented in Section 2.1.

Figure 2: Contact angle boundary condition for the diffuse-interface phase-field model.
## 3 Upscaling using periodic homogenization
Starting from the diffuse-interface model from Section 2.2, describing the phase distribution and flow at the pore scale, we derive effective equations capturing the behavior at the Darcy scale. At the same time we obtain a set of micro-scale cell problems, defined at each macroscopic point, the solutions of which allow the computation of effective tensors. These are then used as parameters to the macro-scale problem, which in turn supplies the global pressure gradients and saturations used in the cell problems. The resulting effective parameters can be interpreted as phase mobilities.
To achieve this we assume a sufficient separation of scales, with characteristic length scales \(\ell\) for the pore scale and \(L\) for the Darcy scale at which we are interested in the effective behavior. Assuming a scale separation \(\epsilon=\ell/L\ll 1\), we first non-dimensionalize the model. We then follow the approach of periodic homogenization, modeling the porous medium as a periodic arrangement of scaled reference cells \(\epsilon Y\), \(Y=[0,1]^{\mathrm{d}}\) with dimension \(\mathrm{d}\), where \(\mathrm{d}\) is \(2\) or \(3\) (see Figure 3). The domain of the porous medium is thus decomposed into \(\Omega^{\epsilon}=\cup_{w\in W_{\Omega}}\,\epsilon(w+Y)\), with \(W_{\Omega}\subset\mathbb{Z}^{\mathrm{d}}\) a set of indices. With local pore space and solid matrix denoted as \(Y=\mathcal{P}\cup\mathcal{G}\) and the outer boundary of the reference cell given as \(\partial Y\) we denote the interior boundary between solid and fluid as
\[\Gamma=\overline{\mathcal{P}}\cap\mathcal{G}. \tag{10}\]
The global pore space and its boundary with the solid matrix in the non-dimensional case is then given as
\[\mathcal{P}^{\epsilon}=\cup_{w\in W_{\Omega}}\,\epsilon(w+\mathcal{P})\,\quad \Gamma^{\epsilon}=\cup_{w\in W_{\Omega}}\,\epsilon(w+\Gamma). \tag{11}\]
While their fluid content can generally vary between the cells, we here assume that the porous matrix is constant both in time and space in order to facilitate the derivation of the two-scale model. As the reference cell is the unit cube, we can define the constant porosity \(\varphi=|\mathcal{P}|\). The conditions inside the pore spaces still vary, with fluid saturations and distributions changing in time and between cells.
We use the scale separation to introduce a micro-scale coordinate, rewrite spatial derivatives accordingly, assign scalings in terms of \(\epsilon\) for the dimensionless numbers and assume all unknowns to have an asymptotic expansion in \(\epsilon\). Inserting the expansions into the model equations and gathering terms of equal order in terms of \(\epsilon\) we obtain a new set of equations containing derivatives with respect to coordinates of only one of the spatial scales. This procedure yields macroscopic equations capturing the effective behavior and cell problems defined on the reference cell. Effective parameters are defined through cell problems and integrals of their solutions, linking the two sets of equations and capturing detailed effects of local phase distribution on effective behavior. In addition to pressure driven flow, a separate cell problem captures flow due to surface tension forces, reflected in an added velocity contribution on the effective scale.
### 3.1 Non-dimensionalization
Figure 3: Separation of scales and domain definitions

In preparation for upscaling by periodic homogenization we non-dimensionalize the micro-scale model. The homogenization requires a separation of scales between the representative length \(\ell\) at the pore scale and the length scale of interest \(L\) at the macro scale, quantified by the small number \(\epsilon=\ell/L\ll 1\). Defining reference values with dimensions
\[[\hat{L}] =\text{m} [\hat{\ell}] =\text{m} [\hat{\xi}] =\text{m} [\hat{t}] =\text{s} [\hat{v}] =\frac{\text{m}}{\text{s}}\] \[[\hat{\rho}] =\frac{\text{kg}}{\text{m}^{3}} [\hat{\mu}] =\frac{\text{kg}}{\text{m}\cdot\text{s}} [\hat{p}] =\frac{\text{kg}}{\text{m}\cdot\text{s}^{2}} [\hat{\lambda}] =\text{m} [\hat{\sigma}] =\frac{\text{m}}{\text{s}}\,\]
and letting
\[\hat{L}=L\qquad\hat{\ell}=\ell\qquad\hat{\xi}=\hat{\ell}\qquad\hat{t}=\frac{ \hat{L}}{\hat{v}}\qquad\hat{\rho}=\rho_{2}\qquad\hat{\mu}=\mu_{2}\qquad\hat{ \lambda}=\hat{\ell}\qquad\hat{\sigma}=\sigma\, \tag{12}\]
one can rewrite the phase-field model equations using non-dimensional variables and parameters
\[\bar{\xi}=\frac{\xi}{\hat{\xi}}\qquad\bar{\mathbf{v}}=\frac{\mathbf{v}}{\hat {v}}\qquad\bar{\rho}=\frac{\rho}{\hat{\rho}}\qquad\bar{\mu}=\frac{\mu}{\hat{ \mu}}\qquad\bar{p}=\frac{p}{\hat{p}}\qquad\bar{t}=\frac{t}{\hat{t}}\qquad \bar{\lambda}=\frac{\lambda}{\hat{\lambda}}\.\]
In the following all variables and parameters are non-dimensional and the overline notation is dropped for convenience. Using dimensionless numbers
\[\text{Re}\,=\frac{\hat{\rho}\hat{v}\hat{L}}{\hat{\mu}}\qquad\qquad\text{Ca}\, =\frac{\hat{v}\hat{\mu}}{\gamma}\qquad\qquad\text{Eu}=\frac{\hat{p}}{\hat{ \rho}\hat{v}^{2}}\qquad\qquad\text{Fr}=\frac{\hat{v}}{\sqrt{g\hat{L}}}\qquad \qquad S=\frac{\hat{\sigma}}{\hat{v}}\,\]
one obtains (see Appendix B for details)
\[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{v}) =0,\text{ in }\mathcal{P}\, \tag{13a}\] \[\frac{\partial}{\partial t}(\rho\mathbf{v})+\nabla\cdot(\rho \mathbf{v}\otimes\mathbf{v}) =-\text{Eu}\nabla p+\frac{1}{\text{Re}}\nabla\cdot\left(\mu\left( \nabla\mathbf{v}+(\nabla\mathbf{v})^{T}-\frac{2}{3}(\nabla\cdot\mathbf{v}) \mathbf{I}\right)\right)\] (13b) \[-\frac{1}{\text{Fr}^{2}}\rho\mathbf{z}-\frac{\epsilon}{\text{Ca} }\,\frac{3\xi}{2}\nabla\cdot(\nabla u\otimes\nabla u),\text{ in }\mathcal{P}\,\]
with boundary conditions
\[\mathbf{v}=-\epsilon\lambda(\partial_{\mathbf{n}}\mathbf{v_{t}})\mathbf{t}\,\text{ on }\Gamma \tag{14}\]
For the phase-field equation the non-dimensionalization yields (see Appendix B for details)
\[\frac{\partial u}{\partial t}-S\epsilon^{1}\xi\nabla^{2}u+\nabla\cdot(\mathbf{ v}u)=-S\epsilon^{-1}\xi^{-1}P^{\prime}(u)\,\text{ in }\mathcal{P}\, \tag{15}\]
with boundary condition
\[\partial_{\mathbf{n}}u=-\epsilon^{-1}\xi^{-1}\cos(\theta_{\text{eq}})\sqrt{2P (u)}\,\text{ on }\Gamma. \tag{16}\]
**Phase-constant density and viscosity.** Using the above equations (7) for density and viscosity and the reference values (12), the non-dimensional viscosity and density in (13) are
\[\mu(u)= \frac{1}{\hat{\mu}}(\mu_{1}u+\mu_{2}(1-u))=1+u(M-1)\, \tag{17}\] \[\rho(u)= 1+u(R-1). \tag{18}\]
### 3.2 Periodic Homogenization
Given the scale separation \(\epsilon\) we introduce the micro-scale coordinate \(\mathbf{y}=\epsilon^{-1}\mathbf{x}\) and assume all unknowns can be written as an asymptotic expansion in \(\epsilon\), depending on both \(\mathbf{x}\) and \(\mathbf{y}\) with periodicity in the unit cell \(Y\). For an unknown \(\psi\in\{p,\mathbf{v},u\}\) we introduce \(Y\)-periodic \(\psi_{k}(t,\mathbf{x},\mathbf{y})\) such that
\[\psi(t,\mathbf{x})=\sum_{k=0}^{\infty}\epsilon^{k}\psi_{k}\left(t,\mathbf{x}, \frac{\mathbf{x}}{\epsilon}\right)\.\]
The spatial derivatives are rewritten as
\[\nabla\psi=\nabla_{\mathbf{x}}\sum_{k=0}^{\infty}\epsilon^{k}\psi_{k}(t, \mathbf{x},\mathbf{y})+\frac{1}{\epsilon}\nabla_{\mathbf{y}}\sum_{k=0}^{ \infty}\epsilon^{k}\psi_{k}(t,\mathbf{x},\mathbf{y})\.\]
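As a quick symbolic sanity check of this splitting (our own illustration with a sample two-scale function, not part of the model):

```python
import sympy as sp

x, y, eps = sp.symbols('x y epsilon', positive=True)

# a concrete psi(x, y), smooth in the slow and fast variables
psi = sp.sin(x) * sp.cos(y)

lhs = sp.diff(psi.subs(y, x / eps), x)                    # d/dx of psi(x, x/eps)
rhs = (sp.diff(psi, x) + sp.diff(psi, y) / eps).subs(y, x / eps)

print(sp.simplify(lhs - rhs))  # 0: gradient = slow part + eps^{-1} * fast part
```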
For the upscaling we consider a flow regime where Darcy's law is considered valid, with laminar flow driven by the pressure drop and capillary forces, and where advective and diffusive time scales are of the same order. For the phase distribution the phase-field diffusivity \(\sigma\) should be comparable to the micro-scale advection, captured by \(S\in\mathcal{O}(\epsilon^{0})\). As will be shown in Remark 1, with \(S\in\mathcal{O}(\epsilon^{1})\) the advective term separates in the upscaling process and yields a restrictive pore-scale equation as well as introducing mixed-scale derivatives into the phase-field equation. Choosing non-dimensional numbers \(\mathrm{Ca}\,\in\mathcal{O}(\epsilon^{0})\), \(\mathrm{Re}\,\in\mathcal{O}(\epsilon^{0})\), \(\mathrm{Eu}\in\mathcal{O}(\epsilon^{-2})\) and \(\mathrm{Fr}\in\mathcal{O}(\epsilon^{0})\) the leading terms for \(\epsilon\to 0\) are given as follows (see Appendix C for details).
For the phase dependent fluid properties we observe
\[\mu(u) =(1+u(M-1))=\underbrace{1+u_{0}(M-1)}_{=\mu(u_{0})}+\epsilon u_{ 1}(M-1)+\mathcal{O}(\epsilon^{2}) \tag{19a}\] \[\rho(u) =(1+u(R-1))=\underbrace{1+u_{0}(R-1)}_{=\rho(u_{0})}+\epsilon \underbrace{u_{1}(R-1)}_{=\rho_{1}}+\mathcal{O}(\epsilon^{2})\, \tag{19b}\]
and for the double-well potential of the phase-field equation, due to its polynomial structure,
\[P^{\prime}(u)=P^{\prime}(u_{0})+\epsilon u_{1}P^{\prime\prime}(u_{0})+\mathcal{ O}(\epsilon^{2}). \tag{19c}\]
Denoting \(\overline{\mathrm{Eu}}:=\epsilon^{2}\mathrm{Eu}\), we have for (13a), (13b) and (15) in \(\mathcal{P}\)
\[\mathcal{O}(\epsilon^{1})= \epsilon^{-1}\nabla_{\mathbf{y}}\cdot(\rho(u_{0})\mathbf{v}_{0}) \tag{20a}\] \[+\epsilon^{0}\bigg{[}\frac{\partial\rho(u_{0})}{\partial t}+ \nabla_{\mathbf{y}}\cdot(\rho(u_{0})\mathbf{v}_{1}+\rho_{1}\mathbf{v}_{0})+ \nabla_{\mathbf{x}}\cdot(\rho(u_{0})\mathbf{v}_{0})\bigg{]}\] \[\mathcal{O}(\epsilon^{-1})= \epsilon^{-3}\left[-\overline{\mathrm{Eu}}\nabla_{\mathbf{y}}p_ {0}\right]\] (20b) \[+\epsilon^{-2}\bigg{[}-\overline{\mathrm{Eu}}(\nabla_{x}p_{0}+ \nabla_{\mathbf{y}}p_{1})\] \[+\frac{1}{\mathrm{Re}}\nabla_{\mathbf{y}}\cdot\bigg{(}\mu(u_{0}) \left(\nabla_{\mathbf{y}}\mathbf{v}_{0}+(\nabla_{\mathbf{y}}\mathbf{v}_{0})^{ T}-\frac{2}{3}(\nabla_{\mathbf{y}}\cdot\mathbf{v}_{0})I\right)\bigg{)}\] \[-\frac{1}{\mathrm{Ca}}\,\frac{3\xi}{2}\nabla_{\mathbf{y}}\cdot( \nabla_{\mathbf{y}}u_{0}\otimes\nabla_{\mathbf{y}}u_{0})\bigg{]}\] \[\mathcal{O}(\epsilon^{1})= \epsilon^{-1}[\nabla_{\mathbf{y}}\cdot(\mathbf{v}_{0}u_{0})-S\xi \nabla_{\mathbf{y}}^{2}u_{0}+S\xi^{-1}P^{\prime}(u_{0})]\] (20c) \[+\epsilon^{0}\bigg{[}\frac{\partial u_{0}}{\partial t}+\nabla_{ \mathbf{y}}\cdot(\mathbf{v}_{1}u_{0}+\mathbf{v}_{0}u_{1})+\nabla_{\mathbf{x}} \cdot(\mathbf{v}_{0}u_{0})+S\xi^{-1}P^{\prime\prime}(u_{0})u_{1}\] \[-S\xi(\nabla_{\mathbf{y}}\cdot\nabla_{\mathbf{x}}u_{0}+\nabla_{ \mathbf{x}}\cdot\nabla_{\mathbf{y}}u_{0}+\nabla_{\mathbf{y}}^{2}u_{0})\bigg{]}\]
and for boundary conditions (14) and (16) on \(\Gamma\)
\[\mathcal{O}(\epsilon^{1}) =\epsilon^{0}\Big{[}\mathbf{v}_{0}+\lambda(\nabla_{\mathbf{y}} \mathbf{v}_{0,\mathbf{t}}\cdot\mathbf{n})\mathbf{t}\Big{]} \tag{20d}\] \[\mathcal{O}(\epsilon^{0}) =\epsilon^{-1}\Big{[}\nabla_{\mathbf{y}}u_{0}\cdot\mathbf{n}+ \xi^{-1}\cos(\theta_{\mathrm{eq}})\sqrt{2P(u_{0})}\Big{]}. \tag{20e}\]
#### 3.2.1 Phase field
From the leading order term of (20c) we therefore obtain a stationary equation involving only local derivatives.
\[-S\xi\nabla_{\mathbf{y}}^{2}u_{0}+\nabla_{\mathbf{y}}\cdot(\mathbf{v}_{0}u_{0 })=S\xi^{-1}P^{\prime}(u_{0})\,\qquad\text{ in }\mathcal{P}\, \tag{21}\]
together with boundary condition (20e)
\[\nabla_{\mathbf{y}}u_{0}\cdot\mathbf{n}=-\xi^{-1}\cos(\theta_{\mathrm{eq}}) \sqrt{2P(u_{0})}\,\qquad\text{ on }\Gamma. \tag{22}\]
_Remark 1_: If the phase-field equation is dominated by advection (\(S\in\mathcal{O}(\epsilon^{1})\)), the leading order advective term of (20c) would be isolated at \(\mathcal{O}(\epsilon^{-2})\) as
\[\nabla_{\mathbf{y}}\cdot(\mathbf{v}_{0}u_{0})=0. \tag{23}\]
Applied to the leading order term in (20a), this yields a divergence-free velocity field, and (23) reduces to
\[\mathbf{v}_{0}\cdot\nabla_{\mathbf{y}}u_{0}=0\, \tag{24}\]
resulting in a strong limitation on modeled problems. At the order \(\mathcal{O}(\epsilon^{-1})\) the equation (20c) would instead contain the advective terms of the next order, yielding
\[0=\nabla_{\mathbf{y}}\cdot(\mathbf{v}_{1}u_{0}+\mathbf{v}_{0}u_{1})+\nabla_{ \mathbf{x}}\cdot(\mathbf{v}_{0}u_{0})-S\xi\nabla_{\mathbf{y}}^{2}u_{0}+S\xi^ {-1}P^{\prime}(u_{0}) \tag{25}\]
with an undesirable mix of scales that leads to neither a pore-scale cell problem nor an effective equation for the Darcy scale. A balance of the two terms is expressed by \(S\in\mathcal{O}(1)\), which yields (21) instead of (23) as the leading order equation.
This local phase-field equation (21) is underconstrained, admitting among others the trivial solutions \(u_{0}=0\) and \(u_{0}=1\) for divergence-free velocity fields. The local cell problem does not offer a way to compute the saturation of fluid \(1\) as the mean integral of \(u_{0}\) but rather requires it as a constraint, as in [22].
From the next order \(\mathcal{O}(\epsilon^{0})\) terms of the phase-field equation (20c) we obtain
\[0 =\frac{\partial u_{0}}{\partial t}-S\xi(\nabla_{\mathbf{y}}^{2}u_ {1}+\nabla_{\mathbf{y}}\cdot(\nabla_{\mathbf{x}}u_{0})+\nabla_{\mathbf{x}} \cdot(\nabla_{\mathbf{y}}u_{0})) \tag{26}\] \[\quad+(\nabla_{\mathbf{y}}\cdot(\mathbf{v}_{1}u_{0}+\mathbf{v}_{ 0}u_{1})+\nabla_{\mathbf{x}}\cdot(\mathbf{v}_{0}u_{0}))+S\xi^{-1}P^{\prime \prime}(u_{0})u_{1}\.\]
We would like to use this instationary equation to update the saturation, which in turn constrains the stationary phase-field equation obtained from the leading order term of the asymptotic expansion. After integrating over the constant local periodicity cell \(\mathcal{P}\) and using the periodicity with respect to \(\mathbf{y}\), (26) reduces to
\[\frac{\partial}{\partial t}\int_{\mathcal{P}}u_{0}\,\mathrm{d}\mathbf{y}+ \nabla_{\mathbf{x}}\cdot\int_{\mathcal{P}}\mathbf{v}_{0}u_{0}\,\mathrm{d} \mathbf{y}+\int_{\mathcal{P}}\frac{S}{\xi}P^{\prime\prime}(u_{0})u_{1}\, \mathrm{d}\mathbf{y}=0. \tag{27}\]
While the first two terms yield a macroscopic equation in saturation and phase-specific velocity, the last term includes the additional unknown \(u_{1}\). Assuming a homogeneous porous medium,
with \({\cal P}\) not depending on \({\bf x}\), one can use a solvability constraint to show that this term is zero and obtain the desired saturation equation.
Integrating the leading order terms (21) and applying the divergence theorem and periodicity conditions, one is left with only the potential derivative.
\[0=\underbrace{\int_{{\cal P}}\,-S\xi\nabla_{{\bf y}}\cdot(\nabla_{{\bf y}}u_{0}) +\nabla_{{\bf y}}\cdot({\bf v}_{0}u_{0})\,{\rm d}{\bf y}}_{=0}+\int_{{\cal P}} \frac{S}{\xi}P^{\prime}(u_{0})\,{\rm d}{\bf y} \tag{28}\]
One now views the third term in (27) as a Fredholm operator of index zero \({\cal L}(u_{0})\) applied to \(u_{1}\). Applying it instead to \(\nabla_{{\bf x}}u_{0}\), using the chain rule and the assumption of \({\cal P}\) not depending on \({\bf x}\), we obtain
\[{\cal L}(u_{0})(\nabla_{{\bf x}}u_{0})=\int_{{\cal P}}\frac{S}{\xi}P^{\prime \prime}(u_{0})\nabla_{{\bf x}}u_{0}\,{\rm d}{\bf y}=\int_{{\cal P}}\frac{S}{ \xi}\nabla_{{\bf x}}(P^{\prime}(u_{0}))\,{\rm d}{\bf y}=\frac{S}{\xi}\nabla_{ {\bf x}}\int_{{\cal P}}P^{\prime}(u_{0})\,{\rm d}{\bf y}. \tag{29}\]
Together with the previously derived information about \(P^{\prime}\) from (28), one sees that \(\nabla_{{\bf x}}u_{0}\) is an element of the kernel of \({\cal L}(u_{0})\). Rewriting (27) as \(-{\cal L}(u_{0})u_{1}=A(u_{0},{\bf v}_{0})\) with
\[A(u_{0},{\bf v}_{0})=\int_{{\cal P}}\frac{\partial u_{0}}{\partial t}+\nabla_ {{\bf x}}\cdot({\bf v}_{0}u_{0})\,{\rm d}{\bf y}\, \tag{30}\]
this yields the solvability constraint
\[0=<A(u_{0},{\bf v}_{0}),\nabla_{{\bf x}}u_{0}>=\int_{\Omega}\nabla_{{\bf x}}u _{0}({\bf x},{\bf y})\left(\int_{{\cal P}}\frac{\partial u_{0}}{\partial t}+ \nabla_{{\bf x}}\cdot({\bf v}_{0}u_{0})\,{\rm d}{\bf y}\right)({\bf x})\,{\rm d }{\bf x}. \tag{31}\]
To avoid trivial behavior (\(\nabla_{{\bf x}}u_{0}\equiv 0\), \(\nabla_{{\bf x}}u_{0}\) independent of \({\bf x}\)), the integral over the local pore space \({\cal P}\) must vanish. Using again the assumption of a stationary and homogeneous pore space \({\cal P}\) with constant porosity \(\varphi\), this yields the saturation equation
\[\frac{\partial}{\partial t}\int_{{\cal P}}u_{0}\,{\rm d}{\bf y}+\nabla_{{\bf x }}\cdot\int_{{\cal P}}{\bf v}_{0}u_{0}\,{\rm d}{\bf y}=0. \tag{32}\]
Introducing averaged quantities for the saturation of fluid 1, \(S^{(1)}=\varphi^{-1}\int_{{\cal P}}u_{0}\,{\rm d}{\bf y}\), as well as velocities
\[\bar{{\bf v}}^{(1)}=\varphi^{-1}\int_{{\cal P}}u_{0}{\bf v}_{0}\,{\rm d}{\bf y }\,\qquad\bar{{\bf v}}^{(2)}=\varphi^{-1}\int_{{\cal P}}(1-u_{0}){\bf v}_{0}\,{ \rm d}{\bf y}\,\qquad\bar{{\bf v}}=\varphi^{-1}\int_{{\cal P}}{\bf v}_{0}\,{\rm d}{\bf y}\, \tag{33}\]
we obtain
\[\frac{\partial}{\partial t}\int_{{\cal P}}u_{0}\,{\rm d}{\bf y}+\nabla_{{\bf x }}\cdot\int_{{\cal P}}{\bf v}_{0}u_{0}\,{\rm d}{\bf y}=\varphi\left(\frac{ \partial}{\partial t}S^{(1)}({\bf x})+\nabla_{{\bf x}}\cdot\bar{{\bf v}}^{(1 )}({\bf x})\right)=0. \tag{34}\]
The second order term of the mass conservation equation (20a)
\[0=\frac{\partial\rho(u_{0})}{\partial t}+\nabla_{{\bf y}}\cdot(\rho(u_{0}){ \bf v}_{1}+\rho_{1}{\bf v}_{0})+\nabla_{{\bf x}}\cdot(\rho(u_{0}){\bf v}_{0})\,\]
yields, using (19b) and after integration over \({\cal P}\),
\[0=(R-1)\frac{\partial}{\partial t}\int_{{\cal P}}u_{0}\,{\rm d}{\bf y}+(R-1) \nabla_{{\bf x}}\cdot\int_{{\cal P}}u_{0}{\bf v}_{0}\,{\rm d}{\bf y}+\nabla_{{ \bf x}}\cdot\int_{{\cal P}}{\bf v}_{0}\,{\rm d}{\bf y}\, \tag{35}\]
a second conservation equation.
Inserting (34) into (35) yields \(\nabla_{\mathbf{x}}\cdot\bar{\mathbf{v}}=0\) and with \(S^{(2)}=1-S^{(1)}\) a saturation equation for the second phase, corresponding to \(u_{0}=0\),
\[0=\frac{\partial}{\partial t}(S^{(1)}+S^{(2)})=\frac{\partial}{\partial t}S^{(2 )}+\frac{1}{\varphi}\nabla_{\mathbf{x}}\cdot\int_{\mathcal{P}}(1-u_{0}) \mathbf{v}_{0}\,\mathrm{d}\mathbf{y}=\frac{\partial}{\partial t}S^{(2)}+ \nabla_{\mathbf{x}}\cdot\bar{\mathbf{v}}^{(2)}. \tag{36}\]
These macroscopic equations (34), (36), through the saturation, yield an integral constraint for the local phase-field. Together with the stationary equation (21) and boundary condition (22) we obtain the cell problem
\[\begin{cases}\nabla_{\mathbf{y}}\cdot(\mathbf{v}_{0}u_{0})=S\xi\nabla_{ \mathbf{y}}^{2}u_{0}-S\xi^{-1}P^{\prime}(u_{0})&\text{ in }\mathcal{P}\,\\ \nabla_{\mathbf{y}}u_{0}\cdot\mathbf{n}=-\xi^{-1}\cos(\theta_{\text{eq}}) \sqrt{2P(u_{0})}&\text{ on }\Gamma\,\\ u_{0}\text{ is }Y\text{-periodic and }\int_{\mathcal{P}}u_{0}=\varphi S^{(1)}\.\end{cases} \tag{37}\]
While this ensures mass conservation it does not fully prescribe a distribution of the phases. The stationary phase-field equation (37) still defines the profile of the diffuse transition zone, but the position is not clear. This phase distribution, however, is central to the equations for fluid flow. To obtain a meaningful solution to this stationary problem one could introduce an artificial time evolution, taking care to introduce additional source terms to the non-conservative Allen-Cahn equation in order to enforce the saturation constraint. That is, one would instead solve
\[\begin{cases}\frac{\partial u_{0}}{\partial\tau}+\nabla_{\mathbf{y}}\cdot( \mathbf{v}_{0}u_{0})=S\xi\nabla_{\mathbf{y}}^{2}u_{0}-S\xi^{-1}P^{\prime}(u_{ 0})\\ \qquad\qquad\qquad\qquad\qquad\qquad+\frac{S\xi^{-1}}{\varphi}\int_{\mathcal{ P}}P^{\prime}(u_{0})\,\mathrm{d}\mathbf{y}-\delta(u_{0})\left(\int_{\mathcal{P}}u_{0}\, \mathrm{d}\mathbf{y}-\varphi S^{(1)}\right)&\text{ in }\mathcal{P}\,\\ \nabla_{\mathbf{y}}u_{0}\cdot\mathbf{n}=-\xi^{-1}\cos(\theta_{\text{eq}}) \sqrt{2P(u_{0})}&\text{ on }\Gamma\,\\ u_{0}\text{ is }Y\text{-periodic}.\end{cases} \tag{38}\]
The third term on the right-hand side is introduced in order to make the Allen-Cahn equation conservative [7]. The fourth term on the right-hand side moves the system towards the desired saturation and can be localized to the interface using an interface indicator function
\[\delta(u_{0})=4\xi^{-1}u_{0}(1-u_{0})\.\]
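A minimal numerical sketch of this relaxation (our own illustration with illustrative parameter values, on a 1D periodic cell without solid matrix, so \(\varphi=1\) and no contact-angle condition enters):

```python
import numpy as np

# Explicit pseudo-time relaxation of the volume-constrained Allen-Cahn
# cell problem (38) without advection; all parameter values are
# illustrative choices, not taken from the paper.
S, xi, phi, S1 = 1.0, 0.05, 1.0, 0.3   # diffusivity, interface width, porosity, target saturation
n = 400
h = 1.0 / n
y = (np.arange(n) + 0.5) * h
dtau = 0.2 * h**2 / (S * xi)           # stability-limited explicit step

dP = lambda u: 16.0 * u * (1.0 - u) * (1.0 - 2.0 * u)  # P'(u)
delta = lambda u: 4.0 / xi * u * (1.0 - u)             # interface indicator

u = np.where(np.abs(y - 0.5) < 0.5 * S1, 1.0, 0.0)     # initial fluid-1 slug
for _ in range(20000):
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / h**2   # periodic Laplacian
    sat_err = h * u.sum() - phi * S1                          # saturation mismatch
    u = u + dtau * (S * xi * lap - S / xi * dP(u)
                    + S / (xi * phi) * h * dP(u).sum()        # conservation correction
                    - delta(u) * sat_err)                     # steer toward S^(1)

print("final saturation:", h * u.sum())  # ~ S1, with tanh transition zones
```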
#### 3.2.2 Fluid flow
The derivation of the local cell problems for the fluid velocity follows the common approach of separating derivatives of the two scales and using the linear structures of the equations to obtain different problems for the effect of macroscopic pressure gradients, and additionally one to capture the contribution of the surface tension at the interface.
From the leading order term of the momentum equation (20b) we obtain \(\nabla_{\mathbf{y}}p_{0}=0\) and from the next order term we get
\[0= -\overline{\text{Eu}}(\nabla_{\mathbf{x}}p_{0}+\nabla_{\mathbf{y }}p_{1})+\frac{1}{\text{Re}}\nabla_{\mathbf{y}}\cdot\left(\mu(u_{0})\left( \nabla_{\mathbf{y}}\mathbf{v}_{0}+(\nabla_{\mathbf{y}}\mathbf{v}_{0})^{T}- \frac{2}{3}(\nabla_{\mathbf{y}}\cdot\mathbf{v}_{0})\mathbf{I}\right)\right)\] \[-\frac{1}{\text{Ca}}\frac{3\xi}{2}\nabla_{\mathbf{y}}\cdot(\nabla _{\mathbf{y}}u_{0}\otimes\nabla_{\mathbf{y}}u_{0})\]
or equivalently, using \(\mu(u_{0}(t,\mathbf{x},\mathbf{y}))=1+u_{0}(t,\mathbf{x},\mathbf{y})(M-1)\),
\[\begin{split}&\overline{\mathrm{Eu}}\nabla_{\mathbf{y}}p_{1}-\frac{ M-1}{\mathrm{Re}}\nabla_{\mathbf{y}}u_{0}\cdot\left(\nabla_{\mathbf{y}} \mathbf{v}_{0}+(\nabla_{\mathbf{y}}\mathbf{v}_{0})^{T}-\frac{2}{3}(\nabla_{ \mathbf{y}}\cdot\mathbf{v}_{0})\mathbf{I}\right)\\ &-\frac{\mu(u_{0})}{\mathrm{Re}}\left(\nabla_{\mathbf{y}}^{2} \mathbf{v}_{0}-\frac{1}{3}\nabla_{\mathbf{y}}(\nabla_{\mathbf{y}}\cdot \mathbf{v}_{0})\right)\\ =&-\overline{\mathrm{Eu}}\nabla_{\mathbf{x}}p_{0}- \frac{1}{\mathrm{Ca}}\,\frac{3\xi}{2}\nabla_{\mathbf{y}}\cdot(\nabla_{\mathbf{ y}}u_{0}\otimes\nabla_{\mathbf{y}}u_{0})\.\end{split} \tag{39}\]
Due to the linear structure in the unknowns \(p_{1}\) and \(\mathbf{v}_{0}\) as well as the contributions of \(p_{0}(t,\mathbf{x})\) and \(u_{0}(t,\mathbf{x},\mathbf{y})\), we can find \(Y\)-periodic functions \(\mathbf{w}_{j}(t,\mathbf{x},\mathbf{y})\), \(\Pi_{j}(t,\mathbf{x},\mathbf{y})\), \(j=0,\ldots,\mathrm{d}\) such that
\[\mathbf{v}_{0}= -\sum_{j=1}^{\mathrm{d}}\mathbf{w}_{j}\partial_{\mathbf{x}_{j}} p_{0}-\mathbf{w}_{0}\, \tag{40}\] \[p_{1}= \sum_{j=1}^{\mathrm{d}}\Pi_{j}\partial_{\mathbf{x}_{j}}p_{0}+\Pi _{0}+\tilde{p}_{1}(t,\mathbf{x})\, \tag{41}\]
with a function \(\tilde{p}_{1}(t,\mathbf{x})\) independent of \(\mathbf{y}\) and thus not relevant for (39). These velocity contributions and local pressures can be obtained from the following auxiliary cell problems, where we denote the symmetrized gradient \(2\varepsilon_{\mathbf{y}}(\mathbf{w})=\nabla_{\mathbf{y}}\mathbf{w}+(\nabla_{ \mathbf{y}}\mathbf{w})^{T}\),
\[\begin{cases}\overline{\mathrm{Eu}}(e_{j}+\nabla_{\mathbf{y}}\Pi_{j})=\frac{1 }{\mathrm{Re}}\nabla_{\mathbf{y}}\cdot(\mu(u_{0})2\varepsilon_{\mathbf{y}}( \mathbf{w}_{j})-\frac{2}{3}(\nabla_{\mathbf{y}}\cdot\mathbf{w}_{j})\mathbf{I }),&\text{ in }\mathcal{P}\,\\ \nabla_{\mathbf{y}}\cdot(\rho(u_{0})\mathbf{w}_{j})=0,&\text{ in }\mathcal{P}\,\\ \mathbf{w}_{j}=-\lambda(\partial_{\mathbf{n}}\mathbf{w}_{j,\mathbf{t}}) \mathbf{t},&\text{ on }\Gamma\,\\ \Pi_{j},\mathbf{w}_{j}\text{ are $Y$-periodic and }\int_{\mathcal{P}}\Pi_{j}\,\mathrm{d}\mathbf{y}=0,\end{cases} \tag{42}\]
for \(j\in\{1,\ldots,\mathrm{d}\}\) and
\[\begin{cases}\overline{\mathrm{Eu}}\nabla_{\mathbf{y}}\Pi_{0}=\frac{1}{ \mathrm{Re}}\nabla_{\mathbf{y}}\cdot(\mu(u_{0})2\varepsilon_{\mathbf{y}}( \mathbf{w}_{0})-\frac{2}{3}(\nabla_{\mathbf{y}}\cdot\mathbf{w}_{0})\mathbf{I })&\\ -\frac{1}{\mathrm{Ca}}\frac{3\xi}{2}\nabla_{\mathbf{y}}\cdot(\nabla_{\mathbf{y} }u_{0}\otimes\nabla_{\mathbf{y}}u_{0}),&\text{ in }\mathcal{P}\,\\ \nabla_{\mathbf{y}}\cdot(\rho(u_{0})\mathbf{w}_{0})=0,&\text{ in }\mathcal{P}\,\\ \mathbf{w}_{0}=-\lambda(\partial_{\mathbf{n}}\mathbf{w}_{0,\mathbf{t}}) \mathbf{t},&\text{ on }\Gamma\,\\ \Pi_{0},\mathbf{w}_{0}\text{ are $Y$-periodic and }\int_{\mathcal{P}}\Pi_{0}\,\mathrm{d}\mathbf{y}=0,\end{cases} \tag{43}\]
From these solutions we define tensors capturing the effective behavior of the fluid, denoting the phase indicator of phase \(k\) as
\[u^{(k)}=\begin{cases}u_{0},&k=1\\ 1-u_{0},&k=2\end{cases} \tag{44}\]
and the components of \(\mathbf{w}_{j}\) as \(\mathbf{w}_{j,i}\).
\[\mathcal{K}^{(k)}_{ij}:=\frac{1}{\varphi}\int_{\mathcal{P}}u^{(k)}\mathbf{w}_{j,i}\,\mathrm{d}\mathbf{y}\qquad\mathcal{M}^{(k)}_{i}:=\frac{1}{\varphi}\int_{ \mathcal{P}}u^{(k)}\mathbf{w}_{0,i}\,\mathrm{d}\mathbf{y} \tag{45}\]
Multiplying (40) with \(u^{(k)}\) and integrating over \(\mathcal{P}\), we obtain the macroscopic velocity equations
containing the effective parameters.
\[\begin{split}\bar{\mathbf{v}}^{(k)}=\varphi^{-1}\int_{\mathcal{P}}u^{( k)}\mathbf{v}_{0}\,\mathrm{d}\mathbf{y}&=\varphi^{-1}\int_{ \mathcal{P}}u^{(k)}(-\sum_{j=1}^{\mathrm{d}}\mathbf{w}_{j}\partial_{\mathbf{x}_ {j}}p_{0}-\mathbf{w}_{0})\,\mathrm{d}\mathbf{y}\\ &=-\sum_{j=1}^{\mathrm{d}}\left(\varphi^{-1}\int_{\mathcal{P}}u^{( k)}\mathbf{w}_{j}\,\mathrm{d}\mathbf{y}\right)(\partial_{\mathbf{x}_{j}}p_{0})- \varphi^{-1}\int_{\mathcal{P}}u^{(k)}\mathbf{w}_{0}\,\mathrm{d}\mathbf{y}\\ &=-\mathcal{K}^{(k)}\nabla_{x}p_{0}-\mathcal{M}^{(k)}\.\end{split} \tag{46}\]
### 3.3 Two-scale model
In summary we arrive at a micro-macro model, coupling a Darcy-scale flow problem reminiscent of the extended two-phase Darcy's law to \(d+2\) pore-scale cell-problems on \(Y=[0,1]^{d}\) at every point of the domain. We solve for five macroscopic unknowns \(p_{0}(t,\mathbf{x})\), \(S^{(1)}(t,\mathbf{x})\), \(S^{(2)}(t,\mathbf{x})\), \(\bar{\mathbf{v}}^{(1)}(t,\mathbf{x})\), \(\bar{\mathbf{v}}^{(2)}(t,\mathbf{x})\) with \(1=S^{(1)}+S^{(2)}\), and for every set of cell problems the local unknowns \(u_{0}\) and \(\mathbf{w}_{i}\), \(i=0,\ldots\mathrm{d}\).
The macro-scale model is reminiscent of Darcy's law, with one shared pressure unknown as well as additional effective parameters \(\mathcal{M}^{(k)}\) which, just as \(\mathcal{K}^{(k)}\), are computed from cell-problem variables. For convenience we drop the subscript \(\mathbf{x}\) on the macroscopic gradient.
\[0 =\frac{\partial}{\partial t}S^{(k)}+\nabla\cdot\bar{\mathbf{v}}^ {(k)},\text{ in }\Omega,\,k=1,2 \tag{47a}\] \[\bar{\mathbf{v}}^{(k)} =-\mathcal{K}^{(k)}\nabla p-\mathcal{M}^{(k)},\text{ in }\Omega,\,k=1,2\] (47b) \[\mathcal{K}^{(k)}_{ij} =\frac{1}{|\mathcal{P}|}\int_{\mathcal{P}}u^{(k)}(\mathbf{w}_{j} )_{i}\,\mathrm{d}\mathbf{y}\,\qquad\mathcal{M}^{(k)}_{i}=\frac{1}{|\mathcal{P}|}\int_{ \mathcal{P}}u^{(k)}(\mathbf{w}_{0})_{i}\,\mathrm{d}\mathbf{y} \tag{47c}\]
We see that the effective parameters \(\mathcal{K}^{(k)}\) represent the phase-specific effective mobilities, containing information about both the absolute permeability of the porous medium and the interactions between the two fluids. Both contributions are in general anisotropic and cannot easily be separated. For isotropic geometries the intrinsic permeability \(\kappa_{\mathrm{abs}}\) is a scalar and the effective mobility can be written as \(\kappa_{\mathrm{abs}}\mathcal{K}^{(k)}_{\mathrm{rel}}(u_{0})/\mu_{k}\) with the relative permeability \(\mathcal{K}^{(k)}_{\mathrm{rel}}\in\mathbb{R}^{\mathrm{d}\times\mathrm{d}}\) depending on the local phase distribution \(u_{0}\) through the cell problems (49). The second effective parameter \(\mathcal{M}^{(k)}\) captures the effective flow due to surface tension forces between the two fluids.
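Once discrete cell-problem solutions are available, the averages in (47c) reduce to plain quadratures. A sketch of this post-processing step (the array layout and function name are hypothetical choices for illustration, not the DuMu\({}^{\text{x}}\) implementation):

```python
import numpy as np

def effective_tensors(u0, w, w0, mask, h):
    """Quadrature for the effective parameters in (47c) on a 2D cell grid.

    u0   : (ny, nx) phase field
    w    : (d, d, ny, nx) cell solutions, w[j, i] = i-th component of w_j
    w0   : (d, ny, nx) surface-tension velocity contribution
    mask : (ny, nx) boolean, True inside the pore space P
    h    : uniform grid spacing
    """
    pore_vol = mask.sum() * h**2                    # |P|
    d = w0.shape[0]
    K, M = {}, {}
    for k, uk in ((1, u0), (2, 1.0 - u0)):          # phase indicators u^(k), Eq. (44)
        K[k] = np.array([[(uk * w[j, i] * mask).sum() * h**2 / pore_vol
                          for j in range(d)] for i in range(d)])
        M[k] = np.array([(uk * w0[i] * mask).sum() * h**2 / pore_vol
                         for i in range(d)])
    return K, M

# Darcy-scale phase velocity, Eq. (47b):  v^(k) = -K^(k) grad p - M^(k)
```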
In return, the cell problems depend on the local value of the macroscopic saturation through a constraint on the phase-field function.
\[\begin{cases}\nabla_{\mathbf{y}}\cdot(\mathbf{v}_{0}u_{0})=S\xi\nabla_{ \mathbf{y}}^{2}u_{0}-S\xi^{-1}P^{\prime}(u_{0})&\text{ in }\mathcal{P}\,\\ \nabla_{\mathbf{y}}u_{0}\cdot\mathbf{n}=-\xi^{-1}\cos(\theta_{\mathrm{eq}}) \sqrt{2P(u_{0})}&\text{ on }\Gamma\,\\ u_{0}\text{ is }Y\text{-periodic and }\int_{\mathcal{P}}u_{0}\,\mathrm{d} \mathbf{y}=\varphi S^{(1)}\,\end{cases} \tag{48}\]
with \(\mathbf{v}_{0}\) depending on the global pressure gradient and local flow functions \(\mathbf{w}_{j}\) as defined in (40). These in turn depend on the phase field and are computed through the cell problems defined in (42) and (43)
\[\begin{cases}\overline{\mathrm{Eu}}(e_{j}+\nabla_{\mathbf{y}}\Pi_{j})=\frac{1 }{\mathrm{Re}}\nabla_{\mathbf{y}}\cdot(\mu(u_{0})2\varepsilon_{\mathbf{y}}( \mathbf{w}_{j})-\frac{2}{3}(\nabla_{\mathbf{y}}\cdot\mathbf{w}_{j})\mathbf{I}),&\text{ in }\mathcal{P}\,\\ \nabla_{\mathbf{y}}\cdot(\rho(u_{0})\mathbf{w}_{j})=0,&\text{ in }\mathcal{P}\,\\ \mathbf{w}_{j}=-\lambda(\partial_{\mathbf{n}}\mathbf{w}_{j,\mathbf{t}}) \mathbf{t},&\text{ on }\Gamma\,\\ \Pi_{j},\mathbf{w}_{j}\text{ are }Y\text{-periodic and }\int_{\mathcal{P}}\Pi_{j}\, \mathrm{d}\mathbf{y}=0,\end{cases} \tag{49}\]
for \(j\in\{1,\ldots,d\}\) and
\[\begin{cases}\overline{\operatorname{Eu}}\nabla_{\mathbf{y}}\Pi_{0}=\frac{1}{ \operatorname{Re}}\nabla_{\mathbf{y}}\cdot(\mu(u_{0})2\varepsilon_{\mathbf{y}}( \mathbf{w}_{0})-\frac{2}{3}(\nabla_{\mathbf{y}}\cdot\mathbf{w}_{0})\mathbf{I}) \\ \qquad\qquad\qquad-\frac{1}{\operatorname{Ca}}\frac{3\xi}{2}\nabla_{\mathbf{y} }\cdot(\nabla_{\mathbf{y}}u_{0}\otimes\nabla_{\mathbf{y}}u_{0}),&\text{ in }\mathcal{P}\,\\ \nabla_{\mathbf{y}}\cdot(\rho(u_{0})\mathbf{w}_{0})=0,&\text{ in }\mathcal{P}\,\\ \mathbf{w}_{0}=-\lambda(\partial_{\mathbf{n}}\mathbf{w}_{0,\mathbf{t}}) \mathbf{t},&\text{ on }\Gamma\,\\ \Pi_{0},\mathbf{w}_{0}\text{ are }Y\text{-periodic and }\int_{\mathcal{P}}\Pi_{0}\,\mathrm{d} \mathbf{y}=0.\end{cases} \tag{50}\]
## 4 Numeric Investigation
In the following we present a simulation using the tightly coupled pore-scale model from Section 2.2, as well as the investigation of the cell problems for velocity components \(\mathbf{w}_{j}\) and the behavior of the computed effective parameters. For the former we simulate the movement of two fluids through a channel geometry modeling a single pore. For the latter we consider two exemplary geometries, denoted "cross" and "obstacle", and investigate the relative permeability saturation relation for different fluid properties and fluid distributions.
### 4.1 Implementation
For the numerical investigation all model equations are implemented in DuMu\({}^{\text{x}}\) [15], using Newton's method as a nonlinear solver and backward Euler for temporal discretization in transient problems. The equations are discretized in space with the finite volume method on a regular rectangular grid, using Dune-SPGrid [19] and an adapted Dune-Subgrid [12] to model the considered geometries with periodic boundaries. For the phase-field and pressure unknowns a cell-centered approach is used, with control volumes equal to grid cells and degrees of freedom placed at their centers. Fluxes over control-volume faces are approximated using the two adjacent values. For the velocities a staggered discretization is used (see Figure 4), with separate degrees of freedom for each velocity component placed at the edges between grid cells and control volumes centered around them.
While the cell problem for the phase field uses only the cell-centered discretization, the pore-scale problem and the cell problems for velocity contributions require a coupled system of cell-centered pressures and staggered velocities. In DuMu\({}^{\text{x}}\) this is achieved using a multidomain formulation with a coupling manager handling the volume coupling of the different discretizations of a shared domain [15]. For the pore-scale model the existing coupling manager for Navier-Stokes models was extended to additionally exchange phase-field data.

Figure 4: Staggered discretization
For the phase-field equation a boundary condition prescribing the flux implements the contact angle condition. In the velocity problems a no-flux condition for the mass conservation and homogeneous Dirichlet conditions for the normal velocity are used at the fluid-solid interfaces. To implement Navier slip boundary conditions a solution-dependent flux can be prescribed. In the case of the cell problem capturing surface tension effects the additional flux can be given based on the prescribed contact angle. On the outer boundary of the reference cell \(Y\) periodicity conditions are prescribed.
### 4.2 Pore-scale simulation
We present an exemplary simulation of the fully coupled and non-dimensionalized pore-scale model from Section 3.1. In a channel geometry containing both phases, the system conforms to a prescribed contact angle and the interface is advected by a pressure gradient. This could be applied to simulating two-phase flow through a narrowing in a porous medium.
We consider a two-dimensional domain \(\Omega=(0,0.2)\times(0,0.1)\). At the inlet (\(x=0\)) and outlet (\(x=0.2\)) we prescribe fixed pressures and fluxes for the phase-field equation. The top and bottom boundaries are walls with slip boundary conditions for the velocity and no-flux boundary conditions for the density. For the phase field we apply the contact angle boundary condition (16) with contact angle \(\theta_{\text{eq}}=\pi/3\).
We initialize the fluid distribution with fluid 1 on the left and fluid 2 on the right with a curved and diffuse interface between them, see Figure 5. We prescribe a non-dimensional slip length of \(0.01\) and a surface tension of \(10^{-6}\mathrm{m}^{2}\). The simulation was run with constant fluid properties, using a density of \(10^{2}\mathrm{kg/m}^{3}\) and a dynamic viscosity of \(10^{-2}\mathrm{kg/(m\cdot s)}\). Applying a pressure drop of \(450\mathrm{kg/(m\cdot s^{2})}\) leads to a maximum flow velocity of \(5.7\cdot 10^{-4}\mathrm{m/s}\).

Figure 5: Initial condition of pore-scale simulation of two-phase flow through a channel. \(u=1\) (red) corresponds to fluid 1, \(u=0\) (blue) to fluid 2.
Figure 6 shows the fluid distribution at \(t=40\mathrm{s}\). Due to very weak surface tension forces in this setup, the velocity field does not deviate significantly from a parabolic flow profile. In the center of the channel the interface is advected by these higher velocities, leading to an inversion of the curvature. At the three-phase contact point the prescribed contact angle is successfully maintained despite the near-parabolic profile.
### Cell problems
We consider the cell problems (49) and (50) for the velocity components \(\mathbf{w}_{j}\) and investigate the behavior of the resulting effective parameters. The derived two-scale model is similar to two-phase Darcy's law, with effective parameters computed from cell problems rather than prescribed relative permeability saturation curves. We investigate how the computed parameters compare to commonly used curves and under which conditions core assumptions, such as a monotone relation, are violated. Anisotropic permeabilities are not the focus of this investigation and we choose simple geometries such that the intrinsic permeability of the pore geometry can be separated from fluid-fluid interactions. This allows us to isolate the influence of the fluid distribution on the relative permeability in a simple manner.
For two different geometries we vary the local phase field to determine the effects of saturation and phase distribution and the impact of the dimensionless ratios for viscosity and density as well as the surface tension. Both the slip length and the contact angle influence the results, see e.g. the investigation of dynamic contact angles in [17], but will not be considered here.
Figure 6: Phase distribution at \(t=40\mathrm{s}\) for the pore-scale simulation, overlaid with the velocity field. \(u=1\) (red) corresponds to fluid \(1\), \(u=0\) (blue) to fluid \(2\).
Instead, these effects will be controlled by a no-slip condition for the velocity components and a homogeneous Neumann condition for the phase field, encoding a neutral contact angle. Since our goal is to address the behavior of the effective parameters, we do not couple the flow to the cell problem for the phase distribution (48). Instead, we manually prescribe a phase-field function corresponding to a certain saturation and run a few timesteps of the instationary version of the phase-field equation (48) without advection in order to enforce the neutral contact angle. This fluid distribution is then used to solve (49) and (50).
The two domains under consideration vary in geometry and porosity, see Figure 7. The first setup, "obstacle", contains a square of side length 0.45 placed in the center of the domain, leaving a void fraction of 79.75%. The second investigated geometry, denoted "cross", is a cross of channels with diameter 0.3, resulting in a porosity of 51%.
We consider four core scenarios of which the first three investigate the effective mobility \(\mathcal{K}^{(k)}\) for different ratios of viscosity \(M\) and density \(R\). In the last setup the surface tension tensor \(\mathcal{M}^{(k)}\) is analyzed, with identical fluid properties for both fluids. The dimensionless numbers in the cell problems are chosen as 1.
For all four cases we fix the center of a circle with varying radius \(r\) and assign cells inside this circle to phase 1, with the rest of the void space being filled with fluid 2. This initial data already contains a diffuse interface, prescribed by the following profile in which \(r\) denotes the signed distance to the circle boundary.
\[u(r)=\frac{1}{1+\exp(5/\xi\cdot r)}. \tag{51}\]
This initial data is evolved under the transient version of the Allen-Cahn equation (48) without advection to enforce a neutral contact angle (see Figure 8). The resulting phase field is fed into the different cell problems for flow velocity components (49), (50), which are then solved.
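The following Python sketch illustrates this preprocessing under simplifying assumptions: a unit cell without solid matrix, \(r\) interpreted as the signed distance to the circle boundary, the double well \(P(u)=8u^{2}(1-u)^{2}\), and a generic scaling \(\partial_{t}u=\nabla^{2}u-P^{\prime}(u)/\xi^{2}\) in place of the exact form of (48).

```python
import numpy as np

# Sketch of the preprocessing: prescribe the diffuse circle via (51) and relax
# it with a transient Allen-Cahn step without advection. Assumptions: unit
# cell [0,1]^2 without solid matrix, r = signed distance to the circle
# boundary, P(u) = 8 u^2 (1-u)^2, simplified scaling of the cell problem (48).
n, xi, r0 = 64, 0.05, 0.3
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
X, Y = np.meshgrid(x, x, indexing="ij")
r = np.hypot(X - 0.5, Y - 0.5) - r0               # signed distance to interface
u = 1.0 / (1.0 + np.exp(5.0 / xi * r))            # initial condition (51)

dt = 0.2 * h * h                                  # explicit stable time step
for _ in range(400):
    up = np.pad(u, 1, mode="edge")                # zero-flux (neutral angle)
    lap = (up[2:, 1:-1] + up[:-2, 1:-1] + up[1:-1, 2:] + up[1:-1, :-2]
           - 4.0 * u) / (h * h)
    dP = 16.0 * u * (1.0 - u) * (1.0 - 2.0 * u)   # P'(u) of the assumed well
    u += dt * (lap - dP / (xi * xi))              # Allen-Cahn without advection

print("saturation S1 ~", u.mean())                # cf. the integral in (52)
```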
In the "cross" cell geometry we observe recirculation patterns (Figure 14), and e.g. for a horizontal pressure gradient forcing flow from left to right the recirculation near the top and bottom of the domain leads to flow from right to left at these edges of the periodic cell. Depending
Figure 7: Investigated cell geometries "obstacle" and "cross". \(u=1\) (red) corresponds to fluid 1, \(u=0\) (blue) to fluid 2.
on the fluid distribution one of the phases may contain primarily these flows in the opposite direction to the main flow, leading to negative integrated velocities. As the effective mobility should capture net flow through the cell, we exclude these recirculations based on the computed flow field, avoiding negative mobilities. The computed velocity field is loaded into ParaView [4], computing streamlines from the boundaries and marking affected cells. This process is not performed for the flow problem driven by surface tension (50).
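A drastically simplified stand-in for this masking step (sign-based instead of streamline-based, purely for illustration; the actual postprocessing traces streamlines in ParaView) could look as follows.

```python
import numpy as np

# Crude stand-in for the streamline-based exclusion of recirculation zones:
# for a pressure gradient driving flow in +x, flag cells whose x-velocity
# points against the net flow and drop them from the mobility integral.
def recirculation_mask(wx, direction=1.0):
    return direction * wx < 0.0

def net_mobility(wx, u_phase, h, direction=1.0):
    keep = ~recirculation_mask(wx, direction)
    return (wx * u_phase * keep).sum() * h * h

rng = np.random.default_rng(0)
wx = rng.normal(0.5, 0.4, size=(32, 32))          # toy velocity component
u1 = (rng.random((32, 32)) > 0.5).astype(float)   # toy phase indicator
print(net_mobility(wx, u1, h=1.0 / 32))
```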
After postprocessing we compute the weighted integrals over marked cells to obtain the effective parameters. The saturation and an approximation to the specific interfacial area are determined by integrals over the entire domain.
\[S^{(1)}=\frac{1}{|\mathcal{P}|}\int_{\mathcal{P}}u_{0}\,\mathrm{d}\mathbf{y} \,,\quad A=\frac{1}{|\mathcal{P}|}\int_{\mathcal{P}}\frac{4}{\xi}u_{0}(1-u_{0} )\,\mathrm{d}\mathbf{y} \tag{52}\]
For the effective mobility \(\mathcal{K}^{(k)}\), the computed entries are divided by the absolute permeability and multiplied with the viscosity, yielding a relative permeability. The results are plotted over the saturation of phase 1.
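For reference, the postprocessing in (52) and the normalization just described reduce to weighted sums on a uniform grid; a minimal sketch follows (the array names and the midpoint quadrature are our assumptions).

```python
import numpy as np

def saturation_and_area(u0, pore, xi, h):
    """Saturation and approximate specific interfacial area, cf. (52)."""
    vol = pore.sum() * h * h                      # |P| on a uniform grid
    S1 = (u0 * pore).sum() * h * h / vol
    A = (4.0 / xi) * (u0 * (1.0 - u0) * pore).sum() * h * h / vol
    return S1, A

def relative_permeability_entry(w, u0, pore, h, k_abs, mu):
    """Phase-weighted velocity integral, divided by the absolute permeability
    and multiplied with the viscosity, as described above."""
    K_entry = (w * u0 * pore).sum() * h * h       # one entry of K^(k)
    return K_entry * mu / k_abs

n = 64; h = 1.0 / n; xi = 0.05
u0 = np.full((n, n), 0.5); pore = np.ones((n, n), dtype=bool)
print(saturation_and_area(u0, pore, xi, h))       # (0.5, 20.0) for this toy field
print(relative_permeability_entry(np.full((n, n), 1e-3), u0, pore, h,
                                  k_abs=1e-2, mu=1.0))
```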
#### 4.3.1 Equal fluid properties
In the first case we consider two fluids with equal viscosity and density (\(M=R=1\)). The cell problems for velocity contributions driven by the global pressure gradient treat the two-phase system as single-phase flow. The effective mobilities are obtained through integration weighted by the phase-indicator \(u^{(k)}\) and the resulting relative permeabilities always sum to 1, see Figure 9.
Due to the symmetries in the fluid distribution the relative permeability is approximately a diagonal matrix and we show only the dominant diagonal entries. For the isotropic setup in the "obstacle" geometry the entries are equal and the relative permeability reduces to a scalar value.
Due to the choice of fluid distribution for every radius \(r\) the domain corresponding to fluid 1 contains that for lower radii \(r^{\prime}<r\) and thus lower saturation. Equivalently, for phase-field
Figure 8: Preprocessing to enforce a neutral contact angle. Initial condition from (51) (left), and the resulting phase field with desired contact angle (right).
functions \(u_{1}<u_{2}\) the corresponding saturations fulfil \(S_{1}<S_{2}\). Under this condition the relative permeability is monotone with respect to the saturation. For differing fluid distributions this monotonicity is not guaranteed and a higher saturation concentrated at areas of lower velocities can result in a lower permeability. Figure 10 shows changing relative permeabilities for fixed saturation and varying fluid distributions obtained by changing the center of the initial circular subdomain for fluid 1. This information about local phase distribution and its impact on macroscopic flow can only be captured by solving the cell problems.
#### 4.3.2 Influence of viscosity differences
Next we investigate the effect of the viscosity ratio by considering \(M=2\); the cell problems then correspond to an incompressible fluid with spatially varying viscosity.
In stark contrast to typically used relative permeability curves we observe non-monotone behavior for the case "obstacle" with relative permeabilities of the more viscous fluid reaching values above 1 for saturations above 0.8, up to \(\mathcal{K}_{xx}^{(1)}=1.039\) at a saturation of 0.95 (see Figure 11). Due to the chosen fluid distribution the less viscous fluid 2 is located on the surface of the solid matrix at low saturations (high saturation for phase 1). This reduces the resistance exerted
Figure 10: Impact of phase distribution on relative permeability for equal fluid properties with radius \(r=0.15\) (left) and \(r=0.3\) (right).
Figure 9: Relative-permeability-saturation relation for equal fluid properties.
on the first fluid and results in higher velocities, especially for higher viscosity ratios \(M\). Such lubricating effects have been observed experimentally in different settings [6].
#### 4.3.3 Influence of density differences
In the case of non-trivial density ratio \(R\), the cell problem (49) corresponds to the Stokes equations for a quasi-compressible fluid. For ratio \(R=2\) (Figure 12) no non-monotone behavior is observed, but the curves are notably different from those obtained for the case of equal fluid properties. For density ratio \(R=10\) (Figure 13) non-monotonicity is observed for a horizontal pressure gradient in the "cross" setup, with the velocity in the lighter fluid increasing as the main flow path is filled with the denser fluid. The fluid distribution and computed flow field for the highest permeability at a saturation of \(0.45\) are shown in Figure 14. In Figure 15 the results for the "obstacle" geometry are compared to those for equal fluid properties, which highlights the fact that the sum of the relative permeabilities is less than one.
Figure 11: Relative-permeability-saturation relation for viscosity ratio \(M=2\).
Figure 12: Relative-permeability-saturation relation for density ratio \(R=2\).
Figure 14: Phase distribution and velocity component for density ratio \(R=10\).
Figure 13: Relative-permeability-saturation relation for density ratio \(R=10\).
#### 4.3.4 Surface tension tensor
For the isotropic geometry and fluid distribution considered above the velocities driven by surface tension cancel out in the integral and the computed effective parameter is equal to 0 (Figure 16, left). For asymmetric fluid distributions (Figure 16, right) a non-zero contribution is obtained, but no visible trends are observed as the saturation changes. The redistribution of fluids driven by surface tension forces leads to a net flow for the two phases, which corresponds to \(\mathcal{M}^{(k)}\) being non-zero. The size and direction of this net flow and hence the size and sign of the effective parameter \(\mathcal{M}^{(k)}\) depends highly on the fluid distribution. Such effects can only be accounted for by solving the cell problem (50).
### Discussion
For equal fluid properties and fixed fluid distributions we obtain monotone relative permeability saturation curves comparable to commonly used relations such as Brooks-Corey [8] or van Genuchten [24]. However, even in this simple case the relative permeability depends not only on the saturation but also on the distribution of the fluids: as observed in Section 4.3, no relation can be given without accounting for the local distribution of the fluids.
For differing viscosities the model is able to reproduce non-monotone relative permeabilities with values above 1 as discussed in [6]. When the less viscous fluid coats the solid surface it can reduce the resistance exerted on the more viscous fluid, resulting in higher velocities. Due to the different fluid distribution such a coating is not observed to the same extent for the "cross" setup, leading to monotone relations for the relative permeability. These effects depend strongly on the local phase distribution and cannot be captured solely by the saturation without additional assumptions.
For moderate density differences the combined permeability is reduced and for higher density ratios non-monotone behavior can be observed. We remark that for very high differences in fluid properties, this should be considered in the derivation of the cell problems and the two-scale problem, which would lead to a different appearance of the effective equations. To obtain equations similar to the often used two-phase Darcy's law, we use the assumptions presented in Section 3.2.
The derived two-scale model contains an additional effective parameter which accounts for the influence of surface tension between the fluid phases. This is not a common part of the extended
Figure 15: Comparison of relative permeability saturation relation for "obstacle" geometry.
Darcy's law and there is no counterpart to compare it to. A similar cell problem and the resulting effective parameter have been investigated in [22], incorporating the influence of solute-dependent surface tension. Here only constant surface tension is considered. For symmetric distributions the effective parameter capturing surface tension effects disappears as expected. For anisotropic phase distributions the flow driven by surface tension does not cancel out and the associated cell problem is able to capture significant contributions to effective flow behavior. The size and direction of flow resulting from surface tension, and thus the magnitude and sign of the effective parameter, depend strongly on the local distribution of fluids.
The numeric investigation of the effective parameters highlights how different fluid distributions with equal saturation can result in very different net flow and effective behavior. Increased relative permeability due to viscosity differences and significant surface tension effects further emphasize the importance of resolving the local fluid morphology.
## 5 Conclusion
We derived a two-scale model for two-phase flow in porous media using homogenization and investigated the dependence of effective parameters on the fluid distribution in the pore-scale cell problems.
The model consists of macro-scale equations similar to the extended Darcy's law with effective parameters computed from solutions to cell problems defined at every macroscopic point. One of the effective parameters can be understood as an effective mobility, while the second effective parameter represents effects of interfacial tension on the effective flow. On the pore-scale the phase distribution is captured by a phase field, using an advective Allen-Cahn formulation. For the velocity field we use a stationary Stokes equation with phase-dependent fluid properties and an additional source term incorporating surface tension forces.
The two-scale structure allows us to numerically investigate the effects of pore-scale information on effective behavior. We construct local fluid distributions for the pore scale and solve the cell
Figure 16: Symmetric and asymmetric case with center at (0.35, 0.3).
problems associated with pressure-driven velocity contributions.
The similarity to two-phase Darcy's law motivates a comparison of the dependence of relative permeabilities on saturation. By selecting isotropic pore geometries, relative permeability tensors can be extracted from the effective parameters \(\mathcal{K}^{(k)}\) in our model, and for isotropic fluid distributions these simplify to diagonal matrices or scalar values. As one of the core qualities of relative-permeability-saturation curves we investigate the conditions under which a monotone relationship is obtained. If a common fluid morphology is maintained for different saturations, then for equal fluid properties monotone curves, similar to commonly used functions like van Genuchten and Brooks-Corey, are obtained. The calculated curves, however, depend strongly on the fluid distribution and change with the pore geometry, highlighting the need to account for pore-scale information. For different viscosities the model can produce non-monotone relationships and relative permeabilities greater than 1. This can be related to a lubricating effect of the less viscous fluid coating the surface of the solid matrix, an effect observed experimentally. For moderate density ratios the effective total permeability decreases and no non-monotone behavior was observed.
We also performed numeric simulations for the cell problem corresponding to the effective parameter \(\mathcal{M}^{(k)}\), capturing effects of surface tension forces. For isotropic geometry and fluid morphologies these forces result in no net flow and a vanishing effective parameter. For asymmetric fluid distributions we obtain significant net flows captured by \(\mathcal{M}^{(k)}\).
Our two-scale model for two-phase flow captures pore-scale fluid-fluid interactions related to surface tension forces as well as differences in viscosity and density. Through numeric investigation of the effective parameters we highlight the importance of local fluid distribution for accurately capturing effective flow behavior.
**Acknowledgments.** Funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2075 - 390740016. We acknowledge the support by the Stuttgart Center for Simulation Science (SimTech). We thank the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for supporting this work by funding SFB 1313, Project Number 327154368.
The authors would like to thank Kundan Kumar (University of Bergen) for useful discussions on monotonicity of relative permeability saturation curves.
## Appendix A Sharp-interface limit
In the limit \(\xi\to 0\) the phase-field model presented in Section 2.2 recovers the sharp-interface model from Section 2.1. Following the approach of matched asymptotic expansions [9] we consider the behavior at the transition zone and in the bulk phases separately, connecting them with matching conditions. Let \(L\) be a reference length and \(\bar{\xi}=\xi/L\) the non-dimensional interface width.
We assume asymptotic expansions in \(\bar{\xi}\) that capture the behavior away from the interface (outer expansions) for unknowns \(\psi\in\{u,p,\mathbf{v}\}\)
\[\psi^{\text{out}}(t,\mathbf{x})=\sum_{k=0}^{\infty}\bar{\xi}^{k}\psi_{k}^{ \text{out}}(t,\mathbf{x}). \tag{53}\]
For inner expansions at the diffuse interface we consider local curvilinear coordinates. Let \(\Gamma_{\bar{\xi}}(t)=\{\mathbf{y}_{\bar{\xi}}\in\mathcal{P}\mid u(t,\mathbf{ y}_{\bar{\xi}})=1/2\}\) be the reconstructed interface. With a parametrization \(\mathbf{s}\) along \(\Gamma_{\bar{\xi}}(t)\) and normal \(\mathbf{n}_{\bar{\xi}}(t,\mathbf{s})\) pointing into fluid 2, we define a signed distance \(r\) for points near the interface,
such that
\[\mathbf{x}=\mathbf{y}_{\bar{\xi}}(t,\mathbf{s})+r\mathbf{n}_{\bar{\xi}}(t,\mathbf{ s}). \tag{54}\]
Note that \(\partial_{t}r=-v_{n}\), with \(v_{n}\) the velocity of the interface in the direction of \(\mathbf{n}_{\bar{\xi}}\) corresponding to the sharp-interface velocity \(V_{\Gamma_{f}}\). Furthermore it can be shown (see [9]) that for mean and Gaussian interface curvatures \(\kappa\) and \(\Pi\)
\[|\nabla r|=1,\quad\nabla r\cdot\nabla s_{i}=0,\quad\nabla^{2}r=\frac{\kappa+2 \Pi r}{1+\kappa r+\Pi r^{2}}\.\]
The outer expansions define the interface \(\Gamma_{0}^{\mathrm{out}}(t)=\{\mathbf{x}\in\mathcal{P}\mid u_{0}^{\mathrm{ out}}(t,\mathbf{x})=1/2\}\) with normal vector \(\mathbf{n}_{0}\), interfacial velocity \(v_{n,0}\) and mean curvature \(\kappa_{0}\). The point \(\mathbf{y}_{\bar{\xi}}\) can be expressed through expansions
\[\mathbf{y}_{\bar{\xi}}=\sum_{k=0}^{\infty}\bar{\xi}^{k}\mathbf{y}_{k}\]
with \(\mathbf{y}_{0}\in\Gamma_{0}^{\mathrm{out}}(t)\), and similarly \(\mathbf{n}_{\bar{\xi}}=\mathbf{n}_{0}+\bar{\xi}\mathbf{n}_{1}+\mathcal{O}( \bar{\xi}^{2})\). Defining \(z=r/\bar{\xi}\), we consider the inner expansions in the variables \(z\) and \(\mathbf{s}\)
\[\psi^{\mathrm{in}}(t,\mathbf{x})=\sum_{k=0}^{\infty}\bar{\xi}^{k}\psi^{ \mathrm{in}}_{k}(t,z,\mathbf{s}). \tag{55}\]
The derivatives are rewritten accordingly, yielding [9]
\[\partial_{t}\psi =-\bar{\xi}^{-1}v_{n,0}\partial_{z}\psi^{\mathrm{in}}+(\partial_{ t}+\partial_{t}\mathbf{s}\cdot\nabla_{\mathbf{s}})\psi^{\mathrm{in}}+ \mathcal{O}(\bar{\xi})\, \tag{56}\] \[\nabla_{\mathbf{x}}\psi =\bar{\xi}^{-1}\partial_{z}\psi^{\mathrm{in}}\,\mathbf{n}_{0}+ \nabla_{\Gamma}\psi^{\mathrm{in}}+\mathcal{O}(\bar{\xi})\,\] \[\nabla_{\mathbf{x}}\cdot\psi =\bar{\xi}^{-1}\partial_{z}\psi^{\mathrm{in}}\cdot\mathbf{n}_{0}+ \nabla_{\Gamma}\cdot\psi^{\mathrm{in}}+\mathcal{O}(\bar{\xi})\,\] \[\nabla_{\mathbf{x}}^{2}\psi =\bar{\xi}^{-2}\partial_{zz}\psi^{\mathrm{in}}+\bar{\xi}^{-1} \kappa_{0}\partial_{z}\psi^{\mathrm{in}}+\mathcal{O}(1)\,\]
using the expansions \(v_{n}=v_{n,0}+\mathcal{O}(\bar{\xi})\) and \(\kappa=\kappa_{0}+\mathcal{O}(\bar{\xi})\). For fixed \(t\) and \(\mathbf{s}\), let \(\mathbf{y}_{1/2\pm}\) denote the limit for \(r\searrow 0\) and \(r\nearrow 0\) respectively of a point \(\mathbf{y}\) in curvilinear coordinates (54). The values of outer expansions at \(\mathbf{y}_{1/2\pm}\) are paired with the limit values of inner expansions for \(z\to\pm\infty\) in the following matching conditions.
\[\lim_{z\to\pm\infty}\psi^{\mathrm{in}}_{0}(t,z,\mathbf{s}) =\psi^{\mathrm{out}}_{0}(t,\mathbf{y}_{1/2\pm})\, \tag{57}\] \[\lim_{z\to\pm\infty}\partial_{z}\psi^{\mathrm{in}}_{0}(t,z,\mathbf{s}) =0\,\] \[\lim_{z\to\pm\infty}(\psi^{\mathrm{in}}_{1}(t,z,\mathbf{s})-(z+y_{1})\nabla\psi^{\mathrm{out}}_{0}(t,\mathbf{y}_{1/2\pm})\cdot\mathbf{n}_{0}) =\psi^{\mathrm{out}}_{1}(t,\mathbf{y}_{1/2\pm})\,\] \[\lim_{z\to\pm\infty}\partial_{z}\psi^{\mathrm{in}}_{1}(t,z,\mathbf{s}) =\nabla\psi^{\mathrm{out}}_{0}(t,\mathbf{y}_{1/2\pm})\cdot\mathbf{n}_{0}\.\]
### Outer expansions
Due to the polynomial structure of \(P^{\prime}\) it can be expanded as
\[P^{\prime}(u_{0}+\bar{\xi}u_{1}+\mathcal{O}(\bar{\xi}^{2}))=P^{\prime}(u_{0})+ \bar{\xi}u_{1}P^{\prime\prime}(u_{0})+\mathcal{O}(\bar{\xi}^{2}). \tag{58}\]
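This expansion is exact to the displayed order for any polynomial well; for instance, assuming the quartic double well \(P(u)=8u^{2}(1-u)^{2}\) (our assumption, consistent with the interfacial-area weight in (52)), it can be checked with sympy:

```python
import sympy as sp

# Check of expansion (58) for an assumed polynomial well P(u) = 8 u^2 (1-u)^2.
u0, u1, xb, s = sp.symbols("u_0 u_1 xi_bar s")
P = 8 * s**2 * (1 - s)**2
dP, d2P = sp.diff(P, s), sp.diff(P, s, 2)
lhs = sp.series(dP.subs(s, u0 + xb * u1), xb, 0, 2).removeO()
rhs = dP.subs(s, u0) + xb * u1 * d2P.subs(s, u0)
assert sp.expand(lhs - rhs) == 0
print("expansion (58) holds up to O(xi_bar^2)")
```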
Substituting the outer expansions (53) into the phase-field equation (8) yields
\[0 =L^{2}\bar{\xi}^{2}\frac{\partial}{\partial t}(u_{0}^{\mathrm{out }}+\mathcal{O}(\bar{\xi}^{1}))+L^{2}\bar{\xi}^{2}\nabla\cdot((\mathbf{v}_{0}+ \mathcal{O}(\bar{\xi}^{1}))(u_{0}^{\mathrm{out}}+\mathcal{O}(\bar{\xi}^{1}))) \tag{59a}\] \[-\sigma L^{3}\bar{\xi}^{2}\nabla^{2}(u_{0}^{\mathrm{out}}+ \mathcal{O}(\bar{\xi}^{1}))+\sigma L\bar{\xi}P^{\prime}(u_{0}^{\mathrm{out}}+ \bar{\xi}u_{1}^{\mathrm{out}}+\mathcal{O}(\bar{\xi}^{2}))\] \[=\big{[}\sigma L\bar{\xi}P^{\prime}(u_{0}^{\mathrm{out}})\big{]} +\mathcal{O}(\bar{\xi}^{2}). \tag{59b}\]
The leading order equation \(P^{\prime}(u_{0}^{\rm out})=0\) has solutions \(u_{0}^{\rm out}\in\{0,1/2,1\}\), of which \(0\) and \(1\) are stable minimizers of \(P\). This allows a decomposition of the pore space \(\mathcal{P}\) into bulk domains \(\Omega_{1}^{\rm out}\) and \(\Omega_{2}^{\rm out}\) for fluid \(1\) and \(2\), respectively, through
\[\Omega_{1}^{\rm out}(t):=\{\mathbf{x}\in\mathcal{P}\mid u_{0}^{\rm out}(t, \mathbf{x})=1\}\,\qquad\Omega_{2}^{\rm out}(t):=\{\mathbf{x}\in\mathcal{P}\mid u_{0}^{\rm out }(t,\mathbf{x})=0\}. \tag{60}\]
We consider the remaining outer expansions for \(\Omega_{i}^{\rm out}\).
For the mass balance equation (6a) we obtain by using the linear dependence of density on the phase-field variable
\[0= \frac{\partial}{\partial t}(\rho(u_{0}^{\rm out})+\mathcal{O}( \bar{\xi}))+\nabla\cdot((\rho(u_{0}^{\rm out})+\mathcal{O}(\bar{\xi}))( \mathbf{v}_{0}^{\rm out}+\mathcal{O}(\bar{\xi}))) \tag{61a}\] \[= \frac{\partial\rho_{i}}{\partial t}+\nabla\cdot(\rho_{i}\mathbf{ v}_{0}^{\rm out})+\mathcal{O}(\bar{\xi}). \tag{61b}\]
Denoting phase velocities \(\mathbf{v}_{i}=\mathbf{v}_{0}^{\rm out}|_{\Omega_{i}^{\rm out}}\) we recover the individual mass conservation equations and, due to constant phase densities \(\rho_{i}\), the divergence-free condition (1). The momentum balance (6b) yields, using additionally the linear relation for viscosity,
\[0= \frac{\partial}{\partial t}((\rho(u_{0}^{\rm out})+\mathcal{O}( \bar{\xi}))(\mathbf{v}_{0}^{\rm out}+\mathcal{O}(\bar{\xi})))\] \[+\nabla\cdot((\rho(u_{0}^{\rm out})+\mathcal{O}(\bar{\xi}))( \mathbf{v}_{0}^{\rm out}+\mathcal{O}(\bar{\xi}))\otimes(\mathbf{v}_{0}^{\rm out }+\mathcal{O}(\bar{\xi})))\] \[+\nabla(p_{0}^{\rm out}+\mathcal{O}(\bar{\xi}))\] \[-\nabla\cdot\bigg{(}(\mu(u_{0}^{\rm out})+\mathcal{O}(\bar{\xi}) )\bigg{(}\nabla(\mathbf{v}_{0}^{\rm out}+\mathcal{O}(\bar{\xi}))+(\nabla( \mathbf{v}_{0}^{\rm out}+\mathcal{O}(\bar{\xi})))^{T}\] \[-\frac{2}{3}(\nabla\cdot(\mathbf{v}_{0}^{\rm out}+\mathcal{O}( \bar{\xi})))\mathbf{I}\bigg{)}\bigg{)}\] \[-(\rho(u_{0}^{\rm out})+\mathcal{O}(\bar{\xi}))\mathbf{g}+ \mathcal{O}(\bar{\xi}) \tag{62a}\] \[= \frac{\partial}{\partial t}(\rho_{i}\mathbf{v}_{i})+\nabla\cdot( \rho_{i}\mathbf{v}_{i}\otimes\mathbf{v}_{i})+\nabla p_{0}^{\rm out}\] \[-\nabla\cdot\bigg{(}\mu_{i}\bigg{(}\nabla\mathbf{v}_{i}+(\nabla \mathbf{v}_{i})^{T}-\frac{2}{3}(\nabla\cdot\mathbf{v}_{i})\mathbf{I}\bigg{)} \bigg{)}-\rho_{i}\mathbf{g}+\mathcal{O}(\bar{\xi}) \tag{62b}\]
With phase pressures \(p_{i}\) given by restricting \(p_{0}^{\rm out}\) to the bulk domain \(\Omega_{i}^{\rm out}\) we recover the momentum balance (2).
Inserting the outer expansions into the boundary conditions yields the conditions for the fluid-solid interfaces of the sharp-interface model. From (9a) we obtain
\[0 =(\mathbf{v}_{0}^{\rm out}+\mathcal{O}(\bar{\xi}))+\lambda\nabla ((\mathbf{v}_{0}^{\rm out}+\mathcal{O}(\bar{\xi}))\cdot(\mathbf{t}_{0}+ \mathcal{O}(\bar{\xi})))\cdot(\mathbf{n}_{0}+\mathcal{O}(\bar{\xi}))(\mathbf{t }_{0}+\mathcal{O}(\bar{\xi})) \tag{63a}\] \[=\mathbf{v}_{i}+\lambda\nabla(\mathbf{v}_{i}\cdot\mathbf{t}_{0}) \cdot\mathbf{n}_{0}\,\mathbf{t}_{0}\, \tag{63b}\]
corresponding to the slip condition (3).
### Inner expansions
To obtain the boundary conditions at the fluid-fluid interface we insert the inner expansions (55), rewriting derivatives according to (56), and use the matching conditions (57).
**Phase-field equation.** From the phase-field equation (8) we obtain, due to the polynomial form of \(P\),
\[0= L^{2}\bar{\xi}^{2}(-\bar{\xi}^{-1}v_{n,0}\partial_{z}(u_{0}^{\rm in}+ \mathcal{O}(\bar{\xi})))+L^{2}\bar{\xi}^{2}(-\bar{\xi}^{-1}\partial_{z}((\mathbf{ v}_{0}^{\rm in}+\mathcal{O}(\bar{\xi}))(u_{0}^{\rm in}+\mathcal{O}(\bar{\xi}))) \cdot\mathbf{n}_{0})\] \[-\sigma L^{3}\bar{\xi}^{3}(\bar{\xi}^{-2}\partial_{zz}(u_{0}^{\rm in }+\bar{\xi}u_{1}^{\rm in}+\mathcal{O}(\bar{\xi}^{2}))+\bar{\xi}^{-1}\kappa_{0} \partial_{z}(u_{0}^{\rm in}+\mathcal{O}(\bar{\xi})))\] \[+\sigma L\bar{\xi}(P^{\prime}(u_{0}^{\rm in})+u_{1}^{\rm in}P^{ \prime\prime}(u_{0}^{\rm in})+\mathcal{O}(\bar{\xi}^{2})) \tag{64a}\] \[= \bar{\xi}\big{[}-\sigma L^{2}\partial_{zz}u_{0}^{\rm in}+\sigma P ^{\prime}(u_{0}^{\rm in})-L^{2}v_{n,0}\partial_{z}u_{0}^{\rm in}+L^{2}\partial _{z}(\mathbf{v}_{0}^{\rm in}u_{0}^{\rm in})\cdot\mathbf{n}_{0}\big{]}+ \mathcal{O}(\bar{\xi}^{2}). \tag{64b}\]
Multiplying the leading order terms with \(\partial_{z}u_{0}^{\rm in}\) and integrating with respect to \(z\) yields a condition for the interface velocity
\[0= \int_{z=-\infty}^{\infty}\!\!\sigma(-L^{3}\partial_{zz}u_{0}^{\rm in }+LP^{\prime}(u_{0}^{\rm in}))\partial_{z}u_{0}^{\rm in}+(-v_{n,0}+\mathbf{v}_{ 0}^{\rm in}\cdot\mathbf{n}_{0})L^{2}(\partial_{z}u_{0}^{\rm in})^{2}\,{\rm d}z \tag{65a}\] \[= \sigma L^{2}\int_{z=-\infty}^{\infty}-\frac{L}{2}\partial_{z}( \partial_{z}u_{0}^{\rm in})^{2}+\partial_{z}P(u_{0}^{\rm in})\,{\rm d}z+(-v_{n,0}+\mathbf{v}_{0}^{\rm in}\cdot\mathbf{n}_{0})L^{2}\int_{z=-\infty}^{\infty} (\partial_{z}u_{0}^{\rm in})^{2}\,{\rm d}z\] (65b) \[= \sigma L^{2}\big{[}-\frac{L}{2}(0-0)+(0-0)\big{]}+(-v_{n,0}+ \mathbf{v}_{0}^{\rm in}\cdot\mathbf{n}_{0})\frac{2L}{3}\, \tag{65c}\]
where the terms in the first integral are equal to \(0\) due to the matching conditions. Taking the limit in \(z\), we obtain
\[v_{n,0}=\mathbf{v}_{0}^{\rm out}(t,\mathbf{y}_{1/2\pm})\cdot\mathbf{n}_{0}. \tag{66}\]
With \(\mathbf{v}_{0}^{\rm in}\cdot\mathbf{n}_{0}\) independent of \(z\), the leading order terms reduce to
\[0=-\sigma L^{2}\partial_{zz}u_{0}^{\rm in}+\sigma P^{\prime}(u_{0}^{\rm in}). \tag{67}\]
Multiplying the remaining terms with \(\partial_{z}u_{0}^{\rm in}\) and integrating with respect to \(z\) yields, applying the chain rule and matching conditions with \(u_{0}^{\rm in}(t,\mathbf{y}_{1/2-})=0\) and \(P(0)=0\),
\[0= \int_{z^{\prime}=-\infty}^{z}-L^{2}\partial_{z}u_{0}^{\rm in} \partial_{zz}u_{0}^{\rm in}+\partial_{z}u_{0}^{\rm in}P^{\prime}(u_{0}^{\rm in })\,{\rm d}z \tag{68a}\] \[= \int_{z^{\prime}=-\infty}^{z}-L^{2}\frac{1}{2}\partial_{z}(( \partial_{z}u_{0}^{\rm in})^{2})+\partial_{z}P(u_{0}^{\rm in})\,{\rm d}z\] (68b) \[= -L^{2}(\partial_{z}u_{0}^{\rm in}(t,z,\mathbf{s}))^{2}-0+2P(u_{0} ^{\rm in}(t,z,\mathbf{s}))-0. \tag{68c}\]
By definition of the curvilinear coordinates we have \(\partial_{z}u_{0}^{\rm in}\geq 0\) and
\[\partial_{z}u_{0}^{\rm in}=L^{-1}\sqrt{2P(u_{0}^{\rm in})}. \tag{69}\]
Using \(u_{0}^{\rm in}(t,0,\mathbf{s})=1/2\) one can obtain a profile for the phase field, namely

\[u_{0}^{\rm in}(t,z,\mathbf{s})=u_{0}^{\rm in}(z)=\frac{1}{1+\exp(-4z/L)}=\frac{1}{2}\left(1+\tanh\left(\frac{2z}{L}\right)\right). \tag{70}\]
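This profile can be verified symbolically. Assuming again the double well \(P(u)=8u^{2}(1-u)^{2}\), the following sympy snippet confirms (67), the squared form of (69), and the value \(2/(3L)\) of \(\int(\partial_{z}u_{0}^{\rm in})^{2}\,\mathrm{d}z\) used after (65) and in (78).

```python
import sympy as sp

# Check of the profile (70), assuming the double well P(u) = 8 u^2 (1 - u)^2
# (our assumption, consistent with (69), the weight 4 u (1 - u) / xi in (52),
# and the integral value 2/(3L) used after (65) and in (78)).
z, L, s = sp.symbols("z L s", positive=True)
u = 1 / (1 + sp.exp(-4 * z / L))                  # profile (70)
P = 8 * s**2 * (1 - s)**2
dP = sp.diff(P, s)

# (67):  L^2 u'' - P'(u) = 0
assert sp.simplify(L**2 * sp.diff(u, z, 2) - dP.subs(s, u)) == 0
# (69), squared to avoid the sign of the root:  (u')^2 = 2 P(u) / L^2
assert sp.simplify(sp.diff(u, z)**2 - 2 * P.subs(s, u) / L**2) == 0
# substituting u' = 4 u (1 - u) / L turns the z-integral into a u-integral:
# \int (u')^2 dz = (4/L) \int_0^1 s (1 - s) ds = 2 / (3 L)
assert sp.simplify(sp.integrate(4 * s * (1 - s) / L, (s, 0, 1))
                   - sp.Rational(2, 3) / L) == 0
print("profile (70) consistent with (67), (69) and the integral 2/(3L)")
```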
**Mass balance.** Inserting the inner expansions into the mass balance equation (6a) yields
\[0= (-\bar{\xi}^{-1}v_{n,0}\partial_{z}+\mathcal{O}(1))(\rho(u_{0}^{ \rm in})+\mathcal{O}(\bar{\xi}))\] \[+\bar{\xi}^{-1}\partial_{z}((\mathbf{v}_{0}^{\rm in}+\mathcal{O} (\bar{\xi}))(\rho(u_{0}^{\rm in})+\mathcal{O}(\bar{\xi})))\cdot\mathbf{n}_{0}+ \mathcal{O}(1) \tag{71a}\] \[= \bar{\xi}^{-1}\big{[}-v_{n,0}\partial_{z}\rho(u_{0}^{\rm in})+ \partial_{z}(\mathbf{v}_{0}^{\rm in}\rho(u_{0}^{\rm in}))\cdot\mathbf{n}_{0} \big{]}+\mathcal{O}(1). \tag{71b}\]
Integrating with respect to \(z\) and applying the matching conditions we obtain
\[0= \int_{z=-\infty}^{\infty}-v_{n,0}\partial_{z}\rho(u_{0}^{\rm in})+\partial_{z}(\mathbf{v}_{0}^{\rm in}\rho(u_{0}^{\rm in}))\cdot\mathbf{n}_{0}\,\mathrm{d}z \tag{72a}\] \[= -v_{n,0}(\rho(1)-\rho(0))+(\mathbf{v}_{0}^{\rm out}(t,\mathbf{y}_{1/2+})\rho(1)-\mathbf{v}_{0}^{\rm out}(t,\mathbf{y}_{1/2-})\rho(0))\cdot\mathbf{n}_{0} \tag{72b}\] \[= -v_{n,0}(\rho_{1}-\rho_{2})+(\mathbf{v}_{1}\rho_{1}-\mathbf{v}_{2}\rho_{2})\cdot\mathbf{n}_{0}\;, \tag{72c}\]
which is fulfilled for \(\mathbf{v}_{1}\cdot\mathbf{n}_{0}=\mathbf{v}_{2}\cdot\mathbf{n}_{0}=v_{n,0}\) at the interface.
**Momentum balance.** Inserting the inner expansions into the momentum balance equation (6b) yields
\[0= -\bar{\xi}^{-1}v_{n,0}\partial_{z}((\rho(u_{0}^{\rm in})+\mathcal{ O}(\bar{\xi}))(\mathbf{v}_{0}^{\rm in}+\mathcal{O}(\bar{\xi})))+\mathcal{O}(1)\] \[+\bar{\xi}^{-1}\partial_{z}((\rho(u_{0}^{\rm in})+\mathcal{O}( \bar{\xi}))(\mathbf{v}_{0}^{\rm in}+\mathcal{O}(\bar{\xi}))\otimes(\mathbf{v} _{0}^{\rm in}+\mathcal{O}(\bar{\xi})))\mathbf{n}_{0}+\mathcal{O}(1)\] \[+\bar{\xi}^{-1}\partial_{z}(p_{0}^{\rm in}+\mathcal{O}(\bar{\xi} ))\mathbf{n}_{0}+\mathcal{O}(1)\] \[-\bar{\xi}^{-1}\partial_{z}\bigg{(}(\mu(u_{0}^{\rm in})+\bar{\xi} (M-1)u_{1}^{\rm in}+\mathcal{O}(\bar{\xi}^{2}))\bigg{(}\bar{\xi}^{-1}\partial_ {z}(\mathbf{v}_{0}^{\rm in}+\bar{\xi}\mathbf{v}_{1}^{\rm in}+\mathcal{O}(\bar{ \xi}^{2}))\otimes\mathbf{n}_{0}\] \[+\nabla_{\Gamma}(\mathbf{v}_{0}^{\rm in}+\mathcal{O}(\bar{\xi})) +\bar{\xi}^{-1}(\partial_{z}(\mathbf{v}_{0}^{\rm in}+\bar{\xi}\mathbf{v}_{1} ^{\rm in}+\mathcal{O}(\bar{\xi}^{2}))\otimes\mathbf{n}_{0})^{T}+(\nabla_{ \Gamma}(\mathbf{v}_{0}^{\rm in}+\mathcal{O}(\bar{\xi})))^{T}\] \[-\bar{\xi}^{-1}\frac{2}{3}\partial_{z}(\mathbf{v}_{0}^{\rm in}+ \bar{\xi}\mathbf{v}_{1}^{\rm in}+\mathcal{O}(\bar{\xi}^{2}))\cdot\mathbf{n}_{ 0}\mathbf{I}-\frac{2}{3}\nabla_{\Gamma}\cdot(\mathbf{v}_{0}^{\rm in}+\mathcal{ O}(\bar{\xi}))\mathbf{I}+\mathcal{O}(\bar{\xi})\bigg{)}\bigg{)}\mathbf{n}_{0}\] \[+\nabla_{\Gamma}\cdot\bigg{(}(\mu(u_{0}^{\rm in})+\mathcal{O}( \bar{\xi}))\bar{\xi}^{-1}\bigg{(}\partial_{z}(\mathbf{v}_{0}^{\rm in}+\mathcal{ O}(\bar{\xi}))\otimes\mathbf{n}_{0}+(\partial_{z}(\mathbf{v}_{0}^{\rm in}+ \mathcal{O}(\bar{\xi}))\otimes\mathbf{n}_{0})^{T}\] \[-\frac{2}{3}\partial_{z}(\mathbf{v}_{0}^{\rm in}+\mathcal{O}( \bar{\xi}))\cdot\mathbf{n}_{0}\mathbf{I}+\mathcal{O}(\bar{\xi})\bigg{)}\bigg{)} \bigg{)}-(\rho(u_{0}^{\rm in})+\mathcal{O}(\bar{\xi}))\mathbf{g}\] \[-\frac{3\gamma L}{2}\bar{\xi}\Big{[}\bar{\xi}^{-1}\partial_{z} \Big{(}\bar{\xi}^{-1}\partial_{z}(u_{0}^{\rm in}+\bar{\xi}u_{1}^{\rm in}+ \mathcal{O}(\bar{\xi}^{2}))\mathbf{n}_{0}+\nabla_{\Gamma}(u_{0}^{\rm in}+ \mathcal{O}(\bar{\xi}))\] \[\otimes(\bar{\xi}^{-1}\partial_{z}(u_{0}^{\rm in}+\bar{\xi}u_{1} ^{\rm in}+\mathcal{O}(\bar{\xi}^{2}))\mathbf{n}_{0}+\nabla_{\Gamma}(u_{0}^{ \rm in}+\mathcal{O}(\bar{\xi})))\Big{)}\mathbf{n}_{0}\] \[+\nabla_{\Gamma}\cdot\Big{(}\bar{\xi}^{-1}\partial_{z}(u_{0}^{ \rm in}+\mathcal{O}(\bar{\xi}))\mathbf{n}_{0}\otimes\bar{\xi}^{-1}\partial_ {z}(u_{0}^{\rm in}+\mathcal{O}(\bar{\xi}))\mathbf{n}_{0}\Big{)}+\mathcal{O}( 1)\Big{]}\] \[+\mathcal{O}(1)\] (73a) with leading order \[= \bar{\xi}^{-2}\bigg{[}-\partial_{z}(\mu(u_{0}^{\rm in})(\partial _{z}\mathbf{v}_{0}^{\rm in}\otimes\mathbf{n}_{0}+(\partial_{z}\mathbf{v}_{0}^{ \rm in}\otimes\mathbf{n}_{0})^{T}-\frac{2}{3}\partial_{z}\mathbf{v}_{0}^{\rm in }\cdot\mathbf{n}_{0}\mathbf{I}))\mathbf{n}_{0}\] \[-\frac{3\gamma L}{2}\partial_{z}(\partial_{z}u_{0}^{\rm in} \mathbf{n}_{0}\otimes\partial_{z}u_{0}^{\rm in}\mathbf{n}_{0})\mathbf{n}_{0} \bigg{]} \tag{73b}\]
and second order
\[+\bar{\xi}^{-1}\bigg{[}-v_{n,0}\partial_{z}(\rho(u^{\rm in}_{0}) \mathbf{v}^{\rm in}_{0})+\partial_{z}(\rho(u^{\rm in}_{0})\mathbf{v}^{\rm in}_{0} \otimes\mathbf{v}^{\rm in}_{0})\mathbf{n}_{0}+\partial_{z}p^{\rm in}_{0}\mathbf{ n}_{0}\] \[-\partial_{z}\bigg{(}\mu(u^{\rm in}_{0})\bigg{(}\partial_{z} \mathbf{v}^{\rm in}_{1}\otimes\mathbf{n}_{0}+(\partial_{z}\mathbf{v}^{\rm in}_ {1}\otimes\mathbf{n}_{0})^{T}-\frac{2}{3}\partial_{z}\mathbf{v}^{\rm in}_{1} \cdot\mathbf{n}_{0}\mathbf{I}\] \[+\nabla_{\Gamma}\mathbf{v}^{\rm in}_{0}+(\nabla_{\Gamma}\mathbf{ v}^{\rm in}_{0})^{T}-\frac{2}{3}\nabla_{\Gamma}\cdot\mathbf{v}^{\rm in}_{0} \mathbf{I}\bigg{)}\] \[+(M-1)u^{\rm in}_{1}\bigg{(}\partial_{z}\mathbf{v}^{\rm in}_{0} \otimes\mathbf{n}_{0}+(\partial_{z}\mathbf{v}^{\rm in}_{0}\otimes\mathbf{n} _{0})^{T}-\frac{2}{3}\partial_{z}\mathbf{v}^{\rm in}_{0}\cdot\mathbf{n}_{0} \mathbf{I}\bigg{)}\bigg{)}\mathbf{n}_{0}\] \[+\nabla_{\Gamma}\cdot\bigg{(}\mu(u^{\rm in}_{0})\bigg{(}\partial_ {z}\mathbf{v}^{\rm in}_{0}\otimes\mathbf{n}_{0}+(\partial_{z}\mathbf{v}^{\rm in }_{0}\otimes\mathbf{n}_{0})^{T}-\frac{2}{3}\partial_{z}\mathbf{v}^{\rm in}_{0 }\cdot\mathbf{n}_{0}\mathbf{I}\bigg{)}\bigg{)}\] \[-\frac{3\gamma L}{2}\bigg{[}\partial_{z}\Big{(}\partial_{z}u^{ \rm in}_{0}\mathbf{n}_{0}\otimes(\partial_{z}u^{\rm in}_{1}\mathbf{n}_{0}+ \nabla_{\Gamma}u^{\rm in}_{0})+(\partial_{z}u^{\rm in}_{1}\mathbf{n}_{0}+ \nabla_{\Gamma}u^{\rm in}_{0})\otimes\partial_{z}u^{\rm in}_{0}\mathbf{n}_{0} \Big{)}\mathbf{n}_{0}\] \[+\nabla_{\Gamma}\cdot\bigg{(}\partial_{z}u^{\rm in}_{0}\mathbf{ n}_{0}\otimes\partial_{z}u^{\rm in}_{0}\mathbf{n}_{0}\bigg{)}+\mathcal{O}(1)\bigg{]} \bigg{]}+\mathcal{O}(1). \tag{73c}\]
Using the definition of the outer product, the leading order terms at \(\mathcal{O}(\bar{\xi}^{-2})\) can be written as
\[0=-\frac{3\gamma L}{2}\partial_{z}(\partial_{z}u^{\rm in}_{0})^{2}\,\mathbf{n}_{0}-\partial_{z}(\mu(u^{\rm in}_{0})\partial_{z}\mathbf{v}^{\rm in}_{0})-\partial_{z}\Big{(}\mu(u^{\rm in}_{0})\big{(}(\partial_{z}\mathbf{v}^{\rm in}_{0}\cdot\mathbf{n}_{0})\mathbf{I}-\frac{2}{3}(\partial_{z}\mathbf{v}^{\rm in}_{0}\cdot\mathbf{n}_{0})\mathbf{I}\big{)}\Big{)}\mathbf{n}_{0}. \tag{74}\]
Let \(\mathbf{t}_{0}\) be the tangential vector, with \(\mathbf{t}_{0}\cdot\mathbf{n}_{0}=0\). Taking the dot product of the above (74) with \(\mathbf{t}_{0}\) yields
\[0=\partial_{z}(\mu(u^{\rm in}_{0})\partial_{z}\mathbf{v}^{\rm in}_{0})\cdot \mathbf{t}_{0}. \tag{75}\]
Integrating with respect to \(z\) and using matching conditions, we obtain
\[\mu(u^{\rm in}_{0})\partial_{z}\mathbf{v}^{\rm in}_{0}\cdot\mathbf{t}_{0}= \textit{const.}=0. \tag{76}\]
Since \(\mu\neq 0\) we have \(\partial_{z}\mathbf{v}^{\rm in}_{0}\cdot\mathbf{t}_{0}=0\), corresponding to the sharp-interface boundary condition. Together with the assumption \(\partial_{z}\mathbf{v}^{\rm in}_{0}\cdot\mathbf{n}_{0}=0\), this implies that the velocity \(\mathbf{v}^{\rm in}_{0}\) is independent of \(z\).
Using also the definition of the outer product as well as \(\nabla_{\Gamma}\psi\cdot\mathbf{n}_{0}=0\), \(\nabla_{\Gamma}\cdot\mathbf{n}_{0}=\kappa_{0}\) and \(\nabla_{\Gamma}u^{\rm in}_{0}=0\), the second order terms simplify to
\[0= -v_{n,0}\partial_{z}(\rho(u^{\rm in}_{0})\mathbf{v}^{\rm in}_{0})+\partial_{z}(\rho(u^{\rm in}_{0})\mathbf{v}^{\rm in}_{0}\otimes\mathbf{v}^{\rm in}_{0})\mathbf{n}_{0}+\partial_{z}p^{\rm in}_{0}\mathbf{n}_{0}\] \[-\frac{3\gamma L}{2}\bigg{[}\partial_{z}\Big{(}2\partial_{z}u^{\rm in}_{0}\partial_{z}u^{\rm in}_{1}\mathbf{n}_{0}\Big{)}+(\partial_{z}u^{\rm in}_{0})^{2}\kappa_{0}\mathbf{n}_{0}\bigg{]}. \tag{77}\]
Integrating with respect to \(z\) and applying the matching conditions yields
\[0= -v_{n,0}[\rho(u^{\rm out}_{0})\mathbf{v}^{\rm out}_{0}]+[\rho(u^{ \rm out}_{0})\mathbf{v}^{\rm out}_{0}\otimes\mathbf{v}^{\rm out}_{0}]\mathbf{n} _{0}+[p^{\rm out}_{0}]\mathbf{n}_{0}\] \[-[\mu(u^{\rm out}_{0})(\nabla\mathbf{v}^{\rm out}_{0}+(\nabla \mathbf{v}^{\rm out}_{0})^{T}-\frac{2}{3}\nabla\cdot\mathbf{v}^{\rm out}_{0} \mathbf{I})\mathbf{n}_{0}]-\frac{3\gamma L}{2}\kappa_{0}\mathbf{n}_{0}\int_{z=- \infty}^{\infty}(\partial_{z}u^{\rm in}_{0})^{2}\,\mathrm{d}z\, \tag{78}\]
where \([\psi]=\psi_{1}-\psi_{2}\) denotes the jump of \(\psi\) across the interface and we used the boundedness of \(\nabla_{\Gamma}\mathbf{v}^{\rm in}_{0}\) as well as \(\partial_{z}u^{\rm in}_{1}\). As the integral evaluates to \(2/(3L)\) and the phase velocities are equal to \(v_{n,0}\) at the interface, we recover the boundary condition (5).
## Appendix B Non-dimensionalization
As described in Section 3.1, in addition to the two length scales \(\ell\) and \(L\) with \(\epsilon=\ell/L\), we define reference values with dimensions
\[[\hat{L}]=\text{m}\qquad[\hat{\ell}]=\text{m}\qquad[\hat{\xi}]=\text{m}\qquad[\hat{t}]=\text{s}\qquad[\hat{v}]=\frac{\text{m}}{\text{s}}\] \[[\hat{\rho}]=\frac{\text{kg}}{\text{m}^{3}}\qquad[\hat{\mu}]=\frac{\text{kg}}{\text{m}\cdot\text{s}}\qquad[\hat{p}]=\frac{\text{kg}}{\text{m}\cdot\text{s}^{2}}\qquad[\hat{\lambda}]=\text{m}\qquad[\hat{\sigma}]=\frac{\text{m}}{\text{s}}\,\]
and let
\[\hat{L}=L\hskip 28.452756pt\hat{\ell}=\ell\hskip 28.452756pt\hat{\xi}=\hat{ \ell}\hskip 28.452756pt\hat{t}=\frac{L}{\hat{v}}\hskip 28.452756pt\hat{\rho}= \rho_{2}\hskip 28.452756pt\hat{\mu}=\mu_{2}\hskip 28.452756pt\hat{\lambda}=\ell \hskip 28.452756pt\hat{\sigma}=\sigma\.\]
This defines non-dimensionalized variables
\[\bar{\xi}=\frac{\xi}{\hat{\ell}}\hskip 28.452756pt\bar{\mathbf{v}}=\frac{ \mathbf{v}}{\hat{v}}\hskip 28.452756pt\bar{\rho}=\frac{\rho}{\hat{\rho}} \hskip 28.452756pt\bar{\mu}=\frac{\mu}{\hat{\mu}}\hskip 28.452756pt\bar{p}= \frac{p}{\hat{p}}\hskip 28.452756pt\bar{t}=\frac{t}{\hat{t}}\hskip 28.452756pt \bar{\lambda}=\frac{\lambda}{\hat{\lambda}}\,\]
and dimensionless numbers
\[\text{Re}\,=\frac{\hat{\rho}\hat{v}\hat{L}}{\hat{\mu}}\hskip 36.135pt\text{Ca} \,=\frac{\hat{v}\hat{\mu}}{\gamma}\hskip 36.135pt\text{Eu}=\frac{\hat{p}}{ \hat{\rho}\hat{v}^{2}}\hskip 36.135pt\text{Fr}=\frac{\hat{v}}{\sqrt{g\hat{L}}} \hskip 36.135ptS=\frac{\hat{\sigma}}{\hat{v}}\.\]
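For orientation, these numbers are straightforward to evaluate from reference values; the inputs below are placeholders for illustration, not the parameters used in Section 4.2.

```python
import math

# Placeholder reference values (illustration only): rho in kg/m^3, v in m/s,
# L in m, mu in kg/(m s), gamma in kg/s^2, p in kg/(m s^2), sigma in m/s.
rho_hat, v_hat, L_hat, mu_hat = 1.0e2, 5.7e-4, 1.0e-3, 1.0e-2
gamma, p_hat, g, sigma_hat = 1.0e-2, 4.5e2, 9.81, 1.0e-6

Re = rho_hat * v_hat * L_hat / mu_hat       # inertial vs. viscous forces
Ca = v_hat * mu_hat / gamma                 # viscous vs. capillary forces
Eu = p_hat / (rho_hat * v_hat**2)           # pressure vs. inertial forces
Fr = v_hat / math.sqrt(g * L_hat)           # inertial vs. gravitational forces
S = sigma_hat / v_hat                       # phase-field parameter vs. velocity
print(f"Re={Re:.2e} Ca={Ca:.2e} Eu={Eu:.2e} Fr={Fr:.2e} S={S:.2e}")
```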
Inserting into the mass (6a) and momentum (6b) balances yields
\[\frac{\hat{\rho}}{\hat{t}}\frac{\partial\bar{\rho}}{\partial\bar{t}}+\frac{\hat{\rho}\hat{v}}{\hat{L}}\bar{\nabla}\cdot(\bar{\rho}\bar{\mathbf{v}})=0\, \tag{79a}\] \[\frac{\hat{\rho}\hat{v}}{\hat{t}}\frac{\partial}{\partial\bar{t}}(\bar{\rho}\bar{\mathbf{v}})+\frac{\hat{\rho}\hat{v}^{2}}{\hat{L}}\bar{\nabla}\cdot(\bar{\rho}\bar{\mathbf{v}}\otimes\bar{\mathbf{v}})= -\frac{\hat{p}}{\hat{L}}\bar{\nabla}\bar{p}+\frac{\hat{\mu}\hat{v}}{\hat{L}^{2}}\bar{\nabla}\cdot\left(\bar{\mu}\left(\bar{\nabla}\bar{\mathbf{v}}+(\bar{\nabla}\bar{\mathbf{v}})^{T}-\frac{2}{3}(\bar{\nabla}\cdot\bar{\mathbf{v}})\mathbf{I}\right)\right) \tag{79b}\] \[+\hat{\rho}g\bar{\rho}(-\mathbf{z})-\frac{\gamma\hat{\xi}}{\hat{L}^{3}}\frac{3\bar{\xi}}{2}\bar{\nabla}\cdot(\bar{\nabla}\bar{u}\otimes\bar{\nabla}\bar{u})\,\]
and after multiplication with \(\hat{t}/\hat{\rho}\) and \(\hat{L}^{2}/(\hat{\mu}\hat{v})\) respectively
\[\frac{\partial\bar{\rho}}{\partial\bar{t}}+\frac{\hat{t}\hat{v}}{\hat{L}}\bar{\nabla}\cdot(\bar{\rho}\bar{\mathbf{v}})=0\, \tag{80a}\] \[\frac{\hat{\rho}\hat{L}\hat{v}}{\hat{\mu}}\frac{\hat{L}}{\hat{t}\hat{v}}\frac{\partial}{\partial\bar{t}}(\bar{\rho}\bar{\mathbf{v}})+\frac{\hat{\rho}\hat{L}\hat{v}}{\hat{\mu}}\bar{\nabla}\cdot(\bar{\rho}\bar{\mathbf{v}}\otimes\bar{\mathbf{v}})= -\frac{\hat{p}\hat{L}}{\hat{\mu}\hat{v}}\bar{\nabla}\bar{p}+\bar{\nabla}\cdot\left(\bar{\mu}\left(\bar{\nabla}\bar{\mathbf{v}}+(\bar{\nabla}\bar{\mathbf{v}})^{T}-\frac{2}{3}(\bar{\nabla}\cdot\bar{\mathbf{v}})\mathbf{I}\right)\right) \tag{80b}\] \[-\frac{\hat{\rho}\hat{L}\hat{v}}{\hat{\mu}}\frac{g\hat{L}}{\hat{v}^{2}}\bar{\rho}\mathbf{z}-\frac{\gamma}{\hat{\mu}\hat{v}}\epsilon\frac{3\bar{\xi}}{2}\bar{\nabla}\cdot(\bar{\nabla}\bar{u}\otimes\bar{\nabla}\bar{u})\.\]
Using the non-dimensional numbers and dropping the overline notation this is written as
\[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{v})= 0\, \tag{81a}\] \[\frac{\partial}{\partial t}(\rho\mathbf{v})+\nabla\cdot(\rho \mathbf{v}\otimes\mathbf{v})= -\text{Eu}\nabla p+\frac{1}{\text{Re}}\nabla\cdot\left(\mu\left( \nabla\mathbf{v}+(\nabla\mathbf{v})^{T}-\frac{2}{3}(\nabla\cdot\mathbf{v}) \mathbf{I}\right)\right)\] (81b) \[-\frac{1}{\text{Fr}^{2}}\rho\mathbf{z}-\frac{\epsilon}{\text{Ca}} \,\frac{3\xi}{2}\nabla\cdot(\nabla u\otimes\nabla u)\.\]
For the slip boundary condition (9a) inserting the non-dimensionalized variables yields
\[\hat{v}\bar{\mathbf{v}}=\frac{\hat{\lambda}\hat{v}}{\hat{L}}\bar{\lambda}(\bar{\partial}_{\mathbf{n}}\bar{\mathbf{v}}_{\mathbf{t}})\mathbf{t}\,\ \mbox{on}\ \Gamma. \tag{82}\]
With \(\hat{\lambda}=\hat{\ell}\) and dropping the overline notation this simplifies to
\[{\bf v}=\epsilon\lambda(\partial_{\bf n}{\bf v}_{\bf t}){\bf t}\,\ \mbox{on}\ \Gamma. \tag{83}\]
### Phase field
The phase-field equation (8)
\[\xi^{2}\frac{\partial u}{\partial t}-\sigma\xi^{3}\nabla^{2}u+\xi^{2}\nabla \cdot({\bf v}u)=-\sigma\xi P^{\prime}(u) \tag{84}\]
can be rewritten with non-dimensionalized variables as
\[\frac{\hat{\xi}^{2}}{\hat{t}}\bar{\xi}^{2}\partial_{\bar{t}}\bar{u}-\frac{\hat{\sigma}\hat{\xi}^{3}}{\hat{L}^{2}}\bar{\xi}^{3}\bar{\nabla}^{2}\bar{u}+\frac{\hat{\xi}^{2}\hat{v}}{\hat{L}}\bar{\xi}^{2}\bar{\nabla}\cdot(\bar{\mathbf{v}}\bar{u})=-\hat{\sigma}\hat{\xi}\bar{\xi}P^{\prime}(\bar{u}). \tag{85}\]
Dividing by \(\hat{\xi}^{2}/\hat{t}\), using the dimensionless number \(S\) and dropping the overline notation the equation becomes
\[\xi^{2}\partial_{t}u-S\epsilon^{1}\xi^{2}\nabla^{2}u+\xi^{2}\nabla\cdot({\bf v }u)=-S\epsilon^{-1}P^{\prime}(u). \tag{86}\]
For the contact-angle boundary condition we obtain
\[\frac{1}{\hat{L}}\bar{\partial}_{\bf n}\bar{u}=-\frac{1}{\hat{\xi}}\cos( \theta_{\rm eq})\bar{\xi}^{-1}\sqrt{2P(\bar{u})}\,\ \mbox{on}\ \Gamma. \tag{87}\]
Multiplying with \(L\), using \(\hat{\xi}=\hat{\ell}=\epsilon\hat{L}\) and dropping the overline notation this yields
\[\partial_{\bf n}u=-\epsilon^{-1}\cos(\theta_{\rm eq})\xi^{-1}\sqrt{2P(u)}\, \ \mbox{on}\ \Gamma. \tag{88}\]
## Appendix C Homogenization
We introduce the micro-scale coordinate \({\bf y}=\epsilon^{-1}{\bf x}\) with \(\epsilon=\ell/L\) the scale separation and assume asymptotic expansions for \(\psi\in\{p,{\bf v},u\}\)
\[\psi(t,{\bf x})=\sum_{k=0}^{\infty}\epsilon^{k}\psi_{k}\left(t,{\bf x},\frac {{\bf x}}{\epsilon}\right) \tag{89}\]
with each \(\psi_{k}\) being \(\mathbf{y}\)-periodic on the reference cell \(Y\). Rewriting spatial derivatives according to
\[\nabla\psi=\nabla_{\bf x}\sum_{k=0}^{\infty}\epsilon^{k}\psi_{k}(t,{\bf x},{ \bf y})+\frac{1}{\epsilon}\nabla_{\bf y}\sum_{k=0}^{\infty}\epsilon^{k}\psi_{ k}(t,{\bf x},{\bf y}) \tag{90}\]
and choosing non-dimensional numbers \(\mbox{Ca}\,\in{\cal O}(\epsilon^{0})\), \(\mbox{Re}\,\in{\cal O}(\epsilon^{0})\), \(\mbox{Eu}\in{\cal O}(\epsilon^{-2})\) and \(\mbox{Fr}\in{\cal O}(\epsilon^{0})\) in addition to \(S\in{\cal O}(\epsilon^{0})\) and inserting the asymptotic expansions into the non-dimensionalized
equations yields the following leading order terms. For the mass balance (13a) we obtain
\[0 =\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{v}) \tag{91}\] \[=\frac{\partial}{\partial t}(\rho_{0}+\mathcal{O}(\epsilon^{1}))+ (\epsilon^{-1}\nabla_{\mathbf{y}}+\nabla_{\mathbf{x}})\cdot((\rho_{0}+ \epsilon\rho_{1}+\mathcal{O}(\epsilon^{2}))(\mathbf{v}_{0}+\epsilon\mathbf{v} _{1}+\mathcal{O}(\epsilon^{2})))\] \[=\left(\frac{\partial\rho_{0}}{\partial t}+\mathcal{O}(\epsilon^ {1})\right)+(\epsilon^{-1}\nabla_{\mathbf{y}}+\nabla_{\mathbf{x}})\cdot(\rho_{ 0}\mathbf{v}_{0}+\epsilon(\rho_{0}\mathbf{v}_{1}+\rho_{1}\mathbf{v}_{0})+ \mathcal{O}(\epsilon^{2}))\] \[=\left[\frac{\partial\rho_{0}}{\partial t}\right]+\mathcal{O}( \epsilon^{1})+\epsilon^{-1}\big{[}\nabla_{\mathbf{y}}\cdot(\rho_{0}\mathbf{v} _{0})\big{]}+\big{[}\nabla_{\mathbf{y}}\cdot(\rho_{0}\mathbf{v}_{1}+\rho_{1} \mathbf{v}_{0})+\nabla_{\mathbf{x}}\cdot(\rho_{0}\mathbf{v}_{0})\big{]}+ \mathcal{O}(\epsilon^{1})\] \[=\epsilon^{-1}\big{[}\nabla_{\mathbf{y}}\cdot(\rho_{0}\mathbf{v} _{0})\big{]}+\epsilon^{0}\bigg{[}\frac{\partial\rho_{0}}{\partial t}+\nabla_{ \mathbf{y}}\cdot(\rho_{0}\mathbf{v}_{1}+\rho_{1}\mathbf{v}_{0})+\nabla_{ \mathbf{x}}\cdot(\rho_{0}\mathbf{v}_{0})\bigg{]}+\mathcal{O}(\epsilon^{1})\.\]
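The derivative splitting employed here, cf. (90), can be sanity-checked symbolically on a separable test function:

```python
import sympy as sp

# Sanity check of the derivative splitting (90) on a separable test function
# psi(x, y) = f(x) sin(y), evaluated along the fast coordinate y = x / epsilon.
x, eps = sp.symbols("x epsilon", positive=True)
f = sp.Function("f")
y = x / eps
total = sp.diff(f(x) * sp.sin(y), x)              # full derivative in x
split = sp.diff(f(x), x) * sp.sin(y) + f(x) * sp.cos(y) / eps
assert sp.simplify(total - split) == 0            # grad_x + eps^{-1} grad_y
print("two-scale rule (90) verified on the test function")
```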
We note that due to the dependence of \(\rho\) on \(u\) we have
\[\rho_{0}=\rho(u_{0})\,\qquad\rho_{1}=u_{1}(R-1)\, \tag{92}\]
and the equation can be written as
\[0=\epsilon^{-1}\big{[}\nabla_{\mathbf{y}}\cdot(\rho(u_{0})\mathbf{v}_{0}) \big{]}+\epsilon^{0}\bigg{[}\frac{\partial\rho(u_{0})}{\partial t}+\nabla_{ \mathbf{y}}\cdot(\rho(u_{0})\mathbf{v}_{1}+\rho_{1}\mathbf{v}_{0})+\nabla_{ \mathbf{x}}\cdot(\rho(u_{0})\mathbf{v}_{0})\bigg{]}+\mathcal{O}(\epsilon^{1}). \tag{93}\]
Analogously for the viscosity \(\mu\) we have \(\mu_{0}=\mu(u_{0})\) and \(\mu_{1}=u_{1}(M-1)\). Denoting \(\overline{\mathrm{Eu}}:=\epsilon^{2}\mathrm{Eu}\in\mathcal{O}(\epsilon^{0})\), the momentum equation (13b) yields the following terms.
\[0= -\frac{\partial}{\partial t}(\rho\mathbf{v})-\nabla\cdot(\rho \mathbf{v}\otimes\mathbf{v})-\mathrm{Eu}\nabla p+\frac{1}{\mathrm{Re}}\nabla \cdot\left(\mu\left(\nabla\mathbf{v}+(\nabla\mathbf{v})^{T}-\frac{2}{3}(\nabla \cdot\mathbf{v})\mathbf{I}\right)\right)\] \[-\frac{1}{\mathrm{Fr}^{2}}\rho\mathbf{z}-\frac{\epsilon}{\mathrm{ Ca}}\frac{3\xi}{2}\nabla\cdot(\nabla u\otimes\nabla u) \tag{94a}\] \[= -\frac{\partial}{\partial t}((\rho_{0}+\mathcal{O}(\epsilon^{1}) )(\mathbf{v}_{0}+\mathcal{O}(\epsilon^{1})))\] \[-(\epsilon^{-1}\nabla_{\mathbf{y}}+\nabla_{\mathbf{x}})\cdot(( \rho_{0}+\mathcal{O}(\epsilon^{1}))(\mathbf{v}_{0}+\mathcal{O}(\epsilon^{1})) \otimes(\mathbf{v}_{0}+\mathcal{O}(\epsilon^{1})))\] \[-\epsilon^{-2}\overline{\mathrm{Eu}}(\epsilon^{-1}\nabla_{ \mathbf{y}}+\nabla_{\mathbf{x}})(p_{0}+\epsilon p_{1}+\mathcal{O}(\epsilon^{2}))\] \[+\frac{1}{\mathrm{Re}}(\epsilon^{-1}\nabla_{\mathbf{y}}+\nabla_{ \mathbf{x}})\cdot\left((\mu_{0}+\mathcal{O}(\epsilon^{1}))\bigg{(}(\epsilon^ {-1}\nabla_{\mathbf{y}}+\nabla_{x})(\mathbf{v}_{0}+\mathcal{O}(\epsilon^{1}))\] \[+((\epsilon^{-1}\nabla_{\mathbf{y}}+\nabla_{x})(\mathbf{v}_{0}+ \mathcal{O}(\epsilon^{1})))^{T}-\frac{2}{3}\big{(}(\epsilon^{-1}\nabla_{ \mathbf{y}}+\nabla_{\mathbf{x}})\cdot(\mathbf{v}_{0}+\mathcal{O}(\epsilon^{1} ))\big{)}\mathbf{I}\bigg{)}\right)\] \[-\frac{1}{\mathrm{Fr}^{2}}(\rho_{0}+\mathcal{O}(\epsilon^{1})) \mathbf{z}\] \[-\frac{\epsilon}{\mathrm{Ca}}\frac{3\xi}{2}(\epsilon^{-1}\nabla_{ \mathbf{y}}+\nabla_{\mathbf{x}})\cdot\left((\epsilon^{-1}\nabla_{y}+\nabla_{ x})(u_{0}+\mathcal{O}(\epsilon^{1}))\otimes(\epsilon^{-1}\nabla_{y}+\nabla_{x})(u_{0}+ \mathcal{O}(\epsilon^{1}))\right)\] (94b) \[= -\mathcal{O}(\epsilon^{0})-\mathcal{O}(\epsilon^{-1})-\epsilon^{- 3}\big{[}\overline{\mathrm{Eu}}\nabla_{\mathbf{y}}p_{0}\big{]}-\epsilon^{-2} \big{[}\overline{\mathrm{Eu}}(\nabla_{\mathbf{y}}p_{1}+\nabla_{\mathbf{x}}p_{ 0})\big{]}-\mathcal{O}(\epsilon^{-1})\] \[+\epsilon^{-2}\bigg{[}\frac{1}{\mathrm{Re}}\nabla_{y}\cdot\left( \mu_{0}\left(\nabla_{\mathbf{y}}\mathbf{v}_{0}+(\nabla_{\mathbf{y}}\mathbf{v}_ {0})^{T}-\frac{2}{3}(\nabla_{\mathbf{y}}\cdot\mathbf{v}_{0})\mathbf{I} \right)\right)\bigg{]}+\mathcal{O}(\epsilon^{-1})-\mathcal{O}(\epsilon^{0})\] \[-\epsilon^{-2}\bigg{[}\frac{1}{\mathrm{Ca}}\frac{3\xi}{2}\nabla_{ \mathbf{y}}\cdot(\nabla_{\mathbf{y}}u_{0}\otimes\nabla_{\mathbf{y}}u_{0}) \bigg{]}-\mathcal{O}(\epsilon^{-1})\] (94c) \[= \epsilon^{-3}\big{[}-\overline{\mathrm{Eu}}\nabla_{\mathbf{y}}p_ {0}\big{]}\] \[+\epsilon^{-2}\bigg{[}\overline{\mathrm{Eu}}(-\nabla_{\mathbf{y}} p_{1}-\nabla_{\mathbf{x}}p_{0})+\frac{1}{\mathrm{Re}}\,\nabla_{y}\cdot\left(\mu(u_{0}) \left(\nabla_{\mathbf{y}}\mathbf{v}_{0}+(\nabla_{\mathbf{y}}\mathbf{v}_{0})^ {T}-\frac{2}{3}(\nabla_{\mathbf{y}}\cdot\mathbf{v}_{0})\mathbf{I}\right)\right)\] \[-\frac{1}{\mathrm{Ca}}\frac{3\xi}{2}\nabla_{\mathbf{y}}\cdot( \nabla_{\mathbf{y}}u_{0}\otimes\nabla_{\mathbf{y}}u_{0})\bigg{]} \tag{94d}\]
Using the polynomial structure of \(P^{\prime}\) we obtain the following expansion from the phase-field equation (15).
\[0= \xi^{2}\frac{\partial u}{\partial t}-S\epsilon^{1}\xi^{2}\nabla^{2}u +\xi^{2}\nabla\cdot(\mathbf{v}u)+S\epsilon^{-1}P^{\prime}(u) \tag{95a}\] \[= \xi^{2}\frac{\partial}{\partial t}(u_{0}+\mathcal{O}(\epsilon^{1} ))-\epsilon^{1}S\xi^{2}(\epsilon^{-1}\nabla_{\mathbf{y}}+\nabla_{\mathbf{x}}) \cdot(\epsilon^{-1}\nabla_{\mathbf{y}}+\nabla_{\mathbf{x}})(u_{0}+\epsilon u_ {1}+\mathcal{O}(\epsilon^{2}))\] \[+\xi^{2}(\epsilon^{-1}\nabla_{\mathbf{y}}+\nabla_{\mathbf{x}}) \cdot((\mathbf{v}_{0}+\epsilon\mathbf{v}_{1}+\mathcal{O}(\epsilon^{2}))(u_{0} +\epsilon u_{1}+\mathcal{O}(\epsilon^{2})))\] \[+\epsilon^{-1}S(P^{\prime}(u_{0})+\epsilon u_{1}P^{\prime\prime}( u_{0})+\mathcal{O}(\epsilon^{2}))\] (95b) \[= \bigg{[}\xi^{2}\frac{\partial u_{0}}{\partial t}\bigg{]}+ \mathcal{O}(\epsilon^{1})\] \[-\epsilon^{-1}\big{[}S\xi^{2}\nabla_{\mathbf{y}}^{2}u_{0}\big{]} -\big{[}S\xi^{2}(\nabla_{\mathbf{y}}\cdot(\nabla_{\mathbf{x}}u_{0})+\nabla_{ \mathbf{x}}\cdot(\nabla_{\mathbf{y}}u_{0})+\nabla_{\mathbf{y}}\cdot(\nabla_{ \mathbf{y}}u_{1}))\big{]}-\mathcal{O}(\epsilon^{1})\] \[+\epsilon^{-1}\big{[}S\xi^{2}\nabla_{\mathbf{y}}\cdot(\mathbf{v}_ {0}u_{0})\big{]}+\epsilon^{0}\big{[}\xi^{2}\nabla_{\mathbf{y}}\cdot(\mathbf{v }_{0}u_{1}+\mathbf{v}_{1}u_{0})+\xi^{2}\nabla_{\mathbf{x}}\cdot(\mathbf{v}_{0} u_{0})\big{]}\] \[+\epsilon^{-1}\big{[}SP^{\prime}(u_{0})\big{]}+\big{[}Su_{1}P^{ \prime\prime}(u_{0})\big{]}\] (95c) \[= \epsilon^{-1}\big{[}-S\xi^{2}\nabla_{\mathbf{y}}^{2}u_{0}+\xi^{2} \nabla_{\mathbf{y}}\cdot(\mathbf{v}_{0}u_{0})+SP^{\prime}(u_{0})\big{]}\] \[+\epsilon^{0}\bigg{[}\xi^{2}\frac{\partial u_{0}}{\partial t}+ \xi^{2}\nabla_{\mathbf{y}}\cdot(\mathbf{v}_{0}u_{1}+\mathbf{v}_{1}u_{0})+\xi ^{2}\nabla_{\mathbf{x}}\cdot(\mathbf{v}_{0}u_{0})\] \[-S\xi^{2}(\nabla_{\mathbf{y}}\cdot(\nabla_{\mathbf{x}}u_{0})+ \nabla_{\mathbf{x}}\cdot(\nabla_{\mathbf{y}}u_{0})+\nabla_{\mathbf{y}}\cdot( \nabla_{\mathbf{y}}u_{1}))+Su_{1}P^{\prime\prime}(u_{0})\bigg{]} \tag{95d}\]
Inserting the asymptotic expansions into the boundary conditions yields
\[0 =\mathbf{v}-\epsilon\lambda(\partial_{\mathbf{n}}\mathbf{v}_{\mathbf{t}})\mathbf{t}=(\mathbf{v}_{0}+\mathcal{O}(\epsilon^{1}))-\epsilon\lambda((\epsilon^{-1}\nabla_{y}+\nabla_{\mathbf{x}})(\mathbf{v}_{0}+\mathcal{O}(\epsilon^{1}))\mathbf{n})=\epsilon^{0}\big{[}\mathbf{v}_{0}-\lambda\nabla_{y}\mathbf{v}_{0}\mathbf{n}\big{]}+\mathcal{O}(\epsilon^{1})\quad\text{on }\Gamma\, \tag{96a}\] \[0 =\partial_{\mathbf{n}}u+\epsilon^{-1}\cos(\theta_{\mathrm{eq}})\xi^{-1}\sqrt{2P(u)}=(\epsilon^{-1}\nabla_{\mathbf{y}}+\nabla_{\mathbf{x}})(u_{0}+\mathcal{O}(\epsilon^{1}))\mathbf{n}+\epsilon^{-1}\cos(\theta_{\mathrm{eq}})\xi^{-1}(\sqrt{2P(u_{0})}+\mathcal{O}(\epsilon^{1}))=\epsilon^{-1}\big{[}\nabla_{y}u_{0}\mathbf{n}+\cos(\theta_{\mathrm{eq}})\xi^{-1}\sqrt{2P(u_{0})}\big{]}+\mathcal{O}(\epsilon^{1})\quad\text{on }\Gamma. \tag{96b}\]
|
2302.14862 | Categorical Symmetry of the Standard Model from Gravitational Anomaly | In the Standard Model, some combination of the baryon $\bf B$ and lepton $\bf
L$ number symmetry is free of mixed anomalies with strong and electroweak
$su(3) \times su(2) \times u(1)_{\tilde Y}$ gauge forces. However, it can still
suffer from a mixed gravitational anomaly, hypothetically pertinent to
leptogenesis in the very early universe. This happens when the total "sterile
right-handed" neutrino number $n_{\nu_R}$ is not equal to the family number
$N_f$. Thus the invertible $\bf B - L$ symmetry current conservation can be
violated quantum mechanically by gravitational backgrounds such as
gravitational instantons. In specific, we show that a noninvertible categorical
$\bf B - L$ generalized symmetry still survives in gravitational backgrounds.
In general, we propose a construction of noninvertible symmetry charge
operators as topological defects derived from invertible anomalous symmetries
that suffer from mixed gravitational anomalies. Examples include the
perturbative local and nonperturbative global anomalies classified by
$\mathbb{Z}$ and $\mathbb{Z}_{16}$ respectively. For this construction, we
utilize the anomaly inflow bulk-boundary correspondence, the 4d Pontryagin
class and the gravitational Chern-Simons 3-form, the 3d
Witten-Reshetikhin-Turaev-type topological quantum field theory corresponding
to a 2d rational conformal field theory with an appropriate rational chiral
central charge, and the 4d $\mathbb{Z}_4^{\rm TF}$-time-reversal symmetric
topological superconductor with 3d boundary topological order. | Pavel Putrov, Juven Wang | 2023-02-28T18:59:50Z | http://arxiv.org/abs/2302.14862v2 | # Categorical Symmetry of the Standard Model from Gravitational Anomaly
###### Abstract
In the Standard Model, some combination of the baryon \(\mathbf{B}\) and lepton \(\mathbf{L}\) number symmetry is free of mixed anomalies with strong and electroweak \(su(3)\times su(2)\times u(1)_{\bar{Y}}\) gauge forces. However, it can still suffer from a mixed gravitational anomaly, hypothetically pertinent to leptogenesis in the very early universe. This happens when the "sterile right-handed" neutrino number \(n_{\nu_{R}}\) is not equal to the family number \(N_{f}\). Thus the invertible \(\mathbf{B}-\mathbf{L}\) symmetry current conservation can be violated quantum mechanically by gravitational backgrounds such as gravitational instantons. Specifically, we show that a noninvertible categorical counterpart of the \(\mathbf{B}-\mathbf{L}\) symmetry still survives in gravitational backgrounds. In general, we propose a construction of noninvertible symmetry charge operators as topological defects derived from invertible anomalous symmetries that suffer from mixed gravitational anomalies. Examples include the perturbative local and nonperturbative global anomalies classified by \(\mathbb{Z}\) and \(\mathbb{Z}_{16}\) respectively. For this construction, we utilize the anomaly inflow concept and the 3d Witten-Reshetikhin-Turaev-type topological quantum field theory corresponding to a 2d rational conformal field theory with an appropriate chiral central charge, or the 3d boundary topological order of the 4d \(\mathbb{Z}_{4}^{TF}\)-time-reversal symmetric topological superconductor.
###### Contents
* I Introduction and Summary
* I.1 Introduction and the Plan
* I.2 Summary
* II Standard Model: 4d Anomaly, 5d Invertible Phase, and 6d Polynomial
* III Categorical Symmetry from Mixed U(1)-Gravitational Anomaly
* IV Categorical Symmetry from Mixed \(\mathbb{Z}_{4}\)-Gravitational Anomaly
* V Conclusion
* A Table of Representations of Quarks and Leptons
* B Anomaly Polynomial Generators for U(1) Symmetry in 4d from Index Theorem
## I Introduction and Summary
### Introduction and the Plan
The Standard Model (SM) [1; 2; 3; 4] has a specific combination of the baryon \(\mathbf{B}\) and lepton \(\mathbf{L}\) number symmetry, known as a continuous U(1)\({}_{\mathbf{B}-\mathbf{L}}\), preserved within the SM gauge interactions, thanks to the SM Lagrangian interaction structure and thanks to the U(1)\({}_{\mathbf{B}-\mathbf{L}}\) being mixed gauge anomaly-free with strong and electroweak gauge forces of Lie algebra \(\mathcal{G}_{\text{SM}}\equiv su(3)\times su(2)\times u(1)_{\bar{Y}}\) [5]. In the past, the U(1)\({}_{\mathbf{B}-\mathbf{L}}\) symmetry current conservation was checked quantum mechanically via perturbative local anomalies, captured by Feynman graphs (see references in [5]). The U(1)\({}_{\mathbf{B}-\mathbf{L}}\) symmetry preservation is a remarkable fact for the following reason. In 4d spacetime, the U(1) symmetry
current of a single Weyl fermion number alone is known to suffer from Adler-Bell-Jackiw (ABJ) perturbative local anomaly [6; 7] via triangle Feynman diagram calculations with three vertices of \(\mathrm{U}(1)^{3}\), \(\mathrm{U}(1)\)-\(G^{2}\) and \(\mathrm{U}(1)\)-gravity [2], through abelian \(\mathrm{U}(1)\) or nonabelian \(G\) instantons [8; 9; 10; 11] and gravitational instantons [12; 13], characterized respectively by Chern class [14] and Pontryagin class [15; 16]. Recently, thanks to the development of cobordism classifications of bulk topological phases and their boundary anomalies ([17; 18; 19; 20; 21] and references therein) via the classic anomaly inflow idea [22], both perturbative local anomalies and nonperturbative global anomalies in the SM have been checked systematically, via the cobordism group supplemented with quantum field theory (QFT) calculations [23; 24; 25; 26; 27; 28; 29; 30; 31]. However, the \(\mathbf{B}-\mathbf{L}\) symmetry suffers from a mixed gravitational anomaly, when the "sterile right-handed" neutrino number \(n_{\nu_{R}}\) is not equal to the family number \(N_{f}\). Namely, the invertible \(\mathbf{B}-\mathbf{L}\) symmetry current or charge conservation can be violated by gravitational backgrounds under curved spacetime geometries or by gravitational instantons. Phenomenological applications of this mixed \(\mathbf{B}-\mathbf{L}\)-gravitational anomaly include the gravitational leptogenesis [32; 33; 34] and beyond the Standard Model (BSM) new exotic sectors [28; 29; 30; 31] obtained from canceling this gravitational anomaly.
Although physicists have confirmed at least \(N_{f}=3\) families of quarks and leptons by experiments [35; 36], we have not yet identified the detailed properties of sterile neutrinos, nor do we know how many \(n_{\nu_{R}}\) there are in nature [36]. Following the set-up advocated in [29; 30; 31], the index \(-N_{f}+n_{\nu_{R}}\), counting the difference between the family and the right-handed neutrino number, will become important. As we will review in Sec. II, a nonzero \(-N_{f}+n_{\nu_{R}}\) implies nontrivial _perturbative local anomalies_, classified by \(\mathbb{Z}^{2}\) and captured by small gauge-diffeomorphism transformations via the ABJ triangle Feynman diagram [6; 7] with three vertices of \(\mathrm{U}(1)^{3}_{\mathbf{B}-\mathbf{L}}\) and \(\mathrm{U}(1)_{\mathbf{B}-\mathbf{L}}\)-gravity [2] types. When the continuous \(\mathbf{B}-\mathbf{L}\) symmetry is combined with the \(\tilde{Y}\) electroweak hypercharge gauge symmetry and then restricted to a discrete \(\mathbb{Z}_{4,X}\) subgroup, where \(X\equiv 5(\mathbf{B}-\mathbf{L})-\frac{2}{3}\tilde{Y}\) with properly integer quantized hypercharge \(\tilde{Y}\) [37; 38], the aforementioned perturbative local anomaly classified by \(\mathbb{Z}^{2}\) becomes a nonperturbative global anomaly classified by \(\mathbb{Z}_{16}\). All the quarks and leptons have a unit charge \(1\) under \(\mathbb{Z}_{4,X}\) (see Table 1 in Appendix A). Thus, the index \(-N_{f}+n_{\nu_{R}}\) mod \(16\) also implies a nontrivial _nonperturbative global anomaly_ classified by \(\mathbb{Z}_{16}\) [30; 31] and captured by the large gauge-diffeomorphism transformations.
In this work, specifically, we reinterpret the nonconservation of the invertible \(\mathbf{B}-\mathbf{L}\) symmetry current due to a gravitational background as the replacement of the Noether charge operators by their noninvertible analogs. _Noninvertible categorical symmetry_ [39] is a concept growing out of the recent development on the generalized global symmetry [40] (see reviews [41; 42]). Ref. [40] emphasizes anew that the symmetry charge operator \(U\) is a topological defect. A topology-preserving deformation of \(U\) around a relatively charged object \(\mathcal{O}\) would not affect the measurement of the symmetry charge. While an ordinary global symmetry with a group \(G\) implies that the fusion rules of the symmetry charge operators (a.k.a. topological defects) are described by the corresponding group law, there are also symmetries with charge operators that obey fusion rules described by a fusion category that goes beyond an ordinary group. These symmetries are called _noninvertible symmetries_ (since some of the charge operators do not have inverse operators) or _categorical symmetries_ (since charge operators form a category). While fusion categories of topological defects in 2d conformal field theories (CFT) were appreciated long ago [43; 44; 45; 46; 47], Refs. [48; 49; 50; 51; 52; 53; 54] advocate their noninvertible symmetry nature. Only recently, noninvertible symmetries have been explored more systematically in higher spacetime dimensions; see selective references [55; 56; 57; 58; 59; 60; 61] relevant for the 4d SM or BSM context, see selective mathematically grandiose encyclopedic references [62; 63; 64; 65; 66; 67], and references therein. Importantly, Refs. [58; 59] show that although the invertible \(\mathrm{U}(1)\) symmetry can be broken by the dynamical \(\mathrm{U}(1)^{\prime}\) gauge theory via the \(\mathrm{U}(1)\)-\(\mathrm{U}(1)^{\prime 2}\) ABJ perturbative local anomaly (such as the axial \(\mathrm{U}(1)=\mathrm{U}(1)_{A}\) symmetry of the vector gauged \(\mathrm{U}(1)^{\prime}=\mathrm{U}(1)_{V}\) quantum electrodynamics (QED)), a subgroup of the broken invertible \(\mathrm{U}(1)\) can be revived as a noninvertible symmetry. Namely, it is the subgroup of elements \(\mathrm{e}^{\,\mathrm{i}\,\alpha}\in\mathrm{U}(1)\) such that
\[\alpha=2\pi p/N\in 2\pi\cdot\mathbb{Q}/\mathbb{Z}\subset 2\pi\cdot\mathbb{R}/ \mathbb{Z}\cong\mathrm{U}(1) \tag{1}\]
for some integers \(p\) and \(N\) (one can always assume that \(\alpha\in[0,2\pi)\), so that \(N>p\geq 0\), and that \(p\) and \(N\) are coprime). That is, the rational \(\mathbb{Q}/\mathbb{Z}\) part of the original \(\mathbb{R}/\mathbb{Z}\cong\mathrm{U}(1)\) invertible symmetry is revived as a noninvertible symmetry, meaning that the modified symmetry charge operators beget noninvertible fusion rules.
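In code, reducing a rational angle \(\alpha=2\pi p/N\) to the coprime representative of Eq. (1) is plain fraction arithmetic. A minimal Python sketch (our own illustration, not from the paper):

```python
from fractions import Fraction

# Reduce alpha = 2*pi*p/N, viewed in Q/Z, to a coprime pair (p, N) with
# 0 <= p < N, as in Eq. (1). Fraction automatically reduces to lowest terms.
def reduce_angle(p, N):
    a = Fraction(p, N) % 1          # work mod 1, i.e. in Q/Z
    return a.numerator, a.denominator

print(reduce_angle(10, 4))   # (1, 2): alpha = 2*pi*(10/4) = 2*pi*(1/2) mod 2*pi
print(reduce_angle(-3, 16))  # (13, 16)
```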
For the invertible \(\mathrm{U}(1)\) symmetry, there is a one-to-one correspondence between the elements \(\alpha\in 2\pi\cdot(\mathbb{R}/\mathbb{Z})\cong\mathrm{U}(1)\) and the invertible symmetry charge operators \(U_{\alpha}\), with the fusion corresponding to the group binary operation \(\alpha_{1}+\alpha_{2}\in 2\pi\cdot(\mathbb{R}/\mathbb{Z})\cong\mathrm{U}(1)\):
\[U_{\alpha_{1}}\,U_{\alpha_{2}}=U_{\alpha_{1}+\alpha_{2}}. \tag{2}\]
For the full (i.e. closed under fusion) noninvertible symmetry, however, there is no longer a one-to-one correspondence between \(\mathbb{Q}/\mathbb{Z}\) group elements and the topological operators. The operators, however, can be labelled by elements of a certain commutative monoid \(\mathfrak{M}\), such that the noninvertible fusion rules correspond to the monoid's binary operation and there is a surjective homomorphism of monoids \(\mathfrak{M}\to\mathbb{Q}/\mathbb{Z}\) [68].
We will encounter an analogous structure in our setup with gravitational anomalies. The plan of this article goes as follows:
In Sec. I.2, we outline and summarize our strategy and interpretations in a friendly and nontechnical way.
In Sec. II, we recall and set up the 4d SM, its anomaly associated with the quark number \(\mathrm{U}(1)_{\mathbf{Q}}\) and lepton number \(\mathrm{U}(1)_{\mathbf{L}}\) symmetry (whose combination gives the \(\mathbf{B}-\mathbf{L}\)), and the anomaly associated with the spacetime diffeomorphisms or, equivalently, gravity. We will write down the 4d anomaly in terms of a 5d invertible topological field theory (iTFT), or a 6d anomaly polynomial. We put the emphasis on the two integers, belonging to the \(\mathbb{Z}^{2}\) group that classifies local anomalies (the \(\mathrm{U}(1)_{\mathbf{B}-\mathbf{L}}^{3}\) pure gauge anomaly and the \(\mathrm{U}(1)_{\mathbf{B}-\mathbf{L}}\)-gravity\({}^{2}\) mixed gauge-gravity anomaly), and also \(\nu\in\mathbb{Z}_{16}\) that classifies global anomalies (the mixed \(\mathbb{Z}_{4,X}\)-gravity anomaly). This story will turn out to match exactly the cobordism results previously obtained in [30; 31].
In Sec. III, we discuss the construction of the noninvertible categorical symmetry topological defects from the mixed \(\mathrm{U}(1)\)-gravitational anomaly. In Sec. IV, we discuss the analogous construction from the mixed \(\mathbb{Z}_{4}\)-gravitational anomaly classified by \(\mathbb{Z}_{16}\).
In Sec. V, we conclude with final remarks. We list future directions pertinent to leptogenesis, baryogenesis, and possible BSM implications of the theoretical proposals on replacing the right-handed neutrinos with interacting topological quantum field theory (TQFT) or CFT sectors [28; 29; 69].
In Appendix A, for the reader's convenience, we gather the representations of Weyl fermions in various gauge or global symmetries, \(su(3)\times su(2)\times u(1)_{\tilde{Y}}\), \(\mathrm{U}(1)_{\mathbf{Q}-N_{\mathbf{c}}\mathbf{L}}\) (the precise form of \(\mathrm{U}(1)_{\mathbf{B}-\mathbf{L}}\) with properly quantized charges, with the color number \(N_{c}=3\)), \(\mathbb{Z}_{2N_{c}N_{f},\mathbf{Q}+N_{\mathbf{c}}\mathbf{L}}\subset\mathrm{U} (1)_{\mathbf{Q}+N_{c}\mathbf{L}}\) (the precise form of \(\mathbb{Z}_{2N_{f},\mathbf{B}+\mathbf{L}}\subset\mathrm{U}(1)_{\mathbf{B}+ \mathbf{L}}\) with properly quantized charges), \(\mathbb{Z}_{4,X}\) symmetry, and others.
In Appendix B we review the classification of anomalies for \(\mathrm{Spin}\times\mathrm{U}(1)\) and \(\mathrm{Spin}^{c}\) symmetries in 4d [20] in terms of a degree 6 anomaly polynomial and its relation to the classification of anomalies for the \(\mathrm{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{4}\) symmetry.
### Summary
In this work, we show that a noninvertible counterpart of certain kinds of mixed-gravitational anomalous symmetry still survives in gravitational backgrounds. Below are some strategies, steps, and interpretations that we will take to achieve that goal.
1. We will follow the following general idea about the trade-off between anomalies and noninvertibility of symmetries. The presence of an anomaly of a global \(p\)-form symmetry in a \(d\)-dimensional QFT implies that the naive (i.e. classically defined) extended charge operators of dimension \(d-p-1\) are no longer topological1 or require some additional noncanonical choice to be unambiguously defined. However, one can consider modifying the charge operator by introducing a \((d-p-1)\)-dimensional topological quantum field theory (TQFT, typically we mean a noninvertible TQFT) supported on its worldvolume and coupled in a nontrivial way to bulk fields. If the TQFT itself has an anomaly, its partition function may also change when the charge operator is deformed, or it may also require some noncanonical choice to be made. It then can happen that such a pathology of the TQFT cancels the pathology of the naive charge operator and together they will form a well-defined topological defect. However, for a TQFT to have an anomaly, it must be noninvertible. Therefore the new topological defects will be noninvertible as well. A version of such a construction was in particular initiated in [58; 59]. Footnote 1: In general, to capture the full anomaly one may need to consider networks of the charge operators and the deformations involving the moves of the network. This, in particular, will be relevant for the construction in Sec. IV.
2. There are two types of anomalies involving \(\mathrm{U}(1)\) global symmetry in 4d: 1. Pure \(\mathrm{U}(1)\) anomaly, with the corresponding term in the 6d anomaly polynomial \[I_{6}=\kappa_{1}\,\frac{c_{1}^{3}}{3!}+\ldots\equiv\kappa_{1}\,\frac{F^{3}}{3!\,(2\pi)^{3}}+\ldots\] (3) where \(F=\,\mathrm{d}A\) is the field strength of the corresponding \(\mathrm{U}(1)\) gauge field \(A\) and \(c_{1}\equiv F/(2\pi)\) is the Chern-Weil representative of the first Chern class of the \(\mathrm{U}(1)\) principal bundle. This anomaly implies that the \(\mathrm{U}(1)\) symmetry current 1-form \(j\) is not conserved in a general background gauge field configuration: \[\mathrm{d}\star j=\kappa_{1}\frac{F^{2}}{8\pi^{2}}+\ldots\] (4)
In particular, in the presence of such an anomaly the U(1) symmetry cannot be gauged, i.e. the corresponding gauge field cannot be made dynamical. On the other hand, such an anomaly is not considered to be breaking U(1) as a global symmetry, as the current is still conserved in the trivial background. In the nontrivial background, however, the nonconservation of the current implies that the corresponding 3-dimensional charge operators are no longer topological. As we will consider in more detail later, their topological but noninvertible counterparts can be constructed essentially in the same way as it was done in [58, 59] in the case of the ABJ-type mixed anomaly between a global U(1) and a different gauged U(1). The difference is that in our case the second U(1) is not gauged and is instead identified with the first U(1). According to the general prescription outlined above, the new operators are constructed by introducing a 3d TQFT which is supported on the worldvolume of the original charge operators and coupled to the bulk background U(1) gauge field. This construction works for the charge operators corresponding to the torsion elements of U(1), i.e. the operators realizing rotations by fractions of the full rotation. Let us also note that if \(\mathbb{Z}_{2}^{F}\subset\text{U(1)}\) (which is the case of the \(\mathbf{B}-\mathbf{L}\) symmetry considered in this work), the 4d theory can be considered on a nonspin spacetime manifold \(M\). In that case, the background U(1) gauge field is necessarily nontrivial (in particular, for the corresponding first Chern class, we must have \(2c_{1}=w_{2}(TM)\neq 0\mod 2\), where \(w_{j}(TM)\) is the \(j\)-th Stiefel-Whitney class of the tangent bundle \(TM\); see App. B for a review).
2. Mixed gravitational anomaly, with the corresponding term in the 6d anomaly polynomial \[I_{6}=\kappa_{2}\,c_{1}p_{1}+\ldots\equiv-\kappa_{2}\ \frac{1}{2(2\pi)^{3}}F\,\text{Tr}[R\wedge R]+\ldots\] (5) where \(R=d\omega+\omega\wedge\omega\) is the 2-form curvature of the Levi-Civita spin-connection 1-form \(\omega\) and \(p_{1}=-\text{Tr}[R\wedge R]/(8\pi^{2})\) is the standard representative of the first Pontryagin class. The anomaly implies the following nonconservation of the U(1) current: \[\text{d}\star j=-\frac{\kappa_{2}}{8\pi^{2}}\text{Tr}[R\wedge R]+\ldots\] (6) Note that in principle one can get rid of the term on the right-hand side by introducing a local counterterm [13]. However, it will break general covariance of the theory. Since we expect that the Standard Model can be coupled to gravity in a consistent way, we will assume the absence of such a counterterm. This type of anomaly ordinarily is also not considered to be breaking the U(1) symmetry. The current is still conserved on a flat spacetime. Unlike in the previous case, however, we do expect the spacetime to be curved in a physical theory due to gravitational effects. Such an anomaly in particular plays a crucial role in the gravitational leptogenesis model [33]. On a curved spacetime, the presence of such an anomaly implies that the naive U(1) charge operators will not be topological anymore and therefore the corresponding charge will not be conserved. In particular, the change of the total charge in some time interval is given by \[\Delta Q=-\frac{\kappa_{2}}{8\pi^{2}}\int_{\Delta M^{4}}\text{Tr}[R\wedge R]\] (7) where \(\Delta M^{4}\) is the spacetime between the initial and final time slices. As we will show later, the charge operators can again be modified to be topological, at the cost of losing invertibility. According to the general prescription outlined above this is done by introducing a 3d TQFT which is supported on the worldvolume of the defect and coupled to the bulk gravity via the framing anomaly. The existence of such extended topological operators can be interpreted as a certain modified charge conservation law. When the spacetime topology and metric are considered dynamical, in principle one expects no global symmetries at all in quantum gravity [70, 71, 72, 73], including noninvertible ones [62]. Our construction then shows that if a U(1) symmetry has a mixed gravitational anomaly, it does not become completely broken in quantum gravity just by this anomaly. Rather, it should be either (1) broken by some other method or (2) dynamically gauged in the UV completed theory. When the anomalies of both types are present, the constructions of the noninvertible counterparts to the naive charge operators can be combined by stacking together the two anomalous 3d TQFTs used in the individual cases. A worked numerical illustration of Eq. (7) follows below.
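As a concrete numerical aside (ours, not part of the paper's argument): on a closed oriented 4-manifold the Hirzebruch signature theorem gives \(\int p_{1}=3\sigma\), so with the parametrization \(\kappa_{2}=-k/24\) used later in Eq. (28), the accumulated charge violation of Eq. (7) is \(\Delta Q=\kappa_{2}\int p_{1}=-k\sigma/8\). A minimal sketch, evaluated on a hypothetical K3 gravitational background with \(\sigma=-16\):

```python
from fractions import Fraction

# Charge violation Delta Q = kappa_2 * int_{M^4} p_1 of Eq. (7), rewritten via
# p_1 = -Tr[R ^ R]/(8 pi^2) and evaluated on a CLOSED oriented 4-manifold,
# where the signature theorem gives int p_1 = 3*sigma. (Eq. (7) itself is for
# a spacetime slab; a closed manifold is used here purely for illustration.)
def delta_Q(k, signature):
    int_p1 = 3 * signature              # Hirzebruch: int_{M^4} p_1 = 3 sigma
    kappa_2 = Fraction(-k, 24)
    return kappa_2 * int_p1

# Example: K3 has sigma = -16, hence int p_1 = -48 and Delta Q = 2k.
for k in (1, 2, 3):
    print(k, delta_Q(k, signature=-16))  # -> 2, 4, 6
```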
## II Standard Model: 4d Anomaly, 5d Invertible Phase, and 6d Polynomial
Standard Model (SM) [1; 2; 3; 4] is a 4d chiral gauge theory with Yang-Mills spin-1 gauge fields of the Lie algebra
\[\mathcal{G}_{\rm SM}\equiv su(3)\times su(2)\times u(1)_{\tilde{Y}} \tag{8}\]
coupling to \(N_{f}=3\) families of 15 or 16 Weyl fermions (spin-\(\frac{1}{2}\) Weyl spinor \(\mathbf{2}_{L}\) in the spacetime symmetry Spin(1,3), written as a left-handed 15- or 16-plet \(\psi_{L}\)) in the following \(\mathcal{G}_{\rm SM}\) representation
\[(\psi_{L})_{\rm I}=(\bar{d}_{R}\oplus l_{L}\oplus q_{L}\oplus\bar{u}_{R}\oplus\bar{e}_{R})_{\rm I}\oplus n_{\nu_{\rm I,R}}\bar{\nu}_{\rm I,R}=\big{(}(\overline{\bf 3},{\bf 1})_{2}\oplus({\bf 1},{\bf 2})_{-3}\oplus({\bf 3},{\bf 2})_{1}\oplus(\overline{\bf 3},{\bf 1})_{-4}\oplus({\bf 1},{\bf 1})_{6}\big{)}_{\rm I}\oplus n_{\nu_{\rm I,R}}({\bf 1},{\bf 1})_{0}, \tag{9}\]

where \({\rm I}=1,\ldots,N_{f}\) is the family index and the subscripts are the properly integer-quantized \(u(1)_{\tilde{Y}}\) hypercharges of Table 1 in Appendix A. The 4d SM path integral is defined by an action \(S_{\rm SM}\) (11) for these Weyl fermions coupled to the \(\mathcal{G}_{\rm SM}\) gauge fields, and the global quark number \(\mathrm{U}(1)_{\mathbf{Q}}\) and lepton number \(\mathrm{U}(1)_{\mathbf{L}}\) symmetries act on the quark and lepton sectors by phase rotations (12).
Let us write down the full _invertible_ spacetime-internal symmetry structure of the SM [30; 31]. To specify the spacetime-internal symmetries of a theory, we follow Freed-Hopkins' notation [19] to write
\[G\equiv\Big{(}\frac{G_{\text{spacetime}}\ltimes G_{\text{internal}}}{N_{\text{shared}}}\Big{)}\equiv G_{\text{spacetime}}\ltimes_{N_{\text{shared}}}G_{\text{internal}}. \tag{13}\]
The semi-direct product \(\ltimes\) specifies a group extension. The \(N_{\text{shared}}\) is the shared common normal subgroup symmetry between \(G_{\text{spacetime}}\) and \(G_{\text{internal}}\); e.g. \(N_{\text{shared}}\) can be the fermion parity symmetry \(\mathbb{Z}_{2}^{F}\), which acts on fermions by \(\psi\mapsto-\psi\). The Lie algebra of the internal symmetry of the SM is \(\mathcal{G}_{\text{SM}}\), but the global structure of the Lie group \(G_{\text{SM}_{\text{q}}}\) has four possible versions [75; 76], all compatible with the SM matter field representation (9):
\[G_{\text{SM}_{\text{q}}}\equiv\frac{\text{SU}(3)\times\text{SU}(2)\times\text {U}(1)_{\tilde{Y}}}{\mathbb{Z}_{\text{q}}},\quad\text{ with q}=1,2,3,6. \tag{14}\]
Following [30; 31], if we treat the \(G_{\text{SM}_{\text{q}}}\) as an internal global symmetry, we shall consider the spacetime-internal symmetry of SM as
\[G=\text{Spin}\times_{\mathbb{Z}_{2}^{F}}\text{U}(1)_{\mathbf{Q}-N_{c}\mathbf{ L}}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{2N_{c}N_{f},\mathbf{Q}+N_{c}\mathbf{L}} \times G_{\text{SM}_{\text{q}}}. \tag{15}\]
However, \(G_{\text{SM}_{\text{q}}}\) is an SM dynamical gauge group, such that dynamically gauging it induces a generalized global symmetry [40], including a 1-form electric symmetry and a 1-form magnetic symmetry, as [30; 57]
\[G=\text{Spin}\times_{\mathbb{Z}_{2}^{F}}\text{U}(1)_{\mathbf{Q}-N_{c}\mathbf{ L}}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{2N_{c}N_{f},\mathbf{Q}+N_{c}\mathbf{L}} \times\mathbb{Z}_{6/\text{q},[1]}^{e}\times\text{U}(1)_{[1]}^{m}. \tag{16}\]
Refs. [30; 31] look into the 4d SM's anomaly via the 5d cobordism group TP\({}_{5}\) calculation. Here instead we start by deriving the 6d anomaly polynomial.
As was described in [13; 77], the anomaly polynomial of Weyl fermions can be computed using the Atiyah-Singer index theorem. The contribution of a single Weyl fermion in 4d is the degree 6 part of \(\hat{A}\,\text{ch}(\mathcal{E})\), where \(\hat{A}\) is the A-roof genus of the spacetime tangent bundle and \(\mathcal{E}\) is the complex vector bundle associated to the representation of the fermion. The explicit expression in terms of Pontryagin and Chern characteristic classes [14; 15; 16] can be obtained using the expansions of \(\hat{A}\) and \(\text{ch}(\mathcal{E})\):
\[\hat{A}=1-\frac{p_{1}}{24}+\frac{7p_{1}^{2}-4p_{2}}{5760}+\ldots, \tag{17}\] \[\text{ch}(\mathcal{E})=\text{rank}\,\mathcal{E}+c_{1}(\mathcal{E})+\frac{1}{2}\left(c_{1}^{2}(\mathcal{E})-2c_{2}(\mathcal{E})\right)+\frac{1}{6}\left(c_{1}^{3}(\mathcal{E})-3c_{1}(\mathcal{E})c_{2}(\mathcal{E})+3c_{3}(\mathcal{E})\right)+\ldots \tag{18}\]
and also using the properties \(\text{ch}(\mathcal{E}_{1}\oplus\mathcal{E}_{2})=\text{ch}(\mathcal{E}_{1})+ \text{ch}(\mathcal{E}_{2})\), \(\text{ch}(\mathcal{E}_{1}\otimes\mathcal{E}_{2})=\text{ch}(\mathcal{E}_{1}) \,\text{ch}(\mathcal{E}_{2})\). The explicit anomaly polynomial for the gauge, global, and diffeomorphism symmetries of the 4d SM, with the matter representation given in (9), reads3:
Footnote 3: To obtain the polynomial coefficients correctly, here we use the convention in Table 1 such that every fermion is written as a left-handed Weyl spinor (left-handed particle \(\psi_{L}\) or right-handed particle \(\text{i}\sigma_{2}\psi_{R}^{*}\)). Every particle contributes \(+1\) (e.g., \(\psi_{L}\)) and every anti-particle contributes \(-1\) (e.g., \(\text{i}\sigma_{2}\psi_{R}^{*}\)), to the quark \(\mathbf{Q}\) or lepton \(\mathbf{L}\) number, namely the integer charge representation of \(\text{U}(1)_{\mathbf{Q}}\) or \(\text{U}(1)_{\mathbf{L}}\).
\[I_{6}\equiv(N_{c}c_{1}(\text{U}(1)_{\mathbf{Q}})+c_{1}(\text{ U}(1)_{\mathbf{L}}))\,N_{f}\left(18\,\frac{c_{1}(\text{U}(1)_{\tilde{Y}})^{2}}{2}+c_{2}( \text{SU}(2))\right)\\ +(-N_{f}+n_{\nu_{R}})\,\left(\frac{c_{1}(\text{U}(1)_{\mathbf{L}} )^{3}}{6}-\frac{c_{1}(\text{U}(1)_{\mathbf{L}})p_{1}(TM)}{24}\right), \tag{19}\]
where \(c_{j}(G)\) is the \(j\)-th Chern class of the vector bundle associated to the defining representation of \(G\), and \(p_{j}(TM)\) is the \(j\)-th Pontryagin class of the spacetime tangent bundle \(TM\). When \(M^{6}\) is a closed 6-manifold, then \(\int_{M^{6}}I_{6}\in\mathbb{Z}\). When \(M^{6}\) has a boundary \(\partial M^{6}=M^{5}\), we can consider this \(M^{5}\) as a 5d interface between two 6d bulks with the lagrangian density \(\theta I_{6}\) such that \(\theta=0\) on one 6d side and \(\theta=2\pi\) on the other 6d side. On the \(M^{5}\) interface, we have an invertible topological field theory (iTFT) with the action \(S_{5}=2\pi\int_{M^{5}}I_{5}\in 2\pi\mathbb{R}\). Its value modulo \(2\pi\) is independent of the choice of \(M^{6}\). The explicit 5d iTFT related in this way to the anomaly polynomial (19) reads
\[S_{5}\equiv\int_{M^{5}}(N_{c}A_{\mathbf{Q}}+A_{\mathbf{L}})N_{f}\left(18\, \frac{c_{1}(\text{U}(1)_{\tilde{Y}})^{2}}{2}+c_{2}(\text{SU}(2))\right)+(-N_{f} +n_{\nu_{R}})\,A_{\mathbf{L}}\,\left(\frac{c_{1}(\text{U}(1)_{\mathbf{L}})^{2} }{6}-\frac{p_{1}(TM)}{24}\right). \tag{20}\]
Here \(A_{\mathbf{Q}}\) and \(A_{\mathbf{L}}\) are background gauge fields for the \(\mathrm{U}(1)_{\mathbf{Q}}\) and \(\mathrm{U}(1)_{\mathbf{L}}\) symmetries respectively. This 5d TQFT encodes the anomaly of the 4d SM by the standard anomaly inflow setup. Note that in principle there is an ambiguity of adding a total derivative: \(I_{5}\to I_{5}+\mathrm{d}(I_{4})\). Such a change corresponds to the addition of a counterterm \(I_{4}\) to the action of the 4d theory. In Eq. (20), we have made the choice that preserves gauge invariance for the 4d dynamical gauge fields and general covariance.
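To make the index-theorem bookkeeping behind (17)-(19) concrete, here is a minimal sympy sketch (our own illustration, not from the paper) extracting the degree-6 part of \(\hat{A}\,\mathrm{ch}(\mathcal{E})\) for a single Weyl fermion of \(\mathrm{U}(1)\) charge \(q\); up to the particle/anti-particle sign conventions of Footnote 3, it reproduces the per-fermion structure \(q^{3}c_{1}^{3}/6-q\,c_{1}p_{1}/24\) that builds up the last bracket of (19).

```python
import sympy as sp

q, c1, p1 = sp.symbols('q c1 p1')

A_hat = 1 - p1/24                                        # A-roof genus (17), through degree 4
ch = sum((q*c1)**j / sp.factorial(j) for j in range(4))  # ch = exp(q*c1), truncated, cf. (18)

def degree_part(expr, deg):
    # keep monomials of total cohomological degree `deg`, with deg(c1)=2, deg(p1)=4
    out = 0
    for term in sp.expand(expr).as_ordered_terms():
        if 2*sp.degree(term, c1) + 4*sp.degree(term, p1) == deg:
            out += term
    return out

print(degree_part(A_hat * ch, 6))   # -> q**3*c1**3/6 - q*c1*p1/24
```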
Here are some comments on the symmetries and anomalies in 4d read from the 5d iTFT (20):
1. Two particular linear combinations of \(\mathrm{U}(1)_{\mathbf{Q}}\) and \(\mathrm{U}(1)_{\mathbf{L}}\), written as \(\mathrm{U}(1)_{\mathbf{Q}-N_{c}\mathbf{L}}\) and \(\mathrm{U}(1)_{\mathbf{Q}+N_{c}\mathbf{L}}\), are especially convenient. Because both \(\mathrm{U}(1)_{\mathbf{Q}-N_{c}\mathbf{L}}\) and \(\mathrm{U}(1)_{\mathbf{Q}+N_{c}\mathbf{L}}\) contain the fermion parity \(\mathbb{Z}_{2}^{F}\) normal subgroup, we have two types of \(\mathrm{Spin}^{c}\equiv\mathrm{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathrm{U}(1)\) structures from both \(\mathbf{Q}-N_{c}\mathbf{L}\) and \(\mathbf{Q}+N_{c}\mathbf{L}\), agreeing with (15) and (16).
2. The invertible \(\mathrm{U}(1)_{\mathbf{Q}-N_{c}\mathbf{L}}\) ordinary 0-form symmetry couples to the 1-form background gauge fields satisfying the constraint \(N_{c}A_{\mathbf{Q}}+A_{\mathbf{L}}=0\), or simply \(N_{c}A_{\mathbf{Q}}=-A_{\mathbf{L}}=N_{c}A_{\mathbf{Q}-N_{c}\mathbf{L}}\). The vanishing of the first term in (20) tells us that the ABJ-type anomalies of the form \(\mathrm{U}(1)_{\mathbf{Q}-N_{c}\mathbf{L}}\)-\(\mathrm{U}(1)_{\tilde{Y}}^{2}\) and \(\mathrm{U}(1)_{\mathbf{Q}-N_{c}\mathbf{L}}\)-\(\mathrm{SU}(2)^{2}\) are absent.
3. The invertible \(\mathrm{U}(1)_{\mathbf{Q}+N_{c}\mathbf{L}}\) ordinary 0-form symmetry couples to the 1-form background gauge fields satisfying the constraint \(N_{c}A_{\mathbf{Q}}-A_{\mathbf{L}}=0\), or simply \(N_{c}A_{\mathbf{Q}}=A_{\mathbf{L}}=N_{c}A_{\mathbf{Q}+N_{c}\mathbf{L}}\). The nonvanishing of the first term in (20), with the coefficient \(2N_{f}N_{c}A_{\mathbf{Q}+N_{c}\mathbf{L}}\big{(}18\frac{c_{1}(\mathrm{U}(1)_{\tilde{Y}})^{2}}{2}+c_{2}(\mathrm{SU}(2))\big{)}\), tells us that: (1) the ABJ-type \(\mathrm{U}(1)_{\mathbf{Q}+N_{c}\mathbf{L}}\)-\(\mathrm{U}(1)_{\tilde{Y}}^{2}\) anomaly breaks \(\mathrm{U}(1)_{\mathbf{Q}+N_{c}\mathbf{L}}\) down to \(\mathbb{Z}_{36N_{c}N_{f},\mathbf{Q}+N_{c}\mathbf{L}}\) via the \(\mathrm{U}(1)\) instanton number \(n^{(1)}=\int\frac{c_{1}(\mathrm{U}(1)_{\tilde{Y}})^{2}}{2}\in\mathbb{Z}\) on spin manifolds; (2) meanwhile, the ABJ-type \(\mathrm{U}(1)_{\mathbf{Q}+N_{c}\mathbf{L}}\)-\(\mathrm{SU}(2)^{2}\) anomaly breaks \(\mathrm{U}(1)_{\mathbf{Q}+N_{c}\mathbf{L}}\) down to \(\mathbb{Z}_{2N_{c}N_{f},\mathbf{Q}+N_{c}\mathbf{L}}\) via the \(\mathrm{SU}(2)\) instanton number \(n^{(2)}=-\int c_{2}(\mathrm{SU}(2))\in\mathbb{Z}\) on arbitrary 4-manifolds.
4. **No noninvertible symmetry for the \(\mathbf{Q}+N_{c}\mathbf{L}\)** (or \(\mathbf{B}+\mathbf{L}\)) **symmetry**: The SM is compatible with four global structure versions of the Lie gauge group \(G_{\mathrm{SM}_{q}}\). When \(q=1\) or \(3\), the SM admits the \(\mathrm{SU}(2)\times\mathrm{U}(1)_{\tilde{Y}}\) instantons. When \(q=2\) or \(6\), the SM admits the \(\mathrm{U}(2)_{\tilde{Y}}\) instantons. The \(q=1,3\) and \(q=2,6\) cases are related by gauging the 1-form electric symmetry in (16). Let us compare the \(\mathrm{SU}(2)\times\mathrm{U}(1)_{\tilde{Y}}\) instanton versus the \(\mathrm{U}(2)_{\tilde{Y}}\equiv\frac{\mathrm{SU}(2)\times\mathrm{U}(1)_{\tilde{Y}}}{\mathbb{Z}_{2}}\) instanton. \(\bullet\) Because of the \(\mathrm{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathrm{U}(1)_{\mathbf{Q}-N_{c}\mathbf{L}}=\mathrm{Spin}^{c}\) structure in (15) and (16), we can allow different instanton number quantizations on spin manifolds versus nonspin manifolds. \(\bullet\) Regardless of the \(\mathrm{SU}(2)\times\mathrm{U}(1)_{\tilde{Y}}\) instanton numbers from \(n^{(2)}=-\int c_{2}(\mathrm{SU}(2))\) and \(n^{(1)}=\int\frac{c_{1}(\mathrm{U}(1)_{\tilde{Y}})^{2}}{2}\), or the \(\mathrm{U}(2)\) instantons from \(n^{(2)}=\int(-c_{2}(\mathrm{U}(2))+\frac{c_{1}(\mathrm{U}(2)_{\tilde{Y}})^{2}}{2})\), we are concerned only with the quantization of the second Chern classes and the first one squared. \(\bullet\) The Chern numbers \(\int c_{1}(\mathrm{U}(1)_{\tilde{Y}})\in\mathbb{Z}\), \(\int c_{1}(\mathrm{U}(2)_{\tilde{Y}})\in\mathbb{Z}\), and \(\int c_{2}(\mathrm{U}(2)_{\tilde{Y}})\in\mathbb{Z}\) are all integer-valued for both spin and nonspin manifolds. But the \(\mathrm{U}(1)_{\tilde{Y}}\) instanton number \(\int\frac{1}{2}c_{1}^{2}(\mathrm{U}(1)_{\tilde{Y}})\in\mathbb{Z}\) on spin manifolds, while \(\int\frac{1}{2}c_{1}^{2}(\mathrm{U}(1)_{\tilde{Y}})\in\frac{\mathbb{Z}}{2}\) becomes half-integer valued on nonspin manifolds. However, the fractional \(\frac{1}{2}\) \(\mathrm{U}(1)_{\tilde{Y}}\) instanton only breaks \(\mathrm{U}(1)_{\mathbf{Q}+N_{c}\mathbf{L}}\) down to \(\mathbb{Z}_{18N_{c}N_{f},\mathbf{Q}+N_{c}\mathbf{L}}\), which would not be enough to affect the symmetry already broken down to \(\mathbb{Z}_{2N_{c}N_{f},\mathbf{Q}+N_{c}\mathbf{L}}\) by \(\mathrm{SU}(2)\) instantons. Namely, there is _no_ noninvertible symmetry to be constructed out of the invertible \(\mathbb{Z}_{2N_{c}N_{f},\mathbf{Q}+N_{c}\mathbf{L}}\) symmetry because this \(\mathbb{Z}_{2N_{c}N_{f},\mathbf{Q}+N_{c}\mathbf{L}}\) remains preserved and anomaly-free under any instantons on spin and nonspin manifolds compatible with the SM structure (15) and (16).
5. Eq. (20) also shows that the anomaly cancelation for \(\mathrm{U}(1)_{\mathbf{Q}}^{2}\)-\(\mathrm{U}(1)_{\tilde{Y}}\), \(\mathrm{U}(1)_{\mathbf{L}}^{2}\)-\(\mathrm{U}(1)_{\tilde{Y}}\), \(\mathrm{U}(1)_{\mathbf{Q}}^{2}\)-\(\mathrm{SU}(2)\), and \(\mathrm{U}(1)_{\mathbf{L}}^{2}\)-\(\mathrm{SU}(2)\) always holds because their coefficients are always zero in the SM. So we do not obtain a 2-group-like structure [78] in the SM within (20).
6. **From \(\mathrm{Spin}^{c}\) to \(\mathrm{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{4,X}\)-structure manifold**: When we replace the continuous \(\mathrm{U}(1)_{\mathbf{Q}-N_{c}\mathbf{L}}\) symmetry by a discrete \(\mathbb{Z}_{4,X}\) with \(X\equiv 5(\mathbf{B}-\mathbf{L})-\frac{2}{3}\tilde{Y}=\frac{5}{N_{c}}(\mathbf{Q}-N_{c}\mathbf{L})-\frac{2}{3}\tilde{Y}\), we also map the 5d iTFT in (20) classified by \(\mathbb{Z}^{2}\) to the 5d iTFT classified by \(\mathbb{Z}_{16}\) evaluated on a 5d \(M^{5}\): \[S_{5}\equiv(-N_{f}+n_{\nu_{R}})\,\frac{2\pi}{16}\eta_{4\mathrm{d}}(\mathrm{PD}(A_{\mathbb{Z}_{2,X}}))\big{|}_{M^{5}}.\] (21) \(\bullet\) Because all the quarks and leptons have charge 1 under \(\mathbb{Z}_{4,X}\) (see Table 1 in Appendix A), there is no \(N_{c}\) factor in this formula. \(\bullet\) The background gauge field \(A_{\mathbb{Z}_{2,X}}\in\mathrm{H}^{1}(M^{5},\mathbb{Z}_{2})\) is obtained by the quotient map down to \(\mathbb{Z}_{2,X}\equiv\mathbb{Z}_{4,X}/\mathbb{Z}_{2}^{F}\) from the \(\mathrm{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{4,X}\) structure.
\(\bullet\) Here the 5d Atiyah-Patodi-Singer (APS [79]) eta-invariant \(\eta_{\rm 5d}=\eta_{\rm 4\,d}({\rm PD}(A_{\mathbb{Z}_{2,X}}))\) is valued in \(\mathbb{Z}_{16}\equiv\mathbb{Z}/(16\mathbb{Z})\) and is written as the 4d eta invariant \(\eta_{\rm 4\,d}\in\mathbb{Z}_{16}\)4 on the 4d Pin\({}^{+}\) submanifold representing the Poincare dual (PD) of \(A_{\mathbb{Z}_{2,X}}\). The Pin\({}^{+}\) structure is obtained from the 5d bulk \(\mathrm{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{4,X}\)-structure by the Smith isomorphism: \(\Omega_{5}^{\mathrm{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{4}}\cong\Omega_{4}^{\mathrm{Pin}^{+}}\cong\mathbb{Z}_{16}\) [81, 82, 83, 84]. The eta invariant \(\eta_{\rm 4\,d}\in\mathbb{Z}_{16}\) is the effective topological action of the interacting fermionic time-reversal symmetric topological superconductor of condensed matter in three spatial dimensions with an _anti-unitary_ time-reversal symmetry \(\mathbb{Z}_{4}^{TF}\) such that the time-reversal symmetry generator \(T\) squares to the fermion parity operator, namely \(T^{2}=(-1)^{F}\). The symmetry can be defined by the nontrivial group extension \(1\to\mathbb{Z}_{2}^{F}\to\mathbb{Z}_{4}^{TF}\to\mathbb{Z}_{2}^{T}\to 1\) (see a review [85, 86]). In contrast, in the SM, we have the _unitary_ \(\mathbf{B}-\mathbf{L}\)-like symmetry \(\mathbb{Z}_{4,X}\) whose generator \(X\) squares to \(X^{2}=(-1)^{F}\). The symmetry again can be defined by the nontrivial group extension \(1\to\mathbb{Z}_{2}^{F}\to\mathbb{Z}_{4,X}\to\mathbb{Z}_{2,X}\to 1\). Footnote 4: The normalization is different from the normalization used in [80]: \(\eta_{\rm 4\,d}^{\rm Here}=4\eta_{\rm 4\,d}^{\rm There}\).
7. **The 4d SM and 5d iTFT coupled path integral**: We can write down a fully gauge-diffeomorphism-invariant path integral by coupling a 4d SM action (11) on \(M^{4}\) with a 5d iTFT action (20) on \(M^{5}\) with \(M^{4}=\partial M^{5}\): \[Z[M^{4},M^{5};A_{\mathbf{Q}},A_{\mathbf{L}}]=\big{(}\int[\mathcal{D}\psi_{L}][\mathcal{D}\psi_{L}^{\dagger}][\mathcal{D}A_{I}]\mathrm{e}^{\mathrm{i}\,S_{\mathrm{SM}}[M^{4};A_{\mathbf{Q}},A_{\mathbf{L}}]}\big{)}\cdot\mathrm{e}^{\mathrm{i}\,S_{5}[M^{5};A_{\mathbf{Q}},A_{\mathbf{L}}]}.\] (22) Here we have included the dynamical gauge fields (namely \(A_{I=1,2,3}\) for \(\mathcal{G}_{\mathrm{SM}}\equiv su(3)\times su(2)\times u(1)_{\tilde{Y}}\)), background gauge fields (namely \(A_{\mathbf{Q}}\) for \(\mathrm{U}(1)_{\mathbf{Q}}\), and \(A_{\mathbf{L}}\) for \(\mathrm{U}(1)_{\mathbf{L}}\)), and the background gravity fields. Importantly, the dynamical gauge fields \(A_{I}\) are restricted to the 4d manifold \(M^{4}\), while both background gauge fields, \(A_{\mathbf{Q}}\) and \(A_{\mathbf{L}}\), and background gravity can couple to and propagate between the 4d SM theory and the 5d bulk. Two convenient combinations of the \(A_{\mathbf{Q}}\) and \(A_{\mathbf{L}}\) gauge fields indeed contain two kinds of Spin\({}^{c}\) gauge fields: \(A_{\mathbf{Q}-N_{c}\mathbf{L}}\) and \(A_{\mathbf{Q}+N_{c}\mathbf{L}}\). So we can probe the anomalies associated with Spin\({}^{c}\) structures. The continuous \(\mathrm{U}(1)_{\mathbf{Q}}\) and \(\mathrm{U}(1)_{\mathbf{L}}\) symmetry transformations give \(\alpha\) phase variations on the Weyl fermions as in (12), as well as the gauge field transformations \(A_{\mathbf{Q}}\mapsto A_{\mathbf{Q}}+\mathrm{d}\alpha_{\mathbf{Q}}\) and \(A_{\mathbf{L}}\mapsto A_{\mathbf{L}}+\mathrm{d}\alpha_{\mathbf{L}}\). Following Noether's theorem with quantum anomaly, we can derive the anomalous current nonconservation in the 4d SM's path integral: \[\int[\mathcal{D}\psi_{L}][\mathcal{D}\psi_{L}^{\dagger}][\mathcal{D}A_{I}]\,\mathrm{e}^{\,\mathrm{i}\int\big{(}\mathrm{d}^{4}x\,\mathcal{L}_{\mathrm{SM}}+\alpha_{\mathbf{Q}}\,(\mathrm{d}\star j_{\mathbf{Q}})+\alpha_{\mathbf{L}}\,(\mathrm{d}\star j_{\mathbf{L}})+(N_{c}\alpha_{\mathbf{Q}}+\alpha_{\mathbf{L}})N_{f}\big{(}18\tfrac{c_{1}(\mathrm{U}(1)_{\tilde{Y}})^{2}}{2}+c_{2}(\mathrm{SU}(2))\big{)}+(-N_{f}+n_{\nu_{R}})\,\alpha_{\mathbf{L}}\big{(}\tfrac{c_{1}(\mathrm{U}(1)_{\mathbf{L}})^{2}}{6}-\tfrac{p_{1}(TM)}{24}\big{)}\big{)}}.\] (23) Here \(j_{\mathbf{Q}}=j_{\mathbf{Q}\mu}\mathrm{d}x^{\mu}=q_{\mathbf{Q}}(\psi_{L\mathbf{Q}}^{\dagger}\bar{\sigma}_{\mu}\psi_{L\mathbf{Q}})\mathrm{d}x^{\mu}\) and \(j_{\mathbf{L}}=j_{\mathbf{L}\mu}\mathrm{d}x^{\mu}=q_{\mathbf{L}}(\psi_{L\mathbf{L}}^{\dagger}\bar{\sigma}_{\mu}\psi_{L\mathbf{L}})\mathrm{d}x^{\mu}\), where \(\psi_{L\mathbf{Q}}\) and \(\psi_{L\mathbf{L}}\) respectively contain the quark and lepton sectors of the Weyl fermion multiplet \(\psi_{L}\) in (9). The quark number \(q_{\mathbf{Q}}\) and lepton number \(q_{\mathbf{L}}\) are \(+1\) for left-handed particles and \(-1\) for right-handed anti-particles.
The divergences of the currents are given by \(\mathrm{d}\star j_{\mathbf{Q}}=\partial^{\mu}j_{\mathbf{Q}\mu}\,\mathrm{d}^{4}x\) and \(\mathrm{d}\star j_{\mathbf{L}}=\partial^{\mu}j_{\mathbf{L}\mu}\,\mathrm{d}^{4}x\).5 The violation of the quark \(\mathbf{Q}\) and lepton \(\mathbf{L}\) currents by the mixed gauge anomalies or mixed gravitational anomalies on the quantum level reads: Footnote 5: More precisely, here and below, when gravity and curved spacetime is involved, we have \(\mathrm{d}\star j=\partial^{\mu}(\sqrt{-g}\,j_{\mu})\,\mathrm{d}^{4}x\) with the spacetime metric \(g_{\mu\nu}\). \[\begin{array}{rcl}\mathrm{d}\star j_{\mathbf{Q}}&=&-N_{c}N_{f}\big{(}18\tfrac{c_{1}(\mathrm{U}(1)_{\tilde{Y}})^{2}}{2}+c_{2}(\mathrm{SU}(2))\big{)},\\ \mathrm{d}\star j_{\mathbf{L}}&=&-N_{f}\big{(}18\tfrac{c_{1}(\mathrm{U}(1)_{\tilde{Y}})^{2}}{2}+c_{2}(\mathrm{SU}(2))\big{)}-(-N_{f}+n_{\nu_{R}})\,\big{(}\tfrac{c_{1}(\mathrm{U}(1)_{\mathbf{L}})^{2}}{6}-\tfrac{p_{1}(TM)}{24}\big{)},\\ \mathrm{d}\star j_{\mathbf{Q}-N_{c}\mathbf{L}}&=&+(-N_{f}+n_{\nu_{R}})\,\big{(}N_{c}^{3}\tfrac{c_{1}(\mathrm{U}(1)_{\mathbf{L}})^{2}}{6}-N_{c}\tfrac{p_{1}(TM)}{24}\big{)},\\ \mathrm{d}\star j_{\mathbf{Q}+N_{c}\mathbf{L}}&=&-2N_{c}N_{f}\big{(}18\tfrac{c_{1}(\mathrm{U}(1)_{\tilde{Y}})^{2}}{2}+c_{2}(\mathrm{SU}(2))\big{)}-(-N_{f}+n_{\nu_{R}})\,\big{(}N_{c}^{3}\tfrac{c_{1}(\mathrm{U}(1)_{\mathbf{L}})^{2}}{6}-N_{c}\tfrac{p_{1}(TM)}{24}\big{)}.\end{array}\] (24) Eq. (23) shows the 4d SM perspective. But from the anomaly inflow perspective, those are the boundary currents in 4d that inflow to the bulk currents in 5d. The 5d bulk currents (denoted by \(J_{\mathbf{Q}}\) and \(J_{\mathbf{L}}\)) can be introduced by adding an extra term \(\int(A_{\mathbf{Q}}\wedge\star J_{\mathbf{Q}}+A_{\mathbf{L}}\wedge\star J_{\mathbf{L}})\) to the original 5d bulk action \(S_{5}[M^{5};A_{\mathbf{Q}},A_{\mathbf{L}}]\). We then obtain the equalities following (24) as the boundary-bulk current inflow relations \(\mathrm{d}\star j_{\mathbf{Q}}=\star J_{\mathbf{Q}}\) and \(\mathrm{d}\star j_{\mathbf{L}}=\star J_{\mathbf{L}}\), similarly for \(\mathrm{d}\star j_{\mathbf{Q}-N_{c}\mathbf{L}}=\star J_{\mathbf{Q}-N_{c}\mathbf{L}}\) and \(\mathrm{d}\star j_{\mathbf{Q}+N_{c}\mathbf{L}}=\star J_{\mathbf{Q}+N_{c}\mathbf{L}}\). When the continuous \(\mathrm{U}(1)_{\mathbf{Q}-N_{c}\mathbf{L}}\) symmetry is restricted to the discrete \(\mathbb{Z}_{4,X}\), the analogous coupled 4d-5d path integral is written with the 5d iTFT (21) in place of \(S_{5}\),
where \(A_{\mathbb{Z}_{4,X}}\) is precisely a Spin \(\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{4,X}\) gauge field that couples to and communicates between the 4d SM theory and the 5d bulk.
## III Categorical symmetry from mixed \(\mathrm{U}(1)\)-gravitational anomaly
In Sec. II, we reviewed the violation of the continuous \(\mathbf{B-L}\) symmetry (more precisely, \(\mathrm{U}(1)_{\mathbf{Q}-N_{c}\mathbf{L}}\)) and thus the nonconservation of its current \(j_{\mathbf{Q}-N_{c}\mathbf{L}}\) of the SM, due to the pure \(\mathrm{U}(1)^{3}\) anomaly and the mixed \(\mathrm{U}(1)\)-gravity\({}^{2}\) anomaly in (24):
\[\mathrm{d}\star j_{\mathbf{Q}-N_{c}\mathbf{L}}=(-N_{f}+n_{\nu_{R}})\,\left(N_{ c}^{3}\frac{c_{1}(\mathrm{U}(1)_{\mathbf{L}})^{2}}{6}-N_{c}\frac{p_{1}(TM)}{24} \right).\]
In this section, we simply denote \(j_{\mathbf{Q}-N_{c}\mathbf{L}}\) as \(j\), and consider a mathematically motivated general expression (with the reason to be explained),
\[\mathrm{d}\star j=k_{1}\,\frac{c_{1}^{2}}{3!}+k_{2}\,p_{1} \tag{26}\]
that corresponds to a general degree 6 anomaly polynomial for the \(\mathrm{U}(1)\) gauge theory:
\[I_{6}=k_{1}\,\frac{c_{1}^{3}}{3!}+k_{2}\,c_{1}p_{1} \tag{27}\]
where, as in Section I.2, \(c_{1}=F/(2\pi)\), with \(F=\mathrm{d}A\), is the Chern-Weil representative 2-form of the first Chern class of the \(\mathrm{U}(1)\) bundle with connection 1-form \(A\), and \(p_{1}=-\mathrm{Tr}[R\wedge R]/(8\pi^{2})\), with \(R=\mathrm{d}\omega+\omega\wedge\omega\), is the 2-form representative of the first Pontryagin class of the spacetime tangent bundle \(TM^{4}\) with Levi-Civita spin-connection 1-form \(\omega\). Note that in general the coefficients \(k_{1,2}\) cannot be arbitrary numbers. Their possible values are determined by the Atiyah-Singer index theorem [77] (see Appendix B for a review). Assuming \(\mathbb{Z}_{2}^{F}\subset\mathrm{U}(1)\), as in the case of the \(\mathbf{B-L}\) symmetry in the Standard Model, they must satisfy the conditions
\[k_{1}=24\ell+k,\qquad k_{2}=-\frac{k}{24}, \tag{28}\]
for some \(k,\ell\in\mathbb{Z}\) (if \(\mathbb{Z}_{2}^{F}\not\subset U(1)\) we have instead \(\ell\in\frac{1}{4}\mathbb{Z}\)). Moreover, any values of \(k\) and \(\ell\) can be realized by considering all possible 4d QFTs. Note that, generically, unless \(k=0\mod 24\), the presence of mixed \(\mathrm{U}(1)\)-gravitational anomaly implies the presence of the pure \(\mathrm{U}(1)\) anomaly.
For the Standard Model setup considered in Section II:
\[k=(-N_{f}+n_{\nu_{R}})\,N_{c},\qquad\ell=(-N_{f}+n_{\nu_{R}})\,\frac{N_{c}^{3} -N_{c}}{24}. \tag{29}\]
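As a small arithmetic cross-check (ours, not from the paper), the Standard Model values (29) indeed reproduce, through (28), the coefficients of the \(j_{\mathbf{Q}-N_{c}\mathbf{L}}\) nonconservation quoted above, namely \(k_{1}=(-N_{f}+n_{\nu_{R}})N_{c}^{3}\) and \(k_{2}=-(-N_{f}+n_{\nu_{R}})N_{c}/24\). A minimal sketch:

```python
from fractions import Fraction

def anomaly_coeffs(N_f, n_nu_R, N_c=3):
    """(k, ell) of Eq. (29) and the resulting (k1, k2) of Eq. (28)."""
    d = -N_f + n_nu_R
    k = d * N_c
    ell = Fraction(d * (N_c**3 - N_c), 24)   # an integer for N_c = 3, since 3**3 - 3 = 24
    k1 = 24 * ell + k                        # Eq. (28)
    k2 = Fraction(-k, 24)
    assert k1 == d * N_c**3                  # matches the c1^2 coefficient above
    assert k2 == -Fraction(d * N_c, 24)      # matches the p1 coefficient above
    return k, ell, k1, k2

for n_nu_R in range(4):                      # scan over right-handed neutrino numbers
    print(n_nu_R, anomaly_coeffs(N_f=3, n_nu_R=n_nu_R))
```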
To consider first the effect of the mixed \(\mathrm{U}(1)\)-gravitational anomaly only, we assume that we are on a connected spacetime spin 4-manifold \(M^{4}\) and in the trivial background \(\mathrm{U}(1)\) gauge field. The current \(j\) of the global \(\mathrm{U}(1)\) symmetry is not conserved, but satisfies:
\[\mathrm{d}\star j=-\frac{k}{24}p_{1}=\frac{k}{24}\frac{1}{8\pi^{2}}\mathrm{Tr }[R\wedge R]. \tag{30}\]
Consider the naive (i.e. Noether) charge operator, corresponding to the rotation by the angle \(\alpha\in 2\pi\cdot\mathbb{R}/\mathbb{Z}\cong\mathrm{U}(1)\) and supported on an oriented connected 3-submanifold3 \(Y\subset M^{4}\):
Footnote 3: In general it is not required to be a submanifold, just a 3-cycle with \(\mathrm{U}(1)\) coefficients. Such a cycle corresponds to a network of charge operators. The discussion in principle can be generalized to this more general case. However, all the ingredients of the construction of noninvertible defects will have locality property, therefore the final result will give a local definition of the noninvertible defect.
\[U_{\alpha}(Y)=\mathrm{e}^{\mathrm{i}\,\alpha\,\int_{Y}\,\star j}. \tag{31}\]
By slightly abusing notation we will use the same symbol for a chosen lift of \(\alpha\) to \(\mathbb{R}\) (we can always choose a representative \(\alpha\) to be in the interval \([0,2\pi)\)).
Eq. (30) implies that this operator is actually not topological. Namely, consider a slightly deformed support \(Y^{\prime}\). By Stokes' theorem, the change of the naive charge operator is the following:
\[U_{\alpha}(Y^{\prime})\,U_{\alpha}(Y)^{-1}=\,{\rm e}^{{\rm i}\,\alpha\left(\int_{Y^{\prime}}\star j-\int_{Y}\star j\right)}=\,{\rm e}^{{\rm i}\,\alpha\int_{Z}\,{\rm d}\star j}=\,{\rm e}^{-\frac{{\rm i}\,k\alpha}{24}\int_{Z}\,p_{1}} \tag{32}\]
where \(Z\) is the 4-chain such that \(\partial Z=Y^{\prime}-Y\) (see Fig. 1). The topological noninvariance then can be fixed using the fact that locally \(p_{1}=-\,{\rm dGCS}/(2\pi)\), where GCS is the gravitational Chern-Simons 3-form
\[{\rm GCS}:=\frac{1}{4\pi}{\rm Tr}[\omega\wedge d\omega+\frac{2}{3}\,\omega \wedge\omega\wedge\omega]. \tag{33}\]
That is, it is the Chern-Simons 3-form of the Levi-Civita spin-connection 1-form \(\omega\) on the spacetime tangent bundle \(TM\)4.
Footnote 4: More formally this can be described as follows. First note that \(NY=TM|_{Y}/TY\), where \(NY\) is the rank one normal bundle over \(Y\), \(TM|_{Y}\) is the restriction of the rank 4 tangent bundle of \(M\) to the submanifold, and the quotient is performed fiberwise. It is known that \(TY\) can be always trivialized globally over the 3-manifold \(Y\). Let us choose a particular trivialization isomorphism \(\varphi_{T}:TY\rightarrow\mathbb{R}^{3}\times Y\). Moreover, since both \(M\) and \(Y\) are oriented, \(NY\) can be also globally trivialized, \(\varphi_{N}:NY\rightarrow\mathbb{R}\times Y\), and there is a canonical choice. Therefore \(TM|_{Y}\) itself can be also trivialized with trivialization \((\varphi_{T},\varphi_{N}):TM|_{Y}\rightarrow(\mathbb{R}^{3}\times\mathbb{R})\times Y\).
This noninvariance can be fixed by modifying the charge operator as follows:
\[\tilde{U}_{\alpha}(Y)=\,{\rm e}^{{\rm i}\,\alpha\int_{Y}(\star j-\frac{k\,{ \rm GCS}}{24\cdot 2\pi})}. \tag{34}\]
Note that in order to define GCS we have to make a choice of 4d vierbein, that is a particular trivialization of the tangent bundle \(TM\)4. Since we only need to integrate GCS over \(Y\) we need to choose the vierbein over (a small neighborhood of) \(Y\). Such a choice of 4d vierbein can be made by choosing a 3d vierbein on \(Y\) and supplementing it with the normal unit vector. The modification of the charge operator (34) is very similar to the modification considered in [58; 59] in the case of the ABJ-type anomaly between a global U(1) and a gauged U(1) symmetry. There, instead of the gravitational Chern-Simons term, the Chern-Simons term of the bulk gauge U(1) was considered.
For a small deformation of \(Y\) to \(Y^{\prime}\) we can consider the extension of the trivialization of the tangent bundle to \(Z\). By Stokes' theorem we then have
\[\int_{Y^{\prime}}{\rm GCS}-\int_{Y}{\rm GCS}=\int_{Z}{\rm dGCS}=-2\pi\int_{Z}p _{1} \tag{35}\]
and therefore
\[\tilde{U}_{\alpha}(Y^{\prime})=\tilde{U}_{\alpha}(Y). \tag{36}\]
Figure 1: A schematic drawing of a small deformation of a submanifold \(Y\subset M\) to \(Y^{\prime}\). The shaded domain depicts \(Z\) such that \(\partial Z=Y^{\prime}-Y\).

However, the definition of the \(\tilde{U}(Y)\) above required a choice of trivialization of \(TY\), and there is no canonical choice. A change of the trivialization corresponds to a gauge transformation of the spin-connection which is used to define the gravitational Chern-Simons 3-form. And although the integral \(\int_{Y}{\rm GCS}\) is invariant under continuous changes of the trivialization (i.e. "small gauge transformations"), it changes under the large ones (i.e. "large gauge transformations") by an integral multiple of \(2\pi\) [87]. More specifically, the isotopy classes of trivializations of \(TY\), i.e. framings of the tangent bundle, form a torsor over \(\mathrm{H}^{3}(Y,\mathbb{Z})\cong\mathbb{Z}\). Under the change of framing by \(f\in\mathbb{Z}\) units the charge operator changes as follows:
\[\tilde{U}_{\alpha}(Y)\;\to\;\tilde{U}_{\alpha}(Y)\,e^{-\frac{i\alpha\,k\,f}{24}}. \tag{37}\]
If \(\alpha/(2\pi)\) is rational, this, however, can be compensated by putting a Witten-Reshetikhin-Turaev-type 3d TQFT \(T\) associated to a 2d rational CFT of a certain chiral central charge \(c\in\mathbb{Q}\). The partition function of such a TQFT on \(Y\) also changes with the change of framing of \(Y\) [87]:
\[Z_{T}[Y]\;\to\;Z_{T}[Y]\;\mathrm{e}^{\frac{2\pi\,\mathrm{i}\,f\,c}{24}}. \tag{38}\]
We then can consider the following family of topological operators:
\[D_{(c,T)}(Y):=\mathrm{e}^{\,\mathrm{i}\,c\int_{Y}\left(\frac{2\pi}{k}\,\star j-\frac{1}{24}\;\mathrm{GCS}\right)}\cdot Z_{T}[Y] \tag{39}\]
labeled by pairs \((c,T)\) where \(c\in\mathbb{Q}\) (such that \(2\pi\,c/k=\alpha\mod 2\pi\) for the original \(\alpha\in 2\pi\cdot\mathbb{Q}/\mathbb{Z}\)), and \(T\) is a TQFT associated to a rational CFT with central charge \(c\). Note that such pairs exist for any \(\alpha\in 2\pi\cdot\mathbb{Q}/\mathbb{Z}\). Although for each \(\alpha\in 2\pi\cdot\mathbb{Q}/\mathbb{Z}\) one could choose specific \(c\) and \(T\), for example of a CS type, with \(c\) being the Sugawara central charge8, we do not necessarily want to do it, because such a collection of operators would not be closed under fusion.
Footnote 8: For example, if \(\alpha=2\pi p/N\mod 2\pi\), with \(p,N\) being positive integers, one can take \(T\) to be a stack of \(p\cdot k\) copies of the \(\mathrm{SU}(2)\) level \(-(6N-2)\) Chern-Simons theory. The total central charge is \(c=-p\,k\,\frac{3(6N-2)}{(6N-2)+2}=\frac{pk}{N}-3pk\), which satisfies the condition \(c/k=\alpha/(2\pi)\mod 1\).
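A quick arithmetic check of this footnote (ours, not from the paper):

```python
from fractions import Fraction

# Stacking p*k copies of SU(2) level -(6N-2) Chern-Simons theory, each of
# chiral central charge -3(6N-2)/((6N-2)+2), gives a total c with
# c/k = p/N mod 1, as claimed in Footnote 8.
def total_c(p, N, k):
    c_each = Fraction(-3 * (6*N - 2), (6*N - 2) + 2)
    return p * k * c_each

for (p, N, k) in [(1, 2, 3), (3, 5, 7), (2, 4, 1)]:
    c = total_c(p, N, k)
    assert (c / k - Fraction(p, N)) % 1 == 0    # c/k = p/N mod 1
    print(p, N, k, c)
```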
The pairs form a commutative monoid \(\mathfrak{N}=\{(c,T)\}\) with the binary operation:
\[(c_{1},T_{1})+(c_{2},T_{2}):=(c_{1}+c_{2},T_{1}\otimes T_{2}) \tag{40}\]
corresponding to the fusion of the operators (cf. [68]):
\[D_{(c_{1},T_{1})}(Y)D_{(c_{2},T_{2})}(Y)=D_{(c_{1}+c_{2},T_{1}\otimes T_{2})}( Y)\equiv D_{(c_{1},T_{1})+(c_{2},T_{2})}(Y). \tag{41}\]
This monoid is related to the subgroup \(\mathbb{Q}/\mathbb{Z}\subset\mathbb{R}/\mathbb{Z}\cong\mathrm{U}(1)\) of the original invertible symmetry by a surjective morphism of monoids:
\[\begin{array}{rcl}\mathfrak{N}&\longrightarrow&\mathbb{Q}/\mathbb{Z},\\ (c,T)&\longmapsto&\frac{\alpha}{2\pi}=c/k\mod 1.\end{array} \tag{42}\]
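For concreteness, the following schematic Python sketch (our own toy encoding, not the paper's categorical formalism) models the monoid \(\mathfrak{N}\) of pairs \((c,T)\) with the fusion (40)-(41) and checks that the map (42) is a morphism of monoids; "Ising" and "SU(2)_1" below are merely illustrative factor labels carrying their known chiral central charges.

```python
from dataclasses import dataclass
from fractions import Fraction

@dataclass(frozen=True)
class DefectLabel:
    c: Fraction   # rational chiral central charge
    T: tuple      # TQFT modeled opaquely as a sorted tuple of factor names

    def fuse(self, other):
        # binary operation (40): add central charges, stack TQFTs
        return DefectLabel(self.c + other.c, tuple(sorted(self.T + other.T)))

def to_QmodZ(label, k):
    """Morphism (42): (c, T) |-> alpha/(2*pi) = c/k mod 1."""
    return (label.c / k) % 1

k = 3
d1 = DefectLabel(Fraction(1, 2), ("Ising",))   # Ising TQFT, c = 1/2
d2 = DefectLabel(Fraction(1), ("SU(2)_1",))    # SU(2) level 1, c = 1
d12 = d1.fuse(d2)
# the morphism respects fusion: image of the fused defect = sum of images mod 1
assert to_QmodZ(d12, k) == (to_QmodZ(d1, k) + to_QmodZ(d2, k)) % 1
print(to_QmodZ(d1, k), to_QmodZ(d2, k), to_QmodZ(d12, k))   # 1/6 1/3 1/2
```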
The operators (39) are noninvertible when \(c\notin\frac{1}{2}\mathbb{Z}\) because \(T\) is necessarily a noninvertible TQFT. When \(c\in\frac{1}{2}\mathbb{Z}\) one can choose \(T\) to be invertible, if one considers it as a spin TQFT. For example, one can take \(T\) to be a stack of Ising spin-TQFTs. Note that there is a canonical spin structure induced on the codimension one submanifold \(Y\) from the spin structure on \(M\). This, however, requires a choice of spin-structure on \(M\). The invertibility for \(c\in\frac{1}{2}\mathbb{Z}\) corresponds to the fact that the \(\mathbb{Z}_{2k}\equiv\mathbb{Z}/(2k\mathbb{Z})\) subgroup of the \(\mathrm{U}(1)\) symmetry can be preserved as an invertible symmetry.
Now let us consider the additional effect of the pure \(\mathrm{U}(1)\) anomaly. To do so let us turn on a nontrivial background \(\mathrm{U}(1)\) gauge field. Moreover, if \(\mathbb{Z}_{2}^{F}\subset U(1)\), we can now drop the assumption of the spacetime manifold being spin (note that although not every 4-manifold admits a spin structure, every 4-manifold does admit a Spin\({}^{c}\) structure). The anomaly polynomial of the general form (27) implies that
\[\mathrm{d}*j=\frac{24\ell+k}{6}\,c_{1}^{2}-\frac{k}{24}p_{1}. \tag{43}\]
Because of the first term the defect defined in (39) will no longer be topological. Namely, a deformation of \(Y\) into \(Y^{\prime}\) changes the operator as follows:
\[D_{(c,T)}(Y^{\prime})D_{(c,T)}(Y)^{-1}=e^{2\pi\,\mathrm{i}\,c\,\frac{24\ell+k}{k}\int_{Z}\frac{F^{2}}{6\cdot 4\pi^{2}}}. \tag{44}\]
However, this can be fixed as in [58; 59], by putting on top of it an additional abelian TQFT that couples to the bulk \(\mathrm{U}(1)\) gauge field. The main difference is that in our setup this \(\mathrm{U}(1)\) gauge field is not dynamical in the bulk.
Such an abelian TQFT can be always realized by a \(\mathrm{U}(1)^{L}\) Chern-Simons theory with a certain \(L\times L\) symmetric level matrix \(K\) with integral elements \(K_{ij}\in\mathbb{Z}\). The coupling to the external 4d \(\mathrm{U}(1)\) gauge field \(A\) then can be described by a choice of an integral vector \(n\in\mathbb{Z}^{L}\). The matrix \(K\) can be thought of as defining an integral rank
\(L\) lattice \(\Lambda\cong\mathbb{Z}^{L}\) equipped with the quadratic form given by \(K\): \((a,b)_{\Lambda}=a^{T}Kb,\ a,b\in\Lambda\). Then one can consider \(n\in\Lambda^{*}\) as an element of the dual lattice in a basis-independent way. The path integral for the partition function of the TQFT defined on 3-manifold \(Y\) reads:
\[Z^{\rm ab}_{(\Lambda,n\in\Lambda^{*})}[Y;A]=\int\prod_{i=1}^{L}[\mathcal{D}a_{ i}]\;{\rm e}^{\frac{i}{2\pi}\,f_{Y}\left(\frac{1}{2}\sum_{i,j=1}^{L}K_{ij}a_{i} \wedge da_{j}+\sum_{i=1}^{L}n_{i}a_{i}\wedge\mathrm{d}A\right)} \tag{45}\]
where \(a_{i}\) are internal 3d dynamical U(1) gauge fields. The theory depends only on the isomorphism class of the lattice, since the theories with equivalent matrices are related by field redefinition. The classification of the theories inequivalent on the quantum level, without coupling to the external field, is given in [88; 89]9.
Footnote 9: Namely, the invariant data of the theory on the quantum level is \(\sigma=\mathrm{sign}\,\Lambda\), the signature of the lattice, and the discriminant group \(\mathsf{D}=\Lambda^{*}/\Lambda\) together with a quadratic refinement \(\mathsf{q}:\mathsf{D}\to\mathbb{Q}/\mathbb{Z}\) of the bilinear form \(\mathsf{D}\times\mathsf{D}\to\mathbb{Q}/\mathbb{Z}\), \((a,b)\mapsto(a,b)_{\Lambda}\mod 1\). The element \(n\in\Lambda^{*}\) defining the coupling to the external field \(A\) then descends to an element \([n]\in\mathsf{D}\). For a given \([n]\), however, the partition function is independent of the choice of representative \(n\) only up to a gauge invariant counterterm, namely a factor of the form \(\exp\big{(}\frac{\mathrm{i}\,r}{4\pi}\int_{Y}A\wedge\mathrm{d}A\big{)}\) for an _integer_ \(r\in\mathbb{Z}\).
By performing the Gaussian integration over \(a_{i}\) in (45) one can see that the change in (44) will be cancelled if
\[(n,n)_{\Lambda}\equiv\,n^{T}K^{-1}n=c\,\frac{24\ell+k}{3k} \tag{46}\]
which can be always satisfied by an appropriate choice10 of \(K\) and \(n\).
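For illustration only (a hypothetical choice of ours, not necessarily the explicit one intended in Footnote 10): any rational target value of \((n,n)_{\Lambda}=n^{T}K^{-1}n\) can be realized already at rank 2, e.g. by \(K=\big{(}\begin{smallmatrix}0&b\\ b&ab\end{smallmatrix}\big{)}\) and \(n=(1,a)\) for a target fraction \(a/b\). A quick numpy check:

```python
import numpy as np
from fractions import Fraction

def pairing(K, n):
    """(n, n)_Lambda = n^T K^{-1} n for an integral level matrix K."""
    K = np.array(K, dtype=float)
    n = np.array(n, dtype=float)
    return n @ np.linalg.solve(K, n)

a, b = 5, 7                    # hypothetical target (n, n)_Lambda = 5/7
K = [[0, b], [b, a * b]]       # rank-2 integral level matrix (our ansatz)
n = [1, a]
assert np.isclose(pairing(K, n), a / b)
print(Fraction(a, b), pairing(K, n))
```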
This extra abelian TQFT, however, also has a framing anomaly corresponding to the chiral central charge equal to the signature of the lattice, \(\mathrm{sign}\,\Lambda\). Therefore one should adjust the TQFT \(T\) that appeared above to have central charge \((c-\mathrm{sign}\,\Lambda)\) instead. Note that if the ambient spacetime \(M^{4}\) is not considered to be spin (with a chosen spin structure), there is no canonically induced spin structure on \(Y\subset M^{4}\) and one has to stay in the realm of bosonic TQFTs, meaning that the lattice \(\Lambda\) must be even. The condition (46) still can always be satisfied, for example by the choice as in Footnote 10. The fully dressed defect operators then read:

Footnote 10: The right-hand side of (46) is a certain fraction that can always be represented by a pair of integers \(p^{\prime}\geq 0\) and \(N\neq 0\).
\[D_{(c,T,\Lambda,n)}(Y):=\,{\rm e}^{\,{\rm i}\,c\int_{Y}\left(\frac{2\pi}{k}\star j-\frac{1}{24}\;\mathrm{GCS}\right)}\cdot Z_{T}(Y)\cdot Z^{\rm ab}_{(\Lambda,n)}(Y;A). \tag{47}\]
The defects are now labeled by quadruples \((c,T,\Lambda,n)\) where \(c\in\mathbb{Q}\), \(\Lambda\) is an integral lattice with \(n\in\Lambda^{*}\) such that11 \((n,n)_{\Lambda}=-c\,(24\ell+k)/(3k)\), and \(T\) is a Witten-Reshetikhin-Turaev-type 3d TQFT corresponding to a rational conformal field theory with central charge \((c-\mathrm{sign}\,\Lambda)\). The relation to the original \(\alpha\) in the naive undressed defect is as before: \(\alpha/(2\pi)=c/k\mod 1\). Moreover, two quadruples define the same defect if they satisfy the equivalence relation
Footnote 11: Note that unless \(24\ell+k=0\), \(c\), the first entry of the quadruple, is completely determined by the pair \((\Lambda,n)\).
\[(c,T,\Lambda\oplus\Lambda^{\prime},n\oplus 0)\;\sim\;(c,T\otimes T^{\rm ab}_{ \Lambda^{\prime}},\Lambda,n) \tag{48}\]
where \(T^{\rm ab}_{\Lambda^{\prime}}\) is the abelian TQFT associated to the lattice \(\Lambda^{\prime}\). The equivalence relation corresponds to absorbing \(T^{\rm ab}_{\Lambda^{\prime}}\), a part of the abelian TQFT which is not coupled to the bulk field \(A\), into the TQFT \(T\). As before, the quadruples form a commutative monoid \(\mathfrak{N}^{\prime}=\{(c,T,\Lambda,n)\}\) with the binary operation:
\[(c_{1},T_{1},\Lambda_{1},n_{1})+(c_{2},T_{2},\Lambda_{2},n_{2}):=(c_{1}+c_{2},T_ {1}\otimes T_{2},\Lambda_{1}\oplus\Lambda_{2},n_{1}\oplus n_{2}) \tag{49}\]
corresponding to the fusion of the operators (cf. [68]):
\[D_{(c_{1},T_{1},\Lambda_{1},n_{1})}(Y)D_{(c_{2},T_{2},\Lambda_{2},n_{2})}(Y)=D_{ (c_{1},T_{1},\Lambda_{1},n_{1})+(c_{2},T_{2},\Lambda_{2},n_{2})}(Y). \tag{50}\]
The relations between the elements of quadruples are respected by the binary operation because \(\mathrm{sign}\,(\Lambda_{1}\oplus\Lambda_{2})=\mathrm{sign}\,\Lambda_{1}+ \mathrm{sign}\,\Lambda_{2}\), \((n_{1}\oplus n_{2},n_{1}\oplus n_{2})_{\Lambda_{1}\oplus\Lambda_{2}}=(n_{1},n_{ 1})_{\Lambda_{1}}+(n_{2},n_{2})_{\Lambda_{2}}\).
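As a sanity check of this additivity (ours), a small numpy sketch verifying that the signature and the dual pairing \((n,n)_{\Lambda}\) are both additive under direct sums, so that the fusion (49) respects the constraint relating \(c\), \(\Lambda\), and \(n\):

```python
import numpy as np

def signature(K):
    eig = np.linalg.eigvalsh(np.array(K, dtype=float))
    return int(np.sum(eig > 0) - np.sum(eig < 0))

def dual_pairing(K, n):
    K = np.array(K, dtype=float)
    n = np.array(n, dtype=float)
    return n @ np.linalg.solve(K, n)

K1, n1 = [[2, 1], [1, 2]], [1, 0]   # the A2 root lattice (even, positive definite)
K2, n2 = [[-4]], [3]                # a rank-1 even negative definite lattice

# block-diagonal direct sum Lambda_1 (+) Lambda_2, with n = n1 (+) n2
K12 = np.block([[np.array(K1), np.zeros((2, 1))],
                [np.zeros((1, 2)), np.array(K2)]])
n12 = np.array(n1 + n2)

assert signature(K12) == signature(K1) + signature(K2)
assert np.isclose(dual_pairing(K12, n12),
                  dual_pairing(K1, n1) + dual_pairing(K2, n2))
print(signature(K12), dual_pairing(K12, n12))   # 1, 2/3 - 9/4
```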
This monoid is related to the subgroup \(\mathbb{Q}/\mathbb{Z}\subset\mathbb{R}/\mathbb{Z}\cong\mathrm{U}(1)\) of the original invertible symmetry by a surjective morphism of monoids:
\[\begin{array}{rcl}\mathfrak{N}^{\prime}&\longrightarrow&\mathbb{Q}/\mathbb{Z},\\ (c,T,\Lambda,n)&\longmapsto&\frac{\alpha}{2\pi}=c/k\mod 1.\end{array} \tag{51}\]
Note that for a given \(c\), the defect can be made invertible if \(c(24\ell+k)/(6k)\in\mathbb{Z}\) and also \(c\in 8\mathbb{Z}\).
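As an illustrative aside (our addition, not part of the original derivation), the bookkeeping behind (46)-(51) can be modeled in a few lines of code: a quadruple enters the fusion rule only through \(c\), a symbolic label for \(T\), \(\mathrm{sign}\,\Lambda\), and \((n,n)_{\Lambda}\), all of which are additive. The Python sketch below, with hypothetical values of \(k\), \(\ell\) and of the defect data, checks that the constraint \((n,n)_{\Lambda}=-c\,(24\ell+k)/(3k)\) is preserved under fusion and evaluates the morphism (51).

```python
from dataclasses import dataclass
from fractions import Fraction

k, ell = 6, 0  # hypothetical anomaly coefficients (k, l)

@dataclass(frozen=True)
class Defect:
    """Quadruple (c, T, Lambda, n), with Lambda retained only through
    its signature and the norm (n, n)_Lambda, the only data entering
    (46)-(51); T is a symbolic label for the decorating TQFT."""
    c: Fraction
    T: str
    sign_Lambda: int
    norm_n: Fraction

    def __add__(self, other):
        # fusion (49)/(50): charges add, TQFTs tensor, lattices direct-sum
        return Defect(self.c + other.c, f"({self.T})x({other.T})",
                      self.sign_Lambda + other.sign_Lambda,
                      self.norm_n + other.norm_n)

    def consistent(self):
        # the condition (n, n)_Lambda = -c (24 l + k)/(3 k)
        return self.norm_n == -self.c * (24 * ell + k) / (3 * k)

    def angle(self):
        # the morphism (51) to Q/Z: alpha/2pi = c/k mod 1
        return (self.c / k) % 1

D1 = Defect(Fraction(1), "T1", 1, Fraction(-(24 * ell + k), 3 * k))
D2 = Defect(Fraction(2), "T2", 0, Fraction(-2 * (24 * ell + k), 3 * k))
D = D1 + D2
assert D1.consistent() and D2.consistent() and D.consistent()
print(D.angle())  # image of the fused defect under (51): 1/2 here
```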
## IV Categorical symmetry from mixed \(\mathbb{Z}_{4}\)-gravitational anomaly
Let \(\mathsf{v}\in\mathbb{Z}_{16}\) be the anomaly of the \(\mathbb{Z}_{4}\supset\mathbb{Z}_{2}^{F}\) symmetry. In the Standard Model setup considered in Section II, this symmetry is \(\mathbb{Z}_{4,X}\) and \(\mathsf{v}=-N_{f}+n_{\nu_{R}}\). Let us start with the naive (i.e. classically defined) network\({}^{12}\) \(U(\tilde{Y})\) of charge operators supported on a 3-cycle \(\tilde{Y}\) with \(\mathbb{Z}_{4}\) coefficients (corresponding to the charges assigned to the individual operators in the network). Note that it is not always possible to resolve \(\tilde{Y}\) into a submanifold. Because the symmetry group involves fermion parity, the definition of the charge operator is rather subtle already on the classical level. What we mean by it is the following. A choice of the background \(\mathbb{Z}_{4}\) field corresponds to choosing a \(\text{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{4}\) structure on the spacetime 4-manifold \(M^{4}\). Let us fix one such structure. The _changes_ of structures are in one-to-one correspondence with the elements of \(\text{H}^{1}(M^{4},\mathbb{Z}_{4})\) (meaning that the space of structures is a torsor over this group). The insertion of \(U(\tilde{Y})\) then implements the _change_ of the structure corresponding to the Poincare dual of \([\tilde{Y}]\in H_{3}(M^{4},\mathbb{Z}_{4})\).
Footnote 12: The reason we consider a network of charge operators rather than a single charge operator supported on a submanifold will become apparent later.
On the quantum level the theory has an anomaly corresponding to the 5d invertible TQFT with the following effective action on a 5d spacetime manifold \(M^{5}\):
\[S_{5\text{d}}=\frac{\pi\mathsf{v}\,\eta(\text{PD}(A))}{8}\Big|_{M^{5}} \tag{53}\]
where \(A\) is the \(\mathbb{Z}_{2}\) background gauge field (i.e. the element of \(\text{H}^{1}(M^{5},\mathbb{Z}_{2})\)) defined from the \(\text{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{4}\) structure by the quotient map:
\[\text{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{4}\longrightarrow\mathbb{Z }_{4}/\mathbb{Z}_{2}^{F}\equiv\mathbb{Z}_{2} \tag{54}\]
and \(\eta\) is the eta-invariant normalized such that \(\eta\) is an integer well-defined modulo 16 on a closed \(\text{Pin}^{+}\) 4-manifold. The expression (53) is not quite mathematically precise. What it actually means is the following. The Poincare dual of \(A\in\text{H}^{1}(M^{5},\mathbb{Z}_{2})\) can be represented by an unoriented closed codimension-1 submanifold in \(M^{5}\). The \(\text{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{4}\) structure on \(M^{5}\) induces a \(\text{Pin}^{+}\) structure on it [84]. The \(\eta(\text{PD}(A))\) in (53) is defined as the eta-invariant of this \(\text{Pin}^{+}\) 4-manifold, which we denote by \(\text{PD}(A)\).
The anomalous 4d theory is unambiguously defined in a general background only if considered as the theory on the boundary of the 5-dimensional spacetime \(M^{5}\). That is, when \(M^{4}\) is considered to be one of the boundary components of \(M^{5}\), with the \(\text{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{4}\) structure on \(M^{4}\) induced from \(M^{5}\). In particular, the boundary components of the submanifold \(\text{PD}(A)\) that lie in \(M^{4}\), that is \(\partial\text{PD}(A)\cap M^{4}=\text{PD}(A|_{M^{4}})\subset M^{4}\), represent the Poincare dual of \(A|_{M^{4}}\in\text{H}^{1}(M^{4},\mathbb{Z}_{2})\), which is defined by the \(\text{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{4}\) structure on \(M^{4}\).
The insertion of the operator network \(U(\tilde{Y})\) has the effect of changing \(\text{PD}(A|_{M^{4}})\) by the union with \(Y=\tilde{Y}\mod 2\), the cycle in \(M^{4}\) with \(\mathbb{Z}_{2}\) coefficients given by the mod 2 reduction of \(\tilde{Y}\) (see Fig. 2). It can always be resolved into a smooth unoriented 3-manifold by a small deformation\({}^{13}\). This means that, on the quantum level, the operator network supported on \(\tilde{Y}\) by itself is not well defined, but becomes so if we extend \(Y=\tilde{Y}\mod 2\) to a 4-dimensional hypersurface in the 5d bulk \(M^{5}\). The effective action, \(\pi\mathsf{v}\eta/8\), supported on the hypersurface is only topological inside the bulk, meaning it is invariant under deformations that preserve the boundary.

Figure 2: A schematic drawing of how a \(\mathbb{Z}_{4}\)-cycle \(\tilde{Y}\) inside \(M^{4}\) looks locally. The cycle corresponds to a network of charge operators on the classical level. The numbers denote the \(\mathbb{Z}_{4}\) charges of the operators in the network. Note that operators of charge \(3=-1\mod 4\) are equivalent to operators of charge \(+1\) but with reversed orientation. Therefore we can assume that there are only two types of nontrivial operators in the network: of charge 1 and 2. Since \(2=-2\mod 4\), the operators of charge 2 do not require a choice of orientation. In blue we depict \(Y\), the mod 2 reduction of \(\tilde{Y}\). It is realized by forgetting in the network all operators of charge 2 and also forgetting the orientation of the operators of charge 1. It can always be deformed into a smooth unoriented submanifold inside \(M^{4}\), which we will denote by the same symbol, \(Y\).
Footnote 13: Note that if we started with \(\tilde{Y}\) being a manifold, the resulting \(Y\) would be either empty or an orientable manifold. This would be quite restrictive for the analysis below. This is the reason why we have started with a nontrivial network of charge operators.
Assume there is a 3d \(\text{Pin}^{+}\) TQFT \(T\) with anomaly described by the 4d effective action \(S_{4d}=-\pi\mathsf{v}\eta/8\), that is, it has anomaly \(-\mathsf{v}\in\mathbb{Z}_{16}=\mathrm{Hom}(\Omega_{4}^{\mathrm{Pin}^{+}},U(1))\). Such TQFTs were considered in [90; 91; 92; 93; 94; 95]. We can then get rid of the dependence on the extension of \(Y\) into the 5d bulk by supplementing the defect network with the TQFT \(T\) supported on \(Y\):
\[D_{T}(\tilde{Y}):=U(\tilde{Y})\,Z_{T}(Y). \tag{55}\]
Due to the anomaly of \(T\), \(Z_{T}(Y)\) itself is only unambiguously defined if considered on the boundary of the 4d TQFT describing the anomaly. We can choose to put this TQFT on another 4-dimensional \(\text{Pin}^{+}\) submanifold \(Z\subset M^{5}\), such that it ends on \(M^{4}\) along \(Y\), that is \(\partial Z\cap M^{4}=Y\). The total effective action of the bulk 5d TQFT on \(M^{5}\) and the bulk 4d TQFT on \(Z\subset M^{5}\) is then
\[S_{5d}+S_{4d}=\frac{\pi\mathsf{v}}{8}\,\eta(\mathrm{PD}(A))-\frac{\pi\mathsf{v}}{8}\,\eta(Z)=\frac{\pi\mathsf{v}}{8}\,\eta(\mathrm{PD}(A)\cup(-Z)). \tag{56}\]
Since \(\partial\mathrm{PD}(A)=Y\sqcup\ldots\) and \(\partial(-Z)=-Y\sqcup\ldots\), we can deform \(\mathrm{PD}(A)\cup(-Z)\) into a smooth hypersurface and push it inside the bulk so that it does not intersect with \(Y\) anymore (see Fig. 3). This will not change the total action (56), since \(\eta\)-invariant is topological in the bulk. But this implies that the operator (55) is well defined by itself as a topological operator in \(M^{4}\), without the need of specifying the 4d extension of \(Y\) into 5d bulk. As before, the defects become noninvertible because \(T\) is a noninvertible TQFT.
Note that the embedding \(\mathbb{Z}_{4}\hookrightarrow\mathrm{U}(1)\) induces the map between the corresponding anomalies in the opposite direction (see App. B):
\[\begin{array}{rcl}\mathbb{Z}^{2}&\longrightarrow&\mathbb{Z}_{16},\\ (k,\ell)&\longmapsto&\mathsf{v}=k-4\ell.\end{array} \tag{57}\]
Under such an embedding the naive charge operator corresponding to the generator of \(\mathbb{Z}_{4}\) can be realized as the charge operator \(U_{\alpha}\) of the \(\mathrm{U}(1)\) symmetry with \(\alpha=\frac{1}{4}\), considered in Section III. There it was shown that on the quantum level the operator can be made topological by supplementing it with the gravitational Chern-Simons term and a TQFT with the corresponding central charge satisfying \(c=k\alpha=k/4\mod k\). Using the anomaly map (57) and taking into account that \(\ell\in\mathbb{Z}\), we then get the relation \(c=\mathsf{v}/4\mod 1\). This is consistent with the fact that the 3d TQFT realizing the \(\text{Pin}^{+}\) anomaly must be supplemented with the 4d bulk action term \(2\pi\,c\,\frac{p_{1}}{24}\) with \(c\) satisfying the condition above [93, Footnote 3 in particular]. The half-integer shifts of the central charge \(c\) can be implemented by stacking with invertible spin-TQFTs.

Figure 3: The 4d hypersurface \(\mathrm{PD}(A)\) inside the 5d bulk, with the action \(\pi\mathrm{i}\mathsf{v}\eta/8\) supported on it and ending on \(Y\subset M^{4}\), is needed to unambiguously define the classical charge operator network \(U(\tilde{Y})\), with \(Y=\tilde{Y}\mod 2\). The 4d hypersurface \(Z\), with the action \(-\pi\mathrm{i}\mathsf{v}\eta/8\), and also ending on \(Y\), is needed to unambiguously define \(Z_{T}(Y)\), the partition function of an anomalous \(\text{Pin}^{+}\) TQFT \(T\). The union \(\mathrm{PD}(A)\cup(-Z)\) can be deformed into a smooth hypersurface and pushed inside the 5d bulk, with the total action unchanged. This means that the product (55) is a well-defined topological defect in 4d, not requiring a choice of extension of \(Y\) into the bulk.
## V Conclusion
In this work, we have shown that although an _invertible_ symmetry can suffer from mixed gravitational anomalies under gravitational backgrounds (such as gravitational instantons), still a certain _noninvertible_ counterpart of an infinite discrete subgroup of this original broken symmetry can be revived as a noninvertible categorical symmetry. We have constructed the noninvertible symmetry charge operators as topological defects, specifically for the case of a mixed \(\text{U}(1)\)-gravitational anomaly [12; 13] and a mixed \(\mathbb{Z}_{4}\)-gravitational anomaly [24; 27; 81; 82; 83; 84]. Building upon the previous construction based on the mixed gauge anomaly pioneered in [58; 59], our construction can be regarded as a natural extension to the mixed gravitational anomaly counterpart. Meanwhile, thanks to the previous systematic classification of the anomalies and the corresponding cobordism class of the Standard Model (SM) [24; 25; 26; 27; 28; 29; 30; 31], we implement the aforementioned mixed anomalies in the SM naturally with the baryon \(\mathbf{B}\) minus lepton \(\mathbf{L}\) number symmetries, such as \(\text{U}(1)=\text{U}(1)_{\mathbf{Q}-N_{c}\mathbf{L}}\) and \(\mathbb{Z}_{4}=\mathbb{Z}_{4,X\equiv 5(\mathbf{B}-\mathbf{L})-\frac{2}{3}\tilde{Y}}\). The anomaly coefficients crucially depend on the difference between the family number and the "right-handed" sterile neutrino number: \((-N_{f}+n_{\nu_{R}})\).
Even under mixed gravitational anomalies, the subgroup of rotations by angles of the form \(\alpha=2\pi p/N\), that is, the \(2\pi\cdot\mathbb{Q}/\mathbb{Z}\) subgroup of the original invertible symmetry \(2\pi\cdot\mathbb{R}/\mathbb{Z}\cong\text{U}(1)_{\mathbf{Q}-N_{c}\mathbf{L}}\), can be revived as a noninvertible symmetry\({}^{14}\). On the other hand, the full \(\mathbb{Z}_{4,X}\) can also be revived, such that the modified \(\mathbb{Z}_{4,X}\) charge operator generates a noninvertible symmetry, while the original normal subgroup \(\mathbb{Z}_{2}^{F}\) fermion parity charge operator still generates an invertible symmetry.
Footnote 14: The maximal _invertible_ symmetry \(\mathbb{Z}_{2m}\subset\text{U}(1)_{\mathbf{B}-\mathbf{L}}\) which is free of any self- and gravitational anomalies can be determined as follows for a given \(-N_{f}+n_{\nu_{R}}\), or, more generally, given integer anomaly coefficients \(k\) and \(\ell\) (as in Section III). The \(m\) is the maximal number such that the image of the anomaly \((k,\ell)\in\mathbb{Z}^{2}\cong\mathrm{Hom}(\Omega_{6}^{\mathrm{Spin}^{c}},\mathbb{Z})\) under the pullback map to \(\mathrm{Hom}(\Omega_{5}^{\mathrm{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{2m}},\mathrm{U}(1))\) is zero. Using the results of [82], this condition explicitly reads as the following system of equations:
\[\left\{\begin{array}{l}\left(2m^{2}+m+1\right)(24\ell+k)-(m+3)k\;=\;0\mod 48m,\\ m(24\ell+k)+k\;=\;0\mod 2m.\end{array}\right. \tag{58}\]
For the Standard Model setup, with \(N_{c}=3\), let \(|-N_{f}+n_{\nu_{R}}|=2^{p}\cdot r\) for some odd \(r\). Then \(m=2^{\max\{p-3,0\}}\cdot 3r\).
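As a side note (our addition), the system (58) is straightforward to scan numerically. The sketch below brute-forces the largest \(m\) below a cutoff for which both congruences hold; the sample coefficients and the search bound are hypothetical, chosen only to exercise the scan.

```python
def max_invertible_m(k, ell, bound=10_000):
    """Largest m <= bound satisfying both congruences in (58), i.e.
    such that Z_{2m} inside U(1)_{B-L} is free of self- and mixed
    gravitational anomalies; returns None if no such m exists."""
    best = None
    for m in range(1, bound + 1):
        s = 24 * ell + k
        eq1 = ((2 * m * m + m + 1) * s - (m + 3) * k) % (48 * m) == 0
        eq2 = (m * s + k) % (2 * m) == 0
        if eq1 and eq2:
            best = m
    return best

print(max_invertible_m(k=3, ell=0))  # hypothetical inputs
```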
for all families), the anomaly-free cancellation holds among the \(\mathrm{U}(1)^{3}_{\mathbf{L}_{e}-\mathbf{L}_{\mu}}\), \(\mathrm{U}(1)^{3}_{\mathbf{L}_{\mu}-\mathbf{L}_{\tau}}\), \(\mathrm{U}(1)_{\mathbf{L}_{e}-\mathbf{L}_{\mu}}\)-gravity\({}^{2}\), and \(\mathrm{U}(1)_{\mathbf{L}_{\mu}-\mathbf{L}_{\tau}}\)-gravity\({}^{2}\) local anomalies. However, the nonvanishing mixed anomaly \(\mathrm{U}(1)_{\mathbf{L}_{e}-\mathbf{L}_{\mu}}\)-\(\mathrm{U}(1)^{2}_{\mathbf{L}_{\mu}-\mathbf{L}_{\tau}}\) implies that in the case of a dynamically gauged \(\mathrm{U}(1)_{\mathbf{L}_{\mu}-\mathbf{L}_{\tau}}\) with the so-called BSM \(Z^{\prime}\) gauge boson, the invertible \(\mathrm{U}(1)_{\mathbf{L}_{e}-\mathbf{L}_{\mu}}\) is broken. But Ref. [60] revives the noninvertible counterpart of the \(\mathrm{U}(1)_{\mathbf{L}_{e}-\mathbf{L}_{\mu}}\) symmetry, and uses this noninvertible symmetry to protect neutrino masses as well as to generate small neutrino masses through the quantum effect of the instantons of the non-abelian 'horizontal' lepton-symmetry gauge group at UV.
\(\bullet\) Ref. [61] studies the flavor symmetries between different families and the aspects of their higher-group symmetries or noninvertible symmetries.
\(\bullet\) In contrast, in our work, we do not look at the low energy QED or QCD below the electroweak scale [58, 59]. We also do not assume any additional BSM \(Z^{\prime}\) gauge boson [60], nor do we require any hypothetical GUT structure [57] or any approximate flavor symmetry at UV [61]. Instead, we only implement the full honest SM gauge structure and matter content, (8) and (9), given by the confirmed experiments. Therefore, by the completeness of anomalies examined in [30, 27, 31], what we obtain is really a _noninvertible categorical symmetry_ of the _minimal_ SM from the mixed gravitational anomaly. It is possible that there are other new types of constructions of noninvertible symmetries beyond what we know of at this moment. But as long as the index \((-N_{f}+n_{\nu_{R}})\neq 0\) in our SM vacuum, then our noninvertible symmetry charge operators are valid topological defects in the SM.
2. **No global symmetry in quantum gravity**: Due to the absence of global symmetries in quantum gravity [70, 71, 72, 62, 73], the noninvertible symmetry must be either (1) completely broken (so the conservation law of the noninvertible symmetry's charge operators must be violated again by another mechanism beyond the original mixed gravitational anomaly) or (2) dynamically gauged in the UV-complete theory at high energy (by gauging, we require the topological defects to condense; namely, the noninvertible symmetry's charge operators must be absorbed and become part of the vacuum). In either case, it will be interesting to work out the details in the future.
3. **Leptogenesis and Baryogenesis, Dirac vs Majorana masses, and exotic-BSM TQFT/CFT sectors**: \(\bullet\)_Leptogenesis_[32] concerns generic hypothetical physical processes that produced the lepton asymmetry (an asymmetry between the numbers of leptons and antileptons) in the very early universe, resulting in the present-day dominance of leptons over antileptons. In particular, a class of scenarios proposes that the baryon asymmetry of the universe is produced from the lepton asymmetry, e.g. generated in the decays of heavy sterile neutrinos.
\(\bullet\)_Gravitational leptogenesis_[33] provides the lepton asymmetry based on the gravitational anomaly (24) so that lepton number violation \(\mathrm{d}\star j_{\mathbf{L}}=\left(-N_{f}+n_{\nu_{R}}\right)\frac{p_{1}}{24 }=\left(-N_{f}+n_{\nu_{R}}\right)\frac{1}{24}\frac{\mathrm{Tr}[R\wedge R]}{8 \pi^{2}}\) comes from the gravitational background and curved spacetime.
\(\bullet\)_Baryogenesis_ and the baryon asymmetry can follow from the sphaleron process, once the lepton asymmetry is produced. The sphaleron converts between \(\mathrm{d}\star j_{\mathbf{L}}\) and \(\mathrm{d}\star j_{\mathbf{Q}}\) via the SU(2) instanton or even U(1) instanton in (24). The lepton and baryon asymmetries also affect the Big Bang _nucleosynthesis_ at later times.
Previously, Ref. [34] studied the gravitational leptogenesis based on Dirac or Majorana neutrino mass scenarios. However, experiments have not yet confirmed (1) whether heavy sterile neutrinos do exist, (2) what the index \((-N_{f}+n_{\nu_{R}})\) is, and (3) what the mass-generating mechanisms are for left-handed neutrinos as well as (if any) for sterile right-handed neutrinos.
One interesting future direction is whether the noninvertible symmetry topological defects provide any new perspectives on the gravitational leptogenesis. To recall, when we focus on the mixed \(\mathbb{Z}_{4,X}\)-gravitational anomaly classified by \((-N_{f}+n_{\nu_{R}})\in\mathbb{Z}_{16}\), we decorate the generating topological defect with the 3d symmetric anomalous boundary topological order of the 4d \(\mathbb{Z}_{4}^{TF}\)-time-reversal symmetric topological superconductor (4d \(\mathrm{Pin}^{+}\) iTFT with \(T^{2}=(-1)^{F}\)) classified by \(\mathbb{Z}_{16}\).
In fact, there has been a proposal to replace the hypothetical heavy sterile neutrino with an exotic 4d TQFT or CFT, called Ultra Unification [28, 69, 29]. In that case, the SM lives with the 4d symmetric anomalous TQFT or CFT on the boundary of the 5d bulk \(\mathbb{Z}_{4,X}\)-symmetric iTFT (5d \(\mathrm{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{4,X}\) iTFT with \(X^{2}=(-1)^{F}\)) classified also by \(\mathbb{Z}_{16}\). It will be illuminating to explore the relations between all these physics better altogether.
###### Acknowledgements.
PP would like to thank Po-Shen Hsin, Mehrdad Mirbabayi and Cumrun Vafa for the relevant discussions. JW appreciates Eduardo Garcia-Valdecasas, Hotat Lam, Justin Kaidi, Gabi Zafrir, and Yunqin Zheng for helpful conversations on the related issues in the past. We are grateful to the hospitality and inspiring venues provided by the conferences of Simons Collaboration on Global Categorical Symmetries and Simons Center for Geometry and Physics in 2022. JW is supported by Harvard University CMSA.
## Appendix A Table of Representations of Quarks and Leptons
For the reader's convenience, we organize the representations of Weyl fermions with respect to various internal symmetry groups in Table 1, including:
\(\bullet\) The SM Lie algebra \(\mathcal{G}_{\text{SM}}\equiv su(3)\times su(2)\times u(1)_{\tilde{Y}}\) is compatible with four versions of the Lie group \(G_{\text{SM}_{\text{q}}}\equiv\frac{\text{SU}(3)\times\text{SU}(2)\times\text{U}(1)_{ \tilde{Y}}}{\mathbb{Z}_{\text{q}}}\) with \(\text{q}=1,2,3,6\). In order to have a proper quantization, we choose the charge of \(\text{U}(1)_{\tilde{Y}}\) as 6 times the charge of the particle physics convention \(\text{U}(1)_{Y}\). The \(\text{U}(1)_{\text{EM}}\) is a linear combination of the \(\text{U}(1)_{T_{3}}\subset\text{SU}(2)\) subgroup and \(\text{U}(1)_{\tilde{Y}}\).
\(\bullet\)\(\text{U}(1)_{\mathbf{Q}-N_{c}\mathbf{L}}\) symmetry (the precise form of \(\text{U}(1)_{\mathbf{B}-\mathbf{L}}\) with properly quantized charges, with the color number \(N_{c}=3\)).
\(\bullet\)\(\mathbb{Z}_{2N_{c}N_{f},\mathbf{Q}+N_{c}\mathbf{L}}\) symmetry (the precise form of \(\mathbb{Z}_{2N_{f},\mathbf{B}+\mathbf{L}}\) with properly quantized charges).
\(\bullet\)\(X\) symmetry, with \(X\equiv 5(\mathbf{B}-\mathbf{L})-4Y\equiv 5(\mathbf{B}-\mathbf{L})-\frac{2}{3} \tilde{Y}=\frac{5}{N_{c}}(\mathbf{Q}-N_{c}\mathbf{L})-\frac{2}{3}\tilde{Y}\), including \(\text{U}(1)_{X}\), \(\mathbb{Z}_{5,X}\), and \(\mathbb{Z}_{4,X}\).
\(\bullet\) Fermion parity \(\mathbb{Z}_{2}^{F}\) symmetry. Note that \(G_{\text{SM}_{q}}\) does not contain \(\mathbb{Z}_{2}^{F}\). So the fermion parity is not dynamically gauged within the \(G_{\text{SM}_{q}}\). The SM requires a spin structure to have fermions. The quotient group \(\frac{\text{Spin}}{\mathbb{Z}_{2}^{F}}=\text{SO}\) gives rise to the bosonic SO special orthogonal group of (local) spacetime rotations.
\(\bullet\)\(\text{SU}(5)\): The multiplet \(\overline{\mathbf{5}}\), \(\mathbf{10}\), and \(\mathbf{1}\) structure of the Georgi-Glashow \(\text{SU}(5)\) grand unified theory. Note that the \(\text{U}(1)_{X}\) is compatible with the \(\text{SU}(5)\) multiplet structure, so together they combine to form a \(u(5)\) or \(su(5)\times u(1)\) structure. More precisely, it is compatible with the refined Lie group \(\text{U}(5)_{\hat{\mathfrak{q}}}\equiv\frac{\text{SU}(5)\times_{\hat{\mathfrak{q}}}\text{U}(1)_{X}}{\mathbb{Z}_{5,X}}\) defined in [57], with \(\hat{\mathfrak{q}}=2\) or 3. Both \(\text{SU}(5)\) and \(\text{U}(1)_{X}\) share the \(\mathbb{Z}_{5,X}\) center normal subgroup, which is quotiented out to define \(\text{U}(5)_{\hat{\mathfrak{q}}}\).
\(\bullet\)\(\text{Spin}(10)\): The multiplet \(\mathbf{16}\) of \(\text{Spin}(10)\). Note that \(\text{Spin}(10)\supset Z(\text{Spin}(10))=\mathbb{Z}_{4,X}\supset\mathbb{Z}_ {2}^{F}\), namely the \(\text{Spin}(10)\) center \(Z(\text{Spin}(10))=\mathbb{Z}_{4}\) can be identified with \(\mathbb{Z}_{4,X}\) which also contains a \(\mathbb{Z}_{2}^{F}\) normal subgroup.
Note that a "sterile right-handed" neutrino (written as a right-handed anti-neutrino \(\bar{\nu}_{R}\) and regarded a left-handed Weyl spinor here in Table 1) is _only sterile_ to the SM's strong and electroweak forces in \(\mathcal{G}_{\text{SM}}\), and _sterile_ to the Georgi-Glashow \(\text{SU}(5)\) gauge force. However, the "sterile right-handed" neutrino is _not sterile_ to but charged under the \(\mathbf{B}\pm\mathbf{L}\) (more precisely \(\text{U}(1)_{\mathbf{Q}\pm N_{c}\mathbf{L}}\)), \(\text{U}(1)_{X}\), \(\mathbb{Z}_{5,X}\), and \(\mathbb{Z}_{4,X}\), and \(\mathbb{Z}_{2}^{F}\).
## Appendix B Anomaly Polynomial Generators for \(\text{U}(1)\) Symmetry in 4d from Index Theorem
Consider a collection of left- and right-moving Weyl fermions with the global \(\text{U}(1)\) symmetry charges \(q_{i},i=1,\ldots,n_{L}\) and \(\tilde{q}_{j},j=1,\ldots,n_{R}\) respectively. They have the following degree 6 anomaly polynomial which can be computed as the index of the 6d Dirac operator via Atiyah-Singer index theorem [13; 77]:
\[I_{6}=\left(\sum_{j=1}^{n_{R}}\tilde{q}_{j}^{3}-\sum_{i=1}^{n_{L}}q_{i}^{3} \right)\frac{c_{1}^{3}}{6}-\left(\sum_{j=1}^{n_{R}}\tilde{q}_{j}-\sum_{i=1}^{ n_{L}}q_{i}\right)\frac{c_{1}p_{1}}{24}. \tag{12}\]
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline
**SM** & & & & & & & & & & & & \\
**fermion** & & & & & & & & & & & & & \\
**spinor** & & & & & & & & & & & & & & \\
**field** & & & & & & & & & & & & & & \\ \hline \(\bar{d}_{R}\) & \(\overline{\mathbf{3}}\) & \(\mathbf{1}\) & \(1/3\) & \(2\) & \(1/3\) & \(-1/3\) & \(-1\) & \(-1\) & \(-3\) & \(-3\) & \(1\) & \(1\) & \(\overline{\mathbf{5}}\) \\ \hline \(l_{L}\) & \(\mathbf{1}\) & \(\mathbf{2}\) & \(-1/2\) & \(-3\) & \(0\) or \(-1\) & \(-1\) & \(-3\) & \(+3\) & \(-3\) & \(-3\) & \(1\) & \(1\) & \(1\) & \\ \hline \(q_{L}\) & \(\mathbf{3}\) & \(\mathbf{2}\) & \(1/6\) & \(1\) & \(2/3\) or \(-1/3\) & \(1/3\) & \(1\) & \(1\) & \(1\) & \(1\) & \(1\) & \(1\) & \(1\) & \\ \hline \(\bar{u}_{R}\) & \(\overline{\mathbf{3}}\) & \(\mathbf{1}\) & \(-2/3\) & \(-4\) & \(-2/3\) & \(-1/3\) & \(-1\) & \(-1\) & \(1\) & \(1\) & \(1\) & \(1\) & \(1\) & \(\mathbf{10}\) & \\ \hline \(\bar{e}_{R}=e_{L}^{+}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(1\) & \(6\) & \(1\) & \(1\) & \(3\) & \(-3\) & \(1\) & \(1\) & \(1\) & \(1\) & \(1\) & \\ \hline \(\bar{\nu}_{R}=\nu_{L}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(0\) & \(0\) & \(0\) & \(1\) & \(3\) & \(-3\) & \(5\) & \(0\) & \(1\) & \(1\) & \(\mathbf{1}\) & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Representations of quarks and leptons in terms of Weyl fermions in various internal symmetry groups. Each fermion is a spin-\(\frac{1}{2}\) Weyl spinor in the \(\mathbf{2}_{L}\) representation of the spacetime symmetry group \(\text{Spin}(1,3)\). Each fermion is written as a left-handed particle \(\psi_{L}\) or a right-handed anti-particle \(\mathrm{i}\sigma_{2}\psi_{R}^{*}\).
Assuming that \(\mathbb{Z}_{2}^{F}\) is not included in the \(\mathrm{U}(1)\), the anomaly polynomial above is in general a linear combination (over \(\mathbb{Z}\), if all the charges \(q_{i},\,\tilde{q}_{j}\) are integers) of the following two terms:
\[I^{A}:=\frac{c_{1}^{3}}{6}-\frac{c_{1}p_{1}}{24},\qquad I^{B}:=c_{1}^{3}. \tag{101}\]
Those are the values of \(I_{6}\) for the charge vectors \(\tilde{q}=(1),\,q=()\) and \(\tilde{q}=(2),\,q=(1,1)\) respectively. For general charges we have:
\[I_{6}=\left(\sum_{j=1}^{n_{R}}\tilde{q}_{j}-\sum_{i=1}^{n_{L}}q_{i}\right)I^{A }+\left(\sum_{j=1}^{n_{R}}\frac{\tilde{q}_{j}^{3}-\tilde{q}_{j}}{6}-\sum_{i=1} ^{n_{L}}\frac{q_{i}^{3}-q_{i}}{6}\right)I^{B}. \tag{102}\]
Note that \((q^{3}-q)/6\in\mathbb{Z}\) for any \(q\in\mathbb{Z}\). Of course, instead of \((I^{A},I^{B})\) as above one can consider another pair related to it by a \(\mathrm{GL}(2,\mathbb{Z})\) transformation.
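As a quick consistency exercise (our addition), the passage from (12) to (102) can be verified mechanically: the coefficients of \(c_{1}^{3}\) and \(c_{1}p_{1}\) obtained directly from the charges must agree with those reassembled from the integer combination of \(I^{A}\) and \(I^{B}\). A minimal sketch, with hypothetical charge vectors:

```python
from fractions import Fraction

def coeffs_direct(qL, qR):
    """Coefficients of c1^3 and c1*p1 in I6, read off from (12)."""
    cube = Fraction(sum(q**3 for q in qR) - sum(q**3 for q in qL), 6)
    mixed = Fraction(-(sum(qR) - sum(qL)), 24)
    return cube, mixed

def coeffs_via_basis(qL, qR):
    """The same coefficients reassembled from the decomposition (102),
    I6 = A*I^A + B*I^B with integers A and B."""
    A = sum(qR) - sum(qL)
    B = sum(Fraction(q**3 - q, 6) for q in qR) \
        - sum(Fraction(q**3 - q, 6) for q in qL)
    assert B.denominator == 1           # (q^3 - q)/6 is integral on Z
    # I^A = c1^3/6 - c1*p1/24 and I^B = c1^3
    return Fraction(A, 6) + B, Fraction(-A, 24)

qL, qR = [1, 1, 3], [2, 5]              # hypothetical charge vectors
assert coeffs_direct(qL, qR) == coeffs_via_basis(qL, qR)
```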
Moreover, \(I^{A},I^{B}\) serve as the two generators of \(\mathrm{Hom}(\Omega_{6}^{\mathrm{Spin}}(B\mathrm{U}(1)),\mathbb{Z})\cong \mathbb{Z}\times\mathbb{Z}\) by considering their integrals over 6-manifolds representing the elements in the bordism group. This can be argued as follows. First, the fact that \(I^{A},I^{B}\) are integer-valued on any representative follows from the Atiyah-Singer index theorem for the twisted Dirac operator. It is then enough to check that there exists a pair of representatives in the bordism group such that the values of \((I^{A},I^{B})\) on them form a basis of \(\mathbb{Z}^{2}\). For the first such representative, \(W_{1}\), let us take the spin 6-manifold \(S^{2}\times S^{2}\times S^{2}\) with \(c_{1}=a+b+c\), where \(a,b,c\) are Poincare dual to \([\mathrm{pt}\times S^{2}\times S^{2}]\), \([S^{2}\times\mathrm{pt}\times S^{2}]\), \([S^{2}\times S^{2}\times\mathrm{pt}]\) respectively. We have \((I^{A},I^{B})(W_{1})=(1,6)\), which follows from the fact that the signature of \(S^{2}\times S^{2}\) is zero. For the second representative, \(W_{2}\), let us take the spin 6-manifold \(\mathbb{CP}^{3}\) with \(c_{1}=h\), the standard generator of \(\mathrm{H}^{2}(\mathbb{CP}^{3},\mathbb{Z})\) (the class of the Kahler form). Using the fact that \(p_{1}=4h^{2}\) in this case we obtain \((I^{A},I^{B})(W_{2})=(0,1)\).
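The two evaluations can be repeated by hand or by machine; the following fragment (our addition) recomputes \((I^{A},I^{B})\) of \(W_{1}\) and \(W_{2}\) from the stated characteristic numbers and confirms that the resulting pairs form a basis of \(\mathbb{Z}^{2}\).

```python
from fractions import Fraction

def IA_IB(c1_cubed, c1_p1):
    """(I^A, I^B) of (101) on a spin 6-manifold, given the integrals
    of c1^3 and of c1*p1 over it."""
    return Fraction(c1_cubed, 6) - Fraction(c1_p1, 24), c1_cubed

W1 = IA_IB(6, 0)   # S^2 x S^2 x S^2: integral of c1^3 is 6, p1 = 0
W2 = IA_IB(1, 4)   # CP^3: integral of h^3 is 1 and p1 = 4 h^2
assert W1 == (1, 6) and W2 == (0, 1)
assert W1[0] * W2[1] - W1[1] * W2[0] in (1, -1)  # unimodular pair
```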
If instead we have \(\mathbb{Z}_{2}^{F}\subset\mathrm{U}(1)\) (so that in particular \(q_{i},\tilde{q}_{j}\) are all necessarily odd), the general anomaly polynomial is an integral linear combination of the following two terms:
\[I^{C}:=\frac{c_{1}^{3}}{6}-\frac{c_{1}p_{1}}{24}=\frac{(2c_{1})^{3}-(2c_{1})p _{1}}{48},\qquad I^{D}:=4c_{1}^{3}=\frac{(2c_{1})^{3}}{2}. \tag{103}\]
Those are the values of \(I_{6}\) for the charge vectors \(\tilde{q}=(1),\,q=()\) and \(\tilde{q}=(3),\,q=(1,1,1)\) respectively. Note that in this case \(c_{1}\) is in general not a well-defined integer cohomology class, only \(2c_{1}\) is. This is because in general there is no globally well-defined \(\mathrm{U}(1)\) bundle, only \(\mathrm{U}(1)/\mathbb{Z}_{2}\) bundle, the first Chern class of which is \(2c_{1}\). It has to satisfy the condition \(2c_{1}=w_{2}\mod 2\) where \(w_{2}\) is the 2nd Stiefel-Whitney class of the tangent bundle.
For general charges we have:
\[I_{6}=\left(\sum_{j=1}^{n_{R}}\tilde{q}_{j}-\sum_{i=1}^{n_{L}}q_{i}\right)I^{ C}+\left(\sum_{j=1}^{n_{R}}\frac{\tilde{q}_{j}^{3}-\tilde{q}_{j}}{24}-\sum_{i=1} ^{n_{L}}\frac{q_{i}^{3}-q_{i}}{24}\right)I^{D}. \tag{104}\]
Note that \((q^{3}-q)/24\in\mathbb{Z}\) for any \(q\in 2\mathbb{Z}+1\).
Now \(I^{C},I^{D}\) serve as the two generators of \(\mathrm{Hom}(\Omega_{6}^{\mathrm{Spin}^{c}},\mathbb{Z})\cong\mathbb{Z}\times \mathbb{Z}\). As before, the fact that they are integer-valued follows from the Atiyah-Singer index theorem. Finding a pair of representatives in the bordism group such that the values of \((I^{C},I^{D})\) on them form a basis of \(\mathbb{Z}^{2}\) is a bit more involved. To construct it we will first consider the following triple of representatives. For the first, \(V_{1}\), we take \(W_{1}\) as above, but now considered as a \(\mathrm{spin}^{c}\) 6-manifold. It has \((I^{C},I^{D})(V_{1})=(1,24)\). For the second, \(V_{2}\), we take the \(\mathrm{spin}^{c}\) 6-manifold \(S^{2}\times\mathbb{CP}^{2}\) with \(2c_{1}=2a+h\), where \(a\) is the Poincare dual to \([\mathrm{pt}\times\mathbb{CP}^{2}]\) and \(h\) is the standard generator of \(\mathrm{H}^{2}(\mathbb{CP}^{2},\mathbb{Z})\). Note that \(w_{2}=h\mod 2\), so \(2c_{1}\) indeed satisfies the necessary condition. Using the fact that \(p_{1}=3h^{2}\) we get \((I^{C},I^{D})(V_{2})=(0,3)\). For the third, \(V_{3}\), we take a quartic complex hypersurface in \(\mathbb{CP}^{4}\) with \(2c_{1}=h\), the standard generator of \(\mathrm{H}^{2}(\mathbb{CP}^{4},\mathbb{Z})\) restricted to the cohomology of the quartic. This is consistent with the fact that \(w_{2}=h\mod 2\) in this case. Using also the facts that \(p_{1}=-11h^{2}\) and that the top-degree cohomology generator of the quartic is \(4h^{3}\), we obtain \((I^{C},I^{D})(V_{3})=(1,2)\). Now let us take \(U_{1}:=V_{1}\#(-V_{2})^{\#8}\) and \(U_{2}:=V_{2}\#(-V_{3})\), where \(\#\) denotes the connected sum and the minus sign denotes orientation reversal. We have \((I^{C},I^{D})(U_{1})=(1,0)\) and \((I^{C},I^{D})(U_{2})=(-1,1)\), which do form a basis of \(\mathbb{Z}^{2}\).
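Since the values of \((I^{C},I^{D})\) add under connected sum (which represents the sum in the bordism group) and flip sign under orientation reversal, the final step is pure integer arithmetic; the check below is our addition.

```python
V1, V2, V3 = (1, 24), (0, 3), (1, 2)  # (I^C, I^D) of the representatives

def csum(x, y):
    # the pairs add under connected sum / bordism addition
    return (x[0] + y[0], x[1] + y[1])

def rev(x):
    # ... and change sign under orientation reversal
    return (-x[0], -x[1])

U1 = V1
for _ in range(8):                     # U1 = V1 # (-V2)^{#8}
    U1 = csum(U1, rev(V2))
U2 = csum(V2, rev(V3))                 # U2 = V2 # (-V3)
assert U1 == (1, 0) and U2 == (-1, 1)
assert U1[0] * U2[1] - U1[1] * U2[0] in (1, -1)  # a basis of Z^2
```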
Consider now the inclusion map \(\mathbb{Z}_{4}\subset\mathrm{U}(1)\), in the case when \(\mathbb{Z}_{2}^{F}\subset\mathbb{Z}_{4}\). The anomalies of the \(\mathbb{Z}_{4}\) symmetry have a \(\mathbb{Z}_{16}=\mathrm{Hom}(\Omega_{5}^{\mathrm{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{4}},\mathrm{U}(1))\) classification. The inclusion induces the pullback map between the groups classifying the anomalies:

\[\begin{array}{rcl}\mathrm{Hom}(\Omega_{6}^{\mathrm{Spin}^{c}},\mathbb{Z})& \longrightarrow&\mathrm{Hom}(\Omega_{5}^{\mathrm{Spin}\times_{\mathbb{Z}_{2}^{F}}\mathbb{Z}_{4}},\mathrm{U}(1)),\\ \mathbb{Z}^{2}&\longrightarrow&\mathbb{Z}_{16}.\end{array} \tag{105}\]
To describe the map explicitly, we need to choose a basis for each group. For \(\mathbb{Z}^{2}\) we choose the basis (103). For \(\mathbb{Z}_{16}\) we choose the basis element to be the anomaly of a single right-moving Weyl fermion of charge \(+1\mod 4\). By considering a right-moving fermion with U(1) charge \(+1\) we then immediately conclude that
\[(1,0)\;\longmapsto\;1\mod 16. \tag{106}\]
Note that the right-moving fermion of charge \(3=-1\mod 4\) should necessarily have anomaly \(-1\mod 16\), because the map (105) is a homomorphism, and the anomaly polynomial (104) changes sign when all the fermion charges change signs. Therefore, by considering a right-moving fermion with U(1) charge \(+3\) we conclude that
\[(3,1)\;\longmapsto\;-1\mod 16. \tag{107}\]
Combining (106) and (107) we get more generally:
\[(a,b)\;\longmapsto\;a-4b\mod 16. \tag{108}\]
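As a final sanity check (our addition), the map (108) can be tested against the decomposition (104): the pair \((a,b)\) of a single right-moving fermion of odd charge \(q\) is \((q,(q^{3}-q)/24)\), and its image \(a-4b\mod 16\) should depend only on \(q\mod 4\).

```python
from fractions import Fraction

def pair(q):
    """(a, b) coordinates of a charge-q right-moving fermion in the
    basis (I^C, I^D) of (103), read off from (104)."""
    b = Fraction(q**3 - q, 24)
    assert b.denominator == 1          # integral for odd q
    return q, int(b)

def z16(a, b):
    return (a - 4 * b) % 16            # the map (108)

assert z16(*pair(1)) == 1              # charge +1: anomaly +1 mod 16
assert z16(*pair(3)) == 15             # charge +3 = -1 mod 4: anomaly -1
assert all(z16(*pair(q)) == z16(*pair(q + 4)) for q in (1, 3, 5, 7, 9))
```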
|
2307.16673 | On the canonical bundle of complex solvmanifolds and applications to
hypercomplex geometry | We study complex solvmanifolds $\Gamma\backslash G$ with holomorphically
trivial canonical bundle. We show that the trivializing section of this bundle
can be either invariant or non-invariant by the action of $G$. First we
characterize the existence of invariant trivializing sections in terms of the
Koszul 1-form $\psi$ canonically associated to $(\mathfrak{g},J)$, where
$\mathfrak{g}$ is the Lie algebra of $G$, and we use this characterization to
produce new examples of complex solvmanifolds with trivial canonical bundle.
Moreover, we provide an algebraic obstruction, also in terms of $\psi$, for a
complex solvmanifold to have trivial (or more generally holomorphically
torsion) canonical bundle. Finally, we exhibit a compact hypercomplex
solvmanifold $(M^{4n},\{J_1,J_2,J_3\})$ such that the canonical bundle of
$(M,J_{\alpha})$ is trivial only for $\alpha=1$, so that $M$ is not an
$\operatorname{SL}(n,\mathbb{H})$-manifold. | Adrián Andrada, Alejandro Tolcachier | 2023-07-31T13:48:56Z | http://arxiv.org/abs/2307.16673v3 | # On the canonical bundle of complex solvmanifolds and applications to hypercomplex geometry
###### Abstract.
In this article we study solvmanifolds \(\Gamma\backslash G\) equipped with invariant complex structures such that its canonical bundle is trivial. We show that the trivializing section of this bundle can be either invariant or non-invariant by the action of \(G\). First we characterize the existence of invariant trivializing sections in terms of a \(1\)-form \(\psi\) canonically associated to \((\mathfrak{g},J)\), where \(\mathfrak{g}\) is the Lie algebra of \(G\), and we use this characterization to recover some known results in the literature as well as to produce new examples of complex solvmanifolds with trivial canonical bundle. Later we consider the non-invariant case and we provide an algebraic obstruction, also in terms of \(\psi\), for a complex solvmanifold to have trivial (or more generally holomorphically torsion) canonical bundle. In addition, this obstruction leads us to a way of building explicit non-invariant sections which we illustrate with some examples. Finally, we apply our results to hypercomplex manifolds in order to provide a negative answer to a question posed by M. Verbitsky.
Key words and phrases: Complex solvmanifold, canonical bundle, solvable Lie group, lattice. 2020 Mathematics Subject Classification: 53C15, 32M10, 22E25, 22E40.
## 1. Introduction
Given a complex manifold \((M,J)\) with \(\dim_{\mathbb{C}}M=n\), its canonical bundle \(K_{M}\) is defined as the \(n\)-th exterior power of its holomorphic cotangent bundle, and it is a holomorphic line bundle over \(M\). This line bundle is holomorphically trivial when there exists a nowhere vanishing \((n,0)\)-form which is holomorphic (or equivalently, closed). Complex manifolds with holomorphically trivial canonical bundle are important in differential geometry and other fields. For instance, compact Kahler manifolds \(M\) with global Riemannian holonomy contained in \(\mathrm{SU}(n)\) have holomorphically trivial canonical bundle. More generally, any Calabi-Yau manifold (i.e., a compact Kahler manifold \(M\) with \(c_{1}(M)=0\) in \(H^{2}(M,\mathbb{R})\)) has holomorphically torsion canonical bundle, that is, \(K_{M}^{\otimes k}\) is trivial for some \(k\in\mathbb{N}\). In theoretical physics, complex manifolds with holomorphically trivial canonical bundle appear in the study of the Hull-Strominger system. Indeed, in dimension \(6\), the solutions of this system occur in compact complex manifolds \(M\) endowed with a special Hermitian metric (not necessarily Kahler) and with trivial \(K_{M}\). According to [45], compact complex manifolds with holomorphically torsion canonical bundle have vanishing first Bott-Chern class, \(c_{1}^{BC}=0\), and therefore they are examples of _non-Kahler Calabi-Yau_ manifolds.
A large family of compact complex manifolds with trivial canonical bundle is given by nilmanifolds \(\Gamma\backslash G\) equipped with an invariant complex structure. Indeed, it was shown in [8] that the simply connected nilpotent Lie group \(G\) admits a nonzero left invariant holomorphic \((n,0)\)-form \(\sigma\) (with \(\dim_{\mathbb{R}}G=2n\)), by using a distinguished basis of left invariant \((1,0)\)-forms provided by Salamon in [42]. Since \(\sigma\) is left invariant, it induces an invariant trivializing section of \(K_{\Gamma\backslash G}\) for any lattice \(\Gamma\subset G\).
The next natural step is to study solvmanifolds \(\Gamma\backslash G\) equipped with invariant complex structures (or complex solvmanifolds, for short). In this case, it is known that several different phenomena can occur. For instance:
The Lie group \(G\) admits the lattice \(\Gamma=\{(2\pi k,m,n,\frac{p}{2})\mid k,m,n,p\in\mathbb{Z}\}\); note that \(\tau\) is \(\Gamma\)-invariant since \(\mathrm{e}^{i(t+2\pi k)}=\mathrm{e}^{it}\). Therefore \(\tau\) induces a nowhere vanishing closed \((2,0)\)-form \(\tilde{\tau}\) on the solvmanifold \((\Gamma\backslash G,J)\) and thus this solvmanifold has trivial canonical bundle. We point out that \((\Gamma\backslash G,J)\) is a primary Kodaira surface since it can be seen that it is biholomorphic to \((\tilde{\Gamma}\backslash(\mathbb{R}\times H_{3}),\tilde{J})\), where \(\tilde{J}\) is induced by a left invariant complex structure on \(\mathbb{R}\times H_{3}\).
The previous example is the main motivation for this article. It shows that when studying the triviality of the canonical bundle of complex solvmanifolds we need to deal with the problem in two instances. First, we have to determine whether a complex solvmanifold admits an invariant trivializing section, and if this is not the case, then we must look for non-invariant trivializing sections.
Since a trivializing section of the canonical bundle of \((\Gamma\backslash G,J)\) gives rise, via pullback, to a trivializing section of the canonical bundle of \((G,J)\), we should work at the Lie group level. Conversely, if \((G,J)\) admits a left invariant trivializing section of \(K_{G}\), then for any lattice \(\Gamma\subset G\) the complex solvmanifold \((\Gamma\backslash G,J)\) has trivial canonical bundle since the trivializing section descends to the quotient. On the other hand, if \((G,J)\) admits a non-invariant trivializing section, then we need to determine whether the trivializing section is invariant by the action of \(\Gamma\); if this is the case the section induces a trivializing section on the quotient and hence \(K_{\Gamma\backslash G}\) is trivial. Our main interest is to study complex solvmanifolds but whenever possible we prove results for compact quotients of general simply connected Lie groups equipped with left invariant complex structures.
First, in §3, given a (not necessarily solvable) Lie group \(G\) endowed with a left invariant complex structure \(J\), we tackle the problem of the existence of an invariant trivializing section of \(K_{G}\). We show in Theorem 3.1 that the existence of such a form is equivalent to the vanishing of the 1-form \(\psi\) on the Lie algebra \(\mathfrak{g}=\mathrm{Lie}(G)\) naturally defined by \(\psi(x)=\mathrm{Tr}(J\operatorname{ad}x)-\mathrm{Tr}\operatorname{ad}(Jx)\) for \(x\in\mathfrak{g}\). This algebraic characterization allows us to recover some known results in the literature (for instance, which almost abelian complex solvmanifolds have trivial canonical bundle [15]), as well as to obtain new results: the triviality of the canonical bundle of any complex solvmanifold equipped with an abelian complex structure (Corollary 3.7) and conditions which ensure that the canonical bundle of a certain family of almost nilpotent complex solvmanifolds (considered in [16]) is trivial (Propositions 3.11 and 3.14). Concerning the case of holomorphically torsion canonical bundle, we show that if \((\Gamma\backslash G,J)\) admits an invariant trivializing section of some power of \(K_{\Gamma\backslash G}\) then \(K_{\Gamma\backslash G}\) itself admits an invariant trivializing section.
Next we move on to the non-invariant setting. In §4 we begin by proving that two trivializing sections of the canonical bundle of a Lie group \(G\) equipped with a left invariant complex structure differ by a nowhere vanishing holomorphic function on \(G\) (Lemma 4.1). This implies that the trivializing sections of the canonical bundle of a compact complex manifold \(\Gamma\backslash G\) are either all invariant or all non invariant (Corollary 4.2). Then we restrict to the solvable case and we show that any simply connected solvable Lie group \(G\) equipped with a left invariant complex structure \(J\) has trivial canonical bundle, probably via a non-invariant trivializing section (see Theorem 4.6). However, if \(G\) has a lattice \(\Gamma\), this trivializing section is not necessarily invariant by the action of \(\Gamma\), and it is difficult to determine whether there exists another one which is invariant by \(\Gamma\).
In §5, our goal is to give new examples of complex solvmanifolds with trivial canonical bundle, via a non-invariant section. In order to narrow this search of examples, we first provide an algebraic obstruction in terms of the 1-form \(\psi\), which holds for any Lie group \(G\) with a left invariant complex structure. Indeed, exploiting the relation of \(\psi\) with the Chern-Ricci form of any invariant Hermitian metric and using Belgun's symmetrization process we show in Theorem 5.2 that if a compact complex manifold \(\Gamma\backslash G\) has trivial (or more generally holomorphically torsion) canonical bundle then \(\psi\) vanishes on the commutator ideal \([\mathfrak{g},\mathfrak{g}]\) where \(\mathfrak{g}=\mathrm{Lie}(G)\). We use this condition to reobtain the known fact that compact semisimple Lie groups with a left
invariant complex structure do not have holomorphically torsion canonical bundle (Proposition 5.8). This obstruction also provides us with a helpful insight to find an explicit trivializing section of the canonical bundle of some complex solvmanifolds (see Proposition 5.10). We apply this construction in order to exhibit some examples, one of them on the complex parallelizable Nakamura manifold.
In the last section we consider a Lie group \(G\) equipped with a left invariant hypercomplex structure \(\{J_{1},J_{2},J_{3}\}\) and we study the triviality of the canonical bundle of the complex manifolds \((G,J_{\alpha})\), \(\alpha=1,2,3\). Recall that a hypercomplex structure on a manifold \(M\) is a triple of complex structures \(\{J_{1},J_{2},J_{3}\}\) satisfying the laws of the quaternions \(J_{3}=J_{1}J_{2}=-J_{2}J_{1}\); if \(M\) admits such a structure then the dimension of \(M\) is a multiple of \(4\). First we prove in Theorem 6.1 that if \(\{J_{1},J_{2},J_{3}\}\) is a left invariant hypercomplex structure on \(G\) and if \((G,J_{\alpha})\) admits a left invariant trivializing section of its canonical bundle for some \(\alpha=1,2,3\), then the canonical bundle of \((G,J_{\beta})\) is trivial for all \(\beta=1,2,3\), and the same happens for any associated compact quotient \(\Gamma\backslash G\) with the induced hypercomplex structure. Next we show that this does not necessarily hold for hypercomplex solvmanifolds if the trivializing section of \((\Gamma\backslash G,J_{\alpha})\) is not invariant. Indeed, in Example 6.3 we exhibit \(8\)-dimensional hypercomplex solvmanifolds \((\Gamma\backslash G,\{J_{1},J_{2},J_{3}\})\) such that \((\Gamma\backslash G,J_{1})\) has trivial canonical bundle but the canonical bundles of \((\Gamma\backslash G,J_{2})\) and \((\Gamma\backslash G,J_{3})\) are both non trivial. This example also provides a negative answer to a question raised by Verbitsky in [46].
**Acknowledgments.** The authors are grateful to Daniele Angella for his useful comments and suggestions. This work was partially supported by CONICET, SECyT-UNC and FONCyT (Argentina).
## 2. Preliminaries
An almost complex structure on a differentiable manifold \(M\) is an automorphism \(J\) of the tangent bundle \(TM\) satisfying \(J^{2}=-\operatorname{I}\), where \(\operatorname{I}\) is the identity endomorphism of \(TM\). Note that the existence of an almost complex structure on \(M\) forces the dimension of \(M\) to be even, say \(\dim_{\mathbb{R}}M=2n\). The almost complex structure \(J\) is called _integrable_ when it satisfies the condition \(N_{J}\equiv 0\), where \(N_{J}\) is the Nijenhuis tensor given by:
\[N_{J}(X,Y)=[X,Y]+J([JX,Y]+[X,JY])-[JX,JY], \tag{1}\]
for \(X,Y\) vector fields on \(M\). An integrable almost complex structure is called simply a complex structure on \(M\). According to the well-known Newlander-Nirenberg theorem, a complex structure on \(M\) is equivalent to the existence of a holomorphic atlas on \(M\), so that \((M,J)\) can be considered as a complex manifold of complex dimension \(n\).
If \((M,J)\) is a complex manifold with \(\dim_{\mathbb{C}}M=n\) its canonical bundle is defined as
\[K_{M}=\bigwedge^{n}\mathcal{T}_{M}^{*},\]
where \(\mathcal{T}_{M}^{*}\) is the holomorphic cotangent bundle of \(M\). This is a holomorphic line bundle on \(M\), and it is holomorphically trivial if and only if there exists a nowhere vanishing holomorphic \((n,0)\)-form defined on \(M\). In this article, by trivial canonical bundle we will always mean holomorphically trivial canonical bundle.
Note that if \(\sigma\) is a \((n,0)\)-form on \(M\) then \(\sigma\) is holomorphic if and only if it is closed, since \(d\sigma=\partial\sigma+\overline{\partial}\sigma\) and \(\partial\sigma\) is a \((n+1,0)\)-form, thus \(\partial\sigma=0\).
We observe first that the existence of a trivializing section of the canonical bundle has some topological consequences on a compact complex manifold.
**Proposition 2.1**.: _Let \((M,J)\) be a compact complex manifold with trivial canonical bundle and \(\dim_{\mathbb{R}}M=2n\). Then the \(n\)-th Betti number \(b_{n}(M)\) satisfies \(b_{n}(M)\geq 2\)._
Proof.: We follow the lines of [14, Proposition 2.5]. Let \(\tau\) denote a nowhere vanishing holomorphic \((n,0)\)-form on \(M\); therefore \(\tau\wedge\bar{\tau}\) is a nonzero multiple of a real volume form on \(M\). Let us decompose \(\tau\) into its real and imaginary parts, \(\tau=\tau_{1}+i\tau_{2}\). Since \(\tau\) is closed, we have that \(d\tau_{1}=0=d\tau_{2}\). Therefore, they define de Rham cohomology classes \([\tau_{1}],[\tau_{2}]\in H^{n}_{dR}(M,\mathbb{R})\).
Let us show that these two classes are linearly independent. If we assume otherwise then there exist \(a,b\in\mathbb{R}\) with \(a^{2}+b^{2}\neq 0\) such that \(a\tau_{1}+b\tau_{2}=d\eta\) for some \((n-1)\)-form \(\eta\).
We divide the analysis into two cases, according to the parity of \(n\).
(i) Case \(n\) odd: in this case we have
\[0\neq\tau\wedge\bar{\tau}=(\tau_{1}\wedge\tau_{1}+\tau_{2}\wedge\tau_{2})+i(- \tau_{1}\wedge\tau_{2}+\tau_{2}\wedge\tau_{1})=-2i(\tau_{1}\wedge\tau_{2}).\]
We compute next
\[d(\eta\wedge(-b\tau_{1}+a\tau_{2}))=(a\tau_{1}+b\tau_{2})\wedge(-b\tau_{1}+a \tau_{2})=(a^{2}+b^{2})(\tau_{1}\wedge\tau_{2}).\]
Integrating over \(M\) we obtain, due to Stokes' theorem, \(0=(a^{2}+b^{2})\int_{M}\tau_{1}\wedge\tau_{2}\), which is a contradiction.
(ii) Case \(n\) even: in this case we have
\[0\neq\tau\wedge\bar{\tau}=(\tau_{1}\wedge\tau_{1}+\tau_{2}\wedge\tau_{2})+i(- \tau_{1}\wedge\tau_{2}+\tau_{2}\wedge\tau_{1})=\tau_{1}\wedge\tau_{1}+\tau_{2 }\wedge\tau_{2}.\]
It follows from \(0=\tau\wedge\tau\) that \(\tau_{1}\wedge\tau_{1}=\tau_{2}\wedge\tau_{2}\) and \(\tau_{1}\wedge\tau_{2}=0\). In particular, \(0\neq\tau\wedge\bar{\tau}=2\tau_{1}\wedge\tau_{1}\). We compute next
\[d(\eta\wedge(a\tau_{1}+b\tau_{2}))=(a\tau_{1}+b\tau_{2})\wedge(a\tau_{1}+b \tau_{2})=(a^{2}+b^{2})\tau_{1}\wedge\tau_{1}.\]
Again, integrating over \(M\) we obtain a contradiction.
Therefore we obtain that \(b_{n}(M)\geq 2\).
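As an illustrative aside (not part of the proof), the wedge-product identities used above can be confirmed mechanically for \(n=2\) on constant-coefficient forms; the small exterior-algebra sketch below is our own and all names in it are ad hoc.

```python
def sgn(seq):
    """Sign of the permutation sorting a tuple of distinct indices."""
    s = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def wedge(a, b):
    """Wedge product of constant-coefficient forms on R^4, encoded as
    {sorted tuple of covector indices: coefficient}."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) < len(idx):
                continue               # repeated covector: term vanishes
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0) + sgn(idx) * ca * cb
    return {k: v for k, v in out.items() if v != 0}

# tau = (e0 + i e1) ^ (e2 + i e3), a (2,0)-form for the standard J
tau = wedge({(0,): 1 + 0j, (1,): 1j}, {(2,): 1 + 0j, (3,): 1j})
tau1 = {k: v.real for k, v in tau.items()}        # tau = tau1 + i tau2
tau2 = {k: v.imag for k, v in tau.items()}
taubar = {k: v.conjugate() for k, v in tau.items()}

assert wedge(tau, tau) == {}                      # tau ^ tau = 0
assert wedge(tau1, tau2) == {}                    # tau1 ^ tau2 = 0
assert wedge(tau1, tau1) == wedge(tau2, tau2)     # tau1^2 = tau2^2
double = {k: 2 * v for k, v in wedge(tau1, tau1).items()}
assert wedge(tau, taubar) == double != {}         # tau ^ taubar = 2 tau1^2
```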
Other important holomorphic line bundles over the complex manifold \((M,J)\) are given by the tensor powers of the canonical bundle:
\[K^{\otimes k}_{M}=K_{M}\otimes\cdots\otimes K_{M}\quad(\text{$k$ times}).\]
Following [45], we will say that a complex manifold \((M,J)\) is _holomorphically torsion_ if \(K^{\otimes k}_{M}\) is holomorphically trivial for some \(k\geq 1\). The triviality of this holomorphic bundle can be understood as follows.
For any complex manifold \(M\) the Dolbeault operator \(\bar{\partial}\) can be extended to a differential operator \(\bar{\partial}_{k}:\Gamma(K^{\otimes k}_{M})\to\Gamma((T^{*}M)^{0,1}\otimes K ^{\otimes k}_{M})\), where \(\Gamma(\cdot)\) denotes the space of smooth sections. Indeed, since \(\bigwedge^{n,1}(M)\cong(T^{*}M)^{0,1}\otimes K_{M}\) we define recursively: \(\bar{\partial}_{1}=\bar{\partial}\) and for \(k\geq 2\),
\[\bar{\partial}_{k}(\sigma\otimes s)=\bar{\partial}\sigma\otimes s+\sigma \otimes\bar{\partial}_{k-1}s,\]
where \(\sigma\in\Gamma(K_{M})\) and \(s\in\Gamma(K^{\otimes k-1}_{M})\). This differential operator satisfies the Leibniz rule \(\bar{\partial}_{k}(fs)=\bar{\partial}f\otimes s+f\bar{\partial}_{k}s\) for any \(f\in C^{\infty}(M,\mathbb{C})\) and \(s\in\Gamma(K^{\otimes k}_{M})\).
The holomorphic bundle \(K^{\otimes k}_{M}\) is trivial if and only if there exists a nowhere vanishing section \(s\in\Gamma(K^{\otimes k}_{M})\) such that \(\bar{\partial}_{k}s=0\).
A Hermitian structure on a smooth manifold \(M\) is a pair \((J,g)\) of a complex structure \(J\) and a Riemannian metric \(g\) compatible with \(J\), that is, \(g(JX,JY)=g(X,Y)\) for all vector fields \(X,Y\) on \(M\), or equivalently, \(g(JX,Y)=-g(X,JY)\). We point out that any complex manifold \((M,J)\) admits a Riemannian metric \(g\) such that \((J,g)\) is a Hermitian structure on \(M\).
### Solvmanifolds
A discrete subgroup \(\Gamma\) of a Lie group \(G\) is called a _lattice_ if the quotient \(\Gamma\backslash G\) has finite volume. According to [32], if such a lattice exists then the Lie group must be unimodular, that is, it carries a bi-invariant Haar measure. This is equivalent, when \(G\) is connected, to \(\operatorname{Tr}(\operatorname{ad}x)=0\) for all \(x\in\mathfrak{g}=\operatorname{Lie}(G)\) (in this case, \(\mathfrak{g}\) is called unimodular as well). When \(\Gamma\backslash G\) is compact the lattice \(\Gamma\) is said to be uniform. It is well known that when \(G\) is solvable then any lattice is uniform [40, Theorem 3.1].
Assume that \(G\) is simply connected and \(\Gamma\) is a uniform lattice in \(G\). The quotient \(\Gamma\backslash G\) is called a solvmanifold if \(G\) is solvable and a nilmanifold if \(G\) is nilpotent, and it follows that \(\pi_{1}(\Gamma\backslash G)\cong\Gamma\) and \(\pi_{n}(\Gamma\backslash G)=0\) for \(n>1\). Furthermore, the diffeomorphism class of solvmanifolds is determined by the isomorphism class of the corresponding lattices, as the following result shows:
**Theorem 2.2**.: _[_34_]_ _If \(\Gamma_{1}\) and \(\Gamma_{2}\) are lattices in simply connected solvable Lie groups \(G_{1}\) and \(G_{2}\), respectively, and \(\Gamma_{1}\) is isomorphic to \(\Gamma_{2}\), then \(\Gamma_{1}\backslash G_{1}\) is diffeomorphic to \(\Gamma_{2}\backslash G_{2}\)._
The previous result can be strengthened when both solvable Lie groups \(G_{1}\) and \(G_{2}\) are completely solvable1, according to Saito's rigidity theorem:
Footnote 1: A solvable Lie group \(G\) is completely solvable if the adjoint operators \(\operatorname{ad}x:\mathfrak{g}\to\mathfrak{g}\), with \(x\in\mathfrak{g}=\operatorname{Lie}(G)\), have only real eigenvalues. In particular, nilpotent Lie groups are completely solvable.
**Theorem 2.3**.: _[_41_]_ _Let \(G_{1}\) and \(G_{2}\) be simply connected completely solvable Lie groups and \(\Gamma_{1}\subset G_{1},\,\Gamma_{2}\subset G_{2}\) lattices. Then every isomorphism \(f:\Gamma_{1}\to\Gamma_{2}\) extends uniquely to a Lie group isomorphism \(F:G_{1}\to G_{2}\)._
We point out that in any fixed dimension only countably many non-isomorphic simply connected Lie groups admit lattices, according to [33] (for the solvable case) and [49] (for the general case).
Let \(G\) be a simply connected solvable Lie group, and \(N\) the nilradical of \(G\) (i.e., the connected closed Lie subgroup of \(G\) whose Lie algebra is the nilradical \(\mathfrak{n}\) of \(\mathfrak{g}\)). Moreover, \([G,G]\) is the connected closed Lie subgroup with Lie algebra \([\mathfrak{g},\mathfrak{g}]\). As \(G\) is solvable, \([G,G]\subset N\) so \(G/N\) is abelian, and from the long exact sequence of homotopy groups associated to the fibration \(N\to G\to G/N\) it follows that \(G/N\) is simply connected. Therefore \(G/N\cong\mathbb{R}^{k}\) for some \(k\in\mathbb{N}\) and \(G\) satisfies the short exact sequence
\[1\to N\to G\to\mathbb{R}^{k}\to 1.\]
\(G\) is called _splittable_ if this sequence splits, that is, there is a right inverse homomorphism of the projection \(G\to\mathbb{R}^{k}\). This condition is equivalent to the existence of a homomorphism \(\phi:\mathbb{R}^{k}\to\operatorname{Aut}(N)\) such that \(G\) is isomorphic to the semidirect product \(\mathbb{R}^{k}\ltimes_{\phi}N\).
Following [50], a lattice \(\Gamma\) of a splittable solvable Lie group \(\mathbb{R}^{k}\ltimes_{\phi}N\) will be called _splittable_ if it can be written as \(\Gamma=\Gamma_{1}\ltimes_{\phi}\Gamma_{2}\) where \(\Gamma_{1}\subset\mathbb{R}^{k}\) and \(\Gamma_{2}\subset N\) are lattices of \(\mathbb{R}^{k}\) and \(N\) respectively. Also in [50] there is a criterion to determine the existence of splittable lattices in splittable solvable simply connected Lie groups.
**Theorem 2.4**.: _[_50_]_ _Let \(G=\mathbb{R}^{k}\ltimes_{\phi}N\) be a simply connected splittable solvable Lie group, where \(N\) is the nilradical of \(G\). If there exist a rational basis \(\mathcal{B}=\{X_{1},\ldots,X_{n}\}\) of \(\mathfrak{n}\) and a basis \(\{t_{1},\ldots,t_{k}\}\) of \(\mathbb{R}^{k}\) such that the coordinate matrix of \(d(\phi(t_{j}))_{1_{N}}\) in the basis \(\mathcal{B}\) is an integer unimodular matrix for all \(1\leq j\leq k\) then \(G\) has a splittable lattice of the form \(\Gamma=\text{span}_{\mathbb{Z}}\{t_{1},\ldots,t_{k}\}\ltimes_{\phi}\exp^{N}( \text{span}_{\mathbb{Z}}\{X_{1},\ldots,X_{n}\})\)._
When \(k=1\) the simply connected solvable splittable Lie group \(G=\mathbb{R}\ltimes_{\phi}N\) is called _almost nilpotent_. In this case, every lattice is splittable due to [10]. If \(N\) is abelian, i.e. \(N=\mathbb{R}^{n}\), then \(G\) is called _almost abelian_.
In the examples in the forthcoming sections, we will begin with a Lie algebra \(\mathfrak{g}=\mathbb{R}^{k}\ltimes_{\varphi}\mathfrak{n}\). In order to apply Theorem 2.4 we need to determine the associated simply connected Lie group \(G\). Let \(N\) denote the simply connected nilpotent Lie group with Lie algebra \(\mathfrak{n}\). Since \(\exp:\mathfrak{n}\to N\) is a diffeomorphism, we may assume that the underlying manifold of \(N\) is \(\mathfrak{n}\) itself with the group law \(x\cdot y=Z(x,y)\), where \(Z(x,y)\) is the polynomial map given by the Baker-Campbell-Hausdorff formula: \(\exp(x)\exp(y)=\exp(Z(x,y))\). Therefore, with this assumption, we have that \(\exp:\mathfrak{n}\to N\) is simply the identity map on \(\mathfrak{n}\) and moreover, \(\operatorname{Aut}(\mathfrak{n})=\operatorname{Aut}(N)\).
Let \(\{t_{1},\dots,t_{k}\}\) be a basis of \(\mathbb{R}^{k}\) and denote \(B_{j}=\varphi(t_{j})\in\operatorname{Der}(\mathfrak{n})\). Then, \(\exp(B_{j})\in\operatorname{Aut}(N)\) and using [10, Theorem 4.2] we have that \(G=\mathbb{R}^{k}\ltimes_{\phi}N\), where \(\phi:\mathbb{R}^{k}\to\operatorname{Aut}(N)\) is the Lie group homomorphism given by
\[\phi\left(\sum_{j=1}^{k}x_{j}t_{j}\right)=\exp(x_{1}B_{1}+\dots+x_{k}B_{k})= \exp(x_{1}B_{1})\exp(x_{2}B_{2})\cdots\exp(x_{k}B_{k}).\]
Here \(\exp\) denotes the matrix exponential after identification of \(\mathfrak{n}\cong\mathbb{R}^{\dim\mathfrak{n}}\) choosing a basis of \(\mathfrak{n}\).
Note that, in the notation of Theorem 2.4, we have that \(d(\phi(t_{j}))_{1_{N}}=\exp(B_{j})=\exp(\varphi(t_{j}))\). Hence, in order to find lattices we need a basis \(\{t_{1},\dots,t_{k}\}\) such that \([\exp(\varphi(t_{j}))]_{\mathcal{B}}\) is an integer unimodular matrix in the rational basis \(\mathcal{B}\) of \(\mathfrak{n}\), for all \(1\leq j\leq k\).
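To illustrate the criterion in the simplest almost abelian case \(\mathfrak{n}=\mathbb{R}^{2}\) (our addition; the matrix and all names below are a hypothetical example), one can run the construction backwards: start from an integer unimodular matrix \(A\) and take the derivation \(B=\log A\), so that \([\exp(\varphi(t_{1}))]_{\mathcal{B}}=A\) for \(t_{1}=1\) and Theorem 2.4 applies.

```python
import numpy as np
from scipy.linalg import expm, logm

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])             # hyperbolic element of SL(2, Z)
assert round(np.linalg.det(A)) == 1    # integer and unimodular

B = logm(A)                            # derivation with varphi(t1) = B
E = expm(B)                            # d(phi(t1))_{1_N} of Theorem 2.4
assert np.allclose(E, A)               # integral unimodular, so a
                                       # splittable lattice Z x_phi Z^2 exists
```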
We move on now to consider invariant geometric structures on solvmanifolds.
Let \(G\) be a connected Lie group with Lie algebra \(\mathfrak{g}\). A complex structure \(J\) on \(G\) is said to be left invariant if left translations by elements of \(G\) are holomorphic maps. In this case \(J\) is determined by the value at the identity of \(G\). Thus, a left invariant complex structure on \(G\) amounts to a complex structure on its Lie algebra \(\mathfrak{g}\), that is, a real linear transformation \(J\) of \(\mathfrak{g}\) satisfying \(J^{2}=-\operatorname{I}\) and \(N_{J}(x,y)=0\) for all \(x,y\) in \(\mathfrak{g}\), with \(N_{J}\) defined as in (1). A Riemannian metric \(g\) on \(G\) is called left invariant when left translations are isometries. Such a metric \(g\) is determined by its value \(g_{e}=\langle\cdot\,,\cdot\,\rangle\) at the identity \(e\) of \(G\), that is, \(\langle\cdot\,,\cdot\,\rangle\) is a positive definite inner product on \(T_{e}G=\mathfrak{g}\).
A Hermitian structure \((J,g)\) on \(G\) is called left invariant when both \(J\) and \(g\) are left invariant. Given a left invariant Hermitian structure \((J,g)\) on \(G\), let \(J\) and \(\langle\cdot\,,\cdot\,\rangle\) denote the corresponding complex structure and Hermitian inner product on \(\mathfrak{g}\). We say that \((J,\langle\cdot\,,\cdot\,\rangle)\) is a Hermitian structure on \(\mathfrak{g}\).
We observe that left invariant geometric structures defined on \(G\) induce corresponding geometric structures on \(\Gamma\backslash G\), with \(\Gamma\) a lattice in \(G\), which are called _invariant_. For instance, a left invariant complex structure (respectively, Riemannian metric) on \(G\) induces a unique complex structure (respectively, Riemannian metric) on \(\Gamma\backslash G\) such that the canonical projection \(G\to\Gamma\backslash G\) is a local biholomorphism (respectively, local isometry). In this article, a solvmanifold equipped with an invariant complex structure will be called simply a _complex solvmanifold_.
## 3. Complex solvmanifolds with invariantly trivial canonical bundle
In this section we deal with the existence of nowhere vanishing left invariant closed \((n,0)\)-forms on \(2n\)-dimensional Lie groups equipped with a left invariant complex structure; equivalently, we study the existence of nonzero closed \((n,0)\)-forms on the corresponding Lie algebras. First we characterize the existence of such a form in algebraic terms. Then we use this characterization to recover known results on this topic in the literature and to produce new examples.
We begin by proving the main result of this section where we characterize \(2n\)-dimensional Lie algebras equipped with complex structures admitting nonzero closed \((n,0)\)-forms. In order
to do so, we introduce\({}^{2}\) a 1-form \(\psi\in\mathfrak{g}^{*}\) (compare with \(\theta^{1}\) in [46, Proposition 4.1]) which will be called the _canonical 1-form_ and will play a crucial role throughout the article:
Footnote 2: A similarly defined 1-form appeared in [28] (see also [20]) when studying invariant homogeneous complex structures on homogeneous spaces \(G/H\).
\[\psi(x)=\operatorname{Tr}(J\operatorname{ad}x)-\operatorname{Tr} \operatorname{ad}(Jx),\quad x\in\mathfrak{g}. \tag{2}\]
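Since \(\psi\) involves only traces of compositions of \(J\) with adjoint maps, it can be evaluated directly from structure constants. The following sketch (our own, with numpy; the function name is hypothetical) implements (2) and evaluates it on \(\operatorname{aff}(\mathbb{R})\), where \([e_{1},e_{2}]=e_{2}\) and \(Je_{1}=e_{2}\):

```python
import numpy as np

def psi(ad_basis, J, x):
    """Canonical 1-form (2): psi(x) = Tr(J ad x) - Tr ad(Jx).
    ad_basis[i] is the matrix of ad(e_i) in a fixed basis of g; J is the
    matrix of the almost complex structure; x is a coordinate vector."""
    ad = lambda v: sum(vi * Ai for vi, Ai in zip(v, ad_basis))
    return np.trace(J @ ad(x)) - np.trace(ad(J @ x))

# aff(R): [e1, e2] = e2, with J e1 = e2 (hence J e2 = -e1):
ad_basis = [np.array([[0., 0.], [0., 1.]]),    # ad e1
            np.array([[0., 0.], [-1., 0.]])]   # ad e2
J = np.array([[0., -1.], [1., 0.]])
print(psi(ad_basis, J, np.array([1., 0.])))    # psi(e1) = 0
print(psi(ad_basis, J, np.array([0., 1.])))    # psi(e2) = 2: no nonzero closed
                                               # (1,0)-form (cf. Theorem 3.1 below)
```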
**Theorem 3.1**.: _Let \(\mathfrak{g}\) be a \(2n\)-dimensional Lie algebra with an almost complex structure \(J\). Let \(\sigma\in\bigwedge^{n,0}\mathfrak{g}^{*}\) be a nonzero \((n,0)\)-form on \(\mathfrak{g}\). Then \(d\sigma=0\) if and only if \(J\) is integrable and \(\psi\equiv 0\)._
Proof.: Let \(\{u_{1},\dots,u_{n},v_{1},\dots,v_{n}\}\) be a \(J\)-adapted basis of \(\mathfrak{g}\), that is, \(Ju_{j}=v_{j}\) for all \(j\). Since \(\dim_{\mathbb{C}}\bigwedge^{n,0}\mathfrak{g}^{*}=1\), we may assume that \(\sigma=(u^{1}+iv^{1})\wedge\dots\wedge(u^{n}+iv^{n})\).
The Lie brackets of \(\mathfrak{g}\) can be written in terms of the basis above by
\[[u_{j},u_{k}]=\sum_{\ell=1}^{n}a_{jk}^{\ell}u_{\ell}+\sum_{\ell=1}^{n}b_{jk}^ {\ell}v_{\ell},\quad[u_{j},v_{k}]=\sum_{\ell=1}^{n}c_{jk}^{\ell}u_{\ell}+\sum _{\ell=1}^{n}d_{jk}^{\ell}v_{\ell},\quad[v_{j},v_{k}]=\sum_{\ell=1}^{n}e_{jk}^ {\ell}u_{\ell}+\sum_{\ell=1}^{n}f_{jk}^{\ell}v_{\ell},\]
with \(a_{kj}^{\ell}=-a_{jk}^{\ell}\), \(b_{kj}^{\ell}=-b_{jk}^{\ell}\), \(e_{kj}^{\ell}=-e_{jk}^{\ell}\) and \(f_{kj}^{\ell}=-f_{jk}^{\ell}\). Accordingly, for \(1\leq\ell\leq n\) we have
\[du^{\ell} =-\sum_{j,k=1}^{n}\left(\frac{1}{2}a_{jk}^{\ell}\,u^{jk}+c_{jk}^{ \ell}\,u^{j}\wedge v^{k}+\frac{1}{2}e_{jk}^{\ell}\,v^{jk}\right),\] \[dv^{\ell} =-\sum_{j,k=1}^{n}\left(\frac{1}{2}b_{jk}^{\ell}\,u^{jk}+d_{jk}^{ \ell}\,u^{j}\wedge v^{k}+\frac{1}{2}f_{jk}^{\ell}\,v^{jk}\right).\]
Let us set \(\gamma_{j}:=u^{j}+iv^{j}\) for all \(j\), so that \(\sigma=\gamma_{1}\wedge\dots\wedge\gamma_{n}\). Next we compute \(d\gamma_{\ell}\) in terms of \(\gamma_{j}\) and \(\bar{\gamma_{j}}\). First we note that \(2u^{j}=(\gamma_{j}+\bar{\gamma_{j}})\) and \(2v^{j}=-i(\gamma_{j}-\bar{\gamma_{j}})\) imply that
\[4u^{jk}=(\gamma_{j}+\bar{\gamma_{j}})\wedge(\gamma_{k}+\bar{\gamma_{k}}) =\gamma_{jk}+\gamma_{\bar{j}k}+\gamma_{j\bar{k}}+\gamma_{\bar{j}\,\bar{k}},\] \[4u^{j}\wedge v^{k}=-i(\gamma_{j}+\bar{\gamma_{j}})\wedge(\gamma_{k}-\bar{\gamma_{k}}) =-i(\gamma_{jk}+\gamma_{\bar{j}k}-\gamma_{j\bar{k}}-\gamma_{\bar{j}\,\bar{k}}),\] \[4v^{jk}=-(\gamma_{j}-\bar{\gamma_{j}})\wedge(\gamma_{k}-\bar{\gamma_{k}}) =-(\gamma_{jk}-\gamma_{\bar{j}k}-\gamma_{j\bar{k}}+\gamma_{\bar{j}\,\bar{k}}).\]
Using these identities it follows that
\[d\gamma_{\ell} =-\sum_{j,k=1}^{n}\left(\frac{1}{2}(a_{jk}^{\ell}+ib_{jk}^{\ell})u^{jk}+(c_{jk}^{\ell}+id_{jk}^{\ell})u^{j}\wedge v^{k}+\frac{1}{2}(e_{jk}^{\ell}+if_{jk}^{\ell})v^{jk}\right)\] \[=-\frac{1}{4}\sum_{j,k=1}^{n}\left(\left(\frac{1}{2}a_{jk}^{\ell}+d_{jk}^{\ell}-\frac{1}{2}e_{jk}^{\ell}+i\left(\frac{1}{2}b_{jk}^{\ell}-c_{jk}^{\ell}-\frac{1}{2}f_{jk}^{\ell}\right)\right)\gamma_{jk}\right.\] \[\quad+\left(\frac{1}{2}a_{jk}^{\ell}+d_{jk}^{\ell}+\frac{1}{2}e_{jk}^{\ell}+i\left(\frac{1}{2}b_{jk}^{\ell}-c_{jk}^{\ell}+\frac{1}{2}f_{jk}^{\ell}\right)\right)\gamma_{\bar{j}k}\] \[\quad+\left(\frac{1}{2}a_{jk}^{\ell}-d_{jk}^{\ell}+\frac{1}{2}e_{jk}^{\ell}+i\left(\frac{1}{2}b_{jk}^{\ell}+c_{jk}^{\ell}+\frac{1}{2}f_{jk}^{\ell}\right)\right)\gamma_{j\bar{k}}\] \[\quad+\left.\left(\frac{1}{2}a_{jk}^{\ell}-d_{jk}^{\ell}-\frac{1}{2}e_{jk}^{\ell}+i\left(\frac{1}{2}b_{jk}^{\ell}+c_{jk}^{\ell}-\frac{1}{2}f_{jk}^{\ell}\right)\right)\gamma_{\bar{j}\,\bar{k}}\right) \tag{3}\]
We use the previous expression in order to compute \(d\sigma\), where we use the unconventional but shorter notation \(\dfrac{\sigma}{\gamma_{\ell}}=\gamma_{1}\wedge\cdots\wedge\gamma_{\ell-1}\wedge\gamma_{\ell+1}\wedge\cdots\wedge\gamma_{n}\):
\[d\sigma =\sum_{\ell=1}^{n}(-1)^{\ell+1}\,\gamma_{1}\wedge\cdots\wedge\gamma_{\ell-1}\wedge d\gamma_{\ell}\wedge\gamma_{\ell+1}\wedge\cdots\wedge\gamma_{n}\] \[=-\dfrac{1}{4}\sum_{j,\ell}\left(\dfrac{1}{2}a_{j\ell}^{\ell}+d_{j\ell}^{\ell}+\dfrac{1}{2}e_{j\ell}^{\ell}+i\left(\dfrac{1}{2}b_{j\ell}^{\ell}-c_{j\ell}^{\ell}+\dfrac{1}{2}f_{j\ell}^{\ell}\right)\right)\bar{\gamma}_{j}\wedge\sigma\] \[\quad+\dfrac{1}{4}\sum_{k,\ell}\left(\dfrac{1}{2}a_{\ell k}^{\ell}-d_{\ell k}^{\ell}+\dfrac{1}{2}e_{\ell k}^{\ell}+i\left(\dfrac{1}{2}b_{\ell k}^{\ell}+c_{\ell k}^{\ell}+\dfrac{1}{2}f_{\ell k}^{\ell}\right)\right)\bar{\gamma}_{k}\wedge\sigma\] \[\quad+\dfrac{1}{4}\sum_{j,k,\ell=1}^{n}(-1)^{\ell}\left(\dfrac{1}{2}a_{jk}^{\ell}-d_{jk}^{\ell}-\dfrac{1}{2}e_{jk}^{\ell}+i\left(\dfrac{1}{2}b_{jk}^{\ell}+c_{jk}^{\ell}-\dfrac{1}{2}f_{jk}^{\ell}\right)\right)\bar{\gamma}_{j}\wedge\bar{\gamma}_{k}\wedge\dfrac{\sigma}{\gamma_{\ell}}\] \[=\dfrac{1}{4}\sum_{j,\ell}\left((a_{\ell j}^{\ell}-(d_{j\ell}^{\ell}+d_{\ell j}^{\ell})+e_{\ell j}^{\ell})+i(b_{\ell j}^{\ell}+(c_{\ell j}^{\ell}+c_{j\ell}^{\ell})+f_{\ell j}^{\ell})\right)\bar{\gamma}_{j}\wedge\sigma\] \[\quad+\dfrac{1}{4}\sum_{j<k}\sum_{\ell}\left((a_{jk}^{\ell}+(-d_{jk}^{\ell}+d_{kj}^{\ell})-e_{jk}^{\ell})+i(b_{jk}^{\ell}+(c_{jk}^{\ell}-c_{kj}^{\ell})-f_{jk}^{\ell})\right)\bar{\gamma}_{j}\wedge\bar{\gamma}_{k}\wedge\dfrac{\sigma}{\gamma_{\ell}}.\]
Since \(\{\bar{\gamma}_{j}\wedge\sigma\mid 1\leq j\leq n\}\cup\{\bar{\gamma}_{j}\wedge\bar{ \gamma}_{k}\wedge\tfrac{\sigma}{\gamma_{\ell}}\mid j<k,1\leq\ell\leq n\}\) is linearly independent, we see that \(d\sigma=0\) if and only if
\[\sum_{\ell=1}^{n}a_{\ell j}^{\ell}-d_{j\ell}^{\ell}-d_{\ell j}^{\ell}+e_{\ell j}^{\ell}=0,\quad\sum_{\ell=1}^{n}b_{\ell j}^{\ell}+c_{\ell j}^{\ell}+c_{j\ell}^{\ell}+f_{\ell j}^{\ell}=0,\quad 1\leq j\leq n, \tag{4}\] \[e_{jk}^{\ell}=a_{jk}^{\ell}-d_{jk}^{\ell}+d_{kj}^{\ell},\;f_{jk}^{\ell}=b_{jk}^{\ell}+c_{jk}^{\ell}-c_{kj}^{\ell},\quad j<k,\;1\leq\ell\leq n. \tag{5}\]
On the other hand, it is well known that the integrability of \(J\) is equivalent to
\[d(\bigwedge^{1,0}\mathfrak{g}_{\mathbb{C}}^{*})\subseteq\bigwedge^{2,0} \mathfrak{g}_{\mathbb{C}}^{*}\oplus\bigwedge^{1,1}\mathfrak{g}_{\mathbb{C}}^{*},\]
where \(\mathfrak{g}_{\mathbb{C}}\) denotes the complexification of \(\mathfrak{g}\) and the bidegrees are induced by \(J\).
Therefore, \(J\) is integrable if and only if the coefficient of \(\bar{\gamma}_{j}\wedge\bar{\gamma}_{k}\) in \(d\gamma_{\ell}\) vanishes for all \(j,k,\ell\). It follows from (3) that this happens if and only if
\[e_{jk}^{\ell}=a_{jk}^{\ell}-d_{jk}^{\ell}+d_{kj}^{\ell},\quad f_{jk}^{\ell}=b_ {jk}^{\ell}+c_{jk}^{\ell}-c_{kj}^{\ell},\quad j<k,\;1\leq\ell\leq n,\]
which is exactly (5).
Next, using the inner product \(\langle\cdot,\cdot\rangle\) on \(\mathfrak{g}\) defined by decreeing the basis \(\{u_{1},\ldots,u_{n},v_{1},\ldots,v_{n}\}\) orthonormal we have that
\[\operatorname{Tr}(J\operatorname{ad}u_{j}) =\sum_{\ell=1}^{n}(b_{\ell j}^{\ell}+c_{j\ell}^{\ell}),\quad \operatorname{Tr}(J\operatorname{ad}v_{j})=\sum_{\ell=1}^{n}(d_{\ell j}^{ \ell}-e_{\ell j}^{\ell})\] \[-\operatorname{Tr}\operatorname{ad}v_{j} =\sum_{\ell=1}^{n}(c_{\ell j}^{\ell}+f_{\ell j}^{\ell}),\quad \operatorname{Tr}\operatorname{ad}u_{j}=\sum_{\ell=1}^{n}(a_{j\ell}^{\ell}+d_ {j\ell}^{\ell})\]
Hence, (4) can be written as
\[-\operatorname{Tr}\operatorname{ad}u_{j}-\operatorname{Tr}(J \operatorname{ad}v_{j}) =0\quad 1\leq j\leq n\] \[-\operatorname{Tr}\operatorname{ad}v_{j}+\operatorname{Tr}(J \operatorname{ad}u_{j}) =0\quad 1\leq j\leq n.\]
Then (4) is equivalent to \(\psi(u_{j})=\psi(v_{j})=0\). Thus, \(d\sigma=0\) if and only if \(J\) is integrable and \(\psi\equiv 0\).
In the unimodular case we obtain the following characterization.
**Corollary 3.2**.: _Let \(\mathfrak{g}\) be a \(2n\)-dimensional unimodular Lie algebra with a complex structure \(J\). Then \(\mathfrak{g}\) admits a closed non-vanishing \((n,0)\)-form if and only if \(\operatorname{Tr}(J\operatorname{ad}x)=0\) for all \(x\in\mathfrak{g}\)._
**Remark 3.3**.: It follows from the proof of Theorem 3.1 that if \(J\) is integrable then\({}^{3}\)
Footnote 3: Cf. [23, Lemma 3]
\[d\sigma =\frac{1}{4}\sum_{j,\ell}\left((a^{\ell}_{\ell j}-d^{\ell}_{j\ell}-d^{\ell}_{\ell j}+e^{\ell}_{\ell j})+i(b^{\ell}_{\ell j}+c^{\ell}_{\ell j}+c^{\ell}_{j\ell}+f^{\ell}_{\ell j})\right)\bar{\gamma}_{j}\wedge\sigma\] \[=\frac{1}{4}\sum_{j}\left((-\operatorname{Tr}(\operatorname{ad}u_{j})-\operatorname{Tr}(J\operatorname{ad}v_{j}))+i(\operatorname{Tr}(J\operatorname{ad}u_{j})-\operatorname{Tr}(\operatorname{ad}v_{j}))\right)\bar{\gamma}_{j}\wedge\sigma\] \[=\frac{1}{4}\sum_{j}(-\psi(v_{j})+i\psi(u_{j}))\,\bar{\gamma}_{j}\wedge\sigma.\]
When \(\mathfrak{g}\) is unimodular and \(J\) is integrable, the vanishing of the \(1\)-form \(\psi\) can also be understood in terms of the complexification \(\mathfrak{g}_{\mathbb{C}}\) of \(\mathfrak{g}\) as the following proposition shows. Recall that \(\mathfrak{g}_{\mathbb{C}}=\mathfrak{g}^{1,0}\oplus\mathfrak{g}^{0,1}\), where \(\mathfrak{g}^{1,0}\) (respectively, \(\mathfrak{g}^{0,1}\)) is the \(i\)-eigenspace (respectively, \((-i)\)-eigenspace) of the \(\mathbb{C}\)-linear extension \(J^{\mathbb{C}}:\mathfrak{g}_{\mathbb{C}}\to\mathfrak{g}_{\mathbb{C}}\), and they are given by
\[\mathfrak{g}^{1,0}=\{x-iJx\mid x\in\mathfrak{g}\},\quad\mathfrak{g}^{0,1}=\{x +iJx\mid x\in\mathfrak{g}\}.\]
Both \(\mathfrak{g}^{1,0}\) and \(\mathfrak{g}^{0,1}\) are Lie subalgebras of \(\mathfrak{g}_{\mathbb{C}}\) due to the integrability of \(J\).
**Proposition 3.4**.: _Let \((\mathfrak{g},J)\) be a \(2n\)-dimensional unimodular Lie algebra equipped with a complex structure. Then \((\mathfrak{g},J)\) has a nonzero closed \((n,0)\)-form if and only if_
1. \(\mathfrak{g}^{1,0}\) _is unimodular, or_
2. \(\mathfrak{g}^{0,1}\) _is unimodular._
Proof.: We will show the equivalence only for \(\mathfrak{g}^{1,0}\). The computations for \(\mathfrak{g}^{0,1}\) are completely analogous. Given a Hermitian inner product \(\langle\cdot\,,\cdot\,\rangle\) on \(\mathfrak{g}\), it can be extended to a complex inner product on \(\mathfrak{g}_{\mathbb{C}}\) satisfying \(\langle a+ib,c+id\rangle=\langle a,c\rangle+\langle b,d\rangle-i(\langle a,d \rangle-\langle b,c\rangle)\). Thus, if \(\{e_{j}\}_{j=1}^{2n}\) is an orthonormal basis of \(\mathfrak{g}\) such that \(Je_{2j-1}=e_{2j}\) then \(\{\frac{1}{\sqrt{2}}(e_{2j-1}-ie_{2j})\}_{j=1}^{n}\) is an orthonormal basis of \(\mathfrak{g}^{1,0}\).
Now, consider \(x-iJx\in\mathfrak{g}^{1,0}\). We can decompose \(\operatorname{ad}(x-iJx)\) with respect to the decomposition \(\mathfrak{g}_{\mathbb{C}}=\mathfrak{g}^{1,0}\oplus\mathfrak{g}^{0,1}\) as
\[\operatorname{ad}(x-iJx)=\left[\begin{array}{c|c}A_{x}&*\\ \hline 0&B_{x}\end{array}\right].\]
Next we compute
\[\operatorname{Tr}A_{x} =\frac{1}{2}\sum_{j=1}^{n}\langle[x-iJx,e_{2j-1}-ie_{2j}],e_{2j-1}-ie_{2j}\rangle\] \[=\frac{1}{2}\sum_{j=1}^{n}\langle[x,e_{2j-1}]-[Jx,e_{2j}]-i([x,e_{2j}]+[Jx,e_{2j-1}]),e_{2j-1}-ie_{2j}\rangle\] \[=\frac{1}{2}\left(\operatorname{Tr}\operatorname{ad}x-i\operatorname{Tr}\operatorname{ad}(Jx)-\operatorname{Tr}(J\operatorname{ad}(Jx))-i\operatorname{Tr}(J\operatorname{ad}x)\right)\] \[=\frac{1}{2}\left(\operatorname{Tr}\operatorname{ad}x-\operatorname{Tr}(J\operatorname{ad}(Jx))\right)-\frac{i}{2}\left(\operatorname{Tr}\operatorname{ad}(Jx)+\operatorname{Tr}(J\operatorname{ad}x)\right).\]
Therefore, \(\operatorname{Tr}\operatorname{ad}(x-iJx)=0\) on \(\mathfrak{g}^{1,0}\) if and only if \(\operatorname{Tr}(J\operatorname{ad}x)=-\operatorname{Tr}\operatorname{ad}(Jx)\) on \(\mathfrak{g}\). In particular, as \(\mathfrak{g}\) is unimodular, it follows that \(\mathfrak{g}^{1,0}\) is unimodular if and only if \(\operatorname{Tr}(J\operatorname{ad}x)=0\) for all \(x\in\mathfrak{g}\), and the statement follows from Corollary 3.2.
Using Theorem 3.1, we can recover some known results in the literature.
For instance, the basic examples of complex manifolds with holomorphically trivial canonical bundle are complex parallelizable manifolds. Indeed, the triviality of the holomorphic tangent bundle implies the triviality of the canonical bundle. By a result of Wang [48] a compact complex parallelizable manifold is biholomorphic to a compact quotient \(\Gamma\backslash G\), where \(G\) is a complex Lie group and \(\Gamma\) is a uniform lattice of \(G\). The triviality of the canonical bundle of such a compact quotient \(\Gamma\backslash G\) also follows from Theorem 3.1 noting that the complex structure on the quotient is induced from a left invariant complex structure on \(G\) satisfying \(J\operatorname{ad}(x)=\operatorname{ad}(x)J\) for all \(x\in\mathfrak{g}=\operatorname{Lie}(G)\). This implies that \(J\operatorname{ad}(x)=\operatorname{ad}(Jx)\) and since \(\mathfrak{g}\) is unimodular, the canonical \(1\)-form \(\psi\) vanishes.
Another family of compact complex manifolds with trivial canonical bundle is given by nilmanifolds equipped with invariant complex structures. This fact (proven in [8]) can be deduced from Theorem 3.1, using that a nilpotent Lie algebra equipped with a complex structure satisfies \(\operatorname{Tr}(J\operatorname{ad}x)=0\) for all \(x\in\mathfrak{g}\) (see [8, Lemma 2.2] and [29, Proposition 2.1]). Remarkably, this fact was not originally used in [8] to prove that complex nilmanifolds have trivial canonical bundle; instead, the authors use the existence of a distinguished basis of \((1,0)\)-forms on nilpotent Lie algebras given by Salamon in [42].
More recently, in [15], \(2n\)-dimensional almost abelian Lie algebras admitting a complex structure with a nonzero closed \((n,0)\)-form were characterized. Let us recall that a Lie algebra \(\mathfrak{g}\) is called almost abelian if it has a codimension one abelian ideal. In this case, one can write an almost abelian Lie algebra \(\mathfrak{g}\) as \(\mathfrak{g}=\mathbb{R}e_{2n}\ltimes\mathfrak{u}\), where \(\mathfrak{u}\) is an abelian ideal. It is well known (see for instance [29]) that a \(2n\)-dimensional almost abelian Lie algebra admits a complex structure if and only if \(B:=\operatorname{ad}e_{2n}|_{\mathfrak{u}}\) can be written as
\[B=\left[\begin{array}{c|c}a&0\\ \hline v&A\end{array}\right],\quad\text{where }a\in\mathbb{R},\,v\in\mathbb{R}^{2n-2},\, AJ_{1}=J_{1}A.\]
Here we are decomposing \(\mathfrak{u}=\mathbb{R}e_{1}\oplus(\mathfrak{u}\cap J\mathfrak{u})\), where \(e_{1}=-Je_{2n}\) and \(J_{1}:=J|_{\mathfrak{u}\cap J\mathfrak{u}}\).
As a corollary of Theorem 3.1 we recover the result in [15, Proposition 2.4].
**Corollary 3.5**.: _Let \(\mathfrak{g}=\mathbb{R}e_{2n}\ltimes\mathbb{R}^{2n-1}\) be an almost abelian Lie algebra and \(J\) be a complex structure on \(\mathfrak{g}\) as above. Then \((\mathfrak{g},J)\) admits a nonzero closed \((n,0)\)-form if and only if_
\[a+\frac{1}{2}\operatorname{Tr}A=0,\quad\operatorname{Tr}(J_{1}A)=0.\]
_If, moreover, \(\mathfrak{g}\) is unimodular then_
\[a=0,\quad\operatorname{Tr}A=0,\quad\operatorname{Tr}(J_{1}A)=0.\]
Proof.: We first compute \(J\operatorname{ad}e_{2n}\) and \(J\operatorname{ad}e_{1}\) in the basis \(\{e_{2n},e_{1},\dots,e_{2n-1}\}\), where \(\{e_{j}\}_{j=2}^{2n-1}\) is a \(J\)-adapted basis of \(\mathfrak{u}\cap J\mathfrak{u}\):
\[J\operatorname{ad}e_{2n}=\left[\begin{array}{c|c}0&1\\ -1&0\\ \hline&J_{1}\end{array}\right]\cdot\left[\begin{array}{cc|c}0&0\\ 0&a\\ \hline 0&v&A\end{array}\right]=\left[\begin{array}{cc|c}0&a\\ 0&0\\ \hline 0&*&J_{1}A\end{array}\right],\]
\[J\operatorname{ad}e_{1}=\left[\begin{array}{c|c}0&1\\ -1&0\\ \hline&J_{1}\end{array}\right]\cdot\left[\begin{array}{cc|c}0&0\\ -a&0\\ \hline-v&0&0\end{array}\right]=\left[\begin{array}{cc|c}-a&0\\ 0&0\\ \hline*&0&0\end{array}\right],\]
Thus, we obtain that
\[\psi(e_{2n})=\operatorname{Tr}(J\operatorname{ad}e_{2n})- \operatorname{Tr}\operatorname{ad}(Je_{2n})=\operatorname{Tr}(J_{1}A),\] \[\psi(e_{1})=\operatorname{Tr}(J\operatorname{ad}e_{1})- \operatorname{Tr}\operatorname{ad}(Je_{1})=-a-(a+\operatorname{Tr}A)=-2a- \operatorname{Tr}A.\]
It is easily seen that \(\psi(\mathfrak{u}\cap J\mathfrak{u})=0\). Therefore, \(\psi\equiv 0\) if and only if \(a+\frac{1}{2}\operatorname{Tr}A=0\) and \(\operatorname{Tr}(J_{1}A)=0\). Moreover, if \(\mathfrak{g}\) is unimodular then \(0=a+\operatorname{Tr}A\), which together with \(a+\frac{1}{2}\operatorname{Tr}A=0\) forces \(0=a=\operatorname{Tr}A\).
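As a numerical sanity check of this computation (our own sketch, with arbitrary parameter values; \(A=\left[\begin{smallmatrix}p&-q\\ q&p\end{smallmatrix}\right]\) is chosen so that \(AJ_{1}=J_{1}A\)):

```python
import numpy as np

# Corollary 3.5 for n = 2, basis ordered (e4, e1, e2, e3); arbitrary parameters.
a, v1, v2, p, q = 0.7, 1.3, -0.4, 0.5, 2.0
ad = [np.zeros((4, 4)) for _ in range(4)]
ad[0][1:, 1:] = [[a, 0, 0], [v1, p, -q], [v2, q, p]]   # ad e4 |_u = B
ad[1][:, 0] = [0, -a, -v1, -v2]                        # [e1, e4] = -B e1
ad[2][:, 0] = [0, 0, -p, -q]                           # [e2, e4] = -B e2
ad[3][:, 0] = [0, 0, q, -p]                            # [e3, e4] = -B e3
J = np.zeros((4, 4))
J[:, 0], J[:, 1] = [0, -1, 0, 0], [1, 0, 0, 0]         # J e4 = -e1, J e1 = e4
J[:, 2], J[:, 3] = [0, 0, 0, 1], [0, 0, -1, 0]         # J e2 = e3,  J e3 = -e2

adv = lambda x: sum(xi * Ai for xi, Ai in zip(x, ad))
psi = lambda x: np.trace(J @ adv(x)) - np.trace(adv(J @ x))
e4, e1 = np.eye(4)[0], np.eye(4)[1]
print(np.isclose(psi(e1), -2*a - 2*p))   # psi(e1) = -2a - Tr A:  True
print(np.isclose(psi(e4), -2*q))         # psi(e4) = Tr(J1 A):    True
```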
**Example 3.6**.: In dimension \(6\), according to [14], the non-nilpotent unimodular almost abelian Lie algebras admitting complex structure with a nonzero closed \((3,0)\)-form are
* \(\mathfrak{g}_{1}=(e^{15},-e^{25},-e^{35},e^{45},0,0)\),
* \(\mathfrak{g}_{2}^{\alpha}=(\alpha e^{15}+e^{25},-e^{15}+\alpha e^{25},-\alpha e ^{35}+e^{45},-e^{35}-\alpha e^{45},0,0)\), \(\alpha\geq 0\).
This notation means for instance that \(\mathfrak{g}_{1}\) is the Lie algebra determined by a basis \(\{e^{j}\}_{j=1}^{6}\) of \(\mathfrak{g}_{1}^{*}\) such that \(de^{1}=e^{15}\), \(de^{2}=-e^{25}\), \(de^{3}=-e^{35}\), \(de^{4}=e^{45}\), and \(de^{5}=de^{6}=0\). On \(\mathfrak{g}_{1}\) and \(\mathfrak{g}_{2}^{0}\) there is only one complex structure with a nonzero closed \((3,0)\)-form whereas \(\mathfrak{g}_{2}^{\alpha}\), \(\alpha>0\), has two such complex structures (up to equivalence). The corresponding simply connected Lie groups \(G_{1}\) and \(G_{2}^{\alpha}\) admit lattices (for a countable number of values of the parameter \(\alpha\)).
We consider next a special family of left invariant complex structures on Lie groups. If an almost complex structure \(J\) on a Lie algebra \(\mathfrak{g}\) satisfies \([Jx,Jy]=[x,y]\) for all \(x,y\in\mathfrak{g}\) then it is immediate to verify that \(J\) is integrable. Such a complex structure is called _abelian_. They were introduced in [7] and they have proved very useful in different contexts in differential and complex geometry. Abelian complex structures can only occur on \(2\)-step solvable Lie algebras (see for instance [2]).
In the next result we show the existence of a left invariant trivializing section of the canonical bundle of a unimodular Lie group equipped with an abelian complex structure. As usual we state the result at the Lie algebra level.
**Corollary 3.7**.: _A \(2n\)-dimensional Lie algebra \(\mathfrak{g}\) equipped with an abelian complex structure \(J\) has a nonzero closed \((n,0)\)-form if and only if \(\mathfrak{g}\) is unimodular. In particular, any complex solvmanifold equipped with an abelian complex structure has trivial canonical bundle._
Proof.: The fact that \(J\) is abelian is equivalent to \([x,Jy]=-[Jx,y]\) for all \(x,y\in\mathfrak{g}\). Hence, \(\operatorname{ad}(x)J=-\operatorname{ad}(Jx)\), which implies \(\operatorname{Tr}(\operatorname{ad}(x)J)=-\operatorname{Tr}(\operatorname{ad} (Jx))\). This identity together with the condition \(\operatorname{Tr}(\operatorname{ad}(x)J)=\operatorname{Tr}(\operatorname{ad} (Jx))\), which comes from \(\psi\equiv 0\), and the fact that \(J\) is an isomorphism imply the result.
In dimension \(6\), there is only one unimodular non-nilpotent Lie algebra admitting an abelian complex structure (see [1]). It is the Lie algebra \(\mathfrak{s}\) determined by a basis \(\{e_{i}\}_{i=1}^{6}\) and Lie brackets
\[[e_{1},e_{6}]=-e_{1},\quad[e_{2},e_{5}]=e_{1},\quad[e_{1},e_{5}]= -e_{2},\quad[e_{2},e_{6}]=-e_{2},\] \[[e_{3},e_{6}]=e_{3},\quad[e_{4},e_{5}]=-e_{3},\quad[e_{3},e_{5}]= e_{4},\quad[e_{4},e_{6}]=e_{4}.\]
This Lie algebra appears as \(\mathfrak{g}_{8}\) in [14] and as \(\mathfrak{s}_{(-1,0)}\) in [1]; it is the real Lie algebra underlying the complex parallelizable Nakamura manifold [35] (see also [4]). It has an infinite number of non-equivalent\({}^{4}\) complex structures admitting a nonzero holomorphic \((3,0)\)-form, but only one of them is abelian (see [14, Proposition 3.7]), namely: \(Je_{1}=e_{2}\), \(Je_{3}=e_{4}\) and \(Je_{5}=e_{6}\). It was proven in [50] that its corresponding simply connected Lie group \(S\) admits a lattice. We show next that this example can be generalized to any dimension of the form \(4n+2\).
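Both the abelian condition and the vanishing of \(\psi\) (guaranteed by Corollary 3.7, as \(\mathfrak{s}\) is unimodular) can be confirmed numerically; the following sketch of our own encodes the brackets above with zero-based indices:

```python
import numpy as np

n = 6
C = np.zeros((n, n, n))                       # [e_i, e_j] = sum_k C[i,j,k] e_k
rel = {(0, 5): (-1, 0), (1, 4): (1, 0), (0, 4): (-1, 1), (1, 5): (-1, 1),
       (2, 5): (1, 2), (3, 4): (-1, 2), (2, 4): (1, 3), (3, 5): (1, 3)}
for (i, j), (c, k) in rel.items():
    C[i, j, k], C[j, i, k] = c, -c

J = np.zeros((n, n))
for i, j in [(0, 1), (2, 3), (4, 5)]:         # J e1 = e2, J e3 = e4, J e5 = e6
    J[j, i], J[i, j] = 1, -1

bra = lambda x, y: np.einsum('i,j,ijk->k', x, y, C)
adv = lambda x: np.einsum('i,ijk->kj', x, C)  # matrix of ad(x)
psi = lambda x: np.trace(J @ adv(x)) - np.trace(adv(J @ x))
E = np.eye(n)
print(all(np.allclose(bra(J @ E[i], J @ E[j]), bra(E[i], E[j]))
          for i in range(n) for j in range(n)))         # J abelian: True
print(all(np.isclose(psi(E[i]), 0) for i in range(n)))  # psi == 0:  True
```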
**Example 3.8**.: For \(n\geq 1\), let \(\mathfrak{s}_{n}=\mathbb{R}^{2}\ltimes\mathbb{R}^{4n}\) be the \((4n+2)\)-dimensional unimodular Lie algebra with basis \(\{f_{1},f_{2},e_{1},e_{2},\ldots,e_{4n}\}\) and Lie brackets given by\({}^{5}\)
Footnote 5: Throughout the article we use \(A\oplus B\) to denote the block-diagonal matrix \(\begin{bmatrix}A&\\ &B\end{bmatrix}\). This naturally generalizes to the sum of \(n\) square matrices.
\[A:=\operatorname{ad}f_{1}|_{\mathbb{R}^{4n}}=\left(\begin{bmatrix}0&-1\\ 1&0\end{bmatrix}\oplus\begin{bmatrix}0&1\\ -1&0\end{bmatrix}\right)^{\oplus n},\quad B:=\operatorname{ad}f_{2}|_{ \mathbb{R}^{4n}}=\operatorname{diag}(1,1,-1,-1)^{\oplus n}.\]
Note that \(\mathfrak{s}_{1}\) coincides with the Lie algebra \(\mathfrak{s}\) above.
It is easy to verify that the almost complex structure \(J\) given by \(Jf_{1}=f_{2}\) and \(Je_{2j-1}=e_{2j}\) for all \(1\leq j\leq 2n\) is abelian. It follows from Corollary 3.7 that \(\mathfrak{s}_{n}\) admits a nonzero closed \((2n+1,0)\)-form. We show next that the associated simply connected Lie group \(S_{n}\) admits lattices. For \(m\in\mathbb{N}\), \(m\geq 3\), let \(t_{m}=\log(\frac{m+\sqrt{m^{2}-4}}{2})\). Then
\[\exp(\pi A)=-\operatorname{I}_{4n},\quad\exp(t_{m}B)=\operatorname{diag}( \operatorname{e}^{t_{m}},\operatorname{e}^{t_{m}},\operatorname{e}^{-t_{m}}, \operatorname{e}^{-t_{m}})^{\oplus n}.\]
Using that \(\operatorname{e}^{t_{m}}+\operatorname{e}^{-t_{m}}=m\), it is easily seen that there exists \(P\in\operatorname{GL}(4n,\mathbb{R})\) such that \(P^{-1}\exp(t_{m}B)P=\begin{bmatrix}0&-1\\ 1&m\end{bmatrix}^{\oplus 2n}\), and it is clear that \(P^{-1}(-\operatorname{I}_{4n})P=-\operatorname{I}_{4n}\), so the matrices \(\exp(\pi A)\) and \(\exp(t_{m}B)\) are simultaneously conjugate to integer unimodular matrices. According to Theorem 2.4, since any basis of \(\mathbb{R}^{4n}\) is rational, the subgroup \(\Gamma_{m}^{n}:=(\pi\mathbb{Z}\oplus t_{m}\mathbb{Z})\ltimes P\mathbb{Z}^{4n}\) is a lattice of \(S_{n}\). The corresponding complex solvmanifold \((\Gamma_{m}^{n}\backslash S_{n},J)\) has trivial canonical bundle, for any \(m\).
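These assertions can be checked numerically; a small sketch of our own for \(n=1\) and \(m=3\) (assuming numpy and scipy):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0, -1, 0, 0], [1, 0, 0, 0],
              [0, 0, 0, 1], [0, 0, -1, 0]], dtype=float)
B = np.diag([1., 1., -1., -1.])
m = 3
tm = np.log((m + np.sqrt(m**2 - 4)) / 2)

print(np.allclose(expm(np.pi * A), -np.eye(4)))   # exp(pi A) = -I4
print(np.isclose(np.exp(tm) + np.exp(-tm), m))    # e^{t_m} + e^{-t_m} = m
# Each diagonal pair (e^{t_m}, e^{-t_m}) has characteristic polynomial
# x^2 - m x + 1, hence is conjugate to the companion matrix [[0,-1],[1,m]]:
Q = np.array([[1, np.exp(tm)], [1, np.exp(-tm)]])  # columns v, Dv for v = (1,1)
D = np.diag([np.exp(tm), np.exp(-tm)])
print(np.round(np.linalg.inv(Q) @ D @ Q, 8))       # [[0,-1],[1,3]]
```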
**Remark 3.9**.: The Lie algebra \(\mathfrak{s}_{n}\) also carries a bi-invariant complex structure \(\tilde{J}\) given by
\[\tilde{J}f_{1}=-f_{2},\qquad\tilde{J}e_{2j-1}=e_{2j},\quad 1\leq j\leq 2n,\]
so that the solvmanifold \((\Gamma\backslash S_{n},\tilde{J})\) is complex parallelizable for any lattice \(\Gamma\subset S_{n}\), generalizing in this way the complex parallelizable Nakamura manifold.
### Examples in almost nilpotent solvmanifolds
We finish this section by using Theorem 3.1 to study the existence of holomorphic \((n,0)\)-forms on a class of \(2n\)-dimensional almost nilpotent Lie algebras which were considered in [16]. There, a characterization is obtained of the existence of two types of Hermitian structures \((J,g)\) on almost nilpotent Lie algebras \(\mathfrak{g}=\mathbb{R}\ltimes_{D}\mathfrak{n}\) whose nilradical \(\mathfrak{n}\) has one-dimensional commutator \([\mathfrak{n},\mathfrak{n}]\), namely
1. \(J[\mathfrak{n},\mathfrak{n}]=\mathfrak{n}^{\perp_{\mathfrak{g}}}\), where \(\mathfrak{n}^{\perp_{\mathfrak{g}}}\) denotes the orthogonal complement of \(\mathfrak{n}\) in \(\mathfrak{g}\), and
2. \(J[\mathfrak{n},\mathfrak{n}]\subset\mathfrak{n}\).
According to [6], if an almost nilpotent Lie algebra has nilradical \(\mathfrak{n}\) with one-dimensional commutator, then \(\mathfrak{n}\) is a central extension of a Heisenberg Lie algebra \(\mathfrak{h}_{2\ell+1}\), i.e. a direct sum \(\mathfrak{h}_{2\ell+1}\oplus\mathbb{R}^{h}\) with \(\ell,h\) positive integers. Recall that \(\mathfrak{h}_{2\ell+1}=\text{span}\{x_{1},\ldots,x_{\ell},y_{1},\ldots,y_{\ell},z\}\) with brackets given by \([x_{i},y_{i}]=z\) for \(1\leq i\leq\ell\).
Let us consider first Case (i). We will give a characterization of the existence of nonzero holomorphic \((n,0)\)-forms, together with two families of complex solvmanifolds with trivial canonical bundle.
**Theorem 3.10**.: _[_16_]_ _Let \(\mathfrak{g}=\mathbb{R}e_{2n}\ltimes\mathfrak{n}\) be an almost nilpotent Lie algebra with \(\dim[\mathfrak{n},\mathfrak{n}]=1\) equipped with a Hermitian structure \((J,g)\) such that \(J[\mathfrak{n},\mathfrak{n}]=\mathfrak{n}^{\perp_{\mathfrak{g}}}\). Then there is an orthonormal basis \(\{e_{i}\}_{i=1}^{2n}\) such that \(Je_{1}=e_{2n}\), \(Je_{2k}=e_{2k+1}\) for \(1\leq k\leq n-2\) and_
\[[\mathfrak{n},\mathfrak{n}]=\mathbb{R}e_{1},\quad\mathfrak{k}_{1}:=\mathfrak{n }\cap J\mathfrak{n}=\text{span}\{e_{2},\ldots,e_{2n-1}\},\quad J[\mathfrak{n},\mathfrak{n}]=\mathbb{R}e_{2n}.\]
_The Lie brackets in this basis are described by_
\[\operatorname{ad}e_{2n}|_{\mathfrak{n}}=\begin{bmatrix}a&0\\ 0&A\end{bmatrix}\quad\text{and}\quad[Y,Z]=-\eta(Y,Z)e_{1},\qquad a\in\mathbb{R}, \ A\in\mathfrak{gl}(\mathfrak{k}_{1}),\ \eta\in\bigwedge^{2}\mathfrak{k}_{1}^{*},\ Y,Z\in \mathfrak{k}_{1},\]
_where the following conditions are satisfied_
1. \(AJ_{1}=J_{1}A\)_, where_ \(J_{1}=J|_{\mathfrak{k}_{1}}\)_,_
2. \(\eta(J\cdot,J\cdot)=\eta(\cdot,\cdot)\)_, and_
3. \(A^{*}\eta=a\eta\)_, where_ \(A^{*}\eta\) _is the 2-form on_ \(\mathfrak{k}_{1}\) _defined by_ \[(A^{*}\eta)(X,Y)=\eta(A(X),Y)+\eta(X,A(Y)),\quad X,Y\in\mathfrak{k}_{1}.\]
_Conversely, if the data \((a,A,\eta,J,g)\) as above satisfies (1)-(3) then they define a Hermitian structure on an almost nilpotent Lie algebra with \(\dim[\mathfrak{n},\mathfrak{n}]=1\) such that \(J[\mathfrak{n},\mathfrak{n}]=\mathfrak{n}^{\perp_{\mathfrak{g}}}\)._
**Proposition 3.11**.: _The Lie algebra \(\mathfrak{g}=\mathbb{R}e_{2n}\ltimes\mathfrak{n}\) equipped with a complex structure \(J\) as above has a nonzero closed \((n,0)\)-form if and only if_
\[a+\frac{1}{2}\operatorname{Tr}A=0,\quad\operatorname{Tr}(J_{1}A)=0.\]
_Moreover, if \(\mathfrak{g}\) is unimodular then_
\[a=\operatorname{Tr}A=0,\quad\operatorname{Tr}(J_{1}A)=0.\]
Proof.: In [16, Lemma 5.1] the following expression for the canonical 1-form \(\psi\) is computed:
\[\psi(e_{1})=-2a-\operatorname{Tr}A,\quad\psi(e_{2n})=\operatorname{Tr}J_{1}A, \quad\psi|_{\mathfrak{k}_{1}}\equiv 0.\]
Then \(\psi\equiv 0\) if and only if \(a+\frac{1}{2}\operatorname{Tr}A=\operatorname{Tr}J_{1}A=0\), so the statement follows from Theorem 3.1. If \(\mathfrak{g}\) is unimodular then \(a+\frac{1}{2}\operatorname{Tr}A=0\) and \(a+\operatorname{Tr}A=0\) forces \(a=\operatorname{Tr}A=0\).
**Examples 3.12**.: (i) Let \(\mathfrak{g}_{n}=\mathbb{R}e_{4n+2}\ltimes_{B}\mathfrak{h}_{4n+1}\) where
\[B=\left[\begin{array}{c|cc}0&\\ \hline&\operatorname{I}_{2n}&\\ &-\operatorname{I}_{2n}\end{array}\right],\quad\text{and}\quad\eta=e^{2,\,2n+ 2}+\cdots+e^{2n+1,\,4n+1}.\]
Equip \(\mathfrak{g}_{n}\) with the metric \(g\) which makes \(\{e_{j}\}_{j=1}^{4n+2}\) an orthonormal basis, and the complex structure \(J\) given by \(Je_{1}=e_{4n+2}\) and \(Je_{2k}=e_{2k+1}\), \(1\leq k\leq 2n\). Then, according to Theorem 3.10, \((\mathfrak{g}_{n},J,g)\) defines a Hermitian structure on the unimodular almost nilpotent Lie algebra \(\mathfrak{g}_{n}\). Moreover, it is easy to verify that \((\mathfrak{g}_{n},J)\) satisfies the conditions in Proposition 3.11, so \((\mathfrak{g}_{n},J)\) admits a nonzero closed \((2n+1,0)\)-form. For any \(m\in\mathbb{N}\), \(m\geq 3\), the associated simply connected Lie group \(G_{n}\) admits a lattice \(\Gamma_{m}^{n}\). Indeed, for \(t_{m}=\log(\frac{m+\sqrt{m^{2}-4}}{2})\), let
\[P_{m}=\left[\begin{array}{c|cc}1&0&0\\ \hline 0&\operatorname{I}_{2n}&\alpha_{m}\operatorname{I}_{2n}\\ 0&\frac{1}{\alpha_{m}^{-1}-\alpha_{m}}\operatorname{I}_{2n}&\frac{\alpha_{m}^{-1}}{\alpha_{m}^{-1}-\alpha_{m}}\operatorname{I}_{2n}\end{array}\right],\quad\text{where}\quad\alpha_{m}=\exp(t_{m}).\]
Then \(P_{m}^{-1}\exp(t_{m}B)P_{m}=\left[\begin{array}{c|cc}1&0&0\\ \hline 0&0_{2n}&-\operatorname{I}_{2n}\\ 0&\operatorname{I}_{2n}&m\operatorname{I}_{2n}\end{array}\right]\). Thus, if we set \(f_{j}=P_{m}e_{j}\), \(1\leq j\leq 4n+1\), then we have that \([f_{j},f_{k}]=[e_{j},e_{k}]\), \(1\leq j,k\leq 4n+1\); hence \(\{f_{j}\}_{j=1}^{4n+1}\) is a rational basis of \(\mathfrak{h}_{4n+1}\) in which the matrix of \(\exp(t_{m}B)\) is an integer unimodular matrix. It follows from Theorem 2.4 that \(\Gamma_{m}^{n}=t_{m}\mathbb{Z}\ltimes\exp^{H_{4n+1}}(\operatorname{span}_{\mathbb{Z}}\{f_{1},\dots,f_{4n+1}\})\) is a lattice of \(G_{n}\). All the corresponding complex solvmanifolds \((\Gamma_{m}^{n}\backslash G_{n},J)\) have trivial canonical bundle.
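The conjugation above is easy to confirm numerically; the following sketch of our own takes \(n=1\) and \(m=3\) (assuming numpy):

```python
import numpy as np

m = 3
alpha = (m + np.sqrt(m**2 - 4)) / 2                 # alpha = exp(t_m)
I2 = np.eye(2)
expB = np.block([[np.ones((1, 1)), np.zeros((1, 4))],
                 [np.zeros((2, 1)), alpha * I2, np.zeros((2, 2))],
                 [np.zeros((2, 1)), np.zeros((2, 2)), I2 / alpha]])
c = 1 / (1 / alpha - alpha)
P = np.block([[np.ones((1, 1)), np.zeros((1, 4))],
              [np.zeros((2, 1)), I2, alpha * I2],
              [np.zeros((2, 1)), c * I2, (c / alpha) * I2]])
print(np.round(np.linalg.inv(P) @ expB @ P, 8))
# -> the integer unimodular block matrix [[1,0,0],[0,0,-I2],[0,I2,3*I2]]
```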
(ii) For \(a_{1},\ldots,a_{n}\in\mathbb{R}\) such that \(\sum_{j=1}^{n}a_{j}=0\) let us define \(\mathfrak{g}=\mathfrak{g}(a_{1},\ldots,a_{n})=\mathbb{R}e_{2n+2}\ltimes_{B} \mathfrak{h}_{2n+1}\) where
\[B=(0)\oplus\begin{bmatrix}0&-a_{1}\\ a_{1}&0\end{bmatrix}\oplus\cdots\oplus\begin{bmatrix}0&-a_{n}\\ a_{n}&0\end{bmatrix},\quad\text{and}\quad\eta=e^{23}+\cdots+e^{2n,2n+1}.\]
Equip \(\mathfrak{g}\) with the metric \(g\) such that \(\{e_{j}\}_{j=1}^{2n+2}\) is an orthonormal basis and the complex structure \(J\) defined by \(Je_{1}=e_{2n+2}\) and \(Je_{2k}=e_{2k+1}\), \(1\leq k\leq n\). According to Theorem 3.10, \((J,g)\) is a Hermitian structure on the unimodular almost nilpotent Lie algebra \(\mathfrak{g}\). It is easy to verify that \((\mathfrak{g},J)\) satisfies the conditions in Proposition 3.11 (since \(\operatorname{Tr}J_{1}A=\sum_{j=1}^{n}a_{j}=0\)), so \((\mathfrak{g},J)\) has a nonzero closed \((n+1,0)\)-form. The corresponding simply connected Lie group \(G:=G(a_{1},\ldots,a_{n})\) admits a lattice for some values of the parameters \(a_{1},\ldots,a_{n}\). Indeed, choosing \(a_{1},\ldots,a_{n}\in\{2\pi,\pi,\frac{\pi}{2}\}+2\pi\mathbb{Z}\) with \(a_{1}+\cdots+a_{n}=0\), we obtain that \(\{e_{j}\}_{j=1}^{2n+1}\) is a rational basis of \(\mathfrak{h}_{2n+1}\) in which \(\exp B\) is a unimodular integer matrix, so by Theorem 2.4 the Lie group \(G\) admits a lattice \(\Gamma:=\Gamma(a_{1},\ldots,a_{n})\). The corresponding solvmanifolds equipped with the induced complex structure \((\Gamma\backslash G,J)\) have trivial canonical bundle.
We consider next Case (ii) and, as in Case (i), we give a characterization for the existence of nonzero holomorphic \((n,0)\)-forms. Moreover, we exhibit two families of complex solvmanifolds with trivial canonical bundle.
**Theorem 3.13**.: _[_16_]_ _Let \(\mathfrak{g}=\mathbb{R}e_{2n}\ltimes\mathfrak{n}\) be an almost nilpotent Lie algebra with \(\dim[\mathfrak{n},\mathfrak{n}]=1\) equipped with a Hermitian structure \((J,g)\) such that \(J[\mathfrak{n},\mathfrak{n}]\subset\mathfrak{n}\). Then there is an orthonormal basis \(\{e_{i}\}_{i=1}^{2n}\) such that \(Je_{2j-1}=e_{2j}\), \(1\leq j\leq n\), and_
\[\mathfrak{n}=\mathbb{R}\langle e_{1},\ldots,e_{2n-1}\rangle,\quad[\mathfrak{n },\mathfrak{n}]=\mathbb{R}e_{1}.\]
_Denote \(\mathfrak{k}_{1}:=[\mathfrak{n},\mathfrak{n}]^{\perp_{\mathfrak{g}}}\cap \mathfrak{n}\) and \(\mathfrak{k}_{2}=\mathfrak{k}_{1}\cap J\mathfrak{k}_{1}\). Then, with respect to the decomposition_
\[\mathfrak{n}=\mathbb{R}e_{1}\oplus\mathbb{R}e_{2}\oplus\mathfrak{k}_{2}\oplus \mathbb{R}e_{2n-1},\]
_the Lie brackets are given by_
\[\operatorname{ad}e_{2n}|_{\mathfrak{n}}=\left[\begin{array}{c|c|c|c}a_{1}&0& \alpha&v_{1}\\ \hline 0&a_{2}&\gamma&v_{2}\\ \hline 0&0&A&v\\ \hline 0&0&0&a\end{array}\right],\quad[Y,Z]=-\eta(Y,Z)e_{1},\quad Y,Z\in \mathfrak{k}_{1},\]
_where \(a,a_{1},a_{2},v_{1},v_{2}\in\mathbb{R}\), \(v\in\mathfrak{k}_{2}\), \(\alpha,\gamma\in\mathfrak{k}_{2}^{*}\), \(A\in\mathfrak{gl}(\mathfrak{k}_{2})\) and_
\[\eta=\xi+(\gamma+\alpha\circ J)\wedge e^{2n-1}+(a_{2}-a_{1})e^{2}\wedge e^{2n -1},\quad\xi\in\bigwedge^{2}\mathfrak{k}_{2}^{*}.\]
_Moreover, \([A,J|_{\mathfrak{k}_{2}}]=0\) and the equations_
\[0 =(a_{2}-a_{1})(a+a_{2}-a_{1})\] \[0 =A^{*}\xi-a_{1}\xi\] \[0 =(a-a_{1})(\gamma+\alpha\circ J)+A^{*}(\gamma+\alpha\circ J)+(a_{ 2}-a_{1})\gamma-\iota_{v}\xi\]
_are satisfied._
_Conversely, the data \((a,a_{1},a_{2},A,v_{1},v_{2},\alpha,\gamma,v,\xi,J,g)\) satisfying the conditions above define a Hermitian structure \((J,g)\) on the almost nilpotent Lie algebra \(\mathfrak{g}\) such that \(J[\mathfrak{n},\mathfrak{n}]\subset\mathfrak{n}\)._
**Proposition 3.14**.: _The Lie algebra \(\mathfrak{g}=\mathbb{R}e_{2n}\ltimes\mathfrak{n}\) equipped with the complex structure \(J\) as above has a nonzero closed \((n,0)\)-form if and only if_
\[\operatorname{Tr}A=-2(a+a_{1}),\quad\operatorname{Tr}(J|_{\mathfrak{k}_{2}}A)=0.\]
_If the associated simply connected Lie group \(G\) admits lattices then_
\[a_{1}=a_{2}=a=0,\quad\operatorname{Tr}A=0,\quad\operatorname{Tr}(J|_{ \mathfrak{k}_{2}}A)=0.\]
Proof.: In [16, Lemma 5.10] the following expression for the canonical \(1\)-form \(\psi\) is computed:
\[\psi(e_{1}) =\psi(e_{2})=0,\quad\psi(\mathfrak{k}_{2})\equiv 0,\] \[\psi(e_{2n-1}) =(-a+a_{2}-a_{1})-(a+a_{1}+a_{2}+\operatorname{Tr}A)=-2(a+a_{1})- \operatorname{Tr}A,\] \[\psi(e_{2n}) =\operatorname{Tr}(J|_{\mathfrak{k}_{2}}A).\]
Then \(\psi\equiv 0\) if and only if \(\operatorname{Tr}A=-2(a+a_{1})\) and \(\operatorname{Tr}(J|_{\mathfrak{k}_{2}}A)=0\), so the statement follows from Theorem 3.1. If \(G\) admits lattices, then by a result of [17], \(\mathfrak{g}\) is _strongly unimodular_, and by [16, Remark 2.6] the almost nilpotent Lie algebra \(\mathfrak{g}\) is strongly unimodular if and only if \(a_{1}=a+a_{2}+\operatorname{Tr}A=0\). This, together with the fact that \(\operatorname{Tr}A=-2(a+a_{1})\) and \(0=a_{2}(a+a_{2})\), implies that \(a_{1}=a_{2}=a=\operatorname{Tr}A=0\).
**Examples 3.15**.: (i) For \(n\geq 2\), let \(\mathfrak{g}_{n}=\mathbb{R}e_{4n}\ltimes_{B}\mathfrak{h}_{4n-1}\) where
\[B=\left[\begin{array}{c|c|c|c|c}0&0&0&0&v_{1}\\ \hline 0&0&0&0&v_{2}\\ \hline 0&0&\operatorname{I}_{2n-2}&0&0\\ \hline 0&0&0&-\operatorname{I}_{2n-2}&0\\ \hline 0&0&0&0&0\end{array}\right],\;v_{1},v_{2}\neq 0,\quad\text{and}\quad\eta=e^{3,2n+1}+\cdots+e^{2n,4n-2}.\]
Equip \(\mathfrak{g}_{n}\) with the metric \(g\) satisfying that \(\{e_{j}\}_{j=1}^{4n}\) is an orthonormal basis, and the complex structure \(J\) given by \(Je_{2k-1}=e_{2k}\), \(1\leq k\leq 2n\). Then, according to Theorem 3.13, \((J,g)\) is a Hermitian structure on the unimodular almost nilpotent Lie algebra \(\mathfrak{g}_{n}\). Moreover, \((\mathfrak{g}_{n},J)\) satisfies the conditions in Proposition 3.14 so \((\mathfrak{g}_{n},J)\) admits a nonzero closed \((2n,0)\)-form. The associated simply connected Lie group \(G_{n}\) admits lattices. Indeed, for \(m\in\mathbb{N},m\geq 3\), let \(\alpha_{m}=\frac{m+\sqrt{m^{2}-4}}{2}\) and \(t_{m}=\log\alpha_{m}\). Thus, the matrix
\[\exp(t_{m}B)=\left[\begin{array}{c|c|c|c|c}1&0&0&0&v_{1}t_{m}\\ \hline 0&1&0&0&v_{2}t_{m}\\ \hline 0&0&\alpha_{m}\operatorname{I}_{2n-2}&0&0\\ \hline 0&0&0&\alpha_{m}^{-1}\operatorname{I}_{2n-2}&0\\ \hline 0&0&0&0&1\end{array}\right]\]
is conjugate via
\[P_{m}=\left[\begin{array}{c|c|c|c|c}v_{1}t_{m}&0&0&0&0\\ \hline v_{2}t_{m}&0&0&0&-\frac{1}{v_{1}t_{m}}\\ \hline 0&0&\operatorname{I}_{2n-2}&\alpha_{m}\operatorname{I}_{2n-2}&0\\ \hline 0&0&\frac{1}{\alpha_{m}^{-1}-\alpha_{m}}\operatorname{I}_{2n-2}&\frac{ \alpha_{m}^{-1}}{\alpha_{m}^{-1}-\alpha_{m}}\operatorname{I}_{2n-2}&0\\ \hline 0&1&0&0&0\end{array}\right]\]
to an integer unimodular matrix. Setting \(f_{j}=P_{m}e_{j}\) for \(1\leq j\leq 4n-1\), we obtain that \([f_{j},f_{k}]=[e_{j},e_{k}]\), so \(\{f_{j}\}_{j=1}^{4n-1}\) is a rational basis of \(\mathfrak{h}_{4n-1}\) in which \(\exp(t_{m}B)\) is written as an integer unimodular matrix. Therefore, by Theorem 2.4 the Lie group \(G_{n}\) admits a lattice \(\Gamma_{n}^{m}\). The associated complex solvmanifolds \((\Gamma_{n}^{m}\backslash G_{n},J)\) have trivial canonical bundle.
(ii) For \(n\geq 2\) let \(\mathfrak{g}(a_{1},\ldots,a_{n})=\mathbb{R}e_{2n+2}\ltimes_{B}\mathfrak{h}_{2n+1}\) where
\[B=\left[\begin{array}{c|c|c|c}0&0&0&v_{1}\\ \hline 0&0&0&v_{2}\\ \hline 0&0&A&0\\ \hline 0&0&0&0\end{array}\right],\;v_{1},v_{2}\neq 0,\;A=\left[\begin{array}{cc}0&-a_{1} \\ a_{1}&0\end{array}\right]\oplus\cdots\oplus\left[\begin{array}{cc}0&-a_{n}\\ a_{n}&0\end{array}\right],\quad\sum_{i=1}^{n}a_{i}=0.\]
and \(\eta=e^{34}+\cdots+e^{2n-1,2n}\). According to Theorem 3.13 defining \((J,g)\) as in Example (i) we obtain a Hermitian structure on \(\mathfrak{g}=\mathfrak{g}(a_{1},\ldots,a_{n})\). Furthermore, since \(\sum_{i=1}^{n}a_{i}=0\), \((\mathfrak{g},J)\) has
a nonzero closed \((n+1,0)\)-form, due to Proposition 3.14. The associated simply connected Lie group \(G=G(a_{1},\dots,a_{n})\) admits lattices. Indeed, choosing \(a_{1},\dots,a_{n}\in\{2\pi,\pi,\frac{\pi}{2}\}+2\pi\mathbb{Z}\) such that \(a_{1}+\dots+a_{n}=0\) and taking
\[P=\left[\begin{array}{c|c|c|c}v_{1}&0&0&0\\ \hline v_{2}&0&0&-\frac{1}{v_{1}}\\ \hline 0&0&\text{I}_{2n-2}&0\\ \hline 0&1&0&0\end{array}\right]\]
we obtain a rational basis \(\{f_{j}\}_{j=1}^{2n+1}\) of \(\mathfrak{h}_{2n+1}\), where \(f_{j}=Pe_{j}\), in which \(\exp B\) is a unimodular integer matrix. Hence, by Theorem 2.4 the simply connected Lie group \(G\) admits a lattice \(\Gamma=\Gamma(a_{1},\dots,a_{n})\). All the corresponding complex solvmanifolds \((\Gamma\backslash G,J)\) have trivial canonical bundle.
### Invariantly torsion canonical bundle
Here we tackle the case when some power of the canonical bundle of a compact complex quotient \((\Gamma\backslash G,J)\) is trivialized by an invariant holomorphic section. We show that in this case the canonical bundle itself is invariantly trivial.
**Proposition 3.16**.: _Let \((G,J)\) be a \(2n\)-dimensional Lie group equipped with a left invariant complex structure. If \(K_{G}^{\otimes k}\) admits a nonzero invariant holomorphic section for some \(k\in\mathbb{N}\) then \(K_{G}\) admits a nonzero invariant holomorphic section. That is, \((G,J)\) has trivial canonical bundle. The same happens for any quotient \(\Gamma\backslash G\) where \(\Gamma\) is a uniform lattice of \(G\)._
Proof.: We can work at the Lie algebra level since we are dealing with invariant objects. Let \(\sigma\) be a generator of \(\bigwedge^{n,0}\mathfrak{g}^{*}\), where \(\mathfrak{g}=\text{Lie}(G)\). Then \(\sigma^{\otimes k}:=\sigma\otimes\dots\otimes\sigma\) (\(k\) times) is a generator of \((\bigwedge^{n,0}\mathfrak{g}^{*})^{\otimes k}\); since this space is \(1\)-dimensional, \(\sigma^{\otimes k}\) is a scalar multiple of the given invariant holomorphic section, so we may assume it is holomorphic. Recall from Remark 3.3 that \(d\sigma=\beta\wedge\sigma\) for some \((0,1)\)-form \(\beta\), which in terms of the extended Dolbeault operator \(\bar{\partial}\) from §2 can be expressed as \(\bar{\partial}\sigma=\beta\otimes\sigma\). Next we compute
\[0=\bar{\partial}\sigma^{\otimes k}=\sum_{j=1}^{k}\sigma\otimes\dots\otimes \underbrace{\bar{\partial}\sigma}_{\text{$j$-th place}}\otimes\dots\otimes \sigma=\sum_{j=1}^{k}\beta\otimes\sigma^{\otimes k}=k\beta\otimes\sigma^{ \otimes k}.\]
Therefore, \(\beta=0\) and this implies \(\bar{\partial}\sigma=0\). Hence, \(\sigma\) is holomorphic and the proof follows.
## 4. Triviality of the canonical bundle of solvable Lie groups with left invariant complex structures
The main goal in this section is to show that any simply connected solvable Lie group equipped with a left invariant complex structure has trivial canonical bundle. In general the trivializing holomorphic section will not be left invariant.
Any nowhere vanishing section of the canonical bundle of a \(2n\)-dimensional Lie group \(G\) equipped with a left invariant complex structure \(J\) can be written as \(\tau=f\sigma\), where \(\sigma\) is a nonzero left invariant \((n,0)\)-form and \(f\) is a smooth function \(f:G\to\mathbb{C}^{\times}=\mathbb{C}\setminus\{0\}\). When this \(f\) exists we cannot expect uniqueness in general (in the non-compact setting) as the following result shows.
**Lemma 4.1**.: _Let \(G\) be a \(2n\)-dimensional Lie group equipped with a left invariant complex structure \(J\), and let \(\sigma\) denote a nonzero left invariant \((n,0)\)-form on \(G\). Assume that \(\tau_{1}:=f_{1}\sigma\) is closed, for some smooth function \(f_{1}:G\to\mathbb{C}^{\times}\). If \(f_{2}:G\to\mathbb{C}^{\times}\) is another smooth function on \(G\) then \(\tau_{2}:=f_{2}\sigma\) is closed if and only if \(\frac{f_{2}}{f_{1}}\) is a holomorphic function on \(G\)._
_In particular, if \(G\) is compact then \(f_{2}=cf_{1}\) for some \(c\in\mathbb{C}^{\times}\)._
Proof.: Assume first that \(H:=\frac{f_{2}}{f_{1}}\) is holomorphic. Then:
\[\overline{\partial}\tau_{2} =\overline{\partial}((Hf_{1})\sigma)\] \[=\overline{\partial}(Hf_{1})\wedge\sigma+(Hf_{1})\,\overline{ \partial}\sigma\] \[=H(\overline{\partial}f_{1})\wedge\sigma+(Hf_{1})\,\overline{ \partial}\sigma\] \[=H(\overline{\partial}f_{1}\wedge\sigma+f_{1}\,\overline{ \partial}\sigma)\] \[=H\,\overline{\partial}(f_{1}\sigma)\] \[=0.\]
Therefore \(\tau_{2}\) is holomorphic and hence closed.
Conversely, assume now that \(\tau_{2}\) is closed. From \(d\tau_{1}=0\) and \(d\tau_{2}=0\) we obtain
\[df_{1}\wedge\sigma+f_{1}\,d\sigma=0,\qquad df_{2}\wedge\sigma+f_{2}\,d\sigma=0.\]
From these equations we obtain readily that
\[d\left(\frac{f_{2}}{f_{1}}\right)\wedge\sigma=0. \tag{6}\]
Let us consider a basis \(\{\gamma_{1},\ldots,\gamma_{n}\}\) of left invariant \((1,0)\)-forms on \(G\), hence we may assume \(\sigma=\gamma_{1}\wedge\cdots\wedge\gamma_{n}\). If we write
\[d\left(\frac{f_{2}}{f_{1}}\right)=\sum_{j=1}^{n}(a_{j}\gamma_{j}+b_{j} \overline{\gamma}_{j})\]
for some smooth functions \(a_{j},b_{j}\colon G\to\mathbb{C}\), then (6) becomes
\[0=\sum_{j=1}^{n}(a_{j}\gamma_{j}+b_{j}\overline{\gamma}_{j})\wedge\sigma=\sum _{j=1}^{n}b_{j}\overline{\gamma}_{j}\wedge\sigma\]
which implies \(b_{j}=0\) for \(j=1,\ldots,n\) since \(\{\overline{\gamma}_{j}\wedge\sigma\}_{j=1}^{n}\) is a linearly independent set. This means that \(d(\frac{f_{2}}{f_{1}})\) is a \((1,0)\)-form, and this is equivalent to \(\overline{\partial}(\frac{f_{2}}{f_{1}})=0\), that is, \(\frac{f_{2}}{f_{1}}\) is holomorphic.
**Corollary 4.2**.: _Let \((G,J)\) be a \(2n\)-dimensional Lie group equipped with a left invariant complex structure, and assume that \(G\) admits a uniform lattice \(\Gamma\). If \(\sigma\) is a nonzero invariant \((n,0)\)-form and \(f:\Gamma\backslash G\to\mathbb{C}^{\times}\) is a smooth function such that \(\tau:=f\sigma\) is closed then \(f\) is unique up to a nonzero constant. In particular, nowhere vanishing closed \((n,0)\)-forms \(\tau\) on \((\Gamma\backslash G,J)\) are either all invariant or all non-invariant._
Now we proceed to prove the main theorem of the section. We begin with a series of preliminary results. Recall that in a solvable Lie algebra \(\mathfrak{g}\) its nilradical \(\mathfrak{n}(\mathfrak{g})\) is given by \(\mathfrak{n}(\mathfrak{g})=\{x\in\mathfrak{g}\mid\operatorname{ad}x\,\text{ is nilpotent}\}\).
**Lemma 4.3**.: _Let \(\mathfrak{g}\) be a solvable Lie algebra equipped with a complex structure \(J\), and denote \(\mathfrak{n}(\mathfrak{g})\) its nilradical. If \(\mathfrak{h}=\operatorname{Ker}\psi\), where \(\psi\) is the canonical 1-form on \((\mathfrak{g},J)\) then_
\[\mathfrak{n}(\mathfrak{g})\cap J\mathfrak{n}(\mathfrak{g})\subseteq\mathfrak{ h}\cap J\mathfrak{h}.\]
Proof.: Let \(x\in\mathfrak{n}(\mathfrak{g})\cap J\mathfrak{n}(\mathfrak{g})\). Since \(Jx\in\mathfrak{n}(\mathfrak{g})\) then \(\operatorname{ad}(Jx)\) is nilpotent and thus \(\operatorname{Tr}\operatorname{ad}(Jx)=0\). As a consequence we only need to prove that \(\operatorname{Tr}(J\operatorname{ad}x)=0\). It follows that
\[x-iJx\in\mathfrak{n}(\mathfrak{g})\oplus i\mathfrak{n}(\mathfrak{g})= \mathfrak{n}(\mathfrak{g})_{\mathbb{C}}=\mathfrak{n}(\mathfrak{g}_{\mathbb{C}}),\]
so that \(\operatorname{ad}(x-iJx)\) is a nilpotent endomorphism of \(\mathfrak{g}_{\mathbb{C}}\). We can write
\[\operatorname{ad}(x-iJx)=\left[\begin{array}{c|c}A_{x}&*\\ \hline 0&B_{x}\end{array}\right],\]
in a certain basis of \(\mathfrak{g}_{\mathbb{C}}\) adapted to the decomposition \(\mathfrak{g}_{\mathbb{C}}=\mathfrak{g}^{1,0}\oplus\mathfrak{g}^{0,1}\). Since this operator is nilpotent, we have that both matrices \(A_{x}\) and \(B_{x}\) are nilpotent, so that \(\operatorname{Tr}A_{x}=\operatorname{Tr}B_{x}=0\). Now we compute
\[J^{\mathbb{C}}\operatorname{ad}(x-iJx)=\left[\begin{array}{c|c}iA_{x}&*\\ \hline 0&-iB_{x}\end{array}\right].\]
Therefore,
\[0=\operatorname{Tr}(J^{\mathbb{C}}\operatorname{ad}(x-iJx))=\operatorname{Tr }(J\operatorname{ad}(x))-i\operatorname{Tr}(J\operatorname{ad}(Jx)),\]
so that
\[\operatorname{Tr}(J\operatorname{ad}(x))=\operatorname{Tr}(J\operatorname{ad }(Jx))=0,\]
that is, \(x\in\mathfrak{h}\cap J\mathfrak{h}\).
The following technical lemma will provide a particular basis of \((1,0)\)-forms which will be useful in the proof of the main result of this section.
**Lemma 4.4**.: _Let \(\mathfrak{g}\) be a \(2n\)-dimensional solvable Lie algebra equipped with a complex structure \(J\). Then there exists a basis \(\{\gamma_{1},\ldots,\gamma_{n}\}\) of \((1,0)\)-forms, with \(\gamma_{k}=u^{k}+iv^{k}\), and an index \(1\leq s\leq n\) such that:_
1. \(u^{j}\) _is closed for_ \(1\leq j\leq s\)_, and_
2. \(u_{j},v_{j}\in[\mathfrak{g},\mathfrak{g}]\cap J[\mathfrak{g},\mathfrak{g}]\) _for_ \(j>s\)_,_
_where \(\{u_{1},v_{1},\ldots,u_{n},v_{n}\}\) denotes the dual basis of \(\{u^{1},v^{1},\ldots,u^{n},v^{n}\}\)._
Proof.: Consider the commutator ideal \(\mathfrak{g}^{\prime}=[\mathfrak{g},\mathfrak{g}]\) and let \(\mathfrak{u}\) be a complementary subspace to \(\mathfrak{g}^{\prime}\cap J\mathfrak{g}^{\prime}\) in \(\mathfrak{g}^{\prime}\), that is
\[\mathfrak{g}^{\prime}=(\mathfrak{g}^{\prime}\cap J\mathfrak{g}^{\prime}) \oplus\mathfrak{u}.\]
Moreover, we have that \(\mathfrak{g}^{\prime}\cap J\mathfrak{u}=\{0\}\). Indeed, if \(v\in\mathfrak{g}^{\prime}\cap J\mathfrak{u}\), this implies that \(v\in\mathfrak{g}^{\prime}\cap J\mathfrak{g}^{\prime}\) since \(\mathfrak{u}\subset\mathfrak{g}^{\prime}\). Hence, \(Jv\in\mathfrak{u}\cap(\mathfrak{g}^{\prime}\cap J\mathfrak{g}^{\prime})\), which implies \(Jv=0\) and thus \(v=0\).
Therefore we can decompose \(\mathfrak{g}\) as
\[\mathfrak{g}=(\mathfrak{g}^{\prime}\cap J\mathfrak{g}^{\prime})\oplus \mathfrak{u}\oplus J\mathfrak{u}\oplus\mathfrak{v},\]
where \(\mathfrak{v}\) is a complementary subspace to \(\mathfrak{g}^{\prime}\oplus J\mathfrak{u}\) in \(\mathfrak{g}\), which can be chosen \(J\)-invariant. As \(\mathfrak{g}\) is solvable, \(\mathfrak{g}^{\prime}\) is a proper subspace of \(\mathfrak{g}\), so \(J\mathfrak{u}\oplus\mathfrak{v}\neq\{0\}\). The fact that the subspaces \(\mathfrak{v}\), \(\mathfrak{u}\oplus J\mathfrak{u}\) and \(\mathfrak{g}^{\prime}\cap J\mathfrak{g}^{\prime}\) are \(J\)-invariant allows us to take bases \(\{x_{1},\ldots,x_{r},\tilde{x}_{1},\ldots,\tilde{x}_{r}\}\) of \(\mathfrak{v}\), \(\{y_{1},\ldots,y_{m},\tilde{y}_{1},\ldots,\tilde{y}_{m}\}\) of \(\mathfrak{u}\oplus J\mathfrak{u}\) (with \(y_{k}\in J\mathfrak{u},\tilde{y}_{k}\in\mathfrak{u}\)) and \(\{z_{1},\ldots,z_{\ell},\tilde{z}_{1},\ldots,\tilde{z}_{\ell}\}\) of \(\mathfrak{g}^{\prime}\cap J\mathfrak{g}^{\prime}\) such that \(Jx_{k}=\tilde{x}_{k}\), \(Jy_{k}=\tilde{y}_{k}\), \(Jz_{k}=\tilde{z}_{k}\) and \(r+m+\ell=n\). Then, we can take the ordered basis of \((1,0)\)-forms
\[\{\gamma_{1},\ldots,\gamma_{s},\ldots,\gamma_{n}\}=\{x^{1}+i\tilde{x}^{1}, \ldots,x^{r}+i\tilde{x}^{r},y^{1}+i\tilde{y}^{1},\ldots,y^{m}+i\tilde{y}^{m}, z^{1}+i\tilde{z}^{1},\ldots,z^{\ell}+i\tilde{z}^{\ell}\},\]
with \(s:=r+m\leq n\), where \(\{x^{1},\tilde{x}^{1},\ldots,y^{1},\tilde{y}^{1},\ldots,z^{1},\tilde{z}^{1},\ldots\}\) denotes the basis of \(\mathfrak{g}^{*}\) dual to the basis \(\{x_{1},\tilde{x}_{1},\ldots,y_{1},\tilde{y}_{1},\ldots,z_{1},\tilde{z}_{1},\ldots\}\). Let us rename \(u^{j}=\operatorname{Re}\gamma_{j}\) and \(v^{j}=\operatorname{Im}\gamma_{j}\). For \(1\leq j\leq s\) we have that \(u^{j}\) belongs to the annihilator of \(\mathfrak{g}^{\prime}\) so \(u^{j}\) is closed, and for \(j>s\) we have that \(u_{j},v_{j}\in\mathfrak{g}^{\prime}\cap J\mathfrak{g}^{\prime}\).
**Remark 4.5**.: It follows from the proof of Lemma 4.4 that \(s=n\) if and only if \(\mathfrak{g}=\mathfrak{g}^{\prime}\oplus J\mathfrak{g}^{\prime}\). In this case the complex structure \(J\) is abelian. Indeed, the more general condition \(\mathfrak{g}^{\prime}\cap J\mathfrak{g}^{\prime}=\{0\}\) implies that \(J\) is abelian, which can be easily verified from \(N_{J}=0\).
**Theorem 4.6**.: _Any \(2n\)-dimensional simply connected solvable Lie group \(G\) equipped with a left invariant complex structure \(J\) admits a nonzero closed \((n,0)\)-form \(\tau\). In particular, the canonical bundle of \((G,J)\) is trivial._
Proof.: Let \(\mathfrak{g}\) be the Lie algebra of \(G\) and take the basis \(\{\gamma_{1},\ldots,\gamma_{n}\}\) with \(\gamma_{k}=u^{k}+iv^{k}\), as in Lemma 4.4. Consider now the \((n,0)\)-form \(\sigma\) given by \(\sigma=\gamma_{1}\wedge\cdots\wedge\gamma_{n}\). If \(d\sigma=0\) we may simply choose \(\tau=\sigma\). On the other hand, if \(d\sigma\neq 0\) it follows from Remark 3.3 that
\[d\sigma=\frac{1}{4}\sum_{j=1}^{n}(-\psi(v_{j})+i\psi(u_{j}))\,\bar{\gamma_{j}}\wedge\sigma. \tag{7}\]
Let us call \(C_{j}=-\psi(v_{j})+i\psi(u_{j})\). We show first that \(C_{j}=0\) when \(j>s\). Indeed, in this case \(u_{j},v_{j}\in\mathfrak{g}^{\prime}\cap J\mathfrak{g}^{\prime}\). As \(\mathfrak{g}^{\prime}\cap J\mathfrak{g}^{\prime}\subset\mathfrak{n}(\mathfrak{ g})\cap J\mathfrak{n}(\mathfrak{g})\) since \(\mathfrak{g}\) is solvable, it follows from Lemma 4.3 that \(\psi(u_{j})=\psi(v_{j})=0\) so \(C_{j}=0\) for \(j>s\). Therefore we can write
\[d\sigma=\frac{1}{4}\sum_{j=1}^{s}C_{j}\bar{\gamma_{j}}\wedge\sigma=\frac{1}{4 }\sum_{j=1}^{s}(C_{j}\bar{\gamma_{j}}\wedge\sigma+C_{j}\underbrace{\gamma_{j} \wedge\sigma}_{=0})=\frac{1}{2}\sum_{j=1}^{s}C_{j}u^{j}\wedge\sigma.\]
Hence, the form \(\alpha=\frac{1}{2}\sum_{j=1}^{s}C_{j}u^{j}\) satisfies \(d\sigma=\alpha\wedge\sigma\) and \(d\alpha=0\) by the choice of the basis \(\{\gamma_{1},\ldots,\gamma_{n}\}\). Since \(G\) is simply connected the left invariant \(1\)-form \(\alpha\) on \(G\) is exact so that there exists a smooth function \(f:G\to\mathbb{C}\) satisfying \(\alpha=df\). Finally, we consider the \((n,0)\)-form \(\tau:=\mathrm{e}^{-f}\sigma\) and we compute
\[d\tau=\mathrm{e}^{-f}(-\alpha\wedge\sigma+d\sigma)=0,\]
which says that \(\tau\) is a nowhere vanishing closed \((n,0)\)-form on \(G\).
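To illustrate the proof in the lowest dimension (a minimal example of our own, not taken from the source), let \(\mathfrak{g}=\operatorname{aff}(\mathbb{R})\) with \([e_{1},e_{2}]=e_{2}\) and \(Je_{1}=e_{2}\). Then \(\sigma=\gamma_{1}=e^{1}+ie^{2}\) satisfies \(d\sigma=-ie^{12}\neq 0\), while \(\psi(e_{1})=0\) and \(\psi(e_{2})=2\), so \(C_{1}=-2\) and \(\alpha=\frac{1}{2}C_{1}u^{1}=-e^{1}\); indeed \(\alpha\wedge\sigma=-e^{1}\wedge(e^{1}+ie^{2})=-ie^{12}=d\sigma\). Realizing \(G\) as \(\mathbb{R}^{2}\) with product \((t,s)\cdot(t^{\prime},s^{\prime})=(t+t^{\prime},s+\mathrm{e}^{t}s^{\prime})\), one has \(e^{1}=dt\) and \(e^{2}=\mathrm{e}^{-t}ds\), hence \(f=-t\) and

\[\tau=\mathrm{e}^{-f}\sigma=\mathrm{e}^{t}\,dt+i\,ds=d(\mathrm{e}^{t}+is)\]

is a nowhere vanishing closed \((1,0)\)-form on \(G\), even though no left invariant one exists by Theorem 3.1.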
**Remark 4.7**.: The closed \((n,0)\)-form \(\tau\) from Theorem 4.6 can be written as \(\tau=F\sigma\), where \(F:G\to\mathbb{C}^{\times}\) is a Lie group homomorphism. Indeed, with the notation in the proof of Theorem 4.6, replacing \(f\) by \(f-f(1_{G})\) we still have that \(\alpha=df\) is left invariant and \(f(1_{G})=0\), where \(1_{G}\) denotes the identity element. Since \(\alpha\) is left invariant, \(f(gh)-f(g)=f(h)-f(1_{G})=f(h)\) for all \(g,h\in G\), so \(f:G\to\mathbb{C}\) is an additive homomorphism. Hence, \(F:=\mathrm{e}^{-f}:G\to\mathbb{C}^{\times}\) is a multiplicative homomorphism.
**Remark 4.8**.: There are Lie groups with a left invariant complex structure which do not have trivial canonical bundle. For instance, the \(4\)-dimensional compact Lie group \(S^{1}\times\mathrm{SU}(2)\) carries a left invariant complex structure such that it is biholomorphic to a Hopf manifold \(S^{1}\times S^{3}\), and it is well known that this compact complex surface has non-trivial canonical bundle.
**Remark 4.9**.: It was conjectured by Hasegawa in [25] that all simply connected unimodular solvable Lie groups with left invariant complex structure are Stein manifolds (that is, they are biholomorphic to a closed complex submanifold of some \(\mathbb{C}^{N}\)). If this conjecture were true, then the canonical bundle of any of these pairs \((G,J)\) would be holomorphically trivial according to the Oka-Grauert principle ([22]), since the canonical bundle of any such Lie group is always smoothly trivial via a left invariant section. Thus, Theorem 4.6 provides evidence in the direction of this conjecture.
## 5. An algebraic obstruction for the triviality of the canonical bundle
In this section we will consider compact quotients of a Lie group by uniform lattices, equipped with an invariant complex structure. In the main result of this section we provide an algebraic obstruction for the canonical bundle to be holomorphically trivial (or more generally, holomorphically torsion), in terms of the canonical \(1\)-form \(\psi\). Namely, \(\psi\) has to vanish on the commutator ideal of the associated Lie algebra. We will do this by exploiting the relation of \(\psi\) with the Chern-Ricci form of any invariant Hermitian metric on the quotient.
Let us recall the definition of the Chern-Ricci form. Let \((M,J,g)\) be a \(2n\)-dimensional Hermitian manifold, and let \(\omega=g(J\cdot,\cdot)\) be the fundamental \(2\)-form associated to \((M,J,g)\). The _Chern connection_ is the unique connection \(\nabla^{C}\) on \(M\) which is Hermitian (i.e. \(\nabla^{C}J=0\),
\(\nabla^{C}g=0\)) and the \((1,1)\)-component \(T^{1,1}\) of its torsion tensor vanishes. In terms of the Levi-Civita connection \(\nabla\) of \(g\), the Chern connection is expressed as
\[g(\nabla^{C}_{X}Y,Z)=g(\nabla_{X}Y,Z)-\frac{1}{2}d\omega(JX,Y,Z),\quad X,Y,Z\in \mathfrak{X}(M).\]
The _Chern-Ricci form_\(\rho=\rho(J,g)\) is defined by
\[\rho(X,Y)=-\frac{1}{2}\operatorname{Tr}(J\circ R^{C}(X,Y))=\sum_{i=1}^{n}g(R^{ C}(X,Y)e_{i},Je_{i}),\]
where \(R^{C}(X,Y)=[\nabla^{C}_{X},\nabla^{C}_{Y}]-\nabla^{C}_{[X,Y]}\) is the curvature tensor associated to \(\nabla^{C}\) and \(\{e_{i},Je_{i}\}_{i=1}^{n}\) is a local orthonormal frame for \(g\). It is well known that \(\rho\) is a closed real \((1,1)\)-form on \(M\).
Consider now any left invariant almost Hermitian structure \((J,g)\) on a Lie group \(G\) with Lie algebra \(\mathfrak{g}\). In [46] it is proved that
\[\rho(x,y)=\frac{1}{2}(\operatorname{Tr}(J\operatorname{ad}[x,y])- \operatorname{Tr}\operatorname{ad}(J[x,y])),\quad x,y\in\mathfrak{g}. \tag{8}\]
Remarkably, this Chern-Ricci form does not depend on the Hermitian metric. We observe from (8) that \(2\rho=-d\psi\), thus implying that if \(\psi\) vanishes then \(\rho=0\), for any Hermitian metric \(g\). This provides a great source of examples of Chern-Ricci flat metrics; and in particular we obtain compact examples when the Lie group \(G\) admits uniform lattices. We summarize this in the following proposition.
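For instance (a toy computation of our own, with numpy), on \(\operatorname{aff}(\mathbb{R})\) with \([e_{1},e_{2}]=e_{2}\) and \(Je_{1}=e_{2}\), formula (8) gives \(\rho(e_{1},e_{2})=1\neq 0\), so no left invariant Hermitian metric with this complex structure is Chern-Ricci flat (this group is non-unimodular and admits no lattices; the example only illustrates the formula):

```python
import numpy as np

ad = [np.array([[0., 0.], [0., 1.]]),     # ad e1
      np.array([[0., 0.], [-1., 0.]])]    # ad e2
J = np.array([[0., -1.], [1., 0.]])
adv = lambda x: sum(xi * Ai for xi, Ai in zip(x, ad))
br = np.array([0., 1.])                   # [e1, e2] = e2
rho = 0.5 * (np.trace(J @ adv(br)) - np.trace(adv(J @ br)))
print(rho)   # 1.0, independent of the choice of Hermitian metric
```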
**Proposition 5.1**.: _Let \((G,J)\) be a \(2n\)-dimensional Lie group equipped with a left invariant complex structure and assume that \(\Gamma\) is a uniform lattice in \(G\). If there exists a nonzero left invariant holomorphic \((n,0)\)-form on \(G\) then for any left invariant Hermitian metric \(g\) on \(G\), the induced Hermitian structure \((J,g)\) on \(\Gamma\backslash G\) has vanishing Chern-Ricci form. In particular, the restricted Chern holonomy of \((J,g)\) on \(\Gamma\backslash G\) is contained in \(\operatorname{SU}(n)\)._
Proof.: We only have to justify the last statement, and this follows from [45, Proposition 1.1].
As a consequence, if \(\rho\neq 0\) (that is, \(\psi([\mathfrak{g},\mathfrak{g}])\neq 0\)) then there is no invariant trivializing section of the canonical bundle of \((\Gamma\backslash G,J)\). We show next that the condition \(\rho\neq 0\) is also sufficient to prove that there are no trivializing sections, invariant or not, of the canonical bundle, thus giving an algebraic obstruction for a complex solvmanifold to have trivial canonical bundle. In fact, we will prove a stronger result, namely that if \(\rho\neq 0\) then the canonical bundle of \((\Gamma\backslash G,J)\) is not holomorphically torsion.
Following ideas from [19, Proposition 5.1], we use Belgun's symmetrization process to prove the result. Let us describe briefly this symmetrization process ([9]):
Assume that \(G\) is a Lie group equipped with a left invariant complex structure, and \(\Gamma\) is a uniform lattice in \(G\). Let \(\nu\) denote the bi-invariant volume form on \(G\) given in [32, Lemma 6.2] and such that \(\int_{M}\nu=1\), where \(M:=\Gamma\backslash G\). Up to identifying left invariant forms on \(M\) with linear forms over \(\mathfrak{g}^{*}\) via left translations, consider the Belgun symmetrization map defined by:
\[\mu:\Omega^{*}(M)\to\bigwedge^{*}\mathfrak{g}^{*},\quad\mu(\alpha)(X_{1}, \dots,X_{k})=\int_{M}\alpha_{m}(X_{1}|_{m},\dots,X_{k}|_{m})\nu_{m},\]
for \(X_{1},\dots,X_{k}\in\mathfrak{X}(M)\). Then:
1. \(\mu(f)\in\mathbb{R}\) for any \(f\in C^{\infty}(\Gamma\backslash G,\mathbb{R})\),
2. \(\mu(\alpha)=\alpha\) if \(\alpha\in\bigwedge^{*}\mathfrak{g}^{*}\);
3. \(\mu(J\alpha)=J\mu(\alpha)\), where \(J\alpha(\cdot,\dots,\cdot)=\alpha(J^{-1}\cdot,\dots,J^{-1}\cdot)\);
4. \(\mu(d\alpha)=d(\mu(\alpha))\).
Extending this map \(\mathbb{C}\)-linearly to \(\mathbb{C}\)-valued differential forms on \(M\), we also have:
5. \(\mu(\partial\alpha)=\partial(\mu(\alpha))\) and \(\mu(\overline{\partial}\alpha)=\overline{\partial}(\mu(\alpha))\).
**Theorem 5.2**.: _Let \(G\) be a Lie group equipped with a left invariant complex structure \(J\), and let \(\Gamma\) denote a uniform lattice in \(G\). Let \(J\) denote also the induced complex structure on \(\Gamma\backslash G\). If the canonical bundle of \((\Gamma\backslash G,J)\) is trivial (or, more generally, holomorphically torsion) then \(\operatorname{Tr}(J\operatorname{ad}([x,y]))=0\) for all \(x,y\in\mathfrak{g}=\operatorname{Lie}(G)\), that is \(\psi([\mathfrak{g},\mathfrak{g}])=0\)._
Proof.: According to [45, Proposition 1.1] if \(M\) is a compact complex manifold and \(K_{M}\) is holomorphically torsion then given any Hermitian metric \(g\) on \(M\), the associated Chern-Ricci form \(\rho\) satisfies \(\rho=i\partial\bar{\partial}F\), for some \(F\in C^{\infty}(M,\mathbb{R})\).
Now, assume that the canonical bundle of \((\Gamma\backslash G,J)\) is holomorphically torsion and consider a Hermitian metric \(g\) on \(\Gamma\backslash G\) induced by a left invariant one on \(G\). Then its associated Chern-Ricci form \(\rho\) satisfies \(\rho=i\partial\bar{\partial}F\), for some \(F\in C^{\infty}(\Gamma\backslash G,\mathbb{R})\). We consider next the symmetrization \(\mu(\rho)\) of \(\rho\). As the symmetrization commutes with the Dolbeault operators \(\partial\) and \(\bar{\partial}\) we have that
\[\mu(\rho)=i\partial\bar{\partial}\mu(F)=0,\]
since \(\mu(F)\) is constant. As \(\rho\) is left invariant we obtain \(\rho=\mu(\rho)\) and therefore \(\rho=0\). Since at the Lie algebra level we have that \(2\rho(x,y)=\operatorname{Tr}(J\operatorname{ad}([x,y]))\) for \(x,y\in\mathfrak{g}\) (recall that \(\mathfrak{g}\) is unimodular, since \(G\) admits a lattice), the proof is complete.
**Remark 5.3**.: Proposition 1.1 in [45] predicts the existence of a Hermitian metric with \(\rho=0\) on any compact complex manifold with holomorphically torsion canonical bundle. It follows from the proof of Theorem 5.2 that in the case of a complex solvmanifold any _invariant_ Hermitian metric has \(\rho=0\). Using again [45, Proposition 1.1], we obtain that these invariant Hermitian metrics have restricted Chern holonomy contained in \(\operatorname{SU}(n)\), where \(2n\) is the real dimension of the solvmanifold.
**Remark 5.4**.: If \(\rho=0\) then the canonical bundle of the complex solvmanifold is not necessarily trivial. Indeed, consider the Lie group \(G\) from Example 1.1 equipped with the left invariant complex structure \(J\) given therein. It is easy to see that \(\psi(e_{0})=-2\) and \(\psi(e_{j})=0\) for \(1\leq j\leq 3\), so that \(\psi([\mathfrak{g},\mathfrak{g}])=0\). However, \(G\) admits a lattice \(\Gamma^{\prime}=\{(\pi k,m,n,\frac{p}{2})\mid k,m,n,p\in\mathbb{Z}\}\) such that \((\Gamma^{\prime}\backslash G,J)\) is a secondary Kodaira surface (see [24]) and hence has non-trivial canonical bundle. Note that \(\tau\otimes\tau\), where \(\tau=\operatorname{e}^{it}\sigma\), is a trivializing section of \(K_{\Gamma^{\prime}\backslash G}^{\otimes 2}\), and thus the canonical bundle is holomorphically torsion.
We exhibit next a \(6\)-dimensional example of this phenomenon.
**Example 5.5**.: For \(p\in\mathbb{R}\), let \(\mathfrak{g}_{p}=\mathbb{R}e_{6}\ltimes_{A_{p}}\mathbb{R}^{5}\), where the matrix \(A_{p}\) is given in the basis \(\{e_{1},\ldots,e_{5}\}\) of \(\mathbb{R}^{5}\) by:
\[A_{p}=\begin{bmatrix}-p&-1&&\\ 1&-p&&\\ &&p&2\\ &&-2&p\\ &&&0\end{bmatrix}.\]
Equip \(\mathfrak{g}_{p}\) with the complex structure \(Je_{1}=e_{2}\), \(Je_{3}=e_{4}\) and \(Je_{5}=e_{6}\). Then an easy calculation shows that \(\psi(e_{j})=0\) for \(1\leq j\leq 5\) and \(\psi(e_{6})=2\). It follows from Theorem 3.1 that this Lie algebra does not admit any nonzero closed \((3,0)\)-form.
For some values of \(p\in\mathbb{R}\), the associated simply connected Lie group \(G_{p}\) admits lattices, according to [13] (the Lie algebra \(\mathfrak{g}_{p}\) corresponds to the Lie group denoted by \(G_{5.17}^{p,-p,2}\times\mathbb{R}\) there). Moreover, it was shown, using techniques by Console and Fino in [12], that for certain values of \(p\) some lattices \(\Gamma\) in \(G_{p}\) satisfy \(b_{3}(\Gamma\backslash G_{p})=0\) (see [13, Table 7.1]); for instance, for any \(m\in\mathbb{N}\) take \(p=\frac{s_{m}}{\pi}\), where \(s_{m}=\log\left(\frac{m+\sqrt{m^{2}+4}}{2}\right)\). Then, \(\exp(\pi A_{p})=\operatorname{diag}(-\operatorname{e}^{-s_{m}},-\operatorname{e}^{-s_{m}},\operatorname{e}^{s_{m}},\operatorname{e}^{s_{m}},1)\) is conjugate to the integer unimodular matrix \(E_{m}=\begin{bmatrix}0&1\\ 1&m\end{bmatrix}^{\oplus 2}\oplus(1)\). According to Theorem
2.4 the Lie group \(G_{p}\) admits a lattice \(\Gamma_{m}=\pi\mathbb{Z}\ltimes P\mathbb{Z}^{5}\), where \(P^{-1}\exp(\pi A_{p})P=E_{m}\). Since \(b_{3}(\Gamma_{m}\backslash G_{p})=0\), it follows from Proposition 2.1 that the canonical bundle of \((\Gamma_{m}\backslash G_{p},J)\) is not holomorphically trivial. However, this complex solvmanifold has holomorphically torsion canonical bundle. Indeed, if \(\sigma\) is a nonzero left invariant \((3,0)\)-form on \(G_{p}\) then \(\tau\otimes\tau\), where \(\tau=\mathrm{e}^{it}\,\sigma\), induces a trivializing section of \(K^{\otimes 2}_{\Gamma_{m}\backslash G_{p}}\) since \(\mathrm{e}^{2it}=(\mathrm{e}^{it})^{2}\) is \(\pi\)-periodic.
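As a quick sanity check of this conjugacy, the following numerical sketch (assuming `numpy` and `scipy` are available; all variable names are ours) compares \(\exp(\pi A_{p})\) with \(E_{m}\) for a sample value of \(m\). Since both matrices are diagonalizable, agreement of the spectra already yields conjugacy over \(\mathbb{R}\):

```python
import numpy as np
from scipy.linalg import expm

m = 3
s_m = np.log((m + np.sqrt(m**2 + 4)) / 2)
p = s_m / np.pi

# A_p in the basis {e_1, ..., e_5}
A = np.zeros((5, 5))
A[0:2, 0:2] = [[-p, -1.0], [1.0, -p]]
A[2:4, 2:4] = [[p, 2.0], [-2.0, p]]

M = expm(np.pi * A)
print(np.round(M, 8))  # diag(-e^{-s_m}, -e^{-s_m}, e^{s_m}, e^{s_m}, 1)

# E_m has eigenvalues (m +- sqrt(m^2+4))/2 = e^{s_m}, -e^{-s_m} (twice each) and 1
E = np.zeros((5, 5))
E[0:2, 0:2] = [[0.0, 1.0], [1.0, m]]
E[2:4, 2:4] = [[0.0, 1.0], [1.0, m]]
E[4, 4] = 1.0
print(np.allclose(np.sort(np.linalg.eigvals(M).real),
                  np.sort(np.linalg.eigvals(E).real)))  # True
```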
An immediate consequence of Theorem 5.2 is the following result concerning complex solvmanifolds associated to the almost abelian and almost nilpotent Lie algebras from §3. We use the notation from that section.
**Corollary 5.6**.: _Let \((\Gamma\backslash G,J)\) be a complex solvmanifold with holomorphically torsion canonical bundle, and let \(\mathfrak{g}\) be the Lie algebra of \(G\)._
1. _If_ \(\mathfrak{g}\) _is almost abelian (as in Corollary_ 3.7_) then_ \(a=\mathrm{Tr}\,A=0\) _(see also_ [23, Lemma 7]_);_
2. _If_ \(\mathfrak{g}\) _is one of the almost nilpotent Lie algebras considered in Case (i) (as in Proposition_ 3.11_) then_ \(a=\mathrm{Tr}\,A=0\)_;_
3. _If_ \(\mathfrak{g}\) _is one of the almost nilpotent Lie algebras considered in Case (ii) (as in Proposition_ 3.14_) then_ \(a=a_{1}=a_{2}=\mathrm{Tr}\,A=0\)_._
Proof.: This follows immediately from the condition \(\psi([\mathfrak{g},\mathfrak{g}])=0\), using the computations for \(\psi\) done previously for each case.
**Example 5.7**.: There are examples of complex solvmanifolds whose associated canonical bundle is not holomorphically torsion (and in particular not holomorphically trivial). Indeed, the Oeljeklaus-Toma manifolds, introduced in [37], are compact non-Kahler complex manifolds whose canonical bundle is not holomorphically torsion. These manifolds were constructed from certain number fields, generalizing some Inoue surfaces, but later Kasuya showed in [27] that they can be considered as solvmanifolds equipped with an invariant complex structure. Using Kasuya's description and Theorem 5.2 it is easy to verify that the canonical 1-form \(\psi\) does not vanish on the commutator of the corresponding Lie algebra, recovering in this way the fact that their canonical bundle is not holomorphically torsion.
As another illustration of the obstruction from Theorem 5.2, we deal in the next result with the case of compact semisimple Lie groups. We recall Samelson's construction of a complex structure on a compact semisimple even-dimensional Lie algebra \(\mathfrak{g}\)[43].
Let \(\mathfrak{h}\) be a maximal abelian subalgebra of \(\mathfrak{g}\). Then we have the root space decomposition of \(\mathfrak{g}_{\mathbb{C}}\) with respect to \(\mathfrak{h}_{\mathbb{C}}\)
\[\mathfrak{g}_{\mathbb{C}}=\mathfrak{h}_{\mathbb{C}}\oplus\sum_{\alpha\in \Phi}\mathfrak{g}_{\alpha},\]
where \(\Phi\) is the finite subset of nonzero elements in \((\mathfrak{h}_{\mathbb{C}})^{*}\) called roots, and
\[\mathfrak{g}_{\alpha}=\{x\in\mathfrak{g}_{\mathbb{C}}\mid[h,x]=\alpha(h)x \quad\forall h\in\mathfrak{h}_{\mathbb{C}}\}\]
are the one-dimensional root subspaces. Since \(\mathfrak{h}\) is even-dimensional, one can choose a skew-symmetric endomorphism \(J_{0}\) of \(\mathfrak{h}\) with respect to the Killing form such that \(J_{0}^{2}=-\operatorname{I}_{\mathfrak{h}}\). Samelson defines a complex structure on \(\mathfrak{g}\) by considering a positive system \(\Phi^{+}\) of roots, which is a set \(\Phi^{+}\subset\Phi\) satisfying
\[\Phi^{+}\cap(-\Phi^{+})=\emptyset,\qquad\Phi^{+}\cup(-\Phi^{+})=\Phi,\qquad \alpha,\beta\in\Phi^{+},\ \alpha+\beta\in\Phi\Rightarrow\alpha+\beta\in\Phi^{+}.\]
Setting
\[\mathfrak{m}=\mathfrak{h}^{1,0}\oplus\sum_{\alpha\in\Phi^{+}}\mathfrak{g}_{ \alpha},\]
where \(\mathfrak{h}^{1,0}\) is the eigenspace of \(J_{0}^{\mathbb{C}}\) of eigenvalue \(i\), it follows that \(\mathfrak{m}\) is a complex Lie subalgebra of \(\mathfrak{g}_{\mathbb{C}}\) which induces a complex structure \(J\) on \(\mathfrak{g}\) such that \(\mathfrak{g}^{1,0}=\mathfrak{m}\), that is, \(\mathfrak{m}\) is the eigenspace of \(J^{\mathbb{C}}\) with eigenvalue \(i\). This complex structure is skew-symmetric with respect to the Killing form on \(\mathfrak{g}\).
Conversely, Pittie proved in [39] that any left invariant complex structure on \(G\) is obtained in this way.
In the next result we use Theorem 5.2 in order to show that the canonical bundle of a compact semisimple Lie group equipped with a left invariant complex structure is not holomorphically torsion.
**Proposition 5.8**.: _The canonical bundle of a \(2n\)-dimensional compact semisimple Lie group equipped with a left invariant complex structure is not holomorphically torsion._
Proof.: We use the notation from the paragraphs above: \(G\) is the compact Lie group, \(\mathfrak{g}\) its Lie algebra and \(J:\mathfrak{g}\to\mathfrak{g}\) is the complex structure obtained by Samelson's construction.
Since \([\mathfrak{g},\mathfrak{g}]=\mathfrak{g}\), according to Theorem 5.2 it is sufficient to show that \(\operatorname{Tr}(J\operatorname{ad}x)\neq 0\) for some \(x\in\mathfrak{g}\), or equivalently, \(\operatorname{Tr}(J^{\mathbb{C}}\operatorname{ad}x)\neq 0\) for some \(x\in\mathfrak{g}_{\mathbb{C}}\).
Recall that \(\mathfrak{g}^{1,0}=\mathfrak{h}^{1,0}\oplus\sum_{\alpha\in\Phi^{+}}\mathfrak{ g}_{\alpha}\) and \(\mathfrak{g}^{0,1}=\mathfrak{h}^{0,1}\oplus\sum_{\alpha\in\Phi^{+}}\mathfrak{ g}_{-\alpha}\). Let \(x_{\alpha}\) be a generator of \(\mathfrak{g}_{\alpha}\) for any \(\alpha\in\Phi\). If \(\{h_{1},\ldots,h_{r}\}\) is a basis of \(\mathfrak{h}^{1,0}\), then \(\mathcal{B}=\{h_{1},\ldots,h_{r}\}\cup\{x_{\alpha}\mid\alpha\in\Phi^{+}\}\) is a basis of \(\mathfrak{g}^{1,0}\) and \(\overline{\mathcal{B}}=\{\overline{h_{1}},\ldots,\overline{h_{r}}\}\cup\{x_{- \alpha}\mid\alpha\in\Phi^{+}\}\) is a basis of \(\mathfrak{g}^{0,1}\).
Consider now \(h\in\mathfrak{h}^{1,0}\subset\mathfrak{g}^{1,0}\). Then, with respect to the basis \(\mathcal{B}\cup\overline{\mathcal{B}}\) of \(\mathfrak{g}\), we have:
\[\operatorname{ad}h=\left[\begin{array}{c|c}A_{h}&*\\ \hline 0&B_{h}\end{array}\right]\qquad\text{and}\qquad J^{\mathbb{C}} \operatorname{ad}h=\left[\begin{array}{c|c}iA_{h}&*\\ \hline 0&-iB_{h}\end{array}\right].\]
More precisely, since \(\mathfrak{h}^{1,0}\) is an abelian subalgebra and \([h,x_{\alpha}]=\alpha(h)x_{\alpha}\), the matrices \(A_{h}\) and \(B_{h}\) are given by:
\[A_{h}=\left[\begin{array}{c|c}0_{r}&&&\\ \hline&\alpha_{1}(h)&&\\ &&\ddots&\\ &&&\alpha_{s}(h)\end{array}\right]\quad\text{and}\quad B_{h}=\left[\begin{array} []{c|c}0_{r}&&&\\ \hline&-\alpha_{1}(h)&&\\ &&\ddots&\\ &&&-\alpha_{s}(h)\end{array}\right],\]
where \(s=|\Phi^{+}|\). Hence,
\[\operatorname{Tr}(J^{\mathbb{C}}\operatorname{ad}h)=2i\sum_{j=1}^{s}\alpha_{j} (h)=2i\sum_{\alpha\in\Phi^{+}}\alpha(h).\]
It is known that \(\sum_{\alpha\in\Phi^{+}}\alpha\neq 0\). Indeed, there is \(\Pi\subset\Phi^{+}\), whose elements are known as simple roots, such that \(\Pi\) is a basis of \((\mathfrak{h}_{\mathbb{C}})^{*}\) and each \(\alpha\in\Phi^{+}\) is a linear combination of the simple roots with non-negative integer coefficients. Therefore, if \(\sum_{\alpha\in\Phi^{+}}\alpha=0\) then every \(\alpha\in\Phi^{+}\) would be zero, which is impossible.
As a consequence, we can choose \(h\in\mathfrak{h}^{1,0}\) such that \(\operatorname{Tr}(J^{\mathbb{C}}\operatorname{ad}h)\neq 0\).
**Remark 5.9**.: Assume that \(G\) is a non-compact semisimple Lie group equipped with a left invariant complex structure \(J\). It follows from [11] that \(G\) has a uniform lattice \(\Gamma\). Again, since \(G\) is semisimple (so that \(\mathfrak{g}=\operatorname{Lie}(G)\) satisfies \([\mathfrak{g},\mathfrak{g}]=\mathfrak{g}\)), it follows from Theorem 3.1 and Theorem 5.2 that the canonical bundle of the compact complex manifold \((\Gamma\backslash G,J)\) is either trivial via an invariant section (when \(\psi=0\)) or not holomorphically torsion (when \(\psi\neq 0\)), where \(\psi\) is the canonical 1-form on \((\mathfrak{g},J)\).
Some recent results concerning non-compact semisimple Lie groups are the following:
* In [19] it was proved that any non-compact real simple Lie group \(G\) of inner type and even dimension carries a left invariant complex structure \(J\). Moreover, if \(\Gamma\) is a lattice in \(G\) then \((\Gamma\backslash G,J)\) has non-trivial canonical bundle; in fact, the canonical bundle is not holomorphically torsion.
* In [38] it was proved that a \(6\)-dimensional unimodular non-solvable Lie algebra admits a complex structure with a nonzero closed \((3,0)\)-form if and only if it is isomorphic to \(\mathfrak{so}(3,1)\). It follows that \(\Gamma\backslash\operatorname{SO}(3,1)\) carries a complex structure with trivial canonical bundle (via an invariant section) for any lattice \(\Gamma\).
Generalizing this last result, we observe that if \(\mathfrak{g}\) is a semisimple complex Lie algebra then its "realification" \(\mathfrak{g}_{\mathbb{R}}\) admits a bi-invariant complex structure \(J\). If \(G_{\mathbb{R}}\) denotes the simply connected Lie group associated to \(\mathfrak{g}_{\mathbb{R}}\) then, as mentioned in Section 3, the pair \((G_{\mathbb{R}},J)\) admits a left invariant trivializing section of its canonical bundle. Therefore, any compact quotient \((\Gamma\backslash G_{\mathbb{R}},J)\) has trivial canonical bundle.
### Examples of complex solvmanifolds with non-invariantly trivial canonical bundle
In light of Theorem 5.2, in order to find examples of complex solvmanifolds \((\Gamma\backslash G,J)\) with (non-invariantly) trivial canonical bundle we need \(\psi\not\equiv 0\) but \(\psi([\mathfrak{g},\mathfrak{g}])=0\). We show next that in many cases the condition \(\psi([\mathfrak{g},\mathfrak{g}])=0\) allows us to obtain an explicit non-invariant trivializing section \(\tau\) of \((\Gamma\backslash G,J)\).
Let \((G,J)\) be a \(2n\)-dimensional unimodular solvable Lie group equipped with a left invariant complex structure and denote \(\mathfrak{h}=\operatorname{Ker}\psi\), where \(\psi:\mathfrak{g}\to\mathbb{R}\) is the canonical \(1\)-form. If \(\psi([\mathfrak{g},\mathfrak{g}])=0\) then \(\mathfrak{h}\) is a codimension-one ideal of \(\mathfrak{g}\). By using a Hermitian inner product on \(\mathfrak{g}\) we can take \(e_{1}\in\mathfrak{h}\cap(\mathfrak{h}\cap J\mathfrak{h})^{\perp}\), so that \(\mathfrak{h}=\mathbb{R}e_{1}\oplus(\mathfrak{h}\cap J\mathfrak{h})\). Set next \(e_{0}:=-Je_{1}\in\mathfrak{h}^{\perp}\), hence \(\mathfrak{g}=\mathbb{R}e_{0}\ltimes\mathfrak{h}\). Let \(\{u_{j},v_{j}\}_{j=1}^{n-1}\) be a basis of \(\mathfrak{h}\cap J\mathfrak{h}\) such that \(Ju_{j}=v_{j}\), \(1\leq j\leq n-1\).
Define the \((n,0)\)-form \(\sigma\) on \(\mathfrak{g}\) by \(\sigma=(e^{0}+ie^{1})\wedge\gamma_{1}\wedge\cdots\wedge\gamma_{n-1}\), where \(\gamma_{j}=u^{j}+iv^{j}\) and \(\{e^{0},e^{1},u^{i},v^{i}\}_{i=1}^{n-1}\) is the dual basis of \(\{e_{0},e_{1},u_{i},v_{i}\}_{i=1}^{n-1}\). In this basis, Remark 3.3 implies that
\[d\sigma =\frac{i}{4}\operatorname{Tr}(J\operatorname{ad}e_{0})\,(e^{0}- ie^{1})\wedge(e^{0}+ie^{1})\wedge\gamma_{1}\wedge\cdots\wedge\gamma_{n-1}\] \[=-\frac{1}{2}\operatorname{Tr}(J\operatorname{ad}e_{0})e^{01} \wedge\gamma_{1}\wedge\cdots\wedge\gamma_{n-1}.\]
Let \(G=\mathbb{R}\ltimes H\) be the associated simply connected Lie group, where \(H\) is the unique connected normal subgroup of \(G\) such that \(\operatorname{Lie}(H)=\mathfrak{h}\), and consider the coordinate \(t\) of \(\mathbb{R}\). By the definition of the product of \(G\) it follows that if we consider the dual basis \(\{e^{0},e^{1},u^{i},v^{i}\}_{i=1}^{n-1}\) as left invariant \(1\)-forms on \(G\) (which is diffeomorphic to \(\mathbb{R}^{2n}\)), then we have that \(dt=e^{0}\). Thus, we claim that the \((n,0)\)-form \(\tau=\operatorname{e}^{\mathrm{i}\lambda t}\sigma\) is closed, where \(\lambda=-\frac{1}{2}\operatorname{Tr}(J\operatorname{ad}e_{0})\). Indeed,
\[d\tau =\operatorname{e}^{\mathrm{i}\lambda t}\left(i\lambda\,e^{0} \wedge\sigma+d\sigma\right)\] \[=\operatorname{e}^{\mathrm{i}\lambda t}\left(\frac{1}{2} \operatorname{Tr}(J\operatorname{ad}e_{0})-\frac{1}{2}\operatorname{Tr}(J \operatorname{ad}e_{0})\right)e^{01}\wedge\gamma_{1}\wedge\cdots\wedge\gamma_{n-1}\] \[=0.\]
Therefore \(\tau\) is closed. In fact, this is a particular case of the construction given in Lemma 4.4, since the basis \(\{e_{0},e_{1},u_{i},v_{i}\}_{i=1}^{n-1}\) above satisfies the conditions in the lemma.
We can summarize this construction as follows:
**Proposition 5.10**.: _Let \((G,J)\) be a \(2n\)-dimensional simply connected solvable unimodular Lie group equipped with a complex structure. Let \(\mathfrak{h}\) denote the kernel of \(\psi:\mathfrak{g}\to\mathbb{R}\) and assume that \(\psi([\mathfrak{g},\mathfrak{g}])\equiv 0\), so that \(\mathfrak{g}=\mathbb{R}e_{0}\ltimes\mathfrak{h}\) and consequently \(G=\mathbb{R}\ltimes H\), where \(H\) is the unique connected normal subgroup of \(G\) such that \(\operatorname{Lie}(H)=\mathfrak{h}\). Then the \((n,0)\)-form \(\tau=\exp(-\frac{i}{2}\operatorname{Tr}(J\operatorname{ad}e_{0})t)\sigma\) is closed, where \(t\) is the coordinate of \(\mathbb{R}\) and \(\sigma\) is a left invariant \((n,0)\)-form._
In the next example we apply Proposition 5.10 in order to show the triviality of the canonical bundle associated to complex structures of splitting type (see [5] for a precise definition) on the \(6\)-dimensional complex parallelizable Nakamura solvmanifold.
**Example 5.11**.: In [5, Proposition 3.1] complex structures of splitting type on the \(6\)-dimensional complex parallelizable Nakamura manifold are classified. There are three non-equivalent cases:
1. \(J:d\omega^{1}=-\omega^{13},\quad d\omega^{2}=\omega^{23},\quad d\omega^{3}=0\),
2. \(J_{A}:\begin{cases}d\omega^{1}=A\omega^{13}-\omega^{1\bar{3}},\\ d\omega^{2}=-A\omega^{23}+\omega^{2\bar{3}},\quad A\in\mathbb{C},\ |A|\neq 1,\\ d\omega^{3}=0,\end{cases}\)
3. \(J_{B}:\begin{cases}d\omega^{1}=-\omega^{13}+B\omega^{1\bar{3}},\\ d\omega^{2}=-\bar{B}\omega^{23}+\omega^{2\bar{3}},\quad B\in\mathbb{C},\ |B|<1,\\ d\omega^{3}=0\end{cases}\)
where \(\{\omega^{1},\omega^{2},\omega^{3}\}\) is a basis of \((1,0)\)-forms.
According to [14, Proposition 3.7], the underlying Lie algebra admits a closed non-vanishing \((3,0)\)-form only for complex structures of type (i) and (ii). Therefore, any associated solvmanifold equipped with a complex structure \(J_{B}\) of (iii) does not have invariantly trivial canonical bundle. Nevertheless, we will see that we can get an associated solvmanifold with non-invariantly trivial canonical bundle also for the family (iii).
In a real basis of \(1\)-forms \(\{e^{1},\dots,e^{6}\}\) such that \(J_{B}e_{2i-1}=e_{2i}\) the equations (iii) can be written as
\[de^{1}=(r-1)e^{15}+se^{16}-se^{25}+(r+1)e^{26},\quad de^{2}=se^{15 }-(r+1)e^{16}+(r-1)e^{25}+se^{26},\] \[de^{3}=(1-r)e^{35}-se^{36}-se^{45}+(r+1)e^{46},\quad de^{4}=se^{35 }-(r+1)e^{36}-(r-1)e^{45}-se^{46},\] \[de^{5}=0,\quad de^{6}=0,\]
where \(B=r+is\). Therefore, the Lie brackets determined by \(\{de^{1},\dots,de^{6}\}\) are
\[[e_{1},e_{5}]=(1-r)e_{1}-se_{2},\quad[e_{1},e_{6}]=-se_{1}+(r+1)e_ {2},\] \[[e_{2},e_{5}]=se_{1}+(1-r)e_{2},\quad[e_{2},e_{6}]=-(r+1)e_{1}-se _{2},\] \[[e_{3},e_{5}]=(r-1)e_{3}-se_{4},\quad[e_{3},e_{6}]=se_{3}+(r+1)e_ {4},\] \[[e_{4},e_{5}]=se_{3}+(r-1)e_{4},\quad[e_{4},e_{6}]=-(r+1)e_{3}+se _{4}.\]
Let us denote by \((\mathfrak{g},J_{B})\) the Lie algebra determined by these Lie brackets together with the complex structure \(J_{B}e_{1}=e_{2}\), \(J_{B}e_{3}=e_{4}\) and \(J_{B}e_{5}=e_{6}\). On the other hand, recall the Lie algebra \(\mathfrak{s}\) from the paragraph before Example 3.8, determined by \((f^{16}-f^{25},f^{15}+f^{26},-f^{36}+f^{45},-f^{35}-f^{46},0,0)\). It is straightforward to verify that \(\varphi:(\mathfrak{g},J_{B})\to(\mathfrak{s},\tilde{J}_{B})\) given by
\[\varphi=\begin{bmatrix}0&0&0&1\\ 0&0&1&0\\ 0&-1&0&0\\ 1&0&0&0\end{bmatrix}\oplus\begin{bmatrix}-s&r+1\\ 1-r&-s\end{bmatrix}\]
is a biholomorphic isomorphism, where
\[\tilde{J}_{B}f_{1}=-f_{2},\quad\tilde{J}_{B}f_{2}=f_{1},\quad \tilde{J}_{B}f_{3}=f_{4},\quad\tilde{J}_{B}f_{4}=-f_{3},\] \[\tilde{J}_{B}f_{5}=\frac{1}{r^{2}+s^{2}-1}(-2s\,f_{5}+(r^{2}+s^{2 }-2r+1)\,f_{6}),\] \[\tilde{J}_{B}f_{6}=\frac{1}{r^{2}+s^{2}-1}(-(r^{2}+s^{2}+2r+1)\,f _{5}+2s\,f_{6}).\]
If \(\psi(x)=\operatorname{Tr}(\tilde{J}_{B}\operatorname{ad}x)\) is the canonical \(1\)-form on \((\mathfrak{s},\tilde{J}_{B})\), then \(\psi(f_{5})=4\) and \(\psi(f_{j})=0\) for \(j\neq 5\). Since \(f_{5}\notin[\mathfrak{s},\mathfrak{s}]\) we can apply Proposition 5.10 and get a closed nowhere vanishing
\((3,0)\)-form in the Lie group \(S=\mathbb{R}\ltimes H\) given by
\[\tau=\mathrm{e}^{-2it}(f^{1}-if^{2})\wedge(f^{3}+if^{4})\wedge\left(f^{5}-i\left( \frac{2s}{r^{2}+s^{2}-1}f^{5}+\frac{r^{2}+s^{2}+2r+1}{r^{2}+s^{2}-1}f^{6}\right) \right),\]
where \(t\) is the coordinate of \(\mathbb{R}\). On the other hand, according to Example 3.8, the Lie group \(S\) admits lattices given by \(\Gamma_{m}=(\pi\mathbb{Z}\oplus t_{m}\mathbb{Z})\ltimes P_{m}\mathbb{Z}^{4}\) for \(m\in\mathbb{N}\), \(m\geq 3\). The form \(\tau\) is invariant by the lattice since \(\exp(-2i(t+\pi k))=\exp(-2it)\) for all \(k\in\mathbb{Z}\), so it induces a closed non-vanishing \((3,0)\)-form on the complex solvmanifolds \((\Gamma_{m}\backslash S,\tilde{J}_{B})\), which therefore have (non-invariantly) trivial canonical bundle.
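These trace computations are easy to automate. The following `sympy` sketch (with our own encoding: \(0\)-based indices for the basis \(\{e_{1},\dots,e_{6}\}\), and \(\psi(x)=\operatorname{Tr}(J_{B}\operatorname{ad}x)\), which is the canonical \(1\)-form here since the algebra is unimodular) confirms that \(\psi\neq 0\) while \(\psi\) vanishes on \([\mathfrak{g},\mathfrak{g}]=\operatorname{span}\{e_{1},\dots,e_{4}\}\):

```python
import sympy as sp

r, s = sp.symbols('r s', real=True)
n = 6

# nonzero brackets [e_i, e_j] as coefficient vectors (0-based indices)
brackets = {
    (0, 4): [1 - r, -s, 0, 0, 0, 0],      # [e1, e5]
    (0, 5): [-s, r + 1, 0, 0, 0, 0],      # [e1, e6]
    (1, 4): [s, 1 - r, 0, 0, 0, 0],       # [e2, e5]
    (1, 5): [-(r + 1), -s, 0, 0, 0, 0],   # [e2, e6]
    (2, 4): [0, 0, r - 1, -s, 0, 0],      # [e3, e5]
    (2, 5): [0, 0, s, r + 1, 0, 0],       # [e3, e6]
    (3, 4): [0, 0, s, r - 1, 0, 0],       # [e4, e5]
    (3, 5): [0, 0, -(r + 1), s, 0, 0],    # [e4, e6]
}

def ad(i):
    # column j of ad(e_i) holds the coefficients of [e_i, e_j]
    cols = []
    for j in range(n):
        if (i, j) in brackets:
            cols.append(sp.Matrix(brackets[(i, j)]))
        elif (j, i) in brackets:
            cols.append(-sp.Matrix(brackets[(j, i)]))
        else:
            cols.append(sp.zeros(n, 1))
    return sp.Matrix.hstack(*cols)

# J_B : e1 -> e2, e3 -> e4, e5 -> e6
J = sp.zeros(n, n)
for a, b in [(0, 1), (2, 3), (4, 5)]:
    J[b, a], J[a, b] = 1, -1

psi = [sp.expand((J * ad(i)).trace()) for i in range(n)]
print(psi)  # [0, 0, 0, 0, -4*s, 4*r + 4]
```

These values are consistent with \(\psi(f_{5})=4\) above, since \(\psi\) pulls back along the isomorphism \(\varphi\).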
We finish this section by using again Proposition 5.10 to obtain an example of a solvmanifold with trivial canonical bundle associated to a Lie algebra which does not appear in [14, Proposition 2.8]. This example appears as \(\mathfrak{s}_{6.44}\) in [16].
**Example 5.12**.: Let \(\mathfrak{g}=(e^{23},e^{36},-e^{26},e^{26}+e^{56},e^{36}-e^{46},0).\) We can see \(\mathfrak{g}\) as the unimodular almost nilpotent Lie algebra \(\mathfrak{g}=\mathbb{R}e_{6}\ltimes_{A}(\mathfrak{h}_{3}\oplus\mathbb{R}^{2})\), where the matrix \(A\) is written in the basis \(\{e_{1},\ldots,e_{5}\}\) as
\[A:=\mathrm{ad}\,e_{6}|_{\mathfrak{h}_{3}\oplus\mathbb{R}^{2}}=\begin{bmatrix} 0&0&0&0&0\\ 0&0&-1&0&0\\ 0&1&0&0&0\\ 0&-1&0&0&-1\\ 0&0&-1&1&0\end{bmatrix}\]
Equip \(\mathfrak{g}\) with the complex structure \(Je_{1}=e_{6},Je_{2}=e_{3},Je_{4}=e_{5}\). By Theorem 3.1, since \(\psi(e_{6})=\mathrm{Tr}(J\,\mathrm{ad}\,e_{6})=4\neq 0\), \((\mathfrak{g},J)\) does not admit a nonzero closed \((3,0)\)-form so that \(\mathfrak{g}\) does not appear in [14, Proposition 2.8]. However, since \(\psi([\mathfrak{g},\mathfrak{g}])\equiv 0\), using Proposition 5.10 we can obtain the closed non-vanishing \((3,0)\)-form \(\tau\) on the associated simply connected Lie group \(G\) given by \(\tau=\mathrm{e}^{-2it}(e^{1}+ie^{6})\wedge(e^{2}+ie^{3})\wedge(e^{4}+ie^{5})\), where \(t\) is the coordinate of \(\mathbb{R}\). On the other hand,
\[\exp(\pi A)=\begin{bmatrix}1&0&0&0&0\\ 0&-1&0&0&0\\ 0&0&-1&0&0\\ 0&-\pi&0&-1&0\\ 0&0&-\pi&0&-1\end{bmatrix}.\]
If we set
\[f_{1}=e_{1},\quad f_{2}=-\pi e_{5},\quad f_{3}=e_{3},\quad f_{4}=-\pi e_{4}, \quad f_{5}=e_{2}-e_{5},\]
then it is easy to check that \(\mathcal{B}=\{f_{1},\ldots,f_{5}\}\) is a rational basis of \(\mathfrak{h}_{3}\oplus\mathbb{R}^{2}\). Since
\[[\exp(\pi A)]_{\mathcal{B}}=(1)\oplus\begin{bmatrix}-1&1\\ 0&-1\end{bmatrix}^{\oplus 2},\]
by Theorem 2.4 the Lie group \(G\) admits a lattice \(\Gamma=\pi\mathbb{Z}\ltimes\exp^{H_{3}\oplus\mathbb{R}^{2}}(\mathrm{span}_{ \mathbb{Z}}(f_{1},f_{2},f_{3},f_{4},f_{5}))\).
The form \(\tau\) is invariant under the lattice \(\Gamma\), since \(\exp(-2i(t+\pi k))=\exp(-2it)\) for all \(k\in\mathbb{Z}\). Therefore the associated solvmanifold has (non-invariantly) trivial canonical bundle.
## 6. Applications to hypercomplex geometry
In this last section, we explore the triviality of the canonical bundle of complex manifolds obtained from a hypercomplex Lie group \((G,\{J_{1},J_{2},J_{3}\})\), or the corresponding quotients by uniform lattices. More concretely, we will show that if there exists a left invariant trivializing section of \(K_{(G,J_{\alpha})}\) for some \(\alpha=1,2,3\), then any associated compact quotient \((\Gamma\backslash G,J_{\alpha})\) has trivial canonical bundle for all \(\alpha\), also via an invariant section. However, if the trivializing
section of \((\Gamma\backslash G,J_{\alpha})\) is not invariant, then \(K_{(\Gamma\backslash G,J_{\beta})}\) is not necessarily trivial for \(\beta\neq\alpha\). Using these results we provide a negative answer to a question by Verbitsky.
We begin by recalling some facts about hypercomplex manifolds. A hypercomplex structure on \(M\) is a triple of complex structures \(\{J_{1},J_{2},J_{3}\}\) on \(M\) which obey the laws of the quaternions:
\[J_{1}J_{2}=-J_{2}J_{1}=J_{3}.\]
In particular, \(J_{\alpha}J_{\beta}=-J_{\beta}J_{\alpha}=J_{\gamma}\) for any cyclic permutation \((\alpha,\beta,\gamma)\) of \((1,2,3)\).
It follows that \(M\) carries a sphere \(\mathbb{S}^{2}\) of complex structures. Indeed, if \(a=(a_{1},a_{2},a_{3})\in\mathbb{S}^{2}\) then
\[J_{a}:=a_{1}J_{1}+a_{2}J_{2}+a_{3}J_{3} \tag{9}\]
is a complex structure on \(M\). Moreover, each \(T_{p}M\), \(p\in M\), has an \(\mathbb{H}\)-module structure, where \(\mathbb{H}\) denotes the quaternions. In particular \(\dim_{\mathbb{R}}M=4n\), \(n\in\mathbb{N}\).
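That \(J_{a}\) squares to \(-\operatorname{I}\) is immediate from the quaternion relations (integrability of \(J_{a}\) then follows from that of \(J_{1},J_{2},J_{3}\)):

\[J_{a}^{2}=\sum_{\alpha}a_{\alpha}^{2}J_{\alpha}^{2}+\sum_{\alpha<\beta}a_{\alpha}a_{\beta}\left(J_{\alpha}J_{\beta}+J_{\beta}J_{\alpha}\right)=-\left(a_{1}^{2}+a_{2}^{2}+a_{3}^{2}\right)\operatorname{I}=-\operatorname{I}.\]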
Any hypercomplex structure \(\{J_{\alpha}\}\) on \(M\) determines a unique torsion-free connection \(\nabla^{\mathcal{O}}\), called the _Obata connection_ (see [36]), which satisfies \(\nabla^{\mathcal{O}}J_{\alpha}=0\) for all \(\alpha\). It was shown in [44] that an expression for this connection is given by:
\[\nabla^{\mathcal{O}}_{X}Y=\frac{1}{2}\left([X,Y]+J_{1}[J_{1}X,Y]-J_{2}[X,J_{2 }Y]+J_{3}[J_{1}X,J_{2}Y]\right),\quad X,Y\in\mathfrak{X}(M).\]
Given the usual hypercomplex structure \(\{J_{\alpha}\}\) on \(\mathbb{R}^{4n}\) induced by the quaternions, we will denote by
\[\operatorname{GL}(n,\mathbb{H}):=\{T\in\operatorname{GL}(4n,\mathbb{R}):TJ_{ \alpha}=J_{\alpha}T\text{ for all }\alpha\},\]
the quaternionic general linear group, with corresponding Lie algebra
\[\mathfrak{gl}(n,\mathbb{H}):=\{T\in\mathfrak{gl}(4n,\mathbb{R}):TJ_{\alpha}=J_{\alpha}T\text{ for all }\alpha\}.\]
Since \(\nabla^{\mathcal{O}}J_{\alpha}=0\) for all \(\alpha\), the holonomy group of the Obata connection, \(\operatorname{Hol}(\nabla^{\mathcal{O}})\), is contained in \(\operatorname{GL}(n,\mathbb{H})\).
A hypercomplex manifold \((M^{4n},\{J_{\alpha}\})\) is called an \(\operatorname{SL}(n,\mathbb{H})\)-manifold if \(\operatorname{Hol}(\nabla^{\mathcal{O}})\subset\operatorname{SL}(n,\mathbb{H})\), where \(\operatorname{SL}(n,\mathbb{H})=[\operatorname{GL}(n,\mathbb{H}),\operatorname {GL}(n,\mathbb{H})]\) is the commutator subgroup of \(\operatorname{GL}(n,\mathbb{H})\). These manifolds have been actively studied (see for instance [18, 19, 21, 26, 30, 31]).
We will consider now left invariant hypercomplex structures on Lie groups, which are given equivalently by hypercomplex structures on Lie algebras, as usual. The corresponding Obata connection is also left invariant and it can be determined by its action on left invariant vector fields, that is, on the Lie algebra.
As an application of Theorem 3.1 we show that if \((G^{4n},J_{\alpha})\) admits a non-vanishing left invariant closed \((2n,0)\)-form for some \(\alpha=1,2,3\), then \((G^{4n},J_{a})\) (with \(J_{a}\) as in (9)) has a non-vanishing left invariant closed \((2n,0)\)-form, for all \(a\in\mathbb{S}^{2}\).
**Theorem 6.1**.: _Let \(\{J_{1},J_{2},J_{3}\}\) be a hypercomplex structure on the \(4n\)-dimensional Lie algebra \(\mathfrak{g}\). If \(J_{\alpha}\) admits a non-vanishing closed \((2n,0)\)-form for some \(\alpha=1,2,3\), then \(J_{a}\) admits a non-vanishing closed \((2n,0)\)-form for any \(a\in\mathbb{S}^{2}\), with \(J_{a}\) given by (9)._
Proof.: Let \((\alpha,\beta,\gamma)\) be a cyclic permutation of \((1,2,3)\) with \(J_{\alpha}\) satisfying the conditions in the statement. Then, due to the vanishing of the Nijenhuis tensor \(N_{J_{\gamma}}\), for any \(x,y\in\mathfrak{g}\) we get
\[J_{\gamma}[x,y]=[J_{\gamma}x,y]+[x,J_{\gamma}y]+J_{\gamma}[J_{\gamma}x,J_{ \gamma}y].\]
Since \(J_{\gamma}=J_{\alpha}J_{\beta}\), applying \(-J_{\alpha}\) in both sides of this equality we have
\[J_{\beta}[x,y]=-J_{\alpha}[J_{\gamma}x,y]-J_{\alpha}[x,J_{\gamma}y]+J_{\beta}[ J_{\gamma}x,J_{\gamma}y],\]
which implies
\[\operatorname{Tr}(J_{\beta}\operatorname{ad}(x))=-\operatorname{Tr}(J_{\alpha} \operatorname{ad}(J_{\gamma}x))-\operatorname{Tr}(J_{\alpha}\operatorname{ad}( x)J_{\gamma})+\operatorname{Tr}(J_{\beta}\operatorname{ad}(J_{\gamma}x)J_{ \gamma}).\]
Using that \(\operatorname{Tr}(AB)=\operatorname{Tr}(BA)\) and \(\operatorname{Tr}(J_{\alpha}\operatorname{ad}(x))=\operatorname{Tr}(\operatorname{ ad}(J_{\alpha}x))\) for all \(x\in\mathfrak{g}\) due to Theorem 3.1 (since the canonical 1-form \(\psi_{\alpha}\) vanishes) we arrive at
\[\operatorname{Tr}(J_{\beta}\operatorname{ad}(x)) =-\operatorname{Tr}(\operatorname{ad}(J_{\alpha}J_{\gamma}x))- \operatorname{Tr}(J_{\gamma}J_{\alpha}\operatorname{ad}(x))+\operatorname{Tr} (J_{\gamma}J_{\beta}\operatorname{ad}(J_{\gamma}x))\] \[=\operatorname{Tr}(\operatorname{ad}(J_{\beta}x))-\operatorname{ Tr}(J_{\beta}\operatorname{ad}(x))-\operatorname{Tr}(J_{\alpha}\operatorname{ad}(J_{ \gamma}x))\] \[=\operatorname{Tr}(\operatorname{ad}(J_{\beta}x))-\operatorname{ Tr}(J_{\beta}\operatorname{ad}(x))+\operatorname{Tr}(\operatorname{ad}(J_{\beta}x)),\]
which implies
\[\operatorname{Tr}(J_{\beta}\operatorname{ad}(x))=\operatorname{Tr}( \operatorname{ad}(J_{\beta}x)).\]
The same computation with \(\alpha\) replaced by \(\beta\) shows that the same condition holds for \(J_{\gamma}\). It follows that
\[\operatorname{Tr}(J_{a}\operatorname{ad}(x))=\operatorname{Tr}(\operatorname{ ad}(J_{a}x))\]
for any \(a\in\mathbb{S}^{2}\). Therefore, the corresponding canonical 1-form \(\psi_{a}\) vanishes and according to Theorem 3.1 the proof is complete.
**Corollary 6.2**.: _Let \(G\) be a Lie group equipped with a left invariant hypercomplex structure \(\{J_{1},J_{2},J_{3}\}\) and let \(\Gamma\) be a uniform lattice of \(G\). If there exists a left invariant trivializing section of \(K_{(G,J_{\alpha})}\) for some \(\alpha=1,2,3\), then \(K_{(G,J_{a})}\) admits a left invariant trivializing section for any \(a\in\mathbb{S}^{2}\). In particular, \((\Gamma\backslash G,J_{a})\) has trivial canonical bundle for any \(a\in\mathbb{S}^{2}\)._
In [47], Verbitsky proves that if \((M,\{J_{\alpha}\})\) is an \(\operatorname{SL}(n,\mathbb{H})\)-manifold then the complex manifold \((M,J_{\alpha})\) has trivial canonical bundle for all \(\alpha\). Immediately after this, he poses the following question:
**Question [47]:** Let \((M,\{J_{\alpha}\})\) be a compact hypercomplex manifold. Assume that the complex manifold \((M,J_{1})\) has trivial canonical bundle. Does it follow that \(M\) is an \(\operatorname{SL}(n,\mathbb{H})\)-manifold?
In certain cases there is an affirmative answer to this question, for instance when \((M,\{J_{\alpha}\})\) admits a hyperKahler with torsion metric ([47, Theorem 2.3]) or when \(M\) is a nilmanifold equipped with an invariant hypercomplex structure [8, Corollary 3.3]. In the latter case, the key point was to notice the fact that every nilmanifold with an invariant complex structure has trivial canonical bundle via an invariant trivializing section. Using the same arguments, in [18] it is proved that if a hypercomplex solvmanifold \((\Gamma\backslash G,\{J_{\alpha}\})\) admits an invariant trivializing section of \(K_{(\Gamma\backslash G,J_{\alpha})}\) for some \(\alpha\) then \(\Gamma\backslash G\) is an \(\operatorname{SL}(n,\mathbb{H})\)-manifold.
Our next example of a hypercomplex solvmanifold \((\Gamma\backslash G,\{J_{\alpha}\})\) shows that if \(K_{(\Gamma\backslash G,J_{\alpha})}\) admits a non-invariant trivializing section for some \(\alpha\) then the solvmanifold is not necessarily an \(\operatorname{SL}(n,\mathbb{H})\)-manifold.
**Example 6.3**.: Let \(\mathfrak{g}=\operatorname{span}\{e_{1},\ldots,e_{4}\}\) be the 4-dimensional unimodular completely solvable Lie algebra given by
\[[e_{2},e_{3}]=e_{1},\ \ \ \ [e_{2},e_{4}]=e_{2},\ \ \ [e_{3},e_{4}]=-e_{3}.\]
It is easily verified that the almost complex structure \(J\) defined by \(Je_{1}=e_{2}\) and \(Je_{3}=e_{4}\) is integrable. Note that \(\mathfrak{g}_{+}=\operatorname{span}\{e_{1},e_{3}\}\) and \(\mathfrak{g}_{-}:=J\mathfrak{g}_{+}=\operatorname{span}\{e_{2},e_{4}\}\) are subalgebras of \(\mathfrak{g}\). Then, according to [3], the Lie algebra \(\hat{\mathfrak{g}}:=(\mathfrak{g}_{\mathbb{C}})_{\mathbb{R}}\) admits a hypercomplex structure \(\{J_{1},J_{2},J_{3}\}\). Indeed, with respect to the decomposition \(\hat{\mathfrak{g}}=\mathfrak{g}\oplus i\mathfrak{g}\), these complex structures are given by
\[J_{1}(x+iy) =\begin{cases}i(x+iy),&x,y\in\mathfrak{g}_{+},\\ -i(x+iy),&x,y\in\mathfrak{g}_{-},\end{cases}\] \[J_{2}(x+iy) =Jx+iJy,\ \ \ x,y\in\mathfrak{g},\]
and \(J_{3}=J_{1}J_{2}\). Let us write them down explicitly. Relabelling the basis \(\{e_{1},\ldots,e_{4},ie_{1},\ldots,ie_{4}\}\) as \(\{e_{1},\ldots,e_{8}\}\) we have that \(\{J_{1},J_{2},J_{3}\}\) are given by
\[J_{1}e_{1}=e_{5}, \quad J_{1}e_{2}=-e_{6}, \quad J_{1}e_{3}=e_{7}, \quad J_{1}e_{4}=-e_{8},\] \[J_{2}e_{1}=e_{2}, \quad J_{2}e_{3}=e_{4}, \quad J_{2}e_{5}=e_{6}, \quad J_{2}e_{7}=e_{8},\] \[J_{3}e_{1}=-e_{6}, \quad J_{3}e_{2}=-e_{5}, \quad J_{3}e_{3}=-e_{8}, \quad J_{3}e_{4}=-e_{7}.\]
With respect to this basis the Lie brackets of \(\hat{\mathfrak{g}}\) are
\[[e_{2},e_{4}]=e_{2},\;[e_{3},e_{4}]=-e_{3},\;[e_{4},e_{6}]=-e_{6}, \;[e_{4},e_{7}]=e_{7},\] \[[e_{2},e_{8}]=e_{6},\;[e_{3},e_{8}]=-e_{7},\;[e_{6},e_{8}]=-e_{2}, \;[e_{7},e_{8}]=e_{3},\] \[[e_{2},e_{3}]=e_{1},\;[e_{2},e_{7}]=e_{5},\;[e_{3},e_{6}]=-e_{5}, \;[e_{6},e_{7}]=-e_{1}.\]
If we denote \(\psi_{\alpha}(x):=\operatorname{Tr}(J_{\alpha}\operatorname{ad}x)\) for \(\alpha=1,2,3\), then
\[\psi_{1}=-4e^{8},\psi_{2}=-4e^{3},\quad\psi_{3}=-4e^{7},\]
where \(\{e^{j}\}_{j=1}^{8}\) is the dual basis of \(\{e_{j}\}_{j=1}^{8}\). Since \(\psi_{\alpha}\neq 0\), we have that \((\mathbf{\hat{g}},J_{\alpha})\) does not admit a nonzero closed \((4,0)\)-form, for any \(\alpha\).
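The three forms \(\psi_{\alpha}\), as well as the quaternion relations, can be verified with a short `sympy` computation; the sketch below uses our own encoding of the brackets and complex structures (1-based labels as in the text):

```python
import sympy as sp

n = 8
# nonzero brackets [e_i, e_j] = sum_k c_k e_k, encoded as {k: c_k}
br = {
    (2, 4): {2: 1}, (3, 4): {3: -1}, (4, 6): {6: -1}, (4, 7): {7: 1},
    (2, 8): {6: 1}, (3, 8): {7: -1}, (6, 8): {2: -1}, (7, 8): {3: 1},
    (2, 3): {1: 1}, (2, 7): {5: 1}, (3, 6): {5: -1}, (6, 7): {1: -1},
}

def ad(i):
    M = sp.zeros(n, n)
    for (a, b), v in br.items():
        for k, c in v.items():
            if a == i:
                M[k - 1, b - 1] += c   # ad(e_i) e_b = [e_i, e_b]
            if b == i:
                M[k - 1, a - 1] -= c   # ad(e_i) e_a = -[e_a, e_i]
    return M

def cx(pairs):
    # J e_a = sign * e_b (hence J e_b = -sign * e_a) for each (a, b, sign)
    J = sp.zeros(n, n)
    for a, b, sgn in pairs:
        J[b - 1, a - 1] = sgn
        J[a - 1, b - 1] = -sgn
    return J

J1 = cx([(1, 5, 1), (2, 6, -1), (3, 7, 1), (4, 8, -1)])
J2 = cx([(1, 2, 1), (3, 4, 1), (5, 6, 1), (7, 8, 1)])
J3 = cx([(1, 6, -1), (2, 5, -1), (3, 8, -1), (4, 7, -1)])

assert J1 * J2 == J3 and J1 * J2 == -J2 * J1   # quaternion relations
for name, J in [('psi_1', J1), ('psi_2', J2), ('psi_3', J3)]:
    print(name, [(J * ad(i)).trace() for i in range(1, n + 1)])
# psi_1 = -4 e^8, psi_2 = -4 e^3, psi_3 = -4 e^7
```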
Moreover, note that \(\psi_{1}([\hat{\mathfrak{g}},\hat{\mathfrak{g}}])=0\) but \(\psi_{2}([\hat{\mathfrak{g}},\hat{\mathfrak{g}}])\neq 0\) and \(\psi_{3}([\hat{\mathfrak{g}},\hat{\mathfrak{g}}])\neq 0\). According to Theorem 5.2, for any lattice \(\Lambda\subset\hat{G}\), where \(\hat{G}\) is the simply connected Lie group associated to \(\hat{\mathfrak{g}}\), the compact complex manifold \((\Lambda\backslash\hat{G},J_{\alpha})\) has non-trivial canonical bundle for \(\alpha=2,3\). Nevertheless we show next that there exist lattices \(\Gamma_{m}\subset\hat{G}\) such that the corresponding complex solvmanifolds \((\Gamma_{m}\backslash\hat{G},J_{1})\) do have trivial canonical bundle.
Let us show first that \(\hat{G}\) admits a lattice \(\Gamma_{m}\) for any \(m\in\mathbb{N}\), \(m\geq 3\). Indeed, we may write \(\hat{\mathfrak{g}}=(\mathbb{R}e_{8}\oplus\mathbb{R}e_{4})\ltimes\mathfrak{n}\), where the nilradical \(\mathfrak{n}:=\mathfrak{n}(\hat{\mathfrak{g}})\) is spanned by \(\{e_{1},e_{2},e_{3},e_{5},e_{6},e_{7}\}\). We compute
\[A:=\exp(\pi\operatorname{ad}e_{8}|_{\mathfrak{n}})=\operatorname{diag}(1,-1,-1,1,-1,-1),\]
\[B_{m}:=\exp(t_{m}\operatorname{ad}e_{4}|_{\mathfrak{n}})=\operatorname{diag}(1,\alpha_{m}^{-1},\alpha_{m},1,\alpha_{m}^{-1},\alpha_{m}),\]
where \(\alpha_{m}=\frac{m+\sqrt{m^{2}-4}}{2}\), and \(t_{m}=\log\alpha_{m}\). Setting
\[P_{m}=\left[\begin{array}{ccc}1&0&0\\ 0&1&\alpha_{m}^{-1}\\ 0&\frac{1}{\alpha_{m}-\alpha_{m}^{-1}}&\frac{\alpha_{m}}{\alpha_{m}-\alpha_{ m}^{-1}}\end{array}\right]^{\oplus 2},\]
we obtain \(P_{m}^{-1}B_{m}P_{m}=\begin{bmatrix}1&0&0\\ 0&0&-1\\ 0&1&m\end{bmatrix}^{\oplus 2}\) and \(P_{m}^{-1}AP_{m}=A\) for any \(m\). If we define \(f_{j}=P_{m}e_{j}\) for \(j=1,2,3,5,6,7\) then it is easy to verify that \([f_{k},f_{\ell}]=[e_{k},e_{\ell}]\) for \(k,\ell\in\{1,2,3,5,6,7\}\). Therefore, \(\{f_{1},f_{2},f_{3},f_{5},f_{6},f_{7}\}\) is a rational basis of \(\mathfrak{n}\) in which \(A\) and \(B_{m}\) are expressed as unimodular integer matrices. According to Theorem 2.4, the semidirect product
\[\Gamma_{m}=(\pi\mathbb{Z}\oplus t_{m}\mathbb{Z})\ltimes\exp^{N}(\operatorname {span}_{\mathbb{Z}}\{f_{1},f_{2},f_{3},f_{5},f_{6},f_{7}\})\]
is a lattice in \(\hat{G}=\mathbb{R}^{2}\ltimes N\), where \(N\) is the nilradical of \(\hat{G}\).
Let \(\sigma_{1}:=(e^{1}+ie^{5})\wedge(e^{2}-ie^{6})\wedge(e^{3}+ie^{7})\wedge(e^{4}- ie^{8})\) which is a nonzero \((4,0)\)-form with respect to \(J_{1}\). It follows from Proposition 5.10 and \(-\frac{1}{2}\operatorname{Tr}(J_{1}\operatorname{ad}e_{8})=2\) that \(\tau_{1}:=\exp(2ix_{8})\sigma_{1}\) is a nonzero closed \((4,0)\)-form on \(\hat{G}\) with respect to \(J_{1}\), where \(\mathbf{x}:=(x_{8},x_{4},x_{1},x_{2},x_{3},x_{5},x_{6},x_{7})\) are the real coordinates of \(\hat{G}\). It follows from \(\exp(2i(x_{8}+\pi k))=\exp(2ix_{8})\) that \(f(\mathbf{x})=\exp(2ix_{8})\) is invariant by the action of \(\Gamma_{m}\) so there is an induced smooth function \(\hat{f}:\Gamma_{m}\backslash\hat{G}\to\mathbb{C}\) such
that the \((4,0)\)-form \(\hat{\tau}_{1}=\hat{f}\hat{\sigma}_{1}\) is a trivializing section of \((\Gamma_{m}\backslash\hat{G},J_{1})\). In particular, \((\Gamma_{m}\backslash\hat{G},J_{1})\) has trivial canonical bundle.
If \(\operatorname{Hol}(\nabla^{\mathcal{O}})\) were contained in \(\operatorname{SL}(n,\mathbb{H})\), then the canonical bundle of \((\Gamma_{m}\backslash\hat{G},J_{\alpha})\) would be trivial for all \(\alpha\) but we have shown that this is not the case for \(\alpha=2\) and \(\alpha=3\). Therefore, this example provides a negative answer to Verbitsky's question.
**Remark 6.4**.: Verbitsky's question remains open for a hypercomplex manifold \((M,\{J_{\alpha}\})\) such that \((M,J_{\alpha})\) has trivial canonical bundle for all \(\alpha\).
|
2305.20037 | Flat limit of massless scalar scattering in $\mathrm{AdS}_2$ | We explore the flat limit of massless scalar scattering in $\mathrm{AdS}_2$.
We derive the $1 \to 1$ $\mathcal{S}$-matrix from the CFT $2$-point function.
We show a key property of the $2 \to 2$ $\mathcal{S}$-matrix in $2d$, where the
contact interaction in the flat limit gives a momentum-conserving delta function.
We show the factorization of the $n \to n$ $\mathcal{S}$-matrix for integrable
models in the flat limit, focusing on contact interactions. We calculate the
$\mathcal{S}$-matrix by linking the CFT operator on the AdS boundary to the
scattering state in flat-space. We use bulk operator reconstruction to study
massless scalar scattering in the flat limit and solve the Klein-Gordon
equation in global $\mathrm{AdS}_2$ for the massless scalar field. The solution
is simple, involving a pure phase in global time and a sinusoidal function in
the radial coordinate. This simplicity also extends to the smearing function,
allowing us to map the scattering state to the CFT operator while taking AdS
corrections into account. | Sarthak Duary | 2023-05-31T17:10:10Z | http://arxiv.org/abs/2305.20037v4 | # Melting \(AdS_{2}\)-ice into flatland: flat limit of massless scalar scattering
###### Abstract
We delineate the flat limit of massless scalar scattering in \(\mathrm{AdS}_{2}\). We derive the \(1\to 1\)\(\mathcal{S}\)-matrix from the CFT 2-point function, which is proportional to the momentum-conserving delta function. We unveil a captivating kinematical characteristic of the \(2\to 2\) massless \(\mathcal{S}\)-matrix in \(2d\), elucidating the presence of product of two delta functions arising from the \(\phi^{4}\) contact interaction within the realm of the flat limit of AdS/CFT. We also show that the factorization of the \(n\to n\)\(\mathcal{S}\)-matrix for integrable models in the flat limit, employing a focused analysis on contact interaction, which play a pivotal role as fundamental constituents in the construction of the non-perturbative \(\mathcal{S}\)-matrix within integrable models. Although the factorization of the \(\mathcal{S}\)-matrix in integrable models is commonly perceived as an intrinsically non-perturbative notion, we effectively showcase its manifestation at the tree level in the flat limit. We calculate the \(\mathcal{S}\)-matrix by making use of the mapping between the CFT operator on the AdS boundary, and the scattering state in flat space. We adopt the bulk operator reconstruction to examine massless scalar scattering in the flat limit. We solve the Klein-Gordon equation in the global \(\mathrm{AdS}_{2}\) for the massless scalar field. Notably, the solution is remarkably simple, characterized by a pure phase in global time and a sinusoidal function in the radial coordinate. This simplicity extends to the smearing function, enabling a mapping between the scattering state and CFT operator taking AdS corrections into account.
## I Introduction
The theory of gravity, which encompasses various quantum fields and incorporates a negative cosmological constant, can be described as a weakly coupled QFT through the perturbative treatment of the curvature entwined with the AdS background. Despite the fact that the resulting QFT is AdS-based, we nevertheless continue to employ standard approaches to ascertain the AdS amplitudes for quantum fields. These AdS amplitudes are equivalent to the large-\(N\) CFT correlation functions at the boundary of AdS, in accordance with the AdS/CFT correspondence [1; 2; 3]. Now, in our foray into implementing QFT on AdS space, we find that the incorporation of a large AdS radius, denoted as \(L\to\infty\), becomes feasible by judiciously embracing the effective Lagrangian. In this circumstance, we may treat AdS space as if it were flat-space, disregarding its inherent influence. This large AdS radius limit is aptly coined the flat limit. It is readily apparent that in the limit of the large AdS radius, the AdS background can be reduced to a flat-space configuration.
To be more concrete, we can map the CFT correlation function in \(d\)-dimensions into AdS amplitude in \((d+1)\)-dimensions, and then we can take large AdS radius limit i.e., flat limit and we can relate the CFT correlation function with the \(\mathrm{Mink}_{d+1}\) scattering amplitude:
\[\mathrm{CFT}_{\mathrm{d}}\leftrightarrow\mathrm{AdS}_{\mathrm{d+1}}\xrightarrow{ \text{Large AdS radius}}\mathrm{Mink}_{\mathrm{d+1}}.\]
It is difficult to incorporate AdS amplitudes into the aforementioned flat limit. The notion of the flat limit has been around for a while in the literature [5; 6; 7; 8; 9; 10; 11]. In [12; 13; 14; 15; 16; 17; 18; 19], more detailed flat limit mapping prescriptions were developed. Utilizing AdS/CFT, when we apply the flat limit to AdS, it suggests that the correlation functions of the boundary CFT hold important information regarding local bulk observables and the \(\mathcal{S}\)-matrix of the corresponding flat-space theory. Nevertheless, there are plenty of schemes that have been developed supporting various CFT representations: (a) momentum space [24; 33], (b) Mellin space [12; 13; 15; 23], and (c) coordinate space [14; 15; 16; 17; 18; 19; 20; 21; 22].
In the case of a QFT residing in \((d+1)\)-dimensional AdS spacetime while maintaining its isometries, the theory is governed by the \(SO(d,2)\) conformal group. Consequently, we have the ability to define boundary correlation functions by considering correlation functions of local bulk operators. These operators are positioned closer to the conformal boundary through a process of maneuvering their insertion points. The complete collection of correlation functions establishes a theory that exhibits conformal invariance and exists solely on the boundary. When we place a local operator at the boundary of AdS, it acts as a source and gives rise to a particle inside. However, in the realm of the flat limit, this operator is mapped to the corresponding asymptotic state.
In the flat limit, the central region of AdS space simply morphs into flat-space. As a result, it becomes natural to envision the extraction of the \(\mathcal{S}\)-matrix of the flat-space QFT from the correlation functions of the boundary CFT. The consequences arising from the flat limit manifest in striking dissimilarities between the phenomena of massless scattering and massive scattering. Massless particles are characterized by operators with a finite conformal dimension, whereas massive particles are characterized by operators with large conformal dimension \(\Delta\sim L\to\infty\). This flat limit approach offers an alternative method to investigate the analytic structure of the flat-space \(\mathcal{S}\)-matrix, particularly in non-perturbative scenarios. It is advantageous because CFTs possess more constraints and their analytic structure is better comprehended, facilitating the analysis.
Setup of the paper.The idea we pursue in the paper is about exploring the flat limit of the AdS/CFT correspondence. Let us contemplate a scenario where, inside the \(\mathrm{AdS}_{2}\) geometry, deep inside the center, scattering phenomena unfold. By confining our observations to scales diminutive in comparison
to the characteristic length scale of \({\rm AdS}_{2}\), we can perceive \({\rm Mink}_{2}\) geometry. In terms of the metric, this simply means that when we examine the vicinity of the center of the original \({\rm AdS}_{2}\) geometry, it turns into \({\rm Mink}_{2}\) spacetime. The key idea here is that flat-space is a component of AdS space, implying that the physics must be encoded into AdS spacetime. Now, considering that the physics inside AdS spacetime is dual to CFT, it is reasonable to assert that CFT encapsulates the physics of flat-space in an additional dimension. This reasoning forms the fundamental logic behind the flat limit of AdS/CFT. Now, by taking the flat limit of the CFT correlation function, we confine the dual description of the correlation function, represented by bulk Witten diagrams, to a specific small region within AdS. In this region, the physics resembles that of flat-space, leading to the emergence of the flat-space \(\mathcal{S}\)-matrix from the CFT correlation function, as anticipated.
Motivating questions of the paper.
* One motivating question behind our work is to examine the kinematical fact that in \(2d\), when two identical particles interact, their momenta remain unchanged between the initial and final states.
* Another motivation is to understand the delta functions, one for every set of incoming and outgoing momenta in the flat limit, which appear in higher-point scattering amplitudes in integrable models in \(2d\).
Our questions revolve around constructing scattering states in Fock space in terms of CFT operators, utilizing techniques within the framework of AdS/CFT. In this paper, we follow the bulk reconstruction approach for calculating the \(2d\) massless \(\mathcal{S}\)-matrix in the flat limit by establishing a mapping between scattering states and CFT operators. The key aspect of our method involves reconstructing the corresponding bulk operator for the massless scalar field in \({\rm AdS}_{2}\). By utilizing the mapping between the CFT operator on the boundary of AdS and the scattering states in flat space, we derive the \(1\to 1\)\(\mathcal{S}\)-matrix from the CFT \(2\)-point function, which is found to be proportional to the momentum-conserving delta function \(\delta(p_{2}-p_{1})\). In our paper, we reveal an intriguing property of the \(2\to 2\) massless \(\mathcal{S}\)-matrix in \(2d\), which showcases the presence of a product of two delta functions originating from the \(\phi^{4}\) contact interaction in the flat limit. By focusing primarily on this contact interaction, particularly at the tree level, we provide a comprehensive analysis that demonstrates how the \(n\to n\)\(\mathcal{S}\)-matrix can be effectively factorized in the flat limit. While the flat limit reveals the flat-space \(\mathcal{S}\)-matrix from the CFT correlation function, it is important to note that the correlation function carries additional information beyond this limit. From the perspective of a bulk observer, the question arises as to how one can account for corrections to the flat-space \(\mathcal{S}\)-matrix in subleading order, specifically through AdS corrections. The other objective is to understand how the bulk physics in terms of the \(\mathcal{S}\)-matrix, including these corrections, can be derived from the CFT correlation function.
In this paper, we adhere to the underlying principles of the bulk operator reconstruction following [35; 36] to study massless scalar scattering in the flat limit of \({\rm AdS}_{2}\), which presents a representation of the \(\mathcal{S}\)-matrix in flat-space. For QFT in \({\rm AdS}_{2}\) see e.g. [45; 46; 47; 48; 49; 50; 51; 52]. In the paper, the representation of the \(\mathcal{S}\)-matrix for the massless scalar involves smearing with a scattering smearing function and connecting it to the correlation function of the boundary. In the flat limit, the observer in the CFT cannot make use of the large conformal dimension limit \(\Delta\to\infty\) for massless scattering.1 The physics of flat spacetime is achieved instead by using the bulk-point limit.2 The bulk \(\mathcal{S}\)-matrix emerges as the coefficient of the singularity which develops when the correlator approaches this limit on the appropriate sheet [9; 19]. In the paper, we consider QFT in AdS with no gravity. The configuration in this paper is distinct from the typical AdS/CFT scenario. In the typical AdS/CFT scenario, it is well-established that the presence of a boundary stress-energy tensor corresponds to the dynamic behavior of the bulk metric. In this paper, our focus is limited to examining QFT within a fixed AdS background geometry, disregarding the dynamical aspects typically considered in the AdS/CFT correspondence. This means that, unlike the standard AdS/CFT correspondence where the presence of a boundary stress-energy tensor corresponds to the dynamics of the bulk metric, our specific scenario involves a fixed bulk metric, thereby eliminating the existence of a stress-energy tensor in the boundary spectrum. A boundary conformal theory (BCT) is the term used to describe a conformally invariant theory on the boundary that lacks a stress-energy tensor. It is important to differentiate a BCT from the boundary conformal field theory (BCFT) which includes the existence of the stress-energy tensor.
Footnote 1: \(\Delta/L\sim m\): the conformal dimension \(\Delta\) is finite for a massless particle in the large AdS radius limit (the flat limit), while for a massive particle the conformal dimension scales linearly with \(L\), i.e., \(\Delta\sim L\to\infty\).
Footnote 2: For massless scattering in \({\rm AdS}_{4}\), if the incoming particles are positioned at Lorentzian time \(-\pi/2\), and outgoing particles are positioned at Lorentzian time \(\pi/2\), the particles are zoomed in locally on the lightlike separated bulk-point inside AdS, so that locally the amplitude looks like a flat-space amplitude.
An overview of the key findings.We will now give an overview of the key findings, followed by an organization of the paper.
* **Bulk operator reconstruction in global \({\rm AdS}_{2}\).** In this paper, we focus on the reconstruction of an operator within the scattering region of AdS, which possesses a flat geometry. Specifically, we concentrate on the reconstruction of the bulk operator corresponding to the massless scalar field in \({\rm AdS}_{2}\), using a first-principles approach. We first solve the Klein-Gordon equation for the massless scalar field within the global \({\rm AdS}_{2}\) setting. Remarkably, the solution is elegantly simple, exhibiting a pure phase in global time combined with a sinusoidal function dependent on the radial coordinate. This simplicity extends to the smearing function, which facilitates the mapping between the bulk and boundary operators. By expanding the smearing function in the large \(L\) limit, we construct the creation or annihilation modes in terms of the CFT operators.
* **Scattering state in a Fock space in the flat limit.** In order to define a scattering state, we conceptualize it as an operator acting on the vacuum. To achieve this, we construct the operator within a Fock space framework in flat-space. The approach involves selecting a local operator situated deep within the bulk of AdS, extracting a creation or annihilation mode through a straightforward Fourier transform, and subsequently taking the large AdS radius limit to transition to the flat limit.
* **1 \(\rightarrow\) 1 \(\mathcal{S}\)-matrix from the flat limit of CFT 2-point function.** By employing the map between the CFT operator and the flat-space scattering state, we evaluate the \(\mathcal{S}\)-matrix from the CFT \(2\)-point function which is proportional to the momentum conserving delta function \(\delta(p_{2}-p_{1})\).
* **Kinematics of \(2\to 2\) massless \(\mathcal{S}\)-matrix in \(2d\) in the flat limit.** When two identical particles scatter in flat-space in \(2d\), an intriguing phenomenon unfolds: their momenta remain unchanged before and after the scattering, i.e., the particles maintain the same momenta throughout the process. As a result, the \(\mathcal{S}\)-matrix describing the scattering takes a specific form: it becomes proportional to the product of two delta functions, \(\delta\left(p_{3}-p_{1}\right)\) and \(\delta\left(p_{4}-p_{2}\right)\), which ensure that the momenta of the final-state particles (labels 3 and 4) equal the momenta of the initial-state particles (labels 1 and 2), respectively. In the flat limit, we show that this product of two delta functions for the \(2\to 2\) massless \(\mathcal{S}\)-matrix in \(2d\) occurs naturally for the \(\phi^{4}\) contact interaction.
* **Factorization of the \(n\to n\)\(\mathcal{S}\)-matrix from the flat limit.** We demonstrate the factorization of the \(n\to n\)\(\mathcal{S}\)-matrix from the flat limit by focusing primarily on contact interactions, specifically at tree level, to leading order in the coupling. At tree level, contact interactions serve as the fundamental components of the non-perturbative \(\mathcal{S}\)-matrix in integrable models like the Sinh-Gordon model. While the factorization of the \(\mathcal{S}\)-matrix in integrable models is typically a non-perturbative concept, we manage to demonstrate the factorization at tree level. Furthermore, after AdS corrections are taken into account, we find that the momentum-conserving delta functions persist in the \(\mathcal{S}\)-matrix up to subleading order in the AdS corrections. We provide a qualitative analysis and discussion of this point in the concluding remarks section.
Organization of the paper. This paper is organized in the following way. In §II, we review the flat limit and fix notations and conventions for the succeeding sections. In §III, we work out the solution of the Klein-Gordon equation for the massless scalar field in global \(\mathrm{AdS}_{2}\); we also calculate the bulk operator reconstruction smearing function for the massless scalar field in §III.1. Then in §IV, we construct a map between a creation or an annihilation operator in flat-space and a CFT operator, which also accounts for subleading order AdS corrections. In §V, we evaluate the \(1\to 1\)\(\mathcal{S}\)-matrix from the flat limit of the CFT 2-point function. In §VI, we calculate the \(2\to 2\)\(\mathcal{S}\)-matrix in the flat limit from the 4-point contact Witten diagram, and recover the kinematics of \(2\to 2\) scattering of identical particles in \(2d\); the Witten diagram is evaluated in global \(\mathrm{AdS}_{2}\) coordinates in §VI.1. In §VII, we comment on the factorization of the \(n\to n\)\(\mathcal{S}\)-matrix from the flat limit. Finally, in §VIII, we summarize our results and outline future directions.
## II The flat limit: \(\mathrm{Mink}_{2}\) tiling inside \(\mathrm{AdS}_{2}\) spacetime
In this section, we review basic facts about the flat limit of AdS and fix notations and conventions.
### Notations and Conventions
The objective of this section is to fix the notations and conventions for the succeeding sections. We consider Lorentzian \(\mathrm{AdS}_{2}\) space with global coordinates \((\tau,\rho)\). The embedding space coordinates \((X_{1},X_{2},X_{3})\) are related to the global coordinates as
\[X_{1}=L\frac{\cos\tau}{\cos\rho},\ \ X_{2}=L\frac{\sin\tau}{\cos\rho},\ \ X_{3}=L\tan\rho. \tag{1}\]
We obtain Lorentzian \(\mathrm{AdS}_{2}\) space from 3-dimensional Minkowski space \(\mathbb{M}^{3}\) with signature \((2,1)\). The embedding space coordinates \(X^{M}\), where \(M=1,2,3\), obey
\[\begin{split} X_{M}X^{M}&=-L^{2}\\ \implies-X_{1}^{2}-X_{2}^{2}+X_{3}^{2}&=-L^{2}. \end{split} \tag{2}\]
The metric in the embedding space coordinate is given by
\[\mathrm{d}s^{2}=-\mathrm{d}X_{1}^{2}-\mathrm{d}X_{2}^{2}+\mathrm{d}X_{3}^{2}, \tag{3}\]
which can be represented in the global coordinate system as follows
\[\mathrm{d}s^{2}=\frac{L^{2}}{\cos^{2}\rho}\left(-\mathrm{d}\tau^{2}+\mathrm{d }\rho^{2}\right). \tag{4}\]
We begin by defining a specific region within \(\mathrm{AdS}_{2}\) which, in the flat limit, transforms into flat spacetime; the limit is achieved by letting the \(\mathrm{AdS}_{2}\) length scale approach infinity. This singles out a scattering region within the bulk of \(\mathrm{AdS}_{2}\) that is analogous to flat spacetime. The metric of \(\mathrm{AdS}_{2}\) takes the form
\[\mathrm{d}s^{2}=\frac{L^{2}}{\cos^{2}\rho}\left(-\mathrm{d}\tau^{2}+\mathrm{d }\rho^{2}\right), \tag{5}\]
where \(0\leq\rho<\pi/2,-\infty<\tau<\infty\). Let us explore the process of taking the flat limit for the coordinates. In our notation, flat-space is represented by
\[ds^{2}=-\mathrm{d}t^{2}+\mathrm{d}r^{2}. \tag{6}\]
Taking the flat limit for coordinates is a straightforward process. We can achieve this by employing the coordinate transformation
\[L\tan\rho=r\ \,\ \ \tau L=t\,, \tag{7}\]
sending \(L\to\infty\).
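The following minimal SymPy sketch (an illustrative aid, not part of the original derivation; all variable names are our own) verifies this coordinate flat limit symbolically: under the substitution (7), the metric (5) reduces to the flat metric (6) as \(L\to\infty\).

```python
# Symbolic check that the substitutions L*tan(rho) = r, tau*L = t turn the
# AdS2 metric L^2/cos^2(rho) * (-dtau^2 + drho^2) into -dt^2 + dr^2 as L -> oo.
import sympy as sp

t, r, L = sp.symbols('t r L', positive=True)
tau = t / L                 # from tau*L = t
rho = sp.atan(r / L)        # from L*tan(rho) = r

conf = L**2 / sp.cos(rho)**2                    # conformal factor L^2 sec^2(rho)
g_tt = sp.simplify(-conf * sp.diff(tau, t)**2)  # coefficient of dt^2
g_rr = sp.simplify(conf * sp.diff(rho, r)**2)   # coefficient of dr^2

print(sp.limit(g_tt, L, sp.oo))  # expected: -1
print(sp.limit(g_rr, L, sp.oo))  # expected: 1
```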
## III Solution of Klein-Gordon equation in \(\mathrm{AdS}_{2}\)
In this section, we solve the Klein-Gordon equation for the massless scalar field in global \(\mathrm{AdS}_{2}\). From the global \(\mathrm{AdS}_{2}\) metric
\[\mathrm{d}s^{2}=\frac{L^{2}}{\cos^{2}\rho}\left(-\mathrm{d}\tau^{2}+\mathrm{d }\rho^{2}\right), \tag{8}\]
we can read off
\[g_{\tau\tau}=-\frac{L^{2}}{\cos^{2}\rho}\ \,\ \ g_{\rho\rho}=\frac{L^{2}}{\cos^{2}\rho} \tag{9}\]
and other components are zero.
We consider a massless scalar field \(\Phi(\rho,\tau)\) that satisfies the Klein-Gordon equation
\[\Box\Phi=0. \tag{10}\]
The d'Alembertian operator \(\Box\Phi\), corresponding to the global \(\mathrm{AdS}_{2}\) metric can be expressed as
\[\begin{split}\Box\Phi&\equiv\frac{1}{\sqrt{-g}} \partial_{\mu}\left(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\Phi\right)\\ &=\frac{1}{L^{2}\sec^{2}\rho}\Bigg{[}\partial_{\tau}\Bigg{(}L^{2 }\sec^{2}\rho\Bigg{(}-\frac{\cos^{2}\rho}{L^{2}}\Bigg{)}\partial_{\tau}\Phi \Bigg{)}\\ &+\partial_{\rho}\Big{(}L^{2}\sec^{2}\rho\frac{\cos^{2}\rho}{L^{ 2}}\partial_{\rho}\Phi\Bigg{)}\Bigg{]}\\ &=\frac{\cos^{2}\rho}{L^{2}}\left[-\partial_{\tau}^{2}\Phi+ \partial_{\rho}^{2}\Phi\right].\end{split} \tag{11}\]
Here, the indices \(\mu,\nu=1,2\), and
\[g=\Big{(}-\frac{L^{2}}{\cos^{2}\rho}\Big{)}\Big{(}\frac{L^{2}}{\cos^{2}\rho} \Big{)}=-L^{4}\sec^{4}\rho.\]
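The reduction in eq.(11) can be checked symbolically; the short SymPy sketch below is an illustrative verification and not part of the original computation.

```python
# Symbolic check of eq.(11): the d'Alembertian of the global AdS2 metric
# reduces to (cos^2 rho / L^2)(-d_tau^2 + d_rho^2) acting on Phi(rho, tau).
import sympy as sp

tau, rho, L = sp.symbols('tau rho L', positive=True)
Phi = sp.Function('Phi')(rho, tau)

sqrtg = L**2 / sp.cos(rho)**2          # sqrt(-g) = L^2 sec^2(rho)
g_inv_tt = -sp.cos(rho)**2 / L**2      # g^{tau tau}
g_inv_rr = sp.cos(rho)**2 / L**2       # g^{rho rho}

box = (sp.diff(sqrtg * g_inv_tt * sp.diff(Phi, tau), tau)
       + sp.diff(sqrtg * g_inv_rr * sp.diff(Phi, rho), rho)) / sqrtg
target = sp.cos(rho)**2 / L**2 * (-sp.diff(Phi, tau, 2) + sp.diff(Phi, rho, 2))

print(sp.simplify(box - target))  # expected: 0
```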
Using separation of variables and requiring an oscillatory phase behaviour with global time implies solutions of the kind
\[\Phi(\rho,\tau)=e^{-\imath\omega\tau}R(\rho). \tag{12}\]
We can express the radial equation satisfied by \(R(\rho)\) as
\[R^{\prime\prime}(\rho)+\omega^{2}R(\rho)=0. \tag{13}\]
The solution for \(R(\rho)\) takes the form
\[R_{\omega}(\rho)=\mathcal{C}_{1\omega}\cos(\omega\rho)+\mathcal{C}_{2\omega} \sin(\omega\rho). \tag{14}\]
At the \(\mathrm{AdS}_{2}\) boundary \(\rho=\pm\frac{\pi}{2}\), we have
\[R_{\omega}\left(\rho=\pm\frac{\pi}{2}\right)=0\]
which, keeping the modes that are odd under \(\rho\to-\rho\), implies
\[R_{n}(\rho)\sim\sin(\omega_{n}\rho) \tag{15}\]
where
\[\omega_{n}=2n\ \ \text{where},\ \ n\in\mathbb{N}. \tag{16}\]
Physically, in this context the \(\mathrm{AdS}_{2}\) metric provides an attractive confining potential, which pulls any particle to the center of global \(\mathrm{AdS}_{2}\) and produces a quantized, discrete spectrum given by \(\omega_{n}=2n\) with \(n\in\mathbb{N}\). Therefore, the solution can be expressed as follows
\[\Phi_{n}(\rho,\tau)=\mathfrak{c}_{n}e^{-2n\imath\tau}\sin(2n\rho), \tag{17}\]
where \(\mathfrak{c}_{n}\) is some normalization constant. \(\mathfrak{c}_{n}\) can be fixed using the inner product
\[\begin{split}&\langle\Phi_{n}|\Phi_{n^{\prime}}\rangle\\ &=\imath\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\mathrm{d}\rho \sqrt{-g}g^{\tau\tau}\\ &[\Phi_{n}^{\star}(\rho,\tau)\ \partial_{\tau}\Phi_{n^{\prime}}( \rho,\tau)-\Phi_{n^{\prime}}(\rho,\tau)\ \partial_{\tau}\Phi_{n}^{\star}(\rho,\tau)]\ \bigg{|}_{\tau=\text{constant}}\\ &=2\pi\delta_{n,n^{\prime}}.\end{split} \tag{18}\]
which gives
\[\mathfrak{c}_{n}=\sqrt{\frac{2}{n}}. \tag{19}\]
We save the details of the computation of the normalization constant \(\mathfrak{c}_{n}\) in Appendix A.
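Two quick symbolic checks of this section (illustrative SymPy snippets of our own, with \(n\) treated as a positive integer): the mode (17) indeed solves the wave equation (11), and the radial integral behind the normalization in Appendix A equals \(\pi/2\).

```python
# Check that e^{-2 i n tau} sin(2 n rho) solves (-d_tau^2 + d_rho^2) Phi = 0,
# and that the normalization integral of Appendix A equals pi/2.
import sympy as sp

tau, rho = sp.symbols('tau rho', real=True)
n = sp.symbols('n', positive=True, integer=True)

Phi = sp.exp(-2*sp.I*n*tau) * sp.sin(2*n*rho)
print(sp.simplify(-sp.diff(Phi, tau, 2) + sp.diff(Phi, rho, 2)))  # 0

I_norm = sp.integrate(sp.sin(2*n*rho)**2, (rho, -sp.pi/2, sp.pi/2))
print(sp.simplify(I_norm))  # pi/2
```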
### Bulk operator reconstruction smearing function for the massless scalar field
In this section, we evaluate the bulk operator reconstruction smearing function for the massless scalar field. The scalar field \(\Phi(\rho,\tau)\) in \(\mathrm{AdS}_{2}\) can be expressed as

\[\Phi(\rho,\tau)=\sum_{n\in\mathbb{N}}\mathfrak{c}_{n}\sin(2n\rho)\left[e^{-2n\imath\tau}a_{n}+e^{+2n\imath\tau}a_{n}^{\dagger}\right], \tag{20}\]
where \(a_{n}\) is the annihilation operator, and \(a_{n}^{\dagger}\) is the creation operator.
At the \(\mathrm{AdS}_{2}\) boundary, we define the boundary operator which is the primary CFT operator \(\mathcal{O}(\tau^{\prime})\) following the BDHM relation as [34]
\[\begin{split}\mathcal{O}(\tau^{\prime})&\equiv \lim_{\rho^{\prime}\to\frac{\pi}{2}}\frac{\Phi(\rho^{\prime},\tau^{\prime})}{ \cos(\rho^{\prime})}\\ &=-\sum_{n\in\mathbb{N}}2n\mathfrak{c}_{n}\cos(\pi n)\left[e^{-2n \imath\tau^{\prime}}a_{n}+e^{+2n\imath\tau^{\prime}}a_{n}^{\dagger}\right]. \end{split} \tag{21}\]
In eq.(21), we use that the conformal dimension of the dual primary CFT operator corresponding to the massless bulk scalar field is \(\Delta=1\), since
\[\Delta(\Delta-1)=m^{2}L^{2}. \tag{22}\]
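With \(m=0\), eq.(22) gives \(\Delta=0\) or \(\Delta=1\), and the normalizable branch corresponds to \(\Delta=1\). The boundary limit used in eq.(21) can also be checked directly; the SymPy snippet below (illustrative, for a few explicit values of \(n\)) verifies \(\lim_{\rho^{\prime}\to\pi/2}\sin(2n\rho^{\prime})/\cos(\rho^{\prime})=-2n\cos(\pi n)\).

```python
# Check of the boundary limit behind eq.(21):
# lim_{rho -> pi/2} sin(2 n rho)/cos(rho) = -2 n cos(pi n).
import sympy as sp

rho = sp.symbols('rho', real=True)
for n in (1, 2, 3):
    lim = sp.limit(sp.sin(2*n*rho) / sp.cos(rho), rho, sp.pi/2)
    print(n, lim, -2*n*sp.cos(sp.pi*n))  # the two values agree
```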
We get the annihilation boundary operator \(\mathcal{O}_{+}(\tau^{\prime})\) and the creation boundary operator \(\mathcal{O}_{-}(\tau^{\prime})\) as

\[\begin{split}\mathcal{O}_{+}(\tau^{\prime})&=-\sum_{n\in\mathbb{N}}2n\mathfrak{c}_{n}\cos(\pi n)e^{-2n\imath\tau^{\prime}}a_{n}\\ \mathcal{O}_{-}(\tau^{\prime})&=-\sum_{n\in\mathbb{N}}2n\mathfrak{c}_{n}\cos(\pi n)e^{+2n\imath\tau^{\prime}}a_{n}^{\dagger}.\end{split} \tag{23}\]
By inverting the relationships described in eq.(23), we obtain the following

\[\begin{split}a_{n}&=-\frac{1}{2n\mathfrak{c}_{n}\cos(\pi n)}\int_{-\pi}^{\pi}\mathrm{d}\tau^{\prime}\,\mathcal{O}_{+}(\tau^{\prime})e^{+2n\imath\tau^{\prime}}\\ a_{n}^{\dagger}&=-\frac{1}{2n\mathfrak{c}_{n}\cos(\pi n)}\int_{-\pi}^{\pi}\mathrm{d}\tau^{\prime}\,\mathcal{O}_{-}(\tau^{\prime})e^{-2n\imath\tau^{\prime}}.\end{split} \tag{24}\]
By substituting the expression of the annihilation and creation operators into the function \(\Phi(\rho,\tau)\), we obtain
\[\Phi(\rho,\tau)=\int_{-\pi}^{\pi}\mathrm{d}\tau^{\prime}\left[K_{+}(\rho,\tau; \tau^{\prime})\mathcal{O}_{+}(\tau^{\prime})+K_{-}(\rho,\tau;\tau^{\prime}) \mathcal{O}_{-}(\tau^{\prime})\right] \tag{25}\]
where, the expression for the bulk operator reconstruction smearing function \(K_{\pm}(\rho,\tau;\tau^{\prime})\) can be stated as follows
\[\begin{split} K_{\pm}(\rho,\tau;\tau^{\prime})&=-\sum_{n\in\mathbb{N}}\frac{1}{2n\cos(\pi n)}\sin(2n\rho)e^{\mp 2n\imath\left(\tau-\tau^{\prime}\right)}\\ &=\sum_{n\in\mathbb{N}}\frac{(-1)^{2n+1}}{2n\cos(\pi n)}\sin(2n\rho)e^{\mp 2n\imath\left(\tau-\tau^{\prime}\right)}.\end{split} \tag{26}\]
In eq.(25), we reconstruct the local bulk operator in terms of the boundary CFT operators. In the next section, we will calculate the modes of the massless scalar field in terms of the CFT operators in the flat limit.
## IV Creation and annihilation operators: massless scalar field
In this section, we extract the modes like the creation and annihilation operators. By first principle, we calculate the map between creation and annihilation operators in flat-space and CFT operators in the boundary of \(\mathrm{AdS}_{2}\). Deep within the center of \(\mathrm{AdS}_{2}\), there exists a scattering region. We think of the massless scalar field \(\Phi(\rho,\tau)\) in \(\mathrm{AdS}_{2}\) as the local bulk operator in the scattering region, which we reconstruct in terms of the boundary CFT operator in SSIII.1. In the flat limit, the field \(\Phi(r,t)\) is a free field in \(2d\) flat-space. In order to construct it, we can combine the creation and annihilation operators linearly
\[\Phi(r,t)=\int\frac{\mathrm{d}p}{\sqrt{2\pi}}\frac{1}{\sqrt{2\omega_{p}}}\left( a_{p}\,e^{+ip\cdot x}+a_{p}^{\dagger}e^{-ip\cdot x}\right). \tag{27}\]
The annihilation and creation operators can be extracted from the field \(\Phi(r,t)\) through a Fourier transform as
\[\begin{split} a_{p}&=+\frac{\imath}{\sqrt{2\omega_{p }}}\int\frac{\mathrm{d}r}{\sqrt{2\pi}}\,e^{-ip\cdot x}\overset{\leftrightarrow }{\partial}_{t}\Phi(r,t)\\ a_{p}^{\dagger}&=-\frac{\imath}{\sqrt{2\omega_{p }}}\int\frac{\mathrm{d}r}{\sqrt{2\pi}}\,e^{+ip\cdot x}\overset{\leftrightarrow }{\partial}_{t}\Phi(r,t)\end{split} \tag{28}\]
where \(\overset{\leftrightarrow}{\partial}_{t}\equiv\overset{\rightarrow}{\partial} _{t}-\overset{\leftarrow}{\partial}_{t}\) and \(p\cdot x=pr-\omega_{p}t\).
To successfully reconstruct the operator in the boundary of \(\mathrm{AdS}_{2}\), a crucial step involves determining the dynamics of the operator. This can be likened to selecting an appropriate asymptotic Hamiltonian in the framework of flat space. In our analysis, we focus on free fields, which corresponds to the utilization of a free asymptotic Hamiltonian. This assumption, known as asymptotic decoupling in \(\mathcal{S}\)-matrix theory, leads us to the notion of a Fock space. By considering these principles, we pave the way for a coherent and meaningful reconstruction of the operator within the AdS boundary.
### Flat limit and finite \(L\) improvements to subleading order
To have a flat limit, we need
\[\omega_{n}=2n=kL \tag{29}\]
which gives
\[\sum_{n\in\mathbb{N}}\rightarrow\frac{L}{2}\int\mathrm{d}k. \tag{30}\]
We substitute the following relation as
\[\frac{t}{L}=\tau\,\ \ \frac{r}{L}=\tan(\rho), \tag{31}\]
in the smearing function, and get
\[\begin{split} K_{\pm}(r,t;\tau^{\prime})&=-\frac{L}{2}\int\mathrm{d}k\,\frac{(-1)^{kL}}{kL\cos\left(\frac{\pi kL}{2}\right)}\sin\!\left(kL\tan^{-1}\!\left(\frac{r}{L}\right)\right)e^{\mp\imath kL\left(\frac{t}{L}-\tau^{\prime}\right)}\\ &=-\int\frac{\mathrm{d}k}{2k}\frac{e^{\mp\imath kt}}{\cos\left(\frac{\pi kL}{2}\right)}e^{\imath kL\left(\pi\pm\tau^{\prime}\right)}\sin\!\left(kL\tan^{-1}\!\left(\frac{r}{L}\right)\right).\end{split} \tag{32}\]
We expand \(\sin\left(kL\tan^{-1}\left(\frac{r}{L}\right)\right)\) at large \(L\) as
\[\sin\!\left(kL\tan^{-1}\!\left(\frac{r}{L}\right)\right)=\sin(kr)-\frac{kr^{3 }}{3L^{2}}\cos(kr)+\mathcal{O}\left(\frac{1}{L^{4}}\right). \tag{33}\]
Consequently, the smearing function can be derived up to subleading order in \(\mathcal{O}\!\left(\frac{1}{L^{2}}\right)\)
\[\begin{split} K_{\pm}(r,t;\tau^{\prime})&=-\int\frac{ \mathrm{d}k}{2k}\frac{e^{\mp ikt}}{\cos\!\left(\frac{\pi kL}{2}\right)}e^{ikL \left(\pi\pm\tau^{\prime}\right)}\sin(kr)\\ &+\frac{r^{3}}{6L^{2}}\int\mathrm{d}k\frac{e^{\mp ikt}}{\cos\! \left(\frac{\pi kL}{2}\right)}e^{ikL\left(\pi\pm\tau^{\prime}\right)}\cos(kr) \\ &+\mathcal{O}\left(\frac{1}{L^{4}}\right).\end{split} \tag{34}\]
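The expansion (33) feeding into eq.(34) can be checked symbolically; the SymPy sketch below (illustrative, expanding in \(\epsilon=1/L\)) confirms that the corrections start at \(\mathcal{O}(1/L^{4})\).

```python
# Series check of eq.(33): sin(k L arctan(r/L)) at large L, via eps = 1/L.
import sympy as sp

k, r = sp.symbols('k r', positive=True)
eps = sp.symbols('epsilon', positive=True)

phase = (k / eps * sp.atan(r * eps)).series(eps, 0, 4).removeO()
expansion = sp.sin(phase).series(eps, 0, 4).removeO()
target = sp.sin(k*r) - k*r**3*eps**2/3 * sp.cos(k*r)

print(sp.simplify(expansion - target))  # expected: 0
```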
In order to derive the creation/annihilation operators presented in eq.(28), we begin by examining the quantity
\[\begin{split}&\int\mathrm{d}r\,e^{-\imath p\cdot x}\overset{\leftrightarrow}{\partial}_{t}K_{+}(r,t;\tau^{\prime})\\ &=\int\mathrm{d}r\,e^{-\imath pr+\imath\omega_{p}t}\left[\partial_{t}K_{+}(r,t;\tau^{\prime})-\imath\omega_{p}K_{+}(r,t;\tau^{\prime})\right].\end{split} \tag{35}\]
By substituting the expression of the subleading order smearing function from eq.(34), the integrand in eq.(35) without the exponential term can be expressed as follows
\[\begin{split}&\partial_{t}K_{+}(r,t;\tau^{\prime})-\imath\omega_{p}K_{+}(r,t;\tau^{\prime})\\ &=\imath\int\frac{\mathrm{d}k}{2k}\frac{e^{-\imath kt}}{\cos\!\left(\frac{\pi kL}{2}\right)}e^{\imath kL\left(\pi+\tau^{\prime}\right)}\left(k+\omega_{p}\right)\sin(kr)\\ &-\frac{\imath r^{3}}{12L^{2}}\int\mathrm{d}k\frac{e^{-\imath kt}}{\cos\!\left(\frac{\pi kL}{2}\right)}e^{\imath kL\left(\pi+\tau^{\prime}\right)}\left(k+\omega_{p}\right)\cos(kr)\\ &+\mathcal{O}\left(\frac{1}{L^{4}}\right).\end{split} \tag{36}\]
The annihilation and creation operators up to subleading order in \(\mathcal{O}\!\left(\frac{1}{L^{2}}\right)\) can be expressed as
\[\begin{split}& a_{p}=\frac{\imath}{4}\sqrt{\frac{\pi}{\omega_{p} }}\frac{1}{\cos\!\left(\frac{\pi\omega_{p}L}{2}\right)}\left[1-\frac{1}{\omega _{p}^{2}L^{2}}+\mathcal{O}\left(\frac{1}{L^{4}}\right)\right]\\ &\int_{-\pi}^{\pi}\mathrm{d}\tau^{\prime}\,e^{\imath\omega_{p}L \left(\pi+\tau^{\prime}\right)}\mathcal{O}_{+}(\tau^{\prime})\\ & a_{p}^{\dagger}=-\frac{\imath}{4}\sqrt{\frac{\pi}{\omega_{p}}} \frac{1}{\cos\!\left(\frac{\pi\omega_{p}L}{2}\right)}\left[1-\frac{1}{\omega_{p }^{2}L^{2}}+\mathcal{O}\left(\frac{1}{L^{4}}\right)\right]\\ &\int_{-\pi}^{\pi}\mathrm{d}\tau^{\prime}\,e^{\imath\omega_{p}L \left(\pi-\tau^{\prime}\right)}\mathcal{O}_{-}(\tau^{\prime}).\end{split} \tag{37}\]
The details of the computation of the creation and annihilation operators as a smearing over global time of the CFT operators is worked out in Appendix B.
We have the expression for \(\Phi\) as a function of CFT operators
\[\Phi(\rho,\tau)=\int_{-\pi}^{\pi}\mathrm{d}\tau^{\prime}\left[K_{+}(\rho,\tau; \tau^{\prime})\mathcal{O}_{+}(\tau^{\prime})+K_{-}(\rho,\tau;\tau^{\prime}) \mathcal{O}_{-}(\tau^{\prime})\right]. \tag{38}\]
Therefore, extracting the annihilation or creation operator amounts to performing a Fourier transform of a CFT operator smeared over the boundary of \(\mathrm{AdS}_{2}\). The map in eq.(37) is a map between a creation or an annihilation operator in flat-space, and a CFT operator.
## V Warm up exercise: \(1\to 1\)\(\mathcal{S}\)-matrix from the flat limit of CFT 2-point function
In this section, we calculate the \(1\to 1\)\(\mathcal{S}\)-matrix from the flat limit of CFT 2-point function. The key component in the formulation of the \(\mathcal{S}\)-matrix is the scattering states. To make a transition from the AdS picture to the flat-space picture, we need to understand the relationship between operators and scattering states, specifically how we can construct the scattering states by smearing CFT operators. We establish the mapping of scattering states and CFT operators in the earlier section IV.
In fig.1, the delineated red and blue regions indicate the specific locations where the insertion of a CFT operator is necessary. The red region corresponds to the insertion of a CFT operator associated with an outgoing massless scattering state, while the blue region represents the insertion of a CFT operator linked to an incoming massless scattering state. These strategic insertions play a crucial role in understanding the scattering dynamics within the depicted scenario.
We now can compute \(1\to 1\)\(\mathcal{S}\)-matrix from CFT \(2\)-point function. The formula expressing the mapping between the annihilation and creation modes and CFT operators derived in
Figure 1: The dashed line represents the massless particle in the center of AdS. A massless scattering state corresponds to windows (red and blue regions in the CFT in the figure) of width \(L^{-1}\), at \(\tau=\pm\pi\). These red and blue portions of the CFT depict flat-space null infinity; this is where massless particles reach the AdS boundary.
the earlier section IV is given by
\[\begin{split}& a_{p}=\frac{\imath}{4}\sqrt{\frac{\pi}{\omega_{p}}} \frac{1}{\cos\!\left(\frac{\pi\omega_{p}L}{2}\right)}\left[1-\frac{1}{\omega_{p }^{2}L^{2}}+\mathcal{O}\left(\frac{1}{L^{4}}\right)\right]\\ &\int_{-\pi}^{\pi}\mathrm{d}\tau^{\prime}\,e^{\imath\omega_{p}L( \pi+\tau^{\prime})}\mathcal{O}_{+}(\tau^{\prime})\\ & a_{p}^{\dagger}=-\frac{\imath}{4}\sqrt{\frac{\pi}{\omega_{p}}} \frac{1}{\cos\!\left(\frac{\pi\omega_{p}L}{2}\right)}\left[1-\frac{1}{\omega_ {p}^{2}L^{2}}+\mathcal{O}\left(\frac{1}{L^{4}}\right)\right]\\ &\int_{-\pi}^{\pi}\mathrm{d}\tau^{\prime}\,e^{\imath\omega_{p}L( \pi-\tau^{\prime})}\mathcal{O}_{-}(\tau^{\prime}).\end{split} \tag{39}\]
In the formula of eq.(39), we take the AdS corrections to the flat-space creation and annihilation modes to subleading order, i.e., \(\mathcal{O}\left(\frac{1}{L^{2}}\right)\). The global time integral is dominated by the operator insertion points \(\tau^{\prime}=\pm\pi\), due to the extremely oscillatory nature of the exponential at large AdS radius.
Now, we construct "in" and "out" Fock space scattering states by acting the modes to the vacuum \(\ket{0}\)
\[\begin{split}\ket{p,\text{in}}&=\,a_{\text{in},p}^ {\dagger}\ket{0}\\ \bra{\text{out},p}&=\,\bra{0}a_{\text{out},p}.\end{split} \tag{40}\]
For \(1\to 1\) scattering, the \(\mathcal{S}\)-matrix, which is the overlap between scattering states, is given in terms of the CFT \(2\)-point function by
\[\begin{split}\mathcal{S}_{1\to 1}&=\int\mathrm{d}\tau_{2}^{\prime}\,e^{\imath\omega_{p_{2}}L(\pi-\tau_{2}^{\prime})}\int\mathrm{d}\tau_{1}^{\prime}\,e^{\imath\omega_{p_{1}}L(\pi+\tau_{1}^{\prime})}\,\left\langle\mathcal{O}_{+}(\tau_{2}^{\prime})\mathcal{O}_{-}(\tau_{1}^{\prime})\right\rangle\\ &\times\frac{\pi}{16\sqrt{\omega_{p_{1}}\omega_{p_{2}}}}\left[1-\frac{1}{L^{2}}\left(\frac{1}{\omega_{p_{1}}^{2}}+\frac{1}{\omega_{p_{2}}^{2}}\right)+\mathcal{O}\left(\frac{1}{L^{4}}\right)\right]\\ &\times\sec\!\left(\frac{\pi L\omega_{p_{1}}}{2}\right)\sec\!\left(\frac{\pi L\omega_{p_{2}}}{2}\right)\!.\end{split} \tag{41}\]
Using the map between the CFT operator, and the scattering states in eq.(39), we derive the map between \(1\to 1\)\(\mathcal{S}\)-matrix and CFT \(2\)-point function in eq.(41). Now, the Lorentzian CFT \(2\)-point function for the primary field is given by [11]
\[\begin{split}\left\langle\mathcal{O}_{+}(\tau_{2}^{\prime})\mathcal{O}_{-}(\tau_{1}^{\prime})\right\rangle&=\frac{e^{\imath(\tau_{2}^{\prime}-\tau_{1}^{\prime})\Delta}}{\left[1-e^{\imath(\tau_{2}^{\prime}-\tau_{1}^{\prime})}\right]^{2\Delta}}\\ &=\sum_{n\in\mathbb{N}}e^{-\imath(\tau_{1}^{\prime}-\tau_{2}^{\prime})(\Delta+n)}\frac{\Gamma(2\Delta+n)}{\Gamma(2\Delta)n!}.\end{split} \tag{42}\]
We consider the integral
\[\int_{-\pi}^{\pi}\mathrm{d}\tau_{2}^{\prime}\,e^{\imath\omega_{p_{2}}L(\pi- \tau_{2}^{\prime})}\int_{-\pi}^{\pi}\mathrm{d}\tau_{1}^{\prime}\,e^{\imath \omega_{p_{1}}L(\pi+\tau_{1}^{\prime})}\,e^{-\imath(\tau_{1}^{\prime}-\tau_{2}^ {\prime})(\Delta+n)}. \tag{43}\]
We change variables of integration as
\[\tau_{1}^{\prime}=-\pi+\tilde{\tau}_{1}\,\,\,\,\text{and}\,\,\,\tau_{2}^{ \prime}=\pi-\tilde{\tau}_{2}. \tag{44}\]
So, we have
\[\begin{split}&\int_{0}^{2\pi}\mathrm{d}\tilde{\tau}_{2}e^{ \imath\tilde{\tau}_{2}\left(\omega_{p_{2}}L-\left(\Delta+\frac{\omega_{n}}{2} \right)\right)}\int_{0}^{2\pi}\mathrm{d}\tilde{\tau}_{1}e^{\imath\tilde{\tau}_ {1}\left(\omega_{p_{1}}L-\left(\Delta+\frac{\omega_{n}}{2}\right)\right)}e^{2 \pi\imath\Delta}\\ &=(2\pi)^{2}\delta_{K}\left(\omega_{p_{2}}L-\Delta-\frac{\omega_{ n}}{2}\right)\delta_{K}\left(\omega_{p_{1}}L-\Delta-\frac{\omega_{n}}{2}\right)e^{2 \pi\imath\Delta}\end{split} \tag{45}\]
where \(\omega_{n}=2n\).
We substitute \(\Delta+n=\eta L\) which implies that for large \(L\), we have
\[\sum_{n\in\mathbb{N}}\to L\int\mathrm{d}\eta. \tag{46}\]
In this limit, the Kronecker deltas become
\[\begin{split}&\delta_{K}\left(\omega_{p_{2}}L-\Delta-\frac{ \omega_{n}}{2}\right)\delta_{K}\left(\omega_{p_{1}}L-\Delta-\frac{\omega_{n}}{2} \right)\\ &\to\delta\left(\omega_{p_{2}}-\eta\right)\delta\left(\omega_{p_{1 }}-\eta\right).\end{split} \tag{47}\]
Now, we have
\[\frac{\Gamma(2\Delta+n)}{\Gamma(2\Delta)n!}\to\frac{1}{\Gamma(2\Delta)}\frac{ \Gamma\left(\eta L+\Delta\right)}{\Gamma\left(\eta L+1-\Delta\right)} \tag{48}\]
which reduces to the following expression for large \(L\) as
\[\frac{\Gamma(2\Delta+n)}{\Gamma(2\Delta)n!}\to\frac{1}{\Gamma(2\Delta)}\left( \eta L\right)^{2\Delta-1}. \tag{49}\]
In deriving eq.(49) we use the series expansion of \(\frac{\Gamma(x+a)}{\Gamma(x+b)}\) around \(x=\infty\) which gives
\[\frac{\Gamma(x+a)}{\Gamma(x+b)}=x^{a-b}\Bigg{(}1+\mathcal{O}\Big{(}\frac{1}{x} \Big{)}\Bigg{)}. \tag{50}\]
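The asymptotics (50) is elementary to test numerically; the mpmath snippet below (illustrative, with the sample values \(a=0.75\), \(b=0.25\) chosen by us) shows the ratio approaching \(x^{a-b}\).

```python
# Numerical check of Gamma(x+a)/Gamma(x+b) ~ x^(a-b) for large x, cf. eq.(50).
from mpmath import mp, gamma

mp.dps = 30
a, b = 0.75, 0.25   # sample values; for eq.(49), a = Delta and b = 1 - Delta
for x in (1e2, 1e4, 1e6):
    ratio = gamma(x + a) / gamma(x + b)
    print(x, ratio / x**(a - b))  # tends to 1 as x grows
```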
In conclusion, we obtain the following expression
\[\begin{split}&\int\mathrm{d}\tau_{2}^{\prime}\,e^{\imath\omega_{p_{2}} L(\pi-\tau_{2}^{\prime})}\int\mathrm{d}\tau_{1}^{\prime}\,e^{\imath\omega_{p_{1}}L(\pi+ \tau_{1}^{\prime})}\,\bra{\mathcal{O}_{+}(\tau_{2}^{\prime})\mathcal{O}_{-}( \tau_{1}^{\prime})}\\ &=(2\pi)^{2}\frac{e^{2\pi\imath\Delta}}{\Gamma(2\Delta)}L\int_{ \eta=0}^{\infty}\mathrm{d}\eta\,\delta\left(\omega_{p_{2}}-\eta\right)\delta \left(\omega_{p_{1}}-\eta\right)\left(\eta L\right)^{2\Delta-1}\\ &=(2\pi)^{2}\frac{e^{2\pi\imath\Delta}}{\Gamma(2\Delta)}L^{2\Delta} \omega_{p_{1}}^{2\Delta-1}\delta\left(\omega_{p_{2}}-\omega_{p_{1}}\right).\end{split} \tag{51}\]
For the on-shell massless particles \(\omega_{p_{i}}=p_{i}\). So, the \(1\to 1\)\(\mathcal{S}\)-matrix becomes
\[\begin{split}\mathcal{S}_{1\to 1}&=\frac{\pi^{3}e^{2\pi\imath\Delta}}{4 \Gamma(2\Delta)}L^{2\Delta}\omega^{2(\Delta-1)}\sec^{2}\!\left(\frac{\pi L \omega}{2}\right)\!\delta(p_{2}-p_{1})\\ &\left[1-\frac{2}{\omega^{2}L^{2}}+\mathcal{O}\left(\frac{1}{L^{4} }\right)\right]\end{split} \tag{52}\]
where \(\omega=\omega_{p_{1}}=\omega_{p_{2}}\). Now, the conformal dimension of the dual primary CFT operator corresponding to the massless bulk scalar field is \(\Delta=1\) since
\[\Delta(\Delta-1)=m^{2}L^{2}. \tag{53}\]
The \(\mathcal{S}\)-matrix reduces to
\[\mathcal{S}_{1\to 1}=\frac{\pi^{3}}{4}L^{2}\sec^{2}\biggl{(}\frac{\pi L\omega}{2} \biggr{)}\delta(p_{2}-p_{1})\left[1-\frac{2}{\omega^{2}L^{2}}+\mathcal{O} \left(\frac{1}{L^{4}}\right)\right]. \tag{54}\]
Finally, the \(\mathcal{S}\)-matrix derived from the CFT \(2\)-point function is proportional to the momentum-conserving delta function, denoted as \(\delta(p_{2}-p_{1})\).
## VI \(\mathcal{S}\)-matrix in the flat limit from the 4-point contact Witten diagram
In this section, we evaluate the \(\mathcal{S}\)-matrix in the flat limit from the 4-point contact Witten diagram. We examine the 4-point contact Witten diagram, where the interaction term is given by
\[\mathcal{L}=\lambda\phi^{4}, \tag{55}\]
where, \(\lambda\) is the coupling constant. The process of calculating Witten diagrams is made easier by employing the embedding space framework. Lorentzian global \(\mathrm{AdS}_{2}\) is given by
\[\mathrm{d}s^{2}=\frac{L^{2}}{\cos^{2}(\rho)}\left(-\mathrm{d}\tau^{2}+ \mathrm{d}\rho^{2}\right). \tag{56}\]
The global \(\mathrm{AdS}_{2}\) is embedded in 3-dimensional Minkowski spacetime. The embedding coordinate \(X\) is such that
\[X^{2}=-L^{2}. \tag{57}\]
The conformal boundary of AdS coordinate or CFT embedding coordinate can be thought of as null ray
\[P^{2}=0\ \,\ \ \ P\sim\lambda P\ \ (\lambda\in\mathbb{R}). \tag{58}\]
The parametrization of the embedding coordinates \(X\) and \(P\) is given by
\[\begin{split} X&=\frac{L}{\cos\rho}(\cos\tau,-\imath \sin\tau,\sin\rho)\\ P&=(\cos\tau,-\imath\sin\tau,1).\end{split} \tag{59}\]
The key constituent to evaluate the \(\mathrm{AdS}_{2}\) 4-point amplitude for contact interaction is the bulk-to-boundary propagator in \(\mathrm{AdS}_{2}\) which is given by
\[\begin{split} G_{b\partial}(X,P)&=\frac{\mathcal{C}_{\Delta}}{(-2P.X/L)^{\Delta}}\\ &=\frac{\mathcal{C}_{\Delta}}{\Gamma(\Delta)}\int_{0}^{\infty}\mathrm{d}\mathfrak{t}\,\mathfrak{t}^{\Delta-1}e^{\frac{2\mathfrak{t}P.X}{L}},\end{split} \tag{60}\]
where, the normalization constant is given by [25, 4]
\[\mathcal{C}_{\Delta}=\frac{\Gamma(\Delta)}{2\sqrt{\pi}\Gamma\Bigl{(}\Delta+ \frac{1}{2}\Bigr{)}}. \tag{61}\]
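The Schwinger parametrization in eq.(60) is the identity \(x^{-\Delta}=\Gamma(\Delta)^{-1}\int_{0}^{\infty}\mathrm{d}\mathfrak{t}\,\mathfrak{t}^{\Delta-1}e^{-\mathfrak{t}x}\) applied to \(x=-2P.X/L\); a quick numerical check (mpmath, with illustrative values of our own) is given below.

```python
# Numerical check of the Schwinger trick behind eq.(60):
# 1/x^Delta = (1/Gamma(Delta)) * int_0^oo dt t^(Delta-1) e^(-t x).
from mpmath import mp, quad, gamma, exp

mp.dps = 25
Delta, x = 1.0, 2.7   # illustrative values (Delta = 1 is the massless case)
integral = quad(lambda t: t**(Delta - 1) * exp(-t * x), [0, mp.inf])
print(integral / gamma(Delta), 1 / x**Delta)  # the two agree
```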
The \(4\)-point amplitude in \(\mathrm{AdS}_{2}\) associated with the contact interaction is expressed as follows
\[\mathcal{A}=\lambda\,\int_{\text{AdS}}\mathrm{d}^{2}X\prod_{i=1}^{4}G_{b \partial}(X,P_{i}), \tag{62}\]
where \(G_{b\partial}\) is the bulk-to-boundary propagator
\[G_{b\partial}(X,P_{i})=\frac{\mathcal{C}_{\Delta_{i}}}{(-2P_{i}.X/L)^{\Delta_{ i}}}. \tag{63}\]
Using the representation in eq.(60) in eq.(62), we have
\[\mathcal{A}=\lambda L^{2}\Bigg{(}\prod_{i=1}^{4}\mathcal{C}_{\Delta_{i}}\Bigg{)}D _{\Delta_{1}\Delta_{2}\Delta_{3}\Delta_{4}}(P_{i}), \tag{64}\]
where, the \(D\)-function is given by
\[\begin{split} D_{\Delta_{1}\Delta_{2}\Delta_{3}\Delta_{4}}(P_{i})&=\prod_{i=1}^{4}\frac{1}{\Gamma(\Delta_{i})}\int_{0}^{\infty}\prod_{i=1}^{4}\mathrm{d}\mathfrak{t}_{i}\,\mathfrak{t}_{i}^{\Delta_{i}-1}\\ &\quad\int_{\text{AdS}}\mathrm{d}(X/L)\,e^{\frac{2\left(\sum_{i=1}^{4}\mathfrak{t}_{i}P_{i}\right)\cdot X}{L}}.\end{split} \tag{65}\]
By performing the integration over the bulk \(X\) coordinate, the \(4\)-point amplitude is simplified to the following expression
\[\begin{split}\mathcal{A}&=L^{2}\pi\Gamma\Biggl{(} \frac{\sum_{i=1}^{4}\Delta_{i}-1}{2}\Biggr{)}\prod_{i=1}^{4}\frac{\mathcal{C}_{ \Delta_{i}}}{\Gamma(\Delta_{i})}\\ &\int_{0}^{\infty}\prod_{i=1}^{4}\mathrm{dt}_{i}\,\mathfrak{t}_{i }^{\Delta_{i}-1}e^{-\sum_{1\leq i<j\leq 4}\mathfrak{t}_{i}\mathfrak{t}_{j}P_{ij}}, \end{split} \tag{66}\]
where,
\[\mathcal{C}_{\Delta_{i}}=\frac{\Gamma(\Delta_{i})}{2\sqrt{\pi}\,\Gamma\Bigl{(} \Delta_{i}+\frac{1}{2}\Bigr{)}}. \tag{67}\]
### Witten diagram computation in global \(\mathrm{AdS}_{2}\) coordinates
In this section, we will express the \(4\)-point amplitude in global \(\mathrm{AdS}_{2}\) coordinates. The boundary coordinates of the CFT are defined by the parametrization of null rays, resulting in the following expression
\[P_{ij}=-2P_{i}.P_{j}\,\ \text{since}\ \ P_{i}^{2}=0\,\ P_{j}^{2}=0. \tag{68}\]
We choose global time coordinates to parametrize \(P_{i}\), and \(P_{j}\) given by
\[\begin{split} P_{i}&=(\cos\tau_{i},-\imath\sin\tau_{i },1)\\ P_{j}&=(\cos\tau_{j},-\imath\sin\tau_{j},1).\end{split} \tag{69}\]
Substituting eq.(69) in eq.(68) we get
\[P_{ij}=2(\cos(\tau_{i}-\tau_{j})-1). \tag{70}\]
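A short symbolic check of eqs.(69)-(70) is given below (illustrative SymPy; it assumes the embedding metric \(\mathrm{diag}(-1,+1,+1)\), the signature under which the parametrization (59) with the factor \(-\imath\sin\tau\) satisfies \(X^{2}=-L^{2}\) and \(P^{2}=0\)).

```python
# Check that P(tau) = (cos tau, -i sin tau, 1) is null and that
# P_ij = -2 P_i . P_j = 2 (cos(tau_i - tau_j) - 1), cf. eq.(70).
import sympy as sp

ti, tj = sp.symbols('tau_i tau_j', real=True)
eta = sp.diag(-1, 1, 1)   # assumed embedding signature

def P(t):
    return sp.Matrix([sp.cos(t), -sp.I * sp.sin(t), 1])

dot = lambda A, B: (A.T * eta * B)[0]
print(sp.simplify(dot(P(ti), P(ti))))                                # 0
print(sp.simplify(-2*dot(P(ti), P(tj)) - 2*(sp.cos(ti - tj) - 1)))   # 0
```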
Now, we restrict to massless scalars which are dual to conformal scalars of dimension \(\Delta_{i}=1\). As a result, the \(4\)-point amplitude in \(\mathrm{AdS}_{2}\) is reduced to
\[\mathcal{A}=\frac{L^{2}}{2\pi^{\frac{7}{2}}}\int_{0}^{\infty}\prod_{i=1}^{4}d \mathfrak{t}_{i}e^{-2\sum_{1\leq i<j\leq 4}\mathfrak{t}_{i}\mathfrak{t}_{j}(\cos(\tau_{i}- \tau_{j})-1)}. \tag{71}\]
### \(\mathbf{2\to 2}\)\(\mathcal{S}\)-matrix from well-chosen kinematics
In this section, we evaluate the \(2\to 2\)\(\mathcal{S}\)-matrix in the flat limit from the \(4\)-point amplitude with well-chosen kinematics. By examining a \(\phi^{4}\) contact interaction, we can explicitly illustrate how the delta functions, which signify the equality of each pair of incoming and outgoing momenta, emerge naturally.
We construct "in" and "out" scattering states by acting the modes to the vacuum \(\ket{0}\)
\[\ket{p_{1},p_{2};\text{in}} =\,a_{\text{in},p_{1}}^{\dagger}a_{\text{in},p_{2}}^{\dagger}\ket{0} \tag{72}\] \[\bra{\text{out};p_{3},p_{4}} =\,\bra{0}a_{\text{out},p_{3}}a_{\text{out},p_{4}}.\]
For \(2\to 2\) scattering, the \(\mathcal{S}\)-matrix is
\[\mathcal{S}_{2\to 2} =\bra{\text{out};p_{3},p_{4}|p_{1},p_{2};\text{in}} \tag{73}\] \[=\bra{0}a_{\text{out},p_{3}}a_{\text{out},p_{4}}a_{\text{in},p_{ 1}}^{\dagger}a_{\text{in},p_{2}}^{\dagger}\ket{0}.\]
The \(\mathcal{S}\)-matrix in terms of the CFT \(4\)-point function is given by
\[\begin{split}\mathcal{S}_{2\to 2}&=\int\mathrm{d}\tau_{3}^{\prime}\,e^{\imath\omega_{p_{3}}L(\pi-\tau_{3}^{\prime})}\int\mathrm{d}\tau_{4}^{\prime}\,e^{\imath\omega_{p_{4}}L(\pi-\tau_{4}^{\prime})}\\ &\quad\int\mathrm{d}\tau_{1}^{\prime}\,e^{\imath\omega_{p_{1}}L(\pi+\tau_{1}^{\prime})}\int\mathrm{d}\tau_{2}^{\prime}\,e^{\imath\omega_{p_{2}}L(\pi+\tau_{2}^{\prime})}\\ &\quad\left\langle\mathcal{O}_{+}(\tau_{4}^{\prime})\mathcal{O}_{+}(\tau_{3}^{\prime})\mathcal{O}_{-}(\tau_{2}^{\prime})\mathcal{O}_{-}(\tau_{1}^{\prime})\right\rangle\\ &\quad\times\frac{\pi^{2}}{64\sqrt{\omega_{p_{1}}\omega_{p_{2}}\omega_{p_{3}}\omega_{p_{4}}}}\\ &\quad\times\left[1-\frac{1}{L^{2}}\left(\frac{1}{\omega_{p_{1}}^{2}}+\frac{1}{\omega_{p_{2}}^{2}}+\frac{1}{\omega_{p_{3}}^{2}}+\frac{1}{\omega_{p_{4}}^{2}}\right)+\mathcal{O}\left(\frac{1}{L^{4}}\right)\right]\\ &\quad\times\sec\!\left(\frac{\pi L\omega_{p_{1}}}{2}\right)\sec\!\left(\frac{\pi L\omega_{p_{2}}}{2}\right)\sec\!\left(\frac{\pi L\omega_{p_{3}}}{2}\right)\sec\!\left(\frac{\pi L\omega_{p_{4}}}{2}\right)\!.\end{split} \tag{74}\]
With well-chosen kinematics, for \(2\to 2\) scattering, the \(4\)-point function (modulo some overall factor) is written as
\[\begin{split}&\left\langle\mathcal{O}_{+}(\tau_{4}^{\prime})\mathcal{O}_{+}(\tau_{3}^{\prime})\mathcal{O}_{-}(\tau_{2}^{\prime})\mathcal{O}_{-}(\tau_{1}^{\prime})\right\rangle\\ &=\int_{0}^{\infty}\prod_{i=1}^{4}\mathrm{d}\mathfrak{t}_{i}\,e^{-2\mathfrak{t}_{1}\mathfrak{t}_{3}\left(\cos\left(\tau_{1}^{\prime}-\tau_{3}^{\prime}\right)-1\right)}\,e^{-2\mathfrak{t}_{2}\mathfrak{t}_{4}\left(\cos\left(\tau_{2}^{\prime}-\tau_{4}^{\prime}\right)-1\right)}.\end{split} \tag{75}\]
We choose the kinematics in such a way that \(P_{ij}=2(\cos(\tau_{i}-\tau_{j})-1)=0\) for the other \(4\) out of \(\binom{4}{2}=6\) pairings, i.e., \(\{i,j\}\in\{\{1,2\},\{1,4\},\{2,3\},\{3,4\}\}\).
Now, using the Jacobi-Anger expansion \(e^{z\cos(\theta)}=\sum_{n\in\mathbb{Z}}I_{n}(z)e^{\imath n\theta}\) together with \(I_{n}(-z)=(-1)^{n}I_{n}(z)\), we have

\[e^{-z\cos(\theta)}=\sum_{n\in\mathbb{Z}}(-1)^{n}I_{n}(z)e^{-\imath n\theta}. \tag{76}\]
Here, \(I_{n}(z)\) is the \(n\)-th modified Bessel function of the first kind. So, using eq.(76), the integrands in eq.(75) become
\[e^{-2\mathfrak{t}_{1}\mathfrak{t}_{3}\left(\cos\left(\tau_{1}^{\prime}-\tau_{3}^{\prime}\right)-1\right)}=e^{2\mathfrak{t}_{1}\mathfrak{t}_{3}}\sum_{m\in\mathbb{Z}}(-1)^{m}I_{m}\left(2\mathfrak{t}_{1}\mathfrak{t}_{3}\right)e^{-\imath m\left(\tau_{1}^{\prime}-\tau_{3}^{\prime}\right)} \tag{77}\]
and similarly
\[e^{-2\mathfrak{t}_{2}\mathfrak{t}_{4}\left(\cos\left(\tau_{2}^{\prime}-\tau_{4}^{\prime}\right)-1\right)}=e^{2\mathfrak{t}_{2}\mathfrak{t}_{4}}\sum_{n\in\mathbb{Z}}(-1)^{n}I_{n}\left(2\mathfrak{t}_{2}\mathfrak{t}_{4}\right)e^{-\imath n\left(\tau_{2}^{\prime}-\tau_{4}^{\prime}\right)}. \tag{78}\]
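The expansion (76) can be verified numerically; the mpmath snippet below uses illustrative values of \(z\) and \(\theta\) of our own choosing.

```python
# Numerical check of eq.(76):
# e^{-z cos(theta)} = sum_n (-1)^n I_n(z) e^{-i n theta}.
from mpmath import mp, besseli, exp, cos, mpc

mp.dps = 25
z, theta = 1.3, 0.7   # illustrative values
lhs = exp(-z * cos(theta))
rhs = sum((-1)**n * besseli(n, z) * exp(mpc(0, -n * theta))
          for n in range(-30, 31))
print(lhs, rhs.real)  # agree to working precision; imaginary part ~ 0
```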
The \(4\)-point amplitude associated with the contact interaction, as given in eq.(75), can be expressed as
\[\begin{split}&\left\langle\mathcal{O}_{+}(\tau_{4}^{\prime})\mathcal{O}_{+}(\tau_{3}^{\prime})\mathcal{O}_{-}(\tau_{2}^{\prime})\mathcal{O}_{-}(\tau_{1}^{\prime})\right\rangle\\ &=\sum_{m,n\in\mathbb{Z}}C_{m,n}\,e^{-\imath m\left(\tau_{1}^{\prime}-\tau_{3}^{\prime}\right)}e^{-\imath n\left(\tau_{2}^{\prime}-\tau_{4}^{\prime}\right)}\end{split} \tag{79}\]
where the coefficients \(C_{m,n}\) are defined as
\[C_{m,n}\equiv(-1)^{m+n}\int_{0}^{\infty}\prod_{i=1}^{4}\mathrm{d}\mathfrak{t}_{i}\,e^{2\mathfrak{t}_{1}\mathfrak{t}_{3}}e^{2\mathfrak{t}_{2}\mathfrak{t}_{4}}I_{m}\left(2\mathfrak{t}_{1}\mathfrak{t}_{3}\right)I_{n}\left(2\mathfrak{t}_{2}\mathfrak{t}_{4}\right). \tag{80}\]
We examine the integral involved in the calculation
\[\int_{-\pi}^{\pi}\mathrm{d}\tau_{3}^{\prime}\,e^{i\omega_{p_{3}}L(\pi-\tau_{3}^{ \prime})}\int_{-\pi}^{\pi}\mathrm{d}\tau_{1}^{\prime}\,e^{i\omega_{p_{1}}L(\pi+ \tau_{1}^{\prime})}\,e^{-\imath m\left(\tau_{1}^{\prime}-\tau_{3}^{\prime} \right)}. \tag{81}\]
We substitute
\[\tau_{1}^{\prime}=-\pi+\tilde{\tau}_{1}\,\text{ and }\,\tau_{3}^{ \prime}=\pi-\tilde{\tau}_{3} \tag{82}\]
in eq.(81), and we have
\[\int_{0}^{2\pi}\mathrm{d}\tilde{\tau}_{3}e^{i\tilde{\tau}_{3} \left(\omega_{p_{3}}L-\frac{\omega_{m}}{2}\right)}\int_{0}^{2\pi}\mathrm{d} \tilde{\tau}_{1}e^{i\tilde{\tau}_{1}\left(\omega_{p_{1}}L-\frac{\omega_{m}}{2} \right)} \tag{83}\] \[=(2\pi)^{2}\delta_{K}\left(\omega_{p_{3}}L-\frac{\omega_{m}}{2} \right)\delta_{K}\left(\omega_{p_{1}}L-\frac{\omega_{m}}{2}\right)\]
where \(\omega_{m}=2m\).
Similarly, we have
\[\begin{split}&\int_{-\pi}^{\pi}\mathrm{d}\tau_{4}^{\prime}\,e^{\imath\omega_{p_{4}}L(\pi-\tau_{4}^{\prime})}\int_{-\pi}^{\pi}\mathrm{d}\tau_{2}^{\prime}\,e^{\imath\omega_{p_{2}}L(\pi+\tau_{2}^{\prime})}\,e^{-\imath n\left(\tau_{2}^{\prime}-\tau_{4}^{\prime}\right)}\\ &=(2\pi)^{2}\delta_{K}\left(\omega_{p_{4}}L-\frac{\omega_{n}}{2}\right)\delta_{K}\left(\omega_{p_{2}}L-\frac{\omega_{n}}{2}\right).\end{split} \tag{84}\]
We substitute \(m=\eta_{m}L\) and \(n=\eta_{n}L\). In the limit of large \(L\), we get
\[\sum_{m,n\in\mathbb{Z}}\to L^{2}\int\mathrm{d}\eta_{m}\mathrm{d}\eta_{n}. \tag{85}\]
In this large \(L\) limit, the Kronecker deltas become

\[\begin{split}&\delta_{K}\left(\omega_{p_{3}}L-\frac{\omega_{m}}{2}\right)\delta_{K}\left(\omega_{p_{1}}L-\frac{\omega_{m}}{2}\right)\delta_{K}\left(\omega_{p_{4}}L-\frac{\omega_{n}}{2}\right)\delta_{K}\left(\omega_{p_{2}}L-\frac{\omega_{n}}{2}\right)\\ &\to\delta\left(\omega_{p_{3}}-\eta_{m}\right)\delta\left(\omega_{p_{1}}-\eta_{m}\right)\delta\left(\omega_{p_{4}}-\eta_{n}\right)\delta\left(\omega_{p_{2}}-\eta_{n}\right).\end{split} \tag{86}\]
Hence, we obtain
\[\begin{split}&\int\mathrm{d}\tau_{3}^{\prime}\,e^{\mathrm{i}\omega_{p _{3}}L(\pi-\tau_{3}^{\prime})}\int\mathrm{d}\tau_{4}^{\prime}\,e^{\mathrm{i} \omega_{p_{4}}L(\pi-\tau_{4}^{\prime})}\int\mathrm{d}\tau_{1}^{\prime}\,e^{ \mathrm{i}\omega_{p_{1}}L(\pi+\tau_{1}^{\prime})}\\ &\int\mathrm{d}\tau_{2}^{\prime}\,e^{\mathrm{i}\omega_{p_{2}}L( \pi+\tau_{2}^{\prime})}\times\langle\mathcal{O}_{+}(\tau_{4}^{\prime})\mathcal{ O}_{+}(\tau_{3}^{\prime})\mathcal{O}_{-}(\tau_{2}^{\prime})\mathcal{O}_{-}(\tau_{1}^{ \prime})\rangle\\ &=(2\pi)^{4}L^{2}\int_{-\infty}^{\infty}\mathrm{d}\eta_{m} \mathrm{d}\eta_{n}\,C(L,\eta_{m},\eta_{n})\\ &\delta\left(\omega_{p_{3}}-\eta_{m}\right)\delta\left(\omega_{p_ {1}}-\eta_{m}\right)\delta\left(\omega_{p_{4}}-\eta_{n}\right)\delta\left( \omega_{p_{2}}-\eta_{n}\right)\\ &=(2\pi)^{4}L^{2}C(L,\omega_{p_{1}},\omega_{p_{2}})\delta\left( \omega_{p_{3}}-\omega_{p_{1}}\right)\delta\left(\omega_{p_{4}}-\omega_{p_{2}} \right).\end{split} \tag{87}\]
For the on-shell massless particles \(\omega_{p_{i}}=p_{i}\). So, the \(2\to 2\)\(S\)-matrix is given by
\[\begin{split}\mathcal{S}_{2\to 2}&=\frac{\pi^{6}}{4 \omega_{p_{1}}\omega_{p_{2}}}L^{2}\sec^{2}\!\left(\frac{\pi L\omega_{p_{1}}}{2} \right)\sec^{2}\!\left(\frac{\pi L\omega_{p_{2}}}{2}\right)\\ & C(L,\omega_{p_{1}},\omega_{p_{2}})\delta\left(p_{3}-p_{1}\right) \delta\left(p_{4}-p_{2}\right)\\ &\times\left[1-\frac{2}{L^{2}}\left(\frac{1}{\omega_{p_{1}}^{2}} +\frac{1}{\omega_{p_{2}}^{2}}\right)+\mathcal{O}\left(\frac{1}{L^{4}}\right) \right].\end{split} \tag{88}\]
The process of taking the flat limit effectively "takes away the box" from the contact Witten diagram in \(\mathrm{AdS}_{2}\), turning it into a Feynman diagram characterized by the presence of delta functions \(\delta\left(p_{3}-p_{1}\right)\delta\left(p_{4}-p_{2}\right)\) as in fig.2.
Let's briefly spell out the kinematics of \(2\to 2\) scattering of identical particles in flat-space.
Kinematics of \(2\to 2\) scattering of identical particles in \(2d\).Let us consider a scenario where we begin with the simplest possible scattering process of \(2\to 2\) massless particles. Energy and momentum conservation determine
\[\begin{split}\{p_{3},p_{4}\}&=\{p_{1},p_{2}\}\\ \{p_{3},p_{4}\}&=\{p_{2},p_{1}\}.\end{split} \tag{89}\]
In other words, the set is the same, i.e., the set of initial momenta is equal to the set of the final momenta [43, 44]. When dealing with distinguishable particles, there exist two discernible alternatives. Nonetheless, when dealing with identical particles, they represent the same possibility. In the scenario of scattering between two indistinguishable particles in flat space, their momenta remain unaltered in both the initial and final states. The \(\mathcal{S}\)-matrix becomes
\[\mathcal{S}_{2\to 2}\propto\delta\left(p_{3}-p_{1}\right)\delta\left(p_{4}-p_{2} \right). \tag{90}\]
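This constraint is easy to confirm by brute force; the small Python script below (with sample momenta of our own choosing) scans a grid for solutions of energy and momentum conservation with \(\omega=|p|\).

```python
# Brute-force check of the 2d massless kinematics (89): energy-momentum
# conservation with omega = |p| forces {p3, p4} = {p1, p2}.
import itertools

p1, p2 = 2.0, -5.0                        # sample incoming momenta
grid = [0.5 * j for j in range(-16, 17)]  # momenta from -8 to 8, step 0.5
solutions = [(p3, p4) for p3, p4 in itertools.product(grid, grid)
             if abs(p3 + p4 - (p1 + p2)) < 1e-12
             and abs(abs(p3) + abs(p4) - (abs(p1) + abs(p2))) < 1e-12]
print(solutions)  # [(-5.0, 2.0), (2.0, -5.0)]: only relabelings of {p1, p2}
```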
A noteworthy observation is that, due to the preservation of momenta in the scattering process, there is no significant distinction between the connected and disconnected components of the \(\mathcal{S}\)-matrix. This means that in \(2d\), we can readily make the transition between the connected and disconnected components without the need for complicated distributional considerations.
In the context of the flat limit, in eq.(88), we demonstrate that the delta functions associated with the equality of each pair of incoming and outgoing momenta naturally emerge when considering a \(\phi^{4}\) contact interaction. In eq.(88), we also find that momentum-conserving delta functions show up in the \(\mathcal{S}\)-matrix up to subleading order, after accounting for AdS corrections. In the concluding remarks section, we provide a qualitative discussion concerning this issue.
## VII Comment on factorization of the \(n\to n\)\(\mathcal{S}\)-matrix from the flat limit
In this section, we will comment on the factorization of the \(n\to n\)\(\mathcal{S}\)-matrix from the flat limit. As a result of the presence of _higher conserved charges_, scattering amplitudes in integrable models, particularly higher-point \(\mathcal{S}\)-matrix, exhibit the inclusion of products of delta functions [40, 41, 42]. We denote the \(n\to n\)\(\mathcal{S}\)-matrix by \(\mathcal{S}_{n\to n}\), which can be expressed as
\[\mathcal{S}_{n\to n}\propto\prod_{i=1}^{n}\delta(p_{i}^{\prime}-p_{i})=\delta\left(p_{1}^{\prime}-p_{1}\right)\delta\left(p_{2}^{\prime}-p_{2}\right)\cdots\delta\left(p_{n}^{\prime}-p_{n}\right). \tag{91}\]
Here, the \(n=2\) case of eq.(91), \(\prod_{i=1}^{2}\delta(p_{i}^{\prime}-p_{i})\), is what appears in the \(2\to 2\)\(\mathcal{S}\)-matrix. As in §VI, for \(2\to 2\) scattering, the \(\mathcal{S}\)-matrix is
\[\mathcal{S}_{2\to 2}\propto\delta(p_{3}-p_{1})\delta(p_{4}-p_{2}). \tag{92}\]
Here \(p_{3}=p_{1}^{\prime}\) and \(p_{4}=p_{2}^{\prime}\). Now, we show that the delta functions appearing in eq.(91), corresponding to the equality of each pair of incoming and outgoing momenta, naturally arise in the flat limit for the contact interaction \(\phi^{2n}\). Although the factorization of the \(\mathcal{S}\)-matrix is a non-perturbative statement, we demonstrate the factorization by considering only the contact interaction, i.e., to leading order in the coupling (at tree level). At tree level, the contact interactions form the building blocks of the non-perturbative \(\mathcal{S}\)-matrix for integrable models, e.g., the Sinh-Gordon model. Thus, we expect the factorization of the \(\mathcal{S}\)-matrix to hold at tree level.
As described in §VI.2, with well-chosen kinematics for \(n\to n\) scattering, the \(2n\)-point function is written as
\[\mathcal{G}_{2n}=\int_{0}^{\infty}\prod_{i=1}^{n}\mathrm{d}\mathfrak{t}_{i}\,\mathrm{d}\mathfrak{t}_{i^{\prime}}\,e^{-2\mathfrak{t}_{i}\mathfrak{t}_{i^{\prime}}(\cos\left(\tau_{i}-\tau_{i^{\prime}}\right)-1)}, \tag{93}\]

where the primed index \(i^{\prime}\) labels the outgoing insertion paired with the incoming insertion \(i\).
Figure 2: Witten diagram \(\rightarrow\) Feynman diagram in the flat limit: contact diagram in \(\mathrm{AdS}_{2}\) becomes \(\delta\left(p_{3}-p_{1}\right)\delta\left(p_{4}-p_{2}\right)\) in the flat limit.
We can do exactly the same analysis to get the \(n\to n\)\(\mathcal{S}\)-matrix in the flat limit from the CFT \(2n\)-point function, which is given by
\[\mathcal{S}_{n\to n}\propto\delta\left(p_{1}^{\prime}-p_{1}\right)\delta\left(p_{2}^{\prime}-p_{2}\right)\cdots\delta\left(p_{n}^{\prime}-p_{n}\right). \tag{94}\]
The steps to get the resulting \(n\to n\)\(\mathcal{S}\)-matrix of eq.(94) from CFT \(2n\)-point function are as follows.
* First, we use the Jacobi-Anger expansion in eq.(93) for each individual exponential, given by \[e^{-2\mathfrak{t}_{i}\mathfrak{t}_{j}(\cos(\tau_{i}-\tau_{j})-1)}=e^{2\mathfrak{t}_{i}\mathfrak{t}_{j}}\sum_{\mathfrak{p}\in\mathbb{Z}}(-1)^{\mathfrak{p}}I_{\mathfrak{p}}\left(2\mathfrak{t}_{i}\mathfrak{t}_{j}\right)e^{-\imath\mathfrak{p}(\tau_{i}-\tau_{j})}.\] (95)
* We perform the global time integrals, which give Kronecker deltas under the sums.
* In the large \(L\) limit, each sum becomes an integral, and each Kronecker delta becomes a Dirac delta of the form \(\delta\left(\omega_{p_{i}}-\eta_{n_{i}}\right)\). The large \(L\) limit plays the role of a continuum limit, which implies \[\sum_{n_{i}\in\mathbb{Z}}\to L\int\mathrm{d}\eta_{n_{i}}.\] (96)
* Finally, we use the delta functions to reduce the \(n\to n\)\(\mathcal{S}\)-matrix in the flat limit, yielding \[\mathcal{S}_{n\to n}\propto\delta\left(p_{1}^{\prime}-p_{1}\right)\delta\left(p_{2}^{\prime}-p_{2}\right)\cdots\delta\left(p_{n}^{\prime}-p_{n}\right).\] (97)
In our analysis, we find that the momentum-conserving delta functions manifest in the \(\mathcal{S}\)-matrix up to subleading order, accounting for AdS corrections. We offer a qualitative discussion on this topic in the next concluding remarks section.
## VIII Concluding remarks
In this section, we wrap up by summarizing our results and outlining potential avenues for future exploration. In this paper, we develop a mapping between scattering states and CFT operators that allows us to calculate the massless \(\mathcal{S}\)-matrix in \(2d\) in the flat limit from the Lorentzian CFT \(2\)-point function. Our strategy rests on the meticulous reconstruction of the bulk operator corresponding to the massless scalar field in \(\mathrm{AdS}_{2}\). Using the mapping between the CFT operator on the boundary of AdS and the flat-space scattering state, we derive the \(1\to 1\)\(\mathcal{S}\)-matrix from the CFT \(2\)-point function, which is proportional to the momentum-conserving delta function \(\delta(p_{2}-p_{1})\). We demonstrate that the \(2\to 2\) massless \(\mathcal{S}\)-matrix in \(2d\) exhibits a natural occurrence of the product of two delta functions due to the \(\phi^{4}\) contact interaction in the flat limit. By concentrating mainly on the contact interaction, specifically at tree level, we show how the \(n\to n\)\(\mathcal{S}\)-matrix factorizes in the flat limit.
Additionally, even with AdS corrections, the momentum-conserving delta functions persist in the \(\mathcal{S}\)-matrix, albeit at subleading order. Our prescription for calculating the \(\mathcal{S}\)-matrix in the flat limit, and the AdS corrections to it, is an alternative to that of [33], which explores massive scalar particle scattering. There, the "\(\mathcal{S}\)-matrix" is defined from the Witten diagram computation by keeping only those terms which are proportional to the overall momentum-conserving delta function. We likewise retain the terms involving the momentum-conserving delta functions to subleading order. The delta functions originating from momentum conservation are a consequence of translational symmetry. While AdS lacks translational symmetry, the concept of momentum conservation can still be upheld within the flat-space region. The defining equations of the annihilation and creation operators in our approach serve as the rationale for retaining the momentum-conserving delta functions; they are given by
\[a_{p} =+\frac{\imath}{\sqrt{2\omega_{p}}}\int\frac{\mathrm{d}r}{\sqrt{ 2\pi}}\,e^{-\imath p\cdot x}\overset{\leftrightarrow}{\partial_{t}}\Phi(r,t) \tag{98}\] \[a_{p}^{\dagger} =-\frac{\imath}{\sqrt{2\omega_{p}}}\int\frac{\mathrm{d}r}{\sqrt{ 2\pi}}\,e^{+\imath p\cdot x}\overset{\leftrightarrow}{\partial_{t}}\Phi(r,t).\]
In inverting the creation and annihilation operators through eq.(98), we still mode-expand the free field in terms of a decomposition into plane waves. In the context of the full AdS spacetime, discussing the "\(\mathcal{S}\)-matrix" is not meaningful, because a wave packet reaches the asymptotic boundary and returns in a finite time. Nonetheless, it is of significance to study the correlation function, which, given suitable kinematics, is determined by the \(\mathcal{S}\)-matrix in the flat limit of spacetime. As the correlation function is well defined, we can analyze corrections to this outcome, particularly the subleading terms of the correlation function relative to the flat spacetime result. Finally, we move on to discuss some fascinating open problems.
### Future plans
Integrable bootstrap from the flat limit of AdS/CFT. One interesting potential application lies in the exploration of what insights the flat limit of AdS/CFT can provide regarding non-perturbative \(\mathcal{S}\)-matrices. In \(2d\), there is a plethora of exactly solvable \(\mathcal{S}\)-matrices in flat-space. If the exactly solvable or integrable QFTs are placed in \(\mathrm{AdS}_{2}\), an interesting question arises: does integrability persist in some manner, i.e., do subleading corrections to the flat-space \(\mathcal{S}\)-matrix retain integrable properties? Following a similar line of thought, it would be captivating to investigate those models which have exactly solvable \(\mathcal{S}\)-matrices in \(\mathrm{AdS}_{2}\) in terms of boundary correlators. Different integrable models in \(\mathrm{AdS}_{2}\) are studied in [26; 27; 28; 29; 30; 31; 32], but these attempts are at the level of perturbative boundary correlators. It is intriguing to contemplate a notion of integrability for the non-perturbative boundary correlators.
It would be interesting to generalize the mapping between \(\mathrm{CFT}_{1}\) correlators and flat-space \(S\)-matrices to massive scalar fields in \(\mathrm{AdS}_{2}\), and make a connection with the recent developments of the conformal bootstrap [37]. It would also be exciting to connect the Celestial \(\mathrm{CFT}_{0}\) amplitudes developed in [38; 39] with the flat limit of \(\mathrm{CFT}_{1}\) correlators.
## Acknowledgements
I thank Nava Gaddam, R. Loganayagam, Pronobesh Maity, and definitely Pabitra Ray for discussions and/or related collaboration. I express my gratitude for the support received from the Department of Atomic Energy, Government of India, through project number RT14001.
Appendix A Calculation of the normalization constant of the massless scalar field in \(\mathrm{AdS}_{2}\)
In this Appendix, we evaluate the normalization constant of the massless scalar field in \(\mathrm{AdS}_{2}\). The solution for the massless scalar field in \(\mathrm{AdS}_{2}\) is expressed as
\[\Phi_{n}(\rho,\tau)=\mathfrak{c}_{n}e^{-2n\imath\tau}\sin(2n\rho), \tag{10}\]
where \(\mathfrak{c}_{n}\) is the normalization constant. We evaluate \(\mathfrak{c}_{n}\) using the inner product
\[\begin{split}&\langle\Phi_{n}|\Phi_{n^{\prime}}\rangle\\ &=\imath\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\mathrm{d}\rho\sqrt{ -g}g^{\tau\tau}\\ &[\Phi_{n}^{*}(\rho,\tau)\;\partial_{\tau}\Phi_{n^{\prime}}(\rho,\tau)-\Phi_{n^{\prime}}(\rho,\tau)\;\partial_{\tau}\Phi_{n}^{*}(\rho,\tau)] \,\bigg{|}_{\tau=\text{constant}}\\ &=2\pi\delta_{n,n^{\prime}}.\end{split} \tag{11}\]
For the \(\mathrm{AdS}_{2}\) metric the determinant is given by
\[\begin{split} g&=\Big{(}-\frac{L^{2}}{\cos^{2}\rho }\Big{)}\Big{(}\frac{L^{2}}{\cos^{2}\rho}\Big{)}=-L^{4}\sec^{4}\rho\\ \implies\sqrt{-g}&=L^{2}\sec^{2}\rho.\end{split} \tag{12}\]
and
\[g^{\tau\tau}=-\frac{1}{L^{2}\sec^{2}\rho}. \tag{13}\]
From eq.(11), we have
\[\begin{split}&\imath\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\mathrm{d} \rho\sqrt{-g}g^{\tau\tau}\\ &[\Phi_{n}^{*}(\rho,\tau)\;\partial_{\tau}\Phi_{n}(\rho,\tau)- \Phi_{n}(\rho,\tau)\;\partial_{\tau}\Phi_{n}^{*}(\rho,\tau)]\,\bigg{|}_{\tau= \text{constant}}\\ &=2\pi\end{split} \tag{14}\]
From eq.(14), we obtain
\[\mathfrak{c}_{n}=\sqrt{\frac{2}{n}}. \tag{15}\]
We use the integral
\[\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\sin^{2}(2n\rho)\mathrm{d}\rho=\frac{\pi }{2} \tag{16}\]
where, \(n\in\mathbb{N}\).
Appendix B Derivation of the formula for the creation and annihilation operators of massless scalar field in terms of the CFT operators
In this Appendix, we derive the formula for the creation and annihilation operators restricting to massless scalar field in \(\mathrm{AdS}_{2}\) as a smearing over global time of the CFT operators against some exponential. We note the smearing function to subleading order in \(\mathcal{O}\Big{(}\frac{1}{L^{2}}\Big{)}\) from section IV.1
\[\begin{split} K_{\pm}(r,t;\tau^{\prime})&=-\int\frac{\mathrm{d}k}{2k}\frac{e^{\mp\imath kt}}{\cos\big{(}\frac{\pi kL}{2}\big{)}}e^{\imath kL\big{(}\pi\pm\tau^{\prime}\big{)}}\sin(kr)\\ &+\frac{r^{3}}{6L^{2}}\int\mathrm{d}k\frac{e^{\mp\imath kt}}{\cos\big{(}\frac{\pi kL}{2}\big{)}}e^{\imath kL\big{(}\pi\pm\tau^{\prime}\big{)}}\cos(kr)\\ &+\mathcal{O}\left(\frac{1}{L^{4}}\right).\end{split} \tag{17}\]
In eq.(17), we expand the smearing function to subleading order in \(\mathcal{O}\Big{(}\frac{1}{L^{2}}\Big{)}\). The annihilation and creation operators in terms of the scalar fields are given by
\[\begin{split}& a_{p}=+\frac{\imath}{\sqrt{2\omega_{p}}}\int \frac{\mathrm{d}r}{\sqrt{2\pi}}\;e^{-\imath p\cdot x}\overset{\leftrightarrow }{\partial_{t}}\Phi(r,t)\\ & a_{p}^{\dagger}=-\frac{\imath}{\sqrt{2\omega_{p}}}\int\frac{ \mathrm{d}r}{\sqrt{2\pi}}\;e^{+\imath p\cdot x}\overset{\leftrightarrow}{ \partial_{t}}\Phi(r,t)\end{split} \tag{18}\]
where \(\overset{\leftrightarrow}{\partial_{t}}\equiv\overset{\rightarrow}{\partial_ {t}}-\overset{\leftarrow}{\partial_{t}}\) and \(p\cdot x=pr-\omega_{p}t\). To obtain the creation/annihilation operators in eq.(18), first we consider the quantity
\[\begin{split}&\int\mathrm{d}r\,e^{-\imath p\cdot x}\overset{ \leftrightarrow}{\partial_{t}}K_{+}(r,t;\tau^{\prime})\\ &=\int\mathrm{d}r\,e^{-\imath pr+\imath\omega_{p}t}\left[ \partial_{t}K_{+}(r,t;\tau^{\prime})-\imath\omega_{p}K_{+}(r,t;\tau^{\prime}) \right].\end{split} \tag{19}\]
Now, after substituting the expression of the smearing function of eq.(17) to subleading order in \(\mathcal{O}\Big{(}\frac{1}{L^{2}}\Big{)}\), the integrand without the exponential piece of eq.(19) becomes
\[\begin{split}&\partial_{t}K_{+}(r,t;\tau^{\prime})-\imath\omega_{p}K_{+}(r,t;\tau^{\prime})\\ &=\imath\int\frac{\mathrm{d}k}{2k}\frac{e^{-\imath kt}}{\cos\big{(}\frac{\pi kL}{2}\big{)}}e^{\imath kL\big{(}\pi+\tau^{\prime}\big{)}}\left(k+\omega_{p}\right)\sin(kr)\\ &-\frac{\imath r^{3}}{12L^{2}}\int\mathrm{d}k\frac{e^{-\imath kt}}{\cos\big{(}\frac{\pi kL}{2}\big{)}}e^{\imath kL\big{(}\pi+\tau^{\prime}\big{)}}\left(k+\omega_{p}\right)\cos(kr)\\ &+\mathcal{O}\left(\frac{1}{L^{4}}\right).\end{split} \tag{20}\]
Substituting eq.(20) into eq.(19), the \(r\)-integral reduces to the following elementary integral
\[\begin{split}\int\mathrm{d}r\,e^{-\imath pr}\sin(kr)&=\int\mathrm{d}r\,e^{-\imath\omega_{p}r}\sin(kr)=\frac{\pi}{2\imath}\delta(k-\omega_{p})\\ &\quad\text{for}\;\;k>0.\end{split} \tag{21}\]
We then differentiate eq.(21) with respect to \(k\) three times, and we have
\[\begin{split}\int\mathrm{d}r\,e^{-\imath pr}r^{3}\cos(kr)&=\int\mathrm{d}r\,e^{-\imath\omega_{p}r}r^{3}\cos(kr)\\ &=-\frac{\pi}{2\imath}\delta^{(3)}(k-\omega_{p})\\ &=\frac{3\pi}{\imath k^{3}}\delta(k-\omega_{p})\quad\text{for}\;\;k>0.\end{split} \tag{22}\]
Finally, substituting eqs.(21) and (22) into eq.(19) gives
\[\begin{split}&\int\mathrm{d}r\,e^{-\imath p\cdot x}\overset{\leftrightarrow}{\partial_{t}}K_{+}(r,t;\tau^{\prime})\\ &=\frac{\pi}{2}\frac{e^{\imath\omega_{p}L(\pi+\tau^{\prime})}}{\cos\!\left(\frac{\pi\omega_{p}L}{2}\right)}-\frac{\pi}{2\omega_{p}^{2}L^{2}}\frac{e^{\imath\omega_{p}L(\pi+\tau^{\prime})}}{\cos\!\left(\frac{\pi\omega_{p}L}{2}\right)}+\mathcal{O}\!\left(\frac{1}{L^{4}}\right).\end{split} \tag{23}\]
|
2308.16579 | Highest Cusped Waves for the Fractional KdV Equations | In this paper we prove the existence of highest, cusped, traveling wave
solutions for the fractional KdV equations $f_t + f f_x = |D|^{\alpha} f_x$ for
all $\alpha \in (-1,0)$ and give their exact leading asymptotic behavior at
zero. The proof combines careful asymptotic analysis and a computer-assisted
approach. | Joel Dahne | 2023-08-31T09:23:12Z | http://arxiv.org/abs/2308.16579v1 | # Highest Cusped Waves for the Fractional KdV Equations
###### Abstract
In this paper we prove the existence of highest, cusped, traveling wave solutions for the fractional KdV equations \(f_{t}+ff_{x}=|D|^{\alpha}f_{x}\) for all \(\alpha\in(-1,0)\) and give their exact leading asymptotic behavior at zero. The proof combines careful asymptotic analysis and a computer-assisted approach.
## 1 Introduction
This paper is concerned with the existence and regularity of highest, cusped, traveling-wave solutions to the fractional Korteweg-de Vries (KdV) equations, which in the periodic setting are given by
\[f_{t}+ff_{x}=|D|^{\alpha}f_{x},\quad\text{ for }(x,t)\in\mathbb{T}\times\mathbb{R}. \tag{1}\]
Here \(|D|^{\alpha}\) is the Fourier multiplier operator given by
\[\widehat{|D|^{\alpha}f}(\xi)=|\xi|^{\alpha}\widehat{f}(\xi)\]
where the parameter \(\alpha\) may in general take any real value. It can serve as a model for investigating the balance between nonlinear and dispersive effects [47]. For \(\alpha=2\) and \(\alpha=1\) it reduces to the classical KdV and Benjamin-Ono equations, for \(\alpha=-2\) one gets the reduced Ostrovsky equation. For \(\alpha=-1\) it reduces to the Burgers-Hilbert equation [31]
\[f_{t}+ff_{x}=\mathbf{H}[f],\quad\text{ for }(x,t)\in\mathbb{T}\times\mathbb{R}. \tag{2}\]
Here \(\mathbf{H}\) is the Hilbert transform which, for \(f:\mathbb{T}\rightarrow\mathbb{R}\), is defined by
\[\mathbf{H}[f](x)=\frac{1}{2\pi}p.v.\int_{-\pi}^{\pi}\cot\left(\frac{x-y}{2} \right)f(y)\ dy,\quad\widehat{\mathbf{H}[f]}(k)=-i\,\text{sgn}(k) \widehat{f}(k).\]
The results in this paper, see Theorem 1.1, are for \(\alpha\in(-1,0)\), though the analysis of the Burgers-Hilbert case, \(\alpha=-1\), is also important for the proof.
For \(\alpha\in(-1,0)\) and small initial data in \(H^{N}(\mathbb{R})\) with \(N\geq 3\) estimates for the life span were proved by Ehrnström and Wang [21], see also [32] and [33] for the Burgers-Hilbert equation. Well-posedness in \(H^{s}(\mathbb{R})\) with \(s>3/2\) was established by Riaño [52].
For \(\alpha\in(-1,0)\) the equation exhibits finite time blowup [8, 34]. The characterization as wave breaking (i.e. the function stays bounded, but its gradient blows up) was proved for \(\alpha\in(-1,-1/3)\) by Hur and Tao [36, 35] and for \(\alpha\in(-1,0)\) by Oh and Pasqualotto [50]. See [45] for a numerical study and [13] where they give a precise characterization of a shock forming in finite time for \(0<\alpha<\frac{1}{3}\).
The study of traveling waves is an important topic in fluid dynamics, see e.g. [29] for a recent overview of traveling water waves. The traveling wave assumption \(f(x,t)=\varphi(x-ct)\), where \(c>0\) denotes the wave speed, gives us
\[-c\varphi^{\prime}+\varphi\varphi^{\prime}=|D|^{\alpha}\varphi^{\prime}. \tag{3}\]
For the fractional KdV equation there is a branch of even, zero-mean, \(2\pi\)-periodic, smooth traveling wave solutions bifurcating from constant solutions. For \(\alpha<-1\) and \(\alpha\in(-1,0)\) this branch has been studied and proved to end in a highest cusped wave, for \(\alpha<-1\) by Bruell and Dhara [6] and for \(\alpha\in(-1,0)\) by Hildrum and Xue [30]. For \(\alpha=-1\) the branch has been studied by Castro, Córdoba and Zheng [9].
The notion of a highest traveling wave goes back to Stokes. For the free boundary Euler equation Stokes argued that if there exists a singular solution with a steady profile it must have an interior angle of \(120^{\circ}\) at the crest [53]. This is known as the Stokes conjecture and was proved in 1982 [1]. For the Whitham equation [56] the existence of a highest cusped traveling wave was conjectured by Whitham in [55]. Its existence, together with its \(C^{1/2}\) regularity, was proved by Ehrnström and Wahlén [20]. For the family of fractional KdV equations and variants thereof there has recently been much progress related to highest waves. Bruell and Dhara proved existence of highest traveling waves which are Lipschitz at their cusp for \(\alpha<-1\) [6]. Hildrum and Xue proved existence and the optimal \(-\alpha\)-Hölder regularity for a family of equations including the fractional KdV equation with \(\alpha\in(-1,0)\) [30]. Ørke proved their existence for \(\alpha\in(-1,0)\) for the inhomogeneous fractional KdV equations as well as the fractional Degasperis-Procesi equations, together with their optimal \(-\alpha\)-Hölder regularity [51].
The results in [20, 6, 51, 30] are all based on global bifurcation arguments, bifurcating from the constant solution and proving that the branch must end in a highest wave which is not smooth at its crest. For the Burgers-Hilbert equation Dahne and Gómez-Serrano proved the existence of a highest wave which at its cusp behaves like \(|x|\log|x|\) [15]. The proof uses a different method where the problem is first reduced to a fixed point problem. The wave is therefore not directly tied to a branch of solutions as in the other results. This method of rewriting the problem into a fixed point problem was first used by Enciso, Gómez-Serrano and Vergara for proving the convexity and the precise asymptotic behavior of a highest wave solution to the Whitham equation [22], answering a conjecture made by Ehrnström and Wahlén [20]. In this paper we use a similar approach.
Similar to Hildrum and Xue we prove the existence of a highest wave for the fractional KdV equation with \(\alpha\in(-1,0)\). Our main contribution is a more precise description of the asymptotic behavior of the wave at the cusp. The description is in line with earlier results for the Whitham and Burgers-Hilbert equation. Recently Ehrnström, Mæhlen and Varholm have obtained similar results for the asymptotic behavior of two families of equations including the uni- and bidirectional Whitham equations using different types of methods [19].
We prove the following theorem:
**Theorem 1.1**.: _There is a \(2\pi\)-periodic traveling wave \(\varphi\) of (3) for every \(\alpha\in(-1,0)\), which behaves asymptotically at \(x=0\) as_
\[\varphi(x)=c-\nu_{\alpha}|x|^{-\alpha}+\mathcal{O}(|x|^{p})\]
_for \(\nu_{\alpha}>0\) as given in Lemma 4.1 and some \(-\alpha<p\leq 1\) to be made explicit later on._
**Remark 1.2**.: _The remainder term \(\mathcal{O}(|x|^{p})\) in Theorem 1.1 is in general not sharp. The estimate follows from the choice of our space._
**Remark 1.3**.: _Both our results and Hildrum and Xue's results assert the existence of a highest wave. While, a priori, they don't necessarily correspond to the same wave, we do believe this is the case._
The ansatz \(\varphi(x)=c-u(x)\) allows us to rewrite (3) as an equation that does not explicitly depend on the wave speed \(c\). Proving the existence of a solution \(u\) can be rewritten as a fixed point problem by considering the ansatz
\[u(x)=u_{\alpha}(x)+w_{\alpha}(x)v(x)\]
where \(u_{\alpha}(x)\) is an explicit, carefully chosen, approximate solution and \(w_{\alpha}(x)\) is an explicit weight factor. Proving the existence of a fixed point can be reduced to checking an inequality involving three constants, \(D_{\alpha}\), \(\delta_{\alpha}\) and \(n_{\alpha}\), that only depend on the choice of \(u_{\alpha}\) and \(w_{\alpha}\), see Proposition 2.2. This inequality is checked by bounding \(D_{\alpha}\), \(\delta_{\alpha}\) and \(n_{\alpha}\) using a computer-assisted proof, see Section 12.
One of the key difficulties compared to the Whitham equation [22] and the Burgers-Hilbert equation [15] is that we are treating a family of equations, instead of one fixed equation. To handle the full family we
need to understand how the equation changes with respect to \(\alpha\). In particular the endpoints of the interval, \(\alpha\) near \(-1\) and \(0\), require adapting the method to those cases. To handle this we split the interval \((-1,0)\) into three parts,
\[(-1,0)=(-1,-1+\delta_{1})\cup[-1+\delta_{1},-\delta_{2}]\cup(-\delta_{2},0)=I_{1 }\cup I_{2}\cup I_{3},\]
and adapt the methods used for \(I_{1}\) and \(I_{3}\). For \(\alpha\in I_{1}\) the main complication comes from that as \(\alpha\to-1\) the coefficient \(\nu_{\alpha}\) in Theorem 1.1 tends to infinity. For \(\alpha\in I_{3}\) the issue is that both sides of the inequality that needs to be verified for the fixed point argument tend to zero as \(\alpha\to 0\) and to assert that it holds arbitrarily close to \(\alpha=0\) we need an understanding of the rate at which the two sides tend to zero.
An important part of the work is the construction of the approximate solution \(u_{\alpha}\). Due to the singularity at \(x=0\) it is not possible to use a trigonometric polynomial alone, it would converge very slowly and have the wrong asymptotic behavior. Pure products of powers and logarithms, \(|x|^{a}\log^{b}|x|\), have the issue that they are not periodic and do not interact well with the operator \(|D|^{\alpha}\). Instead we take inspiration from the construction in [22] and consider a combination of trigonometric polynomials and Clausen functions of different orders, defined as
\[C_{s}(x)=\sum_{n=1}^{\infty}\frac{\cos(nx)}{n^{s}},\quad S_{s}(x)=\sum_{n=1}^ {\infty}\frac{\sin(nx)}{n^{s}},\]
for \(s>1\) and by analytic continuation otherwise. We also make use of their derivatives with respect to the order, for which we use the notation
\[C_{s}^{(\beta)}(x):=\frac{d^{\beta}}{ds^{\beta}}C_{s}(x),\quad S_{s}^{(\beta) }(x):=\frac{d^{\beta}}{ds^{\beta}}S_{s}(x).\]
The Clausen functions are \(2\pi\)-periodic, non-analytic at \(x=0\) and behave well with respect to \(|D|^{\alpha}\). In particular \(C_{s}(x)-C_{s}(0)\sim|x|^{s-1}\), which corresponds to the behavior we expect in Theorem 1.1. See Section 3 and Appendix C for more details about the Clausen functions. The main idea for the construction is the same as in [22], to choose the coefficients for the Clausen functions according to the asymptotic expansion of the solution at \(x=0\). However, in our case we also have to handle the limits \(\alpha\to-1\) and \(\alpha\to 0\), which requires a good understanding of how the approximations depend on \(\alpha\). In particular understanding the limit \(\alpha\to-1\) is important not only for this work, but is also used to handle the Burgers-Hilbert case in [15].
An essential part of our work is the interplay between traditional mathematical tools and rigorous computer calculations. Traditional numerical methods typically only compute approximate results; to be able to use the results in a proof we need them to be rigorously verified. The basis for rigorous calculations is interval arithmetic, pioneered by Moore in the 1970s [49]. Due to improvements in both computational power and software it is becoming practical to use computer-assisted tools in more and more complicated settings. The main idea with interval arithmetic is to do arithmetic not directly on real numbers but on intervals with computer representable endpoints. Given a function \(f:\mathbb{R}\to\mathbb{R}\), an interval extension of \(f\) is an extension to intervals satisfying that for an interval \(\mathbf{x}=[\underline{x},\overline{x}]\), \(f(\mathbf{x})\) is an interval satisfying \(f(x)\in f(\mathbf{x})\) for all \(x\in\mathbf{x}\). In particular this allows us to prove inequalities for the function \(f\), for example the right endpoint of \(f(\mathbf{x})\) gives an upper bound of \(f\) on the interval \(\mathbf{x}\). For an introduction to interval arithmetic and rigorous numerics we refer the reader to the books [49, 54] and to the survey [26] for a specific treatment of computer-assisted proofs in PDE. For computer-assisted proofs in fluid mechanics in particular some recent results are: [4, 2] for the Navier-Stokes equation, [23, 24, 25] for the Kuramoto-Sivashinsky equation, [12] for the Hou-Luo model, [7] for the compressible Euler and Navier-Stokes equations and [10, 11] for blowup of the 2D Boussinesq and 3D Euler equation.
For all the calculations in this paper we make use of the Arb library [41] for ball (intervals represented as a midpoint and radius) arithmetic. It has good support for many of the special functions we use [43, 37, 42], Taylor arithmetic (see e.g. [39]) as well as rigorous integration [38].
The paper is organized as follows. In Section 2 we reduce the proof of Theorem 1.1 to a fixed point problem. In Section 3 we give a brief overview of properties of the Clausen functions that are relevant for the construction of \(u_{\alpha}\), in Section 4 we give the construction of \(u_{\alpha}\) and in Section 5 we discuss the choice of the weight function \(w_{\alpha}\) used. Section 6 gives the general strategy for bounding \(n_{\alpha}\), \(\delta_{\alpha}\) and \(D_{\alpha}\), and Sections 7 and 8 discuss the evaluation of \(u_{\alpha}\). Section 9 is devoted to the approach for bounding \(n_{\alpha}\), Section 10 to bounding \(\delta_{\alpha}\) and Section 11 to studying the linear operator that appears in the construction of the fixed point problem and bounding \(D_{\alpha}\). The computer-assisted proofs giving bounds for \(n_{\alpha}\), \(\delta_{\alpha}\) and \(D_{\alpha}\) are given in Section 12. Finally we give the proof of Theorem 1.1 in Section 13.
Six appendices are given at the end of the paper. Appendix A gives some technical details for how to compute enclosures of functions around removable singularities. Appendix B gives a brief introduction to Taylor models which are used in some parts of the proof. Appendix C is concerned with computing enclosures of the Clausen functions and Appendix D with the rigorous numerical integration needed for bounding \(D_{0}\). Finally, Appendices E, F and G contain some of the details needed for \(\alpha\) close to \(-1\).
## 2 Reduction to a fixed point problem
In this section we reduce the problem of proving Theorem 1.1 to proving the existence of a fixed point for a certain operator.
From [30, Theorem 12] we have the following characterization of even, nondecreasing solutions of (3).

**Lemma 2.1**.: _Let \(\varphi\in C^{1}\) be a nonconstant, even solution of (3) which is nondecreasing on \((-\pi,0)\), then_
\[\varphi^{\prime}>0\quad\text{ and }\quad\varphi<c\]
_on \((-\pi,0)\)._
As a consequence, any continuous, nonconstant, even function which is nondecreasing on \((-\pi,0)\) and satisfies (3) almost everywhere must satisfy \(\varphi\leq c\). The maximal possible height is thus given by \(c\) and due to the function being even and nondecreasing on \((-\pi,0)\) the maximal height has to be attained at \(x=0\).
Now, the ansatz \(\varphi(x)=c-u(x)\) inserted in (3) gives an equation which does not explicitly depend on the wave speed \(c\). Indeed, inserting this gives us
\[uu^{\prime}=-|D|^{\alpha}u^{\prime}. \tag{4}\]
Note that a solution of this equation gives a solution of (3) for any wave speed \(c\). This is to be expected due to the Galilean change of variables
\[\varphi\mapsto\varphi+\gamma,\ c\mapsto c+\gamma\]
which leaves (3) invariant. In particular, taking \(c\) equal to the mean of \(u\) gives a zero mean solution. For a highest wave we expect to have \(\varphi(0)=c\), giving us \(u(0)=0\). Integrating (4) gives us
\[\frac{1}{2}u^{2}=-\mathcal{H}^{\alpha}[u]. \tag{5}\]
Here \(\mathcal{H}^{\alpha}\) is the operator
\[\mathcal{H}^{\alpha}[u](x)=|D|^{\alpha}u(x)-|D|^{\alpha}u(0). \tag{6}\]
It is the integral of the right-hand side of (4) with the constant of integration taken such that \(\mathcal{H}^{\alpha}[f](0)=0\); this ensures that any solution of (5) satisfies \(u(0)=0\). Note that any solution of (5) is a solution of (4) and hence gives a solution to (3).
To reduce the problem to a fixed point problem the idea is to write \(u\) as one, explicit, approximate solution of Equation (5) and one unknown term. More precisely we make the ansatz
\[u(x)=u_{\alpha}(x)+w_{\alpha}(x)v(x) \tag{7}\]
where \(u_{\alpha}(x)\) is an explicit, carefully chosen, approximate solution of Equation (5), see Section 4, and \(w_{\alpha}(x)\) is an explicit weight, see Section 5, both of them depending on \(\alpha\). By taking \(u_{\alpha}(x)\sim\nu_{\alpha}|x|^{-\alpha}\) and \(w_{\alpha}=\mathcal{O}(|x|^{p})\), proving Theorem 1.1 reduces to proving existence of \(v\in L^{\infty}(\mathbb{T})\) such that the given ansatz is a solution of Equation (5).
Inserting the ansatz (7) into Equation (5) gives us
\[\frac{1}{2}(u_{\alpha}+w_{\alpha}v)^{2}=-\mathcal{H}^{\alpha}[u_{\alpha}+w_{ \alpha}v]\iff\frac{1}{2}u_{\alpha}^{2}+u_{\alpha}w_{\alpha}v+\frac{1}{2}w_{ \alpha}^{2}v^{2}=-\mathcal{H}^{\alpha}[u_{\alpha}]-\mathcal{H}^{\alpha}[w_{ \alpha}v]\]
By collecting all the linear terms in \(v\) we can write this as
\[u_{\alpha}w_{\alpha}v+\mathcal{H}^{\alpha}[w_{\alpha}v]=-\mathcal{H}^{\alpha}[u_{\alpha}]-\frac{1}{2}u_{\alpha}^{2}-\frac{1}{2}w_{\alpha}^{2}v^{2}\iff v+ \frac{1}{w_{\alpha}u_{\alpha}}\mathcal{H}^{\alpha}[w_{\alpha}v]=-\frac{1}{w_{\alpha}u_{ \alpha}}\left(\mathcal{H}^{\alpha}[u_{\alpha}]+\frac{1}{2}u_{\alpha}^{2}\right)-\frac{w_ {\alpha}}{2u_{\alpha}}v^{2}.\]
Now let \(T_{\alpha}\) denote the operator
\[T_{\alpha}[v]=-\frac{1}{w_{\alpha}u_{\alpha}}\mathcal{H}^{\alpha}[w_{\alpha}v]. \tag{8}\]
Denote the weighted defect of the approximate solution \(u_{\alpha}(x)\) by
\[F_{\alpha}(x)=\frac{1}{w_{\alpha}(x)u_{\alpha}(x)}\left(\mathcal{H}^{\alpha}[ u_{\alpha}](x)+\frac{1}{2}u_{\alpha}(x)^{2}\right), \tag{9}\]
and let
\[N_{\alpha}(x)=\frac{w_{\alpha}(x)}{2u_{\alpha}(x)}. \tag{10}\]
Then we can write the above as
\[(I-T_{\alpha})v=-F_{\alpha}-N_{\alpha}v^{2}.\]
Assuming that \(I-T_{\alpha}\) is invertible we rewrite this as
\[v=(I-T_{\alpha})^{-1}\left(-F_{\alpha}-N_{\alpha}v^{2}\right)=:G_{\alpha}[v]. \tag{11}\]
Hence proving the existence of \(v\) such that \(u_{\alpha}+w_{\alpha}v\) is a solution to Equation (5) reduces to proving existence of a fixed point of the operator \(G_{\alpha}\).
Next we reduce the problem of proving that \(G_{\alpha}\) has a fixed point to checking an inequality for three numbers (depending on \(\alpha\)) that depend only on the choice of \(u_{\alpha}\) and \(w_{\alpha}\). We let \(\|T\|\) denote the \(L^{\infty}(\mathbb{T})\to L^{\infty}(\mathbb{T})\) norm of a linear operator \(T\).
**Proposition 2.2**.: _Let \(D_{\alpha}=\|T_{\alpha}\|\), \(\delta_{\alpha}=\|F_{\alpha}\|_{L^{\infty}(\mathbb{T})}\) and \(n_{\alpha}=\|N_{\alpha}\|_{L^{\infty}(\mathbb{T})}\). If \(D_{\alpha}<1\) and they satisfy the inequality_
\[\delta_{\alpha}<\frac{(1-D_{\alpha})^{2}}{4n_{\alpha}}\]
_then for_
\[\epsilon_{\alpha}=\frac{1-D_{\alpha}-\sqrt{(1-D_{\alpha})^{2}-4\delta_{\alpha }n_{\alpha}}}{2n_{\alpha}}\]
_and_
\[X_{\epsilon}=\{v\in L^{\infty}(\mathbb{T}):v(x)=v(-x),\|v\|_{L^{\infty}( \mathbb{T})}\leq\epsilon\}\]
_we have_
1. \(G_{\alpha}(X_{\epsilon_{\alpha}})\subseteq X_{\epsilon_{\alpha}}\)_;_
2. \(\|G_{\alpha}[v]-G_{\alpha}[w]\|_{L^{\infty}(\mathbb{T})}\leq k_{\alpha}\|v-w\| _{L^{\infty}(\mathbb{T})}\) _with_ \(k_{\alpha}<1\) _for all_ \(v,w\in X_{\epsilon_{\alpha}}\)_._
Proof.: Using that \(N_{\alpha}\) and \(F_{\alpha}\) are even it can be checked that
\[G_{\alpha}(X_{\epsilon_{\alpha}})\subseteq(I-T_{\alpha})^{-1}X_{\delta_{\alpha}+n_{ \alpha}\epsilon_{\alpha}^{2}}.\]

Since \(\|T_{\alpha}\|<1\) the operator \(I-T_{\alpha}\) is invertible and an upper bound of the norm of the inverse is given by \(\frac{1}{1-D_{\alpha}}\); moreover \(T_{\alpha}\) takes even functions to even functions and hence so will \((I-T_{\alpha})^{-1}\). This gives us

\[G_{\alpha}(X_{\epsilon_{\alpha}})\subseteq(I-T_{\alpha})^{-1}X_{\delta_{\alpha}+n_{ \alpha}\epsilon_{\alpha}^{2}}\subseteq X_{\frac{\delta_{\alpha}+n_{\alpha} \epsilon_{\alpha}^{2}}{1-D_{\alpha}}}.\]
The choice of \(\epsilon_{\alpha}\) then gives
\[\frac{\delta_{\alpha}+n_{\alpha}\epsilon_{\alpha}^{2}}{1-D_{\alpha}}=\epsilon _{\alpha}.\]
Next we have \(G_{\alpha}[v]-G_{\alpha}[w]=(I-T_{\alpha})^{-1}(-N_{\alpha}(v^{2}-w^{2}))\) and hence
\[\|G_{\alpha}[v]-G_{\alpha}[w]\|_{L^{\infty}(\mathbb{T})}\leq\frac{n_{\alpha}} {1-D_{\alpha}}\|v^{2}-w^{2}\|_{L^{\infty}(\mathbb{T})}\leq\frac{2n_{\alpha} \epsilon_{\alpha}}{1-D_{\alpha}}\|v-w\|_{L^{\infty}(\mathbb{T})}.\]
Here \(k_{\alpha}=\frac{2n_{\alpha}\epsilon_{\alpha}}{1-D_{\alpha}}<1\) since \(\epsilon_{\alpha}<\frac{1-D_{\alpha}}{2n_{\alpha}}\).
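To make the arithmetic in Proposition 2.2 concrete, the following small Python sketch checks the inequality and evaluates \(\epsilon_{\alpha}\) and the contraction constant \(k_{\alpha}\). The three numbers used are purely hypothetical stand-ins; the actual rigorous bounds are computed in Section 12.

```python
# Sketch of the arithmetic in Proposition 2.2. The values of D, delta
# and n are hypothetical stand-ins for D_alpha, delta_alpha, n_alpha.
import math

D, delta, n = 0.5, 0.01, 2.0

assert D < 1 and delta < (1 - D) ** 2 / (4 * n)  # hypothesis of Proposition 2.2

# radius of the ball X_eps that G_alpha maps into itself
eps = (1 - D - math.sqrt((1 - D) ** 2 - 4 * delta * n)) / (2 * n)
assert abs((delta + n * eps ** 2) / (1 - D) - eps) < 1e-12  # fixed-point radius

k = 2 * n * eps / (1 - D)  # Lipschitz constant of G_alpha on X_eps
print(f"eps = {eps:.6f}, k = {k:.6f}")
assert k < 1  # G_alpha is a contraction on X_eps
```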
## 3 Clausen functions
We here give definitions and properties of the Clausen functions that are used in the construction of \(u_{\alpha}\) and when bounding \(n_{\alpha}\), \(\delta_{\alpha}\) and \(D_{\alpha}\). For more details about the Clausen functions see Appendix C.
For \(s>1\) the Clausen functions can be defined through their Fourier expansions as
\[C_{s}(x) =\sum_{n=1}^{\infty}\frac{\cos(nx)}{n^{s}},\] \[S_{s}(x) =\sum_{n=1}^{\infty}\frac{\sin(nx)}{n^{s}}.\]
For general \(s\) they are more conveniently defined by their relation to the polylogarithm through
\[C_{s}(x) =\frac{1}{2}\left(\mathrm{Li}_{s}(e^{ix})+\mathrm{Li}_{s}(e^{-ix} )\right)=\mathrm{Re}\left(\mathrm{Li}_{s}(e^{ix})\right),\] \[S_{s}(x) =\frac{1}{2}\left(\mathrm{Li}_{s}(e^{ix})-\mathrm{Li}_{s}(e^{-ix })\right)=\mathrm{Im}\left(\mathrm{Li}_{s}(e^{ix})\right).\]
They behave nicely with respect to the operator \(|D|^{\alpha}\), for which we have
\[|D|^{\alpha}C_{s}=-C_{s-\alpha}(x),\quad|D|^{\alpha}S_{s}=-S_{s-\alpha}(x).\]
In many cases we want to work with functions which are normalized to be zero at \(x=0\), for which we use the notation
\[\tilde{C}_{s}(x)=C_{s}(x)-C_{s}(0),\quad\tilde{C}_{s}^{(\beta)}(x)=C_{s}^{( \beta)}(x)-C_{s}^{(\beta)}(0).\]
Note that \(C_{s}(0)\) is in general only finite for \(s>1\), in which case we get directly from the Fourier expansion that \(C_{s}(0)=\zeta(s)\) and \(C_{s}^{(\beta)}(0)=\zeta^{(\beta)}(s)\). With this notation we get for the operator \(\mathcal{H}^{\alpha}\),
\[\mathcal{H}^{\alpha}[\tilde{C}_{s}](x)=-\tilde{C}_{s-\alpha}(x),\quad\mathcal{ H}^{\alpha}[S_{s}](x)=-S_{s-\alpha}(x).\]
From [22] we have the following expansion for \(C_{s}(x)\) and \(S_{s}(x)\), valid for \(|x|<2\pi\),
\[C_{s}(x) =\Gamma(1-s)\sin\left(\frac{\pi}{2}s\right)|x|^{s-1}+\sum_{m=0}^{ \infty}(-1)^{m}\zeta(s-2m)\frac{x^{2m}}{(2m)!};\] \[S_{s}(x) =\Gamma(1-s)\cos\left(\frac{\pi}{2}s\right)\mathrm{sgn}(x)|x|^{s- 1}+\sum_{m=0}^{\infty}(-1)^{m}\zeta(s-2m-1)\frac{x^{2m+1}}{(2m+1)!}.\]
For the functions \(C_{s}^{(\beta)}\) and \(S_{s}^{(\beta)}\) we will mainly make use of \(C_{2}^{(1)}(x)\) and \(C_{3}^{(1)}(x)\), for which we have the following expansions [3, Eq. 16], valid for \(|x|<2\pi\),
\[C_{2}^{(1)}(x)= \zeta^{(1)}(2)-\frac{\pi}{2}|x|\log|x|-(\gamma-1)\frac{\pi}{2}|x|+ \sum_{m=1}^{\infty}(-1)^{m}\zeta^{(1)}(2-2m)\frac{x^{2m}}{(2m)!};\] \[C_{3}^{(1)}(x)= \zeta^{(1)}(3)-\frac{1}{4}x^{2}\log^{2}|x|+\frac{3-2\gamma}{4}x^{ 2}\log|x|-\frac{36\gamma-12\gamma^{2}-24\gamma_{1}-42+\pi^{2}}{48}x^{2}\] \[+\sum_{m=2}^{\infty}(-1)^{m}\zeta^{(1)}(3-2m)\frac{x^{2m}}{(2m)!}.\]
Here \(\gamma_{n}\) denotes the \(n\)-th Stieltjes constant and \(\gamma=\gamma_{0}\) is Euler's constant. Bounds for the tails are given in Lemmas C.5 and C.6.
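As an illustration of these definitions (playing no role in the proofs), the Clausen functions can be evaluated through the polylogarithm using, for example, Python's mpmath library, whose polylog supports non-integer order. The sketch below also checks the leading terms of the expansion of \(C_{s}(x)\) at \(x=0\) given above.

```python
# Sketch: evaluating the Clausen functions via the polylogarithm with
# mpmath, and checking the expansion of C_s(x) at x = 0 to leading order.
from mpmath import mp

mp.dps = 30

def clausen_c(s, x):
    # C_s(x) = Re(Li_s(e^{ix}))
    return mp.re(mp.polylog(s, mp.exp(1j * x)))

def clausen_s(s, x):
    # S_s(x) = Im(Li_s(e^{ix}))
    return mp.im(mp.polylog(s, mp.exp(1j * x)))

s, x = mp.mpf("1.4"), mp.mpf("0.001")
# C_s(x) = Gamma(1-s)*sin(pi*s/2)*|x|^(s-1) + zeta(s) + O(x^2)
leading = mp.gamma(1 - s) * mp.sin(mp.pi * s / 2) * abs(x) ** (s - 1) + mp.zeta(s)
print(clausen_c(s, x))
print(leading)  # the two printed values agree up to O(x^2)
```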
## 4 Construction of \(u_{\alpha}\)
In this section we give the construction of the approximate solution \(u_{\alpha}\) for \(\alpha\in(-1,0)\). As a first step we determine the coefficient for the leading term in the asymptotic expansion.
**Lemma 4.1**.: _Let \(\alpha\in(-1,0)\) and assume that \(u\) is a bounded, even solution of Equation (5) with the asymptotic behavior_
\[u(x)=\nu_{\alpha}|x|^{-\alpha}+o(|x|^{-\alpha})\]
_close to zero, with \(\nu_{\alpha}\neq 0\). Then the coefficient is given by_
\[\nu_{\alpha}=\frac{2\Gamma(2\alpha)\cos\left(\pi\alpha\right)}{\Gamma(\alpha) \cos\left(\frac{\pi}{2}\alpha\right)}.\]
Proof.: We directly get
\[\frac{1}{2}u(x)^{2}=\frac{\nu_{\alpha}^{2}}{2}|x|^{-2\alpha}+o(|x|^{-2\alpha}).\]
To get the asymptotic behavior of \(\mathcal{H}^{\alpha}[u]\) we go through the Clausen function \(\tilde{C}_{1-\alpha}(x)\). Based on the asymptotic behavior of \(\tilde{C}_{1-\alpha}(x)\) we can write \(u\) as
\[u(x)=\frac{\nu_{\alpha}}{\Gamma(\alpha)\cos\left(\frac{\pi}{2}\alpha\right)} \tilde{C}_{1-\alpha}(x)+o(|x|^{-\alpha}).\]
This gives us
\[\mathcal{H}^{\alpha}[u](x) =-\frac{\nu_{\alpha}}{\Gamma(\alpha)\cos\left(\frac{\pi}{2}\alpha \right)}\tilde{C}_{1-2\alpha}(x)+\mathcal{H}^{\alpha}[o(|x|^{-\alpha})](x)\] \[=-\nu_{\alpha}\frac{\Gamma(2\alpha)\cos\left(\pi\alpha\right)}{ \Gamma(\alpha)\cos\left(\frac{\pi}{2}\alpha\right)}|x|^{-2\alpha}+o(|x|^{-2 \alpha})+\mathcal{H}^{\alpha}[o(|x|^{-\alpha})](x).\]
In a similar way as in [15, Lemma 4.1] it can be shown that \(\mathcal{H}^{\alpha}[o(|x|^{-\alpha})](x)=o(|x|^{-2\alpha})\), which implies the lemma.
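As a quick, non-rigorous sanity check of this formula, the sketch below evaluates \(\nu_{\alpha}\) at a few sample values of \(\alpha\) and confirms its positivity. Note that \(\Gamma(2\alpha)\) has a pole at \(\alpha=-1/2\) which is cancelled by \(\cos(\pi\alpha)=0\) there, so that point is avoided.

```python
# Quick numerical check (outside the proof) that nu_alpha > 0 on (-1, 0).
# The removable singularity at alpha = -1/2 is avoided in the samples.
import math

def nu(alpha):
    return (2 * math.gamma(2 * alpha) * math.cos(math.pi * alpha)
            / (math.gamma(alpha) * math.cos(math.pi * alpha / 2)))

for alpha in [-0.9, -0.75, -0.3, -0.1]:
    value = nu(alpha)
    print(f"alpha = {alpha:5.2f}: nu_alpha = {value:.6f}")
    assert value > 0
```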
In addition to having the correct asymptotic behavior we want \(u_{\alpha}\) to be a good approximate solution of (5), in the sense that we want the defect,
\[F_{\alpha}(x)=\frac{1}{w_{\alpha}(x)u_{\alpha}(x)}\left(\mathcal{H}^{\alpha}[ u_{\alpha}](x)+\frac{1}{2}u_{\alpha}(x)^{2}\right),\]
to be small for \(x\in[0,\pi]\). The hardest part is to make \(F_{\alpha}(x)\) small locally near the singularity at \(x=0\); this is done by studying the asymptotic behavior of \(\mathcal{H}^{\alpha}[u_{\alpha}](x)+\frac{1}{2}u_{\alpha}(x)^{2}\). Once the defect is sufficiently small near \(x=0\) it can be made small globally by adding a suitable trigonometric polynomial to \(u_{\alpha}\).
The construction is similar to that in [22], but requires more work to handle the limits \(\alpha\to-1\) and \(\alpha\to 0\). We take \(u_{\alpha}\) to be a combination of three parts:
1. The first term is \(a_{\alpha,0}\tilde{C}_{1-\alpha}\), where the coefficient is chosen to give the right asymptotic behavior according to Lemma 4.1.
2. The second part is chosen to make the defect small near \(x=0\); similarly to [22], it is given by a sum of Clausen functions.
3. The third part is chosen to make the defect small globally and is given by a trigonometric polynomial.
More precisely the approximation is given by
\[u_{\alpha}(x)=\sum_{j=0}^{N_{\alpha,0}}a_{\alpha,j}\tilde{C}_{1-\alpha+jp_{ \alpha}}(x)+\sum_{n=1}^{N_{\alpha,1}}b_{\alpha,n}(\cos(nx)-1). \tag{12}\]
The values of \(a_{\alpha,j}\) for \(j\geq 1\) and \(p_{\alpha}\) will be taken to make the defect small near \(x=0\) and \(b_{\alpha,n}\) is taken to make the defect small globally.
While the general form of the approximation is the same for all values of \(\alpha\), the limits \(\alpha\to-1\) and \(\alpha\to 0\) require a slightly different approach. For that reason we split it into three cases
1. \(\alpha\in(-1,-1+\delta_{1})=I_{1}\);
2. \(\alpha\in[-1+\delta_{1},-\delta_{2}]=I_{2}\);
3. \(\alpha\in(-\delta_{2},0)=I_{3}\),
where the precise values of \(\delta_{1}\) and \(\delta_{2}\) will be determined later. We start with the approximation for the interval \(I_{2}\), which involves the fewest technical details. We then discuss the alterations required to handle \(I_{1}\) and \(I_{3}\).
### Construction of \(u_{\alpha}\) for \(I_{2}\)
The asymptotic behavior of the approximation (12) is determined by \(a_{\alpha,0}\) and for it to agree with Lemma 4.1 we have to take
\[a_{\alpha,0}=\frac{2\Gamma(2\alpha)\cos{(\pi\alpha)}}{\Gamma(\alpha)^{2}\cos {\left(\frac{\pi}{2}\alpha\right)}^{2}}. \tag{13}\]
The behavior of \(a_{\alpha,0}\) as a function of \(\alpha\) is shown in Figure 1a.
To choose the other parameters we have to study the defect of the approximation. The defect is given by \(\mathcal{H}^{\alpha}[u_{\alpha}]+\frac{1}{2}u_{\alpha}^{2}\), where we have
\[\mathcal{H}^{\alpha}[u_{\alpha}](x)=-\sum_{j=0}^{N_{\alpha,0}}a_{\alpha,j} \tilde{C}_{1-2\alpha+jp_{\alpha}}(x)-\sum_{n=1}^{N_{\alpha,1}}b_{\alpha,n}n^{ \alpha}(\cos(nx)-1). \tag{14}\]
For the defect close to \(x=0\) we want to study the asymptotic expansion. We get the asymptotic behavior of \(u_{\alpha}\) and \(\mathcal{H}^{\alpha}[u_{\alpha}]\) in the following lemma, whose proof we omit since it follows directly from the expansions of the Clausen and cosine functions.
**Lemma 4.2**.: _Let \(u_{\alpha}\) be as in (12). Then the following asymptotic expansions hold near \(x=0\):_
\[u_{\alpha}(x)=\sum_{j=0}^{N_{\alpha,0}}a_{\alpha,j}^{0}|x|^{-\alpha+jp_{ \alpha}}+\sum_{m=1}^{\infty}\frac{(-1)^{m}}{(2m)!}\left(\sum_{j=0}^{N_{\alpha,0}}a_{\alpha,j}\zeta(1-\alpha+jp_{\alpha}-2m)+\sum_{n=1}^{N_{\alpha,1}}b_{ \alpha,n}n^{2m}\right)x^{2m} \tag{15}\]
\[\mathcal{H}^{\alpha}[u_{\alpha}](x)=-\sum_{j=0}^{N_{\alpha,0}}A_{\alpha,j}^{0} |x|^{-2\alpha+jp_{\alpha}}-\sum_{m=1}^{\infty}\frac{(-1)^{m}}{(2m)!}\left( \sum_{j=0}^{N_{\alpha,0}}a_{\alpha,j}\zeta(1-2\alpha+jp_{\alpha}-2m)+\sum_{n= 1}^{N_{\alpha,1}}b_{\alpha,n}n^{2m+\alpha}\right)x^{2m} \tag{16}\]
_where_
\[a^{0}_{\alpha,j} =\Gamma(\alpha-jp_{\alpha})\cos\left(\frac{\pi}{2}(\alpha-jp_{\alpha })\right)a_{\alpha,j};\] \[A^{0}_{\alpha,j} =\Gamma(2\alpha-jp_{\alpha})\cos\left(\frac{\pi}{2}(2\alpha-jp_{ \alpha})\right)a_{\alpha,j}.\]
From this we can compute the expansion of \(\mathcal{H}^{\alpha}[u_{\alpha}]+\frac{1}{2}u_{\alpha}^{2}\), given in the following Lemma. Again we omit the proof which only involves standard calculations.
**Lemma 4.3**.: _Let \(u_{\alpha}\) be as in (12). Then we have the following asymptotic expansion near \(x=0\)_
\[\begin{split}\mathcal{H}^{\alpha}[u_{\alpha}]+\frac{1}{2}u_{\alpha}^{2} &=\left(\frac{1}{2}(a^{0}_{\alpha,0})^{2}-A^{0}_{\alpha,0} \right)|x|^{-2\alpha}+\left(a^{0}_{\alpha,0}a^{0}_{\alpha,1}-A^{0}_{\alpha,1} \right)|x|^{-2\alpha+p_{\alpha}}\\ &\quad+\sum_{k=2}^{N_{\alpha,0}}\left(\frac{1}{4}((-1)^{k}+1)(a^{ 0}_{\alpha,\lfloor\frac{k}{2}\rfloor})^{2}+\sum_{j=0}^{\lfloor\frac{k-1}{2} \rfloor}a^{0}_{\alpha,j}a^{0}_{\alpha,k-j}-A^{0}_{\alpha,k}\right)|x|^{-2 \alpha+kp_{\alpha}}\\ &\quad+\sum_{k=N_{\alpha,0}+1}^{2N_{\alpha,0}}\left(\frac{1}{4}(( -1)^{k}+1)(a^{0}_{\alpha,\lfloor\frac{k}{2}\rfloor})^{2}+\sum_{j=k-N_{\alpha,0}}^{\lfloor\frac{k-1}{2}\rfloor}a^{0}_{\alpha,j}a^{0}_{\alpha,k-j}\right)|x|^{-2 \alpha+kp_{\alpha}}\\ &\quad+\sum_{j=0}^{N_{\alpha,0}}\sum_{m=1}^{\infty}a^{0}_{\alpha,j}c_{\alpha,m}|x|^{-\alpha+jp_{\alpha}+2m}+\frac{1}{2}\Big{(}\sum_{m=1}^{ \infty}c_{\alpha,m}x^{2m}\Big{)}^{2}-\sum_{m=1}^{\infty}C_{\alpha,m}x^{2m}, \end{split}\]

where \(c_{\alpha,m}\) and \(C_{\alpha,m}\) denote the coefficients of \(x^{2m}\) in the expansions (15) and (16), so that \(u_{\alpha}(x)=\sum_{j}a^{0}_{\alpha,j}|x|^{-\alpha+jp_{\alpha}}+\sum_{m}c_{\alpha,m}x^{2m}\) and \(\mathcal{H}^{\alpha}[u_{\alpha}](x)=-\sum_{j}A^{0}_{\alpha,j}|x|^{-2\alpha+jp_{\alpha}}-\sum_{m}C_{\alpha,m}x^{2m}\).

To make the defect small close to \(x=0\) we want to choose the parameters so that the leading coefficients in this expansion vanish. The coefficient of \(|x|^{-2\alpha}\) vanishes precisely for the choice (13) of \(a_{\alpha,0}\). Since \(a^{0}_{\alpha,1}\) and \(A^{0}_{\alpha,1}\) are both proportional to \(a_{\alpha,1}\), the coefficient of \(|x|^{-2\alpha+p_{\alpha}}\) vanishes, independently of the value of \(a_{\alpha,1}\), if \(p_{\alpha}\) is taken to be a solution of

\[a^{0}_{\alpha,0}\Gamma(\alpha-p_{\alpha})\cos\left(\frac{\pi}{2}(\alpha-p_{ \alpha})\right)-\Gamma(2\alpha-p_{\alpha})\cos\left(\frac{\pi}{2}(2\alpha-p_{ \alpha})\right)=0. \tag{17}\]

The behavior of \(p_{\alpha}\) as a function of \(\alpha\) is shown in Figure 1b. We then want to choose the remaining coefficients \(\{a_{\alpha,j}\}_{1\leq j\leq N_{\alpha,0}}\) to make as many as possible of the subsequent coefficients zero. There are however several complications with this:
1. The coefficients in the expansion might also depend on \(b_{\alpha,n}\) which we have yet to determine;
2. It is not clear if we indeed can pick \(a_{\alpha,j}\) to make these coefficients zero;
3. Even if we can pick \(a_{\alpha,j}\) to make the coefficients zero it is not clear how we would find these values.
The first point is handled by simply ignoring the values of \(b_{\alpha,n}\) at this stage, taking them as zero. This means that some of the coefficients in the expansion might not be zero in the end, once the values for \(b_{\alpha,n}\) are taken into account. This however turns out to not be an issue, which is related to how to handle the other two points. We summarize it in the following remark
**Remark 4.4**.: _Recall that \(u_{\alpha}\) is only an approximate solution to the equation. In the end the only important part is that the defect is sufficiently small. This means that we don't need to take exact values for \(p_{\alpha}\), \(\{a_{\alpha,j}\}_{1\leq j\leq N_{\alpha,0}}\) or \(\{b_{\alpha,n}\}_{1\leq n\leq N_{\alpha,1}}\). We are free to use any means, in particular non-rigorous numerical methods, for finding values which give us a small defect. That the defect indeed is small is then later rigorously verified._
_Note that this does not apply to the parameter \(a_{\alpha,0}\). For the defect to be bounded near \(x=0\) we need the leading coefficient in the expansion to be exactly zero, it is therefore not enough to use a numerical approximation of \(a_{\alpha,0}\)._
In light of this remark, points 2 and 3 above can be dealt with by setting up the problem as a non-linear system of equations and finding a solution using non-rigorous numerical methods. The numerical procedure might or might not converge and the solution it gives might or might not correspond to a true solution, but since we only need approximate values this doesn't matter. That the solutions we get are good enough will be verified when bounding the defect. Similarly, we don't need to take \(p_{\alpha}\) to be an exact solution of (17), a numerical approximation of the solution suffices.
Once \(p_{\alpha}\) and \(\{a_{\alpha,j}\}_{1\leq j\leq N_{\alpha,0}}\) have been picked in the above way, to make the asymptotic defect small, we wish to find \(\{b_{\alpha,n}\}_{1\leq n\leq N_{\alpha,1}}\) to make the defect small globally. This is done by taking \(N_{\alpha,1}\) equally spaced points \(\{x_{n}\}_{1\leq n\leq N_{\alpha,1}}\) on the interval \((0,\pi)\) and numerically solving the non-linear system
\[\mathcal{H}^{\alpha}[u_{\alpha}](x_{n})+\frac{1}{2}u_{\alpha}(x_{n})^{2}=0 \quad\text{ for }1\leq n\leq N_{\alpha,1}.\]
So far we have not said anything about the values for \(N_{\alpha,0}\) and \(N_{\alpha,1}\). These are tuning parameters: higher values will typically lead to a smaller defect but also to a higher computational cost. For \(N_{\alpha,0}\) this is only true up to some point where the system for determining the coefficients starts to become ill-conditioned. The values are chosen to make sure that the inequality in Proposition 2.2 holds and depend on \(\alpha\). Some more details are given in Section 12.2.
Figure 1: Values of \(a_{\alpha,0}\), defined by Equation (13), and \(p_{\alpha}\), defined by Equation (17), as functions of \(\alpha\).
### Construction of \(u_{\alpha}\) for \(I_{1}\)
The approximation in the previous section encounters several issues as \(\alpha\) moves close to \(-1\). To begin with, \(a_{\alpha,0}\) diverges towards negative infinity, as can be seen in Figure 1a. Furthermore, \(p_{\alpha}\) tends towards zero, as can be seen in Figure 1b, which means that the parameters for all the Clausen functions converge towards the same value.
We can get an idea for how to handle these issues by numerically studying the behavior of
\[\sum_{j=0}^{N_{\alpha,0}}a_{\alpha,j}\tilde{C}_{1-\alpha+jp_{\alpha}}(x), \tag{18}\]
as \(\alpha\) goes to \(-1\). The first important observation is that as \(\alpha\to-1\) the parameter \(a_{\alpha,1}\) goes to infinity in such a way that \(a_{\alpha,0}+a_{\alpha,1}\) remains bounded, while \(\{a_{\alpha,j}\}_{2\leq j\leq N_{\alpha,0}}\) remain individually bounded. The second important observation is that as \(\alpha\to-1\) the parameter \(p_{\alpha}\) behaves like \(1+\alpha+(1+\alpha)^{2}/2+\mathcal{O}((1+\alpha)^{3})\). This hints at a solution to the problem that \(a_{\alpha,0}\tilde{C}_{1-\alpha}(x)\) doesn't converge: by taking \(a_{\alpha,1}=-a_{\alpha,0}\) and \(p_{\alpha}=1+\alpha+(1+\alpha)^{2}/2\) the first two terms in the above sum become
\[a_{\alpha,0}(\tilde{C}_{1-\alpha}(x)-\tilde{C}_{2+(1+\alpha)^{2}/2}(x)).\]
For which we have the following lemma.
**Lemma 4.5**.: _The function_
\[a_{\alpha,0}(\tilde{C}_{1-\alpha}(x)-\tilde{C}_{2+(1+\alpha)^{2}/2}(x)),\]
_with \(a_{\alpha,0}\) as in (13), converges to \(\frac{2}{\pi^{2}}\tilde{C}_{2}^{(1)}(x)\) pointwise in \(x\) as \(\alpha\to-1\). Moreover, for \(\alpha\in\left(-1,-\frac{3}{4}\right)\) it satisfies the inequality_
\[a_{\alpha,0}(\tilde{C}_{1-\alpha}(x)-\tilde{C}_{2+(1+\alpha)^{2}/2}(x))\geq \frac{2}{\pi^{2}}\tilde{C}_{2}^{(1)}(x).\]
Proof.: We have
\[a_{\alpha,0}(\tilde{C}_{1-\alpha}(x)-\tilde{C}_{2+(1+\alpha)^{2}/2}(x))=-(1+ \alpha)a_{\alpha,0}\cdot\frac{\tilde{C}_{2+(1+\alpha)^{2}/2}(x)-\tilde{C}_{1- \alpha}(x)}{1+\alpha}.\]
Taking the limit as \(\alpha\to-1\) for the first factor we get \(\lim_{\alpha\to-1}-(1+\alpha)a_{\alpha,0}=\frac{2}{\pi^{2}}.\) The second factor converges to the derivative \(\tilde{C}_{2}^{(1)}(x)\). This gives us the pointwise convergence.
To prove the inequality it is enough to prove that the left-hand side is increasing in \(\alpha\). For that it is enough to prove that both factors are positive and increasing in \(\alpha\).
The proof that \(-(1+\alpha)a_{\alpha,0}\) is positive and increasing in \(\alpha\) is computer-assisted. It is done by rigorously computing a lower bound of the value and the derivative for \(-1<\alpha<-\frac{3}{4}\) and asserting that these lower bounds are positive.
For the Clausen-factor we let \(h=1+\alpha\) and expand the Clausen functions at \(s=2\), giving us
\[\tilde{C}_{1-\alpha}(x) =\sum_{n=0}^{\infty}\frac{\tilde{C}_{2}^{(n)}(x)}{n!}(-h)^{n},\] \[\tilde{C}_{2+(1+\alpha)^{2}/2}(x) =\sum_{n=0}^{\infty}\frac{\tilde{C}_{2}^{(n)}(x)}{n!}\left(\frac{ h^{2}}{2}\right)^{n}.\]
From which we get
\[\frac{\tilde{C}_{2+(1+\alpha)^{2}/2}(x)-\tilde{C}_{1-\alpha}(x)}{1+\alpha}= \sum_{n=1}^{\infty}\frac{\tilde{C}_{2}^{(n)}(x)}{n!}h^{n-1}\left(\frac{h^{n}}{ 2^{n}}+(-1)^{n-1}\right). \tag{19}\]
For \(-1<\alpha<0\) we have \(\left|\frac{h^{n}}{2^{n}}\right|<1\) and the sign of the factor \(\frac{h^{n}}{2^{n}}+(-1)^{n-1}\) is therefore the same as for \((-1)^{n-1}\). Furthermore, we have
\[\tilde{C}_{2}^{(n)}(x)=(-1)^{n}\sum_{k=1}^{\infty}\frac{\cos(kx)-1}{k^{2}}\log^{n}k\]
and since \(\cos(kx)-1\leq 0\) this has the same sign as \((-1)^{n-1}\). It follows that all the terms in (19) are positive, hence the sum is positive and increasing in \(\alpha\) for \(-1<\alpha<0\).
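The convergence in Lemma 4.5 can also be illustrated numerically; the short sketch below evaluates both sides at a sample point, with \(\tilde{C}_{2}^{(1)}\) obtained by numerically differentiating the polylogarithm in its order using mpmath. This is purely illustrative and separate from the rigorous enclosures of Appendix C.

```python
# Numerical illustration (outside the proof) of the convergence in
# Lemma 4.5, with all functions evaluated through mpmath's polylog.
from mpmath import mp

mp.dps = 30

def clausen_tilde(s, x):
    # \tilde{C}_s(x) = Re(Li_s(e^{ix})) - zeta(s), valid for s > 1
    return mp.re(mp.polylog(s, mp.exp(1j * x))) - mp.zeta(s)

def a0(alpha):
    # a_{alpha,0} from (13)
    return (2 * mp.gamma(2 * alpha) * mp.cos(mp.pi * alpha)
            / (mp.gamma(alpha) ** 2 * mp.cos(mp.pi * alpha / 2) ** 2))

def lhs(alpha, x):
    return a0(alpha) * (clausen_tilde(1 - alpha, x)
                        - clausen_tilde(2 + (1 + alpha) ** 2 / 2, x))

def rhs(x):
    # (2/pi^2) * \tilde{C}_2^(1)(x), via numerical differentiation in s
    g = lambda s: mp.re(mp.polylog(s, mp.exp(1j * x))) - mp.zeta(s)
    return 2 / mp.pi ** 2 * mp.diff(g, 2)

x = mp.mpf(1)
for alpha in ["-0.9", "-0.99", "-0.999"]:
    print(alpha, lhs(mp.mpf(alpha), x), rhs(x))
```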
If we subtract \(a_{\alpha,0}(\tilde{C}_{1-\alpha}(x)-\tilde{C}_{2+(1+\alpha)^{2}/2}(x))\) from (18), the remaining part is

\[(a_{\alpha,0}+a_{\alpha,1})\tilde{C}_{1-\alpha+p_{\alpha}}(x)+\sum_{j=2}^{N_ {\alpha,0}}a_{\alpha,j}\tilde{C}_{1-\alpha+jp_{\alpha}}(x).\]

Numerically we can see that this converges to a bounded function as \(\alpha\) goes towards \(-1\). Since we only need an approximation of the limit we can simply fix some \(\alpha>-1\) and use the values for this \(\alpha\).
More precisely we construct the approximation for \(I_{1}\) in the following way. First fix \(\hat{\alpha}\) close to \(-1\) but such that \(\alpha<\hat{\alpha}\) for all \(\alpha\in I_{1}\). Then compute \(\{a_{\hat{\alpha},j}\}_{0\leq j\leq N_{\hat{\alpha},0}}\) and \(p_{\hat{\alpha}}\) using the approach in Section 4.1. We then take
\[u_{\alpha}(x)=a_{\alpha,0}(\tilde{C}_{1-\alpha}(x)-\tilde{C}_{2 +(1+\alpha)^{2}/2}(x))+(a_{\hat{\alpha},0}+a_{\hat{\alpha},1})\tilde{C}_{1- \hat{\alpha}+p_{\hat{\alpha}}}(x)\\ +\sum_{j=2}^{N_{\hat{\alpha},0}}a_{\hat{\alpha},j}\tilde{C}_{1- \hat{\alpha}+jp_{\hat{\alpha}}}(x)+\sum_{n=1}^{N_{-1,1}}b_{-1,n}(\cos(nx)-1) \tag{20}\]
The parameters \(\{b_{-1,n}\}_{1\leq n\leq N_{-1,1}}\) are determined by considering the defect at the limit \(\alpha\to-1\), where we from Lemma 4.5 have
\[u_{-1}(x)=\frac{2}{\pi^{2}}\tilde{C}_{2}^{(1)}(x)+(a_{\hat{\alpha},0}+a_{\hat {\alpha},1})\tilde{C}_{1-\hat{\alpha}+p_{\hat{\alpha}}}(x)+\sum_{j=2}^{N_{ \hat{\alpha},0}}a_{\hat{\alpha},j}\tilde{C}_{1-\hat{\alpha}+jp_{\hat{\alpha}} }(x)+\sum_{n=1}^{N_{-1,1}}b_{-1,n}(\cos(nx)-1). \tag{21}\]
In the same way as in Section 4.1 we take \(\{b_{-1,n}\}_{1\leq n\leq N_{-1,1}}\) so that the defect is minimized on \(N_{-1,1}\) equally spaced points on the interval \((0,\pi)\). Note that by Lemma 4.5, \(u_{-1}(x)\) gives a lower bound of \(u_{\alpha}(x)\).
With this approximation we get
\[\mathcal{H}^{\alpha}[u_{\alpha}](x)=-a_{\alpha,0}(\tilde{C}_{1-2 \alpha}(x)-\tilde{C}_{2-\alpha+(1+\alpha)^{2}/2}(x))-(a_{\hat{\alpha},0}+a_{ \hat{\alpha},1})\tilde{C}_{1-\alpha-\hat{\alpha}+p_{\hat{\alpha}}}(x)\\ -\sum_{j=2}^{N_{\hat{\alpha},0}}a_{\hat{\alpha},j}\tilde{C}_{1-\alpha- \hat{\alpha}+jp_{\hat{\alpha}}}(x)-\sum_{n=1}^{N_{-1,1}}b_{-1,n}n^{\alpha}( \cos(nx)-1). \tag{22}\]
Note that since \(a_{\alpha,0}\to-\infty\) as \(\alpha\to-1\) we cannot compute a finite enclosure of \(a_{\alpha,0}\) valid for an interval of the form \((-1,-1+\delta)\). When computing enclosures we therefore have to treat
\[a_{\alpha,0}(\tilde{C}_{1-\alpha}(x)-\tilde{C}_{2+(1+\alpha)^{2}/2}(x))\quad \text{and}\quad a_{\alpha,0}(\tilde{C}_{1-2\alpha}(x)-\tilde{C}_{2-\alpha+(1+ \alpha)^{2}/2}(x))\]
as one part, and explicitly handle the removable singularity.
The asymptotic behavior is given in the following lemma; again we omit the proof since it follows directly from the expansions of the Clausen functions.
**Lemma 4.6**.: _The approximation for \(\alpha\in I_{1}\), with \(u_{\alpha}\) given by (20) and \(\mathcal{H}^{\alpha}[u_{\alpha}]\) given by (22) has the asymptotic expansions_
\[u_{\alpha}(x)= a_{\alpha,0}\left(\Gamma(\alpha)\cos\left(\frac{\pi}{2}\alpha \right)-\Gamma(-1-(1+\alpha)^{2}/2)\cos\left(\frac{\pi}{2}(1+(1+\alpha)^{2}/2 )\right)|x|^{1+\alpha+(1+\alpha)^{2}/2}\right)|x|^{-\alpha}\] \[+a_{\alpha,0}\sum_{m=1}^{\infty}\frac{(-1)^{m}}{(2m)!}\left(\zeta (1-\alpha-2m)-\zeta(2+(1+\alpha)^{2}/2-2m)\right)x^{2m}\] \[+\sum_{j=1}^{N_{\hat{\alpha},0}}\hat{a}_{\alpha,j}^{0}|x|^{-\hat{ \alpha}+jp_{\hat{\alpha}}}+\sum_{m=1}^{\infty}\frac{(-1)^{m}}{(2m)!}\left(\sum _{j=1}^{N_{\hat{\alpha},0}}a_{\hat{\alpha},j}\zeta(1-\hat{\alpha}+jp_{\hat{ \alpha}}-2m)+\sum_{n=1}^{N_{-1,1}}b_{-1,n}n^{2m}\right)x^{2m}\]
_and_
\[\mathcal{H}^{\alpha}u_{\alpha}(x)= -a_{\alpha,0}\left(\Gamma(2\alpha)\cos\left(\pi\alpha\right)- \Gamma(-1+\alpha-(1+\alpha)^{2}/2)\cos\left(\frac{\pi}{2}(1-\alpha+(1+\alpha)^ {2}/2)\right)|x|^{1+\alpha+(1+\alpha)^{2}/2}\right)|x|^{-2\alpha}\] \[-a_{\alpha,0}\sum_{m=1}^{\infty}\frac{(-1)^{m}}{(2m)!}\left( \zeta(1-2\alpha-2m)-\zeta(2-\alpha+(1+\alpha)^{2}/2-2m)\right)x^{2m}\] \[-\sum_{j=1}^{N_{\hat{\alpha},0}}\hat{A}_{\alpha,j}^{0}|x|^{- \alpha-\hat{\alpha}+jp_{\hat{\alpha}}}-\sum_{m=1}^{\infty}\frac{(-1)^{m}}{(2m )!}\left(\sum_{j=1}^{N_{\hat{\alpha},0}}a_{\hat{\alpha},j}\zeta(1-\alpha-\hat{ \alpha}+jp_{\hat{\alpha}}-2m)+\sum_{n=1}^{N_{-1,1}}b_{-1,n}n^{2m+ \alpha}\right)x^{2m},\]
_where_
\[\hat{a}_{\alpha,j}^{0} =\Gamma(\hat{\alpha}-jp_{\hat{\alpha}})\cos\left(\frac{\pi}{2}( \hat{\alpha}-jp_{\hat{\alpha}})\right)a_{\hat{\alpha},j};\] \[\hat{A}_{\alpha,j}^{0} =\Gamma(\alpha+\hat{\alpha}-jp_{\hat{\alpha}})\cos\left(\frac{ \pi}{2}(\alpha+\hat{\alpha}-jp_{\hat{\alpha}})\right)a_{\hat{\alpha},j}.\]
There are a couple of things that make these expansions tricky to handle. To begin with, the terms \(a_{\alpha,0}(\zeta(1-\alpha-2m)-\zeta(2+(1+\alpha)^{2}/2-2m))\) in \(u_{\alpha}\) have removable singularities at \(\alpha=-1\); these are handled using the approach described in Appendix A. For \(\mathcal{H}^{\alpha}[u_{\alpha}]\) the exponent for the term
\[-a_{\alpha,0}\left(\Gamma(2\alpha)\cos\left(\pi\alpha\right)-\Gamma(-1+\alpha -(1+\alpha)^{2}/2)\cos\left(\frac{\pi}{2}(1-\alpha+(1+\alpha)^{2}/2)\right)| x|^{1+\alpha+(1+\alpha)^{2}/2}\right)|x|^{-2\alpha}\]
approaches \(2\) and its interaction with the \(x^{2}\) term
\[-\frac{1}{2}a_{\alpha,0}\left(\zeta(1-2\alpha-2)-\zeta(2-\alpha+(1+\alpha)^{2} /2-2)\right)x^{2}\]
needs special care. A similar thing happens for \(\hat{A}_{\alpha,j}^{0}|x|^{-\alpha-\hat{\alpha}+jp_{\hat{\alpha}}}\) and the corresponding \(x^{2}\) terms. How this is handled is discussed in detail in Section 10.2 and Appendix E. Bounding the tails of the sum
\[\sum_{m=1}^{\infty}\frac{(-1)^{m}}{(2m)!}a_{\alpha,0}\left(\zeta(1-\alpha-2m) -\zeta(2+(1+\alpha)^{2}/2-2m)\right)x^{2m}\]
and the corresponding one for \(\mathcal{H}^{\alpha}[u_{\alpha}]\) requires a bit of extra work; this is done in Lemmas C.7 and C.8.
For the precise values of \(\hat{\alpha}\), \(N_{\hat{\alpha},0}\) and \(N_{-1,1}\) see Section 12.1.
### Construction of \(u_{\alpha}\) for \(I_{3}\)
Using the approximation from Section 4.1 and letting \(\alpha\) tend to \(0\) it turns out that we need fewer and fewer terms to get a small defect. For \(\alpha\) close to \(0\) (around \(-1/6\) or closer) it is enough to take \(N_{\alpha,0}=2\) and \(N_{\alpha,1}=0\), giving us the approximation
\[u_{\alpha}(x)=\sum_{j=0}^{2}a_{\alpha,j}\tilde{C}_{1-\alpha+jp_{\alpha}}(x) \tag{23}\]
\[\mathcal{H}^{\alpha}[u_{\alpha}](x)=-\sum_{j=0}^{2}a_{\alpha,j}\tilde{C}_{1-2 \alpha+jp_{\alpha}}(x). \tag{24}\]
With this approximation the defect tends to zero as \(\alpha\to 0\), which is good. However, in Section 11.2.3 we will see that the value of \(D_{\alpha}\) in Proposition 2.2 tends to \(1\), meaning that \((1-D_{\alpha})^{2}/4n_{\alpha}\) also tends to zero. This means that we can never compute an enclosure for \(D_{\alpha}\) and \(\delta_{\alpha}\) on any interval \((-\delta_{2},0)\) such that the inequality in Proposition 2.2 holds. Instead, we compute expansions in \(\alpha\) at \(\alpha=0\) of both \(D_{\alpha}\) and \(\delta_{\alpha}\), using these expansions we can then prove that the inequality holds on the interval \(I_{3}\).
In this case it is not enough to use numerical approximations of \(a_{\alpha,1}\), \(a_{\alpha,2}\) and \(p_{\alpha}\); we need to take into account their dependence on \(\alpha\). For \(p_{\alpha}\) we use the defining equation (17). As \(\alpha\to 0\) we have \(p_{\alpha}\to 1.426\ldots\); this gives us that the four leading terms (in \(x\)) in the expansion from Lemma 4.3 are given by, ordered by their exponents,
\[\left(\frac{1}{2}(a_{\alpha,0}^{0})^{2}-A_{\alpha,0}^{0}\right) \left|x\right|^{-2\alpha},\] \[\left(a_{\alpha,0}^{0}a_{\alpha,1}^{0}-A_{\alpha,1}^{0}\right) \left|x\right|^{-2\alpha+p_{\alpha}},\] \[\frac{1}{2}\left(\sum_{j=0}^{2}a_{\alpha,j}\zeta(-1-2\alpha+jp_{ \alpha})\right)x^{2},\] \[-\frac{1}{2}a_{\alpha,0}^{0}\left(\sum_{j=0}^{2}a_{\alpha,j}\zeta (-1-\alpha+jp_{\alpha})\right)x^{2-\alpha}.\]
The choice of \(a_{\alpha,0}\) makes the first term zero and the choice of \(p_{\alpha}\) makes the second term zero. We then want to take \(a_{\alpha,1}\) and \(a_{\alpha,2}\) to make the third and fourth term zero. Solving for \(a_{\alpha,1}\) and \(a_{\alpha,2}\) we get the linear system
\[\begin{cases}a_{\alpha,1}\zeta(-1-2\alpha+p_{\alpha})+a_{\alpha,2}\zeta(-1-2 \alpha+2p_{\alpha})&=-a_{\alpha,0}\zeta(-1-2\alpha)\\ a_{\alpha,1}\zeta(-1-\alpha+p_{\alpha})+a_{\alpha,2}\zeta(-1-\alpha+2p_{\alpha}) &=-a_{\alpha,0}\zeta(-1-\alpha)\end{cases},\]
with the solution
\[\begin{cases}a_{\alpha,1}&=a_{\alpha,0}\frac{\zeta(-1-\alpha)\zeta(-1-2\alpha +2p_{\alpha})-\zeta(-1-2\alpha)\zeta(-1-\alpha+2p_{\alpha})}{\zeta(-1-2\alpha +p_{\alpha})\zeta(-1-\alpha+2p_{\alpha})-\zeta(-1-2\alpha+2p_{\alpha})\zeta(- 1-\alpha+p_{\alpha})}\\ a_{\alpha,2}&=-a_{\alpha,0}\frac{\zeta(-1-\alpha)\zeta(-1-2\alpha+p_{\alpha}) -\zeta(-1-2\alpha)\zeta(-1-\alpha+p_{\alpha})}{\zeta(-1-2\alpha+p_{\alpha}) \zeta(-1-\alpha+2p_{\alpha})-\zeta(-1-2\alpha+2p_{\alpha})\zeta(-1-\alpha+ p_{\alpha})}\end{cases}. \tag{25}\]
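For concreteness, the following sketch evaluates the closed-form solution (25) with mpmath and checks that it solves the linear system. The value used for \(p_{\alpha}\) is a rough numerical stand-in (as noted above, \(p_{\alpha}\to 1.426\ldots\) as \(\alpha\to 0\)), not a rigorous root of (17).

```python
# Sketch of evaluating (25) with mpmath and cross-checking it against
# the 2x2 linear system it solves. The value of p below is a rough
# numerical stand-in for p_alpha, not a rigorous root of (17).
from mpmath import mp

mp.dps = 30
z = mp.zeta

alpha, p = mp.mpf("-0.05"), mp.mpf("1.43")
a0 = (2 * mp.gamma(2 * alpha) * mp.cos(mp.pi * alpha)
      / (mp.gamma(alpha) ** 2 * mp.cos(mp.pi * alpha / 2) ** 2))

det = (z(-1 - 2 * alpha + p) * z(-1 - alpha + 2 * p)
       - z(-1 - 2 * alpha + 2 * p) * z(-1 - alpha + p))
a1 = a0 * (z(-1 - alpha) * z(-1 - 2 * alpha + 2 * p)
           - z(-1 - 2 * alpha) * z(-1 - alpha + 2 * p)) / det
a2 = -a0 * (z(-1 - alpha) * z(-1 - 2 * alpha + p)
            - z(-1 - 2 * alpha) * z(-1 - alpha + p)) / det

# residuals of the linear system; both should vanish up to rounding
r1 = a1 * z(-1 - 2 * alpha + p) + a2 * z(-1 - 2 * alpha + 2 * p) + a0 * z(-1 - 2 * alpha)
r2 = a1 * z(-1 - alpha + p) + a2 * z(-1 - alpha + 2 * p) + a0 * z(-1 - alpha)
print(a1, a2, r1, r2)
```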
The following lemma gives us information about the asymptotic behavior of \(a_{\alpha,j}\).
**Lemma 4.7**.: _The expansion of \(a_{\alpha,0}\) at \(\alpha=0\) is given by_
\[a_{\alpha,0}=\alpha+\mathcal{O}(\alpha^{3}).\]
_For \(a_{\alpha,1}\) and \(a_{\alpha,2}\) it is given by_
\[a_{\alpha,j}=a_{\alpha,j,1}\alpha+\mathcal{O}(\alpha^{2})\]
_with_
\[a_{\alpha,1,1}=\lim_{\alpha\to 0}\frac{\zeta(-1-\alpha)\zeta(-1-2\alpha+2p_{ \alpha})-\zeta(-1-2\alpha)\zeta(-1-\alpha+2p_{\alpha})}{\zeta(-1-2\alpha+p_{ \alpha})\zeta(-1-\alpha+2p_{\alpha})-\zeta(-1-2\alpha+2p_{\alpha})\zeta(-1- \alpha+p_{\alpha})}\]
_and_
\[a_{\alpha,2,1}=\lim_{\alpha\to 0}\frac{\zeta(-1-2\alpha)\zeta(-1-\alpha+p_{ \alpha})-\zeta(-1-\alpha)\zeta(-1-2\alpha+p_{\alpha})}{\zeta(-1-2\alpha+p_{ \alpha})\zeta(-1-\alpha+2p_{\alpha})-\zeta(-1-2\alpha+2p_{\alpha})\zeta(-1- \alpha+p_{\alpha})}.\]
Proof.: The expansion for \(a_{\alpha,0}\) follows from Equation (13) by using that \(\Gamma(\alpha)=\alpha^{-1}-\gamma+\mathcal{O}(\alpha)\), giving us

\[a_{\alpha,0}=\frac{2\left((2\alpha)^{-1}-\gamma+\mathcal{O}(\alpha)\right)(1+ \mathcal{O}(\alpha^{2}))}{(\alpha^{-1}-\gamma+\mathcal{O}(\alpha))^{2}(1+ \mathcal{O}(\alpha^{2}))}=\frac{\alpha^{-1}-2\gamma+\mathcal{O}(\alpha)}{ \alpha^{-2}-2\gamma\alpha^{-1}+\mathcal{O}(1)}=\alpha\frac{1-2\gamma\alpha+ \mathcal{O}(\alpha^{2})}{1-2\gamma\alpha+\mathcal{O}(\alpha^{2})}=\alpha+ \mathcal{O}(\alpha^{3}).\]
The expansions for \(a_{\alpha,1}\) and \(a_{\alpha,2}\) follow directly from Equation (25).
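The expansion of \(a_{\alpha,0}\) can also be checked numerically with high-precision arithmetic; the following sketch evaluates the ratio \((a_{\alpha,0}-\alpha)/\alpha^{3}\), which stays bounded as \(\alpha\to 0\) in agreement with Lemma 4.7.

```python
# Numerical check of a_{alpha,0} = alpha + O(alpha^3) from Lemma 4.7:
# the ratio (a_{alpha,0} - alpha) / alpha^3 stays bounded as alpha -> 0.
from mpmath import mp

mp.dps = 50  # high precision to survive the cancellation in a0 - alpha

def a0(alpha):
    return (2 * mp.gamma(2 * alpha) * mp.cos(mp.pi * alpha)
            / (mp.gamma(alpha) ** 2 * mp.cos(mp.pi * alpha / 2) ** 2))

for k in range(2, 6):
    alpha = -mp.mpf(10) ** (-k)
    print(k, (a0(alpha) - alpha) / alpha ** 3)
```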
Using this we can get information about the asymptotic behavior of \(u_{\alpha}\) and \(\mathcal{H}^{\alpha}[u_{\alpha}]\).
**Lemma 4.8**.: _For \(x\in(0,\pi]\) we have the following expansions at \(\alpha=0\):_
\[u_{\alpha}(x) =1+b(x)\alpha+\mathcal{O}(\alpha^{2}),\] \[\mathcal{H}^{\alpha}[u_{\alpha}](x) =-\frac{1}{2}-b(x)\alpha+\mathcal{O}(\alpha^{2}),\]
_with_
\[b(x)=C_{1}(x)-\gamma+a_{\alpha,1,1}\tilde{C}_{1+p_{0}}(x)+a_{\alpha,2,1} \tilde{C}_{1+2p_{0}}(x).\]
Proof.: For \(j\geq 1\) the Clausen functions are all finite at \(\alpha=0\) and from Lemma 4.7 we therefore get
\[a_{\alpha,1}\tilde{C}_{1-\alpha+p_{\alpha}}(x) =a_{\alpha,1,1}\tilde{C}_{1+p_{0}}(x)\alpha+\mathcal{O}(\alpha^{2 }),\] \[a_{\alpha,2}\tilde{C}_{1-\alpha+2p_{\alpha}}(x) =a_{\alpha,2,1}\tilde{C}_{1+2p_{0}}(x)\alpha+\mathcal{O}(\alpha^{ 2}),\] \[a_{\alpha,1}\tilde{C}_{1-2\alpha+p_{\alpha}}(x) =a_{\alpha,1,1}\tilde{C}_{1+p_{0}}(x)\alpha+\mathcal{O}(\alpha^{ 2}),\] \[a_{\alpha,2}\tilde{C}_{1-2\alpha+2p_{\alpha}}(x) =a_{\alpha,2,1}\tilde{C}_{1+2p_{0}}(x)\alpha+\mathcal{O}(\alpha^{ 2}).\]
For \(j=0\) the function \(\tilde{C}_{1-\alpha}(x)\) has a singularity at \(\alpha=0\). More precisely
\[\tilde{C}_{1-\alpha}(x)=C_{1-\alpha}(x)-\zeta(1-\alpha),\]
where for \(0<x<2\pi\) the factor \(C_{1-\alpha}(x)\) is finite, but \(\zeta(1-\alpha)\) has a singularity at \(\alpha=0\); similarly for \(\tilde{C}_{1-2\alpha}(x)\). After multiplying by \(a_{\alpha,0}\) the singularity is removable. To begin with we have
\[a_{\alpha,0}C_{1-\alpha}(x)=C_{1}(x)\alpha+\mathcal{O}(\alpha^{2}).\]
Using the Laurent series of the zeta function we have
\[\zeta(1-\alpha)=-\alpha^{-1}+\sum_{n=0}^{\infty}\frac{1}{n!}\gamma_{n}\alpha^ {n}=-\alpha^{-1}+\gamma+\mathcal{O}(\alpha),\]
where \(\gamma_{n}\) denotes the \(n\)-th Stieltjes constant and \(\gamma=\gamma_{0}\) is Euler's constant, from which we get
\[a_{\alpha,0}\zeta(1-\alpha)=(\alpha+\mathcal{O}(\alpha^{3}))(-\alpha^{-1}+ \gamma+\mathcal{O}(\alpha))=-1+\gamma\alpha+\mathcal{O}(\alpha^{2}).\]
Giving us
\[a_{\alpha,0}\tilde{C}_{1-\alpha}(x)=1+(C_{1}(x)-\gamma)\alpha+\mathcal{O}( \alpha^{2}).\]
In the same way we get
\[a_{\alpha,0}\tilde{C}_{1-2\alpha}(x)=\frac{1}{2}+(C_{1}(x)-\gamma)\alpha+ \mathcal{O}(\alpha^{2}).\]
Combining all the above gives us
\[u_{\alpha}(x)=1+(C_{1}(x)-\gamma+a_{\alpha,1,1}\tilde{C}_{1+p_{0}}(x)+a_{\alpha,2,1}\tilde{C}_{1+2p_{0}}(x))\alpha+\mathcal{O}(\alpha^{2})\]
and
\[\mathcal{H}^{\alpha}[u_{\alpha}](x)=-\frac{1}{2}-(C_{1}(x)-\gamma+a_{\alpha,1,1}\tilde{C}_{1+p_{0}}(x)+a_{\alpha,2,1}\tilde{C}_{1+2p_{0}}(x))\alpha+\mathcal{O}(\alpha^{2}),\]
from which the result follows.
### Hybrid cases
While it is technically possible to use the construction from Section 4.1 to handle the entire interval \(I_{2}\), it is not computationally feasible. As \(\alpha\) gets close to either \(-1\) or \(0\) the bounds that need to be computed become harder and harder to handle.
For \(\alpha\) near \(0\) it is mainly bounding \(\delta_{\alpha}\) that becomes hard to deal with. It goes to zero as \(\alpha\to 0\), which means that absolute errors in the bound that are negligible when \(\delta_{\alpha}\) is larger quickly grow to become very large relative errors.
Near \(\alpha=-1\) it is mainly the behavior of \(F_{\alpha}(x)\) and \(\mathcal{T}_{\alpha}(x)\) (from Equation (31)) around \(x=0\) that become problematic. This is related to the powers for the terms \(\nu_{\alpha}|x|^{-\alpha}\) and \(\mathcal{O}(|x|^{p})\) in Theorem 1.1 getting closer and closer to each other, giving us less space to work with. Another issue is, as mentioned in Section 4.2, that the two Clausen functions \(a_{\alpha,0}\tilde{C}_{1-\alpha}\) and \(a_{\alpha,1}\tilde{C}_{1-\alpha+p_{\alpha}}\) grow quickly in opposite directions as \(\alpha\) gets closer to \(-1\) and as a result there are large cancellations to handle.
For \(\alpha\) near \(0\) we use the approximation from Equation (23), but instead of using expansions centered at \(\alpha=0\) we use expansions centered at \(\alpha\) (or the midpoint, when computing with intervals). Compared to the case \(\alpha=0\) there are no removable singularities to handle, which makes the process simpler.
For \(\alpha\) near \(-1\) we use the ideas from Section 4.2 for handling \(\alpha\in I_{1}\). Instead of using
\[u_{\alpha}(x)=\sum_{j=0}^{N_{\alpha,0}}a_{\alpha,j}\tilde{C}_{1-\alpha+jp_{ \alpha}}(x)+\sum_{n=1}^{N_{\alpha,1}}b_{\alpha,n}(\cos(nx)-1),\]
from Equation (12) we modify it in the same spirit as Equation (20). More precisely we use
\[u_{\alpha}(x)=a_{\alpha,0}(\tilde{C}_{1-\alpha}(x)-\tilde{C}_{1-\alpha+p_{ \alpha}}(x))+\hat{a}_{\alpha,1}\tilde{C}_{1-\alpha+p_{\alpha}}(x)+\sum_{j=2}^ {N_{\alpha,0}}a_{\alpha,j}\tilde{C}_{1-\alpha+jp_{\alpha}}(x)+\sum_{n=1}^{N_{ \alpha,1}}b_{\alpha,n}(\cos(nx)-1).\]
If we take \(\hat{a}_{\alpha,1}=a_{\alpha,0}+a_{\alpha,1}\) this is equivalent to Equation (20). The benefit with this approach is that we don't have to use an enclosure of \(a_{\alpha,0}\) in \(\hat{a}_{\alpha,1}\), but can take a numerical approximation instead.
In both cases we make adjustments to how the bounds are computed, the details are given in the relevant sections.
## 5 Choice of \(w_{\alpha}\)
In the above section we discussed how to construct the approximation \(u_{\alpha}\) for the ansatz (7). We here discuss the choice of the weight \(w_{\alpha}\) in the ansatz. To get the statement in Theorem 1.1 we need \(w_{\alpha}(x)=\mathcal{O}(|x|^{p})\) for some \(p\) satisfying \(-\alpha<p\leq 1\). The natural choice is \(w_{\alpha}(x)=|x|\); however, this turns out not to work for all values of \(\alpha\).
The main obstruction when choosing \(w_{\alpha}\) is its effect on the value of \(D_{\alpha}\). For Proposition 2.2 to apply we require that \(D_{\alpha}<1\); if we take \(w_{\alpha}(x)=|x|\) this is not always the case. Using the procedure described in Section 11, in particular Lemma 11.7, we can compute a lower bound of \(D_{\alpha}\) by computing \(\mathcal{T}_{\alpha}(0)\), introduced in (31) and (30). A plot of \(\mathcal{T}_{\alpha}(0)\) as a function of \(\alpha\), using \(w_{\alpha}(x)=|x|\), is given in Figure 2. From this plot we can see that with the weight \(w_{\alpha}(x)=|x|\) we have \(D_{\alpha}>1\) for \(\alpha\) between \(-1\) and some value slightly smaller than \(-0.5\), in which case Proposition 2.2 won't apply.
By taking \(w_{\alpha}(x)=|x|^{p}\) with \(-\alpha<p<1\) the value of \(D_{\alpha}\) can be made smaller than \(1\), so that Proposition 2.2 applies. The precise value of \(p\) to use depends on \(\alpha\) and is made more precise in Section 12.2.
The limits \(\alpha\to-1\) and \(\alpha\to 0\) both require extra attention. For \(\alpha\to 0\) with the weight \(w_{\alpha}(x)=|x|\) the value of \(D_{\alpha}\) tends to \(1\). However, in this case \(\delta_{\alpha}\) tends to \(0\) and by controlling the rate of convergence of \(D_{\alpha}\) and \(\delta_{\alpha}\) it is still possible to apply Proposition 2.2, see Section 12.3 for more details.
For \(\alpha\to-1\) the situation is more complicated. It is not possible to use the weight \(w_{\alpha}(x)=|x|\) because \(D_{\alpha}\) would be strictly greater than \(1\) close to \(\alpha=-1\). It is also not possible to use the weight \(w_{\alpha}(x)=|x|^{p}\) with \(p\) satisfying \(-\alpha<p<1\) since in the limit we would need to have \(p\to 1\), and again we find that \(D_{\alpha}\) is greater than \(1\). A natural choice would be to add a log-factor, \(w_{\alpha}(x)=|x|\log(1+1/|x|)\). Lengthy calculations however show that this would not work either. Instead, we have to take \(w_{\alpha}(x)\) to behave like \(|x|^{p}\log(1+1/|x|)\) with \(p<1\) and \(p\to 1\) as \(\alpha\to-1\). More precisely we take \(w_{\alpha}(x)=|x|^{(1-\alpha)/2}\log(2e+1/|x|)\); the constant \(2e\) does not change the asymptotic behavior close to \(x=0\) but does change the non-asymptotic behavior in a way that turns out to make it slightly easier to satisfy the required inequality in Proposition 2.2. That we have to take such a complicated weight, compared to just \(w_{\alpha}(x)=|x|^{p}\), makes the analysis of \(D_{\alpha}\) in Section 11 significantly harder.
For the hybrid cases we mimic what happens at the endpoints. Summarizing we have the following choice for \(w_{\alpha}\):
* For \(\alpha\in I_{1}\) we take \(w_{\alpha}(x)=|x|^{(1-\alpha)/2}\log(2e+1/|x|)\).
* For \(\alpha\in I_{2}\) we have three cases
* Near \(\alpha=-1\) we take \(w_{\alpha}(x)=|x|^{p}\log(2e+1/|x|)\) with \(-\alpha<p<1\).
* Near \(\alpha=0\) we take \(w_{\alpha}(x)=|x|\)
* In between we take \(w_{\alpha}(x)=|x|^{p}\) with \(-\alpha<p<1\). The precise choice of \(p\) as a function of \(\alpha\) is given in Section 12.2.
* For \(\alpha\in I_{3}\) we take \(w_{\alpha}(x)=|x|\).
## 6 Proof strategy
The main part of the proof of Theorem 1.1 is the computation of the bounds of \(n_{\alpha}\), \(\delta_{\alpha}\) and \(D_{\alpha}\) required to apply Proposition 2.2. Explicit bounds of these values are given in Section 12. In this section we describe the general procedure for computing these bounds. The details for the different cases are given in Sections 7 to 11.
The values we want to bound are given by
\[n_{\alpha}=\sup_{x\in[0,\pi]}|N_{\alpha}(x)|,\quad\delta_{\alpha}=\sup_{x\in [0,\pi]}|F_{\alpha}(x)|,\quad D_{\alpha}=\sup_{x\in[0,\pi]}|\mathcal{T}_{ \alpha}(x)|.\]
with \(N_{\alpha}\) and \(F_{\alpha}\) as in (10) and (9) in Section 2 and \(\mathcal{T}_{\alpha}\) as in (31) in Section 11. At \(x=0\) the functions all have removable singularities, so for small values of \(x\) they need extra care when being evaluated. For that reason the computation of the supremum is split into two parts, one on the interval \([0,\epsilon]\) and one on the interval \([\epsilon,\pi]\), for some \(\epsilon\) depending on \(\alpha\) and the function. On the interval \([0,\epsilon]\) we use a method of evaluation which takes into account the removable singularity.

Figure 2: Value of \(\mathcal{T}_{\alpha}(0)\), defined by (31), as a function of \(\alpha\) using the weight \(w_{\alpha}(x)=|x|\). This gives a lower bound of \(D_{\alpha}\) and since we need \(D_{\alpha}<1\) it means that this choice of weight does not work for all values of \(\alpha\).
To enclose the supremum on the intervals \([0,\epsilon]\) and \([\epsilon,\pi]\) we make use of methods from interval arithmetic and rigorous numerics. In this section we give a brief description of these methods and also discuss what we require of \(N_{\alpha}\), \(F_{\alpha}\) and \(\mathcal{T}_{\alpha}\) to be able to apply these methods.
### Enclosing the supremum
Consider the general problem of enclosing the maximum of \(f\) on some interval \(I\) to some predetermined tolerance. Assume that we can evaluate \(f\) using interval arithmetic, i.e. given any interval \(\mathbf{x}\subseteq I\) we can compute an interval enclosing the set \(f(\mathbf{x})=\{f(x):x\in\mathbf{x}\}\). The main idea is to iteratively bisect the interval \(I\) into smaller and smaller subintervals. At every iteration we compute an enclosure of \(f\) on each subinterval. From these enclosures a lower bound of the maximum can be computed. We then discard all subintervals for which the enclosure is less than the lower bound of the maximum, since the maximum cannot be attained there. For the remaining subintervals we check if their enclosure satisfies the required tolerance; in that case we don't bisect them further. If there are any subintervals left we bisect them and continue with the next iteration. In the end, either when there are no subintervals left to bisect or when we have reached some maximum number of iterations (to guarantee that the procedure terminates), we return the enclosure of the maximum over all subintervals that were not discarded. This is guaranteed to give an enclosure of the maximum of \(f\) on the interval.
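The following Python sketch illustrates this bisection procedure, using mpmath's interval type in place of Arb; it is a simplified illustration rather than the code used for the proof:

```python
from mpmath import iv

def enclose_maximum(f, a, b, tol=1e-4, max_iterations=40):
    # f must be an interval extension: given an interval x it returns an
    # interval enclosing {f(t) : t in x}, e.g. lambda x: iv.sin(x).
    work = [(iv.mpf([a, b]), f(iv.mpf([a, b])))]
    done = []  # subintervals whose enclosure is already tighter than tol
    for _ in range(max_iterations):
        if not work:
            break
        # Rigorous lower bound for the maximum: the largest lower endpoint.
        lower = max(v.a for _, v in work + done)
        next_work = []
        for x, v in work:
            if v.b < lower:
                continue             # the maximum cannot be attained on x
            elif v.delta < tol:
                done.append((x, v))  # tight enough; stop bisecting x
            else:                    # bisect x at its midpoint
                m = x.mid
                for half in (iv.mpf([x.a, m]), iv.mpf([m, x.b])):
                    next_work.append((half, f(half)))
        work = next_work
    values = [v for _, v in work + done]
    return max(v.a for v in values), max(v.b for v in values)
```

For example, `enclose_maximum(lambda x: iv.sin(x), 0, 3)` returns rigorous lower and upper bounds for the maximum of \(\sin\) on \([0,3]\).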
If we are able to compute Taylor expansions of the function \(f\) we can improve the performance of this procedure significantly (see e.g. [17, 16] where a similar approach is used). Consider a subinterval \(I_{i}\), instead of computing an enclosure of \(f(I_{i})\) we compute a Taylor polynomial \(P\) at the midpoint and an enclosure \(R\) of the remainder term such that \(f(x)\in P(x)+R\) for \(x\in I_{i}\). We then have
\[\sup_{x\in I_{i}}f(x)\in\sup_{x\in I_{i}}P(x)+R. \tag{26}\]
To compute \(\sup_{x\in I_{i}}P(x)\) we isolate the roots of \(P^{\prime}\) on \(I_{i}\) and evaluate \(P\) at the roots as well as at the endpoints of the interval. In practice the computation of \(R\) involves computing an enclosure of the Taylor expansion of \(f\) on the full interval \(I_{i}\). Since this includes the derivative we can, as an extra optimization, check if the derivative is non-zero; in that case \(f\) is monotone and it is enough to evaluate \(f\) at either the left or the right endpoint of \(I_{i}\), depending on the sign of the derivative.
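As an illustration, the bound from (26) could be computed as follows; this sketch isolates the roots of \(P^{\prime}\) in plain floating point, whereas the rigorous version encloses them with interval methods:

```python
import numpy as np

def sup_taylor(coeffs, R_upper, lo, hi):
    # Bound sup f on [lo, hi] via (26): coeffs are the coefficients of the
    # Taylor polynomial P (lowest degree first) and R_upper is an upper
    # bound for the remainder enclosure R.
    P = np.polynomial.Polynomial(coeffs)
    # Candidate maximizers: real roots of P' in [lo, hi] plus the endpoints.
    candidates = [lo, hi] + [r.real for r in P.deriv().roots()
                             if abs(r.imag) < 1e-12 and lo <= r.real <= hi]
    return max(P(t) for t in candidates) + R_upper
```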
The above procedures can easily be adapted to instead compute the minimum of \(f\) on the interval; joining them together we can thus compute the extrema on the interval. In some cases we don't care about computing an enclosure of the maximum, but only need to prove that it is bounded by some value. Instead of using a tolerance we then discard any subintervals for which the enclosure of the maximum is less than the bound.
### Evaluation of \(N_{\alpha}\), \(F_{\alpha}\) and \(\mathcal{T}_{\alpha}\) on intervals
To be able to use the above method we need to be able to evaluate \(N_{\alpha}\), \(F_{\alpha}\) and \(\mathcal{T}_{\alpha}\) using interval arithmetic. That is, given some interval \(\mathbf{x}\subseteq[0,\pi]\) we need to be able to compute enclosures of \(N_{\alpha}(\mathbf{x})\), \(F_{\alpha}(\mathbf{x})\) and \(\mathcal{T}_{\alpha}(\mathbf{x})\) respectively.
In Sections 7 to 11 we discuss how to evaluate these functions for (non-interval) \(x\in[0,\pi]\). This then needs to be extended to also work for intervals \(\mathbf{x}\subseteq(0,\pi)\). In general this is straightforward and handled directly by just replacing the point \(x\in[0,\pi]\) with the interval \(\mathbf{x}\subseteq(0,\pi)\) in the computations; the operations on the interval are then done using Arb [40]. In some cases we can make use of monotonicity properties of the function. For example, when computing the root \(r_{\alpha,x}\) introduced in Lemma 11.2 for an interval \(\mathbf{x}=[\underline{x},\overline{x}]\) we can use that it is decreasing in \(x\), giving us \(r_{\alpha,\mathbf{x}}=[r_{\alpha,\overline{x}},r_{\alpha,\underline{x}}]\).
To cover all values of \(\alpha\) in \((-1,0)\) we need to use interval values also for \(\alpha\), and not only for \(x\). As for \(x\), this is in general done by just replacing \(\alpha\in(-1,0)\) with \(\boldsymbol{\alpha}\subseteq(-1,0)\) in the computations.
To be able to use the improved version of the method for enclosing the supremum discussed in Section 6.1 we need to be able to compute Taylor expansions of \(N_{\alpha}\), \(F_{\alpha}\) and \(\mathcal{T}_{\alpha}\). This is possible in many, but not all, cases. For \(x\in[\epsilon,\pi]\) it is in general straightforward for \(N_{\alpha}\) and \(F_{\alpha}\) (for example, the methods in Appendix C.2 can be used for computing Taylor expansions of the involved Clausen functions), but not for \(\mathcal{T}_{\alpha}\).
For \(x\in[0,\epsilon]\) the functions are in general expanded at \(x=0\), and one has to be slightly careful when computing Taylor expansions from this expansion. For example, for \(\frac{\tilde{C}_{1-\alpha}(x)}{x^{\alpha}}\) we have the expansion
\[\frac{\tilde{C}_{1-\alpha}(x)}{x^{\alpha}}=\Gamma(\alpha)\sin\Big{(}\frac{ \pi}{2}(1-\alpha)\Big{)}+\sum_{m=1}^{\infty}(-1)^{m}\zeta(1-\alpha-2m)\frac{x ^{2m+\alpha}}{(2m)!}.\]
When computing it we need to bound the tail. If we let \(R\) be an interval bounding the remainder term given in Lemma C.5 on the interval \([0,\epsilon]\) then for \(x\in[0,\epsilon]\) we have
\[\frac{\tilde{C}_{1-\alpha}(x)}{x^{\alpha}}\in\Gamma(\alpha)\sin\Big{(}\frac{ \pi}{2}(1-\alpha)\Big{)}+\sum_{m=1}^{M-1}(-1)^{m}\zeta(1-\alpha-2m)\frac{x^{2 m+\alpha}}{(2m)!}+Rx^{2M+\alpha}.\]
If we consider \(R\) to be fixed then it is straightforward to compute Taylor expansions in \(x\) from this representation. This will in general not give us a Taylor expansion of \(\frac{\tilde{C}_{1-\alpha}(x)}{x^{\alpha}}\), since we have lost the remainder term's dependence on \(x\). When enclosing the supremum this is, however, not an issue: we can apply the method to
\[\Gamma(\alpha)\sin\Big{(}\frac{\pi}{2}(1-\alpha)\Big{)}+\sum_{m=1}^{M-1}(-1)^ {m}\zeta(1-\alpha-2m)\frac{x^{2m+\alpha}}{(2m)!}+Rx^{2M+\alpha}\]
directly.
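For illustration, evaluating this truncated-plus-tail representation numerically could look as follows (plain mpmath floating point, with `R_upper` standing for an upper bound of the remainder enclosure from Lemma C.5; the proof evaluates the same expression in Arb):

```python
from mpmath import gamma, sin, pi, zeta, factorial

def clausen_ratio_bound(alpha, x, M, R_upper):
    # Constant term plus the first M - 1 terms of the expansion of
    # C~_{1-alpha}(x) / x**alpha, with the tail replaced by its bound.
    value = gamma(alpha) * sin(pi / 2 * (1 - alpha))
    for m in range(1, M):
        value += ((-1)**m * zeta(1 - alpha - 2*m)
                  * x**(2*m + alpha) / factorial(2*m))
    return value + R_upper * x**(2*M + alpha)
```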
## 7 Evaluation of \(u_{\alpha}\) and \(\mathcal{H}^{\alpha}[u_{\alpha}]\)
To compute bounds for \(D_{\alpha}\), \(\delta_{\alpha}\) and \(n_{\alpha}\) we need to be able to compute enclosures of \(u_{\alpha}(x)\) and \(\mathcal{H}^{\alpha}[u_{\alpha}](x)\). In this section we describe how this is done for the approximations used on the intervals \(I_{1}\), \(I_{2}\) and \(I_{3}\) respectively.
### Evaluation of \(u_{\alpha}\) and \(\mathcal{H}^{\alpha}[u_{\alpha}]\) for \(I_{2}\)
In this case the expressions for \(u_{\alpha}\) and \(\mathcal{H}^{\alpha}[u_{\alpha}]\) are given by (12) and (14). The parameter \(a_{\alpha,0}\) is given by (13), which is easily enclosed for a given \(\alpha\), while \(p_{\alpha}\), \(a_{\alpha,j}\) for \(j\geq 1\) and \(b_{\alpha,n}\) are fixed, numerically determined numbers. Evaluation is straightforward, using the approach in Appendix C to enclose the Clausen functions.
### Evaluation of \(u_{\alpha}\) and \(\mathcal{H}^{\alpha}[u_{\alpha}]\) for \(I_{1}\)
In this case the expressions for \(u_{\alpha}\) and \(\mathcal{H}^{\alpha}[u_{\alpha}]\) are given by (20) and (22). Most of the parameters are fixed, numerically determined numbers; these parts of the functions are straightforward to enclose. The only problematic parts are the leading terms,
\[a_{\alpha,0}(\tilde{C}_{1-\alpha}(x)-\tilde{C}_{2+(1+\alpha)^{2}/2}(x))\text{ and }-a_{\alpha,0}(\tilde{C}_{1-2\alpha}(x)-\tilde{C}_{2-\alpha+(1+\alpha)^{2}/2}(x)),\]
which both have removable singularities at \(\alpha=-1\). They are handled by rewriting them as
\[a_{\alpha,0}(\tilde{C}_{1-\alpha}(x)-\tilde{C}_{2+(1+\alpha)^{2}/2}(x))=(( \alpha+1)a_{\alpha,0})\cdot\frac{\tilde{C}_{1-\alpha}(x)-\tilde{C}_{2+(1+ \alpha)^{2}/2}(x)}{\alpha+1},\]
as well as
\[-a_{\alpha,0}(\tilde{C}_{1-2\alpha}(x)-\tilde{C}_{2-\alpha+(1+\alpha)^{2}/2}( x))=-((\alpha+1)a_{\alpha,0})\cdot\frac{\tilde{C}_{1-2\alpha}(x)-\tilde{C}_{2- \alpha+(1+\alpha)^{2}/2}(x)}{\alpha+1}\]
and using the approach in Appendix A to handle the removable singularities.
### Evaluation of \(u_{\alpha}\) and \(\mathcal{H}^{\alpha}[u_{\alpha}]\) for \(I_{3}\)
In this case the expressions for \(u_{\alpha}\) and \(\mathcal{H}^{\alpha}[u_{\alpha}]\) are, as for \(I_{2}\), given by (23) and (24). The parameters \(p_{\alpha}\) and \(a_{\alpha,j}\) are not fixed numbers, but all depend on \(\alpha\). Moreover, it is not enough to just compute enclosures of \(u_{\alpha}(x)\) and \(\mathcal{H}^{\alpha}[u_{\alpha}]\); we need their expansions in \(\alpha\). For this we make use of Taylor models [44], of which we give a brief introduction in Appendix B, centered at \(\alpha=0\). For \(a_{\alpha,0}\), \(a_{\alpha,1}\) and \(a_{\alpha,2}\) we have explicit formulas in (13) and (25) and Taylor models can be computed directly from them; there are several removable singularities, which are handled as described in Appendix A. The value of \(p_{\alpha}\) is given implicitly as a function of \(\alpha\) by (17), from which we can compute a Taylor model using the approach described in Appendix B.2. Once these Taylor models are computed it is straightforward to compute Taylor models of (23) and (24).
### Evaluation of \(u_{\alpha}\) and \(\mathcal{H}^{\alpha}[u_{\alpha}]\) in hybrid cases
For \(\alpha\) near \(-1\) the only part that needs special care is the evaluation of \(\tilde{C}_{1-\alpha}(x)-\tilde{C}_{1-\alpha+p_{\alpha}}(x)\) for \(u_{\alpha}\) and \(\tilde{C}_{1-2\alpha}(x)-\tilde{C}_{1-2\alpha+p_{\alpha}}(x)\) for \(\mathcal{H}^{\alpha}[u_{\alpha}]\), in both cases due to the large cancellations between the two terms. For interval arguments \(\mathbf{x}\) and \(\mathbf{s}\) with midpoints \(x_{0}\) and \(s_{0}\) respectively we compute \(\tilde{C}_{\mathbf{s}}(\mathbf{x})-\tilde{C}_{\mathbf{s}+p_{\alpha}}(\mathbf{ x})\) using a midpoint approximation as
\[\tilde{C}_{\mathbf{s}}(\mathbf{x})-\tilde{C}_{\mathbf{s}+p_{\alpha}}(\mathbf{ x})=(\tilde{C}_{s_{0}}(x_{0})-\tilde{C}_{s_{0}+p_{\alpha}}(x_{0}))+(\mathbf{s}-s_{0}) (\tilde{C}_{\mathbf{s}}^{(1)}(x_{0})-\tilde{C}_{\mathbf{s}+p_{\alpha}}^{(1)}( x_{0}))-(\mathbf{x}-x_{0})(\tilde{S}_{\mathbf{s}-1}(\mathbf{x})-\tilde{S}_{ \mathbf{s}+p_{\alpha}-1}(\mathbf{x})).\]
For \(\alpha\) near \(0\) we follow exactly the same approach as for \(\alpha\in I_{3}\), the only difference being that the Taylor models are centered at \(\alpha\) (or the midpoint of \(\boldsymbol{\alpha}\) in the case of intervals) instead of at \(0\).
## 8 Division by \(u_{\alpha}\)
The computations of the values \(n_{\alpha}\), \(\delta_{\alpha}\) and \(D_{\alpha}\) all involve functions given by fractions where the denominator contains \(u_{\alpha}\). For \(x>0\) this is not an issue since the denominators are non-zero; we can enclose the numerator and denominator separately and then perform the division. For \(x=0\) we get a removable singularity, and to handle that we need to understand the asymptotic behavior of \(u_{\alpha}(x)\) at \(x=0\). For this we start off with the following, straightforward, lemma.
**Lemma 8.1**.: _For \(-1<\alpha<0\) the function \(\frac{|x|^{-\alpha}}{u_{\alpha}(x)}\) is non-zero and bounded at \(x=0\)._
Proof.: From the asymptotic expansion of \(u_{\alpha}(x)\) at \(x=0\) we have
\[u_{\alpha}(x)=a_{\alpha,0}\Gamma(\alpha)\cos\left(\frac{\pi}{2}\alpha\right)|x |^{-\alpha}+o(|x|^{-\alpha}).\]
This gives us
\[\frac{|x|^{-\alpha}}{u_{\alpha}(x)}=\frac{1}{a_{\alpha,0}\Gamma(\alpha)\cos \left(\frac{\pi}{2}\alpha\right)+o(1)}.\]
This is non-zero and bounded as long as
\[a_{\alpha,0}\Gamma(\alpha)\cos\left(\frac{\pi}{2}\alpha\right)\]
is non-zero and bounded. We get
\[a_{\alpha,0}\Gamma(\alpha)\cos\left(\frac{\pi}{2}\alpha\right)=\frac{2\Gamma (2\alpha)\cos\left(\pi\alpha\right)}{\Gamma(\alpha)\cos\left(\frac{\pi}{2} \alpha\right)}\]
The denominator is easily seen to be non-zero and bounded for \(\alpha\in(-1,0)\). For the numerator we can rewrite it as
\[2\Gamma(2\alpha)\cos\left(\pi\alpha\right)=\frac{2\Gamma(2\alpha+2)\cos\left( \pi\alpha\right)}{2\alpha(2\alpha+1)}=\frac{\Gamma(2\alpha+2)}{\alpha}\frac{ \sin\left(\frac{\pi}{2}(2\alpha+1)\right)}{(2\alpha+1)},\]
which is bounded and non-zero.
Computing \(\frac{|x|^{-\alpha}}{u_{\alpha}(x)}\) near \(x=0\) is straightforward: compute the expansion of \(u_{\alpha}(x)\) at \(x=0\) and subtract \(-\alpha\) from the exponents of all terms in the expansion, which gives an expansion where all exponents are non-negative; evaluate this expansion at \(x\) and then compute the inverse.
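Schematically, with a hypothetical list-of-terms representation of the expansion, the recipe reads:

```python
def x_pow_neg_alpha_over_u(expansion, alpha, x):
    # expansion: list of (c, e) pairs with u_alpha(x) = sum of c * x**e
    # near x = 0, the leading exponent being -alpha. Subtracting -alpha
    # from every exponent, i.e. adding alpha, gives u_alpha(x) * x**alpha
    # with only non-negative exponents, which we evaluate and invert.
    shifted = sum(c * x**(e + alpha) for c, e in expansion)
    return 1 / shifted  # equals x**(-alpha) / u_alpha(x)
```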
While Lemma 8.1 is valid for the full interval \((-1,0)\), the bound is not uniform in \(\alpha\). For \(\alpha\to 0\) it converges to \(1\) and we are fine, but for \(\alpha\to-1\) the value of \(\frac{x^{-\alpha}}{u_{\alpha}(x)}\) goes to zero, and we need an understanding of the rate at which it does so. For this we use the following modified version of the lemma.
**Lemma 8.2**.: _For \(\alpha\in I_{1}\) the function_
\[\frac{\Gamma(1+\alpha)|x|^{-\alpha}(1-|x|^{1+\alpha+(1+\alpha)^{2}/2})}{u_{ \alpha}(x)}\]
_is non-zero and uniformly bounded in \(\alpha\) at \(x=0\)._
Proof.: Since the function is even with respect to \(x\) we can take \(x\geq 0\).
From Lemma 4.6 we have that the leading term in the asymptotic expansion at \(x=0\) for \(u_{\alpha}\) is given by
\[a_{\alpha,0}\left(\Gamma(\alpha)\cos\left(\frac{\pi}{2}\alpha\right)-\Gamma(- 1-(1+\alpha)^{2}/2)\cos\left(\frac{\pi}{2}(1+(1+\alpha)^{2}/2)\right)x^{1+ \alpha+(1+\alpha)^{2}/2}\right)x^{-\alpha}.\]
We therefore study the asymptotic behavior of
\[\frac{\Gamma(1+\alpha)x^{-\alpha}(1-x^{1+\alpha+(1+\alpha)^{2}/2})}{a_{\alpha,0}\left(\Gamma(\alpha)\cos\left(\frac{\pi}{2}\alpha\right)-\Gamma(-1-(1+ \alpha)^{2}/2)\cos\left(\frac{\pi}{2}(1+(1+\alpha)^{2}/2)\right)x^{1+\alpha+( 1+\alpha)^{2}/2}\right)x^{-\alpha}}. \tag{27}\]
We can cancel the \(x^{-\alpha}\) factor and split it as
\[\frac{\Gamma(1+\alpha)}{a_{\alpha,0}}\frac{1-x^{1+\alpha+(1+\alpha)^{2}/2}}{ \Gamma(\alpha)\cos\left(\frac{\pi}{2}\alpha\right)-\Gamma(-1-(1+\alpha)^{2}/2) \cos\left(\frac{\pi}{2}(1+(1+\alpha)^{2}/2)\right)x^{1+\alpha+(1+\alpha)^{2}/ 2}}.\]
From the definition of \(a_{\alpha,0}\) as given in Equation (13) we get that the first factor is given by
\[\frac{\Gamma(1+\alpha)\Gamma(\alpha)^{2}\cos\left(\frac{\pi}{2}\alpha\right)^ {2}}{2\Gamma(2\alpha)\cos\left(\pi\alpha\right)}=\frac{\alpha\Gamma(\alpha)^ {3}\cos\left(\frac{\pi}{2}\alpha\right)^{2}}{2\Gamma(2\alpha)\cos\left(\pi \alpha\right)}.\]
This has a removable singularity at \(\alpha=-1\). It is then easily checked to be non-zero.
For the second factor we let \(c(\alpha)=\Gamma(\alpha)\cos\left(\frac{\pi}{2}\alpha\right)\) and \(\tilde{p}_{\alpha}=1+\alpha+(1+\alpha)^{2}/2\), allowing us to write it as
\[\frac{1-x^{\tilde{p}_{\alpha}}}{c(\alpha)-c(\alpha-\tilde{p}_{\alpha})x^{ \tilde{p}_{\alpha}}}=\frac{1}{c(\alpha)}\frac{1-x^{\tilde{p}_{\alpha}}}{1- \frac{c(\alpha-\tilde{p}_{\alpha})}{c(\alpha)}x^{\tilde{p}_{\alpha}}}\]
Here \(c(\alpha)\) has a removable singularity, which can be handled in the same way as the one for \(\frac{\Gamma(1+\alpha)}{a_{\alpha,0}}\). Again it can easily be checked to be non-zero, so that \(\frac{1}{c(\alpha)}\) is bounded. What remains to handle is thus
\[\frac{1-x^{\tilde{p}_{\alpha}}}{1-\frac{c(\alpha-\tilde{p}_{\alpha})}{c( \alpha)}x^{\tilde{p}_{\alpha}}}.\]
Computing an enclosure of the derivative of \(\frac{c(\alpha-\tilde{p}_{\alpha})}{c(\alpha)}\) allows us to check that it is negative. This together with the fact that \(\lim_{\alpha\to-1}\frac{c(\alpha-\tilde{p}_{\alpha})}{c(\alpha)}=1\) means that an upper bound is given by
\[\frac{1-x^{\tilde{p}_{\alpha}}}{1-x^{\tilde{p}_{\alpha}}}=1.\]
For a lower bound we note that since \(\frac{c(\alpha-\tilde{p}_{\alpha})}{c(\alpha)}<1\) the quotient is decreasing in \(x\) as long as \(x<1\). It is therefore enough to check the value at some \(0<x_{0}<1\) to get a lower bound. For this fixed \(x_{0}>0\) we can handle the removable singularity of
\[\frac{1-x_{0}^{\tilde{p}_{\alpha}}}{1-\frac{c(\alpha-\tilde{p}_{\alpha})}{c(\alpha)}x_{0}^{\tilde{p}_{\alpha}}}\]
at \(\alpha=-1\) to compute an enclosure of the lower bound.
The proof of this lemma also tells us how to compute an upper bound of
\[\frac{\Gamma(1+\alpha)|x|^{-\alpha}(1-|x|^{1+\alpha+(1+\alpha)^{2}/2})}{u_{ \alpha}(x)}.\]
It is enough to check that the terms in the asymptotic expansion of \(u_{\alpha}\) that we skipped are positive; in that case Equation (27) gives an upper bound. For \(x\) not overlapping zero we instead split it as
\[\left(\Gamma(1+\alpha)(1-|x|^{1+\alpha+(1+\alpha)^{2}/2})\right)\cdot\frac{| x|^{-\alpha}}{u_{\alpha}(x)}\]
and handle the removable singularity for the first factor and enclose the second factor directly using the asymptotic expansion.
## 9 Evaluation of \(N_{\alpha}\)
Recall that
\[n_{\alpha}:=\|N_{\alpha}\|_{L^{\infty}(\mathbb{T})}=\sup_{x\in[0,\pi]}|N_{ \alpha}(x)|\]
where
\[N_{\alpha}(x)=\frac{w_{\alpha}(x)}{2u_{\alpha}(x)}.\]
In this section we discuss how to evaluate \(N_{\alpha}\). The interval \([0,\pi]\) is split into two subintervals, \([0,\epsilon]\) and \([\epsilon,\pi]\). On the interval \([0,\epsilon]\) we make use of asymptotic expansions, on \([\epsilon,\pi]\) the function is evaluated directly. The value of \(\epsilon\) depends on \(\alpha\) and is discussed more in Section 12.
We start by describing the procedure for evaluation for \(\alpha\in I_{2}\) and then discuss the alterations required to handle \(I_{1}\) and \(I_{3}\).
### Evaluation of \(N_{\alpha}\) for \(I_{2}\)
For \(x\in[\epsilon,\pi]\) the functions \(w_{\alpha}(x)\) and \(u_{\alpha}(x)\), as well as the resulting quotient, are straightforward to evaluate.
For \(x\in[0,\epsilon]\) we use that the weight is given by \(w_{\alpha}(x)=x^{p}\) and write the function as
\[N_{\alpha}(x)=\frac{1}{2}x^{p+\alpha}\cdot\frac{x^{-\alpha}}{u_{\alpha}(x)}.\]
The first factor is easily enclosed and we use Lemma 8.1 for enclosing the second factor.
### Evaluation of \(N_{\alpha}\) for \(I_{1}\)
For \(x\in[\epsilon,\pi]\) we make one optimization compared to \(I_{2}\): instead of computing an enclosure we only compute an upper bound. For this we use that \(u_{\alpha}(x)\geq u_{-1}(x)\) for \(\alpha\in I_{1}\) and hence
\[N_{\alpha}(x)\leq\frac{w_{\alpha}(x)}{2u_{-1}(x)},\]
as long as \(u_{-1}(x)\) is positive.
For \(x\in[0,\epsilon]\) we use that the weight is given by \(w_{\alpha}(x)=x^{(1-\alpha)/2}\log(2e+1/x)\) and write the function as
\[N_{\alpha}(x) =\frac{1}{2}\frac{w_{\alpha}(x)}{\Gamma(1+\alpha)x^{-\alpha}(1-x ^{1+\alpha+(1+\alpha)^{2}/2})}\cdot\frac{\Gamma(1+\alpha)x^{-\alpha}(1-x^{1+ \alpha+(1+\alpha)^{2}/2})}{u_{\alpha}(x)}\] \[=\frac{1}{\Gamma(2+\alpha)}\cdot\frac{\log(2e+1/x)}{2\log(1/x)} \cdot\frac{x^{(1+\alpha)/2}\log(1/x)(1+\alpha)}{1-x^{1+\alpha+(1+\alpha)^{2}/ 2}}\cdot\frac{\Gamma(1+\alpha)x^{-\alpha}(1-x^{1+\alpha+(1+\alpha)^{2}/2})}{u_ {\alpha}(x)}.\]
The first factor is well-behaved. The second factor can be enclosed using that it is increasing in \(x\) and that its limit as \(x\to 0\) is \(\frac{1}{2}\). The fourth factor can be enclosed using Lemma 8.2. For the third factor we get a bound from the following lemma.
**Lemma 9.1**.: _Let \(-1<\alpha<0\) and \(0<x<1\), then_
\[0<\frac{x^{(1+\alpha)/2}\log(1/x)(1+\alpha)}{1-x^{1+\alpha+(1+\alpha)^{2}/2}} <1.\]
Proof.: The lower bound follows directly from the fact that all the factors are positive. What remains is to prove the upper bound.
If we let \(s=(1+\alpha)\log x\) we can write the quotient as
\[\frac{-se^{s/2}}{1-e^{s(1+(1+\alpha)/2)}}.\]
From \(-1<\alpha<0\) and \(0<x<1\) we get that \(s<0\). We want to prove that
\[-se^{s/2}<1-e^{s(1+(1+\alpha)/2)},\]
which is equivalent to proving that
\[f(s)=1-e^{s(1+(1+\alpha)/2)}+se^{s/2}\]
is positive for \(s<0\). We have \(\lim_{s\to 0}f(s)=0\), and it is hence enough to prove that \(f\) is decreasing for \(s<0\). The derivative is given by
\[f^{\prime}(s)=-(1+(1+\alpha)/2)e^{s(1+(1+\alpha)/2)}+\left(1+\frac{s}{2} \right)e^{s/2}=e^{s/2}\left(1+\frac{s}{2}-(1+(1+\alpha)/2)e^{s(1+(1+\alpha))/ 2}\right)\]
so it is enough to prove that
\[g(s)=1+\frac{s}{2}-(1+(1+\alpha)/2)e^{s(1+(1+\alpha))/2}\]
is negative for \(s<0\). We have \(g(0)=-\frac{1+\alpha}{2}<0\) and \(\lim_{s\to-\infty}g(s)=-\infty\). The unique critical point with respect to \(s\) is
\[-2\frac{\log\left((1+(1+\alpha)/2)(1+(1+\alpha))\right)}{1+(1+\alpha)},\]
which is negative. Inserting this into \(g\) gives us
\[1-\frac{\log\left((1+(1+\alpha)/2)(1+(1+\alpha))\right)}{1+(1+ \alpha)}-(1+(1+\alpha)/2)e^{-\log\left((1+(1+\alpha)/2)(1+(1+\alpha))\right)} \\ =1-\frac{1+\log\left((1+(1+\alpha)/2)(1+(1+\alpha))\right)}{1+(1 +\alpha)}\\ =1-\frac{1+\log(1+(1+\alpha)/2)+\log(1+(1+\alpha))}{1+(1+\alpha)}\]
For this to be negative we need
\[\log(1+(1+\alpha)/2)+\log(1+(1+\alpha))>1+\alpha,\]
which holds for \(-1<\alpha<0\). It follows that \(g(s)\) is negative for \(s<0\) and hence \(f(s)\) is decreasing in \(s\) and from \(\lim_{s\to 0}f(s)=0\) we then get that \(f(s)\) is positive for \(s<0\) and the result follows.
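As a quick, non-rigorous sanity check of Lemma 9.1 (purely illustrative, not part of the proof), one can sample the quotient on a grid in \((\alpha,x)\):

```python
import math

def q(alpha, x):
    # The quotient from Lemma 9.1, for -1 < alpha < 0 and 0 < x < 1.
    return (x ** ((1 + alpha) / 2) * math.log(1 / x) * (1 + alpha)
            / (1 - x ** (1 + alpha + (1 + alpha) ** 2 / 2)))

assert all(0 < q(a / 100, x / 100) < 1
           for a in range(-99, 0) for x in range(1, 100))
```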
### Evaluation of \(N_{\alpha}\) for \(I_{3}\)
Compared to \(\delta_{\alpha}\) and \(D_{\alpha}\) we don't need to compute an expansion in \(\alpha\) of \(n_{\alpha}\) for \(I_{3}\); it is therefore enough to just compute enclosures of \(N_{\alpha}(x)\). The asymptotic expansions perform very well in this case and it is possible to take \(\epsilon=\pi\), so we only have to consider the interval \([0,\epsilon]\). The evaluation is done in the same way as for \(I_{2}\) (with \(p=1\)), by rewriting the function as
\[N_{\alpha}(x)=\frac{1}{2}x^{1+\alpha}\cdot\frac{x^{-\alpha}}{u_{\alpha}(x)}\]
and using Lemma 8.1.
### Evaluation of \(N_{\alpha}\) in hybrid cases
For \(\alpha\) near \(-1\) we don't have to do anything special and follow the same approach as in Section 9.1. For \(\alpha\) near \(0\) the approach is the same as for \(\alpha\in I_{3}\).
## 10 Evaluation of \(F_{\alpha}\)
Recall that
\[\delta_{\alpha}=\|F_{\alpha}\|_{L^{\infty}(\mathbb{T})}=\sup_{x\in[0,\pi]}|F_ {\alpha}(x)|,\]
where
\[F_{\alpha}(x)=\frac{\mathcal{H}^{\alpha}[u_{\alpha}](x)+\frac{1}{2}u_{\alpha} (x)^{2}}{w_{\alpha}(x)u_{\alpha}(x)}.\]
In this section we discuss how to evaluate \(F_{\alpha}\). The interval \([0,\pi]\) is split into two subintervals, \([0,\epsilon]\) and \([\epsilon,\pi]\). On the interval \([0,\epsilon]\) we make use of asymptotic expansions, on \([\epsilon,\pi]\) the function is evaluated directly. The value of \(\epsilon\) depends on \(\alpha\) and is discussed more in Section 12.
We start by describing the procedure for evaluation for \(\alpha\in I_{2}\) and then discuss the alterations required to handle \(I_{1}\) and \(I_{3}\).
### Evaluation of \(F_{\alpha}\) for \(I_{2}\)
For \(x\in[\epsilon,\pi]\) we evaluate each part separately. That is, we compute \(w_{\alpha}(x)\), \(u_{\alpha}(x)\) and \(\mathcal{H}^{\alpha}[u_{\alpha}](x)\); it is then straightforward to compute \(F_{\alpha}(x)\) from these values.
For \(x\in[0,\epsilon]\) we use that the weight is given by \(w_{\alpha}(x)=x^{p}\) and write the function as
\[F_{\alpha}(x)=\frac{x^{-\alpha}}{u_{\alpha}(x)}\cdot\frac{\mathcal{H}^{ \alpha}[u_{\alpha}](x)+\frac{1}{2}u_{\alpha}(x)^{2}}{x^{p-\alpha}}.\]
The first factor is enclosed using Lemma 8.1. For the second factor we use the expansion from Lemma 4.3 and explicitly cancel the \(x^{p-\alpha}\) from the denominator. Note that since \(-2\alpha<p-\alpha\) we need the leading term in the expansion to be identically zero, hence our choice of \(a_{\alpha,0}\). For the second term in the expansion we have \(p-\alpha<-2\alpha+p_{\alpha}\), so it is bounded at \(x=0\), as required.
### Evaluation of \(F_{\alpha}\) for \(I_{1}\)
For \(x\in[\epsilon,\pi]\) we make one optimization compared to \(I_{2}\): instead of computing an enclosure we only compute an upper bound. For this we use that \(u_{\alpha}(x)\geq u_{-1}(x)\) for \(\alpha\in I_{1}\) and hence
\[|F_{\alpha}(x)|\leq\left|\frac{\mathcal{H}^{\alpha}[u_{\alpha}](x)+\frac{1}{2} u_{\alpha}(x)^{2}}{w_{\alpha}(x)u_{-1}(x)}\right|,\]
as long as \(u_{-1}(x)\) is positive.
For \(x\in[0,\epsilon]\) we use that the weight is given by \(w_{\alpha}(x)=x^{(1-\alpha)/2}\log(2e+1/x)\) and write the function as
\[F_{\alpha}(x)=\frac{\log(1/x)}{\log(2e+1/x)}\cdot\frac{\Gamma(1+\alpha)x^{- \alpha}(1-x^{1+\alpha+(1+\alpha)^{2}/2})}{u_{\alpha}(x)}\cdot\frac{\mathcal{H }^{\alpha}[u_{\alpha}](x)+\frac{1}{2}u_{\alpha}(x)^{2}}{\Gamma(1+\alpha)\log( 1/x)(1-x^{1+\alpha+(1+\alpha)^{2}/2})x^{(1-\alpha)/2-\alpha}}.\]
The first two factors are handled in the same way as when handling \(N_{\alpha}\) in Section 9.2. For the third factor we use that \((1-\alpha)/2<1\) and hence, if we require \(\epsilon\leq 1\), an upper bound, in absolute value, is given by the absolute value of
\[\frac{\mathcal{H}^{\alpha}[u_{\alpha}](x)+\frac{1}{2}u_{\alpha}(x)^{2}}{ \Gamma(1+\alpha)\log(1/x)(1-x^{1+\alpha+(1+\alpha)^{2}/2})x^{1-\alpha}}. \tag{28}\]
A bound for this is given in Appendix E.
### Evaluation of \(F_{\alpha}\) for \(I_{3}\)
In this case we need to not only compute an enclosure of \(F_{\alpha}(x)\), but also understand its behavior in \(\alpha\). We therefore compute Taylor models of degree 1 centered at \(\alpha=0\).
We start with the following lemma that gives us information about the first two terms in the Taylor model.
**Lemma 10.1**.: _For \(x\in[0,\pi]\) the constant and linear terms in the expansion at \(\alpha=0\) of \(F_{\alpha}(x)\) are both zero._
Proof.: From Lemma 4.8 we get
\[F_{\alpha}(x)=\frac{\left(-\frac{1}{2}-b(x)\alpha+\mathcal{O}(\alpha^{2}) \right)+\frac{1}{2}\left(1+b(x)\alpha+\mathcal{O}(\alpha^{2})\right)^{2}}{x(1 +b(x)\alpha+\mathcal{O}(\alpha^{2}))}=\frac{\mathcal{O}(\alpha^{2})}{x(1+b(x )\alpha+\mathcal{O}(\alpha^{2}))}=\mathcal{O}(\alpha^{2}),\]
which gives us the result.
For \(x\in[\epsilon,\pi]\) we compute the Taylor models of \(u_{\alpha}(x)\) and \(\mathcal{H}^{\alpha}[u_{\alpha}](x)\) using the approach described in Section 7.3. For this case we have \(w_{\alpha}(x)=|x|\) and hence \(w_{\alpha}\) does not depend on \(\alpha\) and its corresponding Taylor model is just a constant.
For \(x\in[0,\epsilon]\) we, similarly to the case of \(I_{2}\), write the function as
\[F_{\alpha}(x)=\frac{x^{-\alpha}}{u_{\alpha}(x)}\cdot\frac{\mathcal{H}^{\alpha} [u_{\alpha}](x)+\frac{1}{2}u_{\alpha}(x)^{2}}{x^{1-\alpha}}\]
and compute Taylor models of the two factors independently, which are then multiplied. See Appendix B.1 for how to compute Taylor models of expansions in \(x\).
### Evaluation of \(F_{\alpha}\) in hybrid cases
Near \(\alpha=-1\) we don't have to do anything special for \(x\in[\epsilon,\pi]\); the approach is thus the same as described in Section 10.1. For \(x\in[0,\epsilon]\) it is in principle the same as well, but to get good enclosures slightly more care is required. The reason is that for \(\alpha\) near \(-1\) the factor
\[\frac{\mathcal{H}^{\alpha}[u_{\alpha}](x)+\frac{1}{2}u_{\alpha}(x)^{2}}{x^{p- \alpha}}\]
tends to zero very slowly and there are several terms in the asymptotic expansions between which there are large cancellations. The main adjustment is to more carefully handle the cancellations between the leading terms of \(\mathcal{H}^{\alpha}[u_{\alpha}](x)\) and \(\frac{1}{2}u_{\alpha}(x)^{2}\). We also make heavy use of Lemma C.3 when evaluating the expansion of \(\mathcal{H}^{\alpha}[u_{\alpha}](x)\).
Near \(\alpha=0\) the approach is similar to that used for \(\alpha\in I_{3}\). Instead of the Taylor models being centered at zero they are centered at \(\alpha\) (or at the midpoint of \(\boldsymbol{\alpha}\) in the case of intervals). Since we in the end don't need an expansion in \(\alpha\) we enclose the computed Taylor models over the interval, giving us an enclosure of \(F_{\alpha}(x)\).
## 11 Analysis of \(T_{\alpha}\)
In this section we give more details about the operator \(T_{\alpha}\) defined by
\[T_{\alpha}v=-\frac{1}{w_{\alpha}u_{\alpha}}\mathcal{H}^{\alpha}[w_{\alpha}v]\]
and discuss how to bound \(D_{\alpha}:=\|T_{\alpha}\|\).
The operator \(\mathcal{H}^{\alpha}\) is defined by
\[\mathcal{H}^{\alpha}[v](x)=|D|^{\alpha}[v](x)-|D|^{\alpha}[v](0).\]
For even functions \(v(x)\) and \(0<x<\pi\) it has the integral representation
\[\mathcal{H}^{\alpha}[v](x)=-\frac{1}{\pi}\int_{0}^{\pi}(C_{-\alpha}(x-y)+C_{- \alpha}(x+y)-2C_{-\alpha}(y))v(y)\ dy.\]
This gives us
\[T_{\alpha}[v](x)=\frac{1}{\pi w_{\alpha}(x)u_{\alpha}(x)}\int_{0}^{\pi}(C_{- \alpha}(x-y)+C_{-\alpha}(x+y)-2C_{-\alpha}(y))w_{\alpha}(y)v(y)\ dy.\]
Using the above expressions, it is standard to show that the norm of \(T_{\alpha}\) is given by
\[D_{\alpha}=\|T_{\alpha}\|=\sup_{0<x<\pi}\frac{1}{\pi|w_{\alpha}(x)||u_{\alpha }(x)|}\int_{0}^{\pi}|C_{-\alpha}(x-y)+C_{-\alpha}(x+y)-2C_{-\alpha}(y)||w_{ \alpha}(y)|\ dy. \tag{29}\]
Let
\[I_{\alpha}(x,y)=C_{-\alpha}(x-y)+C_{-\alpha}(x+y)-2C_{-\alpha}(y)\]
and
\[U_{\alpha}(x)=\int_{0}^{\pi}|I_{\alpha}(x,y)|w_{\alpha}(y)\ dy,\]
where we have removed the absolute value around \(w_{\alpha}\) since it is positive. We are then interested in computing
\[D_{\alpha}=\|T_{\alpha}\|=\sup_{0<x<\pi}\frac{U_{\alpha}(x)}{\pi|w_{\alpha}( x)||u_{\alpha}(x)|}. \tag{30}\]
We use the notation
\[\mathcal{T}_{\alpha}(x)=\frac{U_{\alpha}(x)}{\pi|w_{\alpha}(x)||u_{\alpha}(x)|}. \tag{31}\]
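Before turning to rigorous bounds, it may help to see (31) spelled out numerically. The following Python sketch evaluates \(\mathcal{T}_{\alpha}(x)\) by direct quadrature; it is non-rigorous, the function name is ours, and the value of \(u_{\alpha}(x)\) must be supplied separately, since its construction is spread over earlier sections:

```python
from mpmath import clcos, quad, pi

def T_alpha_at(alpha, x, w, u_alpha_x):
    # Evaluate T_alpha(x) as in (31), with C_s given by mpmath's Clausen
    # function clcos(s, .); w is the weight and u_alpha_x a value of
    # u_alpha(x). The proof instead uses the rigorous machinery below.
    C = lambda y: clcos(-alpha, abs(y))  # C_s is even, so abs is harmless
    I = lambda y: C(x - y) + C(x + y) - 2 * C(y)
    # Split the integral at the (integrable) singularity y = x.
    U = quad(lambda y: abs(I(y)) * w(y), [0, x, pi])
    return U / (pi * w(x) * abs(u_alpha_x))
```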
### Properties of \(U_{\alpha}\)
Before discussing how to evaluate \(\mathcal{T}_{\alpha}\) we give some general properties about \(U_{\alpha}\) that will be useful.
The integrand of \(U_{\alpha}(x)\) has a singularity at \(y=x\). It is therefore natural to split the integral into two parts
\[U_{\alpha,1}(x)=\int_{0}^{x}|I_{\alpha}(x,y)|w_{\alpha}(y)\ dy,\quad U_{ \alpha,2}(x)=\int_{x}^{\pi}|I_{\alpha}(x,y)|w_{\alpha}(y)\ dy.\]
In some cases it will be beneficial to make the change of variables \(y=tx\), giving us
\[U_{\alpha,1}(x)=x\int_{0}^{1}|\hat{I}_{\alpha}(x,t)|w_{\alpha}(tx)\ dt,\quad U _{\alpha,2}(x)=x\int_{1}^{\pi/x}|\hat{I}_{\alpha}(x,t)|w_{\alpha}(tx)\ dt,\]
where
\[\hat{I}_{\alpha}(x,t)=C_{-\alpha}(x(1-t))+C_{-\alpha}(x(1+t))-2C_{-\alpha}(xt).\]
The following lemmas give information about the sign of \(I_{\alpha}(x,y)\) and \(\hat{I}_{\alpha}(x,t)\), allowing us to remove the absolute value.
**Lemma 11.1**.: _For all \(\alpha\in(-1,0)\) the function_
\[(1-t)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1}\]
_is increasing and continuous in \(t\) for \(t\in(0,1)\) and has a unique root \(r_{\alpha,0}\). For \(x\in(0,\pi)\) and \(t\in(0,r_{\alpha,0})\) the function \(\hat{I}_{\alpha}(x,t)\) is increasing in \(x\)._
Proof.: Differentiating the function with respect to \(t\) gives us
\[(-\alpha-1)(-(1-t)^{-\alpha-2}+(1+t)^{-\alpha-2}-2t^{-\alpha-2}),\]
we want to prove that this is positive. Since \(-\alpha-1<0\) and \(t^{-\alpha-2}>0\) it is enough to prove that
\[-(1-t)^{-\alpha-2}+(1+t)^{-\alpha-2}<0,\]
which follows immediately from that \(-\alpha-2<0\) and \(1-t<1+t\). This proves that the function is increasing. Noticing that the limit \(t\to 0\) is negative and \(t\to 1\) is positive gives us the existence of a unique root \(r_{\alpha,0}\).
To get the monotonicity for \(\hat{I}_{\alpha}\) we differentiate with respect to \(x\),
\[\frac{d}{dx}\hat{I}_{\alpha}(x,t)=-(1-t)S_{-\alpha-1}(x(1-t))-(1+t)S_{-\alpha -1}(x(1+t))+2tS_{\alpha-1}(xt),\]
and expand,
\[\frac{d}{dx}\hat{I}_{\alpha}(x,t)=-\Gamma(2+\alpha)\cos\left(-\frac{\pi}{2}(\alpha+1)\right)x^{-\alpha-2}((1-t)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1})\\ -\sum_{m=0}^{\infty}(-1)^{m}\zeta(-\alpha-1-2m)\frac{x^{2m}}{(2m)!}((1-t)^{2m+1}+(1+t)^{2m+1}-2t^{2m+1}).\]
We have \(-\Gamma(2+\alpha)\cos\left(-\frac{\pi}{2}(\alpha+1)\right)x^{-\alpha-2}<0\) and since \(0<t<r_{\alpha,0}\) we have from the above that \((1-t)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1}<0\); the first term is hence positive. Due to the location of the zeros of the zeta function on the negative real axis the factor \((-1)^{m}\zeta(-\alpha-1-2m)\) is negative, and hence all terms in the sum are negative. Taking into account the minus sign in front of the sum, the second term is positive as well. It follows that \(\hat{I}_{\alpha}(x,t)\) is increasing in \(x\).
**Lemma 11.2**.: _For all \(\alpha\in(-1,0)\) and \(x\in(0,\pi)\) the function \(\hat{I}_{\alpha}(x,t)\) is increasing and continuous in \(t\) for \(t\in(0,1)\) and has the limits_
\[\lim_{t\to 0^{+}}\hat{I}_{\alpha}(x,t)=-\infty,\qquad\lim_{t\to 1^{-}}\hat{I}_{\alpha}(x,t)=\infty.\]
_Moreover, the unique root, \(r_{\alpha,x}\), in \(t\) is decreasing in \(x\) and satisfies the inequality_
\[\frac{1}{2}<r_{\alpha,x}<r_{\alpha,0},\]
_with \(r_{\alpha,0}\) as in Lemma 11.1._
Proof.: The continuity in \(t\) follows from that \(C_{\alpha}\) is continuous on the interval \((0,2\pi)\) and that the arguments all stay in this range. For the left limit we note that \(C_{-\alpha}(x(1-t))\) and \(C_{-\alpha}(x(1+t))\) both remain finite, while \(C_{-\alpha}(xt)\) diverges towards \(\infty\). For the right limit \(C_{-\alpha}(x(1+t))\) and \(C_{-\alpha}(xt)\) are finite while \(C_{-\alpha}(x(1-t))\) diverges towards \(\infty\).
To show that it is increasing in \(t\) we differentiate, giving us
\[\frac{d}{dt}\hat{I}_{\alpha}(x,t)=x(S_{-\alpha-1}(x(1-t))-S_{-\alpha-1}(x(1+t))+ 2S_{-\alpha-1}(xt)).\]
We want to prove that this is positive. Note that \(0<xt<\pi\) and from Lemma C.1 we have that \(S_{-\alpha-1}\) is positive on the interval \((0,\pi)\). Since \(x\) is also positive it thus suffices to check that
\[S_{-\alpha-1}(x(1-t))-S_{-\alpha-1}(x(1+t))>0,\]
for which it is enough to assert that \(S_{-\alpha-1}\) is decreasing on \((0,2\pi)\), which is the result of Lemma C.2. This proves the existence of a unique root \(r_{\alpha,x}\) on the interval \((0,1)\).
To prove that \(r_{\alpha,x}\) is decreasing in \(x\) we first prove that it is upper bounded by \(r_{\alpha,0}\). Expanding the Clausen functions gives us
\[\hat{I}_{\alpha}(x,t)=\Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha\right)x^{-\alpha-1}((1-t)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1})\\ +\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{x^{2m}}{(2m)!}((1-t)^{2m}+(1+t)^{2m}-2t^{2m}).\]
All terms in the sum are positive, due to the location of the roots of the zeta function on the negative real axis, and to have a root we must hence have
\[\Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha\right)x^{-\alpha-1}((1-t)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1})<0.\]
Since \(\Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha\right)x^{-\alpha-1}>0\) this means we must have
\[(1-t)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1}<0,\]
but from the previous lemma we know that this only holds for \(t<r_{\alpha,0}\); it follows that \(r_{\alpha,x}<r_{\alpha,0}\). The monotonicity in \(x\) now follows directly from the fact that \(\hat{I}_{\alpha}(x,t)\) is increasing in \(x\) for \(t\in(0,r_{\alpha,0})\). To get the lower bound \(\frac{1}{2}<r_{\alpha,x}\) it is enough to notice that
\[\hat{I}_{\alpha}(\pi,1/2)=C_{-\alpha}(\pi/2)+C_{-\alpha}(3\pi/2)-2C_{-\alpha}(\pi/2)=0,\]
since \(C_{-\alpha}(3\pi/2)=C_{-\alpha}(\pi/2)\) by the symmetry of the Clausen function around \(\pi\); together with the monotonicity in \(x\) this gives \(r_{\alpha,x}\geq r_{\alpha,\pi}=1/2\) for all \(x\in(0,\pi)\).
In practice the root \(r_{\alpha,x}\) is decreasing also in \(\alpha\). However, instead of proving this in the general case the following lemma can be used as an a posteriori check.
**Lemma 11.3**.: _Let \(\boldsymbol{\alpha}=[\underline{\alpha},\overline{\alpha}]\subseteq(-1,0)\) and \(x\in(0,\pi)\). If_
\[\frac{d}{d\alpha}\hat{I}_{\alpha}(x,r_{\underline{\alpha},x})>0\quad\text{for all}\quad\alpha\in\boldsymbol{\alpha},\]
_then \(r_{\alpha,x}\leq r_{\underline{\alpha},x}\) for all \(\alpha\in\boldsymbol{\alpha}\). Similarly, if \(\frac{d}{d\alpha}\hat{I}_{\alpha}(x,r_{\overline{\alpha},x})>0\) for all \(\alpha\in\boldsymbol{\alpha}\), then \(r_{\alpha,x}\geq r_{\overline{\alpha},x}\) for all \(\alpha\in\boldsymbol{\alpha}\)._
Proof.: We only prove the first statement, the second one is similar. By definition of \(r_{\underline{\alpha},x}\) we have \(\hat{I}_{\underline{\alpha}}(x,r_{\underline{\alpha},x})=0\), and hence if \(\frac{d}{d\alpha}\hat{I}_{\alpha}(x,r_{\underline{\alpha},x})>0\) for all \(\alpha\in\boldsymbol{\alpha}\) we have \(\hat{I}_{\alpha}(x,r_{\underline{\alpha},x})\geq 0\). Since \(\hat{I}_{\alpha}(x,t)\) is increasing in \(t\) this means that \(r_{\alpha,x}\leq r_{\underline{\alpha},x}\).
**Lemma 11.4**.: _For all \(\alpha\in(-1,0)\) and \(x\in(0,\pi)\) we have \(I_{\alpha}(x,y)>0\) for \(y\in(x,\pi)\) and \(\hat{I}_{\alpha}(x,t)>0\) for \(t\in(1,\pi/x)\)._
Proof.: The function \(C_{-\alpha}(y)\) is strictly convex for \(y\in(0,2\pi)\), this can be seen from the integral representation of \(\frac{d^{2}}{dy^{2}}C_{-\alpha}(y)=-C_{-\alpha-2}(y)\) given in the proof of Lemma C.2. It immediately follows that
\[I_{\alpha}(x,y)=C_{-\alpha}(y-x)+C_{-\alpha}(y+x)-2C_{-\alpha}(y)>0.\]
A simple change of variables gives the result for \(\hat{I}_{\alpha}(x,t)\).
With the help of Lemmas 11.2 and 11.4 we can slightly simplify \(U_{\alpha,1}\) and \(U_{\alpha,2}\); we get
\[U_{\alpha,1}(x)=-x\int_{0}^{r_{\alpha,x}}\hat{I}_{\alpha}(x,t)w_{\alpha}(tx)\ dt+x\int_{r_{\alpha,x}}^{1}\hat{I}_{\alpha}(x,t)w_{\alpha}(tx)\ dt=-U_{\alpha,1,1}(x)+U_{\alpha,1,2}(x)\]
and
\[U_{\alpha,2}(x)=\int_{x}^{\pi}I_{\alpha}(x,y)w_{\alpha}(y)\ dy.\]
In general the integrals cannot be computed explicitly; the exception is \(w_{\alpha}(x)=|x|\), for which we have the following lemma.
**Lemma 11.5**.: _For \(\alpha\in(-1,0)\), \(x\in(0,\pi)\) and \(w_{\alpha}(x)=|x|\) we have_
\[U_{\alpha}(x)=2\tilde{C}_{2-\alpha}(x)+2(C_{2-\alpha}(x+\pi)-C_ {2-\alpha}(\pi))\\ -2\left(C_{2-\alpha}(x(1-r_{\alpha,x}))+C_{2-\alpha}(x(1+r_{\alpha,x}))-2C_{2-\alpha}(xr_{\alpha,x})\right)\\ -2xr_{\alpha,x}\left(-S_{1-\alpha}(x(1-r_{\alpha,x}))+S_{1-\alpha }(x(1+r_{\alpha,x}))-2S_{1-\alpha}(xr_{\alpha,x})\right).\]
Proof.: For \(w_{\alpha}(x)=|x|\) we have
\[U_{\alpha}(x)=-U_{\alpha,1,1}(x)+U_{\alpha,1,2}(x)+U_{\alpha,2}(x)\]
with
\[U_{\alpha,1,1}(x)=x^{2}\int_{0}^{r_{\alpha,x}}(C_{-\alpha}(x(1-t ))+C_{-\alpha}(x(1+t))-2C_{-\alpha}(xt))t\ dt,\] \[U_{\alpha,1,2}(x)=x^{2}\int_{r_{\alpha,x}}^{1}(C_{-\alpha}(x(1-t ))+C_{-\alpha}(x(1+t))-2C_{-\alpha}(xt))t\ dt,\] \[U_{\alpha,2}(x)= x^{2}\int_{1}^{\pi/x}(C_{-\alpha}(x(1-t))+C_{-\alpha}(x(1+t))-2C_{- \alpha}(xt))t\ dt.\]
If we let
\[J_{\alpha}(x,t) =\int(C_{-\alpha}(x(1-t))+C_{-\alpha}(x(1+t))-2C_{-\alpha}(xt))t \ dt\] \[=\frac{1}{x^{2}}\left(C_{2-\alpha}(x(1-t))+C_{2-\alpha}(x(1+t))-2 C_{2-\alpha}(xt)\right)+\frac{t}{x}\left(-S_{1-\alpha}(x(1-t))+S_{1-\alpha}(x(1+t)) -2S_{1-\alpha}(xt)\right),\]
we get
\[U_{\alpha}(x) =-x^{2}(J_{\alpha}(x,r_{\alpha,x})-J_{\alpha}(x,0))+x^{2}(J_{\alpha}(x,1)-J_{\alpha}(x,r_{\alpha,x}))+x^{2}(J_{\alpha}(x,\pi/x)-J_{\alpha}(x,1))\] \[=x^{2}J_{\alpha}(x,0)-2x^{2}J_{\alpha}(x,r_{\alpha,x})+x^{2}J_{\alpha}(x,\pi/x),\]
where we have used that \(J_{\alpha}(x,1)\) is finite. Both \(J_{\alpha}(x,0)\) and \(J_{\alpha}(x,\pi/x)\) can be simplified further. We have
\[J_{\alpha}(x,0)=\frac{2}{x^{2}}\left(C_{2-\alpha}(x)-C_{2-\alpha}(0)\right)= \frac{2}{x^{2}}\tilde{C}_{2-\alpha}(x).\]
\[J_{\alpha}(x,\pi/x) =\frac{1}{x^{2}}\left(C_{2-\alpha}(x-\pi)+C_{2-\alpha}(x+\pi)-2C_{2- \alpha}(\pi)\right)+\frac{\pi}{x}\left(-S_{1-\alpha}(x-\pi)+S_{1-\alpha}(x+\pi )-2S_{1-\alpha}(\pi)\right)\] \[=\frac{2}{x^{2}}(C_{2-\alpha}(x+\pi)-C_{2-\alpha}(\pi)),\]
where we have used that \(C_{s}(x-\pi)=C_{s}(x+\pi)\), \(S_{s}(x-\pi)=S_{s}(x+\pi)\) and \(S_{s}(\pi)=0\). Putting all of this together gives us the result.
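For reference, the closed form in Lemma 11.5 transcribes directly into mpmath (non-rigorous floating point; the root \(r_{\alpha,x}\) must be computed separately):

```python
from mpmath import clcos, clsin, zeta, pi

def U_alpha_weight_abs_x(alpha, x, r):
    # Transcription of Lemma 11.5 with C_s(y) = clcos(s, y) and
    # S_s(y) = clsin(s, y); r is a value of the root r_{alpha,x}.
    s, t = 2 - alpha, 1 - alpha
    return (2 * (clcos(s, x) - zeta(s))  # 2 * C~_{2-alpha}(x)
            + 2 * (clcos(s, x + pi) - clcos(s, pi))
            - 2 * (clcos(s, x * (1 - r)) + clcos(s, x * (1 + r))
                   - 2 * clcos(s, x * r))
            - 2 * x * r * (-clsin(t, x * (1 - r)) + clsin(t, x * (1 + r))
                           - 2 * clsin(t, x * r)))
```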
### Evaluation of \(\mathcal{T}_{\alpha}\)
As in the previous sections we divide the interval \([0,\pi]\), on which we take the supremum, into two parts, \([0,\epsilon]\) and \([\epsilon,\pi]\).
#### 11.2.1 Evaluation of \(\mathcal{T}_{\alpha}\) for \(I_{2}\)
For the weight \(w_{\alpha}(x)=|x|\) evaluation is straightforward for \(x\in[\epsilon,\pi]\), using Lemma 11.5. For \(x\in[0,\epsilon]\) we write
\[\mathcal{T}_{\alpha}(x)=\frac{1}{\pi}\frac{x^{-\alpha}}{u_{\alpha}(x)}\cdot \frac{U_{\alpha}(x)}{x^{1-\alpha}}. \tag{32}\]
The first factor is enclosed using Lemma 8.1. For the second factor we use Lemma 11.5 and expand the Clausen functions at zero to be able to cancel the removable singularity. For both \(x\in[0,\epsilon]\) and \(x\in[\epsilon,\pi]\) we need to compute enclosures of the root \(r_{\alpha,x}\) of the integrand; this is done using standard interval methods for root enclosures, together with the monotonicity in \(x\) from Lemma 11.2 and the a posteriori check from Lemma 11.3. A sketch of such a root-enclosure routine is given below.
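The following Python sketch illustrates the kind of sign-bisection that encloses the unique root of an increasing function, given an interval extension of it; it is a standard technique, not the Arb-based routine used in practice:

```python
from mpmath import iv

def enclose_root(f, lo, hi, tol=1e-8):
    # Bisection enclosure of the unique root of an increasing function f,
    # where f is an interval extension evaluated on point intervals.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        v = f(iv.mpf([mid, mid]))
        if v.b < 0:
            lo = mid   # f is certainly negative at mid: root is to the right
        elif v.a > 0:
            hi = mid   # f is certainly positive at mid: root is to the left
        else:
            break      # sign at mid undecided; cannot shrink rigorously
    return lo, hi
```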
When the weight is \(w_{\alpha}(x)=|x|^{p}\) with \(p<1\), more work is required when evaluating \(U_{\alpha}\). For \(x\in[\epsilon,\pi]\) we use a combination of asymptotic analysis near the singularities of the integrand and a rigorous integrator, as described in Appendix D. For \(x\in[0,\epsilon]\) we split it as in (32) and use the following lemma to handle the second factor.
**Lemma 11.6**.: _Let \(0<\epsilon<\frac{\pi}{2}\), for \(\alpha\in(-1,0)\), \(x\in[0,\epsilon]\) and \(w_{\alpha}(x)=|x|^{p}\) with \(-\alpha<p<1\) and \(1+\alpha\neq p\) we have_
\[\frac{U_{\alpha,1}(x)}{x^{-\alpha+p}}\leq c_{\alpha,1}+d_{\alpha,1}x^{3+\alpha}\]
_and_
\[\frac{U_{\alpha,2}(x)}{x^{-\alpha+p}}\leq c_{\alpha,2}+d_{\alpha,2}x^{2+ \alpha-p},\]
_where_
\[c_{\alpha,1}= \Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha\right)\left(\frac {2}{\alpha-p}+\frac{\Gamma(-\alpha)\Gamma(1+p)}{\Gamma(1-\alpha+p)}+\frac{{}_ {2}F_{1}(1+\alpha,1+p;2+p;-1)}{1+p}\right.\] \[-2r_{\alpha,0}^{p}\left(\frac{2r_{\alpha,0}^{-\alpha}}{\alpha-p} +\frac{r_{\alpha,0}\ {}_{2}F_{1}(1+\alpha,1+p,2+p,-r_{\alpha,0})}{1+p}+\frac{r_{\alpha,0}\ {}_{2}F_{1}(1+\alpha,1+p,2+p,r_{\alpha,0})}{1+p}\right)\left.\right)\] \[c_{\alpha,2}= \Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha\right)\left(\frac {\Gamma(-\alpha)\Gamma(\alpha-p)}{\Gamma(-p)}+\frac{{}_{2}F_{1}(1+\alpha, \alpha-p;1+\alpha-p;-1)-2}{\alpha-p}\right);\] \[d_{\alpha,1}= 2\sum_{m=1}^{M-1}(-1)^{m}\zeta(-\alpha-2m)\frac{\epsilon^{2m-2 }}{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{1}{2k+1+p}+\frac{1}{\epsilon^{2} }\sum_{m=M}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{(2\epsilon)^{2m}}{(2m)!};\] \[d_{\alpha,2}= -\Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha\right)\frac{(1+ \alpha)(2+\alpha)}{(2+\alpha-p)\pi^{2+\alpha-p}}\] \[+2\pi^{p-1}\sum_{m=1}^{M-1}(-1)^{m}\zeta(-\alpha-2m)\frac{\pi^{2m }}{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{\left(\frac{\epsilon}{\pi} \right)^{2(m-1-k)}}{2k+1+p}+6\pi^{p-1}\sum_{m=M}^{\infty}(-1)^{m}\zeta(-\alpha- 2m)\frac{(\frac{3\pi}{2})^{2m}}{(2m)!}\]
_The tails for \(d_{\alpha,1}\) and \(d_{\alpha,2}\) are of the same form as those for the Clausen functions and can be bounded using Lemma C.5._
Proof.: Recall that
\[U_{\alpha,1}(x)=x^{1+p}\int_{0}^{1}\left|C_{-\alpha}(x(1-t))+C_{- \alpha}(x(1+t))-2C_{-\alpha}(xt)\right|t^{p}\ dt,\] \[U_{\alpha,2}(x)= x^{1+p}\int_{1}^{\pi/x}(C_{-\alpha}(x(1-t))+C_{-\alpha}(x(1+t))-2C_{- \alpha}(xt))t^{p}\ dt.\]
The idea is to expand the integrals in \(x\) and integrate the expansions termwise.
From the asymptotic expansion of \(C_{s}(x)\) we get, with \(x,t>0\),
\[C_{-\alpha}(x(1-t))+C_{-\alpha}(x(1+t))-2C_{-\alpha}(xt)=\Gamma( 1+\alpha)\sin\left(-\frac{\pi}{2}\alpha\right)(\left|1-t\right|^{-\alpha-1}+(1 +t)^{-\alpha-1}-2t^{-\alpha-1})x^{-\alpha-1}\\ +\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)((1-t)^{2m}+(1+t)^{ 2m}-2t^{2m})\frac{x^{2m}}{(2m)!}. \tag{33}\]
Using that
\[(1-t)^{2m}+(1+t)^{2m}-2t^{2m}=\sum_{k=0}^{2m}\binom{2m}{k}(1+(-1)^{k})t^{k}-2t ^{2m}=2\sum_{k=0}^{m-1}\binom{2m}{2k}t^{2k}\]
the sum can be rewritten as
\[\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)((1-t)^{2m}+(1+t)^{2m}-2t^{2m}) \frac{x^{2m}}{(2m)!}=2\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{x^{2m} }{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k}t^{2k}.\]
We can further note that \((-1)^{m}\zeta(-\alpha-2m)>0\) for \(-1<\alpha<0\) and \(m=1,2,\ldots\), so all terms in the sum are positive.
For \(U_{\alpha,1}\) we get
\[U_{\alpha,1}(x)\leq\Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha \right)x^{-\alpha+p}\int_{0}^{1}\left|(1-t)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^ {-\alpha-1}\right|t^{p}\ dt\\ +2x^{1+p}\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{x^{2m} }{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k}\int_{0}^{1}t^{2k+p}\ dt.\]
Here we have used that \(1-t>0\), together with the positivity of the terms in the sum, to remove some of the absolute values. For the first term we get from Lemma 11.1 that the integrand has the unique root \(r_{\alpha,0}\) on the interval \([0,1]\). This gives us
\[\int_{0}^{1}\left|(1-t)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha -1}\right|t^{p}\ dt\\ =-\int_{0}^{r_{\alpha,0}}((1-t)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^ {-\alpha-1})t^{p}\ dt\\ +\int_{r_{\alpha,0}}^{1}((1-t)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{ -\alpha-1})t^{p}\ dt.\]
Integrating this gives us
\[\int_{0}^{1}\left|(1-t)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha -1}\right|t^{p}\ dt=\frac{2}{\alpha-p}+\frac{\Gamma(-\alpha)\Gamma(1+p)}{ \Gamma(1-\alpha+p)}+\tfrac{{}_{2}F_{1}(1+\alpha,1+p;2+p;-1)}{1+p}\\ -2r_{\alpha,0}^{p}\left(\frac{2r_{\alpha,0}^{-\alpha}}{\alpha-p}+ \frac{r_{\alpha,0}\ {}_{2}F_{1}(1+\alpha,1+p,2+p,-r_{\alpha,0})}{1+p}+ \frac{r_{\alpha,0}\ {}_{2}F_{1}(1+\alpha,1+p,2+p,r_{\alpha,0})}{1+p}\right)\]
This, together with the factor \(\Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha\right)\) gives us \(c_{\alpha,1}\).
For the sum we have \(\int_{0}^{1}t^{2k+p}\ dt=\frac{1}{2k+1+p}\), giving us
\[2x^{1+p}\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{x^{2m}}{(2m)!}\sum_{ k=0}^{m-1}{2m\choose 2k}\frac{1}{2k+1+p}.\]
Factoring out \(x^{2}\) and using that the sum is increasing in \(x\) we get the bound
\[2x^{3+p}\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{\epsilon^{2m-2}}{(2m )!}\sum_{k=0}^{m-1}{2m\choose 2k}\frac{1}{2k+1+p}.\]
To bound the tail we use that
\[\sum_{k=0}^{m-1}{2m\choose 2k}\frac{1}{2k+1+p}\leq\sum_{k=0}^{m}{2m\choose 2k} \frac{1}{2k+1}=\frac{2^{2m}}{1+2m}\leq 2^{2m}\]
and hence
\[2\sum_{m=M}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{\epsilon^{2m-2}}{(2m)!} \sum_{k=0}^{m-1}{2m\choose 2k}\frac{1}{2k+1+p}\leq\frac{1}{\epsilon^{2}} \sum_{m=M}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{(2\epsilon)^{2m}}{(2m)!}\]
This together with the first \(M-1\) terms in the sum gives us \(d_{\alpha,1}\).
For \(U_{\alpha,2}\) there is no absolute value and integrating termwise gives us
\[U_{\alpha,2}(x)=\Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha \right)x^{-\alpha+p}\int_{1}^{\pi/x}((t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{ -\alpha-1})t^{p}\ dt\\ +2x^{1+p}\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{x^{2m} }{(2m)!}\sum_{k=0}^{m-1}{2m\choose 2k}\int_{1}^{\pi/x}t^{2k+p}\ dt\]
For the first term we get after long calculations that
\[\int_{1}^{\pi/x}((t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha -1})t^{p}\ dt=\frac{\Gamma(-\alpha)\Gamma(\alpha-p)}{\Gamma(-p)}+\frac{{}_{2} F_{1}(1+\alpha,\alpha-p;1+\alpha-p;-1)-2}{\alpha-p}\\ -\frac{1}{\alpha-p}\left(\frac{x}{\pi}\right)^{\alpha-p}\left({} {}_{2}F_{1}\left(1+\alpha,\alpha-p;1+\alpha-p;\frac{x}{\pi}\right)+{}_{2}F_{1} \left(1+\alpha,\alpha-p;1+\alpha-p;-\frac{x}{\pi}\right)-2\right).\]
To handle the hypergeometric functions we use the series representation
\[{}_{2}F_{1}\left(1+\alpha,\alpha-p;1+\alpha-p;\frac{x}{\pi}\right)=\sum_{k=0} ^{\infty}\frac{(1+\alpha)_{k}(\alpha-p)_{k}}{(1+\alpha-p)_{k}}\frac{1}{k!} \left(\frac{x}{\pi}\right)^{k},\]
and similarly for \(-\frac{x}{\pi}\), which holds since \(\frac{x}{\pi}<1\) and \(1+\alpha-p\) is not equal to a non-positive integer. This gives us
\[{}_{2}F_{1}\left(1+\alpha,\alpha-p;1+\alpha-p;\frac{x}{\pi}\right) +{}_{2}F_{1}\left(1+\alpha,\alpha-p;1+\alpha-p;-\frac{x}{\pi}\right)-2\\ =\sum_{k=0}^{\infty}\frac{(1+\alpha)_{k}(\alpha-p)_{k}}{(1+\alpha -p)_{k}}\frac{1}{k!}\left(\left(\frac{x}{\pi}\right)^{k}+\left(-\frac{x}{\pi} \right)^{k}\right)-2=2\sum_{k=1}^{\infty}\frac{(1+\alpha)_{2k}(\alpha-p)_{2k}} {(1+\alpha-p)_{2k}}\frac{1}{(2k)!}\left(\frac{x}{\pi}\right)^{2k}\]
Putting this together we have
\[\int_{1}^{\pi/x}((t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1})t ^{p}\ dt =\frac{\Gamma(-\alpha)\Gamma(\alpha-p)}{\Gamma(-p)}+\tfrac{2F_{1}(1+ \alpha,\alpha-p;1+\alpha-p;-1)-2}{\alpha-p}\] \[\quad-\frac{2}{\alpha-p}\left(\frac{x}{\pi}\right)^{\alpha-p}\sum_ {k=1}^{\infty}\frac{(1+\alpha)_{2k}(\alpha-p)_{2k}}{(1+\alpha-p)_{2k}}\frac{1 }{(2k)!}\left(\frac{x}{\pi}\right)^{2k}\]
Noticing that
\[\frac{1}{\alpha-p}\frac{(1+\alpha)_{2k}(\alpha-p)_{2k}}{(1+\alpha-p)_{2k}}= \frac{(1+\alpha)_{2k}}{2k+\alpha-p}>0\]
for \(k\geq 1\) we see that all terms in the sum are positive. Since we are subtracting the sum we get an upper bound even if we truncate the sum to any finite number of terms. In particular, keeping only the first term we get
\[\int_{1}^{\pi/x}((t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha- 1})t^{p}\ dt\leq\frac{\Gamma(-\alpha)\Gamma(\alpha-p)}{\Gamma(-p)}+\frac{2F_ {1}(1+\alpha,\alpha-p;1+\alpha-p;-1)-2}{\alpha-p}\\ -\frac{(1+\alpha)(2+\alpha)}{2+\alpha-p}\left(\frac{x}{\pi} \right)^{2+\alpha-p}\]
Multiplying this with \(\Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha\right)\) (which is positive) we get \(c_{\alpha,2}\) from the constant term and the first part of \(d_{\alpha,2}\) from the non-constant term.
For the sum we get
\[2x^{1+p}\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{x^{2m}}{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{\left(\frac{\pi}{x}\right)^{2k+1+p}-1}{2k+1+p}\]
Factoring out \((\pi/x)^{2m-1+p}\) gives us
\[2x^{1+p}\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{x^{2 m}}{(2m)!}\left(\frac{\pi}{x}\right)^{2m-1+p}\sum_{k=0}^{m-1}\binom{2m}{2k} \frac{\left(\frac{x}{\pi}\right)^{2(m-1-k)}-\left(\frac{x}{\pi}\right)^{2m-1+ p}}{2k+1+p}\\ =2x^{2}\pi^{p-1}\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac {\pi^{2m}}{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{\left(\frac{x}{\pi} \right)^{2(m-1-k)}-\left(\frac{x}{\pi}\right)^{2m-1+p}}{2k+1+p}.\]
Looking at the inner sum an upper bound is given by
\[\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{\left(\frac{x}{\pi}\right)^{2(m-1-k)}}{2k +1+p}.\]
This is increasing in \(x\), so for \(x\in[0,\epsilon]\) an upper bound is given by
\[2x^{2}\pi^{p-1}\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{\pi^{2m}}{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{\left(\frac{\epsilon}{\pi}\right)^{2(m-1-k)}}{2k+1+p}.\]
To bound the tail we use that \(\epsilon\leq\pi/2\) and hence
\[\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{\left(\frac{\epsilon}{\pi}\right)^{2(m-1- k)}}{2k+1+p}\leq\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{\left(\frac{1}{2}\right)^{2(m-1- k)}}{2k+1}=\frac{2^{-2m}(1+3^{1+2m})-4}{1+2m}<3\left(\frac{3}{2}\right)^{2m}\]
Inserting this we have
\[\sum_{m=M}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{\pi^{2m}}{(2m)!}\sum_{k=0}^ {m-1}\binom{2m}{2k}\frac{\left(\frac{\epsilon}{\pi}\right)^{2(m-1-k)}}{2k+1} \leq 3\sum_{m=M}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{\left(\frac{3\pi}{ 2}\right)^{2m}}{(2m)!}.\]
This together with the first \(M-1\) terms in the sum gives us the second part of \(d_{\alpha,2}\).
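For illustration, the constants of Lemma 11.6 translate directly into code; the following mpmath transcription of \(c_{\alpha,2}\) is non-rigorous floating point, whereas the proof encloses the same expression in interval arithmetic:

```python
from mpmath import gamma, sin, pi, hyp2f1

def c_alpha_2(alpha, p):
    # Direct transcription of c_{alpha,2} from Lemma 11.6, with
    # -alpha < p < 1 and 1 + alpha != p as in the statement.
    return gamma(1 + alpha) * sin(-pi / 2 * alpha) * (
        gamma(-alpha) * gamma(alpha - p) / gamma(-p)
        + (hyp2f1(1 + alpha, alpha - p, 1 + alpha - p, -1) - 2) / (alpha - p))
```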
#### 11.2.2 Evaluation of \(\mathcal{T}_{\alpha}\) for \(I_{1}\)
For \(x\in[\epsilon,\pi]\) we make one optimization compared to \(I_{2}\): instead of computing an enclosure we only compute an upper bound. For this we use that \(u_{\alpha}(x)\geq u_{-1}(x)\) for \(\alpha\in I_{1}\) and hence
\[\mathcal{T}_{\alpha}(x)\leq\frac{U_{\alpha}(x)}{\pi|w_{\alpha}(x)||u_{-1}(x)|}.\]
The different weight also means that the asymptotic analysis near the singularities of the integrand has to be adjusted, this is discussed in Appendix D.
For \(x\in[0,\epsilon]\) we write
\[\mathcal{T}_{\alpha}(x)=\frac{\log(1/x)}{\pi\log(2e+1/x)}\cdot\frac{\Gamma(1+\alpha)x^{-\alpha}(1-x^{1+\alpha+(1+\alpha)^{2}/2})}{u_{\alpha}(x)}\cdot\frac{U_{\alpha}(x)}{x^{(1-\alpha)/2-\alpha}\log(1/x)(1-x^{1+\alpha+(1+\alpha)^{2}/2})\Gamma(1+\alpha)}.\]
The first two factors are handled in the same way as in Section 10.2. For the third factor we start with the following lemma
**Lemma 11.7**.: _For \(\alpha\in(-1,0)\) and \(w_{\alpha}(x)=|x|^{(1-\alpha)/2}\log(2e+1/|x|)\) we have_
\[\frac{U_{\alpha}(x)}{x^{(1-\alpha)/2-\alpha}\log(1/x)(1-x^{1+ \alpha+(1+\alpha)^{2}/2})\Gamma(1+\alpha)}\leq \sin\left(-\frac{\pi}{2}\alpha\right)\left(G_{\alpha,1}(x)+G_{ \alpha,2}(x)\right)\] \[+\frac{x^{1+\alpha}}{\Gamma(1+\alpha)\log(1/x)(1-x^{1+\alpha+(1 +\alpha)^{2}/2})}R_{\alpha}(x)\]
_with_
\[G_{\alpha,1}(x) =\frac{\int_{0}^{1}\left|(1-t)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^ {-\alpha-1}\right|t^{(1-\alpha)/2}\log(2e+1/(xt))\ dt}{\log(1/x)(1-x^{1+ \alpha+(1+\alpha)^{2}/2})}; \tag{34}\] \[G_{\alpha,2}(x) =\frac{\int_{1}^{\pi/x}\left((t-1)^{-\alpha-1}+(1+t)^{-\alpha-1} -2t^{-\alpha-1}\right)t^{(1-\alpha)/2}\log(2e+1/(xt))\ dt}{\log(1/x)(1-x^{1+ \alpha+(1+\alpha)^{2}/2})};\] (35) \[R_{\alpha}(x) =2\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{x^{2m}}{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k}\int_{0}^{\pi/x}t^{2k+(1-\alpha)/2}\log(2e+1/ (xt))\ dt. \tag{36}\]
Proof.: With the given weight we have
\[U_{\alpha}(x)=x\int_{0}^{\pi/x}|C_{-\alpha}(x(1-t))+C_{-\alpha}(x(1+t))-2C_{- \alpha}(xt)|\,(xt)^{(1-\alpha)/2}\log(2e+1/(xt))\ dt.\]
This means that we want to bound
\[\frac{x^{1+\alpha}\int_{0}^{\pi/x}|C_{-\alpha}(x(1-t))+C_{-\alpha}(x(1+t))-2C _{-\alpha}(xt)|\,t^{(1-\alpha)/2}\log(2e+1/(xt))\ dt}{\log(1/x)(1-x^{1+\alpha+ (1+\alpha)^{2}/2})\Gamma(1+\alpha)}.\]
Using the asymptotic expansion of the Clausen terms in the integrand from (33) we can bound the integral as
\[\int_{0}^{\pi/x}|C_{-\alpha}(x(1-t))+C_{-\alpha}(x(1+t))-2C_{- \alpha}(xt)|\,t^{(1-\alpha)/2}\log(2e+1/(xt))dt\] \[\leq\Gamma(1+\alpha)\sin(-\pi\alpha/2)x^{-\alpha-1}\int_{0}^{\pi/ x}\left||1-t|^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1}\right|t^{(1-\alpha)/2} \log(2e+1/(xt))\ dt\] \[+2\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{x^{2m}}{(2m)! }\sum_{k=0}^{m-1}\binom{2m}{2k}\int_{0}^{\pi/x}t^{2k+(1-\alpha)/2}\log(2e+1/( xt))\ dt\]
From this we get
\[\frac{\sin(-\pi\alpha/2)\int_{0}^{\pi/x}\left|\left|1-t\right|^{- \alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1}\right|t^{(1-\alpha)/2}\log(2e+1/(xt) )\ dt}{\log(1/x)(1-x^{1+\alpha+(1+\alpha)^{2}/2})}\\ +\frac{x^{1+\alpha}2\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m) \frac{x^{2m}}{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k}\int_{0}^{\pi/x}t^{2k+(1- \alpha)/2}\log(2e+1/(xt))\ dt}{\log(1/x)(1-x^{1+\alpha+(1+\alpha)^{2}/2})\Gamma (1+\alpha)}\]
as an upper bound. For the integral in the first term we can split the interval of integration at \(t=1\) to get
\[\int_{0}^{\pi/x}\left|\left|1-t\right|^{-\alpha-1}+(1+t)^{-\alpha -1}-2t^{-\alpha-1}\right|t^{(1-\alpha)/2}\log(2e+1/(xt))\ dt\\ =\int_{0}^{1}\left|(1-t)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{- \alpha-1}\right|t^{(1-\alpha)/2}\log(2e+1/(xt))\ dt\\ +\int_{1}^{\pi/x}\left((t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{- \alpha-1}\right)t^{(1-\alpha)/2}\log(2e+1/(xt))\ dt.\]
Here we have removed the absolute values around \(1-t\) according to its sign and also used that
\[(t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1}\]
is positive for \(t>1\), which can be shown in the same way as in Lemma 11.4.
Bounds of \(G_{\alpha,1}\), \(G_{\alpha,2}\) and \(R_{\alpha}\) are given in Appendix F. The factor
\[\frac{x^{1+\alpha}}{\Gamma(1+\alpha)\log(1/x)(1-x^{1+\alpha+(1+\alpha)^{2}/2})}\]
has a removable singularity at \(\alpha=-1\) that needs to be treated but is otherwise straightforward to enclose.
#### 11.2.3 Evaluation of \(\mathcal{T}_{\alpha}\) for \(I_{3}\)
In this case we need to not only compute an enclosure of \(\mathcal{T}_{\alpha}(x)\), but also understand its behavior in \(\alpha\). We therefore compute Taylor models of degree \(0\) centered at \(\alpha=0\). The weight is given by \(w_{\alpha}(x)=|x|\), and we can therefore use the explicit expression for the integral given in Lemma 11.5.
We start with the following lemma that gives us information about the first term in the Taylor model.
**Lemma 11.8**.: _For \(x\in[0,\pi]\) the constant term in the expansion at \(\alpha=0\) of \(\mathcal{T}_{\alpha}(x)\) is \(1\)._
Proof.: Recall that
\[\mathcal{T}_{\alpha}(x)=\frac{U_{\alpha}(x)}{\pi|w_{\alpha}(x)||u_{\alpha}(x)|}.\]
In this case \(w_{\alpha}(x)=|x|\), giving us
\[\mathcal{T}_{\alpha}(x)=\frac{U_{\alpha}(x)}{\pi x|u_{\alpha}(x)|}.\]
By Lemma 4.8 the constant term in the expansion at \(\alpha=0\) of \(u_{\alpha}(x)\) is \(1\); it is therefore enough to show that the constant term of the numerator is \(\pi x\).
From Lemma 11.5 we have that
\[U_{\alpha}(x)=2\tilde{C}_{2-\alpha}(x)+2(C_{2-\alpha}(x+\pi)-C_{ 2-\alpha}(\pi))\\ -2\left(C_{2-\alpha}(x(1-r_{\alpha,x}))+C_{2-\alpha}(x(1+r_{ \alpha,x}))-2C_{2-\alpha}(xr_{\alpha,x})\right)\\ -2xr_{\alpha,x}\left(-S_{1-\alpha}(x(1-r_{\alpha,x}))+S_{1-\alpha }(x(1+r_{\alpha,x}))-2S_{1-\alpha}(xr_{\alpha,x})\right).\]
To get the constant term in the expansion we want to compute the limit of this as \(\alpha\to 0\), which we denote by \(U_{0}(x)\).
To do that we first need to compute \(\lim_{\alpha\to 0^{-}}r_{\alpha,x}\). The function \(\hat{I}_{\alpha}(x,t)\), for which \(r_{\alpha,x}\) is a zero, converges to \(0\) everywhere as \(\alpha\to 0\). If we instead divide by \(\alpha\) and compute the zero of \(\frac{\hat{I}_{\alpha}(x,t)}{\alpha}\), the limit is well defined, and \(r_{0,x}\) is the zero of
\[\frac{d}{d\alpha}\hat{I}_{\alpha}(x,t)=C_{-\alpha}^{(1)}(x(1-t))+C_{-\alpha}^{ (1)}(x(1+t))-2C_{-\alpha}^{(1)}(xt).\]
This function satisfies the same properties as in Lemma 11.2 with respect to the root \(r_{0,x}\).
Taking the limit in \(\alpha\) now gives us
\[U_{0}(x)=2\tilde{C}_{2}(x)+2(C_{2}(x+\pi)-C_{2}(\pi))\\ -2\left(C_{2}(x(1-r_{0,x}))+C_{2}(x(1+r_{0,x}))-2C_{2}(xr_{0,x}) \right)\\ -2xr_{0,x}\left(-S_{1}(x(1-r_{0,x}))+S_{1}(x(1+r_{0,x}))-2S_{1}( xr_{0,x})\right).\]
For these parameters to the Clausen functions we have the explicit expressions
\[C_{2}(x)=\frac{\pi^{2}}{6}-\frac{\pi}{2}x+\frac{1}{4}x^{2},\quad S_{1}(x)= \frac{\pi}{2}-\frac{1}{2}x,\]
valid when \(x\in[0,2\pi]\). For the different parts of \(U_{0}(x)\) we get
\[2\tilde{C}_{2}(x)+2(C_{2}(x+\pi)-C_{2}(\pi))\\ =2\left(\left(-\frac{\pi}{2}x+\frac{1}{4}x^{2}\right)+\left( \frac{\pi^{2}}{6}-\frac{\pi}{2}(x+\pi)+\frac{1}{4}(x+\pi)^{2}\right)-\left( \frac{\pi^{2}}{6}-\frac{\pi}{2}\pi+\frac{1}{4}\pi^{2}\right)\right)\\ =2\left(-\frac{\pi}{2}x+\frac{1}{2}x^{2}\right)=-\pi x+x^{2},\]
\[2\left(C_{2}(x(1-r_{0,x}))+C_{2}(x(1+r_{0,x}))-2C_{2}(xr_{0,x})\right)\\ =2\bigg{(}\left(\frac{\pi^{2}}{6}-\frac{\pi}{2}x(1-r_{0,x})+\frac {1}{4}x^{2}(1-r_{0,x})^{2}\right)+\left(\frac{\pi^{2}}{6}-\frac{\pi}{2}x(1+r_{ 0,x})+\frac{1}{4}x^{2}(1+r_{0,x})^{2}\right)\\ -2\left(\frac{\pi^{2}}{6}-\frac{\pi}{2}xr_{0,x}+\frac{1}{4}x^{2} r_{0,x}^{2}\right)\bigg{)}=2\left(-\pi x(1-r_{0,x})+\frac{1}{2}x^{2} \right)=-2\pi x(1-r_{0,x})+x^{2},\]
\[2xr_{0,x}\left(-S_{1}(x(1-r_{0,x}))+S_{1}(x(1+r_{0,x}))-2S_{1}( xr_{0,x})\right)\\ =2xr_{0,x}\left(-\left(\frac{\pi}{2}-\frac{1}{2}x(1-r_{0,x}) \right)+\left(\frac{\pi}{2}-\frac{1}{2}x(1+r_{0,x})\right)-2\left(\frac{\pi}{ 2}-\frac{1}{2}xr_{0,x}\right)\right)=-2\pi xr_{0,x}.\]
Putting this together we get
\[U_{0}(x)=(-\pi x+x^{2})-(-2\pi x(1-r_{0,x})+x^{2})+2\pi xr_{0,x}=\pi x,\]
which is exactly what we needed to show.
For \(x\in[\epsilon,\pi]\) we compute the Taylor model of \(u_{\alpha}(x)\) using the approach described in Section 7.3. For \(U_{\alpha}\) we use the expression
\[U_{\alpha}(x)=2\tilde{C}_{2-\alpha}(x)+2(C_{2-\alpha}(x+\pi)-C_ {2-\alpha}(\pi))\\ -2\left(C_{2-\alpha}(x(1-r_{\alpha,x}))+C_{2-\alpha}(x(1+r_{\alpha,x}))-2C_{2-\alpha}(xr_{\alpha,x})\right)\\ -2xr_{\alpha,x}\left(-S_{1-\alpha}(x(1-r_{\alpha,x}))+S_{1-\alpha }(x(1+r_{\alpha,x}))-2S_{1-\alpha}(xr_{\alpha,x})\right)\]
from Lemma 11.5. For the Clausen functions not depending on \(r_{\alpha,x}\) we can compute Taylor models directly. If we let
\[K_{\alpha}(x)=\left(C_{2-\alpha}(x(1-r_{\alpha,x}))+C_{2-\alpha}( x(1+r_{\alpha,x}))-2C_{2-\alpha}(xr_{\alpha,x})\right)\\ +xr_{\alpha,x}\left(-S_{1-\alpha}(x(1-r_{\alpha,x}))+S_{1-\alpha} (x(1+r_{\alpha,x}))-2S_{1-\alpha}(xr_{\alpha,x})\right)\]
then the part of \(U_{\alpha}(x)\) depending on \(r_{\alpha,x}\) is given by \(-2K_{\alpha}(x)\). For the Taylor model of \(K_{\alpha}(x)\) we need to take into account that \(r_{\alpha,x}\) also depends on \(\alpha\). The constant term in the Taylor model is given by \(K_{0}(x)\), which we can compute. For the remainder term we want to enclose \(\frac{d}{d\alpha}K_{\alpha}(x)\) for \(\alpha\in I_{3}\). Differentiation gives us
\[\frac{d}{d\alpha}K_{\alpha}(x)=-\left(C_{2-\alpha}^{(1)}(x(1-r_{\alpha,x}))+C_{2-\alpha}^{(1)}(x(1+r_{\alpha,x}))-2C_{2-\alpha}^{(1)}(xr_{\alpha,x})\right)\\ -x\left(\frac{d}{d\alpha}r_{\alpha,x}\right)\left(-S_{1-\alpha}(x(1-r_{\alpha,x}))+S_{1-\alpha}(x(1+r_{\alpha,x}))-2S_{1-\alpha}(xr_{\alpha,x})\right)\\ +x\left(\frac{d}{d\alpha}r_{\alpha,x}\right)\left(-S_{1-\alpha}(x(1-r_{\alpha,x}))+S_{1-\alpha}(x(1+r_{\alpha,x}))-2S_{1-\alpha}(xr_{\alpha,x})\right)\\ -xr_{\alpha,x}\left(-S_{1-\alpha}^{(1)}(x(1-r_{\alpha,x}))+S_{1-\alpha}^{(1)}(x(1+r_{\alpha,x}))-2S_{1-\alpha}^{(1)}(xr_{\alpha,x})\right)\\ +x^{2}r_{\alpha,x}\left(\frac{d}{d\alpha}r_{\alpha,x}\right)\left(C_{-\alpha}(x(1-r_{\alpha,x}))+C_{-\alpha}(x(1+r_{\alpha,x}))-2C_{-\alpha}(xr_{\alpha,x})\right).\]
We see that the second and third terms cancel out, and the last term is zero since \(r_{\alpha,x}\) is a root of exactly this function. This gives us
\[\frac{d}{d\alpha}K_{\alpha}(x)=-\left(C_{2-\alpha}^{(1)}(x(1-r_{ \alpha,x}))+C_{2-\alpha}^{(1)}(x(1+r_{\alpha,x}))-2C_{2-\alpha}^{(1)}(xr_{ \alpha,x})\right)\\ -xr_{\alpha,x}\left(-S_{1-\alpha}^{(1)}(x(1-r_{\alpha,x}))+S_{1- \alpha}^{(1)}(x(1+r_{\alpha,x}))-2S_{1-\alpha}^{(1)}(xr_{\alpha,x})\right).\]
In particular we see that the derivative only depends on \(r_{\alpha,x}\), for which we can easily compute an enclosure, and not on \(\frac{d}{d\alpha}r_{\alpha,x}\).
For \(x\in[0,\epsilon]\) we write the function as in (32) and the only problematic part is the evaluation of \(U_{\alpha}(x)x^{\alpha-1}\). The terms not depending on \(r_{\alpha,x}\) are handled by expanding the Clausen functions, following Appendix B.1 to get a Taylor model of the expansion. What remains is to compute a Taylor model of \(K_{\alpha}(x)x^{\alpha-1}\). For that we compute the constant term and an enclosure of the derivative separately. The constant term is obtained by just expanding the Clausen functions. For the derivative we get
\[\frac{d}{d\alpha}K_{\alpha}(x)x^{\alpha-1}=\left(\frac{d}{d\alpha}K_{\alpha}( x)\right)x^{\alpha-1}+K_{\alpha}(x)\log(x)x^{\alpha-1}.\]
When expanding the Clausen functions for this we have to handle the cancellations between the two terms since they individually blow up at \(x=0\).
**Lemma 11.9**.: _We have the following expansion for \(0<x<\pi\)_
\[\frac{d}{d\alpha}K_{\alpha}(x)x^{\alpha-1}= -\frac{d}{ds}\left(\Gamma(1-s)\sin\left(\frac{\pi}{2}s\right)\left((1-r_{\alpha,x})^{s-1}+(1+r_{\alpha,x})^{s-1}-2r_{\alpha,x}^{s-1}\right)\right)\Bigg{|}_{s=2-\alpha}\] \[-r_{\alpha,x}\frac{d}{ds}\left(\Gamma(1-s)\cos\left(\frac{\pi}{2}s\right)\left(-(1-r_{\alpha,x})^{s-1}+(1+r_{\alpha,x})^{s-1}-2r_{\alpha,x}^{s-1}\right)\right)\Bigg{|}_{s=1-\alpha}\] \[+\log(x)\sum_{m=1}^{\infty}\frac{(-1)^{m}}{(2m)!}\zeta(2-\alpha-2m)\left((1-r_{\alpha,x})^{2m}+(1+r_{\alpha,x})^{2m}-2r_{\alpha,x}^{2m}\right)x^{2m-1+\alpha}\] \[+r_{\alpha,x}\log(x)\sum_{m=0}^{\infty}\frac{(-1)^{m}}{(2m+1)!}\zeta(-\alpha-2m)\left(-(1-r_{\alpha,x})^{2m+1}+(1+r_{\alpha,x})^{2m+1}-2r_{\alpha,x}^{2m+1}\right)x^{2m+1+\alpha}\] \[-\sum_{m=1}^{\infty}\frac{(-1)^{m}}{(2m)!}\zeta^{\prime}(2-\alpha-2m)\left((1-r_{\alpha,x})^{2m}+(1+r_{\alpha,x})^{2m}-2r_{\alpha,x}^{2m}\right)x^{2m-1+\alpha}\] \[-r_{\alpha,x}\sum_{m=0}^{\infty}\frac{(-1)^{m}}{(2m+1)!}\zeta^{\prime}(-\alpha-2m)\left(-(1-r_{\alpha,x})^{2m+1}+(1+r_{\alpha,x})^{2m+1}-2r_{\alpha,x}^{2m+1}\right)x^{2m+1+\alpha}.\]
Proof.: We get directly from expanding the Clausen functions that
\[K_{\alpha}(x)\log(x)x^{\alpha-1}=\Gamma(\alpha-1)\sin\left(\frac{\pi}{2}(2-\alpha)\right)\left((1-r_{\alpha,x})^{1-\alpha}+(1+r_{\alpha,x})^{1-\alpha}-2r_{\alpha,x}^{1-\alpha}\right)\log(x)\\ +r_{\alpha,x}\Gamma(\alpha)\cos\left(\frac{\pi}{2}(1-\alpha)\right)\left(-(1-r_{\alpha,x})^{-\alpha}+(1+r_{\alpha,x})^{-\alpha}-2r_{\alpha,x}^{-\alpha}\right)\log(x)\\ +\log(x)\sum_{m=1}^{\infty}\frac{(-1)^{m}}{(2m)!}\zeta(2-\alpha-2m)\left((1-r_{\alpha,x})^{2m}+(1+r_{\alpha,x})^{2m}-2r_{\alpha,x}^{2m}\right)x^{2m-1+\alpha}\\ +r_{\alpha,x}\log(x)\sum_{m=0}^{\infty}\frac{(-1)^{m}}{(2m+1)!}\zeta(-\alpha-2m)\left(-(1-r_{\alpha,x})^{2m+1}+(1+r_{\alpha,x})^{2m+1}-2r_{\alpha,x}^{2m+1}\right)x^{2m+1+\alpha} \tag{37}\]
and
\[\left(\frac{d}{d\alpha}K_{\alpha}(x)\right)x^{\alpha-1}=-\frac{d}{ds}\left(\Gamma(1-s)\sin\left(\frac{\pi}{2}s\right)\left((1-r_{\alpha,x})^{s-1}+(1+r_{\alpha,x})^{s-1}-2r_{\alpha,x}^{s-1}\right)x^{s-1}\right)\Bigg{|}_{s=2-\alpha}x^{\alpha-1}\\ -r_{\alpha,x}\frac{d}{ds}\left(\Gamma(1-s)\cos\left(\frac{\pi}{2}s\right)\left(-(1-r_{\alpha,x})^{s-1}+(1+r_{\alpha,x})^{s-1}-2r_{\alpha,x}^{s-1}\right)x^{s-1}\right)\Bigg{|}_{s=1-\alpha}x^{\alpha}\\ -\sum_{m=1}^{\infty}\frac{(-1)^{m}}{(2m)!}\zeta^{\prime}(2-\alpha-2m)\left((1-r_{\alpha,x})^{2m}+(1+r_{\alpha,x})^{2m}-2r_{\alpha,x}^{2m}\right)x^{2m-1+\alpha}\\ -r_{\alpha,x}\sum_{m=0}^{\infty}\frac{(-1)^{m}}{(2m+1)!}\zeta^{\prime}(-\alpha-2m)\left(-(1-r_{\alpha,x})^{2m+1}+(1+r_{\alpha,x})^{2m+1}-2r_{\alpha,x}^{2m+1}\right)x^{2m+1+\alpha}. \tag{38}\]
For the first two terms in the expansion for \(\left(\frac{d}{d\alpha}K_{\alpha}(x)\right)x^{\alpha-1}\) we have
\[\frac{d}{ds}\left(\Gamma(1-s)\sin\left(\frac{\pi}{2}s\right)\left((1-r_{\alpha,x})^{s-1}+(1+r_{\alpha,x})^{s-1}-2r_{\alpha,x}^{s-1}\right)x^{s-1}\right)\Bigg{|}_{s=2-\alpha}x^{\alpha-1}\\ =\frac{d}{ds}\left(\Gamma(1-s)\sin\left(\frac{\pi}{2}s\right)\left((1-r_{\alpha,x})^{s-1}+(1+r_{\alpha,x})^{s-1}-2r_{\alpha,x}^{s-1}\right)\right)\Bigg{|}_{s=2-\alpha}\\ +\Gamma(1-s)\sin\left(\frac{\pi}{2}s\right)\left((1-r_{\alpha,x})^{s-1}+(1+r_{\alpha,x})^{s-1}-2r_{\alpha,x}^{s-1}\right)\Bigg{|}_{s=2-\alpha}\log(x)\]
and
\[r_{\alpha,x}\frac{d}{ds}\left(\Gamma(1-s)\cos\left(\frac{\pi}{2}s\right)\left(-(1-r_{\alpha,x})^{s-1}+(1+r_{\alpha,x})^{s-1}-2r_{\alpha,x}^{s-1}\right)x^{s-1}\right)\Bigg{|}_{s=1-\alpha}x^{\alpha}\\ =r_{\alpha,x}\frac{d}{ds}\left(\Gamma(1-s)\cos\left(\frac{\pi}{2}s\right)\left(-(1-r_{\alpha,x})^{s-1}+(1+r_{\alpha,x})^{s-1}-2r_{\alpha,x}^{s-1}\right)\right)\Bigg{|}_{s=1-\alpha}\\ +r_{\alpha,x}\Gamma(1-s)\cos\left(\frac{\pi}{2}s\right)\left(-(1-r_{\alpha,x})^{s-1}+(1+r_{\alpha,x})^{s-1}-2r_{\alpha,x}^{s-1}\right)\Bigg{|}_{s=1-\alpha}\log(x).\]
For both of these the second term exactly cancels the corresponding one in (37), which gives the result.
#### 11.2.4 Evaluation of \(\mathcal{T}_{\alpha}\) in hybrid cases
For \(\alpha\) near \(0\) this is straightforward: the weight is in this case given by \(w_{\alpha}(x)=|x|\), and we can use the same approach as described in Section 11.2.1.
For \(\alpha\) near \(-1\) we can use the same approach for \(x\in[\epsilon,\pi]\). For \(x\in[0,\epsilon]\) this doesn't work since the weight is not of the form \(w_{\alpha}(x)=|x|^{p}\), so Lemma 11.6 doesn't apply. Instead, we have to use an approach more similar to that used for \(\alpha\in I_{1}\). We write \(\mathcal{T}_{\alpha}\) as
\[\mathcal{T}_{\alpha}(x)=\frac{\log(1/x)}{\pi\log(2e+1/x)}\cdot\frac{x^{-\alpha }}{u_{\alpha}(x)}\cdot\frac{U_{\alpha}(x)}{\log(1/x)x^{-\alpha+p}}.\]
The first two factors are bounded in the same way as in the above sections. For the third one we give a bound in Appendix G.
## 12 Bounds for \(D_{\alpha}\), \(\delta_{\alpha}\) and \(n_{\alpha}\)
We are now ready to give bounds for \(D_{\alpha}\), \(\delta_{\alpha}\) and \(n_{\alpha}\). Recall that they are given by
\[n_{\alpha}=\sup_{x\in[0,\pi]}|N_{\alpha}(x)|,\quad\delta_{\alpha}=\sup_{x\in[ 0,\pi]}|F_{\alpha}(x)|,\quad D_{\alpha}=\sup_{x\in[0,\pi]}|\mathcal{T}_{ \alpha}(x)|.\]
In each case we split the interval \([0,\pi]\) into two parts, \([0,\epsilon]\) and \([\epsilon,\pi]\), with \(\epsilon\) varying between the different cases, and treat them separately. For the interval \([0,\epsilon]\) we use the asymptotic bounds for the different functions that were introduced in the previous sections. For the interval \([\epsilon,\pi]\) we evaluate the functions directly using interval arithmetic. The method we use for bounding the supremum is the one described in Section 6.1.
In most cases the subintervals in the computations are bisected at the midpoint, meaning that the interval \([\underline{x},\overline{x}]\) would be bisected into the two intervals \([\underline{x},(\underline{x}+\overline{x})/2]\) and \([(\underline{x}+\overline{x})/2,\overline{x}]\). However, when the magnitude of \(\underline{x}\) and \(\overline{x}\) are very different it can be beneficial to bisect at the geometric midpoint (see e.g. [27]), in that case we split the interval into \([\underline{x},\sqrt{\underline{x}\overline{x}}]\) and \([\sqrt{\underline{x}\overline{x}},\overline{x}]\), where we assume that \(\underline{x}>0\).
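To make the bisection procedure concrete, the following is a minimal sketch in Julia, the language of the actual code, but with plain floating-point endpoints in place of the rigorous Arb intervals; the argument `evalbound` stands in for a rigorous interval evaluation of the function and is not part of the real interface.

```julia
# Sketch of bounding sup f on [a, b] by adaptive bisection. `evalbound(a, b)`
# stands in for a rigorous interval evaluation returning a lower and an upper
# bound of f on [a, b]. The real implementation additionally uses Taylor
# expansions and a running lower bound to discard subintervals early.
function bound_sup(evalbound, a, b; tol = 1e-2, maxdepth = 30, geometric = false)
    lo, hi = evalbound(a, b)
    # Accept the enclosure if it is tight enough or the depth limit is hit.
    (hi - lo <= tol || maxdepth == 0) && return hi
    # Bisect at the arithmetic midpoint, or at the geometric midpoint when
    # the endpoints differ greatly in magnitude (this assumes a > 0).
    mid = geometric ? sqrt(a * b) : (a + b) / 2
    return max(
        bound_sup(evalbound, a, mid; tol, maxdepth = maxdepth - 1, geometric),
        bound_sup(evalbound, mid, b; tol, maxdepth = maxdepth - 1, geometric),
    )
end
```

For instance, `bound_sup((a, b) -> (min(sin(a), sin(b)) - (b - a), max(sin(a), sin(b)) + (b - a)), 0.0, Float64(π))` uses the crude Lipschitz enclosure \(\sin([a,b])\subseteq[\min-(b-a),\max+(b-a)]\) and returns an upper bound close to \(1\).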
The code [14] for the computer-assisted parts is implemented in Julia [5]. The main tool for the rigorous numerics is Arb [40] which we use through the Julia wrapper _Arblib.jl_1. Many of the basic interval arithmetic algorithms, such as isolating roots or enclosing maximum values, are implemented in a separate package, _ArbExtras.jl_2. For finding the coefficients \(\{a_{j}\}\) with \(j\geq 1\) and \(\{b_{n}\}\) of \(u_{\alpha}\) we make use of non-linear solvers from _NLsolve.jl_[48].
Footnote 1: [https://github.com/kalmarek/Arblib.jl](https://github.com/kalmarek/Arblib.jl)
Footnote 2: [https://github.com/Joel-Dahne/ArbExtras.jl](https://github.com/Joel-Dahne/ArbExtras.jl)
For \(\alpha\in I_{1}\) and \(\alpha\in I_{3}\) the computations were done on an AMD Ryzen 9 5900X processor with 32 GB of RAM using 12 threads. For \(\alpha\in I_{2}\) most of the computations were done on the Dardel HPC system at the PDC Center for High Performance Computing, KTH Royal Institute of Technology. The nodes are equipped with two AMD EPYC(tm) Zen2 2.25 GHz 64 core processors and 256 GB of RAM.
We handle the intervals \(I_{1}\), \(I_{2}\) and \(I_{3}\) separately; they all require slightly different approaches for computing the bounds.
### Bounds for \(I_{1}\)
We here give bounds of \(n_{\alpha}\), \(\delta_{\alpha}\) and \(D_{\alpha}\) for \(\alpha\in I_{1}=(-1,-1+\delta_{1})\), with \(\delta_{1}=10^{-4}\). The bounds are split into three lemmas. Recall that in this case the weight is given by \(w_{\alpha}(x)=|x|^{(1-\alpha)/2}\log(2e+1/|x|)\). We take \(u_{\alpha}\) as in (20) with \(\hat{\alpha}=-0.9997\), \(N_{\hat{\alpha},0}=1929\) and \(N_{-1,1}=16\).
**Lemma 12.1**.: _The constant \(n_{\alpha}\) satisfies the inequality \(n_{\alpha}\leq\bar{n}_{1}=1.79\) for all \(\alpha\in I_{1}\)._
Proof.: A plot of \(N_{\alpha}(x)\) on the interval \([0,\pi]\) is given in Figure 3(a). It hints at a local maximum at \(x=\pi\) but does not fully show what happens near \(x=0\). A plot closer to the origin is given in Figure 3(b), where we see that the function flattens out around \(\pi/2\) as \(x\) approaches zero, which indicates that the maximum indeed is attained at \(x=\pi\).
For the interval \([\epsilon,\pi]\) we don't compute with \(N_{\alpha}(x)\) directly but instead with \(\bar{N}_{\alpha}(x)=\frac{w_{\alpha}(x)}{2u_{-1}(x)}\) which satisfies \(N_{\alpha}(x)\leq\bar{N}_{\alpha}(x)\), as mentioned in Section 9.2.
The value of \(\epsilon\) is chosen dynamically. We don't compute an enclosure of the maximum on the interval \([0,\epsilon]\) but only prove that it is bounded by \(\bar{N}_{\alpha}(\pi)\). The value for \(\epsilon\) is determined in the following way: starting with \(\epsilon=0.45\) we compute an enclosure of \(N_{\alpha}([0,\epsilon])\); if this enclosure is bounded by \(\bar{N}_{\alpha}(\pi)\) we stop, otherwise we multiply \(\epsilon\) by \(0.8\) and try again. In practice the bound holds for \(\epsilon\approx 0.075\).
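Schematically, the \(\epsilon\)-search is a loop of the following shape; this is a simplified sketch where `asymptotic_bound` is a placeholder for an upper bound of \(N_{\alpha}\) on \([0,\epsilon]\) computed from the asymptotic expansion, not a function from the actual code.

```julia
# Sketch of the dynamic choice of ϵ: shrink ϵ geometrically until the
# asymptotic enclosure of N_α on [0, ϵ] is bounded by the target N̄_α(π).
function choose_epsilon(asymptotic_bound, target; ϵ = 0.45, factor = 0.8)
    while asymptotic_bound(ϵ) > target
        ϵ *= factor  # in practice the loop stops around ϵ ≈ 0.075
    end
    return ϵ
end
```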
For the interval \([\epsilon,\pi]\) we compute an enclosure of the maximum of \(\bar{N}_{\alpha}(x)\). This gives us
\[n_{\alpha}\leq[1.7870\pm 8.20\cdot 10^{-5}],\]
which is upper bounded by \(\bar{n}_{1}\). On the interval \([\epsilon,\pi]\) we are able to compute Taylor expansions of \(\bar{N}_{\alpha}(x)\), allowing us to use the better version of the algorithm, based on the Taylor polynomial, for enclosing the maximum. We use a Taylor expansion of degree \(0\), which is enough to pick up the monotonicity after only a few bisections in most cases.
The runtime is about \(6\) seconds, most of it for handling the interval \([\epsilon,\pi]\).
**Lemma 12.2**.: _The constant \(\delta_{\alpha}\) satisfies the inequality \(\delta_{\alpha}\leq\bar{\delta}_{1}=0.0005\) for all \(\alpha\in I_{1}\)._
Proof.: A plot of \(F_{\alpha}(x)\) on the interval \([0,\pi]\) is given in Figure 4(a); a logarithmic plot on the interval \([10^{-5},10^{-1}]\) is given in Figure 4(b).
In this case we don't try to compute a very accurate bound but instead only prove that \(\delta_{\alpha}\) is bounded by \(\bar{\delta}_{1}\).
The value for \(\epsilon\) is chosen such that \(F_{\alpha}(\epsilon)\) evaluated using the asymptotic version gives an enclosure that satisfies the bound. We find that \(\epsilon\approx 0.48\) works.
For the interval \([0,\epsilon]\) we first find \(\epsilon_{1}\) such that \(F_{\alpha}([0,\epsilon_{1}])\) directly gives an enclosure that is bounded by \(\bar{\delta}_{1}\). We get that \(\epsilon_{1}\approx 10^{-7000}\) works. For the interval \([\epsilon_{1},\epsilon]\) we compute an enclosure of the maximum by iteratively bisecting the interval. Since the endpoints are very different in size we bisect at the geometric midpoint, stopping once the enclosure is bounded by \(\bar{\delta}_{1}\). The asymptotic version of \(F_{\alpha}\) doesn't allow for evaluation with Taylor expansions; it therefore requires many bisections to get a good enough enclosure, and the maximum depth for the bisection is \(30\).
For the interval \([\epsilon,\pi]\) we also compute a bound by bisection, stopping once the bound is less than \(\bar{\delta}_{1}\). As in the previous lemma, and as mentioned in Section 10.2, we use \(u_{-1}(x)\) in the numerator of \(F_{\alpha}(x)\), which gives an upper bound. In this case we use Taylor polynomials of degree \(4\), and the maximum depth for the bisection is only \(4\).
The runtime is about \(380\) seconds.
**Lemma 12.3**.: _The constant \(D_{\alpha}\) satisfies the inequality \(D_{\alpha}\leq\bar{D}_{1}=0.938\) for all \(\alpha\in I_{1}\)._
Proof.: A plot of \(\mathcal{T}_{\alpha}(x)\) on the interval \([0,\pi]\) is given in Figure 3(c). It hints at the maximum being attained around \(x\approx 1.4\).
We take \(\epsilon=0.1\). We first bound the maximum on \([\epsilon,\pi]\), where we get that the maximum is bounded by \([0.93\pm 7.86\cdot 10^{-3}]\), which in turn is bounded by \(\bar{D}_{1}\).
For the interval \([0,\epsilon]\) we only prove that the maximum is bounded by \([0.93\pm 7.86\cdot 10^{-3}]\). This is done by first computing \(\mathcal{T}_{\alpha}([0,10^{-10}])\) and verifying that this satisfies the bound. The interval \([10^{-10},\epsilon]\) is then bisected until the bound can be verified, in this case the bisection is done at the geometric midpoint.
For the interval \([0,\epsilon]\) we do not make use of Taylor expansions but only use direct evaluation. For the interval \([\epsilon,\pi]\) we are not able to compute Taylor expansions of \(U_{\alpha}\), and hence not of \(\mathcal{T}_{\alpha}\). We do however make one optimization to get better enclosures. Recall that
\[\mathcal{T}_{\alpha}(x)\leq\frac{U_{\alpha}(x)}{\pi|w_{\alpha}(x)||u_{-1}(x )|}.\]
For a given interval \(\mathbf{x}\) both \(U_{\alpha}(\mathbf{x})\) and \(w_{\alpha}(\mathbf{x})\) are evaluated directly. To enclose \(u_{-1}(\mathbf{x})\) we first check if it is monotone by computing the derivative and checking that it is non-zero. If it is monotone we compute an enclosure by evaluating it at the endpoints of \(\mathbf{x}\).
The runtime for the computation is around \(410\) seconds, the majority for handling the interval \([\epsilon,\pi]\).
Combining the above three lemmas we get the following result.
**Lemma 12.4**.: _For all \(\alpha\in I_{1}\) the following inequality is satisfied_
\[\delta_{\alpha}<\frac{(1-D_{\alpha})^{2}}{4n_{\alpha}}.\]
Proof.: Follows immediately from the above lemmas, noticing that
\[\delta_{\alpha}\leq\bar{\delta}_{1}<\frac{(1-\bar{D}_{1})^{2}}{4\bar{n}_{1}}\leq\frac{(1-D_{\alpha})^{2}}{4n_{\alpha}}.\]
### Bounds for \(I_{2}\)
We here give bounds of \(n_{\alpha}\), \(\delta_{\alpha}\) and \(D_{\alpha}\) for \(\alpha\in I_{2}=(-0.9999,-0.0012)\).
In this case it is not possible to give uniform bounds valid on the whole interval \(I_{2}\). Instead we split \(I_{2}\) into many subintervals; for each individual subinterval bounds are computed and checked to satisfy
\[\delta_{\alpha}<\frac{(1-D_{\alpha})^{2}}{4n_{\alpha}},\]
as required for Proposition 2.2.
When the midpoint of the subinterval is less than \(-0.95\) the hybrid approach corresponding to \(\alpha\) near \(-1\) is used. In this case the weight is given by \(w_{\alpha}(x)=|x|^{p}\log(2e+1/|x|)\). If the midpoint of the subinterval is larger than \(-0.16\) then the hybrid approach corresponding to \(\alpha\) near \(0\) is used, in which case the weight is \(w_{\alpha}(x)=|x|\). For the rest of the interval the default approach is used, with the weight \(w_{\alpha}(x)=|x|^{p}\).

Figure 3: Plot of the functions \(N_{\alpha}\) and \(\mathcal{T}_{\alpha}\) for \(\alpha\in I_{1}\). The dashed green lines show the upper bounds as given in Lemmas 12.1 and 12.3.
The value of \(p\) is chosen based on the midpoint of the subinterval. If the midpoint is above \(-0.33\) we use \(p=1\) and if it is between \(-0.5\) and \(-0.33\) we use \(p=\frac{3}{4}\). If it is below \(-0.5\) we take \(p=\frac{1-\alpha}{2}\), evaluated at the midpoint of the interval. Note that in all cases we have \(-\alpha<p\leq 1\).
The interval is split into \(72000\) subintervals. The sizes of the subintervals are smaller near \(\alpha=-1\) and \(\alpha=0\). The interval \((-0.9999,-0.0012)\) is first split non-uniformly into \(32\) subintervals, each of these subintervals are then further split uniformly into a given number of subintervals, the number varying for the different parts, see Table 1. Of these \(72000\) subintervals there are a few for which the computed bounds are too bad to satisfy the required inequality. This happens in \(28\) cases, in all of these cases it is enough to bisect the subinterval once for the bounds to be good enough. The final result is therefore based on a total of \(72028\) subintervals.
Due to the large number of subintervals, explicit bounds for each case are not given here; we refer to the repository [14] where the data can be found. The bounds are visualized in Figures 5(a), 5(b) and 5(c). However, these figures only give a rough picture, since the number of subintervals is larger than the number of pixels in the picture and the subintervals are concentrated near \(\alpha=-1\) and \(\alpha=0\).
For the precise values of \(N_{\alpha,0}\) and \(N_{\alpha,1}\) we refer to the repository [14]. In general \(N_{\alpha,0}\) takes its maximum value near \(\alpha=-1\) and minimum value near \(\alpha=0\), varying from \(5799\) to \(2\). For \(N_{\alpha,1}\) it varies between \(0\) and \(16\).
The approach for bounding \(n_{\alpha}\), \(\delta_{\alpha}\) and \(D_{\alpha}\) is given in the following three lemmas. The total runtime for the calculations on Dardel were around \(12000\) core hours (corresponding to a wall time of around \(48\) hours using \(256\) cores).
**Lemma 12.5**.: _For all \(\alpha\in I_{2}\) the value \(n_{\alpha}\) is bounded. A sketch of the bound is given in Figure 5(a); for the precise bound for each value of \(\alpha\) we refer to the repository [14]._
Proof.: As in Lemma 12.1, the maximum of \(N_{\alpha}\) is in practice attained at \(x=\pi\), which makes it easy to compute an accurate bound. As seen in the plot of \(n_{\alpha}\) in Figure 5(a), it in general varies smoothly with \(\alpha\), the exception being the points where the choice of weight is changed. We can also note that the bounds are well-behaved near \(\alpha=-1\) and \(\alpha=0\).
When computing the bounds the value of \(\epsilon\) is not fixed; instead it is determined dynamically. Starting with \(\epsilon=3\) we check if the asymptotic version of \(N_{\alpha}\) evaluated at this point is less than \(N_{\alpha}(\pi)\). If this is the case we take this value of \(\epsilon\); otherwise we try with a slightly smaller \(\epsilon\), stopping once we find one that works. On the interval \([0,\epsilon]\) we then check that \(N_{\alpha}\) is bounded by \(N_{\alpha}(\pi)\).
For the interval \([\epsilon,\pi]\) we compute an enclosure of the maximum.
Figure 4: Plot of the function \(F_{\alpha}\) for \(\alpha\in I_{1}\). The function is plotted on the intervals \([0,\pi]\) and \([10^{-5},10^{-1}]\). The dashed green line shows the upper bound as given in Lemma 12.2. The dotted red line shows \(\frac{(1-\bar{D}_{1})^{2}}{4\bar{n}_{1}}\), which is the value we want \(\delta_{\alpha}\) to be smaller than.
In both cases we use Taylor expansions of degree \(0\) when computing the bounds.
For the hybrid case when \(\alpha\) is close to \(-1\) the approach is exactly the same as above. For the hybrid case when \(\alpha\) is close to \(0\) we use the same approach as in Lemma 12.9 for computing a bound.
The computational time varies with \(\alpha\), on average the computations took \(4.05\) seconds for each subinterval with a minimum and maximum of \(0.018\) and \(144\) seconds respectively.
**Lemma 12.6**.: _For all \(\alpha\in I_{2}\) the value \(\delta_{\alpha}\) is bounded. A sketch of the bound is given in Figure 5(b); for the precise bound for each value of \(\alpha\) we refer to the repository [14]._
Proof.: The value of \(\delta_{\alpha}\) is heavily dependent on the precise approximation used. Very small changes in the approximation can give relatively large changes in \(\delta_{\alpha}\). This is clearly seen in Figure 5(b), where the bound varies a lot from subinterval to subinterval. The patches where the bound is more or less continuous correspond to certain values of \(N_{\alpha,0}\) in the approximation; when the value changes from one to the other it gives a large change in the defect. There are also many isolated points where the bound is very different from the nearby points; this is mostly due to numerical instabilities when computing the coefficients for the approximation.
As in the previous lemma, the value of \(\epsilon\) is not fixed but chosen dynamically. The precise choice is based on comparing the asymptotic and the non-asymptotic versions at different values and seeing where the asymptotic one starts to give tighter enclosures. The bounds on \([0,\epsilon]\) and \([\epsilon,\pi]\) are then computed separately, and their maximum is returned.
In both cases we use Taylor expansions; the degree is tuned depending on \(\alpha\) and is between \(2\) and \(6\).
For the hybrid case when \(\alpha\) is close to \(-1\) the approach is exactly the same as above. For the hybrid case when \(\alpha\) is close to \(0\) some modifications need to be made. For the interval \([0,\epsilon]\) we use Taylor models in \(\alpha\) to compute enclosures for the coefficients in the expansion at \(x=0\). Once we have the expansion we are able to compute Taylor expansions in \(x\) of it, where we use degree \(8\). For the interval \([\epsilon,\pi]\) we are not able to compute Taylor expansions in \(x\) and have to fall back to using direct enclosures.
The computational time varies with \(\alpha\), on average the computations took \(37\) seconds for each subinterval with a minimum and maximum of \(0.11\) and \(1084\) seconds respectively.
\begin{table}
\begin{tabular}{c|c|c|c}
Interval & Subintervals & Interval & Subintervals \\ \hline
\([-0.9999,-0.999875]\) & \(500\) & \([-0.7,-0.6]\) & \(1000\) \\
\([-0.999875,-0.99985]\) & \(500\) & \([-0.6,-0.5]\) & \(1500\) \\
\([-0.99985,-0.9998]\) & \(1000\) & \([-0.5,-0.45]\) & \(1000\) \\
\([-0.9998,-0.9996]\) & \(1000\) & \([-0.45,-0.41]\) & \(1000\) \\
\([-0.9996,-0.9993]\) & \(1000\) & \([-0.41,-0.37]\) & \(1000\) \\
\([-0.9993,-0.999]\) & \(1000\) & \([-0.37,-0.33]\) & \(1000\) \\
\([-0.999,-0.998]\) & \(1000\) & \([-0.33,-0.2]\) & \(10000\) \\
\([-0.998,-0.996]\) & \(1000\) & \([-0.2,-0.16]\) & \(3000\) \\
\([-0.996,-0.993]\) & \(1000\) & \([-0.16,-0.1]\) & \(500\) \\
\([-0.993,-0.99]\) & \(1000\) & \([-0.1,-0.05]\) & \(500\) \\
\([-0.99,-0.95]\) & \(2000\) & \([-0.05,-0.025]\) & \(500\) \\
\([-0.95,-0.935]\) & \(5000\) & \([-0.025,-0.0125]\) & \(1000\) \\
\([-0.935,-0.9]\) & \(8000\) & \([-0.0125,-0.00625]\) & \(2000\) \\
\([-0.9,-0.85]\) & \(4000\) & \([-0.00625,-0.003125]\) & \(4000\) \\
\([-0.85,-0.8]\) & \(2000\) & \([-0.003125,-0.0015625]\) & \(8000\) \\
\([-0.8,-0.7]\) & \(2000\) & \([-0.0015625,-0.0012]\) & \(4000\) \\
\end{tabular}
\end{table}
Table 1: The interval \([-0.9999,-0.0012]\) is split into these \(32\) subintervals, which are then further split uniformly into the given number of subintervals. The first \(11\) use the hybrid approach corresponding to \(\alpha\) near \(-1\) and the last \(8\) use the hybrid approach corresponding to \(\alpha\) near \(0\); the ones in between use the default approach.
**Lemma 12.7**.: _For all \(\alpha\in I_{2}\) the value \(D_{\alpha}\) is bounded by a value smaller than \(1\). A sketch of the bound is given in Figure 5(c); for the precise bound for each value of \(\alpha\) we refer to the repository [14]._
Proof.: The computation of the bound of \(D_{\alpha}\) is the most time-consuming part. For that reason we do not attempt to compute a very accurate upper bound but only as much as we need for Lemma 12.8 to go through. We need \(D_{\alpha}\) to satisfy the inequality
\[D_{\alpha}<1-2\sqrt{n_{\alpha}\delta_{\alpha}}.\]
Taking the upper bounds of \(n_{\alpha}\) and \(\delta_{\alpha}\) from the previous two lemmas we compute the value that \(D_{\alpha}\) needs to be bounded by. To give a little headroom we subtract \(2^{-26}\) to get the bound that we use. This bound is seen in Figure 5(c), and we prove that \(D_{\alpha}\) is bounded by this value for all the subintervals.
To get an understanding of how this bound compares to the actual value of \(D_{\alpha}\) we also compute a non-rigorous approximation of it. This approximation is computed by evaluating \(\mathcal{T}_{\alpha}\) on a few points in the interval \([0,\pi]\) and taking the maximum, but with no control of what happens in between these points. This estimate can also be seen in Figure 5(c). From this estimate we can also compute an estimate of \(\frac{(1-D_{\alpha})^{2}}{4n_{\alpha}}\), which is the value we want \(\delta_{\alpha}\) to be less than; this estimate is seen in Figure 5(b).
As in the previous two lemmas, the value of \(\epsilon\) is not fixed but chosen dynamically. It is determined by starting at \(\epsilon=1\) and then taking smaller and smaller values until the asymptotic version of \(\mathcal{T}_{\alpha}\) evaluated at \(\epsilon\) satisfies the prescribed bound. We then prove that the bound holds on both \([0,\epsilon]\) and \([\epsilon,\pi]\).
For the interval \([0,\epsilon]\) we do not make use of Taylor expansions but only use direct evaluation. For the interval \([\epsilon,\pi]\) we are not able to compute Taylor expansions of \(U_{\alpha}\), and hence not of \(\mathcal{T}_{\alpha}\). We do however make one optimization to get better enclosures. Recall that
\[\mathcal{T}_{\alpha}(x)=\frac{U_{\alpha}(x)}{\pi|w_{\alpha}(x)||u_{\alpha}(x)|}.\]
For a given interval \(\mathbf{x}\) both \(U_{\alpha}(\mathbf{x})\) and \(w_{\alpha}(\mathbf{x})\) are evaluated directly. To enclose \(u_{\alpha}(\mathbf{x})\) we first check if it is monotone by computing the derivative and checking that it is non-zero. If it is monotone we compute an enclosure by evaluating it at the endpoints of \(\mathbf{x}\).
Both the hybrid cases use exactly the same approach.
The computational time varies with \(\alpha\); on average the computations took \(72\) seconds for each subinterval, with a minimum and maximum of \(0.68\) and \(1075\) seconds respectively.
**Lemma 12.8**.: _For all \(\alpha\in I_{2}\) the following inequality is satisfied_
\[\delta_{\alpha}<\frac{(1-D_{\alpha})^{2}}{4n_{\alpha}}.\]
Proof.: Using the computed upper bounds from Lemmas 12.5, 12.6 and 12.7 it is straightforward to check that the inequality
\[\delta_{\alpha}<\frac{(1-D_{\alpha})^{2}}{4n_{\alpha}}\]
holds for each of the 72028 subintervals. Since the union of these subintervals covers all of \(I_{2}\), it follows that the inequality holds for all \(\alpha\in I_{2}\).
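Schematically, this verification has the following shape; a sketch with a placeholder `bounds(a, b)` returning rigorous upper bounds \((n,\delta,D)\) valid for \(\alpha\in[a,b]\), not the actual interface of the code.

```julia
# Sketch of the check over subintervals of I₂: each subinterval must satisfy
# δ < (1 - D)² / (4n); on failure it is bisected once, mirroring the 28
# subintervals mentioned above that needed an extra bisection.
passes(n, δ, D) = D < 1 && δ < (1 - D)^2 / (4n)

function verify_subintervals(bounds, endpoints)
    for i in 1:length(endpoints)-1
        a, b = endpoints[i], endpoints[i+1]
        passes(bounds(a, b)...) && continue
        m = (a + b) / 2  # one extra bisection
        passes(bounds(a, m)...) && passes(bounds(m, b)...) ||
            error("inequality could not be verified on [$a, $b]")
    end
    return true
end
```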
### Bounds for \(I_{3}\)
For this interval the bounds are given in a slightly different form. As \(\alpha\) gets close to \(0\) the value of \(\delta_{\alpha}\) tends to zero and \(D_{\alpha}\) tends to one. To be able to say something about the inequality
\[\delta_{\alpha}<\frac{(1-D_{\alpha})^{2}}{4n_{\alpha}}\]
in Proposition 2.2 in the limit, we therefore need information about the rates at which they go to zero and one respectively. No such rate information is needed for \(n_{\alpha}\), for which we have the following lemma:
**Lemma 12.9**.: _The constant \(n_{\alpha}\) satisfies the inequality \(n_{\alpha}\leq\bar{n}_{3}=1.58\) for all \(\alpha\in I_{3}\)._
Proof.: A plot of \(N_{\alpha}(x)\) on the interval \([0,\pi]\) is given in Figure 6(a), clearly hinting at the maximum being attained at \(x=\pi\). We use the asymptotic expansion on the entire interval, i.e. \(\epsilon=\pi\). The computation of the enclosure takes less than a second and gives us
\[n_{\alpha}\in[1.57\pm 2.45\cdot 10^{-3}],\]
which is upper bounded by \(\bar{n}_{3}\).
For \(D_{\alpha}\) and \(\delta_{\alpha}\) we give bounds which are functions of \(\alpha\).
**Lemma 12.10**.: _The constant \(\delta_{\alpha}\) satisfies the inequality \(\delta_{\alpha}\leq\bar{\Delta}_{\delta}\cdot\alpha^{2}\), where \(\bar{\Delta}_{\delta}=0.0077\), for all \(\alpha\in I_{3}\)._
Proof.: For each \(x\in[0,\pi]\) we can compute a Taylor model \(M_{F}(x)\) of degree \(1\) in \(\alpha\) of \(F_{\alpha}(x)\) centered at \(\alpha=0\) and valid for \(\alpha\in I_{3}\). This means that \(M_{F}(x)\) satisfies
\[F_{\alpha}(x)\in p_{M_{F}(x)}(\alpha)+\Delta_{M_{F}(x)}\alpha^{2}\]
for all \(\alpha\in I_{3}\), where \(p_{M_{F}(x)}\) is the degree \(1\) polynomial associated with the Taylor model and \(\Delta_{M_{F}(x)}\) is the remainder. From Lemma 10.1 we get that \(p_{M_{F}(x)}=0\) for all \(x\in[0,\pi]\), hence
\[|F_{\alpha}(x)|\in|\Delta_{M_{F}(x)}|\alpha^{2}.\]
This gives us
\[\delta_{\alpha}=\sup_{x\in[0,\pi]}|F_{\alpha}(x)|\in\left(\sup_{x\in[0,\pi]}| \Delta_{M_{F}(x)}|\right)\alpha^{2}=:\Delta_{\delta}\alpha^{2},\]
where by the supremum of intervals we mean the interval whose upper endpoint is the supremum of all upper endpoints and whose lower endpoint is the supremum of all lower endpoints. A plot of \(\Delta_{M_{F}(x)}\) on the interval \([0,\pi]\) is given in Figure 6(b); we are interested in enclosing \(\Delta_{\delta}\).
We use the asymptotic expansion on the entire interval, i.e. \(\epsilon=\pi\). Enclosing the supremum we get the interval
\[\Delta_{\delta}\subseteq[0,7.65\cdot 10^{-3}],\]
for which an upper bound is given by \(\bar{\Delta}_{\delta}\). The runtime is around \(3\) seconds.
**Lemma 12.11**.: _The constant \(D_{\alpha}\) satisfies the inequality \(D_{\alpha}\leq 1+\bar{\Delta}_{D}\cdot\alpha\), where \(\bar{\Delta}_{D}=0.226\), for all \(\alpha\in I_{3}\)._
Proof.: The proof is similar to that of the previous lemma. For each \(x\in[0,\pi]\) we can compute a Taylor model \(M_{\mathcal{T}}(x)\) of degree \(0\) in \(\alpha\) of \(\mathcal{T}_{\alpha}(x)\) centered at \(\alpha=0\) and valid for \(\alpha\in I_{3}\). This means that \(M_{\mathcal{T}}(x)\) satisfies
\[\mathcal{T}_{\alpha}(x)\in p_{M_{\mathcal{T}}(x)}(\alpha)+\Delta_{M_{\mathcal{ T}}(x)}\alpha\]
for all \(\alpha\in I_{3}\), where \(p_{M_{\mathcal{T}}(x)}\) is the degree \(0\) polynomial associated with the Taylor model and \(\Delta_{M_{\mathcal{T}}(x)}\) is the remainder. From Lemma 11.8 we get that \(p_{M_{\mathcal{T}}(x)}=1\) for all \(x\in[0,\pi]\), hence
\[\mathcal{T}_{\alpha}(x)\in 1+\Delta_{M_{\mathcal{T}}(x)}\alpha.\]
This gives us
\[D_{\alpha}=\sup_{x\in[0,\pi]}|\mathcal{T}_{\alpha}(x)|\in 1+\left(\inf_{x\in[0, \pi]}\Delta_{M_{\mathcal{T}}(x)}\right)\alpha=:1+\Delta_{D}\alpha,\]
where we take the infimum since \(\alpha\) is negative. A plot of \(\Delta_{M_{\mathcal{T}}(x)}\) is given in Figure 6(c); we are interested in enclosing \(\Delta_{D}\).
We take \(\epsilon=1\). From Figure 6(c) it seems like the infimum is attained at \(x=0\); we therefore compute the infimum on the interval \([0,\epsilon]\) first and then only prove that the value on \([\epsilon,\pi]\) is lower bounded by this. Enclosing the infimum on \([0,\epsilon]\) we get the enclosure
\[\Delta_{D}\subset[0.2413\pm 0.0151],\]
which is then checked to be a lower bound for \([\epsilon,\pi]\). This computed enclosure is then lower bounded by \(\bar{\Delta}_{D}\). The runtime is around \(7\) seconds.
Combining the above we get the following result.
**Lemma 12.12**.: _For all \(\alpha\in I_{3}\) the following inequality is satisfied_
\[\delta_{\alpha}<\frac{(1-D_{\alpha})^{2}}{4n_{\alpha}}.\]
Proof.: From Lemmas 12.9 and 12.11 we get
\[\frac{(1-D_{\alpha})^{2}}{4n_{\alpha}}\geq\frac{\bar{\Delta}_{D}^{2}}{4\bar{n }_{3}}\cdot\alpha^{2}\]
for all \(\alpha\in I_{3}\). Combining this with Lemma 12.10 means we only have to check the inequality
\[\bar{\Delta}_{\delta}<\frac{\bar{\Delta}_{D}^{2}}{4\bar{n}_{3}},\]
which indeed holds, since \(\bar{\Delta}_{D}^{2}/(4\bar{n}_{3})=0.226^{2}/6.32\approx 0.00808>0.0077=\bar{\Delta}_{\delta}\).
## 13 Proof of Theorem 1.1
We are now ready to give the proof of Theorem 1.1.
Proof of Theorem 1.1.: Consider the operator \(G_{\alpha}\) from (11) given by
\[G_{\alpha}[v]=(I-T_{\alpha})^{-1}(-F_{\alpha}-N_{\alpha}v^{2}).\]
Combining Lemmas 12.3, 12.7 and 12.11 we have \(\|T_{\alpha}\|<1\) for all \(\alpha\in(-1,0)\), so the inverse of the operator \(I-T_{\alpha}\) is well-defined. Combining Lemmas 12.4, 12.8 and 12.12 gives us that the inequality
\[\delta_{\alpha}<\frac{(1-D_{\alpha})^{2}}{4n_{\alpha}}\]
holds for all \(\alpha\in(-1,0)\). This, together with Proposition 2.2 and the Banach fixed-point theorem, proves that for
\[\epsilon_{\alpha}=\frac{1-D_{\alpha}-\sqrt{(1-D_{\alpha})^{2}-4\delta_{\alpha}n_ {\alpha}}}{2n_{\alpha}}\]
the operator \(G_{\alpha}\) has a unique fixed-point \(v_{\alpha}\) in \(X_{\epsilon_{\alpha}}\subseteq L^{\infty}(\mathbb{T})\).
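As a concrete illustration of the quantities involved, the radius \(\epsilon_{\alpha}\) can be evaluated directly from the computed bounds. The following is a minimal floating-point sketch (the function name is ours, not from the code), shown with the \(I_{1}\) constants of Lemmas 12.1, 12.2 and 12.3:

```julia
# Sketch: the fixed-point radius ϵ_α of Proposition 2.2, computed from upper
# bounds of n_α, δ_α and D_α. The discriminant being positive is exactly the
# inequality δ_α < (1 - D_α)² / (4 n_α).
function fixed_point_radius(n, δ, D)
    disc = (1 - D)^2 - 4 * δ * n
    disc > 0 || error("contraction inequality fails")
    return (1 - D - sqrt(disc)) / (2n)
end

fixed_point_radius(1.79, 0.0005, 0.938)  # ≈ 0.0128, using n̄₁, δ̄₁, D̄₁
```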
By the construction of the operator \(G_{\alpha}\) this means that the function
\[u(x)=u_{\alpha}(x)+w_{\alpha}(x)v_{\alpha}(x)\]
solves (5), given by
\[\frac{1}{2}u^{2}=-\mathcal{H}^{\alpha}[u].\]
For any wave speed \(c\in\mathbb{R}\) we then have that the function
\[\varphi(x)=c-u(x)\]
is a traveling wave solution to (1). This proves the existence of a \(2\pi\)-periodic highest cusped traveling wave.
The asymptotic behavior of \(u_{\alpha}\) close to \(x=0\) is given by
\[u_{\alpha}(x)=\nu_{\alpha}|x|^{-\alpha}+\mathcal{O}(|x|^{-\alpha+p_{\alpha}}),\]
with \(\nu_{\alpha}\) as in Lemma 4.1. The precise value for \(p_{\alpha}\) varies depending on the type of construction used, but in all cases it satisfies \(-\alpha+p_{\alpha}>1\). Furthermore the weight satisfies
\[w_{\alpha}(x)=\mathcal{O}(|x|^{p})\]
for some \(p\) with \(-\alpha<p\leq 1\). Hence
\[u(x)=u_{\alpha}(x)+w_{\alpha}(x)v_{\alpha}(x)=\nu_{\alpha}|x|^{-\alpha}+ \mathcal{O}(|x|^{p})\]
and
\[\varphi(x)=c-\nu_{\alpha}|x|^{-\alpha}+\mathcal{O}(|x|^{p}),\]
as we wanted to show.
Figure 6: Plots for \(\alpha\in I_{3}\). The leftmost figure shows a plot of \(N_{\alpha}(x)\). The two other figures show enclosures of \(\Delta_{M_{F}(x)}\) and \(\Delta_{M_{\mathcal{T}}(x)}\), the remainder terms of the Taylor models for \(F_{\alpha}(x)\) and \(\mathcal{T}_{\alpha}(x)\) respectively. The dashed green lines show the upper/lower bounds given in Lemmas 12.9, 12.10 and 12.11. The dotted red line in the middle figure shows \(\frac{\bar{\Delta}_{D}^{2}}{4\bar{n}_{3}}\), the value that \(\Delta_{\delta}\) needs to be smaller than.
## Appendix A Removable singularities
In several cases we have to compute enclosures of functions with removable singularities. For example the function
\[\Gamma(1-s)\cos(\pi(1-s)/2)\]
comes up when computing \(C_{s}\) through equation (39) and has a removable singularity whenever \(s\) is a positive even integer. For this we follow the same approach as in [15, Appendix A], where more details are given. The main tool is the following lemma:
**Lemma A.1**.: _Let \(m\in\mathbb{Z}_{\geq 0}\) and let \(I\) be an interval containing zero. Consider a function \(f(x)\) with a zero of order \(n\) at \(x=0\) and such that \(f^{(m+n)}(x)\) is absolutely continuous on \(I\). Then, writing \(f_{k}(x)=f^{(k)}(x)/k!\), for all \(x\in I\) we have_
\[\frac{f(x)}{x^{n}}=\sum_{k=0}^{m}f_{k+n}(0)x^{k}+f_{m+n+1}(\xi)x^{m+1}\]
_for some \(\xi\) between \(0\) and \(x\). Furthermore, if \(f^{(m+n+p)}(x)\) is absolutely continuous for \(p\in\mathbb{Z}_{\geq 0}\) we have_
\[\frac{d^{p}}{dx^{p}}\frac{f(x)}{x^{n}}=\sum_{k=0}^{m}\frac{(k+p)!}{k!}f_{k+n+p }(0)x^{k}+\frac{(m+p+1)!}{(m+1)!}f_{m+n+p+1}(\xi)x^{m+1}\]
_for some \(\xi\) between \(0\) and \(x\)._
We also make use of the first statement of the lemma for \(x\in\mathbb{C}\); the proof is the same as for \(x\in\mathbb{R}\).
## Appendix B Taylor models
An important tool for handling the limit \(\alpha\to 0\) is the use of Taylor models for enclosing \(\delta_{\alpha}\) and \(D_{\alpha}\). We here give a brief introduction to Taylor models, for a more thorough introduction we refer to [44].
With a Taylor model we mean what in [44] is referred to as a _Taylor model with relative error_. We use the following definition; compare with [44, Definition 2.3.2].
**Definition B.1**.: _A Taylor model \(M=(p,\Delta)\) of degree \(n\) for a function \(f\) on an interval \(I\) centered at a point \(x_{0}\in I\) is a polynomial \(p\) of degree \(n\) together with an interval \(\Delta\), satisfying that for all \(x\in I\) there is \(\delta\in\Delta\) such that_
\[f(x)-p(x-x_{0})=\delta(x-x_{0})^{n+1}.\]
For an \(n+1\) times differentiable function \(f\) the polynomial \(p\) is the Taylor polynomial of degree \(n\) of \(f\) centered at \(x_{0}\)[44, Lemma 2.3.3]. A Taylor model is thus given by a truncated Taylor expansion plus a bound on the remainder term valid on some interval \(I\). From Taylor's theorem we see that we can take \(\Delta\) to be an enclosure of \(\frac{f^{(n+1)}(x)}{(n+1)!}\) on the interval \(I\).
We can perform arithmetic on Taylor models. Given two functions \(f\) and \(g\) with corresponding Taylor models \(M_{f}=(p_{f},\Delta_{f})\) and \(M_{g}=(p_{g},\Delta_{g})\) we can compute a Taylor model of \(f+g\) as \(M_{f+g}=(p_{f}+p_{g},\Delta_{f}+\Delta_{g})\), and similarly for \(f-g\). With slightly more work we can compute a Taylor model of \(f\cdot g\), see [44, Algorithm 2.3.6], as well as of \(f/g\), see [44, Algorithm 2.3.12]. The division can directly handle removable singularities for \(f/g\): if there is a removable singularity of order \(k\) then we need Taylor models of degree \(n+k\) for \(f\) and \(g\) to get a Taylor model of degree \(n\) for \(f/g\). We can also compose Taylor models with arbitrary functions; given a Taylor model \(M_{f}\) of \(f\) and a function \(g\) we can compute a Taylor model \(M_{g\circ f}\) of \(g\circ f\), see [44, Algorithm 2.3.8].
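As an illustration, a minimal version of the data structure and its addition could look as follows in Julia. This is a sketch with Float64 endpoints and without the directed rounding that the actual Arb-based implementation provides; multiplication and composition are omitted since they additionally require range bounds of the polynomials on \(I\).

```julia
# Minimal sketch of a Taylor model with relative error: coefficients
# p[1], ..., p[n+1] of a degree-n polynomial in (x - x₀) together with a
# remainder interval Δ such that f(x) - p(x - x₀) = δ (x - x₀)^(n+1) for
# some δ ∈ Δ and all x ∈ I. Float64 stand-in for the rigorous Arb version.
struct TaylorModel
    p::Vector{Float64}          # polynomial coefficients
    Δ::Tuple{Float64,Float64}   # enclosure of the remainder term
end

# Addition of two Taylor models with the same degree, center and interval:
# add the polynomials coefficient-wise and the remainders endpoint-wise.
function Base.:+(M::TaylorModel, N::TaylorModel)
    @assert length(M.p) == length(N.p)
    TaylorModel(M.p .+ N.p, (M.Δ[1] + N.Δ[1], M.Δ[2] + N.Δ[2]))
end
```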
### Taylor models in \(\alpha\) of expansions in \(x\)
For both \(F_{\alpha}(x)\) and \(\mathcal{T}_{\alpha}(x)\) we need to compute expansions in \(x\) to evaluate them near \(x=0\). For \(\alpha\in I_{3}\) we then need to compute Taylor models in \(\alpha\) from these expansions, see Sections 10.3 and 11.2.3. This is done by computing Taylor models of the coefficients in the expansion in \(x\).
For example, consider the problem of computing an expansion in \(x\) of \(u_{\alpha}(x)\), used e.g. when computing \(\frac{x^{-\alpha}}{u_{\alpha}(x)}\). For \(\alpha\in I_{3}\) we have from Lemma 4.2, and using that \(N_{\alpha,1}=0\) in this case,
\[u_{\alpha}(x)=\sum_{j=0}^{N_{\alpha,0}}a_{\alpha,j}^{0}|x|^{-\alpha+jp_{\alpha }}+\sum_{m=1}^{\infty}\frac{(-1)^{m}}{(2m)!}\left(\sum_{j=1}^{N_{\alpha,0}}a_{ \alpha,j}\zeta(1-\alpha+jp_{\alpha}-2m)\right)x^{2m}.\]
It is straightforward to compute Taylor models of the coefficients \(a_{\alpha,j}^{0}\) and
\[\frac{(-1)^{m}}{(2m)!}\left(\sum_{j=1}^{N_{\alpha,0}}a_{\alpha,j}\zeta(1- \alpha+jp_{\alpha}-2m)\right).\]
for \(m<M\). What remains is computing a Taylor model bounding the tail. To compute a Taylor model enclosing a sum of the form
\[\sum_{m=M}^{\infty}c_{\alpha,m}x^{2m}\]
it is enough to have a method for enclosing
\[\frac{d^{k}}{d\alpha^{k}}\sum_{m=M}^{\infty}c_{\alpha,m}x^{2m}\]
for \(k\geq 0\). For the Clausen functions this is given in Lemma C.6.
### Taylor model of \(p_{\alpha}\)
The above is enough to compute Taylor models of functions in the paper for which we have explicit expressions. For \(p_{\alpha}\) from (17) we only have an implicit equation defining it, and we have to take a slightly different approach.
Consider the equation
\[f(x,y)=0.\]
We want to compute a Taylor model of \(y=y(x)\). Consider a Taylor model of degree \(n\), centered at \(x_{0}\), which should be valid on the interval \(I\). As mentioned above it is enough to compute the Taylor polynomial of \(y(x)\) at \(x=x_{0}\) of degree \(n\) and enclose \(\frac{y^{(n+1)}(x)}{(n+1)!}\) on \(I\).
Let \(y_{0}\) be such that \(f(x_{0},y_{0})=0\). We first consider the case when \(f_{y}(x_{0},y_{0})\neq 0\). If \(f\) is sufficiently smooth we can use the implicit function theorem to compute a Taylor polynomial of \(y\) at \(x_{0}\) to degree \(n\). Expanding \(f(x,y(x))\) at \(x_{0}\) we get
\[f(x_{0},y_{0})+(f_{x}(x_{0},y_{0})+y^{\prime}(x_{0})f_{y}(x_{0},y_{0}))(x-x_{0})\\ +\frac{1}{2}(f_{xx}(x_{0},y_{0})+2y^{\prime}(x_{0})f_{xy}(x_{0},y_{0})+y^{\prime}(x_{0})^{2}f_{yy}(x_{0},y_{0})+y^{\prime\prime}(x_{0})f_{y}(x_{0},y_{0}))(x-x_{0})^{2}+\cdots=0.\]
Solving for \(y^{\prime}(x_{0})\), \(y^{\prime\prime}(x_{0})\), etc., we get
\[y^{\prime}(x_{0}) =-\frac{f_{x}(x_{0},y(x_{0}))}{f_{y}(x_{0},y(x_{0}))},\] \[y^{\prime\prime}(x_{0}) =-\frac{f_{xx}(x_{0},y_{0})+2y^{\prime}(x_{0})f_{xy}(x_{0},y_{0})+y^{\prime}(x_{0})^{2}f_{yy}(x_{0},y_{0})}{f_{y}(x_{0},y(x_{0}))}\]
and similarly for higher orders.
While it is possible to use the above formulas for the derivatives to enclose \(\frac{y^{(n+1)}(x)}{(n+1)!}\) on \(I\), this gets very complicated for \(p_{\alpha}\) at \(\alpha=0\) due to the large number of removable singularities to handle. Instead, we take a slightly different, more direct, approach for computing \(\Delta\). We want to find \(\Delta\) such that for all \(x\in I\) there is \(\delta\in\Delta\) such that
\[f(x,p(x-x_{0})+\delta(x-x_{0})^{n+1})=0.\]
If we let
\[g(x,\delta)=f(x,p(x-x_{0})+\delta(x-x_{0})^{n+1})\]
we want \(\Delta\) to be an enclosure of a root in \(\delta\) of \(g\) for all \(x\in I\). To make the derivative in \(\delta\) non-zero at \(x=x_{0}\) we normalize \(g\) as
\[h(x,\delta)=\frac{g(x,\delta)}{(x-x_{0})^{n+1}}.\]
Given a guess \(\Delta=[\underline{\Delta},\overline{\Delta}]\), to prove that this is an enclosure of a root we only need to verify that \(h(x,\underline{\Delta})\) and \(h(x,\overline{\Delta})\) both have constant signs for all \(x\in I\) and that the two signs are different. The guess for \(\Delta\) can be given using non-rigorous means, for example by looking at the error at the endpoints of \(I\). Only the verification of the signs at the endpoints of \(\Delta\) needs to be done rigorously.
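A sketch of this verification step, with `signbound(δ, a, b)` standing in for a routine that rigorously establishes a constant sign of \(h(\cdot,\delta)\) on \([a,b]\) (returning \(+1\) or \(-1\), or \(0\) if no constant sign could be established), e.g. by interval evaluation and bisection:

```julia
# Sketch of verifying that Δ = [Δlo, Δhi] encloses the root of g in δ for
# all x ∈ [a, b]: h(·, Δlo) and h(·, Δhi) must each have a constant sign on
# [a, b], and the two signs must differ.
function verify_remainder(signbound, Δlo, Δhi, a, b)
    slo = signbound(Δlo, a, b)
    shi = signbound(Δhi, a, b)
    return slo != 0 && shi != 0 && slo != shi
end
```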
For \(p_{\alpha}\) we get from (17) the function
\[f(\alpha,p)=\frac{\Gamma(2\alpha-p)\cos\left(\frac{\pi}{2}(2\alpha-p)\right)} {\Gamma(\alpha-p)\cos\left(\frac{\pi}{2}(\alpha-p)\right)}-\frac{2\Gamma(2 \alpha)\cos(\pi\alpha)}{\Gamma(\alpha)\cos\left(\frac{\pi}{2}\alpha\right)}.\]
In this case, at \(\alpha=0\) we have \(f_{p}(0,p_{0})=0\), and we cannot apply the above approach directly. For the coefficients of the polynomial of the Taylor model we expand
\[f(\alpha,p_{0}+p_{1}\alpha+p_{2}\alpha^{2})\]
at \(\alpha=0\). The constant coefficient is exactly zero, and we then solve for \(p_{0}\), \(p_{1}\) and \(p_{2}\) making the next three coefficients zero. For the remainder term of the Taylor model we instead consider the function \(\frac{f(\alpha,p)}{\alpha}\), for which the above approach does work.
## Appendix C Computing Clausen functions
To be able to compute bounds of \(D_{\alpha}\), \(\delta_{\alpha}\) and \(n_{\alpha}\) it is critical that we can compute accurate enclosures of \(C_{s}(x)\) and \(S_{s}(x)\), including expansions in the argument and derivatives in the parameter. For this we follow the same approach as in [15, Appendix B] with a few additions and improvements. We here give a shortened version of the approach while highlighting the additions and improvements.
We start by going through how to compute \(C_{s}(x)\) and \(S_{s}(x)\) for \(s,x\in\mathbb{R}\). Since both \(C_{s}(x)\) and \(S_{s}(x)\) are \(2\pi\)-periodic we can reduce it to \(x=0\) or \(0<x<2\pi\).
For \(x=0\) and \(s>1\) we have \(C_{s}(0)=\zeta(s)\) and \(S_{s}(0)=0\). For \(s\leq 1\) both functions typically diverge at \(x=0\).
For \(0<x<2\pi\) we can compute the Clausen functions by going through the polylog function,
\[C_{s}(x)=\operatorname{Re}\left(\operatorname{Li}_{s}(e^{ix})\right),\quad S_ {s}(x)=\operatorname{Im}\left(\operatorname{Li}_{s}(e^{ix})\right).\]
However it is computationally beneficial to instead go through the periodic zeta function [18, Sec. 25.13],
\[F(x,s):=\operatorname{Li}_{s}(e^{2\pi ix})=\sum_{n=1}^{\infty}\frac{e^{2\pi inx }}{n^{s}},\]
for which we have
\[C_{s}(x)=\operatorname{Re}F\left(\frac{x}{2\pi},s\right),\quad S_{s}(x)= \operatorname{Im}F\left(\frac{x}{2\pi},s\right).\]
Using [18, Eq. 25.13.2] we get, for \(0<x<2\pi\),
\[C_{s}(x) =\frac{\Gamma(1-s)}{(2\pi)^{1-s}}\cos(\pi(1-s)/2)\left(\zeta\left(1 -s,\frac{x}{2\pi}\right)+\zeta\left(1-s,1-\frac{x}{2\pi}\right)\right), \tag{39}\] \[S_{s}(x) =\frac{\Gamma(1-s)}{(2\pi)^{1-s}}\sin(\pi(1-s)/2)\left(\zeta \left(1-s,\frac{x}{2\pi}\right)-\zeta\left(1-s,1-\frac{x}{2\pi}\right)\right). \tag{40}\]
This formulation works well as long as \(s\) is not a non-negative integer. For non-negative integers we have to handle some removable singularities; for details see [15, Appendix B].
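As a non-rigorous illustration, (39) and (40) can be transcribed directly into Julia using the Hurwitz zeta function `zeta(s, q)` from SpecialFunctions.jl. This floating-point sketch is only valid for \(0<x<2\pi\) and \(s\) away from the non-negative integers; the rigorous computations instead use Arb's ball arithmetic together with the singularity handling described above.

```julia
# Non-rigorous Float64 transcription of (39) and (40): the Clausen functions
# via the Hurwitz zeta function, for 0 < x < 2π and s not a non-negative
# integer (the removable singularities are not handled here).
using SpecialFunctions: gamma, zeta  # zeta(s, q) is the Hurwitz zeta function

function clausen_c(s, x)
    g = gamma(1 - s) / (2π)^(1 - s)
    return g * cospi((1 - s) / 2) * (zeta(1 - s, x / (2π)) + zeta(1 - s, 1 - x / (2π)))
end

function clausen_s(s, x)
    g = gamma(1 - s) / (2π)^(1 - s)
    return g * sinpi((1 - s) / 2) * (zeta(1 - s, x / (2π)) - zeta(1 - s, 1 - x / (2π)))
end

# Sanity check near the removable singularity at s = 1, where
# S₁(x) = π/2 - x/2: clausen_s(1 + 1e-6, 1.0) ≈ (π - 1) / 2.
```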
Compared to [15] we also need to compute \(C_{s}(x)\) for real \(s\) but complex \(x\) during the rigorous integration discussed in Appendix D. In those cases \(s\) is never an integer, and we use Equation (39), which holds for \(0<\operatorname{Re}x<2\pi\).
### Interval arguments
Let \(\mathbf{x}=[\underline{x},\overline{x}]\) and \(\mathbf{s}=[\underline{s},\overline{s}]\) be two finite intervals; we are interested in computing an enclosure of \(C_{\mathbf{s}}(\mathbf{x})\) and \(S_{\mathbf{s}}(\mathbf{x})\). Due to the periodicity we can reduce it to three different cases for \(\mathbf{x}\):
1. \(\mathbf{x}\) doesn't contain a multiple of \(2\pi\), by adding or subtracting a suitable multiple of \(2\pi\) we can assume that \(0<\mathbf{x}<2\pi\);
2. \(\mathbf{x}\) has a diameter of at least \(2\pi\), it then covers a full period and can without loss of generality be taken as \(\mathbf{x}=[0,2\pi]\);
3. \(\mathbf{x}\) contains a multiple of \(2\pi\) but has a diameter less than \(2\pi\), by adding or subtracting a suitable multiple of \(2\pi\) we can take \(\mathbf{x}\) such that \(-2\pi<\underline{x}\leq 0\leq\overline{x}<2\pi\).
The second and third cases are handled exactly as in [15]; for the first case we make some minor improvements.
For \(C_{s}\) we make use of the following lemma which is a slightly upgraded version of [15, Lemma B.1], see also [22, Lemma B.1].
**Lemma C.1**.: _For all \(s\in\mathbb{R}\) the Clausen function \(C_{s}(x)\) is monotone in \(x\) on the interval \((0,\pi)\). For \(s>0\) it is decreasing. For \(s\leq 0\) the sign of the derivative is the same as that of \(-\sin\left(\frac{\pi}{2}s\right)\)._
Proof.: The derivative in \(x\) is given by \(-S_{s-1}(x)\). For \(s>1\) the proof is the same as in [15, Lemma B.1].
For \(s<1\) we use equation (40) together with [18, Eq. 25.11.25] to get
\[S_{s-1}(x)=\frac{\sin\left(\frac{\pi}{2}(2-s)\right)}{(2\pi)^{2-s}}\int_{0}^{\infty}t^{1-s}e^{-\frac{xt}{2\pi}}\frac{1-e^{(x/\pi-1)t}}{1-e^{-t}}\ dt,\]
whose sign depends only on the value of \(s\) and is the same as that of \(\sin\left(\frac{\pi}{2}s\right)\). In particular, for \(0<s<1\) we have \(\sin\left(\frac{\pi}{2}s\right)>0\), so \(C_{s}(x)\) is decreasing in this case as well.
For \(s=1\) the result follows directly from that \(C_{1}(x)=-\log(2\sin(x/2))\) on the interval.
For \(S_{s}\) we have the following, slightly weaker, result
**Lemma C.2**.: _For \(s\leq 1\) the Clausen function \(S_{s}(x)\) is monotone in \(x\) on the interval \((0,2\pi)\). The sign of the derivative is the same as that of \(-\cos\left(\frac{\pi}{2}s\right)\)._
Proof.: The derivative in \(x\) is given by \(C_{s-1}(x)\). For \(s<1\) we use equation (39) together with [18, Eq. 25.11.25] to get
\[C_{s-1}(x)=\frac{\cos\left(\frac{\pi}{2}(2-s)\right)}{(2\pi)^{2-s}}\int_{0}^{\infty}t^{1-s}e^{-\frac{xt}{2\pi}}\frac{1+e^{(x/\pi-1)t}}{1-e^{-t}}\ dt,\]
whose sign depends only on the value of \(s\) and is the same as that of \(-\cos\left(\frac{\pi}{2}s\right)\).
For \(s=1\) the result follows directly from the fact that \(S_{1}(x)=\frac{\pi}{2}-\frac{x}{2}\) on the interval.
For \(s\leq 1\) the extrema of \(S_{\mathbf{s}}(\mathbf{x})\) are thus always attained at \(x=\underline{x}\) and \(\overline{x}\). We handle \(s>1\) by computing an enclosure of the derivative \(S_{\mathbf{s}}^{\prime}(\mathbf{x})=C_{\mathbf{s}-1}(\mathbf{x})\). If the enclosure of the derivative doesn't contain zero then the function is monotone, and we evaluate it at the endpoints; if the derivative contains zero we instead use the midpoint approximation \(S_{\mathbf{s}}(\mathbf{x})=S_{\mathbf{s}}(x_{0})+(\mathbf{x}-x_{0})C_{\mathbf{s}-1}(\mathbf{x})\), where \(x_{0}\) is the midpoint of \(\mathbf{x}\).
For \(\mathbf{s}\) we follow the same approach as in [15]. The only difference is that we also have to handle \(\mathbf{s}\) overlapping \(1\), in which case the implementation of the deflated zeta function,
\[\underline{\zeta}(s,x)=\zeta(s,x)+\frac{1}{1-s}=\sum_{n=0}^{\infty}\frac{(-1) ^{n}}{n!}\gamma_{n}(x)(s-1)^{n}, \tag{41}\]
in Arb does not work directly. It only supports \(s\) exactly equal to \(1\), and not intervals containing \(1\). Looking at the implementation it is however fairly easy to adapt it to also work for intervals containing \(1\), which we have done.
### Expansion in \(x\)
We now go through how to compute expansions of the Clausen functions in the argument \(x\). In general the procedure is the same as in [15], except that we also have to handle the case when \(s\) overlaps positive integers for the expansion at \(x=0\).
At \(x=0\) we have the following asymptotic expansions [22]
\[C_{s}(x) =\Gamma(1-s)\sin\left(\frac{\pi}{2}s\right)|x|^{s-1}+\sum_{m=0}^{ \infty}(-1)^{m}\zeta(s-2m)\frac{x^{2m}}{(2m)!}; \tag{42}\] \[S_{s}(x) =\Gamma(1-s)\cos\left(\frac{\pi}{2}s\right)\operatorname{sgn}(x )|x|^{s-1}+\sum_{m=0}^{\infty}(-1)^{m}\zeta(s-2m-1)\frac{x^{2m+1}}{(2m+1)!}. \tag{43}\]
For positive integers we have to handle the poles of \(\Gamma(s)\) at non-positive integers and the pole of \(\zeta(s)\) at \(s=1\).
For \(C_{s}(x)\) with positive even integers \(s\) the only problematic term is
\[\Gamma(1-s)\sin\left(\frac{\pi}{2}s\right)|x|^{s-1}\]
which has a removable singularity; the same holds for \(S_{s}(x)\) with positive odd integers \(s\). For \(C_{s}(x)\) with positive odd integers \(s\) and \(S_{s}(x)\) with positive even integers \(s\) the singularities are not removable. The only case we encounter in the paper is for \(C_{s}\), for which we make use of the following lemma to bound the sum of the two singular terms.
**Lemma C.3**.: _Let \(m\geq 1\) and \(s\in\left[2m+\frac{1}{2},2m+\frac{3}{2}\right]\), then_
\[\Gamma(1-s)\sin\left(\frac{\pi}{2}s\right)|x|^{s-1}+(-1)^{m}\zeta(s-2m)\frac{ x^{2m}}{(2m)!}=K_{1}|x|^{s-1}+K_{2}x^{2m}+K_{3}x^{2m}\frac{|x|^{s-(2m+1)}-1}{s-(2m+ 1)}\]
_with_
\[K_{1} =\frac{\frac{\Gamma(2m+2-s)}{(1-s)_{2m}}\sin\left(\frac{\pi}{2}s\right)-\frac{(-1)^{m}}{(2m)!}}{2m+1-s},\] \[K_{2} =\frac{(-1)^{m}\underline{\zeta}(s-2m)}{(2m)!},\] \[K_{3} =-\frac{(-1)^{m}}{(2m)!}.\]
Proof.: The result follows directly from adding and subtracting
\[(-1)^{m}\frac{|x|^{s-1}}{(2m+1-s)(2m)!}\]
and collecting the terms.
Computing \(K_{2}\) and \(K_{3}\) is straightforward; for \(K_{1}\) there is a removable singularity to handle. The function
\[x^{2m}\frac{|x|^{s-(2m+1)}-1}{s-(2m+1)}\]
also has a removable singularity; to compute an accurate enclosure we use the following lemma.
**Lemma C.4**.: _For \(x\neq 0\) the function_
\[\frac{|x|^{t}-1}{t}\]
_is non-decreasing in \(t\). In the limit \(t\to 0\) it is equal to \(\log|x|\)._
Proof.: The derivative in \(t\) is given by
\[\frac{1+(t\log|x|-1)|x|^{t}}{t^{2}}. \tag{44}\]
The sign depends on the numerator, which we can write as
\[1+(t\log|x|-1)e^{t\log|x|}.\]
Letting \(v=t\log|x|\) we can write this as \(1+(v-1)e^{v}\), which has the unique root \(v=0\) and is positive for other values of \(v\). It follows that (44) is non-negative and hence the function non-decreasing.
For the limit \(t\to 0\) we directly get
\[\lim_{t\to 0}\frac{|x|^{t}-1}{t}=\lim_{t\to 0}\frac{\log|x|\ |x|^{t}}{1}=\log|x|.\]
These two lemmas are only strictly required for interval \(\mathbf{s}\) overlapping positive odd integers. However, they give better enclosures also when \(\mathbf{s}\) is close to such integers, even if it doesn't overlap them, and we use this to compute better enclosures in some cases.
The tails for the asymptotic expansions are bounded using the following lemma, see also [22, Lemma 2.1]. We omit the proof since it is very similar to that in [22].
**Lemma C.5**.: _Let \(s\geq 0\), \(2M\geq s+1\) and \(|x|<2\pi\), we then have the following bounds for the tails in equations (42) and (43)_
\[\left|\sum_{m=M}^{\infty}(-1)^{m}\zeta(s-2m)\frac{x^{2m}}{(2m)!} \right| \leq 2(2\pi)^{1+s-2M}\left|\sin\left(\frac{\pi}{2}s\right)\right| \zeta(2M+1-s)\frac{x^{2M}}{4\pi^{2}-x^{2}},\] \[\left|\sum_{m=M}^{\infty}(-1)^{m}\zeta(s-2m-1)\frac{x^{2m+1}}{(2 m+1)!}\right| \leq 2(2\pi)^{s-2M}\left|\cos\left(\frac{\pi}{2}s\right)\right| \zeta(2M+2-s)\frac{x^{2M+1}}{4\pi^{2}-x^{2}}.\]
### Derivatives in \(s\)
For \(C_{s}^{(\beta)}(x)\) and \(S_{s}^{(\beta)}(x)\) we use (39) and (40) and differentiate directly in \(s\). When \(s\) is not an integer this is handled directly using Taylor arithmetic; for integers we use the approach in Appendix A to handle the removable singularities.
To get asymptotic expansions at \(x=0\) we take the expansions (42) and (43) and differentiate them with respect to \(s\), giving us
\[C_{s}^{(\beta)}(x) =\frac{d^{\beta}}{ds^{\beta}}\left(\Gamma(1-s)\sin\left(\frac{\pi}{2}s\right)|x|^{s-1}\right)+\sum_{m=0}^{\infty}(-1)^{m}\zeta^{(\beta)}(s-2m)\frac{x^{2m}}{(2m)!}; \tag{45}\] \[S_{s}^{(\beta)}(x) =\frac{d^{\beta}}{ds^{\beta}}\left(\Gamma(1-s)\cos\left(\frac{\pi}{2}s\right)\mathrm{sgn}(x)|x|^{s-1}\right)+\sum_{m=0}^{\infty}(-1)^{m}\zeta^{(\beta)}(s-2m-1)\frac{x^{2m+1}}{(2m+1)!}. \tag{46}\]
These formulas work well when \(s\) is not a positive odd integer for \(C_{s}^{(\beta)}(x)\) or a positive even integer for \(S_{s}^{(\beta)}(x)\), and the derivatives can be computed using Taylor expansions. We mostly make use of the functions \(C_{2}^{(1)}(x)\) and \(C_{3}^{(1)}(x)\), in which case the expansions can be computed explicitly using [3, Eq. 16]; for \(|x|<2\pi\) we have
\[C_{2}^{(1)}(x)= \zeta^{(1)}(2)-\frac{\pi}{2}|x|\log|x|-(\gamma-1)\frac{\pi}{2}|x| +\sum_{m=1}^{\infty}(-1)^{m}\zeta^{(1)}(2-2m)\frac{x^{2m}}{(2m)!}\] \[C_{3}^{(1)}(x)= \zeta^{(1)}(3)-\frac{1}{4}x^{2}\log^{2}|x|+\frac{3-2\gamma}{4}x^{ 2}\log|x|-\frac{36\gamma-12\gamma^{2}-24\gamma_{1}-42+\pi^{2}}{48}x^{2}\] \[+\sum_{m=2}^{\infty}(-1)^{m}\zeta^{(1)}(3-2m)\frac{x^{2m}}{(2m)!}.\]
Here \(\gamma_{n}\) denotes the Stieltjes constants and \(\gamma=\gamma_{0}\) is Euler's constant. To bound the tails we have the following lemma from [15].
**Lemma C.6**.: _Let \(\beta\geq 1\), \(s\geq 0\), \(2M\geq s+1\) and \(|x|<2\pi\), we then have the following bounds:_
\[\left|\sum_{m=M}^{\infty}(-1)^{m}\zeta^{(\beta)}(s-2m)\frac{x^{2 m}}{(2m)!}\right|\] \[\leq\sum_{j_{1}+j_{2}+j_{3}=\beta}\binom{\beta}{j_{1},j_{2},j_{3} }2\left(\log(2\pi)+\frac{\pi}{2}\right)^{j_{1}}(2\pi)^{s-1}|\zeta^{(j_{3})}(1 +2M-s)|\sum_{m=M}^{\infty}\left|p_{j_{2}}(1+2m-s)\left(\frac{x}{2\pi}\right)^{ 2m}\right|,\]
\[\left|\sum_{m=M}^{\infty}(-1)^{m}\zeta^{(\beta)}(s-2m-1)\frac{x^{2 m+1}}{(2m+1)!}\right|\] \[\leq\sum_{j_{1}+j_{2}+j_{3}=\beta}\binom{\beta}{j_{1},j_{2},j_{3} }2\left(\log(2\pi)+\frac{\pi}{2}\right)^{j_{1}}(2\pi)^{s-2}|\zeta^{(j_{3})}(2 +2M-s)|\sum_{m=M}^{\infty}\left|p_{j_{2}}(2+2m-s)\left(\frac{x}{2\pi}\right)^ {2m+1}\right|.\]
_Here \(p_{j_{2}}\) is given recursively by_
\[p_{k+1}(s)=\psi^{(0)}(s)p_{k}(s)+p_{k}^{\prime}(s),\quad p_{0}=1,\]
_where \(\psi^{(0)}\) is the polygamma function. It is given by a linear combination of terms of the form_
\[(\psi^{(0)}(s))^{q_{0}}(\psi^{(1)}(s))^{q_{1}}\cdots(\psi^{(j_{2}-1)}(s))^{q_{ j_{2}-1}}.\]
_We have the following bounds_
\[\sum_{m=M}^{\infty}\left|(\psi^{(0)}(1+2m-s))^{q_{0}}\cdots(\psi^{(j_{2}-1)}(1+2m-s))^{q_{j_{2}-1}}\left(\frac{x}{2\pi}\right)^{2m}\right|\\ \leq|(\psi^{(1)}(1+2M-s))^{q_{1}}\cdots(\psi^{(j_{2}-1)}(1+2M-s))^{q_{j_{2}-1}}|\frac{1}{2^{q_{0}/2}}(2\pi)^{-2M}\Phi\left(\frac{x^{2}}{4\pi^{2}},-\frac{q_{0}}{2},M+\frac{1}{2}\right)x^{2M}\]
_and_
\[\sum_{m=M}^{\infty}\left|(\psi^{(0)}(2+2m-s))^{q_{0}}\cdots(\psi^ {(j_{2}-1)}(2+2m-s))^{q_{j_{2}-1}}\left(\frac{x}{2\pi}\right)^{2m+1}\right|\\ \leq|(\psi^{(1)}(2+2M-s))^{q_{1}}\cdots(\psi^{(j_{2}-1)}(2+2M-s))^ {q_{j_{2}-1}}|\frac{1}{2^{q_{0}/2}}(2\pi)^{-2M-1}\Phi\left(\frac{x^{2}}{4\pi^{ 2}},-\frac{q_{0}}{2},M+1\right)x^{2M+1}\]
_where_
\[\Phi(z,s,a)=\sum_{m=0}^{\infty}\frac{z^{m}}{(a+m)^{s}},\]
_is the Lerch transcendent._
For \(\alpha\in I_{1}\) the expansions in \(x\) of \(u_{\alpha}\) and \(\mathcal{H}^{\alpha}[u_{\alpha}]\) are given in Lemma 4.6. Bounding the remainder terms for the sums
\[\sum_{m=1}^{\infty}\frac{(-1)^{m}}{(2m)!}a_{\alpha,0}\left(\zeta(1-\alpha-2m) -\zeta(2+(1+\alpha)^{2}/2-2m)\right)x^{2m}\]
and
\[\sum_{m=2}^{\infty}\frac{(-1)^{m}}{(2m)!}a_{\alpha,0}\left(\zeta(1-2\alpha-2m) -\zeta(2-\alpha+(1+\alpha)^{2}/2-2m)\right)x^{2m}\]
requires a bound similar to that in Lemma C.6 above. Factoring out \(a_{\alpha,0}(1+\alpha)\), which is bounded, we are left with bounding
\[\sum_{m=1}^{\infty}\frac{(-1)^{m}}{(2m)!}\frac{\zeta(1-\alpha-2m)-\zeta(2+(1+ \alpha)^{2}/2-2m)}{1+\alpha}x^{2m}\]
and
\[\sum_{m=2}^{\infty}\frac{(-1)^{m}}{(2m)!}\frac{\zeta(1-2\alpha-2m)-\zeta(2- \alpha+(1+\alpha)^{2}/2-2m)}{1+\alpha}x^{2m}.\]
The following lemma allows us to write them in a form similar to that in Lemma C.6.
**Lemma C.7**.: _For \(-1<\alpha<0\) and \(M_{1}\geq 1\) we have_
\[\left|\sum_{m=M_{1}}^{\infty}\frac{(-1)^{m}}{(2m)!}\frac{\zeta(1 -\alpha-2m)-\zeta(2+(1+\alpha)^{2}/2-2m)}{1+\alpha}x^{2m}\right|\\ \leq\left|\sum_{m=M_{1}}^{\infty}\frac{(-1)^{m}}{(2m)!}\zeta^{ \prime}(s_{1,m}-2m)x^{2m}\right|+\frac{1+\alpha}{2}\left|\sum_{m=M_{1}}^{ \infty}\frac{(-1)^{m}}{(2m)!}\zeta^{\prime}(s_{2,m}-2m)x^{2m}\right|\]
_with \(s_{1,m}\in[1-\alpha,2]\) and \(s_{2,m}\in[2,2+(1+\alpha)^{2}/2]\). Similarly, for \(M_{2}\geq 2\),_
\[\left|\sum_{m=M_{2}}^{\infty}\frac{(-1)^{m}}{(2m)!}\frac{\zeta(1 -2\alpha-2m)-\zeta(2-\alpha+(1+\alpha)^{2}/2-2m)}{1+\alpha}x^{2m}\right|\\ \leq 2\left|\sum_{m=M_{2}}^{\infty}\frac{(-1)^{m}}{(2m)!}\zeta^{ \prime}(s_{3,m}-2m)x^{2m}\right|+\left(1-\frac{1+\alpha}{2}\right)\left|\sum_{ m=M_{2}}^{\infty}\frac{(-1)^{m}}{(2m)!}\zeta^{\prime}(s_{4,m}-2m)x^{2m}\right|\]
_with \(s_{3,m}\in[1-2\alpha,3]\) and \(s_{4,m}\in[2-\alpha+(1+\alpha)^{2}/2,3]\)._
Proof.: The first inequality follows directly from the identity
\[\frac{\zeta(1-\alpha-2m)-\zeta(2+(1+\alpha)^{2}/2-2m)}{1+\alpha}=\frac{\zeta(1- \alpha-2m)-\zeta(2-2m)}{1+\alpha}+\frac{\zeta(2-2m)-\zeta(2+(1+\alpha)^{2}/2- 2m)}{1+\alpha}\]
together with
\[\frac{\zeta(1-\alpha-2m)-\zeta(2-2m)}{1+\alpha}=\zeta^{\prime}(s_{1,m}-2m)\]
for some \(s_{1,m}\in[1-\alpha,2]\) and
\[\frac{\zeta(2-2m)-\zeta(2+(1+\alpha)^{2}/2-2m)}{1+\alpha}=\frac{1+\alpha}{2} \zeta^{\prime}(s_{2,m}-2m)\]
for some \(s_{2,m}\in[2,2+(1+\alpha)^{2}/2]\). The second one is similar.
Lemma C.6 cannot be directly applied to the result of the above lemma since the argument of the zeta function depends on \(m\). This can however be fixed, as the following lemma shows.
**Lemma C.8**.: _Let \(\beta\geq 1\), \(\mathbf{s}=[\underline{s},\overline{s}]\) with \(\underline{s}\geq 0\), \(2M\geq\overline{s}+1\), \(s_{m}\in\mathbf{s}\) for \(m\geq M\) and \(|x|<2\pi\), we then have the following bounds:_
\[\left|\sum_{m=M}^{\infty}(-1)^{m}\zeta^{(\beta)}(s_{m}-2m)\frac{x^{2m}}{(2m)!}\right|\] \[\leq\sum_{j_{1}+j_{2}+j_{3}=\beta}\binom{\beta}{j_{1},j_{2},j_{3}}2\left(\log(2\pi)+\frac{\pi}{2}\right)^{j_{1}}(2\pi)^{\overline{s}-1}|\zeta^{(j_{3})}(1+2M-\overline{s})|\sum_{m=M}^{\infty}\left|p_{j_{2}}(1+2m-s_{m})\left(\frac{x}{2\pi}\right)^{2m}\right|,\] \[\left|\sum_{m=M}^{\infty}(-1)^{m}\zeta^{(\beta)}(s_{m}-2m-1)\frac{x^{2m+1}}{(2m+1)!}\right|\] \[\leq\sum_{j_{1}+j_{2}+j_{3}=\beta}\binom{\beta}{j_{1},j_{2},j_{3}}2\left(\log(2\pi)+\frac{\pi}{2}\right)^{j_{1}}(2\pi)^{\overline{s}-2}|\zeta^{(j_{3})}(2+2M-\overline{s})|\sum_{m=M}^{\infty}\left|p_{j_{2}}(2+2m-s_{m})\left(\frac{x}{2\pi}\right)^{2m+1}\right|.\]
_Here \(p_{j_{2}}\) is the same as in Lemma C.6 and given recursively by_
\[p_{k+1}(s)=\psi^{(0)}(s)p_{k}(s)+p_{k}^{\prime}(s),\quad p_{0}=1,\]
_where \(\psi^{(0)}\) is the polygamma function. It is given by a linear combination of terms of the form_
\[(\psi^{(0)}(s))^{q_{0}}(\psi^{(1)}(s))^{q_{1}}\cdots(\psi^{(j_{2}-1)}(s))^{q_{ j_{2}-1}}.\]
_We have the following bounds_
\[\sum_{m=M}^{\infty}\left|(\psi^{(0)}(1+2m-s_{m}))^{q_{0}}\cdots( \psi^{(j_{2}-1)}(1+2m-s_{m}))^{q_{j_{2}-1}}\left(\frac{x}{2\pi}\right)^{2m}\right|\] \[\leq|(\psi^{(1)}(1+2M-\overline{s}))^{q_{1}}\cdots(\psi^{(j_{2}- 1)}(1+2M-\overline{s}))^{q_{j_{2}-1}}|\frac{1}{2^{q_{0}/2}}(2\pi)^{-2M}\Phi \left(\frac{x^{2}}{4\pi^{2}},-\frac{q_{0}}{2},M+\frac{1}{2}\right)x^{2M}\]
_and_
\[\sum_{m=M}^{\infty}\left|(\psi^{(0)}(2+2m-s_{m}))^{q_{0}}\cdots(\psi^{(j_{2}-1)}(2+2m-s_{m}))^{q_{j_{2}-1}}\left(\frac{x}{2\pi}\right)^{2m+1}\right|\] \[\leq|(\psi^{(1)}(2+2M-\overline{s}))^{q_{1}}\cdots(\psi^{(j_{2}-1)}(2+2M-\overline{s}))^{q_{j_{2}-1}}|\frac{1}{2^{q_{0}/2}}(2\pi)^{-2M-1}\Phi\left(\frac{x^{2}}{4\pi^{2}},-\frac{q_{0}}{2},M+1\right)x^{2M+1}.\]
Proof.: The proof is the same as for Lemma C.6, and we therefore omit the details. The only difference is that care has to be taken so that the termwise bounds hold for all \(s_{m}\).
## Appendix D Rigorous integration with singularities
In this section we discuss how to compute enclosures of the integrals \(U_{\alpha,1,1}(x)\), \(U_{\alpha,1,2}(x)\) and \(U_{\alpha,2}(x)\) in the non-asymptotic case. We focus on the case when \(w_{\alpha}(x)\) is not given by \(|x|\), since otherwise Lemma 11.5 allows us to directly compute the integral. In particular this means that we don't have to consider the case when \(\alpha\to 0\), though we have to handle \(\alpha\to-1\).
Recall that
\[U_{\alpha,1,1}(x) =-x\int_{0}^{r_{\alpha,x}}\hat{I}_{\alpha}(x,t)w_{\alpha}(tx)\ dt,\] \[U_{\alpha,1,2}(x) =x\int_{r_{\alpha,x}}^{1}\hat{I}_{\alpha}(x,t)w_{\alpha}(tx)\ dt,\] \[U_{\alpha,2}(x) =\int_{x}^{\pi}I_{\alpha}(x,y)w_{\alpha}(y)\ dy.\]
The integrand for \(U_{\alpha,1,2}\) has an (integrable) singularity at \(t=1\) and the integrand for \(U_{\alpha,2}\) has one at \(y=x\). As a first step we split these off to handle them separately.
Let
\[U_{\alpha,1,2}(x)= x\int_{r_{\alpha,x}}^{1-\delta_{U,1}}\hat{I}_{\alpha}(x,t)w_{ \alpha}(tx)\ dt+x\int_{1-\delta_{U,1}}^{1}\hat{I}_{\alpha}(x,t)w_{\alpha}(tx) \ dt=U_{\alpha,1,2,1}(x)+U_{\alpha,1,2,2}(x),\] \[U_{\alpha,2}(x)= \int_{x}^{x+\delta_{U,2}}I_{\alpha}(x,y)w_{\alpha}(y)\ dy+\int_{ x+\delta_{U,2}}^{\pi}I_{\alpha}(x,y)w_{\alpha}(y)\ dy=U_{\alpha,2,1}(x)+U_{\alpha,2,2}(x).\]
The integrals over the pieces containing the singularities are handled by noticing that \(w_{\alpha}\) is bounded on the interval of integration. This allows us to compute enclosures as
\[U_{\alpha,1,2,2}(x)\in xw_{\alpha}(x[1-\delta_{U,1},1])\int_{1-\delta_{U,1}}^{1}\hat{I}_ {\alpha}(x,t)\ dt,\] \[U_{\alpha,2,1}(x)\in w_{\alpha}([x,x+\delta_{U,2}])\int_{x}^{x+\delta_{U,2}}I_{\alpha}(x,y) \ dy.\]
The integrals can be computed explicitly, using that
\[\int\hat{I}_{\alpha}(x,t)\ dt=\frac{1}{x}\left(-S_{1-\alpha}(x(1- t))+S_{1-\alpha}(x(1+t))-2S_{1-\alpha}(xt)\right),\] \[\int I_{\alpha}(x,y)\ dy=-S_{1-\alpha}(x-y)+S_{1-\alpha}(x+y)-2S _{1-\alpha}(y).\]
From which we get
\[\int_{1-\delta_{U,1}}^{1}\hat{I}_{\alpha}(x,t)\ dt=\frac{1}{x}\Big{(}S_{1- \alpha}(2x)-2S_{1-\alpha}(x)+S_{1-\alpha}(x\delta_{U,1})-S_{1-\alpha}(x(2- \delta_{U,1}))+2S_{1-\alpha}(x(1-\delta_{U,1}))\Big{)}\]
and
\[\int_{x}^{x+\delta_{U,2}}I_{\alpha}(x,y)\ dy=-S_{1-\alpha}(-\delta_{U,2})+S_{1 -\alpha}(2x+\delta_{U,2})-2S_{1-\alpha}(x+\delta_{U,2})-S_{1-\alpha}(2x)+2S_{ 1-\alpha}(x).\]
For \(\alpha\in I_{1}\) we use a slightly modified approach which gives better bounds. Using that \(w_{\alpha}(x)=|x|^{(1-\alpha)/2}\log(2e+1/|x|)\) in this case we have
\[U_{\alpha,1,2,2}(x)=x^{1+(1-\alpha)/2}\int_{1-\delta_{U,1}}^{1} \hat{I}_{\alpha}(x,t)t^{(1-\alpha)/2}\log(2e+1/(xt))\ dt\\ \in x^{1+(1-\alpha)/2}[1-\delta_{U,1},1]^{-(1+\alpha)/2}\log(2e+1 /(x[1-\delta_{U,1},1]))\int_{1-\delta_{U,1}}^{1}\hat{I}_{\alpha}(x,t)t\ dt\]
and can use the same approach as in Lemma 11.5 for computing the integral.
What remains is computing \(U_{\alpha,1,1}\), \(U_{\alpha,1,2,1}\) and \(U_{\alpha,2,2}\). In this case the integrands are bounded everywhere on the intervals of integration. To enclose the integrals we make use of two different rigorous numerical integrators, depending on the value of \(\alpha\). For \(\alpha\in I_{2}\) we use the rigorous numerical integrator implemented by Arb [38]. For functions that are analytic on the interval the integrator uses Gaussian quadratures with error bounds computed through complex magnitudes; we therefore need to evaluate the integrands on complex intervals. When the function is not analytic it falls back to naive enclosures using interval arithmetic. For \(\alpha\in I_{1}\) it is harder to enclose the integrand for complex values; we therefore use a quadrature rule given by
\[\int_{a}^{b}f(y)\ dy\in\frac{b-a}{2}\left(f(y_{1})+f(y_{2})\right)+\frac{1}{4 320}(b-a)^{5}f^{(4)}([a,b]),\]
where
\[y_{1}=\frac{b+a}{2}+\frac{b-a}{2\sqrt{3}},\quad y_{2}=\frac{b+a}{2}-\frac{b-a }{2\sqrt{3}},\]
which doesn't require us to evaluate on complex balls, see e.g. [46]. On intervals where the function is not four times differentiable it falls back to naive enclosures using interval arithmetic.
For \(U_{\alpha,1,1}\) the integrand is bounded but not analytic at \(t=0\). Slightly more work is required to compute an enclosure of the integrand for intervals containing \(t=0\). The only non-trivial part of the integrand is the term \(C_{-\alpha}(xt)w_{\alpha}(xt)\), where the first factor diverges at \(t=0\) and the second factor tends to zero. For \(\alpha\in I_{2}\) this is handled by expanding the Clausen function at zero, which allows us to simplify the terms and bound them individually. For \(\alpha\in I_{1}\) this doesn't work directly because the individual terms are not bounded as \(\alpha\to-1\); instead we use the following lemma.
**Lemma D.1**.: _For \(\alpha\in\left(-1,-\frac{1}{2}\right)\) and \(x\in(0,\pi)\) the function_
\[C_{-\alpha}(xt)|xt|^{(1-\alpha)/2}\log(2e+1/|xt|)\]
_is increasing in \(t\) on the interval \((0,t_{0})\) for_
\[t_{0}=\left(-\frac{2\Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha\right)x^{- \alpha-1}\left(-\alpha-\frac{1}{2}\right)}{\zeta(-\alpha)}\right)^{\frac{1}{ \alpha+1}}.\]
Proof.: For \(t\geq 0\) we can remove the absolute values and rewrite the function as
\[C_{-\alpha}(xt)(xt)^{1/2}\cdot(xt)^{-\alpha/2}\log(2e+1/(xt)).\]
It is enough to prove that both factors are increasing in \(t\).
For the first factor, \(C_{-\alpha}(xt)(xt)^{1/2}\), we expand the Clausen function, giving us
\[C_{-\alpha}(xt)(xt)^{1/2}=\Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha \right)(xt)^{-\alpha-1/2}+\sum_{m=0}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{( xt)^{2m+1/2}}{(2m)!}.\]
All terms in the sum with \(m\geq 1\) are positive, due to the location of the zeros of the zeta function on the negative real axis, and hence increasing in \(t\). We are left with proving that
\[\Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha\right)(xt)^{-\alpha-1/2}+\zeta(- \alpha)(xt)^{1/2}\]
is increasing in \(t\). Differentiating with respect to \(t\) gives us
\[\Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha\right)x^{-\alpha-1/2}\left(- \alpha-\frac{1}{2}\right)t^{-\alpha-3/2}+\frac{1}{2}\zeta(-\alpha)x^{1/2}t^{- 1/2}.\]
Since we are only interested in the sign we can multiply by \(t^{\alpha+3/2}\), giving us
\[\Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha\right)x^{-\alpha-1/2}\left(- \alpha-\frac{1}{2}\right)+\frac{1}{2}\zeta(-\alpha)x^{1/2}t^{\alpha+1}.\]
The positivity of the first term together with \(1+\alpha>0\) means that the expression is positive at \(t=0\). The unique root for \(t>0\) is given by
\[\left(-\frac{2\Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha\right)x^{-\alpha -1}\left(-\alpha-\frac{1}{2}\right)}{\zeta(-\alpha)}\right)^{\frac{1}{\alpha+1 }},\]
which is exactly the value for \(t_{0}\).
For the second factor, \((xt)^{-\alpha/2}\log(2e+1/(xt))\), differentiation with respect to \(t\) gives us
\[-x^{-\frac{\alpha}{2}}t^{-\frac{\alpha}{2}-1}\frac{2+\alpha(1+2ext)\log(2e+1/(xt))}{2(1+2ext)}.\]
Since \(x^{-\frac{\alpha}{2}}t^{-\frac{\alpha}{2}-1}\) and \(1+2ext\) are positive the sign is the same as that of
\[-(2+\alpha(1+2ext)\log(2e+1/(xt))).\]
For \(x,t>0\) one can check that \((1+2ext)\log(2e+1/(xt))>4\), so a lower bound is given by \(-2-4\alpha\), which is positive for \(\alpha<-\frac{1}{2}\) and hence for \(\alpha\in I_{1}\). The derivative is hence positive for all \(t>0\) and the factor is always increasing.
For the endpoint \(r_{\alpha,x}\) of \(U_{\alpha,1,1}\) and \(U_{\alpha,1,2,1}\) we don't have an exact value but only an enclosure. The width of this enclosure depends on how large the intervals for \(\alpha\) and \(x\) are. The integrators we use don't handle wide endpoints very well; it is therefore beneficial to slightly modify the endpoints. If we have the enclosure \(r_{\alpha,x}\in[\underline{r}_{\alpha,x},\overline{r}_{\alpha,x}]\), then for \(U_{\alpha,1,1}\) we only integrate up to \(\underline{r}_{\alpha,x}\) and for \(U_{\alpha,1,2,1}\) we integrate from \(\overline{r}_{\alpha,x}\). The part we have missed is given by
\[\int_{\underline{r}_{\alpha,x}}^{\overline{r}_{\alpha,x}}|\hat{I}_{\alpha}(x, t)|w_{\alpha}(xt)\ dt.\]
Since the interval is centered at the root of \(\hat{I}_{\alpha}(x,t)\), the integrand is very small there. It is therefore enough to compute a naive enclosure given by the diameter of the interval times an enclosure of the integrand on the interval.
## Appendix E Details for evaluating \(F_{\alpha}(x)\) with \(\alpha\in I_{1}\) and \(x\) near zero
We here explain how to bound (28), given by
\[\frac{\mathcal{H}^{\alpha}[u_{\alpha}](x)+\frac{1}{2}u_{\alpha}(x)^{2}}{\Gamma (1+\alpha)\log(1/x)(1-x^{1+\alpha+(1+\alpha)^{2}/2})x^{1-\alpha}}\]
for \(\alpha\in I_{1}\) and \(x\) close to zero. This is used when bounding \(F_{\alpha}(x)\) for \(x\in[0,\epsilon]\). To simplify the notation we let \(p_{\alpha}=1+\alpha+(1+\alpha)^{2}/2\) in this section; note that \(p_{\hat{\alpha}}\) still denotes a numerical value. We also assume that \(0<x<1\).
From Lemma 4.6 we can get the expansion of \(\mathcal{H}^{\alpha}[u_{\alpha}](x)+\frac{1}{2}u_{\alpha}(x)^{2}\) at \(x=0\). To handle the cancellations between some of the terms in the expansion we extract them and handle them separately.
To begin with we take out the leading terms in the expansions of \(u_{\alpha}\) and \(\mathcal{H}^{\alpha}[u_{\alpha}]\), namely
\[P =a_{\alpha,0}\left(\Gamma(\alpha)\cos\left(\frac{\pi}{2}\alpha \right)-\Gamma(-1-(1+\alpha)^{2}/2)\cos\left(\frac{\pi}{2}(1+(1+\alpha)^{2}/2 )\right)x^{p_{\alpha}}\right)x^{-\alpha},\] \[Q =-a_{\alpha,0}\Big{(}\Gamma(2\alpha)\cos\left(\pi\alpha\right)\] \[\qquad-\Gamma(-1+\alpha-(1+\alpha)^{2}/2)\cos\left(\frac{\pi}{2} (1-\alpha+(1+\alpha)^{2}/2)\right)x^{p_{\alpha}}\] \[\qquad+\frac{1}{2}(\zeta(-1-2\alpha)-\zeta(-\alpha+(1+\alpha)^{2 }/2))x^{2(1+\alpha)}\Big{)}x^{-2\alpha},\]
and treat them together. For small values of \(j\) the terms \(-a_{\hat{\alpha},j}\tilde{C}_{1-\alpha-\hat{\alpha}+jp_{\hat{\alpha}}}\) in \(\mathcal{H}^{\alpha}[u_{\alpha}]\) have large cancellations between the \(x^{-\alpha-\hat{\alpha}+jp_{\hat{\alpha}}}\) and \(x^{2}\) terms in their expansion; we therefore also treat these separately. More precisely we consider the terms
\[\sum_{j=1}^{M}\left(-\hat{A}_{\alpha,j}^{0}x^{-\alpha-\hat{\alpha}+jp_{\hat{\alpha}}}+\frac{a_{\hat{\alpha},j}\zeta(-1-\alpha-\hat{\alpha}+jp_{\hat{\alpha}})}{2}x^{2}\right)\]
Here \(M\leq N_{0}\) is some fixed limit determining which terms we treat separately.
### First part
For the first part we are interested in the term
\[\frac{Q+P^{2}/2}{\Gamma(1+\alpha)\log(1/x)(1-x^{p_{\alpha}})x^{1-\alpha}}\]
with \(P\) and \(Q\) as above. If we let \(c(\alpha)=\Gamma(\alpha)\cos\left(\frac{\pi}{2}\alpha\right)\) we can write \(P\) and \(Q\) as
\[P =a_{\alpha,0}(c(\alpha)-c(\alpha-p_{\alpha})x^{p_{\alpha}})x^{- \alpha};\] \[Q =-a_{\alpha,0}(c(2\alpha)-c(2\alpha-p_{\alpha})x^{p_{\alpha}})x^{ -2\alpha}+\frac{a_{\alpha,0}}{2}(\zeta(-1-2\alpha)-\zeta(-1-2\alpha+p_{\alpha} ))x^{2}.\]
This gives us
\[Q+P^{2}/2=a_{\alpha,0}\Big{(}(a_{\alpha,0}c(\alpha)^{2}/2-c(2 \alpha))x^{-2\alpha}+(c(2\alpha-p_{\alpha})-a_{\alpha,0}c(\alpha)c(\alpha-p_{ \alpha}))x^{-2\alpha+p_{\alpha}}\\ +\frac{1}{2}(\zeta(-1-2\alpha)-\zeta(-1-2\alpha+p_{\alpha}))x^{2} +\frac{a_{\alpha,0}}{2}c(\alpha-p_{\alpha})^{2}x^{-2\alpha+2p_{\alpha}}\Big{)},\]
where we can note that \(2<-2\alpha+2p_{\alpha}\). The value of \(a_{\alpha,0}\) is chosen so that the first term is identically zero, leaving us with
\[Q+P^{2}/2=a_{\alpha,0}\Big{(}(c(2\alpha-p_{\alpha})-a_{\alpha,0 }c(\alpha)c(\alpha-p_{\alpha}))x^{-2\alpha+p_{\alpha}}\\ +\frac{1}{2}(\zeta(-1-2\alpha)-\zeta(-1-2\alpha+p_{\alpha}))x^{2} +\frac{a_{\alpha,0}}{2}c(\alpha-p_{\alpha})^{2}x^{-2\alpha+2p_{\alpha}}\Big{)}.\]
Now, cancelling the \(x^{1-\alpha}\) and reordering the terms a bit gives us
\[\frac{Q+P^{2}/2}{\Gamma(1+\alpha)\log(1/x)(1-x^{p_{\alpha}})x^{1- \alpha}}=\frac{a_{\alpha,0}}{\Gamma(1+\alpha)}\Big{(}\frac{(c(2\alpha-p_{\alpha })-a_{\alpha,0}c(\alpha)c(\alpha-p_{\alpha}))x^{-1-\alpha+p_{\alpha}}}{\log(1/x )(1-x^{p_{\alpha}})}\\ +\frac{\frac{1}{2}(\zeta(-1-2\alpha)-\zeta(-1-2\alpha+p_{\alpha}) )x^{1+\alpha}+\frac{a_{\alpha,0}}{2}c(\alpha-p_{\alpha})^{2}x^{-1-\alpha+2p_{ \alpha}}}{\log(1/x)(1-x^{p_{\alpha}})}\Big{)}.\]
The factor \(\frac{a_{\alpha,0}}{\Gamma(1+\alpha)}\) can be handled the same way as in Lemma 8.2. We consider the two terms separately.
The first term we split as
\[\frac{x^{-1-\alpha+p_{\alpha}}}{\log(1/x)}\frac{1+\alpha}{1-x^{p_{\alpha}}} \frac{c(2\alpha-p_{\alpha})-a_{\alpha,0}c(\alpha)c(\alpha-p_{\alpha})}{1+\alpha}\]
For the first factor we have \(-1-\alpha+p_{\alpha}=(1+\alpha)^{2}/2>0\), so the factor is zero for \(x=0\) and increasing in \(x\), allowing us to compute an enclosure. The second factor is also increasing in \(x\); for \(x=0\) it is \(1+\alpha\), and for non-zero \(x\) we can handle the removable singularity in \(\alpha\). For the third factor we note that \(a_{\alpha,0}=2c(2\alpha)/c(\alpha)^{2}\), giving us
\[\frac{c(2\alpha-p_{\alpha})-a_{\alpha,0}c(\alpha)c(\alpha-p_{\alpha})}{1+ \alpha}=\frac{c(2\alpha-p_{\alpha})-2c(2\alpha)c(\alpha-p_{\alpha})/c(\alpha) }{1+\alpha}.\]
This can also be computed by handling the removable singularities.
The second term we split in a similar way
\[\frac{\frac{1}{2}(\zeta(-1-2\alpha)-\zeta(-1-2\alpha+p_{\alpha}))x^{1+\alpha}+\frac{a_{\alpha,0}}{2}c(\alpha-p_{\alpha})^{2}x^{-1-\alpha+2p_{\alpha}}}{\log(1/x)(1-x^{p_{\alpha}})}\\ =x^{1+\alpha}\frac{1+\alpha}{1-x^{p_{\alpha}}}\frac{\frac{1}{2}(\zeta(-1-2\alpha)-\zeta(-1-2\alpha+p_{\alpha}))+\frac{a_{\alpha,0}}{2}c(\alpha-p_{\alpha})^{2}x^{(1+\alpha)^{2}}}{(1+\alpha)\log(1/x)},\]
where in the last exponent we have used \(-2-2\alpha+2p_{\alpha}=(1+\alpha)^{2}\). The first factor is easily handled, and the second factor is the same as in the previous term. For the third factor we let
\[v_{\alpha} =\frac{1}{2}(\zeta(-1-2\alpha)-\zeta(-1-2\alpha+p_{\alpha})),\] \[w_{\alpha} =\frac{a_{\alpha,0}}{2}c(\alpha-p_{\alpha})^{2}.\]
Allowing us to write it as
\[\frac{v_{\alpha}+w_{\alpha}x^{(1+\alpha)^{2}}}{(1+\alpha)\log(1/x)}=\frac{v_{\alpha}+w_{\alpha}}{1+\alpha}\frac{1}{\log(1/x)}-(1+\alpha)w_{\alpha}\frac{1-x^{(1+\alpha)^{2}}}{(1+\alpha)^{2}\log(1/x)}\]
The first factor in the first term has a removable singularity that we can handle, and its second factor is easily enclosed. For the second term we can enclose \((1+\alpha)w_{\alpha}\), which has a removable singularity at \(\alpha=-1\). For the remaining factor we let \(t=(1+\alpha)^{2}\log(x)\) and rewrite it as
\[\frac{1-x^{(1+\alpha)^{2}}}{(1+\alpha)^{2}\log(1/x)}=\frac{x^{(1+\alpha)^{2}} -1}{(1+\alpha)^{2}\log(x)}=\frac{e^{t}-1}{t}. \tag{47}\]
This function is increasing in \(t\), so it is enough to compute an enclosure of \(t\) and evaluate at its endpoints.
### Second part
For the second part we are interested in the term
\[\sum_{j=1}^{M}\frac{-\hat{A}_{\alpha,j}^{0}x^{-\alpha-\hat{\alpha}+jp_{\hat{\alpha}}}+\frac{a_{\hat{\alpha},j}\zeta(-1-\alpha-\hat{\alpha}+jp_{\hat{\alpha}})}{2}x^{2}}{\Gamma(1+\alpha)\log(1/x)(1-x^{p_{\alpha}})x^{1-\alpha}}=\frac{1}{\Gamma(1+\alpha)(1-x^{p_{\alpha}})}\sum_{j=1}^{M}\frac{-\hat{A}_{\alpha,j}^{0}x^{-\alpha-\hat{\alpha}+jp_{\hat{\alpha}}}+\frac{a_{\hat{\alpha},j}\zeta(-1-\alpha-\hat{\alpha}+jp_{\hat{\alpha}})}{2}x^{2}}{\log(1/x)x^{1-\alpha}}.\]
We begin by noting that
\[\frac{1}{\Gamma(1+\alpha)(1-x^{p_{\alpha}})}\]
is increasing in \(x\). For \(x=0\) it is lower bounded by \(0\) and increasing in \(\alpha\). For \(x>0\) it has a removable singularity at \(\alpha=-1\) which we can handle. What remains is to handle the sum, which we do term by term.
Let \(1\leq j\leq M\) and \(r_{j}=-1-\hat{\alpha}+jp_{\hat{\alpha}}=(j-1)(1+\hat{\alpha})+j(1+\hat{\alpha})^{2}/2\); note that \(r_{j}>0\). This gives us
\[\frac{-\hat{A}_{\alpha,j}^{0}x^{1-\alpha+r_{j}}+\frac{a_{\hat{\alpha},j}\zeta(-\alpha+r_{j})}{2}x^{2}}{\log(1/x)x^{1-\alpha}}=\frac{-\hat{A}_{\alpha,j}^{0}x^{r_{j}}+\frac{a_{\hat{\alpha},j}\zeta(-\alpha+r_{j})}{2}x^{1+\alpha}}{\log(1/x)}.\]
Note that which exponent is the largest depends on the precise choice of both \(j\) and \(\alpha\) and is in general not the same for all \(\alpha\in I_{1}\). Using that
\[\hat{A}_{\alpha,j}^{0}=\Gamma(\alpha+\hat{\alpha}-jp_{\hat{\alpha}})\cos \left((\alpha+\hat{\alpha}-jp_{\hat{\alpha}})\frac{\pi}{2}\right)a_{\hat{ \alpha},j}=\Gamma(\alpha-1-r_{j})\cos\left((\alpha-1-r_{j})\frac{\pi}{2}\right) a_{\hat{\alpha},j}\]
we get
\[a_{\hat{\alpha},j}\frac{-\Gamma(\alpha-1-r_{j})\cos\left((\alpha-1-r_{j}) \frac{\pi}{2}\right)x^{r_{j}}+\frac{\zeta(-\alpha+r_{j})}{2}x^{1+\alpha}}{\log (1/x)}.\]
Since \(a_{\hat{\alpha},j}\) is a constant we focus on the rest. Adding and subtracting \(\frac{\zeta(-\alpha+r_{j})}{2}x^{r_{j}}\) we can write this as
\[\left(-\Gamma(\alpha-1-r_{j})\cos\left((\alpha-1-r_{j})\frac{\pi}{2}\right)+ \frac{\zeta(-\alpha+r_{j})}{2}\right)\frac{x^{r_{j}}}{\log(1/x)}+\frac{\zeta(- \alpha+r_{j})}{2}\frac{x^{1+\alpha}-x^{r_{j}}}{\log(1/x)}.\]
For the first term we can easily enclose the factor depending on \(x\) using that it is monotone. The factor depending on \(\alpha\) we either enclose directly, if \(r_{j}-1-\alpha\neq 0\), or handle the removable singularity otherwise.
For the second term we use the deflated zeta function (41) to split it further into two terms
\[\frac{\underline{\zeta}(-\alpha+r_{j})}{2}\frac{x^{1+\alpha}-x^{r_{j}}}{\log(1/x)}-\frac{1}{2(1+\alpha-r_{j})}\frac{x^{1+\alpha}-x^{r_{j}}}{\log(1/x)}.\]
The first term can be enclosed directly. The second term we write as
\[\frac{x^{r_{j}}-x^{1+\alpha}}{2(r_{j}-(1+\alpha))\log(1/x)}.\]
We first consider the case when \(r_{j}\geq 1+\alpha\). We then factor out \(x^{1+\alpha}\), giving us
\[\frac{x^{1+\alpha}}{2}\frac{x^{r_{j}-(1+\alpha)}-1}{(r_{j}-(1+\alpha))\log(1/x )}=\frac{x^{1+\alpha}}{2}\frac{1-x^{r_{j}-(1+\alpha)}}{(r_{j}-(1+\alpha))\log (x)}\]
If we let \(t=(r_{j}-(1+\alpha))\log(x)\) the second factor can be written as
\[\frac{1-e^{t}}{t}\]
which we handle similarly to (47); note that since \(r_{j}\geq 1+\alpha\) and \(x<1\) we have \(t\leq 0\).
Next we consider the case when \(r_{j}<1+\alpha\); depending on the value of \(r_{j}\) this case might or might not occur. We then factor out \(x^{r_{j}}\), giving us
\[\frac{x^{r_{j}}}{2}\frac{1-x^{1+\alpha-r_{j}}}{(r_{j}-(1+\alpha))\log(1/x)}= \frac{x^{r_{j}}}{2}\frac{1-x^{1+\alpha-r_{j}}}{(1+\alpha-r_{j})\log(x)}.\]
We here let \(t=(1+\alpha-r_{j})\log(x)\) and the procedure is then the same as in the previous case.
### Remaining part
Similarly to the above section we factor out \(\frac{1}{\Gamma(1+\alpha)(1-x^{p_{\alpha}})}\) and bound it separately. For the terms in the expansion we note that they all have an exponent greater than \(1-\alpha\), so we can cancel the \(x^{1-\alpha}\) in the denominator directly. It is then possible to enclose the terms in the numerator and the factor \(\frac{1}{\log(1/x)}\) separately. For some terms we can, however, improve the computed enclosures by incorporating the division by \(\log(1/x)\). These are the terms coming from \(u_{\alpha}(x)^{2}\) containing a factor \(P\). In that case we want to enclose the factor
\[\frac{a_{\alpha,0}(c(\alpha)-c(\alpha-p_{\alpha})x^{p_{\alpha}})}{\log(1/x)}.\]
This can be done using the following lemma and combining it with Lemma C.4 to enclose \(\frac{x^{p_{\alpha}}-1}{p_{\alpha}\log(1/x)}\).
**Lemma E.1**.: _The factor_
\[a_{\alpha,0}(c(\alpha)-c(\alpha-p_{\alpha})x^{p_{\alpha}}).\]
_can be written as_
\[C_{1}x^{p_{\alpha}}-C_{2}\frac{x^{p_{\alpha}}-1}{p_{\alpha}}\]
_with_
\[C_{1}=a_{\alpha,0}(c(\alpha)-c(\alpha-p_{\alpha}))\quad\text{ and }\quad C_{2} =a_{\alpha,0}c(\alpha)p_{\alpha}.\]
_Both \(C_{1}\) and \(C_{2}\) have removable singularities at \(\alpha=-1\) and are finite there._
Proof.: The expression follows directly by adding and subtracting \(a_{\alpha,0}c(\alpha)x^{p_{\alpha}}\) and collecting the terms. The removable singularities of \(C_{1}\) and \(C_{2}\) can be handled by writing them as
\[C_{1} =a_{\alpha,0}(1+\alpha)\cdot\frac{c(\alpha)-c(\alpha-p_{\alpha}) }{1+\alpha},\] \[C_{2} =a_{\alpha,0}(1+\alpha)\cdot c(\alpha)(1+(1+\alpha)/2)\]
and enclosing all the factors separately.
## Appendix F Details for evaluating \(\mathcal{T}_{\alpha}(x)\) with \(\alpha\in I_{1}\) and \(x\) near zero
In this section we give bounds for \(G_{\alpha,1}\), \(G_{\alpha,2}\) and \(R_{\alpha}\) occurring in Lemma 11.7.
### \(G_{\alpha,1}\)
Recall that
\[G_{\alpha,1}(x)=\frac{\int_{0}^{1}\left|(1-t)^{-\alpha-1}+(1+t)^{-\alpha-1}-2 t^{-\alpha-1}\right|t^{(1-\alpha)/2}\log(2e+1/(xt))\ dt}{\log(1/x)(1-x^{1+\alpha+(1+\alpha)^{2}/2})}.\]
To bound it we use the following lemma.
**Lemma F.1**.: _For \(\alpha\in(-1,0)\) and \(x<1\), \(G_{\alpha,1}\) satisfies_
\[G_{\alpha,1}(x)\leq\frac{1+\alpha}{1-x^{1+\alpha+(1+\alpha)^{2}/2}}\left( \frac{\log(2e+1/x)}{\log(1/x)}J_{\alpha,1}-\frac{1}{\log(1/x)}J_{\alpha,2} \right),\]
_where \(J_{\alpha,1}\) and \(J_{\alpha,2}\) are independent of \(x\) and given by_
\[J_{\alpha,1} =\int_{0}^{1}\left|\frac{(1-t)^{-(1+\alpha)}+(1+t)^{-(1+\alpha)} -2t^{-(1+\alpha)}}{1+\alpha}\right|t^{(1-\alpha)/2}\ dt,\] \[J_{\alpha,2} =\int_{0}^{1}\left|\frac{(1-t)^{-(1+\alpha)}+(1+t)^{-(1+\alpha)} -2t^{-(1+\alpha)}}{1+\alpha}\right|t^{(1-\alpha)/2}\log(t)\ dt.\]
Proof.: Factoring out \(1+\alpha\) we have
\[G_{\alpha,1}(x)=\frac{1+\alpha}{1-x^{1+\alpha+(1+\alpha)^{2}/2}} \frac{1}{\log(1/x)}\\ \int_{0}^{1}\left|\frac{(1-t)^{-(1+\alpha)}+(1+t)^{-(1+\alpha)}-2t ^{-(1+\alpha)}}{1+\alpha}\right|t^{(1-\alpha)/2}\log(2e+1/(xt))\ dt.\]
Let \(J_{\alpha}(x)\) denote
\[J_{\alpha}(x)=\int_{0}^{1}\left|\frac{(1-t)^{-(1+\alpha)}+(1+t)^{-(1+\alpha)}- 2t^{-(1+\alpha)}}{1+\alpha}\right|t^{(1-\alpha)/2}\log(2e+1/(xt))\ dt\]
Splitting the logarithm as
\[\log(2e+1/(xt))=\log(1+2ext)-\log(x)-\log(t)\]
we can split \(J_{\alpha}(x)\) as
\[J_{\alpha}(x)=\int_{0}^{1}\left|\frac{(1-t)^{-(1+\alpha)}+(1+t)^ {-(1+\alpha)}-2t^{-(1+\alpha)}}{1+\alpha}\right|t^{(1-\alpha)/2}\log(1+2ext)\ dt\\ -\log(x)\int_{0}^{1}\left|\frac{(1-t)^{-(1+\alpha)}+(1+t)^{-(1+ \alpha)}-2t^{-(1+\alpha)}}{1+\alpha}\right|t^{(1-\alpha)/2}\ dt\\ -\int_{0}^{1}\left|\frac{(1-t)^{-(1+\alpha)}+(1+t)^{-(1+\alpha)}- 2t^{-(1+\alpha)}}{1+\alpha}\right|t^{(1-\alpha)/2}\log(t)\ dt.\]
Using that \(\log(1+2ext)\leq\log(1+2ex)\) on the interval of integration we have
\[J_{\alpha}(x)\leq\left(\log(1+2ex)-\log(x)\right)\int_{0}^{1} \left|\frac{(1-t)^{-(1+\alpha)}+(1+t)^{-(1+\alpha)}-2t^{-(1+\alpha)}}{1+ \alpha}\right|t^{(1-\alpha)/2}\ dt\\ -\int_{0}^{1}\left|\frac{(1-t)^{-(1+\alpha)}+(1+t)^{-(1+\alpha)}- 2t^{-(1+\alpha)}}{1+\alpha}\right|t^{(1-\alpha)/2}\log(t)\ dt=\log(2e+1/x)J_{ \alpha,1}-J_{\alpha,2}\]
and the result follows.
The computation of \(J_{\alpha,1}\) and \(J_{\alpha,2}\) follows the same approach as in Appendix D: rigorous numerical integration on most of the interval, with the singularity at \(t=1\) handled explicitly by factoring out a bound for \(t^{(1-\alpha)/2}\) and \(t^{(1-\alpha)/2}\log(t)\), respectively.
### \(G_{\alpha,2}\)
Recall that
\[G_{\alpha,2}(x)=\frac{\int_{1}^{\pi/x}\left((t-1)^{-\alpha-1}+(1+t)^{-\alpha- 1}-2t^{-\alpha-1}\right)t^{(1-\alpha)/2}\log(2e+1/(xt))\ dt}{\log(1/x)(1-x^{1+ \alpha+(1+\alpha)^{2}/2})}.\]
Unfortunately it is not enough in this case to just factor out \(1+\alpha\) and bound the two factors independently, as we did for \(G_{\alpha,1}\); the second factor would not be bounded. Instead, we have to track the dependence on \(x\) more carefully. We split the work of getting an upper bound into several lemmas. In Lemma F.2 we split \(G_{\alpha,2}\) into one main part, \(G_{\alpha,2,M}\), and one remainder part, \(G_{\alpha,2,R}\). In Lemma F.3 we give a bound for \(G_{\alpha,2,M}\). The bound for \(G_{\alpha,2,R}\) is handled in Lemmas F.4, F.5 and F.6. These bounds work well for very small values of \(x\), less than around \(10^{-10}\), but work poorly for larger values of \(x\). The methods based on direct integration, as discussed in Appendix D, work well for \(x\) larger than around \(0.1\). In between these two intervals, \(10^{-10}<x<0.1\), we use a slightly different approach for bounding \(G_{\alpha,2}\). It is described at the end of this section, with most of the details in Lemma F.7.
**Lemma F.2**.: _For \(\alpha\in(-1,0)\) and \(x<1\), \(G_{\alpha,2}(x)\) satisfies_
\[G_{\alpha,2}(x)=G_{\alpha,2,M}(x)+G_{\alpha,2,R}(x)\]
_with_
\[G_{\alpha,2,M}(x)= \frac{1}{1-x^{1+\alpha+(1+\alpha)^{2}/2}}\frac{1}{\log(1/x)}\frac{ 4+2\alpha}{3}\] \[\left(\left(D_{x}-\log x-\frac{2}{3}\frac{1}{1+\alpha}\right) \left(1-\left(\frac{x}{\pi}\right)^{\frac{3}{2}(1+\alpha)}\right)+\log\left( \frac{\pi}{x}\right)\left(\frac{x}{\pi}\right)^{\frac{3}{2}(1+\alpha)}\right),\] \[G_{\alpha,2,R}(x)= \frac{1}{1-x^{1+\alpha+(1+\alpha)^{2}/2}}\frac{1}{\log(1/x)}\] \[\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}\sum_{k=0}^{ n-1}\binom{n}{k}\frac{1}{2^{k}}\int_{1}^{\pi/x}\log(t)^{k}h_{n-k}(t)t\log(2e+1/( xt))\ dt,\]
_for some \(D_{x}\in[-\log(1+2e\pi),\log(1+2e\pi)]\) and_
\[h_{k}(t)=\log(t-1)^{k}+\log(t+1)^{k}-2\log(t)^{k}-\frac{k(k-1-\log t)\log(t)^{ k-2}}{t^{2}}.\]
Proof.: Let us start by studying the integral in the definition of \(G_{\alpha,2}\), for which we use the notation
\[K_{\alpha}(x)=\int_{1}^{\pi/x}\left((t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{- \alpha-1}\right)t^{(1-\alpha)/2}\log(2e+1/(xt))\ dt.\]
We have
\[\left((t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1}\right)t^ {(1-\alpha)/2}\\ =t\left(e^{-(1+\alpha)(\log(t-1)+\log(t)/2)}+e^{-(1+\alpha)(\log (t+1)+\log(t)/2)}-2e^{-(1+\alpha)3\log(t)/2}\right)\\ =t\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}g_{n}(t)\]
with
\[g_{n}(t)=\left(\log(t-1)+\frac{\log t}{2}\right)^{n}+\left(\log(t+1)+\frac{ \log t}{2}\right)^{n}-2\left(\frac{3\log t}{2}\right)^{n}.\]
Inserting this into the integral we can write it as
\[K_{\alpha}(x)=\int_{1}^{\pi/x}t\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n }}{n!}g_{n}(t)\log(2e+1/(xt))\ dt.\]
Switching the integral and the sum we get
\[K_{\alpha}(x)=\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}\int_{1}^{ \pi/x}g_{n}(t)t\log(2e+1/(xt))\ dt.\]
Next we want to look closer at
\[I_{n}(x)=\int_{1}^{\pi/x}g_{n}(t)t\log(2e+1/(xt))\ dt.\]
Using the binomial theorem and splitting \(\frac{3\log t}{2}=\log t+\frac{\log t}{2}\) we can write \(g_{n}(t)\) as
\[g_{n}(t)=\sum_{k=0}^{n-1}\binom{n}{k}\left(\frac{\log t}{2}\right)^{k}\left( \log(t-1)^{n-k}+\log(t+1)^{n-k}-2\log(t)^{n-k}\right).\]
This gives us
\[I_{n}(x)=\sum_{k=0}^{n-1}\binom{n}{k}\frac{1}{2^{k}}\int_{1}^{\pi/x}\log(t)^{k} \left(\log(t-1)^{n-k}+\log(t+1)^{n-k}-2\log(t)^{n-k}\right)t\log(2e+1/(xt))\ dt.\]
Asymptotically as \(t\) goes to infinity we have that
\[\log(t-1)^{l}+\log(t+1)^{l}-2\log(t)^{l}\]
behaves like
\[\frac{l(l-1-\log t)\log(t)^{l-2}}{t^{2}}.\]
That this indeed is the case is seen in the proof of Lemma F.6, and this will be what gives the main part of the integral for small \(x\). If we let
\[h_{k}(t)=\log(t-1)^{k}+\log(t+1)^{k}-2\log(t)^{k}-\frac{k(k-1-\log t)\log(t)^{ k-2}}{t^{2}}\]
and
\[I_{n,M}(x)= \sum_{k=0}^{n-1}\binom{n}{k}\frac{1}{2^{k}}\int_{1}^{\pi/x}\log(t )^{k}\frac{(n-k)(n-k-1-\log t)\log(t)^{n-k-2}}{t^{2}}t\log(2e+1/(xt))\ dt,\] \[I_{n,R}(x)= \sum_{k=0}^{n-1}\binom{n}{k}\frac{1}{2^{k}}\int_{1}^{\pi/x}\log( t)^{k}h_{n-k}(t)t\log(2e+1/(xt))\ dt,\]
then we have
\[I_{n}(x)=I_{n,M}(x)+I_{n,R}(x)\]
and
\[K_{\alpha}(x)=\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}I_{n,M}(x)+ \sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}I_{n,R}(x).\]
Multiplying the second term with the factor
\[\frac{1}{1-x^{1+\alpha+(1+\alpha)^{2}/2}}\frac{1}{\log(1/x)}\]
gives us \(G_{\alpha,2,R}\).
For \(I_{n,M}(x)\) we note that
\[\log(t)^{k}(n-k)(n-k-1-\log t)\log(t)^{n-k-2}=(n-k)\left((n-k-1)\log(t)^{n-2} -\log(t)^{n-1}\right).\]
This gives us
\[I_{n,M}(x)= \sum_{k=0}^{n-1}\binom{n}{k}\frac{1}{2^{k}}(n-k)\] \[\left((n-k-1)\int_{1}^{\pi/x}\frac{\log(t)^{n-2}}{t}\log(2e+1/( xt))\ dt-\int_{1}^{\pi/x}\frac{\log(t)^{n-1}}{t}\log(2e+1/(xt))\ dt\right)\] \[= \int_{1}^{\pi/x}\frac{\log(t)^{n-2}}{t}\log(2e+1/(xt))\ dt\sum_{k =0}^{n-1}\binom{n}{k}\frac{1}{2^{k}}(n-k)(n-k-1)\] \[-\int_{1}^{\pi/x}\frac{\log(t)^{n-1}}{t}\log(2e+1/(xt))\ dt\sum_{k =0}^{n-1}\binom{n}{k}\frac{1}{2^{k}}(n-k).\]
The sums can be computed explicitly,
\[\sum_{k=0}^{n-1}\binom{n}{k}\frac{1}{2^{k}}(n-k)(n-k-1) =\left(\frac{3}{2}\right)^{n-2}n(n-1),\] \[\sum_{k=0}^{n-1}\binom{n}{k}\frac{1}{2^{k}}(n-k) =\left(\frac{3}{2}\right)^{n-1}n.\]
Inserting this back we have
\[I_{n,M}(x)=\left(\frac{3}{2}\right)^{n-2}n\left((n-1)\int_{1}^{\pi/x}\frac{ \log(t)^{n-2}}{t}\log(2e+1/(xt))\ dt-\frac{3}{2}\int_{1}^{\pi/x}\frac{\log(t)^{ n-1}}{t}\log(2e+1/(xt))\ dt\right).\]
Next we are interested in computing integrals of the form
\[\int_{1}^{\pi/x}\frac{\log(t)^{l}}{t}\log(2e+1/(xt))\ dt.\]
Since \(n\geq 1\) we need to consider \(l\geq-1\). However, the only case for which we have \(l=-1\) is for \(n=1\), in which case the factor \(n-1\) in front is zero. Hence, we only need to consider \(l\geq 0\). Using that \(\log(2e+1/(xt))=\log(1+2ext)-\log(x)-\log(t)\) we get
\[\int_{1}^{\pi/x}\frac{\log(t)^{l}}{t}\log(2e+1/(xt))\ dt=\int_{1}^{\pi/x}\frac{ \log(t)^{l}}{t}\log(1+2ext)\ dt-\log(x)\int_{1}^{\pi/x}\frac{\log(t)^{l}}{t}\ dt-\int_{1}^{\pi/x}\frac{\log(t)^{l+1}}{t}\ dt.\]
For \(1\leq t\leq\pi/x\) we have \(0\leq\log(1+2ext)\leq\log(1+2e\pi)\), hence
\[\int_{1}^{\pi/x}\frac{\log(t)^{l}}{t}\log(1+2ext)\ dt=D_{l,x}\int_{1}^{\pi/x} \frac{\log(t)^{l}}{t}\ dt\]
for some \(D_{l,x}\in[0,\log(1+2e\pi)]\). This gives us
\[\int_{1}^{\pi/x}\frac{\log(t)^{l}}{t}\log(2e+1/(xt))\ dt =(D_{l,x}-\log(x))\int_{1}^{\pi/x}\frac{\log(t)^{l}}{t}\ dt-\int_ {1}^{\pi/x}\frac{\log(t)^{l+1}}{t}\ dt\] \[=(D_{l,x}-\log(x))\frac{\log(\pi/x)^{l+1}}{l+1}-\frac{\log(\pi/x )^{l+2}}{l+2},\]
valid for \(l\geq 0\). Inserting this back into \(I_{n,M}\) we get for \(n\geq 2\)
\[I_{n,M}(x)= \left(\frac{3}{2}\right)^{n-2}n\Bigg{(}(n-1)\left((D_{n-2,x}- \log(x))\frac{\log(\pi/x)^{n-1}}{n-1}-\frac{\log(\pi/x)^{n}}{n}\right)\] \[-\frac{3}{2}\left((D_{n-1,x}-\log(x))\frac{\log(\pi/x)^{n}}{n}- \frac{\log(\pi/x)^{n+1}}{n+1}\right)\Bigg{)}\] \[= \left(\frac{3}{2}\right)^{n-2}\log(\pi/x)^{n-1}\Bigg{(}n(D_{n-2, x}-\log(x))-(n-1)\log(\pi/x)\] \[-\frac{3}{2}(D_{n-1,x}-\log(x))\log(\pi/x)+\frac{3n}{2(n+1)}\log (\pi/x)^{2}\Bigg{)}.\]
For \(n=1\) we instead get
\[I_{1,M}(x)= \left(\frac{3}{2}\right)^{-1}\log(\pi/x)^{-1}\left(-\frac{3}{2}(D_{ -1,x}-\log(x))\log(\pi/x)+\frac{3}{4}\log(\pi/x)^{2}\right).\]
Inserting this into the sum for \(n\) and splitting it in parts we get
\[\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}I_{n,M}(x)= \sum_{n=2}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}\left(\frac{3} {2}\right)^{n-2}\log(\pi/x)^{n-1}nD_{n-2,x}\] \[-\log(x)\sum_{n=2}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!} \left(\frac{3}{2}\right)^{n-2}\log(\pi/x)^{n-1}n\] \[-\sum_{n=2}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}\left(\frac{3 }{2}\right)^{n-2}\log(\pi/x)^{n}(n-1)\] \[-\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}\left(\frac {3}{2}\right)^{n-1}\log(\pi/x)^{n}D_{n-1,x}\] \[+\log(x)\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!} \left(\frac{3}{2}\right)^{n-1}\log(\pi/x)^{n}\] \[+\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}\left(\frac {3}{2}\right)^{n-1}\log(\pi/x)^{n+1}\frac{n}{n+1}.\]
We begin by noting that
\[\sum_{n=2}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}\left(\frac{3 }{2}\right)^{n-2}\log(\pi/x)^{n-1}nD_{n-2,x} =-(1+\alpha)\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!} \left(\frac{3}{2}\right)^{n-1}\log(\pi/x)^{n}D_{n-1,x},\] \[\sum_{n=2}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}\left(\frac{3 }{2}\right)^{n-2}\log(\pi/x)^{n-1}n =-(1+\alpha)\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!} \left(\frac{3}{2}\right)^{n-1}\log(\pi/x)^{n},\] \[\sum_{n=2}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}\left(\frac{3 }{2}\right)^{n-2}\log(\pi/x)^{n}(n-1) =-(1+\alpha)\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!} \left(\frac{3}{2}\right)^{n-1}\log(\pi/x)^{n+1}\frac{n}{n+1},\]
and that since \(D_{l,x}\in[0,\log(1+2e\pi)]\) for all values of \(l\) and \(x\) we have that
\[\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}\left(\frac{3}{2}\right)^ {n-1}\log(\pi/x)^{n}D_{n-1,x}=D_{x}\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha )^{n}}{n!}\left(\frac{3}{2}\right)^{n-1}\log(\pi/x)^{n}\]
for some \(D_{x}\in[-\log(1+2e\pi),\log(1+2e\pi)]\). With this we get
\[\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}I_{n,M}(x)= -(D_{x}-\log(x))(2+\alpha)\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+ \alpha)^{n}}{n!}\left(\frac{3}{2}\right)^{n-1}\log(\pi/x)^{n}\] \[+(2+\alpha)\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!} \left(\frac{3}{2}\right)^{n-1}\log(\pi/x)^{n+1}\frac{n}{n+1}.\]
Explicitly computing the two sums we have
\[\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}\left(\frac{3}{2 }\right)^{n-1}\log(\pi/x)^{n} =\frac{2}{3}\sum_{n=1}^{\infty}(-1)^{n}\frac{\left(\frac{3}{2}(1+ \alpha)\log(\pi/x)\right)^{n}}{n!}\] \[=\frac{2}{3}\left(e^{-\frac{3}{2}(1+\alpha)\log(\pi/x)}-1\right)\] \[=\frac{2}{3}\left(\left(\frac{x}{\pi}\right)^{\frac{3}{2}(1+ \alpha)}-1\right),\] \[\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}\left(\frac{ 3}{2}\right)^{n-1}\log(\pi/x)^{n+1}\frac{n}{n+1} =-\left(\frac{2}{3}\right)^{2}\frac{1}{1+\alpha}\sum_{n=1}^{\infty }(-1)^{n+1}\frac{\left(\frac{3}{2}(1+\alpha)\log(\pi/x)\right)^{n+1}}{(n+1)!}n\] \[=-\left(\frac{2}{3}\right)^{2}\frac{1}{1+\alpha}\left(1-\left(1+ \frac{3}{2}(1+\alpha)\log(\pi/x)\right)e^{-\frac{3}{2}(1+\alpha)\log(\pi/x)} \right)\] \[=-\left(\frac{2}{3}\right)^{2}\frac{1}{1+\alpha}\left(1-\left(1+ \frac{3}{2}(1+\alpha)\log(\pi/x)\right)\left(\frac{x}{\pi}\right)^{\frac{3}{2 }(1+\alpha)}\right).\]
This gives us
\[\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}I_{n,M}(x)=-( D_{x}-\log x)(2+\alpha)\frac{2}{3}\left(\left(\frac{x}{\pi}\right)^{\frac{3}{2 }(1+\alpha)}-1\right)\\ -(2+\alpha)\left(\frac{2}{3}\right)^{2}\frac{1}{1+\alpha}\left(1 -\left(1+\frac{3}{2}(1+\alpha)\log(\pi/x)\right)\left(\frac{x}{\pi}\right)^{ \frac{3}{2}(1+\alpha)}\right).\]
Simplifying we get
\[\sum_{n=1}^{\infty}(-1)^{n}\frac{(1+\alpha)^{n}}{n!}I_{n,M}(x)=\frac{4+2\alpha }{3}\left(\left(D_{x}-\log x-\frac{2}{3}\frac{1}{1+\alpha}\right)\left(1- \left(\frac{x}{\pi}\right)^{\frac{3}{2}(1+\alpha)}\right)+\log\left(\frac{ \pi}{x}\right)\left(\frac{x}{\pi}\right)^{\frac{3}{2}(1+\alpha)}\right).\]
Multiplying this with the factor
\[\frac{1}{1-x^{1+\alpha+(1+\alpha)^{2}/2}}\frac{1}{\log(1/x)}\]
gives us \(G_{\alpha,2,M}\).
A bound for \(G_{\alpha,2,M}(x)\) is given in the following lemma.
**Lemma F.3**.: _For \(\alpha\in(-1,0)\) and \(x<\frac{1}{\pi^{3}}\), \(G_{\alpha,2,M}\) satisfies the following bound_
\[G_{\alpha,2,M}(x)\leq\frac{4+2\alpha}{3}\left(\frac{2\log(1+2e\pi)}{\log(1/x )}+1\right).\]
Proof.: As a first step we split \(G_{\alpha,2,M}(x)\) into two terms
\[G_{\alpha,2,M}(x)=\frac{4+2\alpha}{3}\frac{1}{\log(1/x)}\frac{D_{ x}}{1-x^{1+\alpha+(1+\alpha)^{2}/2}}\left(1-\left(\frac{x}{\pi}\right)^{\frac{3} {2}(1+\alpha)}\right)\\ +\frac{4+2\alpha}{3}\frac{1}{1-x^{1+\alpha+(1+\alpha)^{2}/2}} \frac{1}{\log(1/x)}\left(\log\left(\frac{\pi}{x}\right)\left(\frac{x}{\pi} \right)^{\frac{3}{2}(1+\alpha)}-\left(\log x+\frac{2}{3}\frac{1}{1+\alpha} \right)\left(1-\left(\frac{x}{\pi}\right)^{\frac{3}{2}(1+\alpha)}\right)\right)\]
For the first term we focus on the part
\[\frac{D_{x}}{1-x^{1+\alpha+(1+\alpha)^{2}/2}}\left(1-\left(\frac{x}{\pi} \right)^{\frac{3}{2}(1+\alpha)}\right)=D_{x}\frac{1-e^{\frac{3}{2}(1+\alpha)( \log x-\log\pi)}}{1-e^{(1+\alpha)(1+(1+\alpha)/2)\log x}}.\]
Adding and subtracting \(e^{(1+\alpha)(1+(1+\alpha)/2)\log x}\) to the numerator we can write this as
\[D_{x}\left(1+e^{(1+\alpha)(1+(1+\alpha)/2)\log x}\frac{1-e^{\frac{ 3}{2}(1+\alpha)(\log x-\log\pi)-(1+\alpha)(1+(1+\alpha)/2)\log x}}{1-e^{(1+ \alpha)(1+(1+\alpha)/2)\log x}}\right)\\ =D_{x}\left(1+e^{(1+\alpha)(1+(1+\alpha)/2)\log x}\frac{1-e^{(1+ \alpha)\left(\frac{3}{2}(\log x-\log\pi)-(1+(1+\alpha)/2)\log x\right)}}{1-e^{( 1+\alpha)(1+(1+\alpha)/2)\log x}}\right)\]
Since \((1+\alpha)(1+(1+\alpha)/2)\log x\) is negative this is bounded by
\[D_{x}\left(1+\frac{1-e^{(1+\alpha)\left(\frac{3}{2}(\log x-\log\pi)-(1+(1+ \alpha)/2)\log x\right)}}{1-e^{(1+\alpha)(1+(1+\alpha)/2)\log x}}\right)\]
Since \(x<\frac{1}{\pi^{3}}\) we have
\[(1+\alpha)\left(\frac{3}{2}(\log x-\log\pi)-(1+(1+\alpha)/2) \log x\right)>(1+\alpha)\left(\frac{3}{2}\left(\log x+\frac{1}{3}\log x \right)-(1+(1+\alpha)/2)\log x\right)\\ =(1+\alpha)(1-(1+\alpha)/2)\log x.\]
From which we get
\[e^{(1+\alpha)\left(\frac{3}{2}(\log x-\log\pi)-(1+(1+\alpha)/2)\log x\right)} >e^{(1+\alpha)(1+(1+\alpha)/2)\log x}\]
and hence
\[\frac{1-e^{(1+\alpha)\left(\frac{3}{2}(\log x-\log\pi)-(1+(1+\alpha)/2)\log x \right)}}{1-e^{(1+\alpha)(1+(1+\alpha)/2)\log x}}<1.\]
From this we get the upper bound
\[\frac{D_{x}}{1-x^{1+\alpha+(1+\alpha)^{2}/2}}\left(1-\left(\frac{x}{\pi} \right)^{\frac{3}{2}(1+\alpha)}\right)<2D_{x}.\]
Together with the bound \(D_{x}<\log(1+2e\pi)\) this gives us the first part of the bound for \(G_{\alpha,2,M}(x)\).
For the second term we want to prove that
\[\frac{1}{1-x^{1+\alpha+(1+\alpha)^{2}/2}}\frac{1}{\log(1/x)}\left(\log\left( \frac{\pi}{x}\right)\left(\frac{x}{\pi}\right)^{\frac{3}{2}(1+\alpha)}-\left( \log x+\frac{2}{3}\frac{1}{1+\alpha}\right)\left(1-\left(\frac{x}{\pi} \right)^{\frac{3}{2}(1+\alpha)}\right)\right) \tag{48}\]
is bounded by \(1\). To begin with we note that
\[\log\left(\frac{\pi}{x}\right)\left(\frac{x}{\pi}\right)^{\frac{ 3}{2}(1+\alpha)}-\left(\log x+\frac{2}{3}\frac{1}{1+\alpha}\right)\left(1- \left(\frac{x}{\pi}\right)^{\frac{3}{2}(1+\alpha)}\right)\\ =\frac{1}{1+\alpha}\left(-\frac{2}{3}-(1+\alpha)\log x+\left( \frac{2}{3}+(1+\alpha)\log(\pi)\right)\left(\frac{x}{\pi}\right)^{\frac{3}{2}( 1+\alpha)}\right).\]
This allows us to write (48) as
\[\frac{1}{\left(1-x^{1+\alpha+(1+\alpha)^{2}/2}\right)(1+\alpha)\log x}\left( \frac{2}{3}+(1+\alpha)\log x-\left(\frac{2}{3}+(1+\alpha)\log(\pi)\right) \left(\frac{x}{\pi}\right)^{\frac{3}{2}(1+\alpha)}\right).\]
If we let \(s=(1+\alpha)\log x\) we can write it as
\[\frac{1}{\left(1-e^{(1+(1+\alpha)/2)s}\right)s}\left(\frac{2}{3}+s-\left( \frac{2}{3}+(1+\alpha)\log\pi\right)e^{\frac{3}{2}s-\frac{3}{2}(1+\alpha)\log \pi}\right).\]
Since the denominator is negative it is enough to verify that
\[\frac{2}{3}+s-\left(\frac{2}{3}+(1+\alpha)\log\pi\right)e^{\frac{3}{2}s-\frac{3} {2}(1+\alpha)\log\pi}>\left(1-e^{(1+(1+\alpha)/2)s}\right)s\]
to prove that the quotient is bounded by \(1\). If we let \(v=(1+\alpha)\log\pi\) and simplify the inequality we get
\[\frac{2}{3}-\left(\frac{2}{3}-v\right)e^{\frac{3}{2}(s-v)}+se^{\frac{3+\alpha }{2}s}>0. \tag{49}\]
The left-hand side is increasing in \(v\); to see this we differentiate it with respect to \(v\) to get
\[\frac{4-3v}{2}e^{\frac{3}{2}(s-v)}.\]
Since \(0<v=(1+\alpha)\log\pi<\log\pi<\frac{4}{3}\) this is positive, and the value of the left-hand side of (49) is hence lower bounded by the value for \(v=0\), from which we get the inequality
\[\frac{2}{3}-\frac{2}{3}e^{\frac{3}{2}s}+se^{\frac{3+\alpha}{2}s}>0. \tag{50}\]
Note that \(s=(1+\alpha)\log x<0\). We want to prove that the left-hand side is decreasing in \(s\) so that it is lower bounded by the value for \(s=0\), which is \(0\). To see that the left-hand side of (50) is decreasing in \(s\) we differentiate it with respect to \(s\), giving us
\[-e^{\frac{3}{2}s}+e^{\frac{3+\alpha}{2}s}+s\frac{3+\alpha}{2}e^{\frac{3+ \alpha}{2}s}=e^{\frac{3}{2}s}\left(e^{\frac{\alpha}{2}s}\left(1+s\frac{3+ \alpha}{2}\right)-1\right).\]
We want to check that
\[e^{\frac{\alpha}{2}s}\left(1+s\frac{3+\alpha}{2}\right)-1<0. \tag{51}\]
If we again differentiate the left-hand side with respect to \(s\) we get
\[\frac{1}{2}e^{\frac{\alpha}{2}s}\left(3+2\alpha+\frac{(3+\alpha)\alpha}{2}s \right).\]
We have \(3+2\alpha>0\) and \(\frac{(3+\alpha)\alpha}{2}s>0\), so this is positive and hence (51) is upper bounded by the value at \(s=0\), where it is \(0\). It follows that the left-hand side of (50) is decreasing in \(s\). With all of this we have shown that (48) is bounded by \(1\) and the result follows.
To bound \(G_{\alpha,2,R}(x)\) we factor out \(1+\alpha\) from the sum, take the absolute value termwise, and move it inside the integral, to get
\[G_{\alpha,2,R}(x)\leq\frac{1+\alpha}{1-x^{1+\alpha+(1+\alpha)^ {2}/2}}\frac{1}{\log(1/x)}\\ \sum_{n=1}^{\infty}\frac{(1+\alpha)^{n-1}}{n!}\sum_{k=0}^{n-1} \binom{n}{k}\frac{1}{2^{k}}\int_{1}^{\pi/x}\log(t)^{k}|h_{n-k}(t)|t\log(2e+1/( xt))\ dt.\]
Furthermore we use the bound \(\log(2e+1/(xt))<\log(2e+1/x)\) to get
\[G_{\alpha,2,R}(x)\leq\frac{1+\alpha}{1-x^{1+\alpha+(1+\alpha)^{2}/2}}\frac{ \log(2e+1/x)}{\log(1/x)}\sum_{n=1}^{\infty}\frac{(1+\alpha)^{n-1}}{n!}\sum_{k =0}^{n-1}\binom{n}{k}\frac{1}{2^{k}}\int_{1}^{\pi/x}\log(t)^{k}|h_{n-k}(t)|t\ dt. \tag{52}\]
We can bound
\[\frac{1+\alpha}{1-x^{1+\alpha+(1+\alpha)^{2}/2}}\]
by handling the removable singularity at \(\alpha=-1\) and the second factor by writing it as
\[\frac{\log(2e+1/x)}{\log(1/x)}=1+\frac{\log(1+2ex)}{\log(1/x)}.\]
For the remaining part, the first term in the sum, with \(n=1\), is treated explicitly in Lemma F.4. For the terms with \(n\geq 2\) we split the interval of integration into two parts and bound them separately. More precisely, we look at the two terms
\[G_{\alpha,2,R,1}(x)= \sum_{n=2}^{\infty}\frac{(1+\alpha)^{n-1}}{n!}\sum_{k=0}^{n-1} \binom{n}{k}\frac{1}{2^{k}}\int_{1}^{2}\log(t)^{k}|h_{n-k}(t)|t\ dt,\] \[G_{\alpha,2,R,2}(x)= \sum_{n=2}^{\infty}\frac{(1+\alpha)^{n-1}}{n!}\sum_{k=0}^{n-1} \binom{n}{k}\frac{1}{2^{k}}\int_{2}^{\pi/x}\log(t)^{k}|h_{n-k}(t)|t\ dt\]
which are handled in Lemmas F.5 and F.6.
**Lemma F.4**.: _For \(\alpha\in(-1,0)\) and \(x<1\) the first term for the sum in (52) is bounded by \(\frac{1}{2}\)._
Proof.: The term is given by
\[\frac{(1+\alpha)^{n-1}}{n!}\sum_{k=0}^{n-1}\binom{n}{k}\frac{1}{2^{k}}\int_{1 }^{\pi/x}\log(t)^{k}|h_{n-k}(t)|t\ dt.\]
Inserting \(n=1\) gives us
\[\int_{1}^{\pi/x}|h_{1}(t)|t\ dt.\]
We have
\[h_{1}(t)=\log(t-1)+\log(t+1)-2\log(t)+\frac{1}{t^{2}},\]
giving us
\[\int_{1}^{\pi/x}|h_{1}(t)|t\ dt=\int_{1}^{\pi/x}\left|\log(t-1)+\log(t+1)-2 \log(t)+\frac{1}{t^{2}}\right|t\ dt.\]
Note that for \(t>1\) we have
\[h_{1}(t)=\log(1-1/t)+\log(1+1/t)+\frac{1}{t^{2}}=-\sum_{n=2}^{\infty}\frac{1 }{nt^{2n}}<0,\]
so the absolute value can be replaced with a minus sign. If we also integrate to infinity we get the upper bound
\[\int_{1}^{\pi/x}|h_{1}(t)|t\ dt\leq\int_{1}^{\infty}-\left(\log(t-1)+\log(t+1 )-2\log(t)+\frac{1}{t^{2}}\right)t\ dt.\]
The integral can be computed explicitly to be \(\frac{1}{2}\), giving us the required upper bound.
**Lemma F.5**.: _For \(\alpha\in(-1,0)\) and \(x<1\), \(G_{\alpha,2,R,1}\) satisfies the following bound_
\[G_{\alpha,2,R,1}(x)\leq 2\left(\sqrt{e}\frac{1+\alpha}{-\alpha}+9\frac{e^{3(1+ \alpha)}-3(1+\alpha)-1}{3(1+\alpha)}+(2+\alpha)e^{3(1+\alpha)/2}-1\right).\]
Proof.: As a first step our goal is to compute an upper bound of the integral
\[H_{k}=\int_{1}^{2}\log(t)^{k}|h_{n-k}(t)|t\ dt.\]
On the interval of integration we have
\[\log(t)^{k}t\leq 2\log(2)^{k}\]
and hence
\[H_{k}\leq 2\log(2)^{k}\int_{1}^{2}\left|h_{n-k}(t)\right|\ dt.\]
For \(1<t<2\) we have
\[\left|h_{k}(t)\right| =\left|\log(t-1)^{k}+\log(t+1)^{k}-2\log(t)^{k}-\frac{k(k-1-\log t )\log(t)^{k-2}}{t^{2}}\right|\] \[\leq(-1)^{k}\log(t-1)^{k}+\log(t+1)^{k}+2\log(t)^{k}+\frac{k(k-1 )}{t^{2}}\log(t)^{k-2}+\frac{k}{t^{2}}\log(t)^{k-1}\] \[\leq(-1)^{k}\log(t-1)^{k}+\log(3)^{k}+2\log(2)^{k}+k(k-1)\log(2)^ {k-2}+k\log(2)^{k-1}.\]
This, together with \(\int_{1}^{2}(-1)^{k}\log(t-1)^{k}\ dt=k!\), gives us
\[\int_{1}^{2}\left|h_{n-k}(t)\right|\ dt\leq(n-k)!+\log(3)^{n-k}+2\log(2)^{n-k }+(n-k)(n-k-1)\log(2)^{n-k-2}+(n-k)\log(2)^{n-k-1}.\]
Combining this with \(\log(2)<1<\log(3)<2\) and \(n\geq 2\) we get
\[H_{k}\leq 2\left((n-k)!+3\cdot 2^{n}+(n-k)^{2}\right).\]
This gives us the upper bound
\[G_{\alpha,2,R,1}(x)\leq 2\sum_{n=2}^{\infty}\frac{(1+\alpha)^{n-1}}{n!}\Bigg{(} \sum_{k=0}^{n-1}\binom{n}{k}\frac{(n-k)!}{2^{k}}+3\cdot 2^{n}\sum_{k=0}^{n-1} \binom{n}{k}\frac{1}{2^{k}}+\sum_{k=0}^{n-1}\binom{n}{k}\frac{(n-k)^{2}}{2^{k }}\Bigg{)}.\]
Explicitly computing the inner sums we have
\[\sum_{k=0}^{n-1}\binom{n}{k}\frac{(n-k)!}{2^{k}} =\sqrt{e}n\Gamma\left(n,\frac{1}{2}\right)<\sqrt{e}n!,\] \[\sum_{k=0}^{n-1}\binom{n}{k}\frac{1}{2^{k}} =\left(\frac{3}{2}\right)^{n}-\frac{1}{2^{n}}<\left(\frac{3}{2} \right)^{n},\] \[\sum_{k=0}^{n-1}\binom{n}{k}\frac{(n-k)^{2}}{2^{k}} =\frac{n(2n+1)}{3}\left(\frac{3}{2}\right)^{n-1},\]
here \(\Gamma(n,z)\) is the upper incomplete gamma function, which satisfies \(\Gamma(n,z)<\Gamma(n)=(n-1)!\) for \(z>0\). Inserting this back gives us
\[G_{\alpha,2,R,1}(x)\leq 2\sum_{n=2}^{\infty}\frac{(1+\alpha)^{n-1}}{n!}\left( \sqrt{e}n!+3\cdot 2^{n}\left(\frac{3}{2}\right)^{n}+\frac{n(2n+1)}{3}\left( \frac{3}{2}\right)^{n-1}\right).\]
Splitting it into the three sums we have
\[G_{\alpha,2,R,1}(x)\leq 2\left(\sqrt{e}\sum_{n=2}^{\infty}(1+\alpha)^{n-1}+9 \sum_{n=2}^{\infty}\frac{(1+\alpha)^{n-1}}{n!}3^{n-1}+\frac{1}{3}\sum_{n=2}^{ \infty}\frac{(1+\alpha)^{n-1}}{n!}n(2n+1)\left(\frac{3}{2}\right)^{n-1}\right).\]
For the individual sums we get
\[\sum_{n=2}^{\infty}(1+\alpha)^{n-1} =\frac{1+\alpha}{-\alpha},\] \[\sum_{n=2}^{\infty}\frac{(1+\alpha)^{n-1}}{n!}3^{n-1} =\frac{e^{3(1+\alpha)}-3(1+\alpha)-1}{3(1+\alpha)},\] \[\sum_{n=2}^{\infty}\frac{(1+\alpha)^{n-1}}{n!}n(2n+1)\left(\frac{ 3}{2}\right)^{n-1} =3(2+\alpha)e^{3(1+\alpha)/2}-3.\]
Combining all of this we get the bound
\[G_{\alpha,2,R,1}(x)\leq 2\left(\sqrt{e}\frac{1+\alpha}{-\alpha}+9\frac{e^{3(1+ \alpha)}-3(1+\alpha)-1}{3(1+\alpha)}+(2+\alpha)e^{3(1+\alpha)/2}-1\right),\]
which is what we wanted to prove.
**Lemma F.6**.: _For \(\alpha\in(-1,0)\) and \(x<1\), \(G_{\alpha,2,R,2}\) satisfies the following bound_
\[G_{\alpha,2,R,2}(x)<192\log(2)\frac{1+\alpha}{-\alpha}.\]
Proof.: To bound the integral we need to better understand the behavior of \(h_{k}(t)\). Recall that we have
\[h_{k}(t)=\log(t-1)^{k}+\log(t+1)^{k}-2\log(t)^{k}-\frac{k(k-1-\log t)\log(t)^{ k-2}}{t^{2}}.\]
As a first step we use that
\[\log(t-1)^{k}+\log(t+1)^{k}-2\log(t)^{k} =(\log(t)+\log(1-1/t))^{k}+(\log(t)+\log(1+1/t))^{k}-2\log(t)^{k}\] \[=\sum_{j=0}^{k-1}\binom{k}{j}\log(t)^{j}\left(\log(1-1/t)^{k-j}+ \log(1+1/t)^{k-j}\right)\]
to get
\[h_{k}(t)=\sum_{j=0}^{k-1}\binom{k}{j}\log(t)^{j}\left(\log(1-1/t)^{k-j}+\log( 1+1/t)^{k-j}\right)-\frac{k(k-1-\log t)\log(t)^{k-2}}{t^{2}}.\]
Next, using that
\[\log(1-1/t)=-\frac{1}{t}\sum_{i=0}^{\infty}\frac{1}{i+1}\frac{1}{t^{i}},\quad\log(1+1/t)=\frac{1}{t}\sum_{i=0}^{\infty}(-1)^{i}\frac{1}{i+1}\frac{1}{t^{i}},\]
we can write it as
\[h_{k}(t)=\sum_{j=0}^{k-1}\binom{k}{j}\log(t)^{j}\frac{1}{t^{k-j} }\left((-1)^{k-j}\left(\sum_{i=0}^{\infty}\frac{1}{i+1}\frac{1}{t^{i}}\right) ^{k-j}+\left(\sum_{i=0}^{\infty}(-1)^{i}\frac{1}{i+1}\frac{1}{t^{i}}\right)^{ k-j}\right)\\ -\frac{k(k-1-\log t)\log(t)^{k-2}}{t^{2}}.\]
Using formulas for powers of power series (see e.g. [28]) we have
\[\left(\sum_{i=0}^{\infty}\frac{1}{i+1}\frac{1}{t^{i}}\right)^{j} =\sum_{i=0}^{\infty}c_{i,j}\frac{1}{t^{i}},\] \[\left(\sum_{i=0}^{\infty}(-1)^{i}\frac{1}{i+1}\frac{1}{t^{i}} \right)^{j} =\sum_{i=0}^{\infty}d_{i,j}\frac{1}{t^{i}}.\]
with
\[c_{0,j}=1,\quad c_{i,j}=\frac{1}{i}\sum_{l=1}^{i}\frac{lj-i+l}{l+1}c_{i-l,j},\ i\geq 1\]
and
\[d_{0,j}=1,\quad d_{i,j}=\frac{1}{i}\sum_{l=1}^{i}(-1)^{l}\frac{lj-i+l}{l+1}d_{i- l,j},\ i\geq 1.\]
By induction it can be checked that \(d_{i,j}=(-1)^{i}c_{i,j}\), hence
\[\left(\sum_{i=0}^{\infty}(-1)^{i}\frac{1}{i+1}\frac{1}{t^{i}}\right)^{j}=\sum_ {i=0}^{\infty}(-1)^{i}c_{i,j}\frac{1}{t^{i}}.\]
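As an aside, the recurrences are cheap to evaluate, and both the identity \(d_{i,j}=(-1)^{i}c_{i,j}\) and the evaluation \(\sum_{i}c_{i,j}2^{-i}=\log(4)^{j}\) used later in the proof can be spot-checked in exact rational arithmetic; a small sketch for illustration:

```python
# Evaluate the coefficient recurrences for c_{i,j} and d_{i,j} exactly and
# spot-check d_{i,j} = (-1)^i c_{i,j}, as well as sum_i c_{i,j}/2^i = log(4)^j.
import math
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def c(i, j):
    if i == 0:
        return Fraction(1)
    return Fraction(1, i) * sum(Fraction(l * j - i + l, l + 1) * c(i - l, j)
                                for l in range(1, i + 1))

@lru_cache(maxsize=None)
def d(i, j):
    if i == 0:
        return Fraction(1)
    return Fraction(1, i) * sum((-1)**l * Fraction(l * j - i + l, l + 1) * d(i - l, j)
                                for l in range(1, i + 1))

assert all(d(i, j) == (-1)**i * c(i, j) for i in range(8) for j in range(1, 6))

for j in range(1, 5):
    s = sum(c(i, j) * Fraction(1, 2**i) for i in range(60))
    print(float(s) - math.log(4)**j)  # ~0, up to geometric truncation error
```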
Inserting this back into \(h_{k}(t)\) we get
\[h_{k}(t)=\sum_{j=0}^{k-1}\binom{k}{j}\log(t)^{j}\frac{1}{t^{k-j}}\sum_{i=0}^{ \infty}\left((-1)^{k-j}+(-1)^{i}\right)c_{i,k-j}\frac{1}{t^{i}}-\frac{k(k-1- \log t)\log(t)^{k-2}}{t^{2}},\]
which we can rewrite as
\[h_{k}(t)=\sum_{j=0}^{k-1}\sum_{i=0}^{\infty}\binom{k}{j}\left((-1)^{k-j}+(-1) ^{i}\right)c_{i,k-j}\log(t)^{j}\frac{1}{t^{k-j+i}}-\frac{k(k-1-\log t)\log(t)^ {k-2}}{t^{2}}.\]
Reordering the sums we can further write it as
\[h_{k}(t) =\sum_{m=0}^{\infty}\sum_{j=\max(k-m,0)}^{k-1}\binom{k}{j}\left( (-1)^{k-j}+(-1)^{m+j-k}\right)c_{m+j-k,k-j}\log(t)^{j}\frac{1}{t^{m}}-\frac{k( k-1-\log t)\log(t)^{k-2}}{t^{2}}\] \[=\sum_{m=1}^{\infty}\sum_{j=\max(k-m,0)}^{k-1}\binom{k}{j}\left( (-1)^{k-j}+(-1)^{m+j-k}\right)c_{m+j-k,k-j}\log(t)^{j}\frac{1}{t^{m}}-\frac{k( k-1-\log t)\log(t)^{k-2}}{t^{2}}\] \[=\sum_{m=1}^{\infty}\sum_{j=\max(k-m,0)}^{k-1}\binom{k}{j}(-1)^{ k-j}\left(1+(-1)^{m}\right)c_{m+j-k,k-j}\log(t)^{j}\frac{1}{t^{m}}-\frac{k(k-1- \log t)\log(t)^{k-2}}{t^{2}}\] \[=2\sum_{m=1}^{\infty}\sum_{j=\max(k-2m,0)}^{k-1}\binom{k}{j}(-1)^ {k-j}c_{2m+j-k,k-j}\log(t)^{j}\frac{1}{t^{2m}}-\frac{k(k-1-\log t)\log(t)^{k-2 }}{t^{2}}.\]
The \(m=1\) term exactly cancels the subtracted term, giving us
\[h_{k}(t)=2\sum_{m=2}^{\infty}\sum_{j=\max(k-2m,0)}^{k-1}\binom{k}{j}(-1)^{k-j }c_{2m+j-k,k-j}\log(t)^{j}\frac{1}{t^{2m}}.\]
Taking the absolute value we have
\[|h_{k}(t)|\leq 2\sum_{m=2}^{\infty}\sum_{j=\max(k-2m,0)}^{k-1}\binom{k}{j}c_{2m +j-k,k-j}\log(t)^{j}\frac{1}{t^{2m}}.\]
Using that \(\log(t)^{j}\leq 1+\log(t)^{k-1}\) and \(t^{-2m}\leq 16t^{-4}2^{-2m}\) for \(t\geq 2\), we get the upper bound
\[|h_{k}(t)|\leq 32\frac{1+\log(t)^{k-1}}{t^{4}}\sum_{m=2}^{\infty}\sum_{j=\max(k- 2m,0)}^{k-1}\binom{k}{j}\frac{c_{2m+j-k,k-j}}{2^{2m}}.\]
Note that the sum no longer depends on \(t\); we denote it by \(S_{k}\).
Inserting this back into \(G_{\alpha,2,R,2}\) we get the upper bound
\[G_{\alpha,2,R,2}(x)\leq 32\sum_{n=2}^{\infty}\frac{(1+\alpha)^{n-1}}{n!}\sum_{k=0} ^{n-1}\binom{n}{k}\frac{S_{n-k}}{2^{k}}\int_{2}^{\pi/x}\frac{\log(t)^{k}+\log(t )^{n-1}}{t^{3}}\ dt.\]
The integral can be explicitly computed to be
\[\int_{2}^{\pi/x}\frac{\log(t)^{k}+\log(t)^{n-1}}{t^{3}}\ dt=\frac{\Gamma(k+1,2\log(2))-\Gamma\left(k+1,2\log\left(\frac{\pi}{x}\right)\right)}{2^{k+1}}+\frac{\Gamma(n,2\log(2))-\Gamma\left(n,2\log\left(\frac{\pi}{x}\right)\right)}{2^{n}}.\]
The differences between the incomplete gamma functions are bounded by \(\Gamma(k+1)=k!\) and \(\Gamma(n)=(n-1)!\) respectively, giving us
\[\int_{2}^{\pi/x}\frac{\log(t)^{k}+\log(t)^{n-1}}{t^{3}}\ dt\leq\frac{k!}{2^{k+1}}+ \frac{(n-1)!}{2^{n}}.\]
For \(0\leq k\leq n-1\) we have \(\frac{k!}{2^{k+1}}<2\frac{(n-1)!}{2^{n}}\) and hence
\[\int_{2}^{\pi/x}\frac{\log(t)^{k}+\log(t)^{n-1}}{t^{3}}\ dt\leq 3\frac{(n-1)!}{2^ {n}}.\]
Inserting this back into \(G_{\alpha,2,R,2}\) gives
\[G_{\alpha,2,R,2}(x) \leq 96\sum_{n=2}^{\infty}\frac{(1+\alpha)^{n-1}}{n!}\sum_{k=0}^{n -1}\binom{n}{k}\frac{S_{n-k}}{2^{k}}\frac{(n-1)!}{2^{n}}\] \[=48\sum_{n=2}^{\infty}\left(\frac{1+\alpha}{2}\right)^{n-1}\frac {1}{n}\sum_{k=0}^{n-1}\binom{n}{k}\frac{S_{n-k}}{2^{k}}.\]
The sum no longer depends on \(x\) and to bound it we need to analyze
\[S_{k}=\sum_{m=2}^{\infty}\sum_{j=\max(k-2m,0)}^{k-1}\binom{k}{j}\frac{c_{2m+j -k,k-j}}{2^{2m}}.\]
The bound \(\binom{k}{j}\leq 2^{k}\) gives us
\[S_{k}\leq 2^{k}\sum_{m=2}^{\infty}\sum_{j=\max(k-2m,0)}^{k-1}\frac{c_{2m+j-k,k- j}}{2^{2m}}.\]
Starting from \(m=0\) and adding back the odd powers of \(2\) gives us
\[S_{k}\leq 2^{k}\sum_{m=0}^{\infty}\sum_{j=\max(k-m,0)}^{k-1}\frac{c_{m+j-k,k- j}}{2^{m}}.\]
Reordering the sums we have
\[\sum_{m=0}^{\infty}\sum_{j=\max(k-m,0)}^{k-1}\frac{c_{m+j-k,k-j}}{2^{m}}=\sum_ {j=0}^{k-1}\frac{1}{2^{k-j}}\sum_{i=0}^{\infty}c_{i,k-j}\frac{1}{2^{i}}.\]
From the definition of \(c_{i,j}\) we get
\[\sum_{i=0}^{\infty}c_{i,k-j}\frac{1}{2^{i}}=\left(\sum_{i=0}^{\infty}\frac{1 }{i+1}\frac{1}{2^{i}}\right)^{k-j}=(-2\log(1-1/2))^{k-j}=\log(4)^{k-j}.\]
Inserting this back into \(S_{k}\) we have
\[S_{k}\leq 2^{k}\sum_{j=0}^{k-1}\left(\frac{\log(4)}{2}\right)^{k-j}=2^{k}\sum_{j =0}^{k-1}\log(2)^{k-j}=2^{k}\log(2)\frac{1-\log(2)^{k}}{1-\log(2)}.\]
We have \(\frac{1-\log(2)^{k}}{1-\log(2)}\leq 4\) and hence \(S_{k}\leq 2^{k+2}\log(2)\). Inserting this back into \(G_{\alpha,2,R,2}\) gives
\[G_{\alpha,2,R,2}(x) =48\sum_{n=2}^{\infty}\left(\frac{1+\alpha}{2}\right)^{n-1}\frac{ 1}{n}\sum_{k=0}^{n-1}\binom{n}{k}\frac{2^{k+2}\log(2)}{2^{k}}\] \[=192\log(2)\sum_{n=2}^{\infty}\left(\frac{1+\alpha}{2}\right)^{n -1}\frac{1}{n}\sum_{k=0}^{n-1}\binom{n}{k}\] \[=192\log(2)\sum_{n=2}^{\infty}\left(\frac{1+\alpha}{2}\right)^{n -1}\frac{1}{n}(2^{n}-1)\] \[\leq 192\log(2)\sum_{n=2}^{\infty}(1+\alpha)^{n-1}\] \[=192\log(2)\frac{1+\alpha}{-\alpha},\]
which is what we wanted to prove.
As mentioned at the beginning of the section, the above methods work well for very small values of \(x\). For small, but not that small, values we have to take another approach. We let
\[G_{\alpha,2,1}(x)=\frac{1}{\log(1/x)}\int_{1}^{2}\frac{(t-1)^{-\alpha-1}+(1+t) ^{-\alpha-1}-2t^{-\alpha-1}}{1+\alpha}t^{(1-\alpha)/2}\log(2e+1/(xt))\ dt\]
and
\[G_{\alpha,2,2}(x)=\frac{1}{\log(1/x)}\int_{2}^{\pi/x}\frac{(t-1)^{-\alpha-1}+ (1+t)^{-\alpha-1}-2t^{-\alpha-1}}{1+\alpha}t^{(1-\alpha)/2}\log(2e+1/(xt))\ dt,\]
giving
\[G_{\alpha,2}(x)=\frac{1+\alpha}{1-x^{1+\alpha+(1+\alpha)^{2}/2}}(G_{\alpha,2, 1}(x)+G_{\alpha,2,2}(x)).\]
The integrand for \(G_{\alpha,2,2}\) is bounded on the interval of integration, so the integral can be computed using numerical integration, as discussed in Appendix D. The only problematic part is evaluating
\[\frac{(t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1}}{1+\alpha},\]
which has a removable singularity at \(\alpha=-1\); this can be handled as described in Appendix A. The integral for \(G_{\alpha,2,1}\) has a singularity at \(t=1\), and to compute it we use the following lemma.
**Lemma F.7**.: _We have_
\[G_{\alpha,2,1}(x)=\frac{C}{\log(1/x)}\frac{1+2^{-\alpha}-3^{-\alpha}+\alpha(- 4+5\cdot 2^{-\alpha}-2\cdot 3^{-\alpha})}{(\alpha-1)\alpha(\alpha+1)}\]
_for some_
\[C\in[2^{-(1+\alpha)/2}\log(2e+1/(2x)),\log(2e+1/x)].\]
Proof.: For \(t\in[1,2]\) we have
\[\log(2e+1/(xt))\in[\log(2e+1/(2x)),\log(2e+1/x)].\]
Since the integrand is positive it follows that
\[G_{\alpha,2,1}(x)=\frac{C_{1}}{\log(1/x)}\int_{1}^{2}\frac{(t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1}}{1+\alpha}t^{(1-\alpha)/2}\ dt,\]
for some \(C_{1}\in[\log(2e+1/(2x)),\log(2e+1/x)]\). Furthermore we have \(t^{(1-\alpha)/2}=tt^{-(1+\alpha)/2}\) and
\[t^{-(1+\alpha)/2}\in[2^{-(1+\alpha)/2},1].\]
This gives us
\[G_{\alpha,2,1}(x)=\frac{C_{1}C_{2}}{\log(1/x)}\int_{1}^{2}\frac{(t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1}}{1+\alpha}t\ dt,\]
for some \(C_{2}\in[2^{-(1+\alpha)/2},1]\). The integral can now be computed explicitly to be
\[\int_{1}^{2}\frac{(t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1}}{1+ \alpha}t\ dt=\frac{1+2^{-\alpha}-3^{-\alpha}+\alpha(-4+5\cdot 2^{-\alpha}-2 \cdot 3^{-\alpha})}{(\alpha-1)\alpha(\alpha+1)}.\]
Finally we note that if we let \(C=C_{1}C_{2}\) then
\[C\in[2^{-(1+\alpha)/2}\log(2e+1/(2x)),\log(2e+1/x)].\]
This gives us the result.
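Both the removable singularity noted above and the explicit integral in the proof can be illustrated with a small floating-point sketch (the rigorous version uses interval arithmetic): writing \(s=1+\alpha\), the constant terms of the numerator cancel, so each power can be evaluated stably via `expm1`, and the closed form can be cross-checked by quadrature.

```python
# Stable evaluation near alpha = -1 plus a quadrature check of the closed form.
import math
import numpy as np
from scipy.integrate import quad

def kernel(alpha, t):
    """((t-1)^(-a-1) + (t+1)^(-a-1) - 2 t^(-a-1)) / (1+a), stable as a -> -1."""
    s = 1 + alpha
    if s == 0:  # at the removable singularity the limit is -log(1 - 1/t^2)
        return -math.log1p(-1 / t**2)
    return (math.expm1(-s * math.log(t - 1)) + math.expm1(-s * math.log(t + 1))
            - 2 * math.expm1(-s * math.log(t))) / s

print(kernel(-1 + 1e-12, 1.5) - kernel(-1.0, 1.5))  # ~0: no cancellation blow-up

alpha = -0.5
lhs = quad(lambda t: kernel(alpha, t) * t, 1, 2)[0]
rhs = (1 + 2**(-alpha) - 3**(-alpha)
       + alpha * (-4 + 5 * 2**(-alpha) - 2 * 3**(-alpha))) / \
      ((alpha - 1) * alpha * (alpha + 1))
print(lhs - rhs)  # ~0 up to quadrature error
```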
### \(R_{\alpha}\)
Recall that
\[R_{\alpha}(x)=2\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{x^{2m}}{(2m)! }\sum_{k=0}^{m-1}\binom{2m}{2k}\int_{0}^{\pi/x}t^{2k+(1-\alpha)/2}\log(2e+1/( xt))\ dt.\]
To begin with, we note that since \(0<-\alpha<1\) we have \((-1)^{m}\zeta(-\alpha-2m)>0\) for all \(m=1,2,\ldots\), due to the zeros of the zeta function on the real line being exactly the even negative integers. As a consequence, all terms in the sum are positive. The following lemma gives a bound.
**Lemma F.8**.: _For \(\alpha\in(-1,0)\), \(x<1\) and \(M\geq 1\), \(R_{\alpha}(x)\) satisfies the following bound_
\[R_{\alpha}(x)<2\log(2e+1/\pi)\Bigg{(}\sum_{m=1}^{M-1}(-1)^{m} \zeta(-\alpha-2m)\frac{\pi^{2m}}{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{1}{ 2k+1+(1-\alpha)/2}\left(\frac{x}{\pi}\right)^{2(m-1-k)}\\ +\pi^{2}2(2\pi)^{1-\alpha-2M}\left|\sin\left(\frac{\pi}{2}\alpha \right)\right|\zeta(2M+1+\alpha)\frac{(\pi(1+1/\pi))^{2M}}{4\pi^{2}-(\pi(1+1/ \pi))^{2}}\Bigg{)}.\]
Proof.: Looking at the integral in the inner sum we write
\[\log(2e+1/(xt))=\log(1+2ext)-\log(xt),\]
giving us
\[\int_{0}^{\pi/x}t^{2k+(1-\alpha)/2}\log(2e+1/(xt))\ dt=\int_{0}^{\pi/x}t^{2k+( 1-\alpha)/2}\log(1+2ext)\ dt-\int_{0}^{\pi/x}t^{2k+(1-\alpha)/2}\log(xt)\ dt.\]
For the first integral we compute an upper bound using that \(\log(1+2ext)\) is bounded by \(\log(1+2e\pi)\); the second integral we compute directly.
\[\int_{0}^{\pi/x}t^{2k+(1-\alpha)/2}\log(1+2ext)\ dt \leq\log(1+2e\pi)\int_{0}^{\pi/x}t^{2k+(1-\alpha)/2}\ dt\] \[=\log(1+2e\pi)\frac{1}{2k+1+(1-\alpha)/2}\left(\frac{\pi}{x} \right)^{2k+1+(1-\alpha)/2}\] \[\leq\frac{\log(1+2e\pi)}{2k+1+(1-\alpha)/2}\left(\frac{\pi}{x} \right)^{2(k+1)}.\]
The second integral can be computed explicitly to be
\[\int_{0}^{\pi/x}t^{2k+(1-\alpha)/2}\log(xt)\ dt=\frac{(2k+1+(1-\alpha)/2)\log \pi-1}{(2k+1+(1-\alpha)/2)^{2}}\left(\frac{\pi}{x}\right)^{2k+1+(1-\alpha)/2},\]
and we note that this is positive for \(k\geq 0\). An upper bound for the full integral is hence given by
\[\int_{0}^{\pi/x}t^{2k+(1-\alpha)/2}\log(2e+1/(xt))\ dt\leq\frac{\log(2e+1/\pi) }{2k+1+(1-\alpha)/2}\left(\frac{\pi}{x}\right)^{2(k+1)}\]
For the inner sum we thus get
\[\sum_{k=0}^{m-1}\binom{2m}{2k}\int_{0}^{\pi/x}t^{2k+(1-\alpha)/2}\log(2e+1/(xt ))\ dt\leq\log(2e+1/\pi)\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{1}{2k+1+(1-\alpha) /2}\left(\frac{\pi}{x}\right)^{2(k+1)}.\]
Factoring out \(\left(\frac{\pi}{x}\right)^{2m}\) and inserting into \(R_{\alpha}\) we have
\[R_{\alpha}(x)\leq 2\log(2e+1/\pi)\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m) \frac{\pi^{2m}}{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{1}{2k+1+(1-\alpha)/ 2}\left(\frac{x}{\pi}\right)^{2(m-1-k)}.\]
Taking \(M\geq 1\) we split this into one finite sum and one tail as
\[R_{\alpha}(x)\leq 2\log(2e+1/\pi)\Bigg{(} \sum_{m=1}^{M-1}(-1)^{m}\zeta(-\alpha-2m)\frac{\pi^{2m}}{(2m)!} \sum_{k=0}^{m-1}\binom{2m}{2k}\frac{1}{2k+1+(1-\alpha)/2}\left(\frac{x}{\pi} \right)^{2(m-1-k)}\] \[+\sum_{m=M}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{\pi^{2m}}{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{1}{2k+1+(1-\alpha)/2}\left(\frac{x}{\pi }\right)^{2(m-1-k)}\Bigg{)}.\]
The finite sum can be enclosed directly; for the second sum we note that for \(x<1\) we have
\[\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{1}{2k+1+(1-\alpha)/2}\left(\frac{x}{\pi}\right)^{2(m-1-k)} <\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{1}{2k+1}\left(\frac{1}{\pi}\right)^{2(m-1-k)}\] \[<\sum_{k=0}^{m}\binom{2m}{2k}\frac{1}{2k+1}\left(\frac{1}{\pi}\right)^{2(m-1-k)}\] \[=\pi^{1-2m}\frac{(\pi+1)^{2m}-(\pi-1)^{2m}+\pi(\pi+1)^{2m}+\pi(\pi-1)^{2m}}{2(1+2m)}\] \[\leq\pi^{1-2m}(\pi+1)^{2m}\frac{1+2\pi}{6}\] \[\leq\pi^{2}\left(1+\frac{1}{\pi}\right)^{2m},\]
and hence
\[\sum_{m=M}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{\pi^{2m}}{(2m)!} \sum_{k=0}^{m-1}\binom{2m}{2k}\frac{1}{2k+1+(1-\alpha)/2}\left(\frac{x}{\pi} \right)^{2(m-1-k)}\\ \leq\pi^{2}\sum_{m=M}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{(\pi (1+1/\pi))^{2m}}{(2m)!}.\]
The last sum is the same as for the tail of the Clausen function occurring in Lemma C.5, since \(\pi(1+1/\pi)<2\pi\) this gives us
\[\sum_{m=M}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{(\pi(1+1/\pi))^{2m}}{(2m)!} \leq 2(2\pi)^{1-\alpha-2M}\left|\sin\left(\frac{\pi}{2}\alpha\right) \right|\zeta(2M+1+\alpha)\frac{(\pi(1+1/\pi))^{2M}}{4\pi^{2}-(\pi(1+1/\pi))^ {2}}.\]
Finally we then get
\[R_{\alpha}(x)<2\log(2e+1/\pi)\Bigg{(}\sum_{m=1}^{M-1}(-1)^{m} \zeta(-\alpha-2m)\frac{\pi^{2m}}{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{1}{ 2k+1+(1-\alpha)/2}\left(\frac{x}{\pi}\right)^{2(m-1-k)}\\ +\pi^{2}2(2\pi)^{1-\alpha-2M}\left|\sin\left(\frac{\pi}{2}\alpha \right)\right|\zeta(2M+1+\alpha)\frac{(\pi(1+1/\pi))^{2M}}{4\pi^{2}-(\pi(1+1/ \pi))^{2}}\Bigg{)},\]
which is what we wanted to prove.
## Appendix G Details for evaluating \(\mathcal{T}_{\alpha}(x)\) with \(\alpha\) near \(-1\) and \(x\) near zero
In this section we discuss how to bound
\[\frac{U_{\alpha}(x)}{\log(1/x)x^{-\alpha+p}}\]
for small \(x\) in the hybrid case when \(\alpha\) is near \(-1\), see Section 11.2.4.
In this case the weight is given by \(w_{\alpha}(x)=|x|^{p}\log(2e+1/|x|)\), giving us
\[U_{\alpha}(x)=x^{1+p}\int_{0}^{\pi/x}|\hat{I}_{\alpha}(x,t)|t^{p}\log(2e+1/( xt))\ dt.\]
Using that
\[\log(2e+1/(xt))=\log(1+2ext)+\log(1/x)+\log(1/t)\]
we split \(U_{\alpha}\) into three parts
\[U_{\alpha}^{1}(x) =x^{1+p}\int_{0}^{\pi/x}|\hat{I}_{\alpha}(x,t)|t^{p}\log(1+2ext)\ dt,\] \[U_{\alpha}^{2}(x) =x^{1+p}\int_{0}^{\pi/x}|\hat{I}_{\alpha}(x,t)|t^{p}\ dt,\] \[U_{\alpha}^{3}(x) =x^{1+p}\int_{0}^{\pi/x}|\hat{I}_{\alpha}(x,t)|t^{p}\log(1/t)\ dt,\]
satisfying
\[U_{\alpha}(x)=U_{\alpha}^{1}(x)+\log(1/x)U_{\alpha}^{2}(x)+U_{\alpha}^{3}(x).\]
The integral \(U_{\alpha}^{2}\) is the same as what we would get with the weight \(w_{\alpha}(x)=|x|^{p}\), and we can therefore use Lemma 11.6 to bound it.
For \(U^{1}_{\alpha}\) we can use that \(\log(1+2ext)\) is bounded on the interval of integration to factor it out. We then recover the same integral as for \(U^{2}_{\alpha}\), and we can bound it in the same way. If \(x\) is very small this works well, but for \(x\) larger than around \(10^{-5}\) it gives fairly poor bounds. In that case we compute part of the integral using a rigorous integrator instead.
The computation of \(U^{3}_{\alpha}\) is the most complicated part. As in Lemma 11.6, we expand the integrand and split it into two main parts and one remainder part as
\[U^{3}_{\alpha,M,1}(x) =\Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha\right)x^{-\alpha+ p}\int_{0}^{1}\left|(1-t)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1}\right|t^{p} \log(1/t)\ dt\] \[U^{3}_{\alpha,M,2}(x) =\Gamma(1+\alpha)\sin\left(-\frac{\pi}{2}\alpha\right)x^{-\alpha +p}\int_{1}^{\pi/x}((t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1})t^{p} \log(1/t)\ dt\] \[U^{3}_{\alpha,R}(x) =2x^{1+p}\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{x^{2m }}{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k}\int_{0}^{\pi/x}t^{2k+p}\log(1/t)\ dt,\]
with
\[U^{3}_{\alpha}(x)\leq U^{3}_{\alpha,M,1}(x)+U^{3}_{\alpha,M,2}(x)+U^{3}_{ \alpha,R}(x).\]
For \(U^{3}_{\alpha,M,1}\) the integral doesn't depend on \(x\). We compute it using a rigorous integrator on the interval from \(0\) to \(0.999\); on the remaining interval from \(0.999\) to \(1\) we use that \(\log(1/t)\) is bounded to factor it out and integrate explicitly.
For \(U^{3}_{\alpha,M,2}\) we note that \(\log(1/t)\) is negative on the interval of integration, so the integral is increasing in \(x\). To get an upper bound it is thus enough to work with an upper bound of \(x\). We split the interval of integration into several parts that are each treated slightly differently. We let
\[U^{3}_{\alpha,M,2,1}(x) =\int_{1}^{1.001}((t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha -1})t^{p}\log(1/t)\ dt,\] \[U^{3}_{\alpha,M,2,2}(x) =\int_{1.001}^{10^{10}}((t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{ -\alpha-1})t^{p}\log(1/t)\ dt,\] \[U^{3}_{\alpha,M,2,3}(x) =\int_{10^{10}}^{10^{50}}((t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^ {-\alpha-1})t^{p}\log(1/t)\ dt,\] \[U^{3}_{\alpha,M,2,4}(x) =\int_{10^{50}}^{\pi/x}((t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{ -\alpha-1})t^{p}\log(1/t)\ dt.\]
If \(\pi/x\geq 10^{50}\) we then have
\[U^{3}_{\alpha,M,2}=U^{3}_{\alpha,M,2,1}+U^{3}_{\alpha,M,2,2}+U^{3}_{\alpha,M, 2,3}+U^{3}_{\alpha,M,2,4}.\]
If \(\pi/x\) is less than \(10^{50}\) we simply skip the integrals above \(\pi/x\) and cut the last one off at \(\pi/x\). The integral \(U^{3}_{\alpha,M,2,2}\) is computed using a rigorous integrator. For the remaining ones we use that
\[\int_{a}^{b}((t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1})t^{p}\log(1/t )\ dt\geq\log(1/a)\int_{a}^{b}((t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha -1})t^{p}\ dt\]
and integrate explicitly. The primitive function is given by
\[\int((t-1)^{-\alpha-1}+(1+t)^{-\alpha-1}-2t^{-\alpha-1})t^{p}\ dt\\ =-\frac{t^{p-\alpha}}{\alpha-p}\left({}_{2}F_{1}\left(1+\alpha, \alpha-p;1+\alpha-p;\frac{1}{t}\right)+{}_{2}F_{1}\left(1+\alpha,\alpha-p;1+ \alpha-p;-\frac{1}{t}\right)-2\right).\]
The \({}_{2}F_{1}\) functions are evaluated using the hypergeometric series, valid when \(1+\alpha-p\) is not an integer and \(t>1\), with bounds from [42].
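As an illustration (not the rigorous evaluation), the primitive can be spot-checked with arbitrary-precision arithmetic by differentiating it numerically and comparing with the integrand; here with the arbitrarily chosen values \(\alpha=-0.6\), \(p=0.3\), \(t=3\):

```python
# Numerical spot-check that the primitive differentiates to the integrand.
from mpmath import mp, mpf, hyp2f1, diff

mp.dps = 30
alpha, p = mpf(-0.6), mpf(0.3)  # illustrative values, 1 + alpha - p not integer

def primitive(t):
    a, b = 1 + alpha, alpha - p
    return -t**(p - alpha) / (alpha - p) * (
        hyp2f1(a, b, 1 + b, 1 / t) + hyp2f1(a, b, 1 + b, -1 / t) - 2)

def integrand(t):
    return ((t - 1)**(-alpha - 1) + (1 + t)**(-alpha - 1)
            - 2 * t**(-alpha - 1)) * t**p

t0 = mpf(3)
print(diff(primitive, t0) - integrand(t0))  # ~0
```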
For \(U^{3}_{\alpha,R}(x)\) we use the following lemma to compute a bound.
**Lemma G.1**.: _Let \(0<\epsilon<\frac{\pi}{2}\), for \(\alpha\in(-1,0)\), \(x\in[0,\epsilon]\) and \(-\alpha<p<1\) with \(1+\alpha\neq p\) we have_
\[\frac{U_{\alpha,R}^{3}(x)}{x^{-\alpha+p}\log(1/x)}\leq\frac{2x^{2+ \alpha-p}\pi^{p-1}}{\log(1/x)}\sum_{m=1}^{M-1}(-1)^{m}\zeta(-\alpha-2m)\frac{ \pi^{2m}}{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{1}{(2k+1+p)^{2}}\left( \frac{x}{\pi}\right)^{2(m-1-k)}\\ -\frac{2x^{2+\alpha-p}\pi^{p-1}\log(\pi/x)}{\log(1/x)}\sum_{m=1}^ {M-1}(-1)^{m}\zeta(-\alpha-2m)\frac{\pi^{2m}}{(2m)!}\sum_{k=0}^{m-1}\binom{2m} {2k}\frac{1}{2k+1+p}\left(\frac{x}{\pi}\right)^{2(m-1-k)}\\ +\frac{6x^{2+\alpha-p}\pi^{p-1}}{\log(1/x)}\sum_{m=M}^{\infty}(-1 )^{m}\zeta(-\alpha-2m)\frac{(\frac{3\pi}{2})^{2m}}{(2m)!}.\]
Proof.: We have
\[\int_{0}^{\pi/x}t^{2k+p}\log(1/t)\ dt=\frac{1-(2k+1+p)\log(\pi/x)}{(2k+1+p)^{2} }\left(\frac{\pi}{x}\right)^{2k+1+p},\]
giving us
\[\frac{U_{\alpha,R}^{3}(x)}{x^{-\alpha+p}\log(1/x)}=\frac{2x^{1+\alpha}}{\log(1 /x)}\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{x^{2m}}{(2m)!}\sum_{k=0 }^{m-1}\binom{2m}{2k}\frac{1-(2k+1+p)\log(\pi/x)}{(2k+1+p)^{2}}\left(\frac{\pi }{x}\right)^{2k+1+p}.\]
Using a similar approach to that in Lemma 11.6 we can rewrite this as
\[\frac{U_{\alpha,R}^{3}(x)}{x^{-\alpha+p}\log(1/x)}\\ =\frac{2x^{2+\alpha-p}\pi^{p-1}}{\log(1/x)}\sum_{m=1}^{\infty}(-1) ^{m}\zeta(-\alpha-2m)\frac{\pi^{2m}}{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k} \frac{1-(2k+1+p)\log(\pi/x)}{(2k+1+p)^{2}}\left(\frac{x}{\pi}\right)^{2(m-1-k )}.\]
Splitting it into two sums we can write it as
\[\frac{U_{\alpha,R}^{3}(x)}{x^{-\alpha+p}\log(1/x)}=\frac{2x^{2+ \alpha-p}\pi^{p-1}}{\log(1/x)}\sum_{m=1}^{\infty}(-1)^{m}\zeta(-\alpha-2m) \frac{\pi^{2m}}{(2m)!}\sum_{k=0}^{m-1}\binom{2m}{2k}\frac{1}{(2k+1+p)^{2}} \left(\frac{x}{\pi}\right)^{2(m-1-k)}\\ -\frac{2x^{2+\alpha-p}\pi^{p-1}\log(\pi/x)}{\log(1/x)}\sum_{m=1}^ {\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{\pi^{2m}}{(2m)!}\sum_{k=0}^{m-1}\binom{ 2m}{2k}\frac{1}{2k+1+p}\left(\frac{x}{\pi}\right)^{2(m-1-k)}.\]
We can treat the first few terms in the sums explicitly; what remains is bounding the tails
\[\sum_{m=M}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{\pi^{2m}}{(2m)!}\sum_{k=0}^ {m-1}\binom{2m}{2k}\frac{1}{(2k+1+p)^{2}}\left(\frac{x}{\pi}\right)^{2(m-1-k)} \tag{53}\]
and
\[\sum_{m=M}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{\pi^{2m}}{(2m)!}\sum_{k=0}^ {m-1}\binom{2m}{2k}\frac{1}{2k+1+p}\left(\frac{x}{\pi}\right)^{2(m-1-k)}. \tag{54}\]
For (54) the factor in front is negative, so we only need to compute a lower bound. Since all terms in the sum are positive, it is trivially lower bounded by zero.
For (53) we need an upper bound. We can notice that
\[\sum_{m=M}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{\pi^{2m}}{(2m)!}\sum_{k=0}^ {m-1}\binom{2m}{2k}\frac{1}{2k+1+p}\left(\frac{x}{\pi}\right)^{2(m-1-k)}\]
gives an upper bound and that this sum also occurs in the proof of Lemma 11.6. Following the same approach we therefore get that an upper bound is given by
\[3\sum_{m=M}^{\infty}(-1)^{m}\zeta(-\alpha-2m)\frac{(\frac{3\pi}{2})^{2m}}{(2m)!}.\]
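As a quick non-rigorous sanity check, the elementary integral \(\int_{0}^{\pi/x}t^{2k+p}\log(1/t)\,dt\) used at the start of the proof can be compared against standard quadrature; a minimal sketch with illustrative parameter values:

```python
# Check int_0^T t^q log(1/t) dt = (1 - (q+1) log(T)) T^(q+1) / (q+1)^2.
import numpy as np
from scipy.integrate import quad

k, p, x = 1, 0.3, 0.1            # illustrative values
q, T = 2 * k + p, np.pi / x
lhs = quad(lambda t: t**q * np.log(1 / t), 0, T)[0]
rhs = (1 - (q + 1) * np.log(T)) * T**(q + 1) / (q + 1)**2
print(lhs - rhs)  # ~0
```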
## Acknowledgments
The author was partially supported by the ERC Starting Grant ERC-StG-CAPA-852741 as well as MICINN (Spain) research grant number PID2021-125021NA-I00. This material is based upon work supported by the National Science Foundation under Grant No. DMS-1929284 while the author was in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the program "Hamiltonian Methods in Dispersive and Wave Evolution Equations". The author was partially supported by the Swedish-American foundation for the visit. We are also thankful for the hospitality of the Princeton Department of Mathematics and the Brown University Department of Mathematics where parts of this paper were done. The computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) and the Swedish National Infrastructure for Computing (SNIC) at the PDC Center for High Performance Computing, KTH Royal Institute of Technology, partially funded by the Swedish Research Council through grant agreements no. 2022-06725 and no. 2018-05973. The author would like to thank Javier Gómez-Serrano for his guidance and Erik Wahlén and Gabriele Brüll for fruitful discussions about highest waves for related equations.
|
2309.14047 | Random-Energy Secret Sharing via Extreme Synergy | The random-energy model (REM), a solvable spin-glass model, has impacted an
incredibly diverse set of problems, from protein folding to combinatorial
optimization to many-body localization. Here, we explore a new connection to
secret sharing. We formulate a secret-sharing scheme, based on the REM, and
analyze its information-theoretic properties. Our analyses reveal that the
correlations between subsystems of the REM are highly synergistic and form the
basis for secure secret-sharing schemes. We derive the ranges of temperatures
and secret lengths over which the REM satisfies the requirement of secure
secret sharing. We show further that a special point in the phase diagram
exists at which the REM-based scheme is optimal in its information encoding.
Our analytical results for the thermodynamic limit are in good qualitative
agreement with numerical simulations of finite systems, for which the strict
security requirement is replaced by a tradeoff between secrecy and
recoverability. Our work offers a further example of information theory as a
unifying concept, connecting problems in statistical physics to those in
computation. | Vudtiwat Ngampruetikorn, David J. Schwab | 2023-09-25T11:23:16Z | http://arxiv.org/abs/2309.14047v1 | # Random-Energy Secret Sharing via Extreme Synergy
###### Abstract
The random-energy model (REM), a solvable spin-glass model, has impacted an incredibly diverse set of problems, from protein folding to combinatorial optimization to many-body localization. Here, we explore a new connection to secret sharing. We formulate a secret-sharing scheme, based on the REM, and analyze its information-theoretic properties. Our analyses reveal that the correlations between subsystems of the REM are highly synergistic and form the basis for secure secret-sharing schemes. We derive the ranges of temperatures and secret lengths over which the REM satisfies the requirement of secure secret sharing. We show further that a special point in the phase diagram exists at which the REM-based scheme is optimal in its information encoding. Our analytical results for the thermodynamic limit are in good qualitative agreement with numerical simulations of finite systems, for which the strict security requirement is replaced by a tradeoff between secrecy and recoverability. Our work offers a further example of information theory as a unifying concept, connecting problems in statistical physics to those in computation.
Keeping sensitive information from bad actors has long been a challenge. Lock a secret in a safe with one master key and the secret is gone forever should the key be lost. Make copies of the key, and the risk is that one of them falls into the wrong hands. Secret sharing offers a solution [1; 2]. In a \((k,n)\) threshold scheme, a secret is split into \(n\) shares. Reading the secret requires \(k\) or more distinct shares, and \(k-1\) or fewer shares reveal absolutely no information about the secret. This strategy allows secure information storage whose degrees of security and fail safety are tunable through \(k\) and \(n\). Moreover, the threshold requirement provides a mechanism for enforcing coordination and agreement between parties with different shares, thus making secret-sharing schemes an important building block in distributed computing, see, e.g., Ref [3].
While constructions of secret sharing schemes are well known [1; 2], such schemes seem too contrived to appear in nature. A primary reason for this is that secret sharing requires a potentially extreme form of _synergy_, where information is stored in the joint state of a collection of random variables that is absent from its subsets (see Fig 1). Yet, some well-known models of physical and biological systems exhibit similar behaviors. For example, information in the interior of black holes is encoded in the entanglement between Hawking radiation subsystems; as a result, information recovery is possible only when all radiation parts are available [4]. Another example is combinatorial coding in an ensemble of neurons, in which two neurons together can carry many more bits about a stimulus than the sum of their individual information [5; 6]. In this case, the neural code represents a secret-sharing scheme, albeit an imperfect one since each individual neuron can still be predictive of the stimulus. Here, we show that a spin-glass model, namely the random-energy model [7; 8], also implements a secret-sharing scheme.
The random-energy model (REM) has proved remarkably versatile. It offers a minimal model and useful theoretical benchmarks for a wide range of problems, from protein folding [9; 10; 11] to self assembly [12] to biodiversity [13] to many-body localization [14; 15; 16]. It also connects statistical physics with concepts in computation such as error correction [17] and combinatorial optimization [18; 19] (see Ref [20] and references therein).
In this Letter, we investigate yet another intriguing connection, between the REM and cryptography. We analyze the information-theoretic properties of the REM and show that the correlations between its subsystems satisfy the information-theoretic requirement of secure secret-sharing schemes. We map the REM to \((k,n)\) threshold schemes and derive a phase diagram of secure regions. We show that for every \((k,n)\), a special set of model parameters exists such that the encoding of secret information in each of the \(n\) shares is at the physical limit and thereby optimal. In addition, we numerically compute the relevant information terms for finite-spin REMs and demonstrate that they also implement secret-sharing schemes, albeit with the security requirement replaced by a tradeoff between secrecy and recoverability.
The random-energy model describes a system of \(N\) Ising spins--\(\sigma=(\sigma_{1},\sigma_{2},\ldots,\sigma_{N})\) with \(\sigma_{i}=\pm 1\)--whose energy levels \(E_{\sigma}\) are _iid_ Gaussian variables. That is, the probability of a system configuration \(\sigma\) reads
\[P(\sigma)=e^{-\beta E_{\sigma}}/Z\quad\text{with}\quad E_{\sigma}\sim\mathcal{ N}(0,NJ^{2}/2), \tag{1}\]
where \(\beta\) denotes the inverse temperature and \(J\) is an intensive parameter. In the limit \(N\to\infty\), the partition function admits an analytical form [7; 8]
\[\ln Z=\ln\sum_{\sigma}e^{-\beta E_{\sigma}}=N\ln 2\times\left\{\begin{array}{ll}1+(\frac{\beta}{\beta_{c}})^{2}&\text{if}\ \beta\leq\beta_{c}\\ 2\frac{\beta}{\beta_{c}}&\text{if}\ \beta>\beta_{c}.\end{array}\right. \tag{2}\]
The critical point \(\beta_{c}J=2\sqrt{\ln 2}\) marks a first-order phase transition between a paramagnetic state at high temperatures \(\beta<\!\beta_{c}\), and a _frozen_ state at low temperatures \(\beta>\beta_{c}\). We can obtain all thermodynamic variables from the above partition function. Of particular interest is the entropy (in bits) [21]
\[S(\sigma)=\log_{2}Z-\beta\frac{\partial\log_{2}Z}{\partial\beta}=N\times \left\{\begin{array}{ll}1-t^{-2}&\text{if}\ t\geq 1\\ 0&\text{if}\ t<1.\end{array}\right. \tag{3}\]
Here and throughout, we let \(t\equiv\beta_{c}/\beta\) denote the reduced temperature. In the high-temperature limit, thermal noise dominates
and the system becomes uncorrelated and random, resulting in one bit of entropy per spin. As the temperature drops, the entropy decreases until it vanishes at the critical point below which only an \(O(1)\) number of least energetic configurations prevail.
To turn the REM into a secret-sharing scheme, we divide the spins into two disjoint groups \(\sigma\!=\!(\sigma^{m},\sigma^{s})\): the secret message \(\sigma^{m}\) consists of \(M\) spins, and the other spins \(\sigma^{s}\) represent all of the shares that will be distributed to different parties, see Fig 1a. The information content of the secret is quantified by its entropy
\[S(\sigma^{m})=S(\sigma^{m},\sigma^{s})-S(\sigma^{s}\mid\sigma^{m}), \tag{4}\]
where \(S(\sigma^{m},\sigma^{s})\!=\!S(\sigma)\) is the entropy of the entire system, given by Eq (3), and \(S(\sigma^{s}\mid\sigma^{m})\) the conditional entropy. To compute the latter, we note that the energy levels are _iid_ random variables and thus fixing \(\sigma^{m}\) leaves the energy-level statistics unchanged. In other words, the conditional model is defined by the same energies as the original model, i.e., \(P(\sigma^{s}\mid\sigma^{m})\!\propto\!e^{-\beta E_{\sigma^{s}\mid\sigma^{m}}}\) with \(E_{\sigma^{s}\mid\sigma^{m}}\!\sim\!\mathcal{N}(0,NJ^{2}/2)\). We see that the conditional system is also an REM, but with \(\bar{N}\!=\!N\!-\!M\) spins and a variance parameter \(\bar{J}\!=\!\sqrt{N/(N\!-\!M)}\,J\) (such that the energy variance is unchanged, i.e., \(\bar{N}\bar{J}^{2}\!=\!NJ^{2}\)). As a result, its critical point is given by \(\bar{\beta}_{c}\!=\!\sqrt{1\!-\!m}\,\beta_{c}\), and the conditional entropy by [see Eq (3)]
\[S(\sigma^{s}\mid\sigma^{m})=N\times\left\{\begin{array}{ll}1-m-t^{-2}&\mbox {if $t\geq 1/\sqrt{1-m}$}\\ 0&\mbox{if $t<1/\sqrt{1-m}$},\end{array}\right. \tag{5}\]
where \(m\!=\!M/N\). Combining Eqs (3-5) yields [22]
\[S(\sigma^{m})=\min[S(\sigma),M]=N\times\left\{\begin{array}{ll}\min(m,1-t^{ -2})&\mbox{if $t\geq 1$}\\ 0&\mbox{if $t<1$}.\end{array}\right. \tag{6}\]
We see that the entropy of the secret, like that of the entire system, vanishes in the frozen phase \(t\!<\!1\) and becomes finite in the paramagnetic phase \(t\!>\!1\). Quite remarkably, this entropy is _identical_ to that of the entire system for \(1\!\leq\!t\!\leq\!1/\sqrt{1\!-\!m}\); that is, different parts of the system are maximally correlated. But while the system's entropy approaches \(N\) bits as \(t\!\to\!\infty\) [Eq (3)], the secret entropy plateaus at \(M\) bits for \(t\!>\!1/\sqrt{1\!-\!m}\). Importantly, the information content of the secret reaches the maximum capacity of \(M\) bits at a finite temperature \(t\!=\!1/\sqrt{1\!-\!m}\) (see also Fig 2a-b).
Figure 1: The random-energy model exhibits extremely strong and highly synergistic correlations. (a) We divide \(N\) spins into three disjoint groups, \(\sigma=(\sigma^{m},\sigma^{v},\sigma^{h})\). The secret \(\sigma^{m}\) consists of \(M\!=\!mN\) spins. The other spins are the shares \(\sigma^{s}=(\sigma^{v},\sigma^{h})\), of which \(vN\) spins are visible or \(v\) and \(hN\) are hidden \(\sigma^{h}\). We consider the reconstruction of the secret \(\sigma^{m}\) from an observation of the visible spins \(\sigma^{v}\). (b & c) The amount of information in the secret is quantified by its entropy \(S(\sigma^{m})\!\sim\!M\) bits (dashed). The information that the visible spins have about the secret is measured by their mutual information \(I(\sigma^{m};\sigma^{v})\) (solid), here depicted as a function of the visible fraction \(v\) at a fixed secret fraction \(m\). (b) For the fully connected Ising model at criticality, the information increases with more visible spins. This increase diminishes as \(v\) grows, indicating _redundant_ coding of secret information. The logarithmic scaling of the information with \(N\) signifies strong correlations associated with critical behaviors; away from the critical point, the information does not grow with \(N\). (c) For the random-energy model in the paramagnetic phase (\(T\!=\!\sqrt{2}T_{c}\)), the information becomes positive only with enough visible spins. Moreover, the information is _extensive_ in this case, indicating even stronger correlations than those in typical critical systems. Importantly, this extensivity means that visible spins can encode _all_ of the secret information, thus allowing perfect secret reconstruction. This behavior is a signature of extreme synergy among the spins—that is, while individual spins leak no secret information, an adequately large collective of spins can completely reveal the secret.
Figure 2: Mutual information between subsystems _vs_ temperature and system composition. (a & b) The temperature dependence of the secret entropy (dashed) and the mutual information between the secret and visible spins (solid), see Fig 1a. The secret entropy \(S(\sigma^{m})\) vanishes in the frozen phase \(t\!<\!1\) and grows with \(t\) until it plateaus at \(M\) bits [Eq (6)]. The information \(I(\sigma^{m};\sigma^{v})\) exhibits similar behaviors at low temperatures, vanishing for \(t\!<\!1\) and increasing with \(t\) near the onset of the paramagnetic phase. But this information is bounded by either the entropy of the secret or that of visible spins, whichever is smaller; as a result, it saturates at \(\min(m,v)N\) bits [Eq (8)]. At high temperatures, thermal noise dominates and the information decreases with \(t\), approaching zero at \(t\!=\!1/\sqrt{h}\). (c) This information depends on the composition of the secret, visible and hidden spins, parametrized by their fractions \((m,v,h)\) (Fig 1a). For \(t\!>\!1\), this ternary diagram has five regions, A-E. The information density, \(I(\sigma^{m};\sigma^{v})/N\), is zero in A, \(t^{-2}-h\) in B, \(m\) in C, \(v\) in D and \(1-t^{-2}\) in E. For \(t\!>\!\sqrt{2}\), Region E disappears. See main text for details.
We now turn to the recoverability of the secret. To this end, we further split the shares into two parts, \(\sigma^{s}=(\sigma^{v},\sigma^{h})\). The visible spins \(\sigma^{v}\) amount to a fraction \(v\) of the system and the hidden spins \(\sigma^{h}\) to a fraction \(h\). The secret, visible and hidden spins together make up the system, i.e., \(\sigma=(\sigma^{m},\sigma^{v},\sigma^{h})\) and \(m+v+h=1\), see Fig 1a.
Suppose we only have access to the visible spins. Reconstructing the secret from these spins is equivalent to an inference problem, characterized by the mutual information,
\[I(\sigma^{m};\sigma^{v})=S(\sigma^{m})-S(\sigma^{m}\mid\sigma^{v}), \tag{7}\]
which measures the reduction in the uncertainty, quantified by entropy, of the secret once we have observed the state of the visible spins. Equation (6) provides the expression for the secret entropy \(S(\sigma^{m})\). To obtain the conditional entropy, we recall \(S(\sigma^{m}\mid\sigma^{v})=S(\sigma^{m},\sigma^{v})-S(\sigma^{v})\). Since \(\sigma^{v}\) is a fraction \(v\) of the system, its marginal entropy is given by Eq (6) but with \(v\) in place of \(m\), i.e., \(S(\sigma^{v})=\min[S(\sigma),vN]\). And similarly \(S(\sigma^{m},\sigma^{v})=\min[S(\sigma),(m+v)N]\). As a result, we have [23]
\[I(\sigma^{m};\sigma^{v})=N\times\left\{\begin{array}{ll}\left[\min(m,1-t^{- 2})+\min(v,1-t^{-2})\right.\\ \left.-\min(m+v,1-t^{-2})\right]&\text{if }t\geq 1\\ 0&\text{if }t<1.\end{array}\right. \tag{8}\]
We note that this mutual information is _extensive_. This behavior is in contrast with typical statistical physics models, in which the information between macroscopic parts of the system grows with the system size only at critical points and only subextensively [24; 25; 26; 27; 28] (see also Fig 1b-c). The extensivity of this information signifies unusually strong correlations and implies that the REM is _unlearnable_ from finite measurements [29].
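For illustration, the thermodynamic-limit result of Eq (8) (together with Eq (3)) reduces to a few lines of code:

```python
def info_density(t, m, v):
    """I(secret; visible)/N in bits per spin, per Eq (8)."""
    if t < 1:
        return 0.0                 # frozen phase carries no information
    s = 1 - 1 / t**2               # entropy density of the full system, Eq (3)
    return min(m, s) + min(v, s) - min(m + v, s)

# plateau regime of Fig 2a: the density equals min(m, v) when s lies
# between the smaller and the larger of the two fractions
print(info_density(1.15, m=0.2, v=0.3))  # ~0.2
```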
We depict the temperature dependence of this information in Fig 2a-b. Like entropy, it increases from zero as \(\sim 1-t^{-2}\) as the system leaves the frozen phase and enters the paramagnetic phase. But unlike entropy, it decreases at high temperatures as \(\sim t^{-2}-h\) until vanishing at \(t=1/\sqrt{h}\) (with \(h=1-m-v\)). At intermediate temperatures, it plateaus at either \(mN\) or \(vN\) bits, whichever is less. This plateau results from the upper bound of mutual information \(I(A;B)\leq\min[S(A),S(B)]\) for discrete random variables \(A\) and \(B\). In addition, we see that for \(t\leq 1/\sqrt{1-v}\) the visible spins encode _all_ of the information the secret spins carry, i.e., \(I(\sigma^{m};\sigma^{v})=S(\sigma^{m})\). In particular, when \(m<v\), the information plateau coincides with that of the secret entropy (Fig 2a); in other words, the secret reaches its maximum information capacity (of \(M\) bits) and all of that is encoded in the visible spins.
Figure 2c illustrates how the mutual information between secret and visible spins depends on the fractions \(m\), \(v\) and \(h\) in the paramagnetic state (\(t>1\)). This information vanishes in Region A where a large part of the system is hidden \(h>t^{-2}\). This region corresponds to the high-temperature limit \(t>1/\sqrt{h}\) in Fig 2a-b. Decreasing the hidden fraction pushes the ternary mixture into Region B, where the information becomes finite at \((t^{-2}-h)N\) bits. This region is the only one that grows with temperature. The others shrink; in particular, Region E completely disappears for \(t>\sqrt{2}\). When the secret fraction dominates \(m>1-t^{-2}\), the information is equal to the entropy of visible spins, \(vN\) and \((1-t^{-2})N\) bits in Regions D and E, respectively. Similarly, when the visible fraction dominates \(v>1-t^{-2}\), we have \(I(\sigma^{m};\sigma^{v})=S(\sigma^{m})\) (Regions C and E). In addition, the secret fully utilizes \(M\) spins, with \(S(\sigma^{m})=M\) bits, when \(m<1-t^{-2}\) (Regions A, B & C), see Eq (6).
That is, Regions A, B and C exhibit secret encoding at full capacity while allowing no, partial and perfect decoding, respectively [30]. This property is essential to information-theoretically secure secret sharing. In Region A, visible spins are too few, and zero information leakage guarantees absolute secrecy. In Region C, visible spins are adequately many, and vanishing _a posteriori_ entropy, \(S(\sigma^{m}\mid\sigma^{v})=S(\sigma^{m})-I(\sigma^{m};\sigma^{v})=0\), allows perfect inference of secret. However, this picture breaks down for long secrets \(m>1-t^{-2}\) (Regions D & E) for which the information is always finite and secrecy is at best partial.
These results form a basis for threshold secret-sharing schemes. So far, we consider the mutual information between the secret and visible spins where the visible fraction is taken to be continuous. We now turn to the case where the non-secret spins are divided into a finite number of shares and as a result, the visible fraction becomes discrete, taking only values that are multiples of the fraction of each share. As before, we reserve \(M\) spins to encode the secret, \(\sigma^{m}\). But now we split the remaining \(N-M\) spins, \(\sigma^{s}\), into \(n\) equal shares--i.e., we decompose the system as follows, \(\sigma=(\sigma^{m},\sigma^{s})=(\sigma^{m},\tau_{1},\tau_{2},\ldots,\tau_{n})\) where \(\tau_{i}\) denotes the spin configuration of the \(i^{\rm th}\) share, see Fig 3a.
Figure 3: Phase diagram for threshold secret-sharing schemes, based on the random-energy model. (a) We split \(N\) spins into an \(M\)-spin secret \(\sigma^{m}\) and \(n\) shares \((\tau_{1},\tau_{2},\ldots,\tau_{n})\), each with \((N-M)/n\) spins. (b) The secrecy and correctness requirements of a \((k,n)\) threshold scheme result in temperature lower and upper bounds, \(t^{-}\) and \(t^{+}\) respectively, see Eqs (9-12). These bounds depend on the secret length \(m=M/N\) (as a proportion of the system size) and define a secure region (shaded area). Longer secrets are secure over a smaller temperature range. This range disappears completely (\(t^{-}>t^{+}\)) when \(m>1/(n+1)\); that is, no secure scheme exists when the secret is longer than each share. (c) In the secure region, the entropy of the secret and of each share is equal to their lengths, i.e., \(S(\sigma^{m})=M\) and \(S(\tau_{i})=(N-M)/n\). We see that \(S(\tau_{i})\geq S(\sigma^{m})\) with equality for \(m=1/(n+1)\) at which \(t^{-}=t^{+}=1/\sqrt{1-k/(n+1)}\).
Here we also define \(\tau^{r}\) as any set of \(r\) distinct shares.
A secure \((k,n)\) threshold scheme must meet two requirements (see, e.g., Ref [3]). First, _secrecy_ demands that fewer than \(k\) shares leak no secret information,
\[I(\sigma^{m};\tau^{k-1})=0. \tag{9}\]
where \(\tau^{k-1}\) denotes \(k-1\) shares [31]. Since each share amounts to a fraction \((1-m)/n\) of the whole system, access to \(k-1\) shares is equivalent to visible spins \(\sigma^{v}\) with \(v=(k-1)(1-m)/n\). From Eq (8) and Fig 2a-b, we see that the secrecy condition implies either
\[t\geq t^{-}(m)\equiv\frac{1}{\sqrt{(1-m)(1-(k-1)/n)}}, \tag{10}\]
or \(t\leq 1\). But the latter corresponds to the frozen phase in which a secret carries no information, see Eq (6). Second, _correctness_ requires that \(k\) or more shares completely reveal the secret,
\[I(\sigma^{m};\tau^{k})=S(\sigma^{m}), \tag{11}\]
or equivalently \(S(\sigma^{m}\mid\tau^{k})=0\)[32]. Here, \(\tau^{k}\) is the same as \(\sigma^{v}\) with \(v=k(1-m)/n\). Recalling Eq (8) (see Fig 2a-b), the above condition gives
\[t\leq t^{+}(m)\equiv\frac{1}{\sqrt{1-k(1-m)/n}}. \tag{12}\]
In Fig 3b, we show that the secrecy and correctness requirements, Eqs (10) & (12), define a secure region as we vary the temperature \(t\) and secret length \(m\). We see that this region is finite for \(n\geq k>1\); we can always find a set of model parameters such that the REM implements a secure \((k,n)\) threshold scheme.
But the secret cannot be too long. In Fig 3b, we also see that the temperature upper and lower bounds, \(t^{+}(m)\) and \(t^{-}(m)\), cross at \(m=1/(n+1)\), thus ruling out secure schemes with a secret longer than this value. Indeed, this point is where the information contents of the secret and that of each share are equal. In Fig 3c, we depict the entropy of the secret and that of each share in the secure region. We see that for \(m<1/(n+1)\), each share has greater entropy than the secret, meaning that the secret amounts to only a fraction of the information in the share. At \(m=1/(n+1)\) however, the secret and share entropies are equal and the secret encoding makes use of _all_ available information content in each share. Furthermore, in the secure regime, the entropies are at the physical limit, \(M\) bits for the secret and \((N-M)/n\) bits for each share. In this sense, REM secret sharing at \(m=1/(n+1)\) is optimal. This optimality also requires a specific temperature (see Fig 3c),
\[t^{*}=\frac{1}{\sqrt{1-k/(n+1)}}, \tag{13}\]
which, quite remarkably, exists for any \((k,n)\) threshold scheme with \(1\leq k\leq n\).
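These bounds are elementary to evaluate; a minimal sketch of Eqs (10), (12) and (13), with \((k,n)=(2,3)\) chosen for illustration:

```python
# Secure-window bounds and the optimal point for a (k, n) threshold scheme.
def t_minus(m, k, n):  # secrecy bound, Eq (10)
    return 1 / ((1 - m) * (1 - (k - 1) / n)) ** 0.5

def t_plus(m, k, n):   # correctness bound, Eq (12)
    return 1 / (1 - k * (1 - m) / n) ** 0.5

k, n = 2, 3
m = 1 / (n + 1)                        # longest admissible secret
t_star = 1 / (1 - k / (n + 1)) ** 0.5  # Eq (13)
print(t_minus(m, k, n), t_plus(m, k, n), t_star)  # all coincide: ~1.414
print(t_minus(0.1, k, n) < t_plus(0.1, k, n))     # True: a finite secure window
```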
So far we have shown that the REM in the thermodynamic limit (\(N\rightarrow\infty\)) can implement a threshold secret-sharing scheme that is both secure and optimal. We now demonstrate that a finite-spin REM displays similar behaviors, albeit with relaxed secrecy and correctness guarantees.
In Fig 4a, we depict the temperature dependence of the mutual information between the secret \(\sigma^{m}\) and \(r\) shares \(\tau^{r}\) when the number of shares is at and below the threshold (\(r=k\) and \(r=k-1\), respectively). The dashed lines show the result for the thermodynamic case, which corresponds to sending \(N\rightarrow\infty\) while fixing \(m=M/N\). In this case, the information vanishes in the frozen phase, increases as the temperature rises above the critical point, plateaus at intermediate temperatures, and decays until it vanishes again at an adequately high temperature, see Eq (8). The temperature range over which the information plateaus and the point at which it vanishes depend on the number of available shares \(r\). This behavior yields a range of temperatures (shaded region) in which the REM meets both secrecy and correctness requirements, Eqs (9) & (11).
For the finite-system case, we consider \(N=27\) and \(M=5\), and depict the average information (thick lines) for 20 REMs (thin lines), each with independently generated energy levels. We see that the information exhibits temperature dependence that is qualitatively similar to the thermodynamic case, but the sharp features become smooth. In particular, a finite system does not completely freeze at low temperatures and the information remains finite below the thermodynamic critical temperature. Still, at intermediate temperatures (shaded region), a significant information difference exists between the cases where the number of shares is at and below the threshold (\(r=k\) and \(r=k-1\), respectively). In other words, enough shares (\(r\geq k\)) reveal most of the secret, but subthreshold shares (\(r<k\)) tell us very little about the secret. This behavior captures the essence of a secret-sharing scheme, albeit without strict secrecy and correctness guarantees. Indeed, a finite-spin REM exhibits a tradeoff between secrecy and recoverability.
Figure 4: Finite-spin REMs implement secret-sharing schemes with a tradeoff between security and recoverability. (a) We show the mutual information between the secret \(\sigma^{m}\) and \(r\) distinct shares \(\tau^{r}\) for the cases where \(r\) is at the threshold (blue) and subthreshold (red). The thick lines are the average of 20 independent realizations of the REM, each depicted by thin lines. The dashed lines correspond to the thermodynamic limit (\(N\rightarrow\infty\)) and the shaded area to the temperature range which satisfies the requirements of a secure scheme, see Eqs (9-12). (b) We display the parametric curve of the at-threshold and subthreshold information terms (vertical and horizontal axes, respectively). The thick curve is the average of 20 independent REMs (thin curves), and the dashed line is the corresponding parametric plot for the thermodynamic limit. This plot illustrates how far finite-spin REM secret sharing is from an information-theoretically secure scheme (\(\star\)). Here \(N=27\), \(M=5\) and \((k,n)=(2,2)\). For the thermodynamic case, we let \(N\rightarrow\infty\) while fixing the ratio \(M/N\).
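The finite-\(N\) curves can be reproduced in outline by exact enumeration over all spin configurations; the following sketch uses a reduced system (\(N=12\), \(M=3\), \(n=3\) shares and a fixed random seed, all chosen here only so that the \(2^{N}\) states are cheap to enumerate; the paper's \(N=27\) requires the same computation at larger scale):

```python
# Exact mutual information for a small finite-spin REM by enumeration.
import numpy as np

rng = np.random.default_rng(0)
N, M, n = 12, 3, 3                     # system size, secret size, shares
q = (N - M) // n                       # spins per share
E = rng.normal(0.0, np.sqrt(N / 2), size=2**N)  # iid energies, J = 1, Eq (1)

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))     # in bits

def mutual_info(t, r):
    """I(secret; first r shares) at reduced temperature t = beta_c / beta."""
    beta = 2 * np.sqrt(np.log(2)) / t  # beta_c = 2 sqrt(ln 2) for J = 1
    w = np.exp(-beta * (E - E.min()))  # shift energies for numerical stability
    p = w / w.sum()
    # leading M bits = secret, next r*q bits = observed shares, rest hidden
    p = p.reshape(2**M, 2**(r * q), -1).sum(axis=2)
    return entropy(p.sum(axis=1)) + entropy(p.sum(axis=0)) - entropy(p)

for t in (0.5, 1.2, 3.0):              # for k = 2: r = 1 is subthreshold
    print(t, mutual_info(t, 1), mutual_info(t, 2))
```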
In Fig 4b, we illustrate this tradeoff via the parametric curve of the information between the secret and \(k\) shares (vertical axis) and between the secret and \(k-1\) shares (horizontal axis). The former, \(I(\sigma^{m};\tau^{k})\), measures the recoverability of the secret given access to enough shares, whereas the latter, \(I(\sigma^{m};\tau^{k-1})\), quantifies the deviation from perfect secrecy when accessible shares fall below the threshold. Both information terms vanish at zero temperature. Raising the temperature from \(t=0\) increases both information terms at approximately the same rate, \(I(\sigma^{m};\tau^{k})\approx I(\sigma^{m};\tau^{k-1})\) at small \(t\), suggesting a redundant coding regime. That is, each share encodes the same bits about the secret; therefore, an additional share provides little extra information about the secret. But \(I(\sigma^{m};\tau^{k-1})\) reaches its maximum and begins to decay with \(t\) while \(I(\sigma^{m};\tau^{k})\) still grows, resulting in enhanced secrecy and recoverability. This mutual enhancement signifies the onset of synergistic coding; however, it only occurs for a limited temperature range (between the maxima of \(I(\sigma^{m};\tau^{k})\) and \(I(\sigma^{m};\tau^{k-1})\), see Fig 4a). As the temperature rises further, \(I(\sigma^{m};\tau^{k})\), and recoverability, eventually peak and decrease while \(I(\sigma^{m};\tau^{k-1})\) approaches zero. In this regime, an increase in security is accompanied by a decrease in recoverability and vice versa. Importantly, this tradeoff defines the optimal frontier of maximum recoverability for each value of the security budget.
In the thermodynamic limit, this frontier collapses into a single point that satisfies the requirements of a secure secret-sharing scheme, \(I(\sigma^{m};\tau^{k})=M\) and \(I(\sigma^{m};\tau^{k-1})=0\) [see Eqs (11) & (9)]. We note that this point corresponds to not one but a range of temperatures (i.e., shaded range in Fig 4a and shaded region in Fig 3).
In finite systems, improving the optimal frontier of random-energy secret sharing requires increasing the system size. In Fig 5, we consider a fixed secret size of \(M=5\) and compare the optimal frontiers (averaged over 20 independent REMs) for different numbers of spins in each share (see legend) for threshold schemes with \((k,n)=(2,2),(3,3),(2,3)\) (left to right). In all cases, we see that as share size grows and the system becomes larger, the optimal frontier edges closer to the ideal secret-sharing scheme.
To summarize, we characterize the thermodynamics of information in the REM and show that the synergistic correlations among spins provide a basis for a secret-sharing scheme. We further demonstrate that this scheme is information-theoretically secure and optimal. Our analytical results for the thermodynamic limit are in good qualitative agreement with the numerical simulations of finite-spin REMs. Our work represents a further example of the curious statistical properties of the REM and highlights information theory as a unifying element of problems in physics and computer science. The mapping between secret sharing and the exactly solvable REM may facilitate the application of statistical physics techniques in cryptography and inspire novel physical models based on secret-sharing schemes.
This work was supported in part by the National Science Foundation, through the Center for the Physics of Biological Function (PHY-1734030), the Simons Foundation and the Sloan Foundation.
|
2303.00069 | ClArTTS: An Open-Source Classical Arabic Text-to-Speech Corpus | At present, Text-to-speech (TTS) systems that are trained with high-quality
transcribed speech data using end-to-end neural models can generate speech that
is intelligible, natural, and closely resembles human speech. These models are
trained with relatively large single-speaker professionally recorded audio,
typically extracted from audiobooks. Meanwhile, due to the scarcity of freely
available speech corpora of this kind, a larger gap exists in Arabic TTS
research and development. Most of the existing freely available Arabic speech
corpora are not suitable for TTS training as they contain multi-speaker casual
speech with variations in recording conditions and quality, whereas the corpus
curated for speech synthesis are generally small in size and not suitable for
training state-of-the-art end-to-end models. In a move towards filling this gap
in resources, we present a speech corpus for Classical Arabic Text-to-Speech
(ClArTTS) to support the development of end-to-end TTS systems for Arabic. The
speech is extracted from a LibriVox audiobook, which is then processed,
segmented, and manually transcribed and annotated. The final ClArTTS corpus
contains about 12 hours of speech from a single male speaker sampled at 40100
kHz. In this paper, we describe the process of corpus creation and provide
details of corpus statistics and a comparison with existing resources.
Furthermore, we develop two TTS systems based on Grad-TTS and Glow-TTS and
illustrate the performance of the resulting systems via subjective and
objective evaluations. The corpus will be made publicly available at
www.clartts.com for research purposes, along with the baseline TTS systems
demo. | Ajinkya Kulkarni, Atharva Kulkarni, Sara Abedalmonem Mohammad Shatnawi, Hanan Aldarmaki | 2023-02-28T20:18:59Z | http://arxiv.org/abs/2303.00069v1 | # ClArTTS: An Open-Source Classical Arabic Text-to-Speech Corpus
###### Abstract
At present, Text-to-speech (TTS) systems that are trained with high-quality transcribed speech data using end-to-end neural models can generate speech that is intelligible, natural, and closely resembles human speech. These models are trained with relatively large single-speaker professionally recorded audio, typically extracted from audiobooks. Meanwhile, due to the scarcity of freely available speech corpora of this kind, a large gap exists in Arabic TTS research and development. Most of the existing freely available Arabic speech corpora are not suitable for TTS training as they contain multi-speaker casual speech with variations in recording conditions and quality, whereas the corpora curated for speech synthesis are generally small in size and not suitable for training state-of-the-art end-to-end models. In a move towards filling this gap in resources, we present a speech corpus for Classical Arabic Text-to-Speech (ClArTTS) to support the development of end-to-end TTS systems for Arabic. The speech is extracted from a LibriVox audiobook, which is then processed, segmented, and manually transcribed and annotated. The final ClArTTS corpus contains about 12 hours of speech from a single male speaker sampled at 40,100 Hz. In this paper, we describe the process of corpus creation and provide details of corpus statistics and a comparison with existing resources. Furthermore, we develop two TTS systems based on Grad-TTS and Glow-TTS and illustrate the performance of the resulting systems via subjective and objective evaluations. The corpus will be made publicly available at www.clartts.com for research purposes, along with the baseline TTS systems demo.
Ajinkya Kulkarni\(\star\), Atharva Kulkarni, Sara Abedalmon'em Mohammad Shatnawi\(\star\), Hanan Aldarmaki\(\star\)
\(\star\)MBZUAI UAE, Erisha Labs India
[email protected], [email protected], [email protected], [email protected]
**Index Terms**: Arabic speech corpus, text-to-speech
## 1 Introduction
Neural text-to-speech (TTS) models are becoming mainstream due to their superior performance in synthesizing intelligible and natural-sounding speech. Compared to older concatenative (e.g. [1]) or HMM-based [2] TTS models, neural models can generate raw waveform directly from text inputs without complex pre-processing and phonetic feature extraction. Neural TTS models commonly have two main components: an acoustic model that generates acoustic features (e.g. mel-spectrograms) directly from text, and a vocoder to generate a waveform from the acoustic features (see for example [3]). Fully end-to-end TTS models that combine both stages have also been explored [4]. While these neural architectures can be complex, end-to-end training alleviates the need for feature engineering and other design choices that are prone to be suboptimal. One of the bottlenecks in TTS system design, however, is the availability and quality of the corpus used for training. Unlike ASR datasets, where it is desirable to have a variety of speakers and recording conditions to achieve robust performance, it is far more advantageous to have consistent single-speaker corpora for TTS to achieve intelligible and natural sounding synthesis. Therefore, speech data used for training TTS models need to have more consistent acoustic features that ideally only vary along phonetic and prosodic dimensions.
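To make the two-stage pipeline concrete, the following is an illustrative skeleton only; both modules are toy stand-ins rather than the actual acoustic models (e.g. Grad-TTS, Glow-TTS) and neural vocoders used in practice:

```python
# Minimal sketch of the acoustic-model + vocoder pipeline (toy stand-ins).
import torch
import torch.nn as nn

class AcousticModel(nn.Module):
    """Maps character IDs to a mel-spectrogram (placeholder internals)."""
    def __init__(self, vocab_size=64, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 128)
        self.proj = nn.Linear(128, n_mels)

    def forward(self, text_ids):                 # (batch, chars)
        return self.proj(self.embed(text_ids))   # (batch, chars, n_mels)

class Vocoder(nn.Module):
    """Maps mel frames to waveform samples (placeholder internals)."""
    def __init__(self, n_mels=80, hop=256):
        super().__init__()
        self.up = nn.Linear(n_mels, hop)

    def forward(self, mel):                      # (batch, frames, n_mels)
        return self.up(mel).flatten(1)           # (batch, frames * hop)

text_ids = torch.randint(0, 64, (1, 20))         # a dummy 20-character utterance
wave = Vocoder()(AcousticModel()(text_ids))
print(wave.shape)  # torch.Size([1, 5120])
```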
Most existing corpora for Arabic TTS are carefully designed and reduced datasets that are optimized for phonetic coverage while maintaining a relatively small number of units [5][6]. This choice is partially a remnant of early concatenative models that have a real-time computational cost proportional to the size of the dataset. Another reason for this choice is the relative difficulty of constructing consistent datasets that are suitable for TTS training, especially if they need to be annotated at the phonetic level for traditional TTS systems, so a reduced dataset that maintains phonetic coverage is more manageable to construct. For example, one of the most commonly used public TTS datasets for Arabic is the Arabic Speech Corpus (ASC) [5], which has around 3.4 hours of speech. The ASC was designed to maximize phonetic coverage using a greedy optimization strategy. While such optimization techniques are the most commonly used in TTS data construction projects, there is some evidence that a random subset of the same size could potentially lead to similar or even more natural-sounding speech synthesis [7]. In addition, for neural TTS models, quantity is more beneficial to the overall quality of the synthesized speech as they are more robust to small variations in input conditions. Moreover, neural TTS models can work directly with text utterances as input without the need for phonetic annotations, which makes the construction of larger datasets more feasible.
In this work, we construct a relatively large single-speaker corpus for the purpose of developing neural TTS systems for Arabic. In particular, the corpus consists of audio recordings by a male speaker of a book written in Classical Arabic. The audiobook is publicly available in the LibriVox project. To create a corpus for text-to-speech synthesis, we segmented the corpus into short utterances, checked for quality and consistency of recording conditions, then manually annotated the audio segments with fully diacritized transcriptions. Samples can be found at clartts.com. We will make the corpus available publicly for research use. As text transcripts were not available for Arabic audiobooks, we had to perform a manual annotation process to create the ClArTTS corpus. This corpus comprises 12 hours and 10 minutes of speech, consisting of 10,334 utterances from a single male speaker, and was sampled at 40,100 Hz. We also build several neural TTS systems using this corpus and demonstrate the quality of the synthesized speech using subjective and objective evaluations. We show the synthesis performance for both Classical and Modern Standard Arabic. Furthermore, we show the performance of the models using raw character inputs vs. phonetic inputs using a rule-based grapheme-to-phoneme algorithm.
This paper is organized as follows: Section 2 gives a brief overview of related work. Section 3 provides the details of building the ClArTTS corpus from an audiobook and the annotation process used for it. In Section 4, we present the corpus statistics and a comparison of the ClArTTS corpus with existing Arabic speech synthesis corpora. We created baseline TTS systems on two Arabic speech synthesis corpora using Glow-TTS and Grad-TTS, as described in Section 5. Furthermore, we explain the experimental setup along with the evaluation approach used to estimate the performance of the TTS systems in Section 6, followed by the conclusion in Section 7.
## 2 Related Work
Currently, Arabic speech synthesis systems are of lower quality compared to their English counterparts, largely due to the limited availability of Arabic speech synthesis corpora [8]. The most common approaches for Arabic speech synthesizers are based on either unit selection or parametric speech synthesis [9, 10, 11]. However, many speech synthesis corpora for English have been developed using audiobooks for which text transcripts are readily available, whereas Arabic audiobooks lack these transcripts, making it difficult to develop speech synthesis corpora [8].
In this study, we present the ClArTTS corpus, which is based on an audiobook with manually annotated text transcripts. The Arabic Speech Corpus (ASC) contains around 3.4 hours of south Levantine Arabic speech recorded at 48 kHz using fully diacritized text collected from Aljazeera Learn, a language learning website [12]. Diphone-based greedy optimization strategies were used to reduce the size of the transcripts, and nonsense or dummy utterances were recorded to cover the gaps of underrepresented phonemes.
Another approach proposed a fully unsupervised framework to build a TTS system using broadcast news recordings [13]. They used both manual and automatic dataset selection and transfer learning, pre-training the TTS model on a high-resource language using the LJSpeech dataset and fine-tuning it with one hour of Arabic speech. In NatiQ [14], a Tacotron 2 [15] based Arabic TTS system, high-quality speech data was recorded from two speakers at a sampling rate of 44 kHz. In another study, a pre-recorded audiobook from the Masmoo3 audiobooks website was used to create a 4-hour Arabic speech synthesis corpus for TTS applications [16]. The Balanced Arabic Corpus (BAC) was explicitly designed to ensure phonetically balanced Arabic speech for unit-selection and rule-based speech synthesis approaches [17]. The main objective of the BAC corpus was to ensure that all potential phonemes and some impossible phoneme combinations between words were included.
## 3 Corpus Construction
In this section, we describe the steps involved in building our Classical Arabic speech synthesis corpus: audio pre-processing, the annotation process, and final corpus creation. Corpus statistics are presented in the following section.
### Audio Pre-processing
For the creation of a Classical Arabic text-to-speech (ClArTTS) corpus, we selected an audiobook recorded by a single speaker from the LibriVox project1. The classical book is titled _Kitab Adab al-Dunya w'al-Din_ by Abu al-Hasan al-Mawardi (972-1058 AD). The audiobook is recorded by a single speaker and consists of approximately 16 hours of audio without accompanying text. While scanned copies of the book exist, we opted for manual annotation of the audio data to create text transcripts that truly match the audio recording, using the Praat annotation tool2.
Footnote 1: www.LibriVox.org
Footnote 2: [https://www.fon.hum.uva.nl/praat/](https://www.fon.hum.uva.nl/praat/)
The audiobook consists of 20 long audio files, each representing a chapter of the book in MP3 format. We converted this audio to WAV format using the _ffmpeg_ command-line tool to ensure compatibility with the Praat program. We kept the original sampling rate of 40,100 Hz. We ran a rule-based Praat script to mark pauses and speech segments in the long audio files. This script created a TextGrid object for a LongSound object and set boundaries at pauses based on intensity analysis. We validated the marking of pauses and speech segments provided by the Praat tool using energy-based VAD from the Kaldi toolkit3.
Footnote 3: [https://kaldi-asr.org](https://kaldi-asr.org)
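As a concrete illustration of this conversion step, the following Python sketch drives the same _ffmpeg_ invocation; the `chapters/` directory and file names are hypothetical, and only the WAV output format and the preserved 40,100 Hz sampling rate come from the description above.

```python
import subprocess
from pathlib import Path

SAMPLE_RATE = 40100  # original sampling rate of the audiobook, kept unchanged

# Convert each chapter MP3 to WAV for compatibility with Praat.
for mp3_path in sorted(Path("chapters").glob("*.mp3")):
    wav_path = mp3_path.with_suffix(".wav")
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(mp3_path),
         "-ar", str(SAMPLE_RATE),  # keep the 40,100 Hz rate
         str(wav_path)],
        check=True,
    )
```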
### Annotation Process
The process of annotating an audiobook involved transcribing audio content into written text, along with additional tags for speech pauses, background noise, inaudible speech segments, and stuttering. The Praat tool was used for the annotation, and the annotators were given TextGrid Praat files that contained the audio recording and a framework for marking speech and pause segments. This helped the annotators efficiently and accurately transcribe the speech segments into written text.
A team of three Arabic annotators was involved in the transcription process to ensure a reliable and accurate final transcript that considered multiple perspectives. To enhance the quality of the transcripts, two rounds of validation were conducted. The first validation was done by the annotators themselves, followed by a check by two other annotators for accuracy and consistency. The text transcripts were marked with Arabic diacritical marks to increase the accuracy of the transcripts for speech analysis and pronunciation.
In addition to the TextGrid Praat files, the annotators were also given a text image of the original book for reference. This made it easier for the annotators to transcribe the speech segments accurately by referring to the original text. Guidelines were provided to the annotators during the annotation process, including instructions for using abbreviations, numbers, special characters, and punctuation according to Arabic language rules. Specific speech segments were marked with tags, including [B] for background noise, [H] for stuttering or hesitation, [*] for unclear speech, and [O] for human noise. The combination of the Praat tool, three annotators, two levels of validation, text transcripts with Arabic diacritization markers, and reference materials helped ensure the accuracy and reliability of the final transcripts.
### Final Corpus Creation
The total amount of original audio is around 16 hours, spanning 20 chapters, and it was recorded in multiple sessions. We observed slight variations in speaking style between the chapters, even though the style was neutral (non-emotional) overall. Therefore, we conducted subjective listening tests by listening to random parts of each chapter and removed three chapters that diverge in
speaking style compared to the rest. We split each long audio file using the TextGrid obtained through the Praat tool and the manual annotation process, with speech and silence segments. To ensure high audio quality, we used the signal-to-noise ratio (SNR) to guide the selection process. We estimated the SNR via waveform amplitude distribution analysis [1], taking into account the noise power in silence (non-speech) segments adjacent to the given speech segment. We used a threshold value of 20 dB SNR for the first level of speech segment selection.
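The sketch below illustrates the selection rule with a simple energy-ratio SNR computed against the adjacent silence segment; this is a simplification of our own for exposition, not the waveform amplitude distribution analysis estimator used in the actual pipeline.

```python
import numpy as np

def energy_snr_db(speech: np.ndarray, adjacent_silence: np.ndarray) -> float:
    """Crude SNR estimate: mean speech power over the noise power measured
    in the silence segment adjacent to the speech segment."""
    speech_power = np.mean(speech.astype(np.float64) ** 2)
    noise_power = np.mean(adjacent_silence.astype(np.float64) ** 2) + 1e-12
    return 10.0 * np.log10(speech_power / noise_power + 1e-12)

def keep_segment(speech, adjacent_silence, threshold_db=20.0) -> bool:
    # First-level selection: keep only segments at or above 20 dB SNR.
    return energy_snr_db(speech, adjacent_silence) >= threshold_db
```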
We concatenated adjacent speech segments to create a minimum speech segment duration of 2 seconds. Furthermore, during the concatenation process, we kept only 20% of a silence segment between two speech segments if the silence segment duration exceeded the average silence duration computed across the given long audio. We also removed the preamble speech segments, during which the reader briefly talked about the LibriVox project, stated their name and book information, and may have mentioned copyright descriptions or LibriVox project-related content.
During the segmentation process, we ensured that each segmented speech utterance had a duration of at least 2 seconds and a maximum duration of 10 seconds. Furthermore, we also observed that the Praat pause-marking script was unable to tag the last silence segments. Therefore, we manually removed the silence frames in the last audio segments marked by the Praat tool. We also removed the speech segments whose text transcripts contained non-Arabic characters.
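A minimal sketch of this duration-constrained merging is shown below, assuming speech segments arrive as (start, end) pairs in seconds; the real pipeline additionally trims inter-segment silences, which this sketch omits.

```python
def merge_segments(segments, min_dur=2.0, max_dur=10.0):
    """Greedily merge adjacent (start, end) speech segments so each emitted
    utterance is at least min_dur seconds; merging never extends an utterance
    beyond max_dur. A single natural segment longer than max_dur is emitted
    as-is (splitting such segments is out of scope for this sketch)."""
    utterances, cur = [], None
    for start, end in segments:
        if cur is not None and end - cur[0] <= max_dur:
            cur[1] = end          # absorb the adjacent segment
        else:
            # start fresh; any unfinished fragment here was < min_dur and is dropped
            cur = [start, end]
        if cur[1] - cur[0] >= min_dur:
            utterances.append(tuple(cur))  # long enough: emit it
            cur = None
    return utterances  # a trailing fragment shorter than min_dur is dropped
```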
We used 3.34% of the corpus as the test set and 96.66% as the training set. All text files were saved in UTF-16 encoding, and non-Arabic characters were removed. The number of training speech utterances was 10,000, and the number of test speech utterances was 334. The total duration of the training data is 11 hours and 45 minutes, and that of the test data is 25 minutes.
## 4 Corpus statistics
The corpora that are recorded specifically for the purpose of speech synthesis typically follow a specific procedure to maximize phonetic coverage while minimizing total corpus size [6]. However, since we do not record the corpus and instead use a pre-existing audiobook, we are constrained only by the size of the audiobook. As a result, ClArTTS may not include all possible phonetic combinations, but instead follows the phonetic distribution of the language. In Figure 1, we compare the monophone coverage across three corpora, namely the Arabic Speech Corpus (ASC), the Tashkeela Arabic diacritization text corpus [18], and the presented ClArTTS corpus. Figure 1 indicates a similar monophone distribution, derived from the text, for all the corpora.
We compare our corpus statistics with the Balanced Arabic Corpus (BAC) described in [6] and the Arabic Speech Corpus (ASC) in Table 1. ClArTTS is the largest corpus in terms of the number of sentences, words, unique words, phonemes, and diphones, indicating that it is a more extensive and diverse corpus than the other two. ASC has the second-largest number of sentences and words, but its counts of unique words, phonemes, and diphones are lower than those of ClArTTS. BAC is the smallest corpus in terms of all the measures listed in the table, suggesting that it may not be as comprehensive or representative of Arabic speech as the other two corpora. The only statistic where we observe a shortage is the number of unique diphones. In the ASC, dummy utterances were recorded to artificially maximize the total number of diphones, even though these diphones are rare or impossible in the language. Therefore, this shortage in diphone coverage is unlikely to degrade TTS performance for most utterances. In Table 2, we present the percentage of diphone coverage in the ClArTTS corpus and a large text corpus, the Tashkeela Arabic diacritization corpus, where diphone symbols are represented using the Buckwalter transcription format. It clearly indicates that ClArTTS has diphone coverage similar to the Arabic text corpus for both the most frequent and the most infrequent diphone combinations.
The Arabic Speech Corpus displays better coverage for a few phonemes than ClArTTS, possibly due to its
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Count & BAC & ASC & ClArTTS \\ \hline Sentences & 202 & 1,913 & 10,334 \\ Words & 1,254 & 17,275 & 82,970 \\ Words/sentence (Avg) & 6 & 9 & 8 \\ Unique words & 975 & 12,144 & 27,870 \\ Phonemes & 6,174 & 135,232 & 518,682 \\ Diphones & 3,614 & 72,797 & 282,487 \\ Unique diphones & - & 682 & 520 \\ \hline \end{tabular}
\end{table}
Table 1: Corpus statistics comparison between Arabic speech corpus (ASC), Balanced Arabic corpus (BAC) and ClArTTS.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Diphone & ClArTTS & Tashkeela \\ \hline w-a & 3.62 & 3.21\% \\ l-a & 3.09 & 3.00\% \\ <-a & 2.89 & 3.53\% \\ l-aa & 2.82 & 1.6\% \\ a-l & 2.53 & 1.39\% \\ E-a & 2.32 & 2.34\% \\ m-a & 2.24 & 1.95\% \\ n-aa & 2.19 & 1.03\% \\ \hline u1-S &.00035 &.00065\% \\ u1-T &.00035 &.00175\% \\ u1-\({}^{-}\) &.00035 &.0034\% \\ i1-T &.00035 &.00163\% \\ u1-E &.00035 &.00009\% \\ A-j &.00070 &.00011\% \\ A-x &.00070 &.00082\% \\ u1-g &.00070 &.00154\% \\ \hline \end{tabular}
\end{table}
Table 2: Percentage of a subset of frequent (top) and infrequent (bottom) diphones in the ClArTTS corpus vs. a larger text corpus (Tashkeela)
Figure 1: Percentage coverage of phonemes for the ASC, ClArTTS, and Tashkeela corpora
dummy utterances. The ClArTTS corpus still has naturally better coverage for the majority of the phonemes.
## 5 Baseline TTS systems
The goal of our research was to compare the performance of two baseline text-to-speech (TTS) systems, Grad-TTS [19] and Glow-TTS [20], on the ClArTTS corpus and the Arabic Speech Corpus. We used the default network parameters as mentioned in the papers [19] and [20], respectively, for these TTS systems, without using any explicit Arabic grapheme-to-phoneme module on the text transcripts. We used the train set and test set as discussed in Section 3.3 for training the baseline TTS systems on ASC and ClArTTS. We trained the Grad-TTS and Glow-TTS systems individually on both corpora for 1000 epochs.
To synthesize the speech from the predicted mel spectrograms, we opted for a HiFi-GAN-based neural vocoder [21]. The ASC and ClArTTS corpora have speech utterances with different sampling rates, namely 48,000 Hz and 40,100 Hz. Therefore, we trained two HiFi-GAN neural vocoders to create compatibility with the different sampling rates of both corpora. We used ASC for training the HiFi-GAN neural vocoder at 48,000 Hz, while for 40,100 Hz, we used the ClArTTS corpus. We used the V1 configuration of the HiFi-GAN neural vocoder for training both neural vocoders, as detailed in [21]. We applied the short-time Fourier transform (STFT) with an FFT length of 1024, a hop length of 256, and a window size of 1024, and extracted mel spectrograms using 80 mel filters.
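For reference, a sketch of this mel-spectrogram front end with the stated STFT parameters; we use librosa here purely for illustration, since the paper does not name a feature-extraction library, and the log compression is our choice.

```python
import librosa
import numpy as np

def extract_mel(wav_path: str, sr: int) -> np.ndarray:
    """Mel spectrogram with the stated parameters: FFT length 1024,
    hop length 256, window size 1024, and 80 mel filters.
    sr is 48000 for ASC and 40100 for ClArTTS."""
    y, _ = librosa.load(wav_path, sr=sr)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=1024, hop_length=256, win_length=1024, n_mels=80)
    return np.log(np.clip(mel, 1e-5, None))  # log compression (our choice)
```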
## 6 Evaluation and Results
In Table 3, we present the performance of the baseline TTS systems and the subjective evaluation on the ASC and ClArTTS corpora. We evaluated the E2E TTS systems using a Mean Opinion Score (MOS) [22] based listening test. Each listener had to assign a score to a synthesized speech utterance on a scale from 1 to 5, considering the intelligibility, naturalness, and quality of the speech utterance. A total of 30 Arabic listeners participated in this MOS test, and the results are displayed in Table 3 with an associated 95% confidence interval. Furthermore, to validate the coherence of the subjective listening test with objective evaluation, we opted for Perceptual Evaluation of Speech Quality (PESQ) [23] as an automated assessment of audio quality, which takes into account various factors such as audio sharpness, volume, background noise, lag in audio, clipping, and audio interference. PESQ is computed on a scale from -0.5 to 4.5, where 4.5 represents the best similarity.
We used three additional objective metrics: MCD (Mel Cepstral Distortion), which measures the spectral distortion between the synthesized speech and the original speech signal; Lf0 RMSE (Root Mean Square Error of Log F0), which measures the pitch accuracy of synthesized speech; and BAP (Band Aperiodicity), which measures the accuracy of the aperiodicity component of synthesized speech. These evaluations are conducted by computing errors between reference speech utterances and synthesized speech utterances aligned using the dynamic time-warping algorithm.
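As an illustration, the following sketch computes MCD over DTW-aligned cepstral frames; the \(10\sqrt{2}/\ln 10\) constant and the exclusion of the 0th (energy) coefficient follow the common MCD convention, and the exact configuration used in the paper may differ.

```python
import numpy as np
import librosa

def mcd_db(ref_cep: np.ndarray, syn_cep: np.ndarray) -> float:
    """MCD (dB) between two (frames x dims) mel-cepstral sequences.
    Frames are aligned with dynamic time warping before comparison."""
    # librosa expects (dims x frames) feature matrices; wp is the warping path.
    _, wp = librosa.sequence.dtw(X=ref_cep.T, Y=syn_cep.T, metric="euclidean")
    const = 10.0 * np.sqrt(2.0) / np.log(10.0)
    diffs = ref_cep[wp[:, 0], 1:] - syn_cep[wp[:, 1], 1:]  # skip 0th coeff
    return const * float(np.mean(np.sqrt(np.sum(diffs ** 2, axis=1))))
```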
We selected a cosine distance-based speaker similarity score [24] to measure the consistency of the speaker's voice quality in synthesized speech. We utilized a pre-trained ECAPA-TDNN-based speaker embedding extractor [25] to compute similarity scores between synthesized speech and reference speech from the original speech synthesis corpus.
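The similarity score itself reduces to a cosine between two fixed-dimensional embeddings, as in the sketch below; extracting the ECAPA-TDNN embeddings from the waveforms is assumed to happen upstream.

```python
import numpy as np

def speaker_similarity(emb_ref: np.ndarray, emb_syn: np.ndarray) -> float:
    """Cosine similarity between the speaker embedding of a reference
    utterance and that of a synthesized utterance (higher = more similar)."""
    denom = np.linalg.norm(emb_ref) * np.linalg.norm(emb_syn) + 1e-12
    return float(np.dot(emb_ref, emb_syn) / denom)
```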
Table 3 shows that the ground-truth samples of both corpora have higher MOS scores than the synthesized speech generated by the two TTS systems. The Glow-TTS system outperforms the Grad-TTS system in terms of MOS and PESQ scores for both corpora. The ClArTTS corpus has higher MOS scores and lower MCD, Lf0 RMSE, and BAP scores than the ASC corpus, indicating that the ClArTTS corpus is easier to synthesize. Finally, the speaker similarity scores of the synthesized speech are relatively low for ASC-based TTS systems compared to their ClArTTS counterparts. Thus, it shows that ClArTTS-based systems are better at retaining the speaker's voice characteristics in synthesized speech.
## 7 Conclusion
In this work, we presented a single-speaker Classical Arabic TTS corpus named ClArTTS, based on an audiobook in a male speaker's voice. ClArTTS was developed with the aim of facilitating research on Arabic end-to-end TTS systems by providing a large-scale speech synthesis dataset consisting of a total of 12 hours and 10 minutes of annotated speech. Furthermore, we have shown a comparative study on corpus statistics with two Arabic speech synthesis corpora, namely the Arabic Speech Corpus (ASC) and the Balanced Arabic Corpus (BAC). We trained Glow-TTS and Grad-TTS on the ClArTTS corpus and ASC separately. The systems were evaluated using a subjective metric, the Mean Opinion Score, and objective metrics such as MCD, BAP, Lf0 RMSE, PESQ, and speaker similarity. The obtained results indicated a better quality of synthesized speech when using the ClArTTS corpus compared to ASC. In addition, we make the ClArTTS corpus publicly available for research purposes, along with an Arabic TTS demo and a pre-trained HiFi-GAN neural vocoder model. In the future, we would like to use transfer learning methods to exploit the large-scale ClArTTS corpus for speaker adaptation and voice cloning in the Arabic language.
\begin{table}
\begin{tabular}{|c|c||c|c|c|c|c|c|} \hline System & Corpus & MOS & PESQ & MCD & Lf0 RMSE & BAP & Speaker similarity \\ \hline GroundTruth & ASC & 4.01 & — & — & — & — & — \\ GroundTruth & ClArTTS & 4.39 & — & — & — & — & — \\ \hline Grad-TTS & ASC & 3.02 & 1.48 & 6.38 & 12.25 & 1.14 & 0.51 \\ Glow-TTS & ASC & 3.19 & 1.41 & 6.27 & 10.03 & 1.12 & 0.56 \\ \hline Grad-TTS & ClArTTS & 3.63 & 2.25 & 4.94 & 9.03 & 0.85 & 0.71 \\ Glow-TTS & ClArTTS & 3.84 & 2.23 & 4.83 & 8.04 & 0.93 & 0.78 \\ \hline \end{tabular}
\end{table}
Table 3: Evaluation metrics computed to measure the performance of baseline end-to-end TTS systems on two Arabic speech synthesis corpora, namely the Arabic Speech Corpus (ASC) and the Classical Arabic TTS corpus (ClArTTS). |
2309.12436 | Rapidash: Efficient Constraint Discovery via Rapid Verification | Denial Constraint (DC) is a well-established formalism that captures a wide
range of integrity constraints commonly encountered, including candidate keys,
functional dependencies, and ordering constraints, among others. Given their
significance, there has been considerable research interest in achieving fast
verification and discovery of exact DCs within the database community. Despite
the significant advancements in the field, prior work exhibits notable
limitations when confronted with large-scale datasets. The current
state-of-the-art exact DC verification algorithm demonstrates a quadratic
(worst-case) time complexity relative to the dataset's number of rows. In the
context of DC discovery, existing methodologies rely on a two-step algorithm
that commences with an expensive data structure-building phase, often requiring
hours to complete even for datasets containing only a few million rows.
Consequently, users are left without any insights into the DCs that hold on
their dataset until this lengthy building phase concludes. In this paper, we
introduce Rapidash, a comprehensive framework for DC verification and
discovery. Our work makes a dual contribution. First, we establish a connection
between orthogonal range search and DC verification. We introduce a novel exact
DC verification algorithm that demonstrates near-linear time complexity,
representing a theoretical improvement over prior work. Second, we propose an
anytime DC discovery algorithm that leverages our novel verification algorithm
to gradually provide DCs to users, eliminating the need for the time-intensive
building phase observed in prior work. To validate the effectiveness of our
algorithms, we conduct extensive evaluations on four large-scale production
datasets. Our results reveal that our DC verification algorithm achieves up to
40 times faster performance compared to state-of-the-art approaches. | Zifan Liu, Shaleen Deep, Anna Fariha, Fotis Psallidas, Ashish Tiwari, Avrilia Floratou | 2023-09-21T19:07:49Z | http://arxiv.org/abs/2309.12436v1 | # Rapidash: Efficient Constraint Discovery via Rapid Verification
###### Abstract.
Denial Constraint (DC) is a well-established formalism that captures a wide range of integrity constraints commonly encountered, including candidate keys, functional dependencies, and ordering constraints, among others. Given their significance, there has been considerable research interest in achieving fast verification and discovery of exact DCs within the database community. Verification entails detecting whether a given DC holds true within a specific dataset, while discovery focuses on the automated mining of DCs satisfied on the dataset. Despite the significant advancements in the field, prior work exhibits notable limitations when confronted with large-scale datasets. The current state-of-the-art exact DC verification algorithm demonstrates a quadratic (worst-case) time complexity relative to the dataset's number of rows. In the context of DC discovery, existing methodologies rely on a two-step algorithm that commences with an expensive data structure-building phase, often requiring hours to complete even for datasets containing only a few million rows. Consequently, users are left without any insights into the DCs that hold on their dataset until this lengthy building phase concludes.
In this paper, we introduce Rapidash, a comprehensive framework for DC verification and discovery. Our work makes a dual contribution. First, we establish a connection between orthogonal range search and DC verification. We introduce a novel exact DC verification algorithm that demonstrates near-linear time complexity, representing a theoretical improvement over prior work. Second, we propose an anytime DC discovery algorithm that leverages our novel verification algorithm to gradually provide DCs to users, eliminating the need for the time-intensive building phase observed in prior work. To validate the effectiveness of our algorithms, we conduct extensive evaluations on four large-scale production datasets. Our results reveal that our DC verification algorithm achieves up to \(40\times\) faster performance compared to state-of-the-art approaches. Furthermore, we demonstrate the superiority of our DC discovery algorithm by showcasing its ability to produce constraints within the initial 10 minutes of execution, while prior methods fail to generate any output within the first 48 hours of execution.
In recent years, substantial advancements have been made in the field of exact DC verification and discovery (Nakamura et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2020). However, our practical experience in applying some of these approaches to real-world production datasets has unveiled noteworthy limitations of existing methods (refer to Section 6 for comprehensive details). First, in the context of DC verification, the best-known algorithm, Facet (Wang et al., 2018), has a worst-case time and space complexity \(\Omega(|\mathbf{R}|^{2})\) on a given relation \(\mathbf{R}\) with cardinality \(|\mathbf{R}|\) (number of rows). In this work, we make a connection between the problem of DC verification and _orthogonal range search_ (Kang et al., 2018; Wang et al., 2019), a celebrated line of work in computational geometry, which studies the problem of determining which \(k\)-dimensional objects in a set intersect with a given query object. We show that it is possible to design a near-optimal algorithm for verifying a given DC over a specific dataset by leveraging techniques employed for orthogonal range search. Our proposed algorithm has a time complexity of \(O(|\mathbf{R}|\log^{f(\varphi)}|\mathbf{R}|)\), where \(f(\varphi)\) is a parameter that depends only on the characteristics of the DC \(\varphi\) and not on the input dataset \(\mathbf{R}\). This represents a theoretical improvement over prior work and translates into an order of magnitude better performance in practice.
In the context of exact DC discovery, prior work (Kang et al., 2018; Wang et al., 2018; Wang et al., 2019; Wang et al., 2020) follows a two-step process: (1) building an intermediate data structure called the _evidence set_ from the input, which is the most computationally demanding aspect of the DC discovery process (Wang et al., 2019), and (2) mining the DCs from the evidence set using various set-covering algorithms, which could also be costly depending on the number of DCs and the size of the evidence set. Our experience of applying this two-phase approach has unveiled that the time required to construct the evidence set is often prohibitive. Even for medium-sized datasets (e.g., 5 million rows and 30 columns), it can take up to several hours just to construct the evidence set. Recent work (Wang et al., 2019) has shown that parallelization can reduce the time taken for evidence-set construction. However, parallelization improves performance by a constant factor, and, thus, is not a substitute for better worst-case complexity. Moreover, it does not necessarily lead to more scalable performance as the dataset size increases since there is a limit to the degree of parallelism.
After talking to various customers to better understand their requirements and expectations regarding constraint discovery1, we came to the conclusion that this two-phase approach has a fundamental limitation that can lead to poor user experience. In particular, the approach is "all or none", i.e., to produce any DC, it needs to complete the full evidence set construction (which is time-consuming), at the end of which it reports all DCs. However, in practical scenarios, users often have distinct preferences and requirements. Firstly, they prioritize receiving confirmed DCs promptly, starting from simpler constraints and gradually progressing towards more complex ones (\(R_{1}\)). Secondly, users value the flexibility of terminating the discovery process prematurely if they are satisfied with the set of DCs already mined at any specific point in time (\(R_{2}\)). This perspective underscores the necessity for a more flexible DC-discovery process to enhance the overall user experience and motivates the need for designing a new solution.
Footnote 1: We are omitting more details due to double-anonymization considerations.
In this work, we propose a novel _anytime_(Wang et al., 2019) DC discovery algorithm that allows for progressive constraint discovery and early termination, and, thus, satisfies requirements \(R_{1}\) and \(R_{2}\). At a high-level, our algorithm performs a lattice-based traversal of the space of DCs and invokes our novel DC verification algorithm to confirm whether a given constraint holds. Unlike prior work, our algorithm does not have a blocking building phase and bypasses the evidence-set-construction-based paradigm.
**Our contributions.** Our key contribution is a general framework, Rapidash, that relies on a novel approach for exact DC verification and discovery leveraging the connection to orthogonal range search. Specifically, we make the following contributions:
1. _A novel DC verification algorithm._ We present a near-optimal algorithm for verifying a given DC on a dataset \(\mathbf{R}\). We prove that our proposed algorithm can achieve a near-linear time and space complexity wrt. dataset size. This represents a significant improvement over the best-known verification algorithm (Wang et al., 2018), which has a worst-case quadratic complexity (both time and space). Further, we show that in certain scenarios, our algorithm can run in only linear space while still achieving provably sub-quadratic running time.
2. _Efficient DC discovery._ We introduce the problem of _anytime_ DC discovery and propose a lattice-based algorithm that relies on our novel DC verification algorithm to provide better performance than prior work, which relies on evidence sets.
3. _Experimental evaluation._ We conduct an extensive empirical evaluation over four production datasets that are an order of magnitude larger than those used in prior work. We show that Rapidash achieves up to 40\(\times\) speedup over the state of the art (Wang et al., 2018) for exact DC verification. For DC discovery, our anytime algorithm can produce all single-column DCs (e.g., single-column candidate keys, columns with identical values, etc.) within the first 10 minutes, while prior work (Kang et al., 2018; Wang et al., 2019) fails to produce any output within the first 48 hours. We also show that Rapidash scales better than prior work.
## 2. Background
In this section, we provide background on terminology and notations that will be used throughout the paper. We also define the two problems that we are tackling (DC verification and discovery).
**Relations.** Let \(\mathbf{R}\) be the input relation and \(\mathsf{vars}(\mathbf{R})\) denote the finite set of attributes (i.e. the columns). We use \(|\mathbf{R}|\) to denote the cardinality of the relation. We will use \(\mathsf{A}\), \(\mathsf{B}\) to denote attributes, \(s\) and \(t\) to denote _tuples_, and \(t.\mathsf{A}\) to denote the value of attribute \(\mathsf{A}\) in tuple \(t\). Throughout the paper, we assume bag semantics where the relation can have the same tuple present multiple times.
**Denial Constraints (DCs).** DCs express predicate conjunctions to determine conflicting combinations of column values. They generalize other integrity constraints, including unique column combinations, functional dependencies, and order dependencies. We define a predicate \(p\) as the expression \(s.\mathsf{A}\ \mathsf{op}\ t.\mathsf{B}\) where \(s,t\in\mathbf{R}\), \(\mathsf{op}\in\{=,\neq,\geq,>,\leq,<\}\) and \(\mathsf{A},\mathsf{B}\in\mathsf{vars}(\mathbf{R})\). We will refer to \(\neq\) as dis-equality and \(\geq,>,\leq,<\) as inequalities. All operators except equality will be collectively referred to as non-equality operators. A DC \(\varphi\) is a conjunction of predicates of the following form:
\[\forall s,t\in\mathbf{R},s\neq t:\quad\neg(p_{1}\wedge\cdots\wedge p_{m})\]
A tuple pair \((s,t)\) is said to be a violation if all predicates in \(\varphi\) evaluate to true. We will say that \(\varphi\) holds on \(\mathbf{R}\) if there are no violations, i.e., the DC is _exact_. An exact DC is said to be minimal if no proper subset of its predicates forms another exact DC.
A predicate is said to be _homogeneous_ if it is of the form \(s.\mathsf{A}\ \mathsf{op}\ t.\mathsf{A}\) or \(s.\mathsf{A}\ \mathsf{op}\ s.\mathsf{B}\), i.e. it is either defined over a single column \(\mathsf{A}\) or it is defined over a single tuple but two different columns, and _heterogeneous_ if it is of the form \(s.\mathsf{A}\ \mathsf{op}\ t.\mathsf{B}\). We will refer to \(s.\mathsf{A}\ \mathsf{op}\ t.\mathsf{A}\) as a row-level homogeneous predicate, since such a predicate compares across two rows, and to \(s.\mathsf{A}\ \mathsf{op}\ s.\mathsf{B}\) as a column-level homogeneous predicate, since it compares two columns of the same row.
Since most DCs of interest contain only row-level homogeneous predicates (such as ordering dependencies (Kumar et al., 2017), functional dependencies, candidate keys, etc.), for simplicity, we will use the term homogeneous DC to refer to a DC that contains only row-level homogeneous predicates. We will use the term mixed homogeneous DC to refer to DCs that contain both row- and column-level homogeneous predicates. A heterogeneous DC can contain all types of predicates. Without loss of generality, we will assume that each column of \(\mathbf{R}\) participates in at most one predicate of a homogeneous DC. We will use \(\operatorname{vars}_{\mathsf{op}}(\varphi)\) to denote the set of columns in a homogeneous DC that appear in some predicate with the operator \(\mathsf{op}\).
**Example 2**.: _Continuing from Example 1, each constraint can be expressed using a DC as follows: (1) \(\varphi_{1}:\neg(s.\textsc{SSN}=t.\textsc{SSN})\); (2) \(\varphi_{2}:\neg(s.\textsc{Zip}=t.\textsc{Zip}\wedge s.\textsc{State}\neq t. \textsc{State})\); (3) \(\varphi_{3}:\neg(s.\textsc{State}=t.\textsc{State}\land s.\textsc{Salary}<t. \textsc{Salary}\land s.\textsc{FedTaxRate}>t.\textsc{FedTaxRate})\). The universal quantification is left implicit. Let us fix our attention to \(\varphi_{3}\). Note that \(\operatorname{vars}_{=}(\varphi_{3})=\{\textsc{State}\}\), \(\operatorname{vars}_{<}(\varphi_{3})=\{\textsc{Salary}\}\), and \(\operatorname{vars}_{>}(\varphi_{3})=\{\textsc{FedTaxRate}\}\)._
_Observe that all the DCs above are homogeneous (i.e. contain only row-level homogeneous predicates). An example of a heterogeneous DC is \(\varphi_{4}:\neg(s.\textsc{Salary}<t.\textsc{FedTaxRate})\). All the DCs hold on the relation \(\mathtt{Tax}\) defined in Table 1 and are minimal exact DCs. \({}_{\blacksquare}\)_
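To make these semantics concrete, here is a brute-force Python sketch that represents a homogeneous DC as a list of (column, op, column) predicates and scans all ordered tuple pairs; this \(O(|\mathbf{R}|^{2})\) baseline is precisely what the verification algorithm of Section 4 avoids, and the dictionary-based rows are purely illustrative.

```python
from itertools import permutations

OPS = {"=": lambda a, b: a == b, "!=": lambda a, b: a != b,
       "<": lambda a, b: a < b, "<=": lambda a, b: a <= b,
       ">": lambda a, b: a > b, ">=": lambda a, b: a >= b}

def dc_holds(rows, predicates):
    """rows: list of dicts; predicates: list of (col_of_s, op, col_of_t).
    The DC holds iff no ordered pair (s, t), s != t, satisfies every predicate."""
    for s, t in permutations(rows, 2):
        if all(OPS[op](s[cs], t[ct]) for cs, op, ct in predicates):
            return False  # (s, t) is a violation
    return True

# phi_3 from Example 2: neg(s.State = t.State and s.Salary < t.Salary
#                           and s.FedTaxRate > t.FedTaxRate)
phi_3 = [("State", "=", "State"), ("Salary", "<", "Salary"),
         ("FedTaxRate", ">", "FedTaxRate")]
```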
**Predicate Space.** The space of DCs is governed by the _predicate space_, the set of all predicates that are allowed on \(\mathbf{R}\). As noted in (Sararrett et al., 2017; Sarrett et al., 2017), a predicate is meaningful when a proper comparison operator is applied to a pair of comparable attributes. Specifically, all six operators can be used on numerical attributes (i.e. continuous attributes), e.g., age and salary, but only \(=\) and \(\neq\) can be used on categorical attributes such as name and address. Two attributes are said to be comparable if: (_i_) they have the same type; (_ii_) the active domain overlap is at least 30% (Sarrett et al., 2017; Sarrett et al., 2017). For example, in Example 1, columns Salary and State are not comparable since they have different types, and SSN and Zip are not comparable since their values do not have any overlap.
### Problem Statement
We use the term _DC verification_ for the process of determining whether a DC holds on a relation \(\mathbf{R}\) and _DC discovery_2 to refer to the process of finding (some or all) exact, minimal DCs over \(\mathbf{R}\). In this paper, we focus on the following two problems.
Footnote 2: We will use the term discovery and mining interchangeably.
**Problem 1**.: _Given a relation \(\mathbf{R}\) and a DC \(\varphi\), determine whether \(\varphi\) holds on \(\mathbf{R}\)._
**Problem 2**.: _Given a relation \(\mathbf{R}\), design an efficient, anytime DC discovery algorithm._
An anytime algorithm is required to produce an increasing number of exact DCs as time progresses in a way that we have some exact DCs even if the algorithm is interrupted before it terminates.
**Computational Model.** We focus on evaluation in the main-memory setting. We assume the RAM (Kumar et al., 2017) model of computation where tuple values and integers take \(O(1)\) space and arithmetic operations on integers, as well as memory lookups, are \(O(1)\) operations. Further, we assume perfect hashing for our hash tables where insertions and deletions can be reflected in \(O(1)\) time and a hash table takes space linear in the number of entries it stores. Throughout the paper, we will consider the data complexity of the problems where the DC size is assumed to be a constant.
## 3. Limitations of Existing Solutions
We now discuss the limitations of the existing solutions for exact DC verification and discovery. In Section 6, we experimentally demonstrate some of these limitations using real-world datasets.
**DC Verification.** We begin by giving a brief description of the key ideas underlying Facet, the state-of-the-art system for DC verification. Let \(\operatorname{tids}\) denote a set of tuple identifiers. All tuples in relation \(\mathbf{R}\) can be represented as \(\operatorname{tids}_{\mathbf{R}}=\{t_{1},\ldots,t_{|\mathbf{R}|}\}\). An ordered pair \((\operatorname{tids}_{1},\operatorname{tids}_{2})\) represents all tuple pairs \((s,t)\) such that \(s\in\operatorname{tids}_{1},t\in\operatorname{tids}_{2},s\neq t\). Facet processes one predicate of the DC at a time, taking a set of ordered pairs \((\operatorname{tids}_{1},\operatorname{tids}_{2})\) as input and generating another set of ordered pairs \((\operatorname{tids}_{1}^{\prime},\operatorname{tids}_{2}^{\prime})\) that represent tuple pairs that satisfy the predicate as the output. This process is known as _refinement_ and Facet refines each predicate using specialized algorithms for each operator. The output of a refinement is consumed as the input for refining the next predicate. At the end of processing all the predicates, we get all tuple pairs that satisfy all the predicates and thus represent the violations.
**Example 3**.: _Consider the DC \(\varphi_{3}:\neg(s.\textsc{State}=t.\textsc{State}\wedge s.\textsc{Salary}<t.\textsc{Salary}\wedge s.\textsc{FedTaxRate}>t.\textsc{FedTaxRate})\). The refinement of predicate \(p_{1}:s.\textsc{State}=t.\textsc{State}\) produces the set \(\{(\{t_{2},t_{3},t_{4}\},\{t_{2},t_{3},t_{4}\})\}\) with a single ordered pair. This ordered pair represents the set of tuple pairs \((t_{2},t_{3}),\)\((t_{2},t_{4}),\)\((t_{3},t_{2}),\)\((t_{3},t_{4}),\)\((t_{4},t_{2}),\)\((t_{4},t_{3})\), since each of them satisfies \(p_{1}\). Next, this singleton set is provided as input to predicate \(p_{2}:s.\textsc{Salary}<t.\textsc{Salary}\), which produces a new set \(\{(\{t_{4}\},\{t_{2},t_{3}\}),\)\((\{t_{2}\},\{t_{3}\})\}\), since the Salary for \(t_{4}\) is smaller than that of both \(t_{2}\) and \(t_{3}\), but the Salary for \(t_{2}\) is smaller than that of \(t_{3}\). Finally, we process predicate \(p_{3}:(s.\textsc{FedTaxRate}>t.\textsc{FedTaxRate})\). However, note that none of the tuple pairs \((t_{4},t_{2}),(t_{4},t_{3}),\)\((t_{2},t_{3})\) satisfy the predicate and thus, the output is the empty set. Hence, \(\varphi_{3}\) holds on the whole dataset \(\mathtt{Tax}\). Let us modify \(\mathtt{Tax}\) by setting \(t_{4}.\textsc{FedTaxRate}\) to \(22\) and call it \(\mathtt{Tax}^{\prime}\). Then the output of the refinement by predicate \(p_{3}\) would be \(\{(\{t_{4}\},\{t_{2}\}),(\{t_{4}\},\{t_{3}\})\}\), which represents the two violations of \(\varphi_{3}\) on \(\mathtt{Tax}^{\prime}\). \(\blacksquare\)_
Facet contains algorithms that are custom-designed for the different predicate structures. We now highlight the three key sources of inefficiency in Facet.
(1) **Complexity of IEJoin.** Facet and Hydra both use IEJoin (Sarrett et al., 2017) as the algorithm for processing inequalities. The algorithm is
designed to process two inequalities at a time and thus operates on two sets of tuple pairs simultaneously (akin to two relations). The running time complexity of IEJoin is \(O(|R|\cdot|S|)\) for processing inequality joins between two relations \(R\) and \(S\) (although its space complexity is only \(O(|R|+|S|)\)). As noted in (Sutton et al., 2017), IEJoin is severely under-performing for predicates of low selectivity.
**(2) Complexity of Hash-Sort-Merge.** Since IEJoin is designed for at least two predicates with inequality, Facet proposed two novel optimizations to process DCs with a single inequality predicate: Hash-Sort-Merge (HSM) and Binning-Hash-Sort-Merge (BHSM). However, it can be shown that both HSM and BHSM still require a quadratic amount of running time and space in the worst case. Similarly, processing of predicates containing dis-equality also requires quadratic time and space in the worst case.
Since Facet processes one predicate at a time, it needs to make at least one full pass over the dataset. As we will see later, this is not always necessary.
**DC Discovery.** As mentioned in Section 1, the first (and the most expensive) step performed by existing DC discovery algorithms is the computation of the evidence set. Given a predicate space \(P\) and a pair of tuples \((s,t)\), the evidence \(e(s,t)\subseteq P\) is the subset of predicates satisfied by the tuple pair. The evidence set is the set of evidences for all tuple pairs in the dataset. For example, in Table 1, assuming the predicate space \(P=\{p_{1}:s.\textsc{SSN}\neq t.\textsc{SSN},p_{2}:s.\textsc{Zip}\neq t.\textsc{Zip},p_{3}:s.\textsc{Zip}=t.\textsc{Zip},p_{4}:s.\textsc{FedTaxRate}\neq t.\textsc{FedTaxRate},p_{5}:s.\textsc{FedTaxRate}=t.\textsc{FedTaxRate},p_{6}:s.\textsc{FedTaxRate}>t.\textsc{FedTaxRate},p_{7}:s.\textsc{FedTaxRate}<t.\textsc{FedTaxRate}\}\), the evidences \(e\) for all the tuple pairs \((t_{i},t_{j})\) are as follows (we show the cases where \(i<j\), and the rest can be derived symmetrically):
\[e(t_{1},t_{2})=\{p_{1},p_{2},p_{4},p_{6}\}, e(t_{1},t_{3})=\{p_{1},p_{2},p_{5}\}\] \[e(t_{1},t_{4})=\{p_{1},p_{2},p_{4},p_{6}\}, e(t_{2},t_{3})=\{p_{1},p_{3},p_{4},p_{7}\}\] \[e(t_{2},t_{4})=\{p_{1},p_{3},p_{4},p_{7}\}, e(t_{3},t_{4})=\{p_{1},p_{3},p_{4},p_{6}\}\]
The evidence set will contain \(4\) evidences since \(e(t_{1},t_{2})\) and \(e(t_{1},t_{4})\) are identical (and so are \(e(t_{2},t_{3})\) and \(e(t_{2},t_{4})\)). As discussed before, evidence set construction is a blocking step since discovery cannot start until the computation has been completed. The time complexity of the construction process has a (possibly) super-linear dependency on \(|\mathbf{R}|\), depending on the characteristics of the tuples and columns in the input. Our experiments in Section 6 demonstrate super-linear (closer to \(|\mathbf{R}|^{3/2}\)) complexity in practice on our datasets. In terms of space, in the worst case, the size of the evidence set could be as large as \(|\mathbf{R}|^{2}\), which is undesirable.
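For intuition about the cost of this step, the following sketch computes the evidence set naively; every ordered tuple pair must be examined before a single DC can be mined, which is the blocking behavior Rapidash sidesteps. The predicate encoding mirrors the sketch in Section 2 and is illustrative only.

```python
from itertools import permutations

def evidence_set(rows, predicate_space):
    """Naive evidence-set construction over a bag of rows (dicts).
    predicate_space maps a name (e.g. 'p1') to (col_of_s, op, col_of_t).
    Touches all Theta(|R|^2) ordered pairs before discovery can begin."""
    ops = {"=": lambda a, b: a == b, "!=": lambda a, b: a != b,
           "<": lambda a, b: a < b, ">": lambda a, b: a > b}
    evidences = set()
    for s, t in permutations(rows, 2):
        e = frozenset(name for name, (cs, op, ct) in predicate_space.items()
                      if ops[op](s[cs], t[ct]))
        evidences.add(e)  # identical evidences collapse, as in the example
    return evidences
```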
These drawbacks motivate the necessity for designing a new algorithm that has the _anytime_ property. The reader may wonder whether it is possible to adjust evidence set construction to enable an anytime DC discovery algorithm that starts emitting simpler constraints progressing towards more complex ones over time. Intuitively, such an algorithm would be possible if we could create an evidence set catered to DC constraints consisting of one predicate only, discover the ones that are satisfied and return them to the user, increment the existing evidence set to cover constraints with two predicates and repeat the process until the full space of constraints has been explored or the user terminates the process. However, as shown in the example above, the evidence set construction relies on the predicate space and not the DC constraint space. As a result, the evidence set used to mine constraints with one predicate is exactly the same as the one used to mine constraints with two predicates. Thus, incremental construction of the evidence set (and evidence set-based anytime DC discovery) is unlikely.
## 4. Rapidash Verification
In this section, we describe the Rapidash verification algorithm. Our algorithm builds appropriate data structures to store the input data (leveraging existing work on orthogonal range search), and issues appropriate queries to find violations of a given DC.
### Orthogonal Range Search
In this section, we present some background on orthogonal range search. Given a totally ordered domain \(\mathbb{N}\), let \(A\subseteq\mathbb{N}^{k}\), for some \(k\geq 1\), of size \(N\). Let \(\mathbf{L}=(\ell_{1},\ldots,\ell_{k})\) and \(\mathbf{U}=(u_{1},\ldots,u_{k})\) be such that \(\mathbf{L},\mathbf{U}\in\mathbb{N}^{k}\) and \(\ell_{i}\leq u_{i}\) for all \(i\in[k]\).
Definition 1.: _An orthogonal range search query is denoted by \((\mathbf{L},\mathbf{U})\), and its evaluation over \(A\) consists of enumerating the set_
\[Q(A)=\{a\in A\mid\bigwedge_{i\in[k]}\ell_{i}\ \mathsf{op}_{1}\ a_{i}\ \mathsf{op}_{2}\ u_{i}\},\quad\text{where }\mathsf{op}_{1},\mathsf{op}_{2}\in\{<,\leq\}.\]
In other words, \(\mathbf{L}\) and \(\mathbf{U}\) form an axis-aligned hypercube in \(k\) dimensions, and \(Q(A)\) reports all points in \(A\) that lie on/within the hypercube. The Boolean version of the orthogonal range search problem consists of determining if \(Q(A)\) is empty or not.
Example 4.: _Consider Table 1 from Example 1. Let \(A\) be the set of two-dimensional points obtained by projecting \(\mathtt{Tax}\) on \(\textsc{Salary}\) and \(\textsc{FedTaxRate}\). Let \(\mathbf{L}=(3500,5)\) and \(\mathbf{U}=(4500,22)\). Then, the orthogonal range query \((\mathbf{L},\mathbf{U})\) is asking for all points such that the \(\textsc{Salary}\) is between \(3500\) and \(4500\), and the \(\textsc{FedTaxRate}\) is between \(5\) and \(22\). In Table 1, only \(t_{4}\) satisfies the criteria (its values of \(\textsc{Salary}\) and \(\textsc{FedTaxRate}\) are \(4000\) and \(10\), respectively). Thus, the result of the orthogonal range search query \((\mathbf{L},\mathbf{U})\) is \(\{(4000,10)\}\). _
In the presentation of the algorithms, we will assume that the range search data structure is built over \(k\) dimensions and has two methods in its API:
1. booleanRangeSearch\((\mathbf{L},\mathbf{U})\): returns a Boolean value if there is a point that lies in the axis-aligned hypercube formed by \(\mathbf{L}\) and \(\mathbf{U}\). The operators \(\mathsf{op}_{1}\) and \(\mathsf{op}_{2}\) used in Definition 1 will be clear from the context in which the function is called.
2. insert\((t)\): inserts a \(k\)-dimensional tuple \(t\) into the data structure.
The two most celebrated data structures for orthogonal range search that are widely used in practice are range trees (Dong et al., 2017) and \(k\)-d trees (Dong et al., 2017). We will review their complexity and trade-offs when analyzing the complexity of our DC verification algorithm.
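A deliberately naive realization of this two-method API is sketched below: it scans all stored points, so each query costs \(O(n)\). It is functionally interchangeable with a range tree or \(k\)-d tree, which are what the complexity bounds in the next subsection actually require.

```python
class NaiveRangeIndex:
    """Linear-scan stand-in for the orthogonal range search API.
    Correct but O(n) per query; real deployments would use a range
    tree or k-d tree to obtain the bounds discussed in the paper."""

    def __init__(self, k: int):
        self.k = k
        self.points = []

    def insert(self, point):
        assert len(point) == self.k
        self.points.append(tuple(point))

    def boolean_range_search(self, L, U, strict=True) -> bool:
        """True iff some stored point a satisfies L op1 a op2 U
        coordinate-wise, with both operators `<` when strict else `<=`."""
        inside = (lambda l, a, u: l < a < u) if strict else \
                 (lambda l, a, u: l <= a <= u)
        return any(all(inside(l, a, u) for l, a, u in zip(L, p, U))
                   for p in self.points)
```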
### Verification Algorithm
In this section we present our verification algorithm that leverages prior work on orthogonal range search. Without loss of generality, we will assume that all predicates of the DC contain only equalities and inequalities but no disequality and that the DC is homogeneous. Both of these assumptions will be removed later. Finally, we assume
that the categorical columns in \(\mathbf{R}\) have been dictionary-encoded to integers, a standard assumption in line with prior work [37, 39].
Algorithm 1 describes the details for verifying a homogeneous DC \(\varphi\) over a relation \(\mathbf{R}\). On Line 1, we compute the number \(k\) of columns that appear in non-equality predicates in \(\varphi\). If \(\varphi\) contains only equality in all the predicates, then \(k=0\). For each tuple \(t\) in \(\mathbf{R}\), we first project \(t\) on all columns that participate in an equality predicate (Line 4) to get \(v\). If the projection \(v\) has not been seen before, then we insert \(v\) in the hash table \(H\) and initialize \(H[v]\) (Lines 5-9). Next, we process the projected tuple \(v\) based on whether the DC contains only equality predicates or not (Lines 10-19). If the DC only contains the equality operator in all the predicates, it is sufficient to check if there exist two tuples whose projections over \(\text{vars}_{=}(\varphi)\) are equal, which would constitute a violation. This is done by storing a count in a hash map which is incremented (Lines 17-19). If the DC contains a predicate with inequalities, we build a range search data structure of dimension \(k\). The \(k\)-dimensional point inserted into the tree is the tuple obtained by projecting \(t\) on all non-equality columns (Line 15). Before we insert, we check that the new point would not satisfy all the inequality predicates (i.e., form a violation) when paired with any previously processed point (Lines 11-15). Next, we give an example of how the algorithm works.
Figure 1 helps visualize the ideas behind the example.
**Example 5**.: _Consider the \(\mathtt{Tax}\) table from our running example and the DC \(\varphi_{3}:\neg(\mathtt{s.State}=t.\mathtt{State}\wedge\mathtt{s.Salary}<t.\mathtt{Salary}\wedge\mathtt{s.FedTaxRate}>t.\mathtt{FedTaxRate})\), which contains one equality and two inequality predicates. Algorithm 1 will first start with the equality predicate, and place \(t_{1}\) in a hash partition by hashing \(t_{1}.\mathtt{State}=\) New York. Since the range tree for the hash bucket is empty, the range search will return false and we insert \((t_{1}.\mathtt{Salary},t_{1}.\mathtt{FedTaxRate})\) in the tree. Next, we process \(t_{2}\), which is placed in a different partition since \(t_{2}.\mathtt{State}=\) Wisconsin. The algorithm performs a range search which returns false since the tree corresponding to that partition is empty. We then insert \((5000,15)\) in the tree. When \(t_{3}\) is processed, it is placed in the same partition as \(t_{2}\) since they have the same \(\mathtt{State}\) value. At this point, we have two tuples in the same partition and thus we need to consider the remaining predicates in the DC to establish whether there is a violation. Such a violation would occur in two scenarios: 1) if the tuple already present in the tree (\(t_{2}\)) has a lower \(\mathtt{Salary}\) than \(6000\) but a \(\mathtt{FedTaxRate}\) larger than \(20\), these being the corresponding values of \(\mathtt{Salary}\) and \(\mathtt{FedTaxRate}\) for tuple \(t_{3}\), or 2) if \(t_{2}\) has a salary higher than \(6000\) but a \(\mathtt{FedTaxRate}\) smaller than \(20\). To identify whether this is the case, we perform an orthogonal range search with \(\mathbf{L}=(-\infty,20)\) and \(\mathbf{U}=(6000,\infty)\) (scenario 1). Then, we also search in the inverted range \(\mathbf{L}^{\prime}=(6000,-\infty)\) and \(\mathbf{U}^{\prime}=(\infty,20)\) (scenario 2). Since \(t_{2}\) does not lie in the desired range, both range searches return false and we insert \((6000,20)\) in the tree. Finally, \(t_{4}\) is processed and placed in the same partition as \(t_{2}\) and \(t_{3}\). We thus query the tree with \(\mathbf{L}=(-\infty,10)\) and \(\mathbf{U}=(4000,\infty)\) (and the inverted \(\mathbf{L}^{\prime}=(4000,-\infty),\mathbf{U}^{\prime}=(\infty,10)\)) but no point satisfies the criteria, as shown in Figure 1. Both searches return false, we insert \(t_{4}\) in the tree, and return true (Line 20)._
_To demonstrate an example of a violation, consider Table 1 with the modified tuple \(t_{4}\) with \(\mathtt{FedTaxRate}=22\) (shown as \(t_{4}\) in red in Figure 1). Then, the range search queries would be \(\mathbf{L}=(-\infty,22),\mathbf{U}=(4000,\infty)\) and \(\mathbf{L}^{\prime}=(4000,-\infty),\mathbf{U}^{\prime}=(\infty,22)\). Then, \(t_{2}\) and \(t_{3}\) form a violation with \(t_{4}\) since both points represent a higher salary than \(4000\) but a smaller tax rate than \(22\); the range search on Line 13 returns true and the algorithm returns false. \(\Box\)_
Figure 1. Salary and FedTaxRate for each tuple in Tax. The grey (upper left quadrant centered at \(t_{2}\)) and blue shaded areas (lower right quadrant centered at \(t_{2}\)) show the regions where the tuples that could form a violation with \(t_{2}\) lie.
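To make the control flow of Algorithm 1 concrete, here is a compact Python sketch for the all-\(<\) case (predicates \(s.A=t.A\) for \(A\) in `eq_cols` and \(s.B<t.B\) for \(B\) in `lt_cols`), reusing the `NaiveRangeIndex` stand-in from Section 4.1; the argument names and helper are ours, not part of the algorithm's pseudocode.

```python
import math

def verify_homogeneous_dc(rows, eq_cols, lt_cols):
    """Returns True iff the DC neg(AND_A s.A = t.A  AND_B s.B < t.B) holds.
    rows are dicts; eq_cols/lt_cols list the equality/inequality columns."""
    k = len(lt_cols)
    partitions = {}  # equality key -> range structure (or count when k == 0)
    for t in rows:
        key = tuple(t[a] for a in eq_cols)
        if key not in partitions:
            if k == 0:
                partitions[key] = 1  # equality-only DC: counting suffices
                continue
            partitions[key] = NaiveRangeIndex(k)
        elif k == 0:
            return False  # a second tuple with the same key: violation
        idx = partitions[key]
        p = tuple(t[b] for b in lt_cols)
        lo, hi = (-math.inf,) * k, (math.inf,) * k
        # A stored point strictly dominated by p, or strictly dominating p,
        # would complete a violating pair (cf. Claim 1 in the proof below).
        if idx.boolean_range_search(lo, p) or idx.boolean_range_search(p, hi):
            return False
        idx.insert(p)
    return True
```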
We now establish the correctness of Algorithm 1.
Lemma 1: _Algorithm 1 correctly determines whether a homogeneous DC \(\varphi\) is satisfied._
Proof: We first show that Algorithm 1 is correct when \(\varphi\) only contains equality predicates. In this case, it is sufficient to determine whether there exist two distinct tuples \(t_{1}\) and \(t_{2}\) such that \(\pi_{\text{vars}_{-(\varphi)}}(t_{1})=\pi_{\text{vars}_{-(\varphi)}}(t_{2})\). The hash table \(H\) stores a counter for each distinct \(\pi_{\text{vars}_{-(\varphi)}}(t)\) and increments it for each tuple \(t\in R\) (line 17). Thus, the algorithm will correctly return false as soon as some counter becomes greater than one and return true only if no such \(t_{1},t_{2}\) exists.
Next, we consider the case when there exists at least one predicate with inequality. We show the proof for the case when all inequality predicate operators are \(<\), i.e., all predicates in the DC are of the form \((s.A=t.A)\) or \((s.A<t.A)\). The proof for other operators is similar. We first state the following claim.
Claim 1: _Let \(w\) be the set of attributes that appear in the predicates with inequalities. Two tuples \(t_{1}\) and \(t_{2}\) in the same partition can form a violation if and only if \(t_{1}(w)\prec t_{2}(w)\) or \(t_{2}(w)\prec t_{1}(w)\), where the notation \(t(w)\) denotes the projection, \(\pi_{w}(t)\), of tuple \(t\) on attributes \(w\)._
Here, \(\prec\) is the standard coordinate-wise strict dominance checking operator. Claim 1 follows directly from the semantics of the operator under consideration and the definition of a violation. Suppose \(t\) is the tuple being inserted in the tree. Line 13 will query the range tree with \(\mathbf{L}=(-\infty,\ldots,-\infty),\mathbf{U}=(t(w_{1}),\ldots,t(w_{k}))\) and \(\mathbf{L}^{\prime}=(t(w_{1}),\ldots,t(w_{k}))\), \(\mathbf{U}^{\prime}=(\infty,\ldots,\infty)\), where \(w_{1},\ldots,w_{k}\) are the attributes in \(w\). In other words, the algorithm searches for a point in the tree such that \(t\) is strictly smaller or larger for each of the \(k\) coordinates. The existence of such a point would imply that there exists a pair that forms a violation.
If the orthogonal range search finds no point, Claim 1 tells us that \(t\) cannot form a violation with any tuple \(s\) already present in the range tree. In each iteration of the loop, we insert one tuple into the range tree. Therefore, if \(t_{1}\) and \(t_{2}\) form a violation, it will be discovered when one of them (say \(t_{2}\)) is already inserted in the range tree and \(t_{1}\) is being processed by the for loop. This completes the proof.
**Time and Space Complexity.** We next establish the running time of the algorithm. First, observe that if \(k=0\), then the algorithm takes \(O(|\mathbf{R}|)\) time since the for loop only performs a constant number of hash table operations. If \(k\geq 1\), the algorithm performs one insertion and two Boolean orthogonal range search queries in each iteration of the for loop. Suppose the insertion time complexity, denoted by \(I(n)\), is of the form3 \(\log^{\alpha}n\) and the search time complexity is \(T(n)\) when the data structure has \(n\) points in it. The running time can be bounded as
Footnote 3: Throughout the paper, we use \(\log^{k}N\) to mean \((\log N)^{k}\) and not iterated logarithms.
\[\sum_{i=1}^{|\mathbf{R}|}\big{(}\underbrace{\log^{\alpha}i}_{ \text{insertion time}}+\underbrace{2\cdot T(i)}_{\text{query time}}\big{)}\] \[<\int_{1}^{|\mathbf{R}|+1}\log^{\alpha}i\,di+\int_{1}^{|\mathbf{ R}|+1}2\cdot T(i)\,di\] \[=O(|\mathbf{R}|\cdot\log^{\alpha}|\mathbf{R}|)+\int_{1}^{| \mathbf{R}|+1}2\cdot T(i)\,di\]
Seminal work by Overmars [35] showed that using range trees and \(k\)-d trees, one can design an algorithm with the parameters as shown in Table 2.
The integral in the second term in the equation above can be bounded by setting \(T(i)=\log^{k}i\) or \(T(i)=i^{1-1/k}\). In both cases, the second term evaluates to \(O(|\mathbf{R}|\cdot T(|\mathbf{R}|))\). For space usage, note that the hash table takes a linear amount of space in the worst case. Thus, the space requirement of the tree data structure determines the space complexity. The main result can be stated as follows.
Theorem 1: _Algorithm 1 runs in time \(O(|\mathbf{R}|\cdot(I(|\mathbf{R}|)+T(|\mathbf{R}|)))\) and uses space \(S(|\mathbf{R}|)\) when using range tree or \(k\)-d tree with parameters as shown in Table 2._
With range trees, the running time is \(O(|\mathbf{R}|\cdot\log^{k}|\mathbf{R}|)\) and space usage is \(O(|\mathbf{R}|\log^{k-1}|\mathbf{R}|)\); for \(k\)-d trees, the running time is \(O(|\mathbf{R}|^{2-\frac{1}{k}})\) and the space requirement is \(O(|\mathbf{R}|)\).
**Comparison with Facet.** Our approach is superior to Facet in three respects. First, we use polynomially less space and time in the worst case. Second, there exist instances where our algorithm saves a significant amount of time and space by early termination.
Proposition 1: _For every homogeneous DC \(\varphi\) with at least one non-equality predicate, there exists a relation \(\mathbf{R}\) such that Algorithm 1 takes \(O(1)\) time and Facet requires \(\Omega(|\mathbf{R}|)\) time._
Proof: We sketch the proof for \(\varphi:\forall s,t\in\mathbf{R},\ \neg(s.A=t.A\wedge s.B<t.B)\), which can be extended straightforwardly to other DCs of interest. We construct a relation \(\mathbf{R}(A,B)\) of size \(N\) as follows: the first tuple \(t_{1}\) is \((a_{1},b_{1})\) and the remaining \(N-1\) tuples are \((a_{1},b_{2})\) where \(b_{1}<b_{2}\). Note that \(t_{1}\) forms a violation with every other tuple in the relation. Algorithm 1 initializes one range tree when processing \(t_{1}\) (line 7) and inserts \(t_{1}\) in it (line 15). Thereafter, when tuple \(t_{2}\) is processed, the range search query (line 13) will return true and the algorithm will terminate. Note that all the operations take \(O(1)\) time since the tree only contains two tuples. However, Facet requires \(\Omega(|\mathbf{R}|)\) time for processing the refinement of the \(s.A=t.A\) predicate.
Lastly, the space requirement of Facet is relation dependent. If the machine has only linear amount of memory, Facet will be unable to
| DS | Insertion \(I(n)\) | Answering \(T(n)\) | Space \(S(n)\) |
|---|---|---|---|
| Range tree | \(O(\log^{k}n)\) | \(O(\log^{k}n)\) | \(O(n\cdot\log^{k-1}n)\) |
| \(k\)-d tree | \(O(\log n)\) | \(O(n^{1-\frac{1}{k}})\) | \(O(n)\) |

Table 2. Data structure parameters on input of size \(n\) [35]. \(k\) is the number of dimensions of the points inserted in the tree.
complete the refinements and fail. On the other hand, our solution allows verification with linear space using \(k\)-d trees. This flexibility is important for resource-constrained production scenarios.
### Generalizations and Optimizations
In the previous section, we made some assumptions on the type of constraints processed by Algorithm 1. We now gradually remove these assumptions and present appropriate examples and proofs.
**Allowing inequality heterogeneous predicates.** We first extend our algorithm to also handle heterogeneous predicates, namely predicates of the form \(s.A\) op \(t.B\), where op is \(<,\leq,>\) or \(\geq\). Let \(\varphi\) be a DC containing row-level homogeneous predicates (as before) and some inequality heterogeneous predicates. The main difference from the previous case is that we now need to generalize our procedure for computing the ranges \((\mathbf{L},\mathbf{U})\) and inverted ranges \((\mathbf{L}^{\prime},\mathbf{U}^{\prime})\) for range search. Algorithm 2 shows the generalization. The main idea is that if \(\varphi\) has a predicate \(s.C<t.D\), then when we process a new tuple \(r\), the upper bound for _attribute_ \(C\) is set to \(r.D\) in the forward check, and the lower bound for attribute \(D\) is set to \(r.C\) in the inverted check (because we are comparing attribute \(C\) of \(s\) with attribute \(D\) of \(t\) in the predicate). When \(C=D\), we recover our original algorithm. We also note that the new generalization extends our algorithm to handle the case when attributes occur in more than one predicate. Thus, a heterogeneous equality, \(s.C=t.D\), can be handled by rewriting it to \(s.C\leq t.D\wedge s.C\geq t.D\) and using the generalized range computation from Algorithm 2. The range trees store projections of tuples on the attributes that are involved in inequality predicates.
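The following is a minimal sketch of this range computation under simplifying assumptions (our own naming; only strict \(<\) predicates, with open/closed bound bookkeeping omitted):

```python
INF = float("inf")

def search_ranges(r, preds, attrs):
    # preds: list of (c, d) pairs encoding predicates s.c < t.d.
    # Returns the forward range (L, U) and the inverted range (L2, U2)
    # over the given inequality attributes.
    L  = {a: -INF for a in attrs}; U  = {a: INF for a in attrs}
    L2 = {a: -INF for a in attrs}; U2 = {a: INF for a in attrs}
    for c, d in preds:
        U[c]  = min(U[c],  r[d])   # forward: stored s with s.c < r.d
        L2[d] = max(L2[d], r[c])   # inverted: stored t with r.c < t.d
    return (L, U), (L2, U2)

r = {"C": 4, "D": 9}
print(search_ranges(r, preds=[("C", "D")], attrs=["C", "D"]))
# forward: C in (-inf, 9); inverted: D in (4, +inf)
```

Note that with \(C=D\) the same code reproduces the homogeneous ranges of the previous section.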
Heterogeneity also enables an optimization. Let \(L_{1}\) be all the attributes present in inequality predicates and referenced by \(s\), and \(L_{2}\) be those that are referenced by \(t\). For example, if \(s.C<t.D\) is a predicate in \(\varphi\), then \(L_{1}\) will contain \(C\) and \(L_{2}\) will contain \(D\). Now, rather than having one range search data structure of dimension \(|L_{1}\cup L_{2}|\), we can instead have two potentially smaller range search data structures of dimension \(|L_{1}|\) and \(|L_{2}|\) - one to perform the forward search and the other to perform the inverted search. In absence of heterogeneous predicates, we had \(L_{1}=L_{2}\) and both these were identical, but in presence of heterogeneous constraints, \(L_{1}\) and \(L_{2}\) can each be strictly smaller than their union.
**Example 6**.: _Consider the DC \(\varphi:\neg(s.\texttt{Salary}\leq t.\texttt{FedTaxRate})\). Note that \(L_{1}=\{\texttt{Salary}\}\) and \(L_{2}=\{\texttt{FedTaxRate}\}\). Suppose we are processing tuple \(r\in\mathbf{R}\). We will create two range search data structures, \(H_{1}\) (in which we will store Salary values) and \(H_{2}\) (in which we will store FedTaxRate values). Given \(r\), we first do a range search in \(H_{1}\) to check if there is a point that is no larger than \(r.\texttt{FedTaxRate}\). If there is a point, we have found a violation. Otherwise, we insert \(r.\texttt{Salary}\) in \(H_{1}\) and check whether there is a point in \(H_{2}\) that is no smaller than \(r.\texttt{Salary}\). If there is a point, we have found a violation; otherwise, we insert \(r.\texttt{FedTaxRate}\) into \(H_{2}\)._
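A minimal 1-D sketch of this example (our own naming; sorted Python lists play the role of the two range search structures):

```python
import bisect

def verify_salary_tax(rows):
    # h1 holds Salary values of tuples seen so far (playing the role of s);
    # h2 holds FedTaxRate values (playing the role of t).
    h1, h2 = [], []
    for r in rows:
        if h1 and h1[0] <= r["FedTaxRate"]:   # some earlier s.Salary <= r.FedTaxRate
            return False
        bisect.insort(h1, r["Salary"])
        if h2 and h2[-1] >= r["Salary"]:      # some earlier t.FedTaxRate >= r.Salary
            return False
        bisect.insort(h2, r["FedTaxRate"])
    return True

print(verify_salary_tax([{"Salary": 5000, "FedTaxRate": 30},
                         {"Salary": 25,   "FedTaxRate": 28}]))   # False: 25 <= 30
```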
**Allowing disequality predicates.** Any predicate \(s.A\neq t.B\) can be written as a disjunction of two predicates: \((s.A<t.B)\vee(s.A>t.B)\). Therefore, a DC containing \(\ell\) predicates with op as \(\neq\) can be equivalently written as a conjunction of \(2^{\ell}\) DCs containing no disequality operator.
If the original homogeneous DC contains no inequality predicate, then it is possible to reduce the number of equivalent DCs from \(2^{\ell}\) to \(2^{\ell-1}\). The idea is that a violation \((s,t)\) is symmetric (i.e. \((t,s)\) is also a violation) if the DC contains only equality and disequality predicates. Therefore, when converting a DC to only have inequalities, it suffices to expand one of the disequality predicates (say the last one, \(s.A\neq t.A\)) to just \((s.A<t.A)\) instead of \((s.A<t.A)\vee(s.A>t.A)\).
**Proposition 2**.: _Given a homogeneous DC \(\varphi\) with only equality and \(\ell\) disequality predicates, there exists an equivalent conjunction of \(2^{\ell-1}\) DCs that contain only equality and inequality predicates._
**Proof.** Consider the constraint \(\varphi:\neg(\phi\wedge s.A\neq t.A)\), where \(\phi\) is a conjunction of homogeneous equality and disequality predicates. Let \((q,r)\) be a violation of \(\varphi\). Without loss of generality, we assume that \(q.A<r.A\), and then \((q,r)\) is also a violation of \(\varphi_{1}:\neg(\phi\wedge s.A<t.A)\). Since \(\phi\) only contains equality and disequality predicates, \((r,q)\) also satisfies \(\phi\) by symmetry, and therefore \((r,q)\) is a violation of \(\varphi\) and of \(\varphi_{2}:\neg(\phi\wedge s.A>t.A)\). In fact, for any violation \((r,q)\) of \(\varphi\), one of \((r,q)\) and \((q,r)\) must violate \(\varphi_{1}\) while the other violates \(\varphi_{2}\). Thus, we only need to check \(\varphi_{1}\) for violations, which contains \(\ell-1\) disequality predicates and can be written as a conjunction of \(2^{\ell-1}\) DCs containing no disequality predicates by logical equivalence. \(\Box\)
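A small sketch of this rewriting (our own naming; predicates are (attribute, operator) pairs, and all non-disequality predicates are assumed symmetric, as the proposition requires):

```python
from itertools import product

def expand(preds):
    # Each '!=' splits into '<' / '>'; by the symmetry argument above, the
    # last '!=' only needs the '<' branch, so 2^(l-1) DCs suffice over 2^l.
    neqs = [i for i, (_, op) in enumerate(preds) if op == "!="]
    if not neqs:
        return [list(preds)]
    out = []
    for choice in product("<>", repeat=len(neqs) - 1):
        dc = list(preds)
        for i, op in zip(neqs[:-1], choice):
            dc[i] = (dc[i][0], op)
        dc[neqs[-1]] = (dc[neqs[-1]][0], "<")
        out.append(dc)
    return out

print(expand([("A", "="), ("B", "!="), ("C", "!=")]))
# [[('A','='),('B','<'),('C','<')], [('A','='),('B','>'),('C','<')]] -- 2 DCs, not 4
```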
Note that although Algorithm 1 is described for a single homogeneous DC, it can be extended to verify multiple DCs in an interleaved fashion. For each tuple in \(\mathbf{R}\), one can perform the processing for each DC and return as soon as any of them detects a violation. This ensures we retain the ability to terminate early whenever possible.
**Allowing only one inequality predicate.** If a DC has row homogeneous equality predicates and at most one predicate (homogeneous or heterogeneous) containing an inequality, then the verification can be done in linear time. Algorithm 3 shows the algorithm. Like the previous algorithms, we begin by partitioning the input into a hash table based on the equality predicates. Let the inequality predicate be \(s.A\) op \(t.B\). The main idea is to maintain the running minimum and maximum values for columns \(A\) and \(B\) _for each partition of the input_. Since the comparison is one-dimensional, it is sufficient to compare against the minimum (or maximum) value.
Lines 4-6 initialize these minimum and maximum values to \(+\infty\) and \(-\infty\), respectively, when we see a tuple that belongs to no existing partition. Lines 7-10 then perform the inequality check between all previously seen tuples (in this new tuple's partition \(v\)) and this new tuple. If the checks fail, we update the minimum and maximum values for partition \(v\) based on this new tuple on lines 11-14. The algorithm makes only one pass over the entire dataset and the overall time complexity is \(O(|\mathbf{R}|)\). While this optimization is simple, it has important implications. In particular, popular constraints such as functional dependencies (FD) are DCs that contain exactly one inequality predicate. Algorithm 3 recovers the standard linear time algorithm to verify FDs (Zafar et al., 2017). However, it is unclear whether Facet, in its present form, can achieve the same provable guarantee.
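A minimal sketch of this idea for the homogeneous case with operator \(<\) (our own naming and simplification; Algorithm 3 itself handles the general operator and the heterogeneous case):

```python
def verify_one_ineq(rows, eq_attrs, a):
    # One pass: per partition, keep the running (min, max) of column a. For
    # the DC not(s.key = t.key and s.a < t.a), a violation exists iff the new
    # value differs from some earlier value in its partition, in either direction.
    stats = {}
    for t in rows:
        key = tuple(t[x] for x in eq_attrs)
        if key in stats:
            lo, hi = stats[key]
            if lo < t[a] or hi > t[a]:
                return False
            stats[key] = (min(lo, t[a]), max(hi, t[a]))
        else:
            stats[key] = (t[a], t[a])
    return True

# The FD Zip -> State is such a DC (after rewriting '!=' as '<' by symmetry):
rows = [{"Zip": 98195, "State": "WA"}, {"Zip": 98195, "State": "OR"}]
print(verify_one_ineq(rows, ["Zip"], "State"))   # False: the FD is violated
```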
```
Input : Relation R
Output: List of exact, minimal DCs

 1  L <- new list()                 /* L stores exact, minimal DCs */
 2  k <- 0
 3  while k <= |vars(R)| do
 4      k <- k + 1
 5      foreach candidate phi formed using a size-k subset of vars(R) do
 6          /* minimality check borrowed from prior work */
 7          if Minimal(L, phi) and NotPruned(L, phi) and Verify(R, phi) then
 8              output phi
 9              L.append(phi)
10  return L post-processed with the implication test of Chu et al.

    Function NotPruned(L, phi):
11      foreach phi' in L do
12          p_1, ..., p_m <- predicates in phi'
13          foreach j in [m] do
14              if phi contains {p_i}_{i != j} and not(p_j) then
15                  return false
16      return true
```
**Algorithm 4** DC discovery
**Allowing mixed homogeneous constraints.** We now extend our verification algorithm to work also for mixed homogeneous constraints that can contain predicates of the form \(s.A\) op \(s.B\) as well as \(s.A\) op \(t.A\). Let \(\forall s,t:\neg\phi(s,t)\) be a mixed homogeneous denial constraint. We first rewrite \(\phi\) in the form \(\phi_{S}(s)\wedge\phi_{T}(t)\wedge\phi_{ST}(s,t)\) where \(\phi_{S}\) contains all predicates that mention only \(s\) (and not \(t\)), \(\phi_{T}\) contains all predicates that mention only \(t\) (and not \(s\)), and \(\phi_{ST}\) contains all predicates that mention both \(s\) and \(t\). The constraint \(\forall s,t:\neg\phi\) that we need to verify over a given \(\mathbf{R}\) can be equivalently rewritten as follows:
\[\begin{aligned}
\forall s,t:\neg\phi &\Leftrightarrow \forall s,t:\neg(\phi_{S}(s)\wedge\phi_{T}(t))\vee\neg\phi_{ST}(s,t)\\
&\Leftrightarrow \forall s,t:(\phi_{S}(s)\wedge\phi_{T}(t))\Rightarrow\neg\phi_{ST}(s,t)\\
&\Leftrightarrow \forall s\in\mathbf{S}:\forall t\in\mathbf{T}:\neg\phi_{ST}(s,t)
\end{aligned}\]
where \(\mathbf{S}\) is the set of all tuples in \(\mathbf{R}\) s.t. \(\phi_{S}\) is true, and \(\mathbf{T}\) is the set of all tuples in \(\mathbf{R}\) s.t. \(\phi_{T}\) is true. Note that \(\mathbf{S}\) and \(\mathbf{T}\) can overlap.
We maintain two range search data structures (same as the \(H\) in Algorithm 1) \(H_{\mathbf{S}}\) and \(H_{\mathbf{T}}\) for points in \(\mathbf{S}\) and \(\mathbf{T}\) respectively. For each tuple (aka point) \(q\in\mathbf{R}\), we first check whether it belongs to \(\mathbf{S}\) and \(\mathbf{T}\).
1. If \(q\in\mathbf{S}\), we perform range search on \(H_{\mathbf{T}}\) to find any point \(r\) such that \(\phi_{ST}(q,r)\) is true. If there is no such point, we insert \(q\) into \(H_{\mathbf{S}}\). Otherwise, the constraint does not hold and the algorithm terminates. This step checks whether there is any previously seen point \(r\) in \(\mathbf{T}\) such that \((q,r)\) forms a violation.
2. Similarly, if \(q\in\mathbf{T}\), we perform range search on \(H_{\mathbf{S}}\) to find any point \(r\) such that \(\phi_{ST}(r,q)\) is true. If there is no such point, we insert \(q\) into \(H_{\mathbf{T}}\). Otherwise, we output false.
The correctness of the algorithm follows from the logical equivalences established above. Whenever there exist \(s\in\mathbf{S},t\in\mathbf{T}\) such that \(\phi_{ST}(s,t)\) is true, the algorithm is able to identify them no matter whether \(s\) precedes \(t\) or not in the input relation. For each point, the algorithm performs at most two range queries, and therefore the big-O complexity is the same as the original algorithm.
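A minimal sketch of this procedure (our own naming; linear scans again stand in for the range searches on \(\phi_{ST}\)):

```python
def verify_mixed(rows, phi_s, phi_t, phi_st):
    # hs / ht collect previously seen members of S and T. If the DC quantifies
    # over distinct tuples only, trivial self-pairs must additionally be excluded.
    hs, ht = [], []
    for q in rows:
        if phi_s(q):
            if any(phi_st(q, r) for r in ht):   # (q, r) with r seen earlier in T
                return False
            hs.append(q)
        if phi_t(q):
            if any(phi_st(r, q) for r in hs):   # (r, q) with r seen earlier in S
                return False
            ht.append(q)
    return True

# Mixed DC: not(s.A > 0 and t.B > 0 and s.A < t.B)
rows = [{"A": 3, "B": -1}, {"A": -5, "B": 7}]
print(verify_mixed(rows,
                   lambda s: s["A"] > 0,
                   lambda t: t["B"] > 0,
                   lambda s, t: s["A"] < t["B"]))    # False: 3 < 7
```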
## 5. Rapidash Discovery
In this section, we propose a fast DC discovery algorithm. To find all exact, minimal DCs, we use a lattice-based approach where we generate candidate DCs and leverage the verification algorithm to verify whether the DC holds.
Similar to prior works on functional dependency discovery (Zafar et al., 2017), we start with singleton sets of attributes and traverse larger sets in a level-by-level fashion. For each set of attributes, we generate candidate DCs by generating all possible predicates. Then, we apply the verification algorithm to check whether the DC holds over the input. If the DC is true, we output the DC to the user and store it in a list \(\mathcal{L}\), which is used to check the minimality of a candidate DC. We also prune candidate DCs whose validity is implied by other DCs. When \(\neg(\wedge_{i\in[m]}p_{i})\) is verified, for any \(j\in[m]\), we remove all DCs containing \(\{p_{i}\}_{i\neq j}\cup\{\neg p_{j}\}\) from the search space.
**Example 7**.: _Figure 2 shows an example search sub-space showing the first two levels of the lattice. Level one (with incoming arrows from the \(\mathtt{Root}\)) contains all DCs over a single column and level two contains candidates generated from predicates in level one. Consider the DC \(\phi_{1}:\neg(s.\mathtt{SSN}=t.\mathtt{SSN})\) and suppose that it holds. Once this DC is verified, it is added to \(\mathcal{L}\) and does not contribute any new candidates in the search space. The next candidate, \(\phi_{2}:\neg(s.\mathtt{SSN}\neq t.\mathtt{SSN})\), is superfluous as it is guaranteed to be false due to logical implication. We also remove all descendants of \(\phi_{2}\) because they will be equivalent to other DCs. For instance, \(\neg(s.\mathtt{SSN}\neq t.\mathtt{SSN}\wedge s.\mathtt{Zip}=t.\mathtt{Zip})\) is equivalent to \(\neg(s.\mathtt{Zip}=t.\mathtt{Zip})\), and the latter has already been checked on level one. On level two, the first candidate \(\neg(s.\mathtt{Zip}=t.\mathtt{Zip}\wedge s.\mathtt{State}\neq t.\mathtt{State})\) holds and is added to \(\mathcal{L}\), which helps us prune the two other candidates in the second level, which are marked in the figure. \(\Box\)_
Algorithm 4 describes the steps for DC discovery. It is straightforward to see that the time complexity of the algorithm is the product of the number of candidate DCs considered and the verification time complexity. Although the DCs in the output of Algorithm 4 are minimal and we prune candidates during the search, there may still exist exact, minimal DCs that are implied by other exact, minimal DCs. As a post-processing step, we can use the implication test algorithm proposed by Chu et al. (2014) to find as many such DCs as possible and remove them. This step is also followed by (Zhu et al., 2017; Zhu et al., 2018). Note that the implication test does not guarantee removing all of the implied DCs, and the complete test is coNP-complete (Beng et al., 2017).
Note that our algorithm can be further improved by incorporating sampling-based verification as a pre-filter, and by collecting multiple candidate DCs and using the ideas from (Zhu et al., 2017) to exploit common predicates between them. Our solution is also _embarrassingly parallel_ and can be easily extended to use multiple processors for verifying candidates. However, we intentionally keep our implementation simple since it already works well in practice for our production customer use cases. We leave incorporating these optimizations to future work.
**Comparison with prior work.** First, observe that we have the capability to output each candidate DC to the user immediately after its verification has been done. Thus, the user can terminate the algorithm at any point in time, a desirable property since the user can interrupt the algorithm and still get answers. This _anytime_ property for DC discovery is a direct consequence of faster DC verification. Indeed, since all prior methods for DC verification may require a quadratic amount of space (and thus, time), they cannot be used for our setting. Second, since the lattice is traversed in increasing size of the candidate constraints, we get the added benefit of generating _succinct_ constraints, a desirable property (Deng et al., 2017) in line with the minimum description length principle. Lastly, our proposed solution is space efficient and requires only \(O(|\mathbf{R}|+|\mathcal{L}|)\) space when the verification method uses \(k\)-d trees, allowing our solution to run on commodity machines. This property is not achievable by any other non-trivial discovery method known so far.
## 6. Experimental Evaluation
In this section, we report the results of our experimental evaluation. In particular, we seek to answer the following questions:
**(Q.1)** What is the performance improvement (time and space) of the Rapidash verification algorithm compared to Facet?
**(Q.2)** What is the impact of the optimizations proposed in Section 4.3 on the overall verification time?
**(Q.3)** How does the performance and scalability of Rapidash discovery compare to existing solutions?
### Experimental Setting
Table 3 lists twelve DCs over four production datasets that we use in our experiments\({}^{4}\). Two of these datasets contain banking information, one dataset is related to the shipping of documents and products, and the last dataset contains sales information. Each dataset contains a mix of categorical, numeric, and datetime columns. To stress-test our algorithm and prior work, we use complex data quality rules that are either discovered automatically or manually verified to be meaningful. Further, to make sure that the DCs are not trivial to falsify, we pick 3 DCs by taking a 10% sample of each dataset and discovering DCs that are true over the sample. The fourth DC (denoted by \(\varphi_{i,4}\) for dataset \(D_{i}\)) holds over the full dataset.
Footnote 4: The column names in the DCs have been omitted due to security and privacy concerns.
We ran all experiments on an Intel(R) Xeon(R) W-2255 CPU @ 3.70GHz machine with 128GB RAM running Windows 10 Enterprise (version 22H2). All of our experiments are executed over a single core and in the main memory setting. Similar to all prior work, we
Figure 2. The search space of homogeneous DC discovery on Table 1 when we only consider \(\mathtt{SSN}\), \(\mathtt{Zip}\), \(\mathtt{State}\) and limit the number of predicates to two. DCs marked as “Holds” have been verified to be true and DCs marked as “Pruned” are candidates implied by DCs that are true.
implemented our algorithm in Java. Despite our repeated attempts, we were not able to obtain the original Facet source code from the authors of (Zitrin et al., 2017). Therefore, we implemented Facet in Java using the Metanome infrastructure from (K
was a further order of magnitude lower in its space requirement compared to Rapidash(\(\bot\)), in line with the theoretical predictions presented in Section 4.
**Scalability.** To study the scalability of Rapidash, we use dataset \(D_{1}\) and vary the number of rows to understand the impact of input size on the running time of the DCs. Figure 5 shows the results when varying the dataset size of \(D_{1}\) from 0.5M to 50M rows. Both Rapidash(\(\bot\)) and Rapidash(kd) scale almost linearly for the first three DCs. For \(\varphi_{1,4}\), while Rapidash(\(\bot\)) scales linearly, Rapidash(kd) has super-linear scalability, which is in line with the expectation. The behavior of Rapidash on other datasets was also very similar. The performance gap between Facet and our solution narrows when the dataset size is small. This is expected since Facet performance depends on the sizes of cluster pairs generated by refinements. If the size of the partitions generated after processing equality predicates is small, refinement processing of other non-equality predicates is not as expensive compared to larger partitions.
**Inequality Predicate Optimization.** From the set of DCs considered, the single inequality optimization as described in Section 4.3 is applicable to \(\varphi_{3,3}\) and \(\varphi_{4,1}\)\({}^{5}\). Using the homogeneous version of Algorithm 3, we observed a speedup of 1.2x and 1.1x, respectively, compared to the fastest implementation Rapidash(\(\bot\)). The main reason for the improvement is that instead of creating a binary tree on the column in the inequality predicate, we only keep track of the minimum and maximum value of the column for the tuples present in a partition after hashing.
Footnote 5: After converting disequality to inequality
**Disequality Predicate Optimization.** The disequality predicate optimization is applicable to \(\varphi_{1,1}\), \(\varphi_{1,2}\), \(\varphi_{1,3}\) from the four DCs on \(D_{1}\). Each of the three constraints has exactly two disequality predicates. With the optimization switched on, we observed an improvement of \(2\times\) for each of the three DCs, owing to the fact that the optimization generates two candidates instead of four. Thus, the algorithm only needs to do half the work.
### DC Discovery
This section is dedicated to answering Q.3.
**Performance.** We evaluate the performance of our DC discovery algorithm (Rapidash(disc)) in comparison to Hydra and DCFinder using all four datasets. We note that our datasets are much larger than those used in prior work (Hydra and DCFinder were evaluated on datasets consisting of up to 1M rows). We run the experiments with a time limit of 48 hours. Both Hydra and DCFinder **could not finish** the computation of the evidence set within the time limit _for any dataset_. This is not surprising since the evidence set can be super-linear in the size of the dataset and has an exponential dependency on the number of columns. Therefore, even after spending a lot of computing resources, the user does not get any information at all about whether there even exists a DC or not. In contrast, Rapidash(disc) was able to discover all constraints over a single attribute (i.e. \(k=1\)) within 10 minutes for all datasets. Constraints over single attributes are already interesting since they include single-column primary keys and checks of whether columns are empty or sorted in a particular order. All constraints over pairs of attributes (\(k=2\)) are generated within one hour of starting the discovery process. Further, since Rapidash(disc) continuously outputs DCs, the user still gets useful information even if the algorithm is not allowed to run to completion.
Figure 4. Space requirement of different algorithms for DC verification. For Facet, space usage is the cardinality (in millions) of the cluster pairs constructed; for Rapidash(\(\bot\)) and Rapidash(kd), it is the number of nodes in the constructed tree.
Figure 5. Running time (in seconds) for DC verification on \(D_{1}\) with varying cardinality.
**Scalability micro-benchmark.** To understand the scaling behavior, we create a micro-benchmark where we vary the number of rows and columns in dataset \(D_{1}\) and run DCFinder\({}^{6}\) to generate all constraints over at most three columns. We intentionally keep the dataset size small to ensure that DCFinder can actually terminate. Figure 6 shows the scalability with respect to the cardinality of \(D_{1}\). DCFinder is faster than our solution when the dataset size is \(10^{5}\) rows, but its running time grows very quickly as the dataset size increases, demonstrating the super-linear running time empirically. This growth is entirely due to the evidence-set computation step. Rapidash(disc), on the other hand, has a much slower growth in running time. The same behavior was also observed when varying the number of columns but keeping the cardinality at \(5\times 10^{5}\), as shown in Figure 7. Even for only 25 columns in a small dataset, the evidence set computation becomes a blocker. Note that the jump in DCFinder running time when going from 10 to 15 columns is larger than when going from 15 to 20 columns. This is because the evidence set computation is sensitive to column cardinality and to whether a column is numeric or categorical. Recall that categorical columns only admit \(=,\neq\) as operators but numerical columns can have any operator in the predicate. When numerical columns are added, not only do they generate 6 row-level homogeneous predicates but also column-level and heterogeneous predicates. This leads to a blowup in predicate space which in turn makes the evidence set larger.
Footnote 6: We omit a microbenchmark with Hydra since DCFinder is known to be faster (Nguyen et al., 2017)
## 7. Related Work
DCs as an integrity constraint language was originally proposed by Chu et al. (Chu et al., 2014). We refer the reader to (Chu et al., 2014; Chen et al., 2014) for a general overview.
**DC Verification.** To the best of our knowledge, Facet (Facet, 2017) is the state-of-the-art algorithm for DC verification. In more detail, given a DC, Facet is able to find all constraint violation pairs, which is sufficient to detect whether a DC holds or not. Facet follows the design of VioFinder (Facet, 2017) and uses an operator called _refinement_ to evaluate DC predicates. Our proposed algorithm has better worst-case time/space complexity than Facet, which results in significant performance improvements in practice. Previous works on data cleaning (Chu et al., 2014; Chen et al., 2014; Chen et al., 2014) rely on relational DBMSs to detect DC violations, where DCs are translated into SQL queries. Those DBMS-based methods often fall short when DCs contain inequalities, and they are slower than DC-specific methods by orders of magnitude as shown in (Facet, 2017; Facet, 2017).
**DC Discovery.** The two state-of-the-art systems that cater to **exact** DC discovery are Hydra (Chu et al., 2014) and DCFinder (Nguyen et al., 2017). Both of these systems rely on the two-step process of first building the evidence set, followed by enumerating the DCs. In particular, (Chu et al., 2014) proposed a hybrid strategy that combines DC discovery on a small sample with further refinement based on DC violations on the full instance. DCFinder is designed for approximate DC discovery, but its optimizations also apply to exact DC discovery. The two-step approach has also been successfully used for other dependency discovery algorithms (Facet, 2017; Facet, 2017). Following DCFinder, several systems have been proposed for efficient approximate constraint discovery. ADCMiner (Facet, 2017) is an extension to DCFinder that supports user-defined semantics of approximation, using sampling to reduce the computation cost. ECP (ECP, 2018) introduces customized data representations, indexes and algorithms for both evidence set building and DC enumeration to achieve better parallelism. FastADC (Facet, 2018) utilizes a condensed representation of evidence sets to support more efficient bit operations and cache utilization in the evidence set building stage, and extends the evidence inversion technique from Hydra for approximate DC enumeration. All these algorithms are evidence-set based, whose limitations have been discussed extensively in this paper. Our work focuses on exact DC discovery. Extending it to account for approximate discovery and comparing with the techniques discussed above is left for future work.
The lattice-based approach for restricted classes of constraint discovery (such as functional dependencies) has been employed by several works in the past (Chu et al., 2014; Chen et al., 2014; Chen et al., 2014). However, to the best of our knowledge, methods using lattice-based discovery have not been used for general DC mining as the validation of DCs using existing algorithms is expensive. Our work remedies this issue by proposing near-optimal algorithms for verifying any DC.
**Range Searching.** The connection between geometric algorithms and general join query processing has been made by several prior works (Facet, 2017; Facet, 2017). Specifically, range searching has been used for aggregate query processing (Facet, 2017). In fact, optimizations introduced in this paper could also be applied to certain queries considered in (Facet, 2018) since DCs can be expressed as CQs with comparisons. Range trees and their variants have also been extensively used in geospatial information systems (see (Chu et al., 2014; Chen et al., 2014; Chen et al., 2014; Chen et al., 2014; Chen et al., 2014; Chen et al., 2014) for an overview) and indexes for database systems (Facet, 2018). Ours is the first work to make the connection between constraint verification and discovery using ideas from computational geometry. For an overview of the theoretical aspects of range searching, we refer the reader to (Chu et al., 2014).
Figure 6. Running time of DCFinder vs. Rapidash(disc) for varying cardinality of \(D_{1}\) with 15 columns.
Figure 7. Running time of DCFinder vs. Rapidash(disc) for varying number of columns with \(|D_{1}|=5\cdot 10^{5}\). Criss-cross hashed bar means experiment could not complete in \(24\) hours.
## 8. Conclusion and Future Work
In this paper, we studied the problem of DC verification and discovery. We presented Rapidash, a DC verification algorithm with near-linear time complexity with respect to the dataset size that leverages prior work on orthogonal range search. We also developed an anytime DC discovery algorithm which performs a lattice search driven by our verification algorithm. Unlike previous works, our discovery algorithm eliminates the reliance on the construction of evidence sets, which can be computationally expensive. Through empirical evaluation, we demonstrated that our DC verification algorithm is faster than the state of the art by an order of magnitude on large-scale production datasets. Our DC discovery algorithm is able to output valid DCs incrementally whereas existing methods fail to provide any useful information. This paper opens up a line of work that can leverage close connections between problems related to denial constraints and computational geometry. Our techniques can be extended to the approximate DC setting as well, where we wish to ensure that the number of DC violations is at most some fraction of the dataset. Potential directions for future work include considering dynamic data and the external-memory setting, as well as further improving complexity (Kumar et al., 2020; Wang et al., 2021).
# Scalable high-mobility graphene/hBN heterostructures

Leonardo Martini, Vaidotas Mišeikis, David Esteban, Jon Azpeitia, Sergio Pezzini, Paolo Paletti, Michał Ochapski, Domenica Convertino, Mar Hernandez, Ignacio Jimenez, Camilla Coletti

arXiv:2309.14721v1 (2023-09-26), http://arxiv.org/abs/2309.14721v1
###### Abstract
Graphene-hexagonal boron nitride (hBN) scalable heterostructures are pivotal for the development of graphene-based high-tech applications. In this work we demonstrate the realization of high-quality graphene-hBN heterostructures entirely obtained with scalable approaches. hBN continuous films were grown via ion beam assisted physical vapor deposition (IBAD-PVD) directly on commercially-available SiO\({}_{2}\)/Si, and used as receiving substrates for graphene single-crystal matrixes grown by chemical vapor deposition (CVD) on copper. The structural, chemical and electronic properties of the heterostructure were investigated by atomic force microscopy (AFM), Raman spectroscopy and electrical transport measurements. We demonstrate graphene carrier mobilities exceeding 10000 cm\({}^{2}\)/Vs in ambient conditions, 30% higher than those directly measured on SiO\({}_{2}\)/Si. We prove the scalability of our approach by measuring more than 100 transfer length method (TLM) devices over centimeter scale, which present an average carrier mobility of 7500 \(\pm\) 850 cm\({}^{2}\)/Vs. The reported high-quality all-scalable heterostructures are of relevance for the development of graphene-based high-performing electronic and optoelectronic applications.
Keywords: Graphene, hBN, van der Waals heterostructures, CVD, scalability, carrier mobility

1. Center for Nanotechnology Innovation @NEST, Istituto Italiano di Tecnologia, Piazza San Silvestro 12, 56127, Pisa, Italy
2. Graphene Labs, Istituto Italiano di Tecnologia, Via Morego 30, I-16163 Genova, Italy
3. Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones Cientificas, E-28049 Madrid, Spain
4. NEST, Istituto Nanoscienze-CNR and Scuola Normale Superiore, Piazza San Silvestro 12, 56127, Pisa, Italy

*[email protected], [email protected]
## Introduction
In recent years hexagonal boron nitride (hBN) has attracted attention as a promising encapsulant for graphene[1][2] and other 2D materials[3], due to its remarkable structural, chemical and electronic properties. Like graphene, hBN is a layered material with a hexagonal lattice; it can be
conveniently obtained via mechanical exfoliation from bulk crystals, and presents high chemical stability. Thanks to the small (\(\sim\)1.8%) difference in lattice parameters between graphene and hBN[4] and its atomically flat surface, hBN can be integrated into graphene-based heterostructures with an effective minimization of extrinsic disorder[5]. Moreover, hBN presents a bandgap as large as 6 eV[6][7], a dielectric constant of 3.4[8] comparable with that of silicon dioxide (SiO\({}_{2}\)) and a very high breakdown field (i.e. 21 MV/cm [9]), which make it a suitable dielectric for the realization of field-effect transistor (FET) devices. When used to encapsulate graphene and other two-dimensional (2D) materials, hBN is effective in preserving the material quality and stability[1][10] and reducing the ambient-induced contamination[11], with a beneficial effect on the electrical transport properties[12][1].
For most envisaged high-tech applications in the fields of photonics, optoelectronics and spintronics, hBN has quickly become the ideal encapsulant material, capable of yielding graphene-based devices with the required performances[13][14]. Therefore, the scalable synthesis of hBN has become a crucial field of research. hBN thin films have been obtained via chemical vapor deposition (CVD) and molecular beam epitaxy (MBE) on several metallic substrates, such as copper[15][16], platinum[17], cobalt[18], and nickel[19]. Indeed, the CVD synthesis of monolayer and few-layer hBN is by now an established technique and the material is presently commercially available[20]. However, the synthesis of hBN films with a thickness of tens of nanometers, suitable to be adopted for bottom and top graphene encapsulation, as well as for serving as a gate dielectric in electronic and photonic devices, is still considered a challenge. In the first place, there is an objective difficulty in obtaining hBN whose quality matches that of flakes exfoliated from bulk crystals[21][1][22]. Also, although progress has been reported for the CVD growth of thick hBN films both on metallic[23][24] and dielectric[25][26] substrates, there are significant challenges in identifying synthesis processes which comply with industrial requirements for CMOS integration, such as metal contamination control (below 10\({}^{10}\) atoms/cm\({}^{2}\)) [14][27]. The use of insulating substrates for the synthesis of hBN offers advantages such as the absence of metal contamination, though temperatures as high as 1400 \({}^{\circ}\)C [28] are often needed to obtain high-quality hBN, which is not appealing from an industrial point of view. Other approaches have been explored to grow BN at low temperatures, such as microwave plasma-enhanced CVD (PECVD)[29] and plasma-enhanced atomic layer deposition (ALD)[30], but both methods present safety limitations due to the use of toxic precursors such as _n_-ethylmethylamine[30]. The definition of a
scalable and safe hBN growth approach yielding controlled thickness on insulating substrates would indeed be extremely attractive.
In this work, we report the realization of high-quality hBN/graphene heterostructures by employing scalable techniques which could be of potential interest for fab integration\({}^{[33][34]}\). First, continuous films of nanocrystalline hBN with thicknesses of 10 nm are synthesized on SiO\({}_{2}\)/Si at 1000 \({}^{\circ}\)C through a physical vapor deposition (PVD) approach, namely Ion Beam Assisted Deposition (IBAD). Subsequently, arrays of monolayer graphene single-crystals are grown via chemical vapor deposition (CVD) on copper and transferred with a semi-dry approach\({}^{[33]}\) onto the target IBAD-hBN substrates.
Combined analyses of the spectroscopic, microscopic and transport properties of the heterostructure indicate that IBAD-hBN is a promising substrate for graphene devices, as it provides a high-quality landscape for the graphene carriers. When measuring 109 devices, the room temperature (RT) carrier mobility in graphene on IBAD-hBN is found to average at \(\sim\)7500 cm\({}^{2}\)/Vs, and the residual carrier density at the charge neutrality point is \(\sim\)2\(\times\)10\({}^{11}\) cm\({}^{-2}\). As-processed devices initially show displacement of the Dirac point and gate hysteresis (attributed to the presence of trapped charges at the hBN/SiO\({}_{2}\) interface), which are both significantly reduced by vacuum treatment of the heterostructure.
## 1 Materials and Methods
### 1.1. hBN growth
Nanocrystalline hBN films were grown by IBAD on commercially available p-doped silicon substrates covered with 275 nm of thermally grown SiO\({}_{2}\), using nitrogen gas and solid boron as sources. The films can be grown with thicknesses ranging from 1 to 100 nm; in this work a thickness of 10 nm was used. The lateral size of the resulting hBN film is limited by the diameter of the ion gun, and in our setup homogeneous films up to 3" wafers could be produced. Solid boron (Alfa-Aesar 12134) was evaporated using a 7 kV electron beam evaporator, while low energy nitrogen ions (average energy of 5 eV) were provided by a Kaufman ion gun fed with 5 standard cubic centimeters per minute (sccm) of high purity gas. The chamber base pressure was \(10^{-7}\) mbar, reaching \(10^{-4}\) mbar during the growth. The sample was maintained at 1000 \({}^{\circ}\)C during the growth. With the adopted growth technique it is possible to tune the properties of the material by changing the solid boron precursor from pure boron to boron-carbide (B\({}_{4}\)C), thus yielding BN films (i.e., BNC) with a limited content of carbon (\(<\) 10%) and a different dielectric constant\({}^{[31][34][35]}\).
While both kinds of BN films (with and without carbon additive) were synthesized in this work, only pure hBN films were ultimately adopted because of their higher crystallinity (see SI). Calibration of growth rates was done by contact profilometry, and the actual thickness was verified on test samples by UV-VIS spectrometry and spectroscopic ellipsometry. The quality and orientation of the adopted hBN films, which were found to exhibit a basal plane parallel to the substrate, were determined by X-ray Absorption Near Edge Structure (XANES)[36].
### 1.2. Graphene growth and transfer
Graphene single-crystal matrixes were grown by CVD in a deterministic pattern on electro-polished copper foils (Alfa-Aesar 99.8%) in a commercially available cold-wall reactor (Aixtron 4" BM Pro), as reported in previous work[33]. Specifically, the substrate was first annealed in non-reducing argon atmosphere for 10 minutes, and the growth was then performed at 1060 \({}^{\circ}\)C with an argon flow of 900 sccm, 100 sccm of hydrogen and 1 sccm of methane, at a base pressure of 25 mbar. The graphene crystal arrays were transferred onto the target substrates (i.e., SiO\({}_{2}\)/Si with and without IBAD-hBN) through a deterministic semi-dry procedure[33]. The graphene on copper foil was covered with a double polymeric membrane of PMMA/PPC and baked at 90 \({}^{\circ}\)C[37], while a few-millimetre-thick PDMS frame was applied on the edges of the sample to ensure mechanical rigidity. The graphene was delaminated from the copper in a 1 M NaOH solution [38][39] and transferred to the target substrate using a micromechanical stage to ensure deterministic placement. Once transferred, the polymer was removed by subsequent immersion in acetone and isopropanol. A 2-step cleaning using remover AR 600-71 (Allresist) was performed to ensure the cleanliness of the graphene surface[40].
### 1.3. AFM and Raman characterization
Atomic force microscopy (AFM) was used to investigate the topography of the samples; it was performed using an Anasys AFM+ tool in non-contact mode and a Bruker Dimension Icon microscope used in ScanAsyst mode. AFM micrographs were analyzed using the software Gwyddion 2.54.
Raman spectroscopy was used to characterize the crystalline quality of both graphene and hBN. Raman data were acquired using a commercial Renishaw InVia spectrometer, with a laser wavelength of 532 nm. The Raman setup is linked to a microscope with a mechanically controlled stage, thus allowing spatially-resolved micro-Raman characterization with a spot size on the order of 1 \(\upmu\)m\({}^{2}\), defined by the 100x magnification lens. Raman characterization of hBN was
performed using a laser power density of \(\sim 10\) mW/\(\upmu\)m\({}^{2}\). An acquisition time of 600 s was needed to detect representative hBN peaks[10]. Graphene was measured with a laser power density of 1.7 mW/\(\upmu\)m\({}^{2}\). The statistics reported in the paper were obtained from spectra acquired on areas of 15x15 \(\upmu\)m\({}^{2}\) with a step of 1 \(\upmu\)m.
### 1.4. Device fabrication
Optical lithography and metal thermal evaporation (50 nm of gold on top of 5 nm of chromium) were used to pattern an array of markers on top of the hBN substrate before the transfer of graphene. Hall-bar and Transfer Length Method (TLM) devices were fabricated using standard electron-beam lithography (EBL), with a Zeiss UltraPlus scanning electron microscope and a Raith Multibeam lithography module. The graphene channels were defined with the first lithographic step. Reactive ion etching (RIE) in Ar/O\({}_{2}\) atmosphere for 45 seconds was used to remove graphene from the patterned areas. Subsequently, the metal contacts were defined via a second EBL step and thermal deposition of 50 nm of gold on top of 5 nm of chromium.
### 1.5. Electrical characterization
Electrical characterization was performed at room temperature and in air, using five micrometric positioners (MPI-corporation MP40) with 7 \(\upmu\)m tungsten tips to provide the signal and check the readout. A Keithley 2450 sourcemeter, in high-voltage configuration, was used as DC source for the gate potential, with constant reading of the current to check for possible leakage from the back gate. DC measurements, both in two- and four-terminal configuration, were performed with a second Keithley 2450 sourcemeter. To assess the intrinsic electrical performance of graphene, avoiding any contribution to the resistivity arising from the contacts, four-probe measurements were performed on both Hall-bar and TLM devices. AC measurements were carried out with a Signal Recovery 7260DSP in low-frequency (\(10-100\) Hz) configuration with differential voltage read-out. A constant current was achieved using a large pre-resistor (4.7 M\(\Omega\)) in series with the measured device. The orthogonal magnetic field in the Hall measurements (up to 1000 Oe) was provided by a commercial resistive electromagnet operated at room temperature.
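As a rough sanity check of the constant-current configuration (the excitation amplitude is not stated in the text, so the 1 V value below is only an assumption): with the device resistance (a few k\(\Omega\)) much smaller than the pre-resistor, the bias current is set by

\[I\simeq\frac{V_{src}}{R_{pre}}=\frac{1\ \mathrm{V}}{4.7\ \mathrm{M\Omega}}\approx 0.21\ \upmu\mathrm{A},\]

a typical low-bias level for such devices.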
## 2 Results and discussion
The morphology of the hBN film synthesized via IBAD was examined via AFM and compared to that of SiO\({}_{2}\)/Si used as growth substrate. As shown in Figure 1a, the hBN film shows uniform nanoscale flatness over an area of 10x10 \(\upmu\)m\({}^{2}\). The morphology is qualitatively comparable to that of the bare silica substrate (Figure 1b). Indeed, we retrieve an average root mean square (RMS)
roughness of 450 pm for the SiO\({}_{2}\)/Si substrate used as target for the growth, and of 935 pm for the hBN (see Figure S1). In the insets of Figure 1a-b we report representative AFM line profiles for commercial SiO\({}_{2}\)/Si and IBAD-hBN. This result indicates that the IBAD growth process maintains the surface morphology in a range potentially suitable for high-quality graphene-based devices, even for thick hBN films. A very low roughness is in fact instrumental for a material to be used as a graphene substrate, since local strain variations are regarded as a major source of carrier scattering in graphene[41].
In Figure 1c we show the angle-dependent X-ray absorption near edge spectroscopy (XANES) study of our hBN: around 192 eV we observe the \(1s\) to \(\pi^{*}\) transition of boron [32]; the peak intensity of this transition follows a cosine-squared dependence on the incident angle[42], suggesting that the hBN crystals have a preferential orientation parallel to the substrate plane.
In Figure S1c we report a representative Raman spectrum of the synthesized hBN film. We observe two main Raman modes: the one at \(\sim 1370\) cm\({}^{-1}\) is attributed to the characteristic E\({}_{2\text{g}}\) vibrational peak of hBN, while the Si third-order transverse optical (3TO) peak [43] is located at \(\sim 1450\) cm\({}^{-1}\). The full-width-at-half-maximum (FWHM) of the E\({}_{2\text{g}}\)(hBN) peak, an indication of the material crystallinity, is 37 cm\({}^{-1}\), higher than that measured for single-crystal exfoliated hBN (\(\sim\)8 cm\({}^{-1}\)) [44], but comparable to that reported for CVD-grown hBN[45].
Figure 2a reports representative Raman spectra for graphene single-crystals transferred on SiO\({}_{2}\) (black) and on hBN (green). The characteristic graphene 2D and G
Figure 1: AFM micrograph of (a) hBN, and (b) SiO\({}_{2}\)/Si over an area of 10x10 \(\mu\)m\({}^{2}\). The color map range for both images is 0-15 nm. In inset: representative AFM line profiles of SiO\({}_{2}\)/Si (black) and hBN (green). c) B(1s) angular XANES from hBN to determine the orientation of basal planes.
peaks are located around \(\sim\)2675 cm\({}^{-1}\) and 1582 cm\({}^{-1}\), respectively, while the D-peak (\(\sim\)1350 cm\({}^{-1}\)) is absent, indicating that defects are negligible[46]. The 2D-peak can be fitted with a single Lorentzian, as expected for monolayer graphene [47], with comparable FWHM values averaging at 25 cm\({}^{-1}\) and 23 cm\({}^{-1}\) on SiO\({}_{2}\) and hBN, respectively, suggesting a low amount of strain fluctuations (Figure 2b). The FWHM of the G-peak is found to average at \(\sim\)12 cm\({}^{-1}\) and 10.5 cm\({}^{-1}\) for SiO\({}_{2}\) and hBN, respectively, as shown in Figure 2c. The A(2D)/A(G) values, reported in Figure 2f, suggest a doping level within the intrinsic limit for graphene on hBN and a Fermi energy close to 100 meV for graphene on SiO\({}_{2}\)[48]. Also, the increased I(2D)/I(G) value for graphene on hBN (Figure 2e) indicates a reduction of the doping level. In Figure 2d we report the correlation plot between the 2D- and G-peak positions[49][50]. Although the data collected for graphene on SiO\({}_{2}\)/Si present a narrower dispersion than those on hBN, compatible with the higher roughness of the hBN measured by AFM, both are indicative of slight compressive strain.
Figure 2: a) Representative Raman spectra of graphene transferred on SiO\({}_{2}\) (black) and IBAD-hBN (green). b) Distribution of the 2D-peak FWHM on the two substrates. c) Distribution of the G-peak FWHM. d) Correlation plot of the 2D-peak position as a function of the G-peak position. In d), we show as reference the dependence on strain for undoped graphene (red, according to the Grüneisen parameter[51]), as well as the dependence on doping for the unstrained case (gray). e) Histogram of the distribution of the 2D/G peak intensity ratio and f) distribution of A(2D)/A(G).
Figure 3a shows a sketch of the typical graphene field-effect transistors (g-FETs) fabricated to investigate the transport properties of graphene when transferred on top of the IBAD-hBN. Figure 3b shows a representative transfer curve for a graphene/hBN device: employing the constant-mobility model (\(R=\frac{L/W}{en\mu}\)) for this device we obtained mobilities of \(\mu_{\mathrm{e}}=9500\) cm\({}^{2}\)/Vs and \(\mu_{\mathrm{h}}=10400\) cm\({}^{2}\)/Vs for electrons and holes, respectively. Those values represent an increase of \(\sim\)30% compared to the same graphene crystals on SiO\({}_{2}\)[52][40]. In Figure 3c we show the carrier concentration \(n\) as a function of the back-gate voltage measured from the Hall effect: the obtained values are in line, within a 5% error, with the expected value \(n=\frac{\varepsilon\varepsilon_{0}(V_{Gate}-V_{Dirac})}{e\,t}\) for a 275 nm-thick SiO\({}_{2}\) plus 10 nm-thick hBN dielectric, as used in the constant-mobility model. The increase in the carrier mobility correlates with a reduction of the residual carrier density close to charge neutrality, as reported in Figure 3e: \(n^{*}\) was retrieved to be 2\(\times\)10\({}^{11}\) cm\({}^{-2}\) and 3.8\(\times\)10\({}^{11}\) cm\({}^{-2}\) for graphene on hBN and SiO\({}_{2}\), respectively. Although the morphology of the IBAD-hBN is slightly rougher than SiO\({}_{2}\), these results indicate that a higher-quality potential landscape for the graphene carriers is provided by the nanocrystalline substrate. Furthermore, the Dirac point for this device is retrieved at 5 V, which corresponds to a charge density of 3.8\(\times\)10\({}^{11}\) cm\({}^{-2}\), in line with the Raman estimation. Overall, both Raman spectroscopy[41] and electrical measurements indicate a reduced doping and residual carrier density for graphene on hBN (see Figure 2b, Figure 2e and 3e), which explain the improved transport properties measured in the graphene/hBN heterostack.
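As a numerical cross-check of these formulas, the following is a sketch with our own variable names; the \(L/W\) ratio and the 10000 cm\({}^{2}\)/Vs mobility are illustrative assumptions, not measured values:

```python
# Stack parameters from the text: 285 nm total dielectric (275 nm SiO2 + 10 nm
# hBN), relative permittivity ~3.9.
e, eps0 = 1.602e-19, 8.854e-12
t, eps_r = 285e-9, 3.9

def carrier_density(vg, v_dirac):
    # parallel-plate model n = eps_r*eps0*(Vg - V_Dirac)/(e*t), in m^-2
    return eps_r * eps0 * (vg - v_dirac) / (e * t)

n = carrier_density(vg=10.0, v_dirac=5.0)
print(f"n = {n * 1e-4:.2e} cm^-2")      # ~3.8e11 cm^-2, matching the text

mu = 1.0                                 # 10000 cm^2/Vs in SI units (m^2/Vs)
L_over_W = 2.0                           # hypothetical channel geometry
R = L_over_W / (e * n * mu)              # constant-mobility model
print(f"R = {R:.0f} ohm")                # ~3.3 kOhm
```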
It should be mentioned that the device above was measured after keeping the structure in static vacuum (\(\sim\)10 mbar) for a prolonged time (\(>\)4 months) after fabrication. When measured immediately after fabrication, the devices presented a pronounced hysteresis of the transfer curve, as shown by the red curve in Figure 3f, while after storage in vacuum the hysteresis was strongly reduced, as shown by the black curve in Figure 3f. Concurrently, we also observed a shift of the Dirac point to lower doping values (i.e., V\({}_{\textrm{Gate}}\)\(<\)10 V). Also, no significant variation in the carrier mobility was observed after vacuum storage, as reported in Figure S4. All the electrical measurements reported have been performed in ambient (not vacuum) conditions, and it should be mentioned that there was no reappearance of hysteretic behavior in the sample after vacuum storage. The gate hysteresis can be attributed to charge traps that partially screen the back-gate potential. These traps can be present
Figure 3: a) Schematic representation of a g-FET device and measurement scheme. b) Representative transfer curve of a graphene/hBN device: from the constant-mobility model we obtained mobilities of 9500 and 10400 cm\({}^{2}\)/Vs for electrons and holes, respectively. c) Carrier concentration in graphene as a function of the back-gate voltage: the values obtained from a direct Hall measurement (green dots) are in good agreement with those expected for a parallel-plate capacitor with a 285 nm-thick dielectric of dielectric constant 3.9 (red continuous line). d) Schematic representation of the charge trapping at the hBN/SiO\({}_{2}\) interface, affecting the actual field effect on graphene. e) Double-log plot of conductivity as a function of carrier concentration. The intersection of the minimum conductivity (horizontal lines) and a linear fit to ln(\(\sigma\)) vs ln(n) determines the value of the residual carrier density \(n^{*}\), for graphene on SiO\({}_{2}\) (black) and hBN (green). f) Transfer curve of a g-FET immediately after fabrication (red) and after 4 months in vacuum (black): the initial hysteresis is largely reduced, and the carrier mobility remains 30% higher than that measured on the SiO\({}_{2}\)/Si substrate.
either at the SiO\({}_{2}\)/hBN or at the hBN/graphene interface. We report that these traps are characterized by slow charging and discharging times, as we observe different transfer-curve behaviors for different gate sweeping rates (Figure S3). We were also able to reduce the hysteresis by annealing the sample at 130 \({}^{\circ}\)C overnight in high vacuum (\(10^{-9}\) mbar). However, annealing at higher temperature for shorter times affected the transport properties of the device, with a consequent mobility reduction of \(\sim\)50% (see Figure S5). Moreover, the carrier concentration in the graphene shows a linear dependence on the gate voltage (see Figure 3c), with no sign of direct charge transfer.
To further assess the location of the charge traps, we performed transport measurements in a top-gated exfoliated-hBN/graphene/IBAD-hBN/SiO\({}_{2}\) heterostructure. When using the top gate (that is, applying the gate potential through the exfoliated hBN flake), we did not observe significant hysteresis, nor any dependence on the gate sweeping speed. The electrical behaviour of the fully-encapsulated device was found to be qualitatively compatible with that measured for similar devices fabricated on SiO\({}_{2}\)/Si substrates (see Figures S7b and S7d): the fully-encapsulated graphene presented a significantly higher carrier mobility of \(\sim\)15000 cm\({}^{2}\)/Vs (at 5\(\times\)10\({}^{11}\) cm\({}^{-2}\)) and a lower residual carrier density of \(n^{*}\) = 8.5\(\times\)10\({}^{10}\) cm\({}^{-2}\), as the top hBN protects from environmental contamination. Instead, measuring the same device in back-gated configuration led to the observation of a large hysteresis (see Figure S7c), comparable to that reported by the red curve in Figure 3f. This confirms that the charge traps are present at the remote hBN/SiO\({}_{2}\) interface, likely forming during the IBAD growth process. In Figure 3d we show a representative schematic of the expected effect that charge impurities present at the hBN/SiO\({}_{2}\) interface may have on the transfer curve: when a gate potential is applied, these impurities can act as traps for the electrons, leading to a non-linear change of the electric field on the graphene with the applied gate potential. Vacuum storage appears effective in removing such traps, hence ultimately eliminating hysteretic effects.
Finally, to prove the scalability of our approach we realized and electrically characterized more than 100 graphene devices transferred on IBAD-hBN. As previously reported[33], we have developed approaches to grow single-crystal graphene in a deterministic pattern and to precisely transfer such matrixes on a desired substrate; thus, we can scale up the device statistics by realizing a matrix of TLM devices. Each TLM device was realized on a different graphene single-crystal of the array, as shown in Figure 4a-c. Fabrication of each TLM channel
Figure 4: a) Optical image of a seeded single-crystal graphene array deterministically transferred on a hBN substrate. b) Optical image of an array of TLM devices; the total number of devices tested is 109. c) Schematic representation of the devices fabricated on the graphene array deterministically transferred on an IBAD-hBN substrate, as described in Methods; the regular graphene pattern allows the realization of a matrix of identical devices with constant spacing. d) Representative transfer curves of the four channels of a TLM device. In the inset, we report the resistance measured at fixed carrier concentration (10\({}^{12}\) cm\({}^{-2}\)) as a function of the channel length for a representative device. A linear fit of resistance versus channel length makes it possible to isolate the contributions of the contact and channel resistance, and then obtain the graphene mobility [53]. e) Distribution of the measured mobility as a function of the position of the Dirac point.
(width 10 \(\upmu\)m; length varying from 10 to 25 \(\upmu\)m, in 5 \(\upmu\)m steps) within the same graphene crystal allowed us to isolate the contact and channel contributions to the resistance, and thus estimate the graphene mobility[53]; a sketch of this extraction is given below. The devices are realized with the same orientation, which implies approximately the same crystallographic orientation, as previously reported[54]; however, the electrical behavior of the devices should not be influenced by the crystallographic orientation in our measurement regime. Figure 4d reports the transfer curves of a representative TLM device. In Figure 4e we plot the carrier mobility as a function of the gate-voltage position of the Dirac point. The mobility obtained from the constant-mobility model in the array was found to be as high as 9000 cm\({}^{2}\)/Vs, with an average \(\upmu\) of \(\sim\)7500 \(\pm\) 850 cm\({}^{2}\)/Vs. The TLM sample was subjected to a shorter (few weeks) vacuum treatment with respect to the g-FET, and for this reason the average Dirac point is found to be significantly higher than expected, i.e., 20.5 \(\pm\) 4 V, while the hysteresis is still reduced. The mobility values reported indicate a substantial improvement with respect to large-scale characterization of the same CVD graphene crystals on SiO\({}_{2}\) (average \(\upmu\sim\) 5000 cm\({}^{2}\)/Vs)[55], in line with the 30% improvement reported for the g-FET devices. The results were obtained on a 1x1 cm\({}^{2}\) chip, but could be straightforwardly extended to wafer scale via multiple tile transfer of the graphene matrices[55].
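As an illustration of the TLM analysis described above, the following minimal Python sketch fits the standard relation \(R_{tot}=2R_{c}+R_{sheet}\cdot L/W\) and converts the extracted sheet resistance into a field-effect mobility; the resistance values, array sizes and the fixed carrier density are illustrative placeholders, not measured data from this work.

```python
import numpy as np

# Hypothetical TLM data: channel lengths (um) and total resistances (ohm)
# at a fixed carrier density n = 1e12 cm^-2. Values are placeholders.
L_um = np.array([10.0, 15.0, 20.0, 25.0])
R_tot = np.array([850.0, 1100.0, 1350.0, 1600.0])

W_um = 10.0          # channel width (um)
n_cm2 = 1e12         # carrier density (cm^-2)
q = 1.602e-19        # elementary charge (C)

# TLM relation: R_tot = 2*R_c + R_sheet * (L / W)
slope, intercept = np.polyfit(L_um / W_um, R_tot, 1)
R_sheet = slope          # sheet resistance (ohm/sq)
R_c = intercept / 2.0    # single-contact resistance (ohm)

# Mobility from the sheet conductivity: mu = 1 / (q * n * R_sheet)
mu = 1.0 / (q * n_cm2 * R_sheet)   # cm^2/Vs
print(f"R_sheet = {R_sheet:.0f} ohm/sq, R_c = {R_c:.0f} ohm, mu = {mu:.0f} cm^2/Vs")
```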
## 3 Conclusions
In summary, we have realized and characterized scalable vertical hBN/graphene heterostructures which allow for the realization of devices with promising electronic performance. Nanocrystalline hBN was grown via IBAD on SiO\({}_{2}\)/Si, with a thickness of 10 nanometers, which makes it suitable both as an encapsulant and as a gate dielectric. Microscopic characterization was performed to investigate the surface morphology of the scalable hBN, whose flatness was found to be comparable to that of the SiO\({}_{2}\) growth substrate. Spectroscopic and transport measurements were carried out to compare the properties of graphene transferred on commercial SiO\({}_{2}\)/Si and on hBN. We observe a relevant improvement in terms of residual carrier density and carrier mobility, which indicates that the adopted hBN provides a high-quality landscape for the graphene carriers. Also, we demonstrate that the hysteretic behaviour observed in heterostructures realized with as-received hBN can be significantly reduced by vacuum treatment of the material. To prove the scalability of our approach we tested an array of graphene crystals transferred on IBAD-hBN over centimeter scale, and obtained reproducible mobility values exceeding 7500 cm\({}^{2}\)/Vs. Future developments might concern a deeper investigation of the charge
trapping mechanism at the hBN/SiO\({}_{2}\) interface, as well as the optimization of the IBAD-hBN transfer to realize fully-encapsulated and scalable hBN/graphene/hBN heterostructures.
## Acknowledgments
The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement no. 881603-Graphene Core3.
|
2307.06851 | A Framework for Universality in Physics, Computer Science, and Beyond | Turing machines and spin models share a notion of universality according to
which some simulate all others. Is there a theory of universality that captures
this notion? We set up a categorical framework for universality which includes
as instances universal Turing machines, universal spin models, NP completeness,
top of a preorder, denseness of a subset, and more. By identifying necessary
conditions for universality, we show that universal spin models cannot be
finite. We also characterize when universality can be distinguished from a
trivial one and use it to show that universal Turing machines are non-trivial
in this sense. Our framework allows not only to compare universalities within
each instance, but also instances themselves. We leverage a Fixed Point Theorem
inspired by a result of Lawvere to establish that universality and negation
give rise to unreachability (such as uncomputability). As such, this work sets
the basis for a unified approach to universality and invites the study of
further examples within the framework. | Tomáš Gonda, Tobias Reinhart, Sebastian Stengele, Gemma De les Coves | 2023-06-30T15:35:45Z | http://arxiv.org/abs/2307.06851v3 | # A Framework for Universality in Physics, Computer Science, and Beyond
###### Abstract
Turing machines and spin models share a notion of universality according to which some simulate all others. Is there a theory of universality that captures this notion? We set up a categorical framework for universality which includes as instances universal Turing machines, universal spin models, NP completeness, top of a preorder, denseness of a subset, and more. By identifying necessary conditions for universality, we show that universal spin models cannot be finite. We also characterize when universality can be distinguished from a trivial one and use it to show that universal Turing machines are non-trivial in this sense. Our framework allows not only to compare universalities within each instance, but also instances themselves. We leverage a Fixed Point Theorem inspired by a result of Lawvere to establish that universality and negation give rise to unreachability (such as uncomputability). As such, this work sets the basis for a unified approach to universality and invites the study of further examples within the framework.
###### Contents
* 1 Introduction
* 2 The Set-Up
* 2.1 Motivation For the Set-Up
* 2.2 Ambient Category
* 2.3 Simulators
* 2.4 Behavior Structure
* 2.5 Intrinsic Behavior Structure
* 3 Reductions and Universality of Simulators
* 3.1 Definition of Universality
* 3.2 Universal Spin Models as Universal Simulators
* 3.3 Abstract Examples of Universal Simulators
* 3.4 No-Go Theorem for Universality
* 4 Comparing Simulators
* 4.1 Processings of Simulators
* 4.2 The Simulator Category
* 4.3 Parsimony of Universal Simulators |
2309.10547 | Towards Generative Modeling of Urban Flow through Knowledge-enhanced
Denoising Diffusion | Although generative AI has been successful in many areas, its ability to
model geospatial data is still underexplored. Urban flow, a typical kind of
geospatial data, is critical for a wide range of urban applications. Existing
studies mostly focus on predictive modeling of urban flow that predicts the
future flow based on historical flow data, which may be unavailable in
data-sparse areas or newly planned regions. Some other studies aim to predict
OD flow among regions but they fail to model dynamic changes of urban flow over
time. In this work, we study a new problem of urban flow generation that
generates dynamic urban flow for regions without historical flow data. To
capture the effect of multiple factors on urban flow, such as region features
and urban environment, we employ diffusion model to generate urban flow for
regions under different conditions. We first construct an urban knowledge graph
(UKG) to model the urban environment and relationships between regions, based
on which we design a knowledge-enhanced spatio-temporal diffusion model
(KSTDiff) to generate urban flow for each region. Specifically, to accurately
generate urban flow for regions with different flow volumes, we design a novel
diffusion process guided by a volume estimator, which is learnable and
customized for each region. Moreover, we propose a knowledge-enhanced denoising
network to capture the spatio-temporal dependencies of urban flow as well as
the impact of urban environment in the denoising process. Extensive experiments
on four real-world datasets validate the superiority of our model over
state-of-the-art baselines in urban flow generation. Further in-depth studies
demonstrate the utility of generated urban flow data and the ability of our
model for long-term flow generation and urban flow prediction. Our code is
released at: https://github.com/tsinghua-fib-lab/KSTDiff-Urban-flow-generation. | Zhilun Zhou, Jingtao Ding, Yu Liu, Depeng Jin, Yong Li | 2023-09-19T11:52:57Z | http://arxiv.org/abs/2309.10547v1 | # Towards Generative Modeling of Urban Flow through Knowledge-enhanced Denoising Diffusion
###### Abstract.
Although generative AI has been successful in many areas, its ability to model geospatial data is still underexplored. Urban flow, a typical kind of geospatial data, is critical for a wide range of applications from public safety and traffic management to urban planning. Existing studies mostly focus on predictive modeling of urban flow that predicts the future flow based on historical flow data, which may be unavailable in data-sparse areas or newly planned regions. Some other studies aim to predict OD flow among regions but they fail to model dynamic changes of urban flow over time. In this work, we study a new problem of urban flow generation that generates dynamic urban flow for regions without historical flow data. To capture the effect of multiple factors on urban flow, such as region features and urban environment, we employ diffusion model to generate urban flow for regions under different conditions. We first construct an urban knowledge graph (UKG) to model the urban environment and relationships between regions, based on which we design a knowledge-enhanced spatio-temporal diffusion model (KSTDiff) to generate urban flow for each region. Specifically, to accurately generate urban flow for regions with different flow volumes, we design a novel diffusion process guided by a volume estimator, which is learnable and customized for each region. Moreover, we propose a knowledge-enhanced denoising network to capture the spatio-temporal dependencies of urban flow as well as the impact of urban environment in the denoising process. Extensive experiments on four real-world datasets validate the superiority of our model over state-of-the-art baselines in urban flow generation. Further in-depth studies demonstrate the utility of generated urban flow data and the ability of our model for long-term flow generation and urban flow prediction. Our code is released at: [https://github.com/tsinghua-fib-lab/KSTDiff-Urban-flow-generation](https://github.com/tsinghua-fib-lab/KSTDiff-Urban-flow-generation).
Generative model, urban flow, knowledge graph, diffusion model
## 1. Introduction
Generative AI, especially large pre-trained models, has shown great success in many areas such as natural language processing and computer vision, with the recently popular GPT-4 (Zhou et al., 2017) and Stable Diffusion (Zhou et al., 2017) as examples. However, its ability to model geospatial data is still underexplored (Zhou et al., 2017).
Urban flow is a typical kind of geospatial data that depicts the dynamic human mobility patterns in urban regions (Zhou et al., 2017), playing an important role in public safety, traffic management, and urban planning. For example, the inflow and outflow refer to the number of people entering or leaving an urban region during a given time interval. With such data, the government can implement preventive measures in advance to avoid stampede or traffic congestion in regions experiencing a large inflow. Moreover, ride-sharing platforms can dispatch more taxis to regions with large outflows to meet the demand. In recent years, many studies have been conducted on _predictive modeling_ of urban flow. As shown in Figure 1(a), they mostly focus on urban flow prediction, which trains a prediction model based on historical flow data to predict the future urban flow of regions (Liu et al., 2018; Liu et al., 2018; Wang et al., 2018). However, these studies rely heavily on historical flow, and thus cannot be adapted to data-sparse areas such as newly planned regions or suburban regions. In addition, some studies have made efforts to predict static origin-destination flows from one region to another given their characteristics (Zhou et al., 2018; Wang et al., 2018; Wang et al., 2018). Nevertheless, they fail to capture temporal patterns since they cannot generate dynamic urban flow data, where the flow may vary significantly at different times of the day.
In this work, we take a step towards _generative modeling_ of urban flow by studying a new problem of urban flow generation, which aims to train a generation model based on existing regions in order to estimate the urban flow of new regions without historical flow data, as shown in Figure 1(b). Urban flow generation can help estimate the urban flow for data-sparse regions or evaluate the flow patterns of newly planned regions in advance of region construction, which is essential for urban planning (Wang et al., 2018).
The urban flow of a region is affected by multiple factors including regional characteristics, urban environment, and urban flow in nearby regions. Therefore, urban flow generation can be formulated as a conditional generation problem, which aims to generate flow for different regions under different conditions. Recently, diffusion models have shown outstanding performance in a wide range of tasks like image synthesis (Chen et al., 2018), audio synthesis (Liu et al., 2018), and time series
modeling [27; 32]. The key idea of diffusion models is to first gradually add noise to data until it becomes random noise, and then train a model to learn the reverse process, i.e., convert random noise into expected data distribution step by step. Moreover, additional information can be injected into each step of the reverse process, making it easy to generate data under given conditions using diffusion models [45]. As a result, we propose to leverage conditional diffusion models to solve the urban flow generation problem so as to better control the generation process by multiple factors.
However, accurately generating urban flow under diverse conditions still faces the following challenges: (1) **Significant variance in the spatial distribution of urban flow volume**. The volume of urban flow varies significantly across regions [10]. For example, newly planned regions located in suburban areas typically exhibit lower volumes than established regions in downtown areas due to underdeveloped infrastructure and transportation networks. We visualize the flow volume, defined as the average hourly flow of a region, in different regions of Washington, D.C., and Baltimore. As shown in Figure 2, the flow volumes of regions in downtown areas are generally much larger than those in suburban areas. Such spatial heterogeneity in flow volume distribution makes it a challenging task to generate urban flow accurately for different regions with different volumes without historical flow data. (2) **Highly complex and varying spatio-temporal dependencies of urban flow across different regions**. There exist complex spatio-temporal dependencies of urban flow among regions, and such dependencies are affected by the urban environment and vary across different regions. For example, in downtown business areas with dense road networks, the urban flow of a region may be largely influenced by nearby regions, resulting in strong spatial correlations. On the contrary, for residential regions in suburban areas, the flow transition among regions may be rather small, and urban flow there exhibits stronger temporal patterns, e.g., large outflows during weekday mornings and large inflows at night. Most existing diffusion models can only model temporal correlations in time series [27; 32], while they fail to consider the spatio-temporal dependencies in urban flow and the effect of the urban environment.
To tackle the complexity of modeling urban flow and regions, we construct an urban knowledge graph (UKG) to model the urban environment as well as various relationships among regions. Based on UKG, we propose a **K**nowledge-enhanced **S**patio-**T**emporal **D**iffusion model (KSTDiff) to generate dynamic urban flow for regions. Specifically, in the forward diffusion process, we gradually add Gaussian noise to the urban flow data of each region. To handle the variance of flow volume among regions, we design a volume estimator to directly estimate the flow volume for each region, which is then used to guide the diffusion process by adjusting the mean of added Gaussian noise accordingly. Moreover, the volume estimator is jointly trained with the diffusion model to make the diffusion process of each region learnable and customized, and thus overcome the first challenge. In the denoising process, we aim to train a denoising network to convert random noise to urban flow data step by step. To solve the second challenge, we design a knowledge graph enhanced spatio-temporal block (KGST block) to capture spatio-temporal dependencies of urban flow based on environmental information, where we utilize relation-aware graph convolution layers to capture multiple types of spatial dependencies based on UKG and use transformer layers to model temporal dependencies. Moreover, we leverage KG embeddings of regions with environmental information preserved to guide the spatio-temporal fusion. In this way, we manage to capture the spatio-temporal dependencies as well as the environmental factors that affect such dependencies.
Overall, our contributions can be summarized as follows:
* We study the problem of urban flow generation, which aims to generate urban flow for regions without historical flow data, by proposing a knowledge-enhanced spatio-temporal diffusion model. To our best knowledge, we are the first to introduce diffusion model for generative modeling of geospatial data.
* We employ diffusion model to control the urban flow generation process based on multiple factors. Specifically, we design a volume estimator to guide the diffusion process with estimated flow volume for each region separately. Moreover, we construct an urban KG to model the urban environment as well as complex relationships among regions, and devise a KGST block to capture spatio-temporal dependencies of urban flow.
* We conduct extensive experiments on four real-world urban flow datasets, and our model outperforms state-of-the-art models by over 13.3% in terms of MAE on all datasets, highlighting the superiority and robustness of our model. Several in-depth studies further demonstrate the effectiveness of our model design and its ability to generate long-term flow data. Moreover, experiments also demonstrate the utility of generated data for downstream applications and our model's capability for predictive modeling of urban flow.
Figure 1. Comparison of urban flow prediction and urban flow generation.
Figure 2. The average flow volume varies significantly in different regions.
## 2. Related Work
### Urban Flow Modeling
Understanding urban flow is crucial for many areas like urban planning and transportation system. In recent years, plenty of works have studied the problem of urban flow prediction through deep learning methods (Han et al., 2017; Wang et al., 2018; Wang et al., 2019), which leverage neural networks to predict the future inflow and outflow for citywide regions based on historical flow. The core of urban flow prediction is to model the spatio-temporal dependencies of urban flow. Existing works mostly use convolutional neural networks (CNN) or graph neural networks (GNN) to capture spatial dependencies, and use recurrent neural networks (RNN) or transformers to capture temporal dependencies. However, these works depend on historical flow and thus cannot be used for new regions without such data.
Another series of works aims to predict OD flow among regions. Deep Gravity (Wang et al., 2018) leverages neural networks to predict OD flow between each pair of regions based on their features, distance, and outflows. GMEL (Gall et al., 2018) first learns an embedding for each region through a graph attention network (GAT), and then uses a regression model to estimate the OD flow based on region embeddings. However, these works can only generate static urban flow, which is not enough for modeling urban flow with dynamic changes over time.
### Diffusion Model for Time Series Modeling
Diffusion models have been widely used in time series forecasting, imputation, and generation owing to their ability to model high dimensional data distributions. For example, TimeGrad (Zhou et al., 2017) is used for time series forecasting in an autoregressive way. It combines diffusion model with RNN and generates the value of the next time step conditioned on the hidden state of RNN. CSDI (Zhou et al., 2017) uses diffusion models for probabilistic time series imputation by generating missing values conditioned on observed values. Diffusion models have also been used in time series generation applications like electronic health records (EHR) synthesis (Han et al., 2017; Wang et al., 2019; Wang et al., 2019).
Most of these works use the denoising network architecture proposed in Diffwave (Han et al., 2017), which leverages bidirectional dilated convolution to capture the correlation between different time steps. However, these works fall short of modeling spatial dependencies among different regions in our urban flow generation task. Moreover, a recent work named DiffSTG (Wang et al., 2019) designs a novel UGnet for spatio-temporal graph forecasting, which can capture spatio-temporal dependencies of different nodes, while it depends on historical data to learn such dependencies and cannot manage to generate urban flow without historical flow data.
## 3. Preliminaries
### Problem Statement
In this section, we formally define our problem based on some concepts and give a brief introduction to the diffusion model. The notations used in the following sections are summarized in Table 1.
Definition 3.1 (**Urban Region**).: Urban regions are defined as non-overlapping areas in a city partitioned by main road networks, or defined as administrative areas such as census tracts.
Definition 3.2 (**Urban Flow**).: In this study, we focus on two kinds of urban flow, namely the inflow and outflow, which are defined as the number of people entering or leaving the region in a given time interval. They are usually calculated using user trajectory data or taxi trip data. In addition, the inflow of a region can also be calculated as the total number of visits to POIs in the region based on POI check-in data. The urban flow at time \(t\) can be represented as \(F_{t}\in\mathbb{R}^{N_{I}\times d_{f}}\), where \(N_{I}\) is the number of target regions and \(d_{f}\) is the dimension of urban flow, e.g., \(d_{f}=2\) for inflow and outflow.
Based on these concepts, we define the research problem as follows.
Definition 3.3 (**Urban Flow Generation**).: Given a set of regions \(\mathcal{S}_{I}=\{l_{1},l_{2},\ldots,l_{N_{I}}\}\) with attributes like population and POI information, generate urban flow for these regions in different time intervals \(F_{1},\ldots,F_{T}\) without historical flow data. The distribution of generated urban flows should be similar to real flows.
The urban flow of each region is affected by many factors like region features, urban environment, and interactions among regions. To better utilize such factors for urban flow generation, we employ a denoising diffusion probabilistic model (DDPM) as the backbone of our model. Moreover, we construct an urban knowledge graph (UKG) to provide a comprehensive description of the urban environment as well as complex relationships among regions, and generate urban flow based on the UKG. We first give a brief introduction to DDPM and the construction of the UKG.
### Denoising Diffusion Probabilistic Model
Denoising diffusion probabilistic models (DDPM) (Deng et al., 2017) are deep generative models that learn the data distribution of a variable \(x\) by gradually denoising Gaussian noise. A DDPM consists of a forward noising process and a reverse denoising process.
In the forward process, Gaussian noise is gradually added to the data \(x_{0}\sim q(x_{0})\) to produce a Markov chain:
\[q(x_{1:N}|x_{0})=\prod_{n=1}^{N}q(x_{n}|x_{n-1}), \tag{1}\]
where \(q(x_{n}|x_{n-1})=\mathcal{N}(x_{n};\sqrt{1-\beta_{n}}x_{n-1},\beta_{n}I)\) and \(\beta_{n}\in(0,1)\) represents the noise level. With the notation \(\alpha_{n}=1-\beta_{n}\) and
| Notation | Description |
| --- | --- |
| \(N_{I}\) | The number of regions |
| \(T\) | The length of generated flow (number of time intervals) |
| \(d_{f}\), \(d_{feat}\) | Dimensions of urban flow and region features |
| \(d_{KG}\), \(d_{h}\) | Dimensions of KG embedding and hidden representation |
| \(\mathbf{\epsilon}\) | Gaussian noise |
| \(\epsilon_{\theta}\) | Denoising network with parameters \(\theta\) |
| \(\hat{\mathbf{\epsilon}}\) | Noise predicted by the denoising network |
| \(N\) | The number of diffusion steps |
| \(\mathbf{x}_{0}\), \(\mathbf{x}_{n}\) | Sample from real data, and the noised data at the \(n\)-th diffusion step |
| \(\beta_{n}\) | Noise level at diffusion step \(n\) |
| \(\alpha_{n}\), \(\bar{\alpha}_{n}\) | \(\alpha_{n}=1-\beta_{n}\), \(\bar{\alpha}_{n}=\alpha_{1}\alpha_{2}\cdots\alpha_{n}\) |
| \(f_{\phi}\) | Volume estimator with parameters \(\phi\) |
| \(\mathbf{c}\) | Region features |
| \(\hat{\mathbf{s}}\), \(\mathbf{s}\) | Predicted and real flow volume |
| \(\mathbf{E}_{KG}\) | KG embeddings of regions |
| \(\mathbf{h}_{l}\) | Input of region \(l\) to the spatial block |
| \(\mathbf{H}_{st}\), \(\mathbf{H}_{t}\) | Outputs of the spatio-temporal block and the temporal block |
| \(M_{1}\) | Number of epochs for pretraining the volume estimator |
| \(M_{2}\) | The volume estimator is updated once every \(M_{2}\) epochs |

Table 1. Notations.
\(\bar{\alpha}_{n}=\alpha_{1}\alpha_{2}\cdots\alpha_{n}\), sampling \(x_{n}\) at an arbitrary step \(n\) can be written in a closed form:

\[q(x_{n}|x_{0})=\mathcal{N}(x_{n};\sqrt{\bar{\alpha}_{n}}x_{0},(1-\bar{\alpha}_{n})I), \tag{2}\]

which can also be reparameterized as

\[x_{n}=\sqrt{\bar{\alpha}_{n}}x_{0}+\sqrt{1-\bar{\alpha}_{n}}\epsilon \tag{3}\]

with \(\epsilon\) sampled from a standard Gaussian distribution, \(\epsilon\sim\mathcal{N}(\mathbf{0},I)\).
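As a concrete illustration of Equations 2-3, the following PyTorch sketch draws \(x_{n}\sim q(x_{n}|x_{0})\) in closed form; the linear \(\beta\) schedule is an assumption made for the example, not a setting reported here.

```python
import torch

def q_sample(x0, n, alpha_bar):
    """Closed-form forward sampling x_n ~ q(x_n | x_0) (Eqs. 2-3).

    x0:        clean data, shape (batch, ...)
    n:         diffusion-step indices, shape (batch,)
    alpha_bar: cumulative products bar_alpha_n, shape (N,)
    """
    a = alpha_bar[n].view(-1, *([1] * (x0.dim() - 1)))  # broadcast over data dims
    eps = torch.randn_like(x0)                          # epsilon ~ N(0, I)
    return torch.sqrt(a) * x0 + torch.sqrt(1.0 - a) * eps, eps

# Example noise schedule: linear betas (an illustrative choice, not the paper's).
N = 1000
betas = torch.linspace(1e-4, 0.02, N)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)
```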
The reverse process recurrently denoises pure Gaussian noise \(\mathbf{x}_{N}\sim\mathcal{N}(\mathbf{0},I)\) to recover the original data \(\mathbf{x}_{0}\):

\[p_{\theta}(\mathbf{x}_{0:N})= p(\mathbf{x}_{N})\prod_{n=1}^{N}p_{\theta}(\mathbf{x}_{n-1}|\mathbf{x}_{n}),\] \[p_{\theta}(\mathbf{x}_{n-1}|\mathbf{x}_{n})= \mathcal{N}(\mathbf{x}_{n-1};\mu_{\theta}(\mathbf{x}_{n},n),\sigma_{\theta}(\mathbf{x}_{n},n)I). \tag{4}\]
To obtain \(\mathbf{x}_{n-1}\) from \(\mathbf{x}_{n}\), we train a denoising network \(\epsilon_{\theta}\) to predict the noise \(\epsilon\) added to \(\mathbf{x}_{0}\) from \(\mathbf{x}_{n}\). Then, \(\mathbf{x}_{0}\) can be recovered via Equation 3 as \(\mathbf{x}_{0}=\frac{1}{\sqrt{\bar{\alpha}_{n}}}(\mathbf{x}_{n}-\sqrt{1-\bar{\alpha}_{n}}\epsilon_{\theta}(\mathbf{x}_{n},n))\). On the other hand, we have the forward process posteriors:
\[q(\mathbf{x}_{n-1}|\mathbf{x}_{n},\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{n-1};\tilde{\mathbf{\mu} }(\mathbf{x}_{n},\mathbf{x}_{0}),\tilde{\beta}_{n}I), \tag{5}\]
where
\[\tilde{\mathbf{\mu}}(\mathbf{x}_{n},\mathbf{x}_{0})= \frac{\sqrt{\bar{\alpha}_{n-1}}\beta_{n}}{1-\bar{\alpha}_{n}}\mathbf{x}_{0}+ \frac{\sqrt{\alpha_{n}}(1-\bar{\alpha}_{n-1})}{1-\bar{\alpha}_{n}}\mathbf{x}_{n},\] \[\tilde{\beta}_{n}= \frac{1-\bar{\alpha}_{n-1}}{1-\bar{\alpha}_{n}}\beta_{n}. \tag{6}\]
Consequently, we can obtain the parameterization of \(p_{\theta}(\mathbf{x}_{n-1}|\mathbf{x}_{n})\) in Equation 4:
\[\mu_{\theta}(\mathbf{x}_{n},n)= \frac{1}{\sqrt{\alpha_{n}}}(\mathbf{x}_{n}-\frac{\beta_{n}}{\sqrt{1-\bar{\alpha}_{n}}}\epsilon_{\theta}(\mathbf{x}_{n},n)),\] \[\sigma_{\theta}(\mathbf{x}_{n},n)= \frac{1-\bar{\alpha}_{n-1}}{1-\bar{\alpha}_{n}}\beta_{n}. \tag{7}\]
The denoising network \(\epsilon_{\theta}\) can be trained by optimizing the following L1 loss function:
\[\mathcal{L}(\theta)=\mathbb{E}_{\mathbf{x}_{0}\sim q(\mathbf{x}_{0}),\,\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},I),\,n}\,||\mathbf{\epsilon}-\epsilon_{\theta}(\mathbf{x}_{n},n)||. \tag{8}\]
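Putting Equations 7-8 together, a minimal sketch of the noise-prediction objective and one reverse denoising step could look as follows, reusing `q_sample`, `betas`, and `alpha_bar` from the previous sketch; `eps_model` stands for any network with signature `eps_model(x_n, n)` and is an assumption of the example.

```python
def ddpm_loss(eps_model, x0, alpha_bar):
    """Training objective of Eq. 8: L1 distance between true and predicted noise."""
    n = torch.randint(0, len(alpha_bar), (x0.shape[0],), device=x0.device)
    x_n, eps = q_sample(x0, n, alpha_bar)
    return (eps - eps_model(x_n, n)).abs().mean()

@torch.no_grad()
def p_sample_step(eps_model, x_n, n, betas, alpha_bar):
    """One reverse step x_n -> x_{n-1} with the parameterization of Eq. 7 (n: int)."""
    t = torch.full((x_n.shape[0],), n, device=x_n.device, dtype=torch.long)
    mean = (x_n - betas[n] / torch.sqrt(1.0 - alpha_bar[n]) * eps_model(x_n, t)) \
           / torch.sqrt(1.0 - betas[n])
    if n == 0:
        return mean                                    # no noise at the final step
    var = (1.0 - alpha_bar[n - 1]) / (1.0 - alpha_bar[n]) * betas[n]
    return mean + torch.sqrt(var) * torch.randn_like(x_n)
```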
### Urban Knowledge Graph Construction
A KG is a graph that consists of an entity set \(\mathcal{E}\), a relation set \(\mathcal{R}\) and a fact set \(\mathcal{F}\), where each fact is represented as a triplet \((h,r,t)\in\mathcal{F}\), denoting a directional edge from head entity \(h\) to tail entity \(t\) with relation type \(r\). KGs have been widely used in urban computing because of their ability to model the urban environment and capture complex relationships between entities (Han et al., 2017; Zhang et al., 2018; Zhang et al., 2018).
Inspired by previous works (Han et al., 2017; Zhang et al., 2018; Zhang et al., 2018), we construct an urban KG (UKG) to model the environmental information in the city and interactions among regions. Specifically, we model urban regions as entities in the UKG, and use relations _BorderBy_ and _NearBy_ to describe the spatial adjacency relationship between regions, because urban flows in nearby regions easily affect each other. Furthermore, the flow pattern of a region is also correlated with its function (Han et al., 2017; Zhang et al., 2018), which is mainly reflected by the points of interest (POIs) in the region. As a result, we add POIs and POI categories as entities into the UKG, and use relations _LocateAt_ and _CateOf_ to describe the location and category of POIs. In addition, we consider the geographical influence and competitive relationships between POIs through relations _CoCheckin_ and _Competitive_. We also calculate the functional similarity for each pair of regions, i.e., the cosine similarity of their POI category distributions, and use relation _SimilarFunc_ to link regions with similar functions. To further enrich the semantics of the urban environment, we model business areas (BA) as entities and their relationships with regions and POIs by _ProvideService_ and _BelongTo_. The details of the relations in the UKG are shown in Table 2.
To better make use of the environmental knowledge in the UKG, we leverage a KG embedding model to learn an embedding vector for each region. Specifically, we choose a state-of-the-art model, TuckER (Chen et al., 2017), which uses Tucker decomposition (Tucker, 2018) as the scoring function to measure the plausibility of triplets in the UKG:

\[\mathcal{W}\times_{1}\mathbf{e}_{h}\times_{2}\mathbf{e}_{r}\times_{3}\mathbf{e}_{t}, \tag{9}\]

where \(\mathcal{W}\in\mathbb{R}^{d_{KG}\times d_{KG}\times d_{KG}}\) is a learnable core tensor, \(\times_{n}\) is the tensor product along the \(n\)-th dimension, and \(\mathbf{e}_{h},\mathbf{e}_{r},\mathbf{e}_{t}\in\mathbb{R}^{d_{KG}}\) are the embeddings of head entity \(h\), relation \(r\) and tail entity \(t\), respectively. The goal of the KG embedding model is to assign high scores to triplets that exist in the UKG, so that the knowledge in the UKG is preserved in the embeddings.
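For concreteness, the TuckER score of Equation 9 can be computed for a batch of triplets with three tensor contractions, as in the sketch below (shapes only; this is not the cited implementation's training code).

```python
import torch

def tucker_score(W, e_h, e_r, e_t):
    """TuckER scoring function (Eq. 9) for a batch of triplets.

    W:             core tensor, shape (d, d, d)
    e_h, e_r, e_t: head/relation/tail embeddings, each of shape (batch, d)
    """
    x = torch.einsum('abc,ia->ibc', W, e_h)    # mode-1 product with e_h
    x = torch.einsum('ibc,ib->ic', x, e_r)     # mode-2 product with e_r
    return torch.einsum('ic,ic->i', x, e_t)    # mode-3 product -> scalar scores
```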
The benefits of the UKG are twofold. First, it comprehensively integrates various types of entities and relationships in the city, so the learned embedding of a region describes the urban environment in which it is located. Second, the various types of relationships between regions in the UKG help capture the spatial dependencies of urban flow between regions, which will be elaborated in the following section.
Definition 3.4 (Spatio-temporal Flow Generation on KG).: The UKG organizes regions in a graph structure, and the urban flow \(\mathbf{F}_{t}\) of different regions is not independent but connected by various types of relations in the UKG. Consequently, the _urban flow generation_ problem can be reformulated as generating a dynamic graph signal for a subset of nodes corresponding to the target regions in the UKG, denoted as \(\mathbf{x}=[\mathbf{F}_{1},\ldots,\mathbf{F}_{T}]\in\mathbb{R}^{N_{I}\times T\times d_{f}}\), given the node features and graph structure \(\mathcal{G}=(\mathcal{E},\mathcal{R},\mathcal{F})\).
## 4. Methods
Based on the aforementioned DDPM and UKG, we propose a knowledge-enhanced spatio-temporal diffusion model for urban flow generation. The framework of our model is shown in Figure 3(a). Specifically, we design an auxiliary module to estimate the flow volume of each region, and use the predicted volume to guide the diffusion process for each region separately, which enables a learnable and more accurate flow generation process for regions with different volumes. Moreover, we design a novel knowledge-enhanced denoising network that leverages the UKG to capture spatio-temporal
| Relation | Head & Tail Entity Types | Semantic Information |
| --- | --- | --- |
| _BorderBy_ | (Region, Region) | Regions share part of the boundary |
| _NearBy_ | (Region, Region) | Regions lie within a certain distance |
| _CoCheckin_ | (POI, POI) | POIs visited by a user consecutively |
| _Competitive_ | (POI, POI) | Nearby POIs with the same brand |
| _SimilarFunc_ | (Region, Region) | Regions with similar POI distribution |
| _LocateAt_ | (POI, Region) | POI locates at the region |
| _CateOf_ | (POI, Category) | Category of POI |
| _ProvideService_ | (BA, Region) | BA covers the region |
| _BelongTo_ | (POI, BA) | POI locates at the BA |

Table 2. The details of relations in UKG. BA represents business area.
dependencies of urban flow and fuse them adaptively based on urban environment.
### Region Customized Diffusion Process
The urban flow of a region is closely correlated with its features, such as socioeconomic indicators and demographics (Han et al., 2016; Wang et al., 2017). Therefore, we leverage region features to guide the flow generation process through a condition module, as shown in Figure 3(b). Specifically, we design a volume estimator to estimate the flow volume for each region, which is trained together with the diffusion model. On one hand, the estimated flow volume is combined with region features as the condition of the diffusion model. On the other hand, we design a novel learnable and customized diffusion process for each region based on the estimated flow volume.
The volume of urban flow, which we define as the average flow per hour, may vary significantly across different regions. For example, regions in downtown areas usually have a much larger flow volume than suburban regions because of a more developed transportation system. Such spatial variance may bias the model towards popular regions with larger volumes, and thus should be addressed directly (Han et al., 2016). However, vanilla diffusion models assume the same endpoint of the diffusion process. In other words, the generation process for all regions starts from the same Gaussian noise \(\mathcal{N}(0,I)\), which makes it difficult to distinguish regions with different flow volumes. Therefore, we employ the technique proposed in (Han et al., 2016) and make the diffusion process learnable and customized for each region.
Specifically, we design a volume estimator \(f_{\phi}\) with parameters \(\phi\) to predict the flow volume based on region features (denoted as \(\mathbf{c}\)), and use the predicted volume \(f_{\phi}(\mathbf{c})\) to guide the diffusion process of each region. Here \(\mathbf{c}\in\mathbb{R}^{N_{I}\times d_{feat}}\) represents the features of all target regions, and the output of the volume estimator is expanded to the same dimension as \(\mathbf{x}\), i.e., \(f_{\phi}(\mathbf{c})\in\mathbb{R}^{N_{I}\times T\times d_{f}}\). Then we change the endpoint of the diffusion process to Gaussian noise with different mean values:
\[p(\mathbf{x}_{N}|\mathbf{c})=\mathcal{N}(f_{\phi}(\mathbf{c}),I). \tag{10}\]
Accordingly, with the same notations as in Section 3.2, the forward diffusion process is modified as:
\[q(\mathbf{x}_{n}|\mathbf{x}_{0},f_{\phi}(\mathbf{c}))=\mathcal{N}(\mathbf{x}_{n};\sqrt{\bar{\alpha}_{n}}\mathbf{x}_{0}+(1-\sqrt{\bar{\alpha}_{n}})f_{\phi}(\mathbf{c}),(1-\bar{\alpha}_{n})I). \tag{11}\]
Besides, in the backward denoising process, the posterior in Equation 5 should be changed to:
\[q(\mathbf{x}_{n-1}|\mathbf{x}_{n},\mathbf{x}_{0},\mathbf{c})=\mathcal{N}(\mathbf{x}_{n-1};\tilde{ \mathbf{\mu}}(\mathbf{x}_{n},\mathbf{x}_{0},f_{\phi}(\mathbf{c})),\tilde{\beta}_{n}I), \tag{12}\]
where
\[\begin{split}\tilde{\mathbf{\mu}}(\mathbf{x}_{n},\mathbf{x}_{0},f_{\phi}(\mathbf{c}))=&\frac{\sqrt{\bar{\alpha}_{n-1}}\beta_{n}}{1-\bar{\alpha}_{n}}\mathbf{x}_{0}+\frac{\sqrt{\alpha_{n}}(1-\bar{\alpha}_{n-1})}{1-\bar{\alpha}_{n}}\mathbf{x}_{n}\\ &+(1+\frac{(\sqrt{\bar{\alpha}_{n}}-1)(\sqrt{\alpha_{n}}+\sqrt{\bar{\alpha}_{n-1}})}{1-\bar{\alpha}_{n}})f_{\phi}(\mathbf{c}).\end{split} \tag{13}\]
For the architecture of \(f_{\phi}\), we simply adopt a two-layer feed-forward neural network with Leaky ReLU as the activation function (Han et al., 2016). In this way, the starting point of the flow generation process is approximately the mean value of the flow for each region, which enables more accurate flow generation for regions with different flow volumes.
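A minimal sketch of the volume estimator and of the shifted forward sampling of Equation 11 is given below; the hidden width and the way the per-region volume is broadcast over time steps are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VolumeEstimator(nn.Module):
    """Two-layer feed-forward estimator f_phi with Leaky ReLU (hidden size assumed)."""
    def __init__(self, d_feat, d_hidden=64, d_f=1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_feat, d_hidden),
                                 nn.LeakyReLU(),
                                 nn.Linear(d_hidden, d_f))

    def forward(self, c):            # c: (N_I, d_feat) region features
        return self.net(c)           # (N_I, d_f); expanded over T before use

def q_sample_shifted(x0, n, alpha_bar, mu):
    """Forward process of Eq. 11: the endpoint Gaussian is centered on mu = f_phi(c)."""
    a = alpha_bar[n].view(-1, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    x_n = torch.sqrt(a) * x0 + (1.0 - torch.sqrt(a)) * mu + torch.sqrt(1.0 - a) * eps
    return x_n, eps
```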
We consider volume estimation as an auxiliary task and use MSE loss to optimize the volume estimator:
\[\mathcal{L}_{2}=||\hat{\mathbf{s}}-\mathbf{s}||^{2}, \tag{14}\]
where \(\hat{\mathbf{s}}=f_{\phi}(\mathbf{c})\) is the predicted flow volume and \(\mathbf{s}\in\mathbb{R}^{N_{I}\times T\times d_{f}}\) is the real flow volume expanded to the same shape. In the training process, the volume estimator \(f_{\phi}\) is trained together with the denoising network \(\epsilon_{\theta}\) in an alternating manner. As shown in Algorithm 1, we first pretrain \(f_{\phi}\) for \(M_{1}\) epochs, and then alternately update the whole diffusion model and \(f_{\phi}\). Specifically, we update the diffusion model for \(M_{2}\) epochs by taking gradient descent steps on \(\mathcal{L}_{1}\), and then update \(f_{\phi}\) once with loss function \(\mathcal{L}_{2}\).
```
1:Step 1: pretraining \(f_{\phi}\)
2:for\(epoch\in\{1,2,\ldots,M_{1}\}\)do
3: Calculate predicted flow volume \(\hat{\mathbf{s}}=f_{\phi}(\mathbf{c})\)
4: Update \(f_{\phi}\) using \(\mathcal{L}_{2}=||\hat{\mathbf{s}}-\mathbf{s}||^{2}\)
5:Step 2: joint training \(\epsilon_{\theta}\) and \(f_{\phi}\)
6:repeat
7:\(\triangleright\)Train diffusion model
8:for\(epoch\in\{1,2,\ldots,M_{2}\}\)do
9: Sample flow \(\mathbf{x}_{0}\sim q(\mathbf{x}_{0})\), noise \(\mathbf{\epsilon}\sim\mathcal{N}(0,I)\)
10: Sample diffusion step \(n\sim Uniform(\{1,2,\ldots,N\})\)
11: Calculate \(\mathbf{x}_{n}\) with Equation 11
12: Calculate predicted noise \(\hat{\mathbf{\epsilon}}\) with the denoising network
13: Update \(\epsilon_{\theta}\) and \(f_{\phi}\) using \(\mathcal{L}_{1}=||\hat{\mathbf{\epsilon}}-\mathbf{\epsilon}||\)
14:\(\triangleright\)Train volume estimator
15: Calculate predicted flow volume \(\hat{\mathbf{s}}=f_{\phi}(\mathbf{c})\)
16: Update \(f_{\phi}\) using \(\mathcal{L}_{2}=||\hat{\mathbf{s}}-\mathbf{s}||^{2}\)
17:until Convergence
```
**Algorithm 1** Training of our model
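Read in Python, the alternating schedule of Algorithm 1 might look like the sketch below, which reuses `q_sample_shifted` from the sketch above; the optimizers, learning rates, loop counts, and the `(c, flow, s)` batch layout are all assumptions made for illustration, not the paper's settings.

```python
def train_kstdiff(eps_model, vol_est, loader, alpha_bar, M1=50, M2=5, rounds=100):
    opt_vol = torch.optim.Adam(vol_est.parameters(), lr=1e-3)
    opt_joint = torch.optim.Adam(list(eps_model.parameters()) +
                                 list(vol_est.parameters()), lr=1e-3)

    def vol_step():                          # one pass on L2 = ||f_phi(c) - s||^2
        for c, flow, s in loader:
            loss2 = ((vol_est(c) - s) ** 2).mean()
            opt_vol.zero_grad(); loss2.backward(); opt_vol.step()

    for _ in range(M1):                      # Step 1: pretrain f_phi
        vol_step()
    for _ in range(rounds):                  # Step 2: alternate (until convergence)
        for _ in range(M2):                  # update eps_theta and f_phi on L1
            for c, flow, s in loader:
                mu = vol_est(c).unsqueeze(1)             # broadcast over T
                n = torch.randint(0, len(alpha_bar), (flow.shape[0],))
                x_n, eps = q_sample_shifted(flow, n, alpha_bar, mu)
                loss1 = (eps - eps_model(x_n, n, c)).abs().mean()
                opt_joint.zero_grad(); loss1.backward(); opt_joint.step()
        vol_step()                           # then update f_phi once on L2
```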
### Knowledge-enhanced Denoising Network
The core of DDPM lies in the denoising network \(\epsilon_{\theta}\), which aims to predict the noise \(\mathbf{\epsilon}\) from the noised data \(\mathbf{x}_{n}\). The architecture of the denoising network is usually designed according to the specific task. Previous works on sequence generation typically use WaveNet-based structures (Han et al., 2016; Wang et al., 2017), which show great performance in modeling time series but cannot capture the spatial dependencies in urban flow data. To bridge the gap, we propose a novel knowledge-enhanced network empowered by the UKG to capture spatio-temporal dependencies of urban flow for better denoising ability.
Figure 3. The (a) framework of our model and (b) design of condition module.
#### 4.2.1. Denoising Network Architecture
The architecture of our denoising network is shown in Figure 4. Specifically, we adopt the DiffWave architecture as the backbone, which consists of multiple residual layers. It takes the noised data \(\mathbf{x}_{n}\in\mathbb{R}^{N_{I}\times T\times d_{f}}\), the diffusion step \(n\), the KG embeddings of regions \(\mathbf{E}_{KG}\) and the conditions as input, and outputs the predicted noise:
\[\hat{\mathbf{\epsilon}}=\epsilon_{\theta}(\mathbf{x}_{n},n,\mathbf{E}_{KG},\mathit{Cond}). \tag{15}\]
The output \(\hat{\mathbf{\epsilon}}\in\mathbb{R}^{N_{I}\times T\times d_{f}}\) is of the same shape as \(\mathbf{x}_{0}\) and \(\mathbf{\epsilon}\).
Specifically, we first compute a 128-dimensional embedding of the diffusion step \(n\) in the diffusion step embedding block:
\[\begin{split} n_{embedding}&=[sin(10^{0\times 4 /63}n),\dots,sin(10^{63\times 4/63}n),\\ cos(10^{0\times 4/63}n),\dots,cos(10^{63\times 4/63}n)],\end{split} \tag{16}\]
followed by two fully connected layers before being fed into the residual layers. Meanwhile, the input noised data \(\mathbf{x}_{n}\) is mapped by a Conv\(1\times 1\) and sent to the first residual layer. In each residual layer, we first combine the input with the projected diffusion step embedding to obtain the flow representation \(\mathbf{h}\in\mathbb{R}^{N_{I}\times T\times d_{h}}\), and then apply a KG-enhanced ST-block (KGST-Block) to capture spatio-temporal dependencies of urban flow with the guidance of KG embeddings, i.e., \(\mathbf{H}_{st}=KGST(\mathbf{h},\mathbf{E}_{KG})\), which will be elaborated in Section 4.2.2. The conditioner of each region \(\mathit{Cond}\) is added to the output of the KGST-Block \(\mathbf{H}_{st}\). After a gated activation unit, part of the output is connected to the next residual layer as input, while the rest is added to the final output through a skip connection. The Conv\(1\times 1\) blocks in the network map data to the proper dimensions. Finally, the output \(\hat{\mathbf{\epsilon}}\) is the sum of the data from the skip connections of each residual layer, after projection by two Conv\(1\times 1\) blocks. We then introduce the details of the KGST-Block.
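The 128-dimensional step embedding of Equation 16, for example, can be computed as in this short sketch.

```python
import torch

def diffusion_step_embedding(n, dim=128):
    """Sinusoidal diffusion-step embedding of Eq. 16 (DiffWave-style).

    n: (batch,) integer diffusion steps -> returns (batch, dim) embeddings.
    """
    half = dim // 2                                          # 64 frequencies
    exponents = torch.arange(half, dtype=torch.float32) * 4.0 / (half - 1)
    freqs = torch.pow(10.0, exponents)                       # 10^0 ... 10^4
    args = n.float().unsqueeze(1) * freqs.unsqueeze(0)       # (batch, half)
    return torch.cat([torch.sin(args), torch.cos(args)], dim=1)
```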
#### 4.2.2. KG-enhanced ST-Block Design
The KGST block consists of a spatial block and a temporal block for modeling spatial and temporal dependencies, after which an attention block is used for spatio-temporal fusion with the guidance of KG embeddings.
For the spatial block, we adopt 1-layer R-GCN (Wang et al., 2017) to capture the spatial dependency of urban flow among regions. R-GCN is a kind of graph convolutional network that aggregates information from the neighborhood of entities through different relations separately. As a result, the flow information of regions that are connected in UKG, i.e., nearby regions and functionally similar regions, can be propagated through the graph. Note that here we only use the subgraph of UKG that consists of training or testing regions and relations among them. Specifically, the output of R-GCN can be obtained as:
\[\mathbf{H}_{s,l}=\sigma\Big(\sum_{r\in\mathcal{R}}\sum_{j\in\mathcal{N}_{l}^{r}}\mathbf{W}_{r}\mathbf{h}_{j}+\mathbf{W}_{0}\mathbf{h}_{l}\Big), \tag{17}\]
where \(\mathbf{h}_{l}\in\mathbb{R}^{T\times d_{h}}\) is the input of region \(l\), \(\mathcal{N}_{l}^{r}\) is the set of regions connected to region \(l\) via relation \(r\), and \(\mathbf{W}_{r},\mathbf{W}_{0}\in\mathbb{R}^{d_{h}\times d_{h}}\) are learnable weight matrices. \(\sigma\) is a nonlinear activation function.
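A dense-adjacency sketch of this relational convolution (Eq. 17) is shown below; real implementations would typically use sparse message passing, and the per-relation adjacency matrices are assumed to be precomputed from the UKG subgraph.

```python
import torch
import torch.nn as nn

class RelationalGraphConv(nn.Module):
    """One R-GCN-style layer (Eq. 17): neighbors aggregated per relation type."""
    def __init__(self, d_h, num_relations):
        super().__init__()
        self.w_rel = nn.ModuleList(
            [nn.Linear(d_h, d_h, bias=False) for _ in range(num_relations)])
        self.w_self = nn.Linear(d_h, d_h, bias=False)

    def forward(self, h, adj):
        # h:   (N_I, T, d_h) flow representations of all regions
        # adj: list of (N_I, N_I) 0/1 adjacency matrices, one per relation type
        out = self.w_self(h)                       # W_0 h_l (self connection)
        for a, w in zip(adj, self.w_rel):
            out = out + torch.einsum('ij,jtd->itd', a, w(h))  # sum_j W_r h_j
        return torch.relu(out)                     # sigma(.)
```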
In the temporal block, we use a Transformer layer (Wang et al., 2017) to model the temporal dependency of urban flow, which consists of a multi-head attention layer, a fully-connected layer, and layer normalization.
For different regions, the impact of spatial and temporal flow dependencies may be different depending on the urban environment. For instance, regions in downtown areas usually have a large flow transition with nearby regions, and thus the spatial dependency should be emphasized. However, the flow patterns in suburban residential regions exhibit strong temporal patterns as people usually leave for work in the morning and go home in the evening, while the flow there is not influenced much by nearby regions.
Therefore, we utilize the KG embeddings of regions, in which environmental information is preserved, to guide the fusion of spatial and temporal dependencies. Specifically, we leverage the attention mechanism with the KG embedding as the query. For each region \(l\), let \(\mathbf{H}_{s,l},\mathbf{H}_{t,l}\in\mathbb{R}^{T\times d_{h}}\) denote the representations after the spatial and temporal blocks, respectively, and \(\mathbf{E}_{KG,l}\in\mathbb{R}^{d_{KG}}\) be its KG embedding. We first project the vectors to get the query vector \(\mathbf{Q}=\mathbf{E}_{KG,l}\mathbf{W}_{Q}\), the key vectors \(\mathbf{K}_{i}=\mathbf{H}_{i,l}\mathbf{W}_{K}\) and the value vectors \(\mathbf{V}_{i}=\mathbf{H}_{i,l}\mathbf{W}_{V}\) (\(i\in\{s,t\}\)). Then we calculate the importance of the spatial and temporal representations as:
\[\alpha_{i}=softmax(\frac{\mathbf{Q}\mathbf{K}_{i}^{T}}{\sqrt{d_{h}}}),i\in\{s,t\}. \tag{18}\]
Figure 4. (a) The architecture of denoising network, (b) the detail of spatial block and (c) the detail of temporal block.
Finally, the output is calculated as
\[\mathbf{H}_{st,l}=(\alpha_{s}\mathbf{V}_{s}+\alpha_{t}\mathbf{V}_{t})\mathbf{W}_{O}, \tag{19}\]
where \(\mathbf{W}_{O}\) is the output projection matrix.
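Equations 18-19 leave the softmax axis implicit; one plausible reading, sketched below for a single region, scores each branch by its mean query-key product and softmaxes over the two branches. This is an illustrative interpretation, not the authors' released code.

```python
import torch
import torch.nn as nn

class KGGuidedFusion(nn.Module):
    """KG-embedding-guided fusion of spatial/temporal branches (Eqs. 18-19)."""
    def __init__(self, d_kg, d_h):
        super().__init__()
        self.w_q = nn.Linear(d_kg, d_h, bias=False)
        self.w_k = nn.Linear(d_h, d_h, bias=False)
        self.w_v = nn.Linear(d_h, d_h, bias=False)
        self.w_o = nn.Linear(d_h, d_h, bias=False)

    def forward(self, h_s, h_t, e_kg):
        # h_s, h_t: (T, d_h) spatial/temporal outputs; e_kg: (d_kg,) KG embedding
        q = self.w_q(e_kg)                                        # query (d_h,)
        score_s = (self.w_k(h_s) @ q).mean() / q.shape[0] ** 0.5  # Q K_s^T / sqrt(d_h)
        score_t = (self.w_k(h_t) @ q).mean() / q.shape[0] ** 0.5
        alpha = torch.softmax(torch.stack([score_s, score_t]), dim=0)
        return self.w_o(alpha[0] * self.w_v(h_s) + alpha[1] * self.w_v(h_t))
```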
## 5. Experiments
### Experiment Settings
#### 5.1.1. Datasets
We conduct experiments on four real-world datasets to examine the effectiveness of our model. We construct a UKG for each dataset according to methods in Section 3.3, and the basic information of datasets and the statistics of corresponding UKGs are reported in Table 3. Due to the lack of data in NYC, D.C., and Baltimore datasets, we do not incorporate business areas and several relations, i.e., _CoCheckin_, _Competitive_, _ProvideService_ and _BelongTo_, in their UKGs.
Each dataset contains the hourly urban flow of all regions over multiple days, and we use each day (24 hours) as a training sample. Since the flow patterns are quite different on weekdays and weekends, we only use the weekday data in our experiments; our method can be applied to weekends in the same way. We present the details of the datasets in Appendix A.
#### 5.1.2. Baselines
Since there are no other works that use the same setting as ours, we compare the performance with the following conditional deep generative models. To ensure fair comparison, we use the same condition as our model to generate flow for each region, i.e., region features and KG embeddings.
* **Conditional GAN**(Wang et al., 2017): It is the conditional version of vanilla GAN, whose generator and discriminator are both MLPs.
* **DGAN**(Dong et al., 2018): It is a GAN-based model that first generates metadata and then generates time series conditioned on the metadata. In our experiments, we omit the metadata generation process and use our condition as the metadata.
* **RCGAN**(Chen et al., 2018): It uses conditional GAN structure with LSTM as generator and discriminator to generate time series.
* **CVAE**(Chen et al., 2018): It is a kind of conditional variational autoencoder (VAE) used for financial time series generation conditioned on market states. We adapt it for urban flow generation conditioned on region features.
* **Diffwave**(Dong et al., 2018): It is a diffusion-based model for conditional waveform generation. The denoising network of it is similar to ours while it does not consider spatial dependency among regions.
In addition, we also compare with several state-of-the-art urban flow prediction models.
* **STGCN**(Wang et al., 2017): It employs graph convolution layers and gated convolution layers to capture spatial and temporal correlations, respectively.
* **CCRNN**(Wang et al., 2017): It proposes a novel graph convolution architecture for travel demand prediction where the adjacency matrices in different layers are self-learned during training, and it adopts GRU to capture temporal dynamics.
* **MSDR**(Kang et al., 2018): It uses a new variant of RNN that considers multiple historical time steps to capture temporal dependencies, and combines it with graph-based neural networks for spatio-temporal forecasting.
* **3DGCN**(Wang et al., 2017): It uses an expanded GCN to capture spatio-temporal correlations of urban flow among regions at different time steps.
The urban flow prediction models above need historical flow as input to predict future flow, which however is not available for test regions. Therefore, we sample the historical flow according to the distribution of the training regions, and then generate the next steps recurrently (Chen et al., 2018; Chen et al., 2018).
Furthermore, to evaluate the effectiveness of our denoising network design, we examine a variant of our model with a different denoising network.
* **KSTDiff(UGnet)**: In this model, we change the denoising network in KSTDiff to the UGnet proposed in (Wang et al., 2017). It uses a Unet-like architecture and adopts a temporal convolution network (TCN) and a graph convolution network (GCN) to capture temporal and spatial dependencies. Apart from the denoising network, the framework is the same as ours, as shown in Figure 3.
The implementation details are shown in Appendix B.
#### 5.1.3. Evaluation Metrics
We adopt the widely used Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Symmetric Mean Absolute Percentage Error (SMAPE) to measure performance. We generate the same number of samples as the real data for test regions, and use the mean of all generated samples to calculate MAE, RMSE, and SMAPE. In addition, we use the maximum mean discrepancy (MMD) to measure the distance between the generated and real data distributions; the details are given in Appendix C. Specifically, we follow previous work (Chen et al., 2018) and treat the flows of each region as vectors to calculate MMD, reporting the average MMD over all test regions as the final metric.
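For reference, a common RBF-kernel estimate of MMD\({}^{2}\) between generated and real flow vectors is sketched below; the kernel and bandwidth are assumptions, since the exact definition is deferred to Appendix C.

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Biased RBF-kernel MMD^2 between sample sets x: (n, d) and y: (m, d)."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2            # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()
```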
### Overall Performance
The comparison of our model with baselines on four datasets is shown in Table 4, from which we have the following observations.
| | New York City | Beijing | Washington, D.C. | Baltimore |
| --- | --- | --- | --- | --- |
| Flow type | inflow & outflow | inflow | inflow | inflow |
| Time span | 2016.01.01-2016.06.30 | 2018.01.01-2018.01.31 | 2019.01.01-2019.05.31 | 2019.01.01-2019.05.31 |
| Time interval | 1 hour | 1 hour | 1 hour | 1 hour |
| Flow length | 24 hours | 24 hours | 24 hours | 24 hours |
| #Train regions | 219 | 648 | 178 | 193 |
| #Test regions | 61 | 362 | 59 | 210 |
| #Entities | 26,604 | 36,752 | 11,988 | 16,579 |
| #Relations | 5 | 9 | 5 | 5 |
| #Facts | 67,418 | 149,350 | 33,112 | 47,884 |

Table 3. The basic information of the four real-world datasets (top) and basic statistics of the corresponding UKGs (bottom).
First, our model outperforms all baselines on four datasets by a large margin. For example, our model outperforms existing methods by 14.6% to 35.2% in terms of MAE. Such great improvement demonstrates our model's ability to accurately generate urban flow based on region characteristics.
Second, RCGAN and Diffwave achieve relatively better performance among baselines because they are able to capture temporal dependencies of urban flow. However, they still perform worse than our model mainly because they do not consider spatial dependencies among regions, while our model successfully captures spatio-temporal dependencies through UKG and KGST block. The UGnet architecture is able to model spatio-temporal dependencies by GCN and TCN, while it does not show good performance, which further demonstrates the effectiveness of incorporating urban environmental information by UKG.
Third, the performance of many baselines is sensitive to the data. For example, Diffwave performs almost the second best on NYC, Beijing, and Baltimore datasets, while on D.C. dataset it fails to generate flow accurately. Similarly, the performance of RCGAN is considerable on NYC and D.C. datasets but relatively low on Beijing and Baltimore datasets. The volatile performance may result from that the four real-world datasets have different region features, time spans, and data sources of urban flow. However, our model consistently achieves the best performance on all these datasets, which demonstrates the robustness of our model.
Finally, urban flow prediction models generally perform much worse than our model on all datasets. This is because they rely heavily on real historical flow, while the historical flow data of test regions is not available in our setting. This finding demonstrates that predictive modeling of urban flow, though extensively studied, cannot be adapted to data-sparse regions, which further shows the significance of generative modeling of urban flow.
### Ablation Study
We conduct an ablation study to further evaluate the influence of region-customized diffusion process (RCDP) and KGST block in the denoising network. Specifically, to validate the flexibility of adapting the diffusion process to each region, we remove the volume prediction module in Figure 3 as well as its training process, and only keep region features in the condition module. In other words, the diffusion process is the same for all regions with \(f_{\phi}(\mathbf{c})=0\). As for KGST, we remove the spatial block as well as the attention module in the denoising network, and only preserve the temporal block in KGST block.
We visualize the SMAPE and MMD metrics on four real-world datasets in Figure 5. It can be observed that the performance drops in all cases after omitting RCDP or KGST, which demonstrates the effectiveness of such design. Besides, we find that the performance deteriorates significantly after removing KGST, which suggests that modeling spatial dependencies and urban environment plays an important role in urban flow generation.
### Case Study
#### 5.4.1. Long-term Flow Generation
In real-world scenarios, knowing the long-term urban flow distribution in advance can better support urban planning (Wang et al., 2019). To evaluate the performance of our model in generating longer flow sequences, we extend the generated flow length to one week, two weeks, and four weeks. For example, we generate flow with \(7\times 24\) time steps for the one-week case. We conduct experiments on the NYC and D.C. datasets, and compare the performance of our model with the two best baselines, i.e., RCGAN and Diffwave. As shown in Figure 6, the error generally increases as the flow sequence gets longer, while our model consistently achieves the best performance under various flow lengths, which demonstrates its robustness in modeling urban flow with various lengths. We further present the MMD metric in Appendix D.
#### 5.4.2. Downstream Application
To evaluate the utility of generated urban flow in supporting other urban flow related tasks, we conduct experiments using our generated urban flow for OD generation. The OD generation task aims to generate origin-destination (OD) flow among regions, i.e., the number of people moving from one region to another, based on region features and total outflow of

| Model | NYC MAE | NYC RMSE | NYC SMAPE | NYC MMD | BJ MAE | BJ RMSE | BJ SMAPE | BJ MMD | DC MAE | DC RMSE | DC SMAPE | DC MMD | BAL MAE | BAL RMSE | BAL SMAPE | BAL MMD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| STGCN | 33.04 | 45.63 | 0.94 | 3.87 | 7.29 | 11.05 | 1.24 | 6.32 | 25.56 | 59.19 | 1.16 | 5.33 | 14.34 | 30.78 | 0.99 | 4.81 |
| CCRNN | 37.78 | 46.73 | 0.93 | 6.02 | 7.47 | 11.18 | 1.25 | 6.35 | 31.98 | 57.01 | 1.12 | 6.17 | 13.72 | 28.21 | 0.93 | 3.70 |
| MSDR | 32.98 | 43.31 | 0.81 | 3.36 | 6.87 | 11.27 | 1.22 | 5.06 | 32.45 | 58.30 | 1.12 | 5.38 | 15.28 | 29.32 | 1.04 | 4.16 |
| 3DGCN | 27.82 | 41.38 | 0.80 | 3.94 | 4.37 | 10.58 | 1.08 | 3.51 | 22.44 | 56.68 | 0.91 | 2.14 | 13.89 | 29.92 | – | 9.26 |
| Conditional GAN | 26.75 | 41.73 | 0.71 | 4.88 | 3.93 | 9.98 | 1.04 | 4.46 | 19.51 | 54.43 | 0.86 | 4.31 | 35.90 | 87.03 | 1.02 | 4.80 |
| DGAN | 52.90 | 81.95 | 0.81 | 5.24 | 8.14 | 25.93 | 1.23 | 4.34 | 29.12 | 59.16 | 1.31 | 4.55 | 12.92 | 26.94 | 1.38 | 4.09 |
| CVAE | 36.51 | 58.48 | 0.70 | 4.19 | 2.67 | 6.61 | 0.74 | 4.17 | 28.76 | 64.30 | 1.82 | 4.17 | 28.97 | 254.28 | 0.97 | 4.50 |
| RCGAN | 23.07 | 36.12 | 0.67 | 5.09 | 4.11 | 10.47 | 1.21 | 5.45 | 18.35 | 48.28 | 0.82 | 2.40 | 40.54 | 128.67 | 1.08 | 5.68 |
| Diffwave | 24.53 | 36.12 | 0.68 | 3.20 | 2.30 | 6.37 | 0.78 | 1.85 | 36.82 | 51.07 | 1.19 | 5.70 | 12.88 | 21.26 | 1.05 | 3.71 |
| KSTDiff | **19.46** | **32.07** | **0.52** | **2.58** | **1.96** | **3.15** | **0.65** | **1.43** | **15.67** | **41.83** | 0.65 | **1.79** | **3.83** | **15.56** | **0.68** | **2.00** |
| Improvement | 15.6% | 11.2% | 22.4% | 19.4% | 13.3% | 3.0% | 12.2% | 21.4% | 14.6% | 10.0% | 20.7% | 16.4% | 35.2% | 26.8% | 26.1% | 30.1% |

Table 4. Performance comparison with baselines on four datasets. Best results are presented in bold, and the second best results are underlined.
Figure 5. Performance comparison of models without KGST block or region-customized diffusion process (RCDP).
each region, which is important for a wide range of applications like transportation planning and epidemics spread modeling (Beng et al., 2017; Wang et al., 2018; Wang et al., 2018).
Specifically, we leverage the state-of-the-art OD generation model Deep Gravity (Wang et al., 2018), which uses a feed-forward neural network with multiple layers to calculate the OD flow between each pair of regions based on their features and outflows. Note that the OD generation task does not consider the temporal dynamics of flow, and only aims to generate the total OD flow among regions. Therefore, we use the sum of outflow over all time intervals as the input of the Deep Gravity model. For evaluation, we adopt the same metrics as the original setting, including Common Part of Commuters (CPC), Normalized Root Mean Squared Error (NRMSE), Pearson correlation coefficient, and Jensen-Shannon divergence (JSD). Here the benchmark is the performance of the Deep Gravity model using real flow data, and a performance closer to the benchmark indicates better utility of our generated flow.
We compare the performance of OD generation using real outflow data and data generated by our model or by baseline models including RCGAN and Diffwave. It can be observed from Table 5 that real data achieves the best performance, while the data generated by our model performs closer to real data than that of the other baselines. Furthermore, the performance of our generated data is not much worse than that of real data. For example, the CPC of our model is only 7.50% and 4.81% lower than the performance using real data on the NYC and Beijing datasets, respectively, which demonstrates that our model can be used to support the OD generation task when real flow data is lacking, without much performance loss.
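For reference, CPC measures the overlap between a generated and a real OD matrix (1 means identical flows, 0 means disjoint flows). A minimal sketch of its computation, using two hypothetical \(3\times 3\) OD matrices, is:

```python
import numpy as np

def common_part_of_commuters(od_gen, od_real):
    """CPC between generated and real origin-destination matrices:
    2 * sum(min(gen, real)) / (sum(gen) + sum(real))."""
    common = np.minimum(od_gen, od_real).sum()
    return 2.0 * common / (od_gen.sum() + od_real.sum())

# Hypothetical OD matrices for three regions (diagonal = no self-flow)
od_real = np.array([[0, 10, 5], [8, 0, 3], [4, 6, 0]], dtype=float)
od_gen = np.array([[0, 9, 6], [7, 0, 4], [5, 5, 0]], dtype=float)
print(common_part_of_commuters(od_gen, od_real))  # ~0.92 for these matrices
```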
#### 5.4.3. Predictive Modeling of Urban Flow.
The generative modeling of urban flow essentially aims to learn its conditional distribution. Therefore, it is possible to adapt our model for predictive modeling of urban flow by using historical flow as the condition. To validate this hypothesis, we slightly modify our model for the urban flow prediction task. Specifically, in this task, we aim to predict the urban flow of all regions in the next \(T_{2}\) time intervals in the future (\(F_{t+1},\dots,F_{t+T_{2}}\)) based on urban flow in the past \(T_{1}\) time intervals (\(F_{t-T_{1}+1},\dots,F_{t}\)), where \(T_{1}\) and \(T_{2}\) are both 12 in our setting. Since historical flow is available in the flow prediction task, we first remove the volume estimator and adopt the vanilla diffusion process without guidance from the predicted volume. Following previous work (Wang et al., 2018), we consider the combination of historical and future flow [\(F_{t-T_{1}+1:t}\); \(F_{t+1:t+T_{2}}\)] as the data \(x_{0}\). Moreover, the data \(x_{0}\) with the future flow masked, [\(F_{t-T_{1}+1:t}\); \(\mathbf{0}\)], serves as the condition, replacing the condition module. The training process is similar to Algorithm 1, except that we no longer need the pretraining and updating of \(f_{\phi}\).
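A minimal sketch of this data-construction step is given below; the tensor shapes and the function name are illustrative assumptions rather than details of the actual implementation:

```python
import torch

def build_prediction_sample(flow_hist, flow_future):
    """Form the diffusion target x0 = [F_{t-T1+1:t}; F_{t+1:t+T2}] and the
    condition with the future part masked by zeros, as described above.
    Assumed shapes: flow_hist (T1, n_regions), flow_future (T2, n_regions)."""
    x0 = torch.cat([flow_hist, flow_future], dim=0)
    cond = torch.cat([flow_hist, torch.zeros_like(flow_future)], dim=0)
    return x0, cond

x0, cond = build_prediction_sample(torch.rand(12, 180), torch.rand(12, 180))
```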
We compare the performance of our model with the urban flow prediction baselines mentioned in Section 5.1.2, i.e., 3DGCN, CCRNN, STGCN, and MSDR, on NYC and D.C. datasets. We use four weeks' data as the training set, one week for validation, and one week for test in chronological order. We choose widely used MAE and RMSE to evaluate the performance and present the results in Figure 7. It can be observed that our model outperforms the baselines by 14.2% in terms of RMSE on D.C. dataset, and achieves comparable performance with the best baselines 3DGCN and CCRNN on NYC dataset (5.5% improvement in MAE and -7.6% in RMSE). The findings suggest that our model can be effectively adapted for urban flow prediction, which further demonstrates the power and versatility of generative modeling of urban flow.
## 6. Conclusion
In this paper, we investigate the generative modeling of a typical kind of geospatial data, i.e., urban flow, by studying the urban flow generation problem, which aims to generate urban flow in multiple time steps for regions without historical flow data. We leverage diffusion model to generate urban flow for different regions under different conditions. Specifically, we construct a UKG to model spatial relationships between regions as well as the urban environment, and further propose a knowledge-enhanced spatio-temporal diffusion model to generate urban flow based on UKG. Our model employs a region customized diffusion process to tackle the variance in flow volumes of different regions, and uses a knowledge-enhanced denoising network to capture the spatio-temporal dependencies of urban flow and the impact of environment. Extensive experiments show the superiority of our model design and the utility of generated flow data. Further studies demonstrate its ability for long-term flow generation and urban flow prediction. In the future, we aim to improve the efficiency of our model by accelerating the sampling process of the diffusion model, e.g., adopting techniques like DPM-Solver (Wang et al., 2018). In addition, a promising direction is to adapt our model to the cross-city scenario that generates urban flow for new cities (Wang et al., 2018).
|
2309.11881 | Using Saliency and Cropping to Improve Video Memorability | Video memorability is a measure of how likely a particular video is to be
remembered by a viewer when that viewer has no emotional connection with the
video content. It is an important characteristic as videos that are more
memorable are more likely to be shared, viewed, and discussed. This paper
presents results of a series of experiments where we improved the memorability
of a video by selectively cropping frames based on image saliency. We present
results of a basic fixed cropping as well as the results from dynamic cropping
where both the size of the crop and the position of the crop within the frame,
move as the video is played and saliency is tracked. Our results indicate that
especially for videos of low initial memorability, the memorability score can
be improved. | Vaibhav Mudgal, Qingyang Wang, Lorin Sweeney, Alan F. Smeaton | 2023-09-21T08:30:46Z | http://arxiv.org/abs/2309.11881v1 | # Using Saliency and Cropping to Improve Video Memorability
###### Abstract
Video memorability is a measure of how likely a particular video is to be remembered by a viewer when that viewer has no emotional connection with the video content. It is an important characteristic as videos that are more memorable are more likely to be shared, viewed, and discussed. This paper presents results of a series of experiments where we improved the memorability of a video by selectively cropping frames based on image saliency. We present results of a basic fixed cropping as well as the results from dynamic cropping where both the size of the crop and the position of the crop within the frame, move as the video is played and saliency is tracked. Our results indicate that especially for videos of low initial memorability, the memorability score can be improved.
## 1 Introduction
The saturation of contemporary society with digital content has rendered the ability to capture and sustain human attention increasingly elusive. In this replete landscape, the concept of "video memorability" surfaces as a valuable construct. At its core, video memorability is commonly defined as encapsulating the propensity of a viewer to recognise a video upon subsequent viewing [6, 22, 24], a phenomenon that transcends the boundaries of emotional biases or personal connections. This assertion may appear counter-intuitive, given the prevailing inclination to associate memory with emotional resonance or subjective biases. However, the cognitive processes underlying memorability reveal it as an emotionally impersonal cognitive metric, intrinsically woven into the fabric of the content itself [5], and hence, impervious to the viewer's unique emotional landscape or individual predilections. This conceptualisation of video memorability as an emotionally impersonal entity underscores the notion that certain videos innately possess characteristics that enhance their likelihood of being remembered, irrespective of the viewer's cognitive milieu. This intriguing facet of human cognition necessitates a more profound investigation, as it not only elucidates the complexities of our cognitive machinery, but also bears significant implications for a multitude of domains, including content creation, digital marketing, and education, among others. An exploration of the literature reveals a paucity of research specifically dedicated to video memorability manipulation, despite a
sizeable body of work on its corollary, image memorability. This disparity is, in part, attributable to the inherent complexities associated with videos, which, unlike static images, encompass dynamic spatial-temporal information. This additional layer of complexity engenders a host of challenges that have yet to be elegantly surmounted. Ultimately, a deeper appreciation of video memorability holds the promise of not only advancing our understanding of the human mind but also revolutionising the way we create, consume, and interact with digital content in an increasingly digital world.
This paper presents the findings from a series of experiments designed to augment memorability in short, 3-second videos, using a technique of selective frame cropping guided by visual saliency. Defined as the distinctiveness of certain elements or areas within an image or video frame that capture human visual attention, visual saliency [7] serves as a cornerstone for our exploration into diverse cropping strategies. These range from fundamental fixed cropping to the more nuanced dynamic cropping, which not only adjusts the dimensions of the cropping frame but also its position, aligning with the shifts and movements of saliency throughout the video.
## 2 Background
The characteristics of images that make them more or less memorable than others were first explored from a computational viewpoint more than a decade ago in [11]. That work opened up a new domain for researchers to explore the field of image memorability and why some still images are more memorable than others. The work in [11] posited that the memorability of an image is an intrinsic property and it aimed at finding visual attributes that make an image more, or less, memorable than others. It was found that images with people in them tend to be more memorable than those without and that image memorability further depends upon more precise attributes such as the peoples' ages, hair colour and clothing.
Driven primarily by the MediaEval Media Memorability tasks [22, 20], computational memorability has since evolved to encompass more complex visual stimuli, such as videos. In 2018, a video memorability annotation procedure was established, and the first ever large video memorability dataset--10,000 short soundless videos with both long-term and short-term memorability scores--was created [6]. The integration of deep visual features with semantically rich attributes, such as captions, emotions, and actions, has been identified as a particularly efficacious strategy for predicting video memorability [4, 17, 23, 19]. This confluence of modalities not only amplifies prediction precision but also furnishes a comprehensive perspective on the myriad factors that collectively shape video memorability. Furthermore, dimensionality reduction has been shown to enhance prediction outcomes, and certain semantic categories of objects or locales have been found to be intrinsically more memorable than others [6].
Building upon this foundation, recent research has further expanded the multifaceted approach by incorporating auditory features into the prediction model.
A study by [21] illuminated the contextual role of audio in either augmenting or diminishing memorability prediction in a multimodal video context, thereby accentuating the necessity of a holistic approach that integrates diverse modalities, including visual, auditory, and semantic features, for more robust and accurate memorability predictions.
Moreover, the practical applications of video memorability prediction have been explored in real-world scenarios. A study by [3] conducted a comprehensive analysis of the memorability of video clips from the Crime Scene Investigation (CSI) TV series. Utilising a fine-tuned Vision Transformer architecture, the study predicted memorability scores based on multiple video aspects, dissecting the relationship between the characters in the TV series and the memorability of the scenes in which they appear. This analysis also probed the correlations between the significance of a scene and its visual memorability, revealing intriguing insights into the nexus between narrative importance and visual memorability.
Despite these advancements, the manipulation of video memorability remains a relatively uncharted territory. Existing literature predominantly focuses on predicting memorability scores rather than actively modifying them. This gap in the research underscores the need for novel approaches that not only predict but also enhance video memorability. The current study aims to address this gap by exploring the potential of saliency-based cropping as a technique for manipulating video memorability. By selectively cropping frames based on visual saliency, we hypothesise that it is possible to highlight the most memorable regions of a video, thereby enhancing its overall memorability.
### The MediaEval Benchmarking Task on Predicting Video Memorability
In recent years, much of the work on computational prediction of the memorability of short form videos has been brought together as a task within the annual MediaEval benchmarking activity. Each year the organisers of the task share a collection of videos among participants who are asked to compute and submit runs which predict the memorability of each video in the collection. Once runs are submitted, the organisers compare the participants' submitted runs against human annotated, ground-truth memorability scores, and announce performance evaluation metrics for each participant's runs. The task has run for several years [22, 20, 9] and has led to significant improvements in the performance of automatic memorability prediction for short form videos.
Memorability scores calculated for this work come from an updated and adapted version of a video memorability prediction model presented in 2021 [2]. The approach uses a Bayesian Ridge Regressor (BRR) to model memorability prediction as a regression task. The BRR model was trained on CLIP (Contrastive Language-Image Pre-training) features [18] extracted from the Memento10k training dataset. Given an input image, the model outputs a memorability score ranging from 0 to 1, where a higher score indicates higher memorability. A single memorability score is generated for a video by averaging the memorability scores for a set of selected video frames.
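As an illustration of this pipeline, a minimal sketch of training and applying such a regressor is shown below; the feature shapes, the use of mean frame features for training, and the helper names are our assumptions, not details taken from [2]:

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

def train_brr(clip_feats, mem_scores):
    """clip_feats: (n_videos, n_frames, d) precomputed CLIP image features,
    mem_scores: (n_videos,) annotated memorability scores from Memento10k."""
    X = clip_feats.mean(axis=1)  # one feature vector per training video
    model = BayesianRidge()
    model.fit(X, mem_scores)
    return model

def predict_video_memorability(model, frame_feats):
    """Score each selected frame, then average, as described in the text."""
    return float(model.predict(frame_feats).mean())
```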
### Image Saliency
The concept of image saliency pertains to the extent to which an object or a specific region within an image or video frame differentiates itself from the surrounding elements in a visual scene [25]. Essentially, it quantifies the capacity of a particular segment of a visual scene to capture the attention of a typical human observer. This concept is of paramount importance in the domains of computer vision and image processing, as it facilitates the strategic allocation of computational resources to regions deemed visually significant. Models of saliency endeavour to ascertain the regions of an image that are most likely to captivate human attention, and are thus instrumental in an array of applications, ranging from image compression and object recognition to visual search [1].
The first DeepGaze saliency model was introduced in 2015. This work developed a novel approach to improving fixation prediction models by reusing existing neural networks trained on object recognition [1]. DeepGaze II used different feature spaces but the same readout architecture to predict human fixations in an image. This highlighted the importance of understanding the contributions of low and high-level features to saliency prediction [13]. DeepGaze IIE [14], the current version, showed that by combining multiple backbones in a principled manner, a good confidence calibration on objects in unseen image datasets can be achieved, resulting in a significant leap in benchmark performance with a 15% improvement over DeepGaze II reaching an AUC of 88.3% on the MIT/Tuebingen Saliency Benchmark [14].
Several studies have found small correlations between visual salience and image memorability in specific contexts [11, 10, 16]. This relationship is strongest when images feature a single or limited number of objects presented in an uncluttered, close-up context. However, the introduction of multiple objects and fixation points considerably diminishes the association between the two [10], thereby signifying the distinctiveness of memorability and salience. Building upon this understanding, we leverage saliency to isolate specific sections of video frames. The underlying premise is that by magnifying the most salient part of a video frame--achieved by cropping the surrounding areas--the resultant cropped video, now more concentrated on the salient elements, could potentially enhance its memorability.
### The Memento10k Dataset and Memorability Scores
In the work in this paper we use the Memento10k dataset [17], the most substantial video memorability dataset available for public use. The dataset as a whole comprises over 10,000 short form videos and nearly 1 million human annotations, providing a rich information source for analysing video media memorability. The videos cover diverse everyday events captured in a casual, homemade manner, enhancing the real-world applicability of the findings of those who use it. To ensure a robust evaluation process, we used a subset of 1,500 videos from the dataset for testing and evaluation, the same 1,500 videos as used in the evaluations in the MediaEval benchmark.
Each video in Memento10k has a duration of approximately 3 seconds, or 90 frames per video. We selected every 10th frame of each video for analysis to reduce computational complexity; thus each video has 9 representative frames. We used the model in [2] to compute memorability scores for each of the 1,500 test videos. The results of this are shown in Figure 1, which is a comparison between the ground truth of manually determined memorability scores provided with the dataset and memorability scores generated by the model in [2]. The manually determined memorability scores for the test videos range from 0.38 to 0.99, where a lower score implies lower memorability of a video and vice-versa. Figure 1 shows the performance of the model to be quite accurate, though it is more conservative than the ground truth. For our work we use the generated scores as a baseline for improving memorability by cropping.
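A minimal sketch of this frame-sampling step, assuming OpenCV for decoding (the paper does not name the library used):

```python
import cv2

def sample_frames(video_path, step=10):
    """Return every `step`-th frame of a clip; a 90-frame Memento10k video
    thus yields 9 representative frames."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        idx += 1
    cap.release()
    return frames
```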
To illustrate, Figure 2 shows sample frames from some videos, their corresponding memorability scores for individual frames, and the cumulative memorability score for each video. We can see in Figure 2 that the first two videos, which seem more aesthetically pleasing, surprisingly exhibit lower memorability scores than the third video.
Figure 1: Memorability scores calculated from [2] vs. manually determined memorability scores in the Memento10k dataset provided for 1,500 test videos
## 3 Experimental Methodology
### Predicting Video Memorability
We used a vision transformer model fine-tuned on the task of predicting video memorability to predict memorability [3]. A quantitative investigation of the memorability of a popular crime-drama TV series, CSI, was made through the use of this model. The CLIP [18] pre-trained image encoder was used to train a Bayesian Ridge Regressor (BRR) on the Memento10k dataset. This work found that video shot memorability scores for lead cast members in the TV series heavily correlate with the characters' importance, considering on-screen time as importance.
### Saliency-Based Cropping
When people observe an image without a specified task to complete, we do not focus the same level of intensity on each part of the image. Instead, attentive mechanisms direct our attention to the most important and relevant elements of an image [8]. The importance of different parts of an image can be computed yielding a heatmap of importance, a saliency map. This can assist in understanding the significance of different parts of an image. Cropping a video around the centre of the saliency heatmap of individual frames and moving that crop around the screen from frame to frame, and/or changing the size of the crop as the saliency map changes, may increase the memorability of the overall video. Our rationale for employing frame cropping is eliminating noise from
Figure 2: Sample frames from 3 videos, memorability scores for those frames, and average memorability scores for the videos.
video frames and making the viewer focus on the most salient parts of the frame by removing the remainder.
We used a pre-trained DeepGaze IIE model [14] to compute the saliency map of video frames, the output of which is a saliency map with individual pixel values ranging from 0 to 1. The saliency map is denoted \(S\) and the within-frame location \((x,y)\) of the cropping center for a frame is calculated as the weighted center of all saliency values in the frame, with coordinates given as:
\[x=\frac{\sum_{i,j}i*S[i][j]}{\sum_{i,j}S[i][j]}\qquad y=\frac{\sum_{i,j}j*S[i][j]} {\sum_{i,j}S[i][j]} \tag{1}\]
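A direct NumPy sketch of Eq. (1), treating the first array axis as \(x\) and the second as \(y\):

```python
import numpy as np

def saliency_centre(S):
    """Weighted centre (x, y) of a 2-D saliency map S, per Eq. (1)."""
    i, j = np.indices(S.shape)
    total = S.sum()
    return (i * S).sum() / total, (j * S).sum() / total
```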
We computed the area for frame cropping from the average and the standard deviation of the saliency map of every frame in a video, denoted as \(a_{0},a_{1},\ldots,a_{n}\) where \(n\) is the number of frames. We then fit \(\sqrt{a}\), used for smoothing purposes, against the frame indices to a linear function using a linear least squares method. Videos with smaller fitting error were selected for linear zooming.
Linear zooming is a basic setup for zooming and involves changing the size of the frame cropping throughout the duration of a video. When we add a zoom to the cropping of a video we do so from the first to the last frame and in a linear fashion. We did not investigate variable crop zooming, or even zooming followed by reverse zoom within a video, as this would be too disorientating for viewers because videos in Memento10k have a short duration.
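The area-fitting step described above can be sketched as follows; the paper does not state the residual threshold used to select videos, so only the fit itself is shown:

```python
import numpy as np

def zoom_fit(areas):
    """Fit sqrt(area) of the thresholded saliency region against frame index
    with linear least squares; a small residual marks a video as a candidate
    for linear zooming."""
    idx = np.arange(len(areas))
    side = np.sqrt(np.asarray(areas, dtype=float))  # sqrt used for smoothing
    slope, intercept = np.polyfit(idx, side, 1)
    resid = np.linalg.norm(side - (slope * idx + intercept))
    return slope, intercept, resid
```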
We investigated three approaches to combining saliency-based cropping and zooming. In the first, we fixed the crop to the centre of the frame in order to allow us to measure the impact of zooming on memorability. In the second approach we use tracking to follow the movement of the most salient part of frames while keeping the crop size fixed and the final approach allowed the crop size to change as the saliency moved and increased/decreased throughout the video. We now describe each approach.
**1. Cropping at the video centrepoint:** Here we fixed the cropping of each video frame at the centrepoint and analysed the changes in memorability scores resulting from this. We systematically cropped frames at percentage levels ranging from 10% to 90%, along a linear x- or y-axis, i.e. not by area. It is important to note that not all videos had their salient part in the middle of the frame and thus this should be considered a preliminary approach, laying the groundwork for more sophisticated video cropping.
**2. Tracking saliency with a fixed crop size:** In the second approach we determined the centrepoints of the saliency from the output of the DeepGaze IIE saliency model [14] for each frame. We then tracked their movement around the frame throughout each video while maintaining a fixed size for the bounding box surrounding this region. The saliency map was binarised at a number of different thresholds, and the results at different saliency thresholds are shown in the bottom half of Figure 3. An illustration of this cropping is shown in the top sequence of frames in Figure 4.
An illustrative example of the resulting frames can be observed in the top row of images corresponding to three frames from a video in Figure 4. It can be seen in this example that the size of the bounding box shown in red remains constant throughout the video. The main limitation of fixing the bounding box is that it does not consider any changes in the spread or contraction of the most salient part of the frames as the video progresses and might allow either a non-salient part in the cropped frame or might crop out a salient part.
In practice, the thresholding condition forces the viewer to focus on areas of the frame with higher saliency values corresponding to regions of interest compared to the background noise or the less visually significant regions. It is important to note that the spread of the saliency within the frame might vary over time, either increasing and spreading or reducing and contracting, thus introducing dynamic changes in the salient region size. However, the fixed cropping size used for the bounding box in this approach did not account for temporal variations in saliency spread.
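A simplified sketch of this fixed-size, saliency-tracked cropping; the threshold, crop size, and fallback rule here are illustrative assumptions rather than the exact settings used in the experiments:

```python
import numpy as np

def crop_around_saliency(frame, S, threshold=0.5, crop_h=180, crop_w=240):
    """Binarise the saliency map S at `threshold`, centre a fixed-size crop on
    the thresholded region, and clamp the crop to the frame boundaries."""
    mask = S >= threshold
    if not mask.any():
        mask = S >= 0.9 * S.max()  # fall back to the most salient pixels
    i, j = np.nonzero(mask)
    ci, cj = int(i.mean()), int(j.mean())
    top = int(np.clip(ci - crop_h // 2, 0, frame.shape[0] - crop_h))
    left = int(np.clip(cj - crop_w // 2, 0, frame.shape[1] - crop_w))
    return frame[top:top + crop_h, left:left + crop_w]
```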
**3. Tracking saliency with a variable crop size:** In the third approach, tracking involved monitoring the salient part of each frame, with the crop size dynamically adjusted by linearly increasing, or decreasing, or neither, based on the size of the identified salient region.
An example of the resulting frame for this approach is shown in the second row of images in Figure 4. Here it can be seen that the size of the bounding box
Figure 3: Video frame (top left) and its generated saliency map with the centre point of the saliency spread marked as a point (top right). The image on the left also shows the saliency map at two different thresholds. The graph on the right shows the movement of the centrepoint of saliency for the duration of the video.
increases as the video progresses. To accommodate changes in the spread or decrease in saliency throughout the video, only videos with a consistent increase, or decrease, in thresholded saliency size were cropped linearly thus taking changes in saliency spread into consideration. Finally, in all cases where we used cropping we incorporated padding around the crop in order to provide some context for the most salient parts of the video frames.
## 4 Experimental Results
### Cropping at Video Centrepoints
The range plot on the left in Figure 5 shows the range for 90% of the new memorability prediction scores from our initial centrepoint cropping across all 1,500 test videos. The image on the right illustrates that cropping. As we reduce the sizes of the frames by cropping from 90% down to 10% of their original sizes, the 90% ranges of memorability scores show a noticeable decrease. This is probably caused by cropping ignoring the within-frame locations of the most salient parts of the frame and most likely cropping through, rather than around, objects of interest. This result justifies a further exploration into a more saliency-informed approach to cropping.
### Cropping with Saliency Tracking
The graphs in Figure 6 show results when comparing predicted memorability scores after cropping with scores for the original 1,500 videos for both fixed (top graph) and variable (bottom graph) crop sizes. Videos in the graphs are sorted left to right by increasing values of their initial memorability scores. Blue lines show where cropping improved the score, orange lines show where it was reduced.
Figure 4: Sample frames for fixed-sized (second approach) and for variable-sized cropping (third approach) with saliency tracking.
For each graph it can be seen that for low initial memorability scores blue lines tend to dominate, and this reverses as scores increase with orange lines dominating. For the fixed crop size with tracking, 707 videos have improved memorability prediction scores while 794 have decreased scores. When we introduce variable crop sizes with tracking we find that 718 videos have improved scores and 783 have decreased scores; the variable crop size did not have much impact on the results, which explains why the two graphs are so similar. The distribution of these changes in scores for variable crop sizes with tracking is summarised in Table 1, where we see 83.1% of videos with an initial score below 0.7 have improvements, falling to less than half when the threshold is 0.9.
One explanation for this is that for videos which are more memorable, cropping removes memorable information or at best it removes some of the context for memorable information in the video frame thus reducing the video's overall memorability. For videos which already have lower memorability scores, cropping generates improved memorability because it removes visual noise from the frames, allowing the viewer to concentrate on the most salient, and probably more memorable, aspects of the videos.
This interpretation of the results is supported by the graph in Figure 7, which shows the distribution of the "cumulative mean" of memorability score changes as a result of both fixed and variable cropping of videos. In calculating this, the cumulative mean at step \(i\) is \(\frac{\sum_{k=1}^{i}x_{k}}{i}\), where \(x_{k}\) represents the data point at step \(k\) in the sequence and \(i\) is the current step for which the cumulative mean is
\begin{table}
\begin{tabular}{l l l l l l l} \hline Threshold memorability score & 0.70 & 0.75 & 0.80 & 0.85 & 0.90 & 0.95 \\ Videos with improved scores & 83.1\% & 73.0\% & 58.8\% & 51.7\% & 45.7\% & 47.8\% \\ \hline \end{tabular}
\end{table}
Table 1: Distribution of videos with improved scores from variable sized cropping by initial memorability scores.
Figure 5: Changes in predicted video memorability as a result of varying crop sizes where a crop size of 90% means discarding 10% of the frame
being calculated. We removed outliers from this using the inter-quartile method to make an unbiased plot.
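The curve in Figure 7 can be reproduced with a few lines; the 1.5×IQR fence below is the conventional inter-quartile rule and is our assumption:

```python
import numpy as np

def cumulative_mean_no_outliers(score_changes):
    """Drop outliers with the inter-quartile rule, then return the running
    (cumulative) mean of the remaining memorability-score changes."""
    x = np.asarray(score_changes, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    fence = 1.5 * (q3 - q1)
    x = x[(x >= q1 - fence) & (x <= q3 + fence)]
    return np.cumsum(x) / np.arange(1, len(x) + 1)
```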
There are two notable conclusions from Figure 7 which are that the variable cropping method (orange line) performs slightly better than the fixed cropping method and that the cumulative mean decreases as the original memorability score increases for a video. Hence we can say that cropping improves
Figure 6: Results for changes in predicted memorability from fixed crop size (top) and variable crop size (bottom), both with saliency tracking. Blue lines show where scores after cropping are improvements on the original memorability score and orange lines are the opposite.
memorability scores for most videos with an original memorability score lower than a threshold; beyond that, cropping starts to remove salient parts of the frames.
## 5 Conclusions
The investigation outlined in this paper explores the potential of improving video memorability prediction by applying frame cropping based on visual saliency. Using a state-of-the-art saliency model, DeepGaze IIE [14], we detected and tracked saliency throughout a video, cropped the frames around the most salient parts, and re-generated the video. Our results on the Memento10k test set [17] demonstrate a notable improvement in memorability prediction for the majority of videos with initial memorability scores up to a certain threshold.
While this approach is undoubtedly effective for videos with lower memorability scores--eliminating noise or less salient parts of a video--it does have limitations. Videos already exhibiting high memorability scores are likely to have inherently lower levels of noise and a more prominent narrative. In such cases, exploring more sophisticated visual manipulations, such as temporal segmentation (selectively retaining contextually important segments while removing less crucial/incoherent ones), employing advanced object recognition models to identify and alter key elements within the frame with video in-painting [12], or applying techniques like video super-resolution [15] to improve the quality of the videos, is necessary. Furthermore, employing dynamic scene composition adjustments to better highlight key actions or objects in the video may also prove fruitful. Ultimately, a multifaceted approach that explores various advanced visual manipulation techniques will be necessary to optimise the memorability of videos which already manifest high memorability scores.
Figure 7: Line graph showing score comparison between fixed and variable cropping – the x-axis shows 1,500 videos and the y-axis is the cumulative mean of changes in memorability score. |
2309.17228 | Roundoff error analysis of the double exponential formula-based method
for the matrix sign function | In this paper, we perform a roundoff error analysis of an integration-based
method for computing the matrix sign function recently proposed by Nakaya and
Tanaka. The method expresses the matrix sign function using an integral
representation and computes the integral numerically by the double-exponential
formula. While the method has large-grain parallelism and works well for
well-conditioned matrices, its accuracy deteriorates when the input matrix is
ill-conditioned or highly nonnormal. We investigate the reason for this
phenomenon by a detailed roundoff error analysis. | Tomoya Miyashita, Shuhei Kudo, Yusaku Yamamoto | 2023-09-29T13:27:39Z | http://arxiv.org/abs/2309.17228v2 | # Roundoff error analysis of the double exponential formula-based method for the matrix sign function
###### Abstract
In this paper, we perform a roundoff error analysis of an integration-based method for computing the matrix sign function recently proposed by Nakaya and Tanaka. The method expresses the matrix sign function using an integral representation and computes the integral numerically by the double-exponential formula. While the method has large-grain parallelism and works well for well-conditioned matrices, its accuracy deteriorates when the input matrix is ill-conditioned or highly nonnormal. We investigate the reason for this phenomenon by a detailed roundoff error analysis.
matrix sign function, double-exponential formula, numerical integration, roundoff error analysis, elliptic integral
## 1 Introduction
Let \(A\in\mathbb{R}^{n\times n}\) be a real matrix and \(A=XJX^{-1}\) be its Jordan decomposition. Here, \(X\in\mathbb{R}^{n\times n}\) is a nonsingular matrix and \(J=J_{n_{1}}(\lambda_{1})\oplus\cdots\oplus J_{n_{r}}(\lambda_{r})\), where \(J_{n_{i}}(\lambda_{i})\) (\(i=1,\ldots,r\)) is a Jordan cell of size \(n_{i}\times n_{i}\) with eigenvalue \(\lambda_{i}\) and \(n_{1}+\cdots+n_{r}=n\). We assume that \(A\) has no purely imaginary eigenvalues. Also, let \(I_{n}\) be the identity matrix of order \(n\) and \(\operatorname{sign}(z)\), where \(z\in\mathbb{C}\), be the (scalar) sign function defined by
\[\operatorname{sign}(z)=\begin{cases}1,&\operatorname{Re}(z)>0,\\ -1,&\operatorname{Re}(z)<0.\end{cases} \tag{1}\]
Then, the _matrix sign function_[1] is defined by
\[\operatorname{sign}(A)\equiv X(\operatorname{sign}(\lambda_{1})I_{n_{1}} \oplus\cdots\oplus\operatorname{sign}(\lambda_{r})I_{n_{r}})X^{-1}. \tag{2}\]
Note that the matrix sign function is undefined when \(A\) has purely imaginary eigenvalues. The matrix sign function has applications to the solution of the Sylvester equation and computation of eigendecomposition [2].
Several approaches have been proposed for computing the matrix sign function, including Schur's method, Newton's method and rational function approximation [1]. Recently, Nakaya and Tanaka proposed a new method for computing the matrix sign function using numerical quadrature [3]. The idea is to express \(\operatorname{sign}(A)\) using an integral representation:
\[\operatorname{sign}(A)=\frac{2}{\pi}\int_{0}^{\infty}(t^{2}I+A^{2})^{-1}A\,dt \tag{3}\]
and compute it numerically with the double-exponential (DE) formula [4]. In the following, we refer to this method as the _DE-based method_ for the matrix sign function. Since the computation of \((t^{2}I+A^{2})^{-1}\) can be done for each sample point independently, the method has large-grain parallelism and is well-suited for modern high performance computers. Detailed analysis of the discretization and truncation errors of the DE-based method, assuming exact arithmetic, is given in [3]. The reader is also referred to [5] for an analysis in the case where \(A\) is diagonalizable.
According to our numerical experiments, the method works well when \(A\) is a well-conditioned, close-to-normal matrix and delivers results with accuracy comparable to that of Schur's method. However, we observed that as \(A\) becomes ill-conditioned or deviates from normality (which means that \(X\) becomes ill-conditioned), the numerical error of the method increases rapidly. This problem is not fixed even if we make the step size sufficiently small or make the truncation points sufficiently distant from the origin. Hence, we suppose that this degradation of accuracy originates not from discretization or truncation errors but from rounding errors.
In this paper, we present a roundoff error analysis of the DE-based method for the matrix sign function to find the cause of this accuracy degradation and look for a direction for possible improvement. For simplicity, we focus on the case where \(A\) is diagonalizable and consider two main sources of rounding errors, namely, the error arising in the computation of \((t^{2}I+A^{2})^{-1}A\) at each sample point and the error that occurs when summing up the contributions from the sample points.
The rest of this paper is structured as follows. In Section 2, we review the DE-based method for the matrix sign function. Roundoff error analysis of this method is given in Section 3. Numerical results that illustrate the validity of the derived error bound are presented in Section 4. Finally, Section 5 provides some conclusion.
## 2 DE-based method for the matrix sign function
In the DE-based method, we approximate the integral (3) using the DE formula. By letting \(Y(t)\equiv(t^{2}I+A^{2})^{-1}A\) and \(\phi(x)=\exp(\frac{\pi}{2}\sinh x)\), we have
\[\mathrm{sign}(A) =\frac{2}{\pi}\int_{0}^{\infty}Y(t)dt=\frac{2}{\pi}\int_{-\infty }^{\infty}Y(\phi(x))\phi^{\prime}(x)dx\] \[\simeq\frac{2}{\pi}h\sum_{k=N^{-}}^{N^{+}}Y(\phi(kh))\phi^{\prime }(kh), \tag{4}\]
where \(h\) is the step size and \(-N^{-}>0\) and \(N^{+}>0\) are the number of sample points in the negative and positive part of the \(x\)-axis, respectively. It is shown that by choosing \(-N^{-}=N^{+}=N\) and \(h=\log(8dN)/N\), where \(d\) is some constant depending on \(A\), the discretization and truncation errors of the DE-based method decrease almost exponentially with \(N\)[3]. In this method, the most computationally intensive part is the calculation of \(Y(\phi(kh))\) for \(k=N^{-},\ldots,N^{+}\). Since this can be done for each \(k\) independently, the method has large-grain parallelism.
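A direct NumPy sketch of the quadrature (4) is given below; the truncation level \(N\) and the choice \(d=1\) in the step-size rule are illustrative (for large \(N\) the extreme nodes must be handled with care to avoid overflow in \(\phi\)):

```python
import numpy as np

def sign_de(A, N=40):
    """Matrix sign function via the DE formula (4): sum over quadrature nodes
    t = phi(kh) with weights phi'(kh), solving (t^2 I + A^2) Y = A at each node."""
    n = A.shape[0]
    h = np.log(8.0 * N) / N          # h = log(8dN)/N with d = 1 (assumption)
    I, A2 = np.eye(n), A @ A
    S = np.zeros_like(A, dtype=float)
    for k in range(-N, N + 1):
        t = np.exp(0.5 * np.pi * np.sinh(k * h))       # phi(kh)
        w = 0.5 * np.pi * np.cosh(k * h) * t           # phi'(kh)
        S += np.linalg.solve(t * t * I + A2, A) * w    # Y(phi(kh)) * phi'(kh)
    return (2.0 / np.pi) * h * S
```

Since the linear solve at each node is independent of the others, the loop body can be distributed across processors, which is the large-grain parallelism noted above.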
## 3 Roundoff error analysis
We present a roundoff error analysis of the DE-based method. We denote a quantity computed in floating-point arithmetic by \(fl(\cdot)\) or by a symbol with a hat. \(\mathbf{u}\) denotes the unit roundoff and \(\gamma_{m}\equiv m\mathbf{u}/(1-m\mathbf{u})\). For a matrix \(A=(a_{ij})\), \(|A|\) means a matrix whose elements are \(|a_{ij}|\). For matrices \(A\) and \(B\) of the same dimension, \(A\leq B\) means componentwise inequality. \(\|\cdot\|_{2}\) and \(\|\cdot\|_{F}\) denote the 2-norm and the Frobenius norm, respectively. The condition number of \(A\) is denoted by \(\kappa_{2}(A)\).
### Sources of roundoff errors
The true roundoff error arising in the computation of the DE-based method can be written as follows.
\[E_{\mathrm{true}}=\overline{fl}\Bigg[\frac{2}{\pi}h\cdot fl\left\{\sum_{k=N^{-}}^{N^{+}}\overline{fl}\big(fl(Y(\overline{fl}(\phi(kh))))\cdot\overline{fl}(\phi^{\prime}(kh))\big)\right\}\Bigg]-\frac{2}{\pi}h\sum_{k=N^{-}}^{N^{+}}Y(\phi(kh))\phi^{\prime}(kh). \tag{5}\]
To simplify the analysis, we assume that scalar functions such as \(\phi(x)\) and \(\phi^{\prime}(x)=\frac{\pi}{2}\exp(\frac{\pi}{2}\sinh x)\cosh x\) can be computed without errors. We also assume that multiplications by scalars such as \(\phi^{\prime}(x)\) and \(\frac{2}{\pi}h\) can be done without errors. This amounts to assuming that the computations denoted by \(\overline{fl}(\cdot)\) can be done exactly. Under this assumption, we can write the total roundoff error as
\[E=\frac{2}{\pi}h\cdot fl\left(\sum_{k}fl(Y(\phi(kh)))\phi^{\prime}(kh)\right)-\frac{2}{\pi}h\sum_{k}Y(\phi(kh))\phi^{\prime}(kh)\]
\[=\frac{2}{\pi}h\cdot fl\left(\sum_{k}fl(Y(\phi(kh)))\phi^{\prime}(kh)\right)-\frac{2}{\pi}h\sum_{k}fl(Y(\phi(kh)))\phi^{\prime}(kh)+\frac{2}{\pi}h\sum_{k}fl(Y(\phi(kh)))\phi^{\prime}(kh)-\frac{2}{\pi}h\sum_{k}Y(\phi(kh))\phi^{\prime}(kh)\]
\[=\Bigg\{\frac{2}{\pi}h\cdot fl\left(\sum_{k}fl(Y(\phi(kh)))\phi^{\prime}(kh)\right)-\frac{2}{\pi}h\sum_{k}fl(Y(\phi(kh)))\phi^{\prime}(kh)\Bigg\}+\frac{2}{\pi}h\sum_{k}\left(fl(Y(\phi(kh)))-Y(\phi(kh))\right)\phi^{\prime}(kh). \tag{6}\]
The last expression suggests that the total roundoff error consists of the following two parts.
* Errors in the computation of \(Y(t)\): Let \(\hat{Y}(t)\equiv fl(Y(t))\) and \(\tilde{E}_{1}(t)\equiv\hat{Y}(t)-Y(t)\), where \(t=\phi(kh)\) and \(N^{-}\leq k\leq N^{+}\). The weighted sum of these errors, \[E_{1}\equiv\frac{2}{\pi}h\sum_{k=N^{-}}^{N^{+}}|\tilde{E}_{1}(\phi(kh))|\phi^{ \prime}(kh),\] (7) contributes to the total roundoff error.
* Errors in the summation. Strictly speaking, the summand is \(fl(Y(\phi(kh))\phi^{\prime}(kh))\), but we substitute it with its exact counterpart, \(Y(\phi(kh))\phi^{\prime}(kh)\), for simplicity. This can be justified because their difference is \(O(\mathbf{u})\), as will be shown later (see (16)), and therefore causes only \(O(\mathbf{u}^{2})\) difference in the value of \(E_{2}\). Thus, the error is defined as \[E_{2} \equiv\frac{2}{\pi}h\cdot fl\left(\sum_{k=N^{-}}^{N^{+}}Y(\phi(kh ))\phi^{\prime}(kh)\right)\] \[\quad-\frac{2}{\pi}h\sum_{k=N^{-}}^{N^{+}}Y(\phi(kh))\phi^{\prime }(kh).\] (8)
In the following, we evaluate \(E_{1}\) and \(E_{2}\) separately.
### Evaluation of \(\tilde{E}_{1}(t)=\hat{Y}(t)-Y(t)\)
Let \(B\equiv t^{2}I+A^{2}\) and denote the \(j\)th column of \(A\), \(Y(t)\) and \(\hat{Y}(t)\) by \(\mathbf{a}_{j}\), \(\mathbf{y}_{j}\) and \(\hat{\mathbf{y}}_{j}\), respectively. Then, \(\mathbf{y}_{j}\) is computed as the solution of the linear simultaneous equations \(B\mathbf{y}_{j}=\mathbf{a}_{j}\). Thus, \(\hat{\mathbf{y}}_{j}-\mathbf{y}_{j}\) consists of two parts, the error in the formation of \(B\) and that in the solution of the linear simultaneous equations. Denote the former error by \(\Delta B^{\prime}\). Then, if we consider only the error in the computation of \(A^{2}\) and ignore the error arising from the addition of \(t^{2}I\), we have \(|\Delta B^{\prime}|\leq\gamma_{n}|A|^{2}\) from the result of the standard error analysis [6, §3.5] and therefore
\[\|\Delta B^{\prime}\|_{F}\leq\gamma_{n}\|A\|_{F}^{2}. \tag{9}\]
Now, \(\hat{\mathbf{y}}_{j}\) is obtained by solving the linear simultaneous equation with the coefficient matrix \(\bar{B}\equiv B+\Delta B^{\prime}\) using Gaussian elimination with partial pivoting in floating-point arithmetic. In that case, it is well known that \(\hat{\mathbf{y}}_{j}\) satisfies the following equation [6, Theorem 9.4]:
\[(\bar{B}+\Delta B^{\prime\prime}_{j})\hat{\mathbf{y}}_{j}=\mathbf{a}_{j},\quad |\Delta B^{\prime\prime}_{j}|\leq\gamma_{3n}|\hat{L}||\hat{U}|, \tag{10}\]
where \(\hat{L}\) and \(\hat{U}\) are computed LU factors of \(\bar{B}\) and \(\Delta B^{\prime\prime}_{j}\) is the backward error in the solution of the linear simultaneous equation. Now, we evaluate \(\|\Delta B^{\prime\prime}_{j}\|_{F}\). First, since \(|\hat{l}_{ij}|\leq 1\), we have \(\|\hat{L}\|_{F}\leq n\). Next, let the coefficient matrix in the \(k\)th step of the Gaussian elimination be \(\bar{B}^{(k)}=(\bar{b}^{(k)}_{ij})\) and define the _growth factor_ as
\[\hat{\rho}_{n}=\frac{\max_{i,j,k}|\bar{b}^{(k)}_{i,j}|}{\max_{i,j}|\bar{b}_{i,j}|}. \tag{11}\]
Then,
\[|\hat{u}_{i,j}|=|\bar{b}^{(i)}_{i,j}|\leq\hat{\rho}_{n}\max_{i^{\prime},j^{\prime}}|\bar{b}_{i^{\prime},j^{\prime}}|\leq\hat{\rho}_{n}\|\bar{B}\|_{2} \tag{12}\]
and we have \(\|\hat{U}\|_{F}\leq n\hat{\rho}_{n}\|\bar{B}\|_{2}\). From these results, it follows that
\[\|\Delta B^{\prime\prime}_{j}\|_{F}\leq n^{2}\gamma_{3n}\hat{\rho}_{n}\|\bar{B}\|_{2}\leq n^{2}\gamma_{3n}\hat{\rho}_{n}(\|B\|_{2}+\gamma_{n}\|A\|_{F}^{2})=n^{2}\gamma_{3n}\hat{\rho}_{n}\|t^{2}I+A^{2}\|_{2}+O(\mathbf{u}^{2})\simeq n^{2}\gamma_{3n}\hat{\rho}_{n}\|t^{2}I+A^{2}\|_{2}. \tag{13}\]
It is known that the growth factor \(\hat{\rho}_{n}\) is almost independent of \(n\) and is typically around \(10\)[6, SS9.4]. Combining (9), (10) and (13), we know that \(\hat{\mathbf{y}}_{j}\) satisfies
\[(B+\Delta B_{j})\hat{\mathbf{y}}_{j}=\mathbf{a}_{j}, \tag{14}\] \[\|\Delta B_{j}\|_{F}\leq\gamma_{n}\|A\|_{F}^{2}+n^{2}\gamma_{3n} \hat{\rho}_{n}\|t^{2}I+A^{2}\|_{2}, \tag{15}\]
where \(\Delta B_{j}=\Delta B^{\prime}+\Delta B^{\prime\prime}_{j}\). Now, assume that \(\|B^{-1}\|_{2}\|\Delta B_{j}\|_{2}\leq 1/2\). Applying the perturbation theory for linear simultaneous equations [6, Theorem 7.2] gives
\[\|\hat{\mathbf{y}}_{j}-\mathbf{y}_{j}\| \leq\frac{\|B^{-1}\|_{2}\|\Delta B_{j}\|_{2}}{1-\|B^{-1}\|_{2}\| \Delta B_{j}\|_{2}}\,\|\mathbf{y}_{j}\|\] \[\leq 2\|B^{-1}\|_{2}\|\Delta B_{j}\|_{2}\|\mathbf{y}_{j}\|\] \[\leq 2\|(t^{2}I+A^{2})^{-1}\|_{2}\] \[\quad\times(\gamma_{n}\|A\|_{F}^{2}+n^{2}\gamma_{3n}\hat{\rho}_{n }\|t^{2}I+A^{2}\|_{2})\|\mathbf{y}_{j}\|.\]
From this, it is immediate to show that
\[\|\tilde{E}_{1}(t)\|_{F}=\|\hat{Y}(t)-Y(t)\|_{F}\] \[\leq 2\|(t^{2}I+A^{2})^{-1}\|_{2}(\gamma_{n}\|A\|_{F}^{2}+n^{2} \gamma_{3n}\hat{\rho}_{n}\|t^{2}I+A^{2}\|_{2})\] \[\quad\times\|(t^{2}I+A^{2})^{-1}A\|_{F}. \tag{16}\]
### Evaluation of \(E_{1}\)
Now we evaluate \(E_{1}\) using (7) and (16). From the assumption of diagonalizability, \(A\) can be written as \(A=X\Lambda X^{-1}\), where \(\Lambda=\text{diag}(\lambda_{1},\ldots,\lambda_{n})\). Hence,
\[\|A\|_{F}^{2}=\|X\Lambda X^{-1}\|_{F}^{2}\] \[\leq(\|X\|_{2}\|\Lambda\|_{F}\|X^{-1}\|_{2})^{2}=(\kappa_{2}(X))^{ 2}\|\Lambda\|_{F}^{2}, \tag{17}\]
where we used \(\|AB\|_{F}\leq\|A\|_{2}\|B\|_{F}\). Similarly, we have
\[\|t^{2}I+A^{2}\|_{2} \leq\kappa_{2}(X)\|t^{2}I+\Lambda^{2}\|_{2}, \tag{18}\] \[\|(t^{2}I+A^{2})^{-1}\|_{2} \leq\kappa_{2}(X)\|(t^{2}I+\Lambda^{2})^{-1}\|_{2},\] (19) \[\|(t^{2}I+A^{2})^{-1}A\|_{F} \leq\kappa_{2}(X)\|(t^{2}I+\Lambda^{2})^{-1}\Lambda\|_{F}. \tag{20}\]
Substituting (17) through (20) into (16) gives
\[\|\tilde{E}_{1}(t)\|_{F}\] \[\leq 2\left\{\gamma_{n}(\kappa_{2}(X))^{4}\|\Lambda\|_{F}^{2}+n^{2 }\gamma_{3n}\hat{\rho}_{n}(\kappa_{2}(X))^{3}\|t^{2}I+\Lambda^{2}\|_{2}\right\}\] \[\times\|(t^{2}I+\Lambda^{2})^{-1}\|_{2}\|(t^{2}I+\Lambda^{2})^{- 1}\Lambda\|_{F}. \tag{21}\]
From (7) and (21), the contribution to the total roundoff error can be computed as
\[\|E_{1}\|_{F}\leq\frac{2}{\pi}h\sum_{k=N^{-}}^{N^{+}}\|\tilde{E}_{1 }(\phi(kh))\|_{F}\phi^{\prime}(kh)\] \[\simeq\frac{2}{\pi}\int_{-\infty}^{\infty}\|\tilde{E}_{1}(\phi(x)) \|_{F}\phi^{\prime}(x)\,dx=\frac{2}{\pi}\int_{0}^{\infty}\|\tilde{E}_{1}(t)\|_{F }\,dt\] \[\leq\frac{4}{\pi}\int_{0}^{\infty}\left\{\gamma_{n}(\kappa_{2}(X))^ {4}\|\Lambda\|_{F}^{2}\right.\] \[\qquad\qquad\left.+n^{2}\gamma_{3n}\hat{\rho}_{n}(\kappa_{2}(X))^ {3}\|t^{2}I+\Lambda^{2}\|_{2}\right\}\] \[\qquad\qquad\times\|(t^{2}I+\Lambda^{2})^{-1}\|_{2}\|(t^{2}I+ \Lambda^{2})^{-1}\Lambda\|_{F}\,dt. \tag{22}\]
Let us write the integrand of the last integral as \(e_{n,X,\Lambda}(t)\). To evaluate the integral, we let \(t_{1}=\sqrt{2}\|\Lambda\|_{F}\) and divide the integration interval into two parts, \([0,t_{1}]\) and \([t_{1},\infty)\). When \(t\geq t_{1}\), we have
\[\|\Lambda\|_{F}^{2} \leq\frac{t^{2}}{2}, \tag{23}\]
\[\|t^{2}I+\Lambda^{2}\|_{2} =\max_{1\leq j\leq n}|t^{2}+\lambda_{j}^{2}|\leq\max_{1\leq j\leq n}(t^{2}+|\lambda_{j}|^{2})\leq t^{2}+\frac{t^{2}}{2}=\frac{3}{2}t^{2}, \tag{24}\]
\[\|(t^{2}I+\Lambda^{2})^{-1}\|_{2} =\max_{1\leq j\leq n}\frac{1}{|t^{2}+\lambda_{j}^{2}|}\leq\max_{1\leq j\leq n}\frac{1}{t^{2}-|\lambda_{j}|^{2}}\leq\frac{1}{t^{2}-\frac{1}{2}t^{2}}=\frac{2}{t^{2}}, \tag{25}\]
\[\|(t^{2}I+\Lambda^{2})^{-1}\Lambda\|_{F} =\sqrt{\sum_{j=1}^{n}\frac{|\lambda_{j}|^{2}}{|t^{2}+\lambda_{j}^{2}|^{2}}}\leq\frac{2}{t^{2}}\sqrt{\sum_{j=1}^{n}|\lambda_{j}|^{2}}=\frac{2}{t^{2}}\|\Lambda\|_{F}. \tag{26}\]
Now, using (23) through (26), for \(t\geq t_{1}\) the integrand is bounded as \(e_{n,X,\Lambda}(t)\leq c_{n,X,\Lambda}\cdot\frac{2}{t^{2}}\,\|\Lambda\|_{F}\), where we write \(c_{n,X,\Lambda}\equiv\gamma_{n}(\kappa_{2}(X))^{4}+3n^{2}\gamma_{3n}\hat{\rho}_{n}(\kappa_{2}(X))^{3}\). Hence
\[0\leq\int_{t_{1}}^{\infty}e_{n,X,\Lambda}(t)\,dt\leq c_{n,X,\Lambda}\|\Lambda\|_{F}\int_{t_{1}}^{\infty}\frac{2}{t^{2}}\,dt \tag{27}\]
\[=c_{n,X,\Lambda}\|\Lambda\|_{F}\cdot\frac{\sqrt{2}}{\|\Lambda\|_{F}}=\sqrt{2}\,c_{n,X,\Lambda}. \tag{28}\]
On the other hand, when \(0\leq t\leq t_{1}\),
\[\|t^{2}I+\Lambda^{2}\|_{2} \leq\max_{1\leq j\leq n}(t^{2}+|\lambda_{j}|^{2})\leq 3\|\Lambda\|_{F}^ {2}, \tag{29}\] \[\|(t^{2}I+\Lambda^{2})^{-1}\|_{2} =\max_{1\leq j\leq n}\frac{1}{|t^{2}+\lambda_{j}^{2}|},\] (30) \[\|(t^{2}I+\Lambda^{2})^{-1}\Lambda\|_{F} \leq\|\Lambda\|_{F}\|(t^{2}I+\Lambda^{2})^{-1}\|_{2}\] \[\leq\|\Lambda\|_{F}\max_{1\leq j\leq n}\frac{1}{|t^{2}+\lambda_{ j}^{2}|}. \tag{31}\]
Hence,
\[0\leq\int_{0}^{t_{1}}e_{n,X,\Lambda}(t)\,dt\leq c_{n,X,\Lambda} \|\Lambda\|_{F}^{3}\int_{0}^{t_{1}}\max_{1\leq j\leq n}\frac{dt}{|t^{2}+ \lambda_{j}^{2}|^{2}}\] \[\quad\leq c_{n,X,\Lambda}\|\Lambda\|_{F}^{3}\sum_{j=1}^{n}\int_{ 0}^{t_{1}}\frac{dt}{|t^{2}+\lambda_{j}^{2}|^{2}}\] \[\quad\leq\frac{1}{2}c_{n,X,\Lambda}\|\Lambda\|_{F}^{3}\sum_{j=1} ^{n}\int_{-\infty}^{\infty}\frac{dt}{|t^{2}+\lambda_{j}^{2}|^{2}}. \tag{32}\]
Noting that \(\mathrm{Re}(\lambda_{j})\neq 0\), we can evaluate this integral as
\[\int_{-\infty}^{\infty}\frac{dt}{|t^{2}+\lambda_{j}^{2}|^{2}}=\frac{\pi}{2| \lambda_{j}|^{2}|\mathrm{Re}(\lambda_{j})|}. \tag{33}\]
See Lemma 1 in the Appendix. Thus, it follows that
\[0\leq\int_{0}^{t_{1}}e_{n,X,\Lambda}(t)\,dt\leq\frac{\pi}{4}c_{ n,X,\Lambda}\|\Lambda\|_{F}^{3}\sum_{j=1}^{n}\frac{1}{|\lambda_{j}|^{2}| \mathrm{Re}(\lambda_{j})|}\] \[=\frac{\pi}{4}c_{n,X,\Lambda}\|\Lambda\|_{F}^{3}\|\Lambda^{-1}| \mathrm{Re}(\Lambda)|^{-\frac{1}{2}}\|_{F}^{2}. \tag{34}\]
Combining (22), (28) and (34) gives
\[\|E_{1}\|_{F} \leq c_{n,X,\Lambda}\left(\frac{4\sqrt{2}}{\pi}+\|\Lambda\|_{F}^{3} \|\Lambda^{-1}|\mathrm{Re}(\Lambda)|^{-\frac{1}{2}}\|_{F}^{2}\right)\] \[=\left(\gamma_{n}(\kappa_{2}(X))^{4}+3n^{2}\gamma_{3n}\hat{\rho}_ {n}(\kappa_{2}(X))^{3}\right)\] \[\quad\times\left(\frac{4\sqrt{2}}{\pi}+\|\Lambda\|_{F}^{3}\| \Lambda^{-1}|\mathrm{Re}(\Lambda)|^{-\frac{1}{2}}\|_{F}^{2}\right). \tag{35}\]
### Evaluation of \(E_{2}\)
Next, we evaluate \(E_{2}\), the roundoff error in the summation. Let us consider the sum of \(m\) matrices, \(S_{m}=\sum_{i=1}^{m}T_{i}\), and denote the computed result by \(\hat{S}_{m}\). Then, from the formula of roundoff error bound for scalar summation [6, Problem 4.3], \(|\hat{S}_{m}-S_{m}|\) can be bounded as follows [7].
\[|\hat{S}_{m}-S_{m}|\leq\gamma_{m-1}(|T_{1}|+|T_{2}|)+\sum_{i=3}^{m}\gamma_{m- i+1}|T_{i}|. \tag{36}\]
Taking the Frobenius norm of both sides and replacing \(\gamma_{m-i+1}\) with \(\gamma_{m-1}\) for simplicity gives
\[\|\hat{S}_{m}-S_{m}\|_{F}\leq\gamma_{m-1}\sum_{i=1}^{m}\|T_{i}\|_{F}. \tag{37}\]
By applying this result to (8) and writing \(M=N_{+}-N_{-}\), we have
\[\|E_{2}\|_{F} =\frac{2}{\pi}h\gamma_{M}\sum_{k=N^{-}}^{N^{+}}\|Y(\phi(kh))\|_{F} \phi^{\prime}(kh)\] \[\simeq\frac{2}{\pi}\gamma_{M}\int_{-\infty}^{\infty}\|Y(\phi(x)) \|_{F}\phi^{\prime}(x)\,dx\] \[=\frac{2}{\pi}\gamma_{M}\int_{0}^{\infty}\|Y(t)\|_{F}\,dt\] \[=\frac{2}{\pi}\gamma_{M}\int_{0}^{\infty}\|(t^{2}I+A^{2})^{-1}A\| _{F}\,dt\] \[\leq\frac{2}{\pi}\gamma_{M}\kappa_{2}(X)\int_{0}^{\infty}\|(t^{2} I+\Lambda^{2})^{-1}\Lambda\|_{F}\,dt\] \[=\frac{2}{\pi}\gamma_{M}\kappa_{2}(X)\int_{0}^{\infty}\sqrt{\sum_{ j=1}^{n}\frac{|\lambda_{j}|^{2}}{|t^{2}+\lambda_{j}^{2}|^{2}}}\,dt\] \[\leq\frac{2}{\pi}\gamma_{M}\kappa_{2}(X)\int_{0}^{\infty}\sum_{j=1 }^{n}\frac{|\lambda_{j}|}{|t^{2}+\lambda_{j}^{2}|}\,dt\] \[=\frac{2}{\pi}\gamma_{M}\kappa_{2}(X)\sum_{j=1}^{n}|\lambda_{j}| \int_{0}^{\infty}\frac{1}{|t^{2}+\lambda_{j}^{2}|}\,dt. \tag{38}\]
Now, we explain how to evaluate the integral in the last expression. Let \(\lambda_{j}=\mu_{j}+\mathrm{i}\nu_{j}\), where \(\mathrm{i}=\sqrt{-1}\). Then, the denominator of the integrand can be written as the square root of a real quartic polynomial in \(t\). Hence, as is well known, the integral can be transformed into the complete elliptic integral of the first kind:
\[K(k)=\int_{0}^{\frac{\pi}{2}}\frac{d\theta}{\sqrt{1-k^{2}\sin^{2}\theta}}. \tag{39}\]
Details of this transformation are given in Lemma 2. Further, by employing an upper bound on \(K(k)\), which is provided in Lemma 3, we can obtain an upper bound on this integral. The evaluation proceeds as follows.
\[\int_{0}^{\infty}\frac{1}{|t^{2}+\lambda_{j}^{2}|}\,dt\] \[=\int_{0}^{\infty}\frac{1}{\sqrt{t^{4}+2(\mu_{j}^{2}-\nu_{j}^{2})t ^{2}+(\mu_{j}^{2}+\nu_{j}^{2})^{2}}}\,dt\] \[=\frac{1}{\sqrt{\mu_{j}^{2}+\nu_{j}^{2}}}K\left(\frac{|\nu_{j}|}{ \sqrt{\mu_{j}^{2}+\nu_{j}^{2}}}\right)\] \[\leq\frac{\pi}{2}\cdot\frac{1}{\sqrt{\mu_{j}^{2}+\nu_{j}^{2}}} \left\{1-\frac{1}{\pi}\log\left(\frac{\mu_{j}^{2}}{\mu_{j}^{2}+\nu_{j}^{2}} \right)\right\}\] \[=\frac{\pi}{2}\cdot\frac{1}{|\lambda_{j}|}\left\{1-\frac{1}{\pi} \log\left(\frac{(\mathrm{Re}(\lambda_{j}))^{2}}{|\lambda_{j}|^{2}}\right) \right\}, \tag{40}\]
where we used Lemma 2 in the second equality and Lemma 3 in the first inequality. Inserting this into
Eq. (38) gives
\[\|E_{2}\|_{F}\leq\gamma_{M}\kappa_{2}(X)\left\{n-\frac{2}{\pi}\sum_{j=1}^{n}\log\left(\frac{|\text{Re}(\lambda_{j})|}{|\lambda_{j}|}\right)\right\}. \tag{41}\]
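The reduction to \(K(k)\) in (40) can likewise be checked numerically; note that SciPy's `ellipk` takes the parameter \(m=k^{2}\) rather than the modulus \(k\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

mu, nu = 0.6, 1.1                     # lambda_j = mu + i*nu with mu != 0
lam = complex(mu, nu)
val, _ = quad(lambda t: 1.0 / abs(t * t + lam * lam), 0.0, np.inf)
k2 = nu**2 / (mu**2 + nu**2)          # k^2 with k = |nu| / sqrt(mu^2 + nu^2)
ref = ellipk(k2) / np.sqrt(mu**2 + nu**2)
print(val, ref)                       # matches the second equality in (40)
```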
### Total roundoff error
The total roundoff error is given as the sum of \(E_{1}\) and \(E_{2}\). From Eqs. (35) and (41), we see that the bound on \(\|E_{1}\|_{F}\) is cubic in \(n\) and quartic in \(\kappa_{2}(X)\), while that of \(\|E_{2}\|_{F}\) is linear in both \(n\) and \(\kappa_{2}(X)\). Also, whereas both bounds show singularity when \(\text{Re}(\lambda_{j})\) approaches zero, the singularity in the former is \(O(1/\text{Re}(\lambda_{j}))\), while that of the latter is logarithmic. From these facts, we can say that \(E_{1}\) is dominant in the roundoff error.
## 4 Numerical results
We performed numerical experiments to check the validity of our error bound. In the experiments, we used a PC with the AMD Ryzen 7 3700X Processor (8-Core, 3.59GHz). Our program was written in Python. To compute the matrix products and inverses, we used NumPy. All the computations were performed in double precision arithmetic.
In the experiments, we computed \(\text{sign}(A)\) with the DE-based method, compared the result with that computed by Eq. (2), and evaluated their difference \(E\). In the DE-based method, \(h\) was chosen sufficiently small and \(N^{+}\) and \(N^{-}\) were chosen sufficiently large so that the discretization and truncation errors can be neglected. The test matrices were constructed as \(A=X\Lambda X^{-1}\), where \(X\) is a nonsingular matrix with a specified condition number \(\kappa_{2}(X)\) and \(\Lambda\) is a real random diagonal matrix with a specified condition number \(\kappa_{2}(\Lambda)\). To control the condition number of \(X\), we generated \(X\) as \(X=QDQ^{\top}\), where \(Q\) is a random orthogonal matrix and \(D\) is a real random diagonal matrix with the condition number \(\kappa_{2}(X)\). The matrix size \(n\) was fixed to 100 in the first and second experiments.
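A minimal sketch of this test-matrix construction is shown below; the exact distributions used to sample \(D\) and \(\Lambda\) are not specified in the text, so the log-uniform choice here is our assumption:

```python
import numpy as np

def random_test_matrix(n, kappa_X, kappa_L, rng):
    """A = X Lambda X^{-1} with X = Q D Q^T, Q random orthogonal,
    cond(D) = kappa_X and Lambda real diagonal with cond(Lambda) = kappa_L."""
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    d = np.exp(rng.uniform(0.0, np.log(kappa_X), n))
    d[0], d[-1] = 1.0, kappa_X                      # pin the condition number
    X = (Q * d) @ Q.T                               # Q @ diag(d) @ Q.T
    lam = np.exp(rng.uniform(0.0, np.log(kappa_L), n))
    lam[0], lam[-1] = 1.0, kappa_L
    lam *= rng.choice([-1.0, 1.0], size=n)          # mixed-sign real spectrum
    return X @ np.diag(lam) @ np.linalg.inv(X), X, lam

A, X, lam = random_test_matrix(100, 1e2, 1e2, np.random.default_rng(0))
```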
In the first experiment, we fixed \(\Lambda\) and varied \(\kappa_{2}(X)\) from \(10^{1}\) to \(10^{6}\). The results are shown in Fig. 1(a). Here, the horizontal axis and the vertical axis denote \(\kappa_{2}(X)\) and \(\|E\|_{F}\), respectively, and log-log plot is used. By regression, the slope is estimated to be 3.886. This is in accordance with our theoretical result, which shows that the dominant component of the roundoff error, \(\|E_{1}\|_{F}\), is bounded by a quantity proportional to \((\kappa_{2}(X))^{4}\).
In the second experiment, we fixed \(X\) and varied \(\kappa_{2}(\Lambda)\) from \(10^{0}\) to \(10^{6}\). The result is shown in Fig. 1(b). In this case, the slope is estimated to be 1.957. This is smaller than expected from our bound (35), which suggests that \(\|E_{1}\|_{F}\) is proportional to \((\kappa_{F}(\Lambda))^{3}\) when all eigenvalues of \(A\) are real. Thus, our bound seems to be somewhat overestimate with respect to \(\Lambda\), although it still is a valid upper bound. Note also that the condition number appearing in (35) is \(\kappa_{F}(\Lambda)\) (when all eigenvalues are real), while that specified in the experiment is \(\kappa_{2}(\Lambda)\). Hence, strictly speaking, they are not directly comparable. However, since \(\|A\|_{2}\leq\|A\|_{F}\leq\sqrt{n}\|A\|_{2}\) holds for any matrix \(A\), we have
\[\kappa_{2}(\Lambda)\leq\kappa_{F}(\Lambda)\leq n\kappa_{2}(\Lambda) \tag{42}\]
and the comparison is justified if we ignore a polynomial factor in \(n\).
In the third experiment, we fixed \(\kappa_{2}(\Lambda)\) and \(\kappa_{2}(X)\) to 100 and varied the matrix size \(n\) randomly between 240 and 2560. The result is illustrated in Fig. 2. Here, the slope is estimated as 0.540. Since the bound (35) on \(\|E_{1}\|_{F}\) is cubic in \(n\) (note that \(\gamma_{3n}\simeq 3n\mathbf{u}\)), it is a large overestimate. Essentially, this dependence on \(n\) comes from the backward error bound (13) for the solution of the linear system. However, this is a standard result in the roundoff error analysis of matrix computations, and it seems difficult to improve it drastically. A possible approach for improvement would be to move from the _a priori_ error analysis, which we have adopted in this paper, to an _a posteriori_ error analysis, which uses the computed residual to construct the bound. This will be a topic for future research.
## 5 Conclusion
In this paper, we presented a roundoff error analysis of the DE-based method for the matrix sign function. Our upper bound on the roundoff error is in good accordance with the actual error and partially explains why the accuracy of the method deteriorates when the input matrix is ill-conditioned or highly nonnormal. One possible remedy would be to expand \((t^{2}I+A^{2})^{-1}\) into partial fractions by allowing complex arithmetic. This would reduce the dependency of Eq. (17) on \(\kappa_{2}(X)\) from quadratic to linear, improving the overall dependency of the total error to \((\kappa_{2}(X))^{3}\). This will be a topic in our future study.

Figure 1: Errors in the computation of \(\text{sign}(A)\).

Figure 2: Errors of \(\text{sign}(A)\) as a function of \(n\).
## Acknowledgment
This study is supported by JSPS KAKENHI Grant Numbers 19KK0255 and 22KK19772.
|
2309.16136 | Finite-Key Analysis for Coherent One-Way Quantum Key Distribution | Coherent-one-way (COW) quantum key distribution (QKD) is a significant
communication protocol that has been implemented experimentally and deployed in
practical products due to its simple equipment requirements. However, existing
security analyses of COW-QKD either provide a short transmission distance or
lack immunity against coherent attacks in the finite-key regime. In this paper,
we present a tight finite-key security analysis within the universally
composable framework for a variant of COW-QKD, which has been proven to extend
the secure transmission distance in the asymptotic case. We combine the quantum
leftover hash lemma and entropic uncertainty relation to derive the key rate
formula. When estimating statistical parameters, we use the recently proposed
Kato's inequality to ensure security against coherent attacks and achieve a
higher key rate. Our paper confirms the security and feasibility of COW-QKD for
practical application and lays the foundation for further theoretical study and
experimental implementation. | Ming-Yang Li, Xiao-Yu Cao, Yuan-Mei Xie, Hua-Lei Yin, Zeng-Bing Chen | 2023-09-28T03:32:06Z | http://arxiv.org/abs/2309.16136v2 | # Finite-Key Analysis for Coherent-One-Way Quantum Key Distribution
###### Abstract
Coherent-one-way (COW) quantum key distribution (QKD) is a significant communication protocol that has been implemented experimentally and deployed in practical products due to its simple equipment requirements. However, existing security analyses of COW-QKD either provide a short transmission distance or lack immunity against coherent attacks in the finite-key regime. In this study, we present a tight finite-key security analysis within the universally composable framework for a new variant of COW-QKD, which has been proven to extend the secure transmission distance in the asymptotic case. We combine the Quantum Leftover Hash Lemma and entropic uncertainty relation to derive the key rate formula. When estimating statistical parameters, we use the recently proposed Kato's inequality to ensure security against coherent attacks and achieve a higher key rate. Our work confirms the security and feasibility of COW-QKD for practical application and lays the foundation for further theoretical study and experimental implementation.
## I Introduction
Quantum theory has been playing a significant role in the field of communication, leading to the development of novel primitives such as quantum repeater [1; 2; 3], quantum conference key agreement [4; 5; 6; 7; 8; 9; 10], quantum secret sharing [11; 12; 13; 14; 15; 16; 17; 18] and quantum digital signatures [19; 20; 21; 22]. Among these primitives, quantum key distribution (QKD) [23; 24] has received considerable attention due to its ability to provide two remote users with a secret key with unconditional security guaranteed by the laws of quantum mechanics. Since the first QKD protocol, the Bennett-Brassard 1984 protocol [23] was proposed, various QKD schemes have been developed [25; 26; 27] to improve its practicality. Among these developments, measurement-device-independent QKD [28; 29] is of vital importance for its immunity against one of the most threatening attacks, the detector attacks [30], which enables experimental operations over a long distance [31; 32]. However, due to channel loss, the key rates of most QKD protocols are bounded by the secret-key capacity of repeaterless QKD [33; 34; 35; 36]. A novel protocol called the twin-field QKD [37] and its variants [38; 39; 40; 41; 42; 43] which are based on single-photon interference instead of two-photon interference, break this bound and increase the secure distance to 833 km [44] and 1002 km [45] experimentally. Moreover, the recently proposed asynchronous measurement-device-independent QKD [46; 47] (also named mode-pairing QKD) has become a practical approach for long-distance quantum communication systems [48; 49; 50; 51] because it breaks the linear bound with its simple experimental implementation compared with twin-field QKD. The photon number splitting attack [52] is another critical limitation of practical QKD, that has been overcome by several means like decoy-state methods [53; 54; 55], non-orthogonal coding methods [56; 57; 58], strong reference methods [59] and distributed-phase-reference methods [60; 61; 62; 63], including differential-phase-shift (DPS) QKD and coherent-one-way (COW) QKD.
DPS protocol [60; 61] is becoming more significant for its excellent key rate performance achieved by the simple setup of equipment. The experimental progress [64; 65; 66; 67] shows the status of DPS-QKD as a promising protocol for realising the quantum communication process in the real world. Theoretically, long-term security analyses of DPS-QKD have been proposed to establish a solid foundation to guarantee its unconditional security in reality. Assuming a single photon to be in each of the blocks, the analysis in Ref. [68] provides the first security proof of DPS-QKD and this impractical assumption is changed to using a blockwise phase-randomized coherent photon source in later developments [69; 70]. Furthermore, recently proposed proofs [71; 72; 73] give more practical analyses, removing the requirement of a special photon source and covering more general cases. Finally, Refs. [74; 75] provide information-theoretic secure analyses to show the practicability of DPS-QKD in the finite-key regime, which builds a complete theoretic scheme for this protocol. We note that the security proof in Ref. [74] results in a key rate that scales in the order of \(O(\eta^{2})\) without relativistic constraint and has immunity against coherent attacks which is of vital importance in realistic implementation.
COW-QKD [62] is another type of distributed-phase-reference protocol that has been implemented in practical quantum information processing [76] with its easily achievable experimental requirements [77; 78; 79; 80; 81; 82; 83; 84] which are similar to that of DPS-QKD. Contrary to the DPS-QKD, the security proof for the COW protocol remains incomplete, primarily due to the absence of a finite-key secure analysis that simultaneously offers robust key rate performance and security against coherent attacks. Typically, the security of COW-QKD is proven by measuring the interference visibility to estimate information leakage [85; 86]. However, when considering the zero-error attack [87; 88], which enables eavesdropping by Eve with
no bit errors introduced, COW-QKD is insecure if its key rate scales as \(O(\eta)\)[89], which is the scale of the key rate used in many COW-QKD experiments [90; 81; 82; 83; 79]. In Ref. [91], the authors introduced an innovative method for calculating the key rate of a variant of COW-QKD, resulting in an improved key rate in high-loss channels. However, the security of this new protocol in the finite-key regime, particularly its immunity to coherent attacks, has yet to be proven. This is a crucial step in ensuring its practicality in real-world environments. In summary, the lack of a finite-key analysis for COW-QKD that offers both a high key rate and security against coherent attacks remains a significant obstacle to enhancing the practicality of this technology. Recently, a new security proof for COW-QKD was proposed [92], based on an innovative practical implementation that retains the simplicity of the original version. By estimating the upper bound on the phase error rate instead of measuring the visibility of interference, it was shown that the secure transmission distance can exceed 100 km, and an analytic formula for the key rate was provided.
In this work, we extend the security proof in Ref. [92] to the finite-key domain with composable security [93] to demonstrate its real-world applicability. We employ the Quantum Leftover Hash Lemma [94] and the entropic uncertainty relation [95; 96] to derive a formula for the secure key length in the finite-key regime. When dealing with correlated random variables, we apply Kato's inequality [97] to estimate statistical fluctuations, ensuring security against coherent attacks and resulting in a higher key rate compared to Azuma's inequality [98]. We simulate the performance of the key rate under different conditions, such as varying values of misalignment error and different choices of basis, to demonstrate the flexibility of our protocol. The simulation results and comparison with existing analyses and another similar protocol confirm the advantages of our approach.
This paper is organized as follows: Sec. II describes our COW-QKD protocol scheme. Sec. III presents details of the key rate calculation. Numerical simulations of key rate performance under different conditions are shown in Sec. IV, and we conclude in Sec. V.
## II Protocol Description
In the COW-QKD protocol, the sender Alice uses the two-pulse states \(\left|0_{k}\right\rangle=\left|0\right\rangle_{2k-1}\left|\alpha\right\rangle_{2k}\) and \(\left|1_{k}\right\rangle=\left|\alpha\right\rangle_{2k-1}\left|0\right\rangle_{2k}\) at the two time windows \(2k-1\) and \(2k\) (\(k=1,2,\cdots,N\)) to encode the logic bits \(0\) and \(1\) in the \(k\)-th round, respectively. The state \(\left|0\right\rangle\) denotes the vacuum state and \(\left|\alpha\right\rangle\) is a coherent state with mean photon number \(\mu=\left|\alpha\right|^{2}\). As shown in Fig. 1, the COW-QKD scheme used in this paper takes both the two-pulse coherent state \(\left|\alpha\right\rangle_{2k-1}\left|\alpha\right\rangle_{2k}\) and the two-pulse vacuum state \(\left|0\right\rangle_{2k-1}\left|0\right\rangle_{2k}\) as decoy states to estimate the phase error rate, instead of using the visibility to reflect the broken coherence.
The detailed steps of this scheme are:
1. Alice randomly sends a sequence of pulses that consists of states \(\left|0\right\rangle_{2k-1}\left|\alpha\right\rangle_{2k}\), \(\left|\alpha\right\rangle_{2k-1}\left|0\right\rangle_{2k}\), \(\left|\alpha\right\rangle_{2k-1}\left|\alpha\right\rangle_{2k}\) and \(\left|0\right\rangle_{2k-1}\left|0\right\rangle_{2k}\) with probability \(p_{z}\), \(p_{z}\), \(p_{d_{1}}\) and \(p_{d_{2}}\) respectively to Bob where \(p_{z}=\frac{1}{2}(1-p_{d_{1}}-p_{d_{2}})\). She records her choice of sending in each round. This step is repeated for \(N\) rounds so we have \(k=1,2,\cdots,N\).
2. Bob uses a beam splitter of transmittance \(t_{B}\) to passively distribute incoming states into the data line or the monitoring line. On the data line, he measures the click time of each signal to determine which logic bit Alice encodes in this round and gets the raw key. On the monitoring line, he records which detector clicks in each round. Here we note that if multiple detectors click in one round, Bob randomly records one of these detector clicks.
3. Bob announces in which round he records a click on the data line. Alice only keeps her logic bits in those rounds and discards the rest to get the raw key.
4. Bob announces his click records of the monitoring line. Alice calculates the following click counts: \(n_{0\alpha}^{M_{i}}\), \(n_{\alpha 0}^{M_{i}}\), \(n_{\alpha\alpha}^{M_{i}}\) and \(n_{00}^{M_{i}}\) (\(i=0\) or \(1\)). Here \(n_{u}^{M_{i}}\) \((u=0\alpha,\alpha 0,\alpha\alpha,00)\) are the click counts for the states \(\left|0\right\rangle_{2k-1}\left|\alpha\right\rangle_{2k}\), \(\left|\alpha\right\rangle_{2k-1}\left|0\right\rangle_{2k}\), \(\left|\alpha\right\rangle_{2k-1}\left|\alpha\right\rangle_{2k}\) and \(\left|0\right\rangle_{2k-1}\left|0\right\rangle_{2k}\), respectively, where the superscript \(M_{i}\) refers to the clicking detector on the monitoring line. By applying Kato's inequality, she can estimate the upper bound on the phase error rate \(\overline{E_{p}}\). The bit error rate \(E_{z}\) can be calculated by revealing some bits from the raw key. If either \(\overline{E_{p}}\) or \(E_{z}\) exceeds the preset values, the protocol aborts.

Figure 1: Experimental implementation of the COW-QKD protocol in this work. With an intensity modulator (IM), Alice can prepare the quantum states \(\left|0\right\rangle\) and \(\left|\alpha\right\rangle\) in each time window, allowing her to randomly send a sequence of pulses consisting of the states \(\left|0\right\rangle_{2k-1}\left|\alpha\right\rangle_{2k}\), \(\left|\alpha\right\rangle_{2k-1}\left|0\right\rangle_{2k}\), \(\left|\alpha\right\rangle_{2k-1}\left|\alpha\right\rangle_{2k}\) and \(\left|0\right\rangle_{2k-1}\left|0\right\rangle_{2k}\) to Bob. After passively distributing these states into the data line or the monitoring line with a beam splitter of transmittance \(t_{B}\), Bob records the detector clicks in each round. \(D_{T}\), \(D_{M_{0}}\) and \(D_{M_{1}}\) are single-photon detectors. Compared to the original version of COW-QKD, our protocol adds the state \(\left|0\right\rangle_{2k-1}\left|0\right\rangle_{2k}\) as another decoy state while maintaining the same requirements for experimental equipment.
5. After an error correction step is performed, at most \(\text{leak}_{\text{EC}}\) bits of information are revealed. Then Alice and Bob verify whether the error correction step succeeds and perform privacy amplification to get the final key string.
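To make the flow of the steps above concrete, the following is a minimal Monte Carlo sketch of steps 1–3 (Alice's random state sequence and the click statistics of Bob's data line). All parameter values are hypothetical placeholders rather than the optimized settings of Sec. IV; dark counts and the monitoring-line interference are deliberately omitted, so this illustrates only the bookkeeping of the protocol, not its security statistics.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical illustration parameters (not the optimized values of Sec. IV)
N = 10**6            # number of rounds
mu = 0.1             # mean photon number of |alpha>
eta = 0.1            # overall channel transmittance
t_B = 0.9            # assumed beam-splitter fraction sent to the data line
p_d1 = p_d2 = 0.05   # decoy probabilities
p_z = 0.5 * (1 - p_d1 - p_d2)

# Step 1: Alice's choices. 0 -> |0>|alpha>, 1 -> |alpha>|0>,
#         2 -> |alpha>|alpha>, 3 -> |0>|0>
choice = rng.choice(4, size=N, p=[p_z, p_z, p_d1, p_d2])

# Mean photon numbers reaching Bob in the early/late windows of each round
early = np.where((choice == 1) | (choice == 2), mu * eta, 0.0)
late = np.where((choice == 0) | (choice == 2), mu * eta, 0.0)

# Step 2: threshold detection on the data line (Poissonian photon statistics)
def click(mean):
    return rng.random(N) < 1.0 - np.exp(-mean)

click_early = click(t_B * early)
click_late = click(t_B * late)

# Step 3: keep signal rounds with an unambiguous single-window click;
# a click in the late window corresponds to logic 0, early to logic 1
kept = (click_early ^ click_late) & np.isin(choice, (0, 1))
bob_bits = np.where(click_late[kept], 0, 1)
print("sifted-key length:", kept.sum(), "of", N, "rounds")
```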
## III The key-length formula
### Security definition
Before we present the security proof in the finite-key regime, we introduce the universally composable framework of QKD [93]. Typically, the performing of a QKD protocol either generates a pair of bit strings \(\hat{S}_{A}\) and \(\hat{S}_{B}\) for Alice and Bob respectively, or aborts so \(\hat{S}_{A}=\hat{S}_{B}=\varnothing\). A secure QKD protocol must satisfy the two criteria below.
The first is the correctness criterion which is met if two bit strings are the same, i.e., \(\hat{S}_{A}=\hat{S}_{B}\). In practical experiments, however, as it is not always possible to perfectly satisfy the correctness criterion, a small degree of error is typically allowed. Instead, we require that the probability of the two bit strings not being identical does not exceed a predetermined value, denoted as \(\varepsilon_{\text{cor}}\), In this case, we say that the protocol \(\varepsilon_{\text{cor}}\)-correct.
The second is the secrecy criterion which is met if there is no correlation between the system of the eavesdropper Eve and the bit strings of Alice. We assume the orthonormal basis which consists of Alice's quantum system and corresponds to each possible bit string of Alice to be \(\{|s\rangle\}_{s}\). The secrecy criterion requires the joint quantum state of Alice and Eve to be \(\rho_{\text{AE}}=\rho_{\text{AE}}^{\text{ideal}}\equiv U_{\text{A}}\otimes \rho_{\text{E}}\), where \(U_{\text{A}}=\sum_{s}\frac{1}{|\mathcal{S}|}|s\rangle\langle s|\) is a uniform mixture which indicates that the probability of generating each possible bit string of Alice is uniformly distributed, and \(\rho_{\text{E}}\) is Eve's system, which does not correlate with Alice's system. However, it is not always possible to perfectly satisfy this criterion in practice. This means that a small deviation between the actual joint quantum state of Alice and Eve and the ideal state is permissible. Trace distance measures the difference and we say the protocol is \(\varepsilon_{\text{sec}}\)-secret if the trace distance between the actual joint quantum state \(\rho_{\text{AE}}\) and the ideal state \(\rho_{\text{AE}}^{\text{ideal}}\) does not exceed \(\delta\), i.e.,
\[\frac{1}{2}\|\rho_{\text{AE}}-\rho_{\text{AE}}^{\text{ideal}}\|_{1}\leq\delta, \tag{1}\]
\(\text{and}(1-p_{\text{abort}})\delta\leq\varepsilon_{\text{sec}}\), where \(p_{\text{abort}}\) is the probability for aborting this protocol and \(\|\cdot\|_{1}\) denotes the trace norm.
Finally, a protocol is \(\varepsilon_{\text{s}}\)-secure if it is both \(\varepsilon_{\text{cor}}\)-correct and \(\varepsilon_{\text{sec}}\)-secret with \(\varepsilon_{\text{cor}}+\varepsilon_{\text{sec}}\leq\varepsilon_{\text{s}}\).
### Security proof
Here, a virtual entanglement-based protocol is introduced to obtain the secure key rate in the finite-key regime, which is based on the virtual entanglement-based protocol in Ref. [92]. To simplify the presentation, we ignore the label \(k\) and express the state sent in the \(k\)-th round as \(\ket{0_{z}}\) and \(\ket{1_{z}}\). Let \(\ket{0_{x}}=(\ket{0_{z}}+\ket{1_{z}})/\sqrt{N^{+}}\) and \(\ket{1_{x}}=(\ket{0_{z}}-\ket{1_{z}})/\sqrt{N^{-}}\) be the logic bits \(0\) and \(1\) in the X basis, where \(N^{\pm}=2(1\pm e^{-\mu})\) are the normalization factors. In the virtual entanglement-based protocol, Alice prepares \(K\) pairs of the entangled state
\[\begin{split}\ket{\phi}&=\frac{1}{\sqrt{2}}(\ket{+ z}_{A}\ket{0_{z}}_{A^{\prime}}+\ket{-z}_{A}\ket{1_{z}}_{A^{\prime}})\\ &=\frac{\sqrt{N^{+}}}{2}\ket{+x}_{A}\ket{0_{x}}_{A^{\prime}}+ \frac{\sqrt{N^{-}}}{2}\ket{-x}_{A}\ket{1_{x}}_{A^{\prime}},\end{split} \tag{2}\]
where \(\ket{\pm x}\) and \(\ket{\pm z}\) are the eigenstates of the Pauli matrices \(X\) and \(Z\) respectively and subscript \(A\) and \(A^{\prime}\) denote different quantum systems possessed by Alice. Then Alice measures the qubits in the system \(A\) randomly in the Pauli \(X\) or \(Z\) basis to obtain the raw key \(\hat{X}_{A}\) from the \(X\) basis and \(\hat{Z}_{A}\) from the \(Z\) basis. Bob's experimental implementation is the same as the practical COW-QKD. He obtains his raw key \(\hat{Z}_{B}\) of the \(Z\) basis on the data line by measuring the click time just like the original protocol. He also records a bit value \(0(1)\) in the \(X\) basis when detector \(D_{M_{0}}(D_{M_{1}})\) on the monitoring line clicks to obtain the raw key \(\hat{X}_{B}\). \(\hat{Z}_{A}\) and \(\hat{Z}_{B}\) are used to extract the final key so the error-correction step and error-verification step are performed to them. If these steps succeed, Alice and Bob obtain the same bit string which we denoted as \(\hat{Z}\). All the information that the eavesdropper Eve possesses up to the error-correction step and error-verification step is denoted as \(E^{\prime}\). The smooth min-entropy \(H^{e}_{\text{min}}(\hat{Z}|E^{\prime})\) characterizes the mean probability that Eve can guess \(\hat{Z}\) successfully with all information she owns using the optimal strategy [99]. The smooth max-entropy \(H^{e}_{\text{max}}(\hat{Z}_{A}|\hat{Z}_{B})\) quantifies the number of bits required to reconstruct \(\hat{Z}_{A}\) from \(\hat{Z}_{B}\)[100].
According to the quantum leftover hash lemma [94], a \(\Delta\)-secret key of length \(l\) can be extracted from \(\hat{Z}\) by applying a random universal\({}_{2}\) hash function to \(\hat{Z}\), where the parameter \(\Delta\) satisfies
\[\Delta=2\varepsilon+\frac{1}{2}\sqrt{2^{l-H^{e}_{\text{min}}(\hat{Z}|E^{\prime })}}. \tag{3}\]
Let \(\varepsilon_{0}=\frac{1}{2}\sqrt{2^{l-H^{e}_{\text{min}}(\hat{Z}|E^{\prime})}}\), the length \(l\) of the secret key is [101]
\[l=H^{e}_{\text{min}}(\hat{Z}|E^{\prime})-2\log_{2}\left(\frac{1}{2\varepsilon_{ 0}}\right). \tag{4}\]
A chain-rule inequality for these smooth entropies is used to describe the error-correction step and error
verification step. That is
\[\begin{split} H^{\varepsilon}_{\text{min}}(\hat{Z}|E^{\prime})& \geq H^{\varepsilon}_{\text{min}}(\hat{Z}_{A}|E)-H^{\varepsilon}_{ \text{max}}(\hat{Z}_{A}|\hat{Z}_{B})\\ &=H^{\varepsilon}_{\text{min}}(\hat{Z}_{A}|E)-\text{leak}_{\text{ EC}}-\log_{2}\left(\frac{2}{\varepsilon_{\text{cor}}}\right),\end{split} \tag{5}\]
where \(\text{leak}_{\text{EC}}\) and \(\log_{2}(\frac{2}{\varepsilon_{\text{cor}}})\) are the numbers of bits that are revealed during the error-correction and error-verification procedures, respectively, to generate an \(\varepsilon_{\text{cor}}\)-correct key, and \(E\) denotes all the information that Eve possesses before the error-correction and error-verification steps. The lower bound on the smooth min-entropy can be obtained by the entropic uncertainty relation [95]. We denote the binary Shannon entropy as \(h(x)=-x\log_{2}x-(1-x)\log_{2}(1-x)\). Let \(\hat{X}^{\prime}_{A}\) and \(\hat{X}^{\prime}_{B}\) be the bit strings that Alice and Bob would have obtained had Alice measured in the \(X\) basis the qubits that are actually measured in the \(Z\) basis in the virtual protocol. Then we have \(H^{\varepsilon}_{\text{max}}(\hat{X}^{\prime}_{A}|\hat{X}^{\prime}_{B})\leq n_{z}h(E_{x})\), where \(n_{z}\) is the size of \(\hat{Z}_{A}\) and \(E_{x}\) is the bit error rate in the \(X\) basis. By exploiting the entropic uncertainty relation we have
\[H^{\varepsilon}_{\text{min}}(\hat{Z}_{A}|E)\geq n_{z}-H^{\varepsilon}_{\text{ max}}(\hat{X}^{\prime}_{A}|\hat{X}^{\prime}_{B})\geq n_{z}[1-h(E_{x})], \tag{6}\]
and the final key length is
\[l\geq n_{z}[1-h(E_{x})]-\text{leak}_{\text{EC}}-\log_{2}\left(\frac{2}{ \varepsilon_{\text{cor}}}\right)-2\log_{2}\left(\frac{1}{2\varepsilon_{0}} \right). \tag{7}\]
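A minimal numerical sketch of Eq. (7) is given below. The parameter values are hypothetical, and the form \(\text{leak}_{\text{EC}}=fn_{z}h(E_{z})\) anticipates the error-correction model stated in Sec. IV.

```python
import numpy as np

def binary_entropy(x):
    x = np.clip(x, 1e-12, 1 - 1e-12)   # guard the endpoints
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def key_length(n_z, E_x, leak_EC, eps_cor, eps0):
    # Eq. (7): secret-key length after error correction and privacy amplification
    return (n_z * (1 - binary_entropy(E_x)) - leak_EC
            - np.log2(2 / eps_cor) - 2 * np.log2(1 / (2 * eps0)))

# Hypothetical example values, for illustration only
n_z = 10**8
leak = 1.1 * n_z * binary_entropy(0.01)   # f = 1.1, E_z = 1%
print(key_length(n_z, E_x=0.03, leak_EC=leak, eps_cor=1e-15, eps0=1e-11))
```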
### Phase error rate
The phase error rate formula is derived by the same method as in Ref. [92]. For the completeness of this work, a brief deduction is presented here. We consider a prepare-and-measure protocol which is equivalent to the entanglement-based protocol in Sec. III.2. In this protocol, when Alice prepares her optical signals, she randomly chooses the Z or X basis. If she chooses the Z basis, she prepares the states \(\ket{0_{z}}\) and \(\ket{1_{z}}\) with the same probability. If she chooses the X basis, she prepares the states \(\ket{0_{x}}\) and \(\ket{1_{x}}\) with probability \(\frac{N^{+}}{4}\) and \(\frac{N^{-}}{4}\), respectively. She sends her states to Bob, and Bob uses the same implementation as in the practical protocol to measure these states in the \(Z\) basis (data line) or in the \(X\) basis (monitoring line), distributed by a beam splitter.
It is obvious that the density matrices of the X and Z basis are the same. That is
\[\begin{split}\rho&=(\ket{0_{z}}\!\!\bra{0_{z}}+\ket{ 1_{z}}\!\!\bra{1_{z}})/2\\ &=(N^{+}\ket{0_{x}}\!\!\bra{0_{x}}+N^{-}\ket{1_{x}}\!\!\bra{1_{x} })/4.\end{split} \tag{8}\]
Therefore the bit error rate of the X basis can be obtained as follows:
\[\begin{split} E_{x}&=\frac{N^{+}Q_{0_{x}}^{M_{1}}+N^{-}Q_{1_{x}}^{M_{0}}}{N^{+}(Q_{0_{x}}^{M_{0}}+Q_{0_{x}}^{M_{1}})+N^{-}(Q_{1_{x}}^{M_{0}}+Q_{1_{x}}^{M_{1}})}\\ &=\frac{N^{+}(Q_{0_{x}}^{M_{1}}-Q_{0_{x}}^{M_{0}})+2(Q_{0_{z}}^{M_{0}}+Q_{1_{z}}^{M_{0}})}{2(Q_{0_{z}}^{M_{0}}+Q_{0_{z}}^{M_{1}}+Q_{1_{z}}^{M_{0}}+Q_{1_{z}}^{M_{1}})},\end{split} \tag{9}\]
which is equal to the bit error rate in the virtual entanglement-based protocol, where \(Q_{s_{x(z)}}^{M_{i}}\) refers to the gain of the event in which Alice prepares the state \(\ket{s_{x(z)}}\) \((s=0,1)\) and Bob gets a click with detector \(D_{M_{i}}\) \((i=0,1)\) on the monitoring line. The relation \(\frac{N^{-}}{4}Q_{1_{x}}^{M_{i}}+\frac{N^{+}}{4}Q_{0_{x}}^{M_{i}}=\frac{1}{2}(Q_{1_{z}}^{M_{i}}+Q_{0_{z}}^{M_{i}})\), which can be obtained from Eq. (8), is used in the second equality. Because the density matrices of the Z basis and the X basis are the same, the eavesdropper Eve cannot distinguish whether the prepare-and-measure protocol or the practical COW-QKD protocol is actually performed by Alice and Bob. The phase error rate in the practical COW-QKD protocol is therefore equal to the bit error rate of the X basis in the prepare-and-measure protocol.
In the practical COW-QKD protocol, the states \(\ket{0_{x}}\) and \(\ket{1_{x}}\) are not sent, so we cannot calculate \(Q_{0_{x}}^{M_{1}}\) and \(Q_{0_{x}}^{M_{0}}\) directly. The decoy states \(\ket{\alpha}_{2k-1}\ket{\alpha}_{2k}\) and \(\ket{0}_{2k-1}\ket{0}_{2k}\) are used to estimate \(\overline{Q_{0_{x}}^{M_{1}}}\) and \(\underline{Q_{0_{x}}^{M_{0}}}\), where \(\overline{O}\) and \(\underline{O}\) are the upper and lower bounds on a value \(O\), respectively. The expressions are
\[\begin{split}\overline{Q_{0x}^{M_{1}}}&=\frac{1}{N^{+ }}\left(e^{\frac{\mu}{2}}\sqrt{Q_{\alpha\alpha}^{M_{1}}}+e^{-\frac{\mu}{2}} \sqrt{Q_{00}^{M_{1}}}\right)^{2}\\ &+\frac{N^{-}}{N^{+}}\left(\frac{e^{\mu}N^{-}}{4}+e^{\mu}\sqrt{Q_{ \alpha\alpha}^{M_{1}}}+\sqrt{Q_{00}^{M_{1}}}\right)\end{split} \tag{10}\]
and
\[\begin{split}\underline{Q_{0x}^{M_{0}}}&=\frac{1}{N^{+}}\left(e^{\mu}Q_{\alpha\alpha}^{M_{0}}+e^{-\mu}Q_{00}^{M_{0}}-2\sqrt{Q_{00}^{M_{0}}Q_{\alpha\alpha}^{M_{0}}}\right)\\ &-\frac{N^{-}}{N^{+}}\left(e^{\mu}\sqrt{Q_{\alpha\alpha}^{M_{0}}}+\sqrt{Q_{00}^{M_{0}}}\right),\end{split} \tag{11}\]
where \(Q_{w}^{M_{i}}\) \((w=\alpha\alpha,00)\) denotes the gain of the event in which Alice sends the state \(\ket{\alpha}\ket{\alpha}\) or \(\ket{0}\ket{0}\), respectively (we also omit the subscripts of the states \(\ket{\alpha}_{2k-1}\ket{\alpha}_{2k}\) and \(\ket{0}_{2k-1}\ket{0}_{2k}\)), and Bob gets a click with detector \(D_{M_{i}}\) \((i=0,1)\). The details of how to obtain Eq. (10) and Eq. (11) can be found in Ref. [92].
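The decoy-state bounds of Eqs. (10)–(11) translate directly into code. The following is a minimal sketch; the function and argument names are our own illustrative choices.

```python
import numpy as np

def x_basis_gain_bounds(Q_aa_M1, Q_00_M1, Q_aa_M0, Q_00_M0, mu):
    """Bounds of Eqs. (10)-(11) from the decoy-state gains."""
    Np = 2.0 * (1.0 + np.exp(-mu))   # N^+
    Nm = 2.0 * (1.0 - np.exp(-mu))   # N^-
    # Eq. (10): upper bound on the gain of |0_x> at detector M_1
    Q0x_M1_up = ((np.exp(mu / 2) * np.sqrt(Q_aa_M1)
                  + np.exp(-mu / 2) * np.sqrt(Q_00_M1)) ** 2 / Np
                 + (Nm / Np) * (np.exp(mu) * Nm / 4.0
                                + np.exp(mu) * np.sqrt(Q_aa_M1)
                                + np.sqrt(Q_00_M1)))
    # Eq. (11): lower bound on the gain of |0_x> at detector M_0
    Q0x_M0_lo = ((np.exp(mu) * Q_aa_M0 + np.exp(-mu) * Q_00_M0
                  - 2.0 * np.sqrt(Q_00_M0 * Q_aa_M0)) / Np
                 - (Nm / Np) * (np.exp(mu) * np.sqrt(Q_aa_M0)
                                + np.sqrt(Q_00_M0)))
    return Q0x_M1_up, Q0x_M0_lo
```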
### Statistical fluctuations
However, in the finite-key regime, only in the expected-value case can we consider the bit error rate in the X basis to be equal to the phase error rate in the Z basis [32]. Eq. (10) and Eq. (11) hold only for the expected values as well. So we should consider the statistical fluctuations between observed values and expected values.
Typically, Azuma's inequality [98] is applied to convert observed values to the upper or lower bound on corresponding expected values and vice versa. As shown in Ref. [74], it can be concluded that a loose bound will be obtained when using Azuma's inequality to estimate the statistical fluctuations of events that occur with a very small probability. Specifically, the estimation of the gains
of decoy states is loose when using Azuma's inequality. Instead, we use a novel concentration inequality named Kato's inequality [97] to make our estimation tighter so that a higher key rate can be obtained. Here we introduce the general form of Kato's inequality, which has been employed in some finite-key analyses [102, 74]. Let \(n_{1},n_{2},\cdots,n_{k}\) be a sequence of random variables which satisfies \(0\leq n_{i}\leq 1\) \((i=1,2,\cdots,k)\). Let \(\Gamma_{i}=\sum_{u=1}^{i}n_{u}\) and let \(f_{i}\) be the \(\sigma\)-algebra generated by \(\{n_{1},n_{2},\cdots,n_{i}\}\), which defines the natural filtration of this sequence of random variables. For any \(k,a\in\mathbb{R}\) and any \(b\) s.t. \(b\geq\left|a\right|\), according to Kato's inequality we have that
\[\begin{split} Pr&\left[\sum_{u=1}^{k}E(n_{u}|f_{u- 1})-\Gamma_{k}\geq\left[b+a\left(\frac{2\Gamma_{k}}{k}-1\right)\right]\sqrt{k }\right]\\ &\leq\exp\left[\frac{-2(b^{2}-a^{2})}{(1+\frac{4a}{3\sqrt{k}})^{ 2}}\right].\end{split} \tag{12}\]
where \(E(\cdot)\) refers to the expected value. Another form of Kato's inequality can be derived by replacing \(n_{i}\to 1-n_{i}\) and \(a\rightarrow-a\), which is
\[\begin{split} Pr&\left[\Gamma_{k}-\sum_{u=1}^{k}E(n _{u}|f_{u-1})\geq\left[b+a\left(\frac{2\Gamma_{k}}{k}-1\right)\right]\sqrt{k }\right]\\ &\leq\exp\left[\frac{-2(b^{2}-a^{2})}{(1-\frac{4a}{3\sqrt{k}})^{ 2}}\right].\end{split} \tag{13}\]
The details of how to use Kato's inequality to accomplish parameter estimation tasks are shown in Appendix. In the description below, we let \(O^{*}\) be the expected value of \(O\).
After performing the COW-QKD protocol, the observed values \(n_{\alpha\alpha}^{M_{i}}\) and \(n_{00}^{M_{i}}\) \((i=0,1)\) are obtained, which stand for the total click counts of detector \(M_{i}\) when the state \(\left|\alpha\right\rangle\left|\alpha\right\rangle\) or \(\left|0\right\rangle\left|0\right\rangle\) is sent, respectively. \(Q_{s_{z}}^{M_{i}}\) \((s=0,1)\) and the click count of detector \(D_{T}\), which we express as \(n_{z}\), can be directly calculated as well. Firstly, we use these four observed values to estimate the upper bounds on the corresponding expected values by Kato's inequality as follows:
\[n_{w}^{M_{i}*}\leq\overline{n_{w}^{M_{i}*}}=n_{w}^{M_{i}}+\Delta_{w}^{M_{i}}, \tag{14}\]
where \(w=00,\alpha\alpha\) and \(i=0,1\). The statistical fluctuation parameters here are obtained in the way presented in Appendix. Similarly, two lower bounds \(\underline{n_{w}^{M_{0}*}}\)\((w=00,\alpha\alpha)\) need to be calculated as follows:
\[n_{w}^{M_{0}*}\geq\underline{n_{w}^{M_{0}*}}=n_{w}^{M_{0}}-\Delta_{w}^{M_{0} \prime}. \tag{15}\]
We set the failure probability for estimating each of the six bounds above to be \(\varepsilon_{1}\). The total number of rounds performed is set to be \(N\). So we can denote the number of state \(\left|\alpha\right\rangle\left|\alpha\right\rangle\) sent by Alice as \(N_{\alpha\alpha}=N\times p_{d_{1}}\) and the number of state \(\left|0\right\rangle\left|0\right\rangle\) as \(N_{00}=N\times p_{d_{2}}\). We calculate the upper and lower bounds on gains of each event as follows:
\[\overline{Q_{w}^{M_{i}*}}=\overline{n_{w}^{M_{i}*}}/N_{w}, \tag{16}\]
\[\underline{Q_{w}^{M_{0}*}}=\underline{n_{w}^{M_{0}*}}/N_{w}. \tag{17}\]
Then by applying Eq. (10) and Eq. (11) in the expected case, we obtain the expected values as follows:
\[\begin{split}\overline{Q_{0x}^{M_{1}*}}&=\frac{1}{N^{+}}\left(e^{\frac{\mu}{2}}\sqrt{\overline{Q_{\alpha\alpha}^{M_{1}*}}}+e^{-\frac{\mu}{2}}\sqrt{\overline{Q_{00}^{M_{1}*}}}\right)^{2}\\ &+\frac{N^{-}}{N^{+}}\left(\frac{e^{\mu}N^{-}}{4}+e^{\mu}\sqrt{\overline{Q_{\alpha\alpha}^{M_{1}*}}}+\sqrt{\overline{Q_{00}^{M_{1}*}}}\right),\end{split} \tag{18}\]
\[\begin{split}\underline{Q_{0x}^{M_{0}*}}&=\frac{1}{N^{+}}\left(e^{\mu}\underline{Q_{\alpha\alpha}^{M_{0}*}}+e^{-\mu}\underline{Q_{00}^{M_{0}*}}-2\sqrt{\overline{Q_{00}^{M_{0}*}}\,\overline{Q_{\alpha\alpha}^{M_{0}*}}}\right)\\ &-\frac{N^{-}}{N^{+}}\left(e^{\mu}\sqrt{\overline{Q_{\alpha\alpha}^{M_{0}*}}}+\sqrt{\overline{Q_{00}^{M_{0}*}}}\right).\end{split} \tag{19}\]
Finally, by applying Eq. (9) and considering the bit error rate in the X basis to be equal to the phase error rate in the Z basis in the expected value case [32], the expected upper bound on the phase error rate is
\[\overline{E_{p}^{*}}=\overline{E_{x}^{*}}=\frac{N^{+}(\overline{Q_{0x}^{M_{1}*} }-\underline{Q_{0x}^{M_{0}*}})+2(Q_{0_{z}}^{M_{0}}+Q_{1_{z}}^{M_{0}})}{2(Q_{0_{z }}^{M_{0}}+Q_{0_{z}}^{M_{1}}+Q_{1_{z}}^{M_{0}}+Q_{1_{z}}^{M_{1}})}. \tag{20}\]
We use Kato's inequality again, but in a different form which is explained in the Appendix, to calculate the upper bound on the phase error rate in the observed-value case. The expected number of clicks corresponding to phase errors is \(\overline{n_{p}^{*}}=n_{z}\times\overline{E_{p}^{*}}\). So the upper bound on the observed value is
\[n_{p}\leq\overline{n_{p}}=\overline{n_{p}^{*}}+\Delta_{p}, \tag{21}\]
where \(\Delta_{p}=\sqrt{\frac{1}{2}n_{z}\ln\varepsilon_{2}^{-1}}\) and \(\varepsilon_{2}\) is the failure probability for estimating \(\overline{n_{p}}\). So the upper bound on the phase error rate is
\[\overline{E_{p}}=\overline{n_{p}}/n_{z}. \tag{22}\]
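The chain from Eqs. (18)–(19) to Eq. (22) can be summarized in a short routine. The following is a minimal sketch, assuming the expected-gain bounds have already been computed (e.g., with the helper after Eq. (11)) and using the normalization \(\overline{n_{p}^{*}}=n_{z}\overline{E_{p}^{*}}\) discussed above; all names are our own illustrative choices.

```python
import numpy as np

def phase_error_upper(Q0x_M1_up, Q0x_M0_lo,
                      Q0z_M0, Q0z_M1, Q1z_M0, Q1z_M1,
                      mu, n_z, eps2):
    """Upper bound on the phase error rate, chaining Eqs. (20)-(22)."""
    Np = 2.0 * (1.0 + np.exp(-mu))
    # Eq. (20): expected upper bound on the phase error rate
    Ep_star = (Np * (Q0x_M1_up - Q0x_M0_lo) + 2 * (Q0z_M0 + Q1z_M0)) \
        / (2 * (Q0z_M0 + Q0z_M1 + Q1z_M0 + Q1z_M1))
    # Eqs. (21)-(22): expected-to-observed conversion with a = 0 in Kato's
    # inequality, i.e. Delta_p = sqrt(n_z * ln(1/eps2) / 2)
    n_p_star = n_z * Ep_star
    delta_p = np.sqrt(0.5 * n_z * np.log(1.0 / eps2))
    return (n_p_star + delta_p) / n_z
```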
### Composable security
Considering the failure probabilities for estimating the statistical fluctuations described in Sec. III.4, the practical protocol has a total secrecy of \(\varepsilon_{\text{sec}}=2\varepsilon+\varepsilon_{0}+6\varepsilon_{1}+\varepsilon_{2}\), where we take \(\varepsilon=\varepsilon_{0}=\varepsilon_{1}=\varepsilon_{2}=\varepsilon_{\text{sec}}/10\). So the final key length satisfies
\[l\geq n_{z}[1-h(\overline{E_{p}})]-\text{leak}_{\text{EC}}-\log_{2}\left(\frac{2}{ \varepsilon_{\text{cor}}}\right)-2\log_{2}\left(\frac{5}{\varepsilon_{\text{ sec}}}\right), \tag{23}\]
and the COW-QKD protocol in this paper is \(\varepsilon_{\text{s}}\)-secure, where \(\varepsilon_{\text{s}}=\varepsilon_{\text{cor}}+\varepsilon_{\text{sec}}\).
## IV Numerical simulation and discussion
To numerically simulate the key rate performance of our protocol in the finite-key regime, we assume that the dark-count rate is \(p_{d}=2\times 10^{-8}\) and the efficiency of the photon detectors is \(\eta_{d}=70\%\). The number of bits that are revealed in the error-correction step \(\text{leak}_{\text{EC}}\) is \(fn_{z}h(E_{z})\), where the correction efficiency \(f\) is set to \(1.1\). The transmittance of the optical fiber with length \(L\) is expressed by \(\eta=10^{-0.016L}\). For finite-key analysis, the security bounds of correctness and secrecy are fixed to \(\varepsilon_{\text{cor}}=10^{-15}\) and \(\varepsilon_{\text{sec}}=10^{-10}\). Other experimental parameters such as the mean photon number \(\mu=|\alpha|^{2}\) and the transmittance of the beam-splitter used to distribute incoming states are decided by an optimization algorithm.
We present the performance of the key rate of COW-QKD with different total numbers of rounds, compared with the key rate in the infinite-key limit [92], where the misalignment error rate is fixed to \(e_{d}=1\%\). As shown in Fig. 2, we can conclude that if the state-sending step of our protocol is repeated for \(N=10^{11}\) rounds, the key rate is close to that of the infinite-key limit, showing the practicality of our protocol in the finite-key regime. When choosing \(N=10^{11}\), a 3 Mbit key can be obtained through 34 km of fiber by Alice and Bob if they run our protocol with a laser operating at 1 GHz for only 30 seconds, which demonstrates the superiority of this protocol in short-distance communication. The results also demonstrate that our security analysis guarantees an unconditionally secure communication range exceeding 100 km for COW-QKD, given its straightforward experimental setup.
We demonstrate the flexibility of our protocol by presenting its key rate performance under different conditions. Firstly, the beam splitter used to passively distribute optical pulses into the data or monitoring lines can be replaced by an optical switch that actively divides incoming states into different lines. This is referred to as passive and active basis choice, respectively. A comparison of the key rate between passive and active basis choice is presented in Fig. 3, along with the estimated upper bound on the phase error rate \(\overline{E_{p}}\) when using an active basis, demonstrating the applicability of our analysis with an active basis. Our protocol also exhibits robustness when faced with varying values of misalignment error rate, as shown in Fig. 4. The results show that even with a large misalignment error, the key rates are not significantly affected. This indicates the practicality of our protocol in constructing quantum communication systems under different experimental conditions.
Compared with other security analyses of several variants of COW-QKD, our protocol has notable advantages because its key rate performance is better and its security proof can be extended to the finite-key regime with immunity against coherent attacks for realistic implementation. As shown in Ref. [92], the asymptotic key rate of our protocol is remarkably higher than that of Ref. [86; 89], maintaining the original simple setting and security against coherent attacks. In Ref. [91], a novel method of calculating the lower bound on the secure key rate is proposed, obtaining a variant of COW-QKD whose key rate performance is dramatically improved. However, the lack of analysis in the finite-key regime and immunity against coherent attacks is a critical hindrance when bringing this theoretical protocol into reality. Since our security proof provides the analytical formulas of the key rate, the extension of analysing performance with finite key length can be completed as presented in Sec. III. With the help of Kato's inequality, our analysis gives a tight bound on the key rate and guarantees security against coherent attacks, establishing foundations for further practical applications.
Additionally, we compare our protocol with DPS-QKD [71], whose equipment requirements are similar to that of COW-QKD. The finite-key security analysis of it in Ref. [74] shows a tighter bound on phase error rate can be obtained by Kato's inequality. To simulate under the same experimental conditions, we follow the choice in Ref. [74] and fix the security bounds of both correctness and secrecy to \(2^{-28}\) to get a total secrecy of \(2^{-27}\approx 10^{-8.1}\). The misalignment error \(e_{d}\) is set to \(0.01\) and the dark-count rate is \(0\) so we can ensure the bit error rate is \(1\%\) as chosen in Ref. [74]. The correction efficiency \(f\) is \(1.16\), and we have simulated the key rate under varying values of overall channel transmittance, taking into account both optical fiber loss and detection efficiency. We note that in our COW-QKD protocol, Alice sends two pulses in each round, whereas the DPS-QKD protocol requires three pulses. For comparison, we need to
ensure that the total number of pulses \(N_{\text{pulse}}\) is the same for both protocols. We compare the key rates of the two protocols when \(N_{\text{pulse}}\) is set to \(3\times 10^{11}\) and \(3\times 10^{12}\), so that one of the results for DPS-QKD is consistent with that reported in Ref. [74]. The simulation is shown in Fig. 5. The results reveal that the key rates of our protocol are significantly higher than those reported in Ref. [74].

Figure 2: Secret key rate with different values of the total number of rounds, \(N=10^{9},10^{10}\) and \(10^{11}\), using passive basis choice. The misalignment error \(e_{d}\) is set to \(1\%\). When \(N=10^{11}\), which is reasonable in experimental implementation, the key rate is quite close to the performance in the asymptotic case. The key rate performance also shows that the security of our protocol ensures a secure distance exceeding 100 km for COW-QKD in practical implementation.
## V Conclusion
In this work, we present a finite-key analysis for the COW-QKD protocol proposed in Ref. [92]. We apply the quantum leftover hash lemma and the entropic uncertainty relation to derive an analytic formula for the key length. When dealing with correlated random variables, we use Kato's inequality to ensure security against coherent attacks and achieve a higher key rate. By considering the failure probabilities for estimating statistical fluctuations between observed and expected values, we complete the security proof within the universally composable framework. Our finite-key analysis shows that the key transmission distance can exceed 100 km in specific cases, providing a feasible approach for the secure implementation of quantum communication processes. In short-distance communication, the numerical simulation in Fig. 2 has shown that our protocol can generate a 3 Mbit secret key over 34 km of fiber by running this protocol for only 30 seconds with a photon source operating at a 1 GHz repetition rate. We also present numerical simulations of key rates under different conditions, demonstrating the practicality and flexibility of our protocol. Furthermore, compared to the finite-key
analysis of DPS-QKD in Ref. [74], our protocol obtains a significantly higher key rate with almost the same experimental setup. In conclusion, our protocol lays the theoretical foundation for applying COW-QKD in real-world scenarios by offering both a high key rate and unconditional security against coherent attacks, completing the security proof for this protocol. Our protocol may be employed in future quantum communication with compact devices such as chips, and in quantum information networks, owing to its simple experimental setup and excellent key rate performance.

Figure 3: Comparison of key rates using passive basis and active basis choice when \(N=10^{9}\) and \(N=10^{11}\). The upper bound on the phase error rate \(\overline{E_{p}}\) using active basis choice is presented by the dashed red line. The misalignment error \(e_{d}\) is set to 1%. (a) The total number of rounds is \(N=10^{9}\); (b) the total number of rounds is \(N=10^{11}\).

Figure 4: Secret key rate simulation in the finite-key case with different values of misalignment error when using passive basis choice. When a large misalignment error is employed, the key rates do not drop significantly, which is an important advantage in constructing quantum information systems. (a) The total number of rounds is \(N=10^{9}\); (b) the total number of rounds is \(N=10^{11}\).
###### Acknowledgements.
This work is supported by the National Natural Science Foundation of China (No.12274223), the Natural Science Foundation of Jiangsu Province (No. BK20211145), the Fundamental Research Funds for the Central Universities (No.020414380182), and the Program for Innovative Talents and Entrepreneurs in Jiangsu (No.JSSCRC2021484).
## Appendix A Kato's inequality
Kato's inequality [97] is used to deal with the correlated random variables in this work when estimating parameters. Here we introduce how to use Kato's inequality to complete the estimation in the main text.
We can use Eq. (12) to estimate the upper bounds on expected values from the corresponding observed values. In the estimation of QKD protocols, the random variables \(n_{i},i=1,2,\cdots,k\), which indicate whether the detector clicks in the \(i\)-th round, are Bernoulli random variables. If the detector clicks in the \(u\)-th round, \(n_{u}=1\), and if it does not click, \(n_{u}=0\). So we have \(E(n_{u}|f_{u-1})=\Pr(n_{u}=1|f_{u-1})\). \(\Gamma_{k}\) is an observed value that denotes the total number of detector clicks during \(k\) rounds.
To get the tightest bound, one should choose the optimal values for \(a\) and \(b\) to minimize the deviation \(\left[b+a\left(\frac{2\Gamma_{k}}{k}-1\right)\right]\sqrt{k}\) by solving an optimization problem. To demonstrate, we let \(\varepsilon_{a}\) be the failure probability for estimating the upper bounds, i.e. \(\exp\left[\frac{-2(b^{2}-a^{2})}{(1+\frac{4a}{3\sqrt{k}})^{2}}\right]=\varepsilon_{a}\), and the optimization problem is \(\min_{a,b\geq|a|}\left[b+a\left(\frac{2\Gamma_{k}}{k}-1\right)\right]\sqrt{k}\).
This is solved by Ref. [102] and the solutions are
\[\begin{split} a_{1}&=a_{1}(\Gamma_{k},k,\varepsilon_{a})\\ &=\frac{3\left(72\sqrt{k}\Gamma_{k}(k-\Gamma_{k})\ln\varepsilon_{a}-16k^{3/2}\ln^{2}\varepsilon_{a}+9\sqrt{2}(k-2\Gamma_{k})\sqrt{-k^{2}\ln\varepsilon_{a}(9\Gamma_{k}(k-\Gamma_{k})-2k\ln\varepsilon_{a})}\right)}{4(9k-8\ln\varepsilon_{a})(9\Gamma_{k}(k-\Gamma_{k})-2k\ln\varepsilon_{a})},\end{split} \tag{A.1}\]
\[b_{1}=b_{1}(a_{1},k,\varepsilon_{a})=\frac{\sqrt{18a_{1}^{2}k-(16a_{1}^{2}+24a_{1}\sqrt{k}+9k)\ln\varepsilon_{a}}}{3\sqrt{2k}}. \tag{A.2}\]
With the fixed values \(a_{1}\) and \(b_{1}\), we get the upper bound on expected value as follows according to Eq. (12):
\[\Gamma_{k}^{*}\leq\overline{\Gamma_{k}^{*}}=\Gamma_{k}+\Delta_{1}(a_{1},b_{1},k,\Gamma_{k}), \tag{A.3}\]
where \(\Delta_{1}(a_{1},b_{1},k,\Gamma_{k})=\left[b_{1}+a_{1}\left(\frac{2\Gamma_{k}} {k}-1\right)\right]\sqrt{k}\) and we use the expected value \(\Gamma_{k}^{*}\) to denote \(\sum_{u=1}^{k}E(n_{u}|f_{u-1})\).
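A direct transcription of Eqs. (A.1)–(A.3) into code reads as follows. This is a minimal sketch with hypothetical example values; the function name is our own choice.

```python
import numpy as np

def kato_upper_deviation(Gamma_k, k, eps_a):
    """Optimized deviation term Delta_1 of Eqs. (A.1)-(A.3) (upper bound)."""
    L = np.log(eps_a)                      # ln(eps_a) < 0
    G = Gamma_k
    inner = -k**2 * L * (9 * G * (k - G) - 2 * k * L)   # argument of the inner sqrt
    a1 = 3 * (72 * np.sqrt(k) * G * (k - G) * L - 16 * k**1.5 * L**2
              + 9 * np.sqrt(2) * (k - 2 * G) * np.sqrt(inner)) \
        / (4 * (9 * k - 8 * L) * (9 * G * (k - G) - 2 * k * L))
    b1 = np.sqrt(18 * a1**2 * k
                 - (16 * a1**2 + 24 * a1 * np.sqrt(k) + 9 * k) * L) \
        / (3 * np.sqrt(2 * k))
    return (b1 + a1 * (2 * G / k - 1)) * np.sqrt(k)

# Hypothetical example: 10^4 clicks out of 10^8 rounds, failure probability 1e-10
print(kato_upper_deviation(1e4, 1e8, 1e-10))
```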
Figure 5: Comparison of key rates of our protocol and the DPS-QKD protocol in Ref. [74]. The two protocols have similar experimental settings and both belong to the distributed-phase-reference QKD protocols. The bit error rate is 1% and the correction efficiency \(f\) is 1.16. For convenience, the security bounds of both correctness and secrecy are fixed to \(2^{-28}\) as in Ref. [74]. We compare the key rates when the numbers of pulses are \(N_{pulse}=3\times 10^{11}\) and \(N_{pulse}=3\times 10^{12}\) and conclude that our protocol has advantages in the secure transmission distance and key rate performance.

Similarly, Eq. (13) can be applied to estimate the lower bound on expected values, where we need to solve another optimization problem. That is
\[\begin{split} a_{2}&=a_{2}(\Gamma_{k},k,\varepsilon_{a})\\ &=-\frac{3\left(72\sqrt{k}\Gamma_{k}(k-\Gamma_{k})\ln\varepsilon_{a} -16k^{3/2}\ln^{2}\varepsilon_{a}-9\sqrt{2}(k-2\Gamma_{k})\sqrt{-k^{2}\ln \varepsilon_{a}(9\Gamma_{k}(k-\Gamma_{k})-2k\ln\varepsilon_{a})}\right)}{4(9k -8\ln\varepsilon_{a})(9\Gamma_{k}(k-\Gamma_{k})-2k\ln\varepsilon_{a})},\end{split}\] (A.4)
\[b_{2}=b_{2}(a_{2},k,\varepsilon_{a})=\frac{\sqrt{18a_{2}^{2}k-(16a_{2}^{2}-24a _{2}\sqrt{k}+9k)\ln\varepsilon_{a}}}{3\sqrt{2k}},\] (A.5)
and we get the lower bound
\[\Gamma_{k}^{*}\geq\underline{\Gamma_{k}^{*}}=\Gamma_{k}-\Delta_{2}(a_{2},b_{ 2},k,\Gamma_{k}),\] (A.6)
where \(\Delta_{2}(a_{2},b_{2},k,\Gamma_{k})=\left[b_{2}+a_{2}\left(\frac{2\Gamma_{k}}{k}-1\right)\right]\sqrt{k}\). With the methods above, the estimations of \(\overline{n_{w}^{M_{i}*}}\) and \(\underline{n_{w}^{M_{0}*}}\) in the main text can be done, where \(w=00,\alpha\alpha\) and \(i=0,1\). The failure probability for each estimation is \(\varepsilon_{a}\), which is accounted for when establishing the composable security of our protocol.
When converting expected values to observed values, Kato's inequality is available as well. However, to get specific values of the optimal \(a_{i},b_{i}\) and the deviation \(\Delta_{i}\) where \(i=1,2\), the observed value \(\Gamma_{k}\) needs to be employed, which is not known. So we follow the method used in Ref. [102]. Let \(a=0\) and set the failure probabilities in Eq. (12) and Eq. (13) to be \(\varepsilon_{b}\). We obtain the inequalities below:
\[\sum_{u=1}^{k}E(n_{u}|f_{u-1})\leq\Gamma_{k}+\Delta,\] (A.7)
\[\sum_{u=1}^{k}E(n_{u}|f_{u-1})\geq\Gamma_{k}-\Delta,\] (A.8)
where \(\Delta=\sqrt{\frac{1}{2}k\ln\varepsilon_{b}^{-1}}\). This is how the estimation procedure of Eq. (21) is done.
|
2309.11659 | MeV dark energy emission from a de Sitter Universe | The evolution of a de Sitter Universe is the base for both the accelerated
Universe and the late stationary Universe. So how do we differentiate between
both universes? In this paper, we state that it is not possible to design an
experiment using luminous or angular distances to distinguish between both
cases because they are the same during the de Sitter phase. However, this
equivalence allows us to predict a signal of it a constant dark energy emission
with a signal peak around 29.5 MeV, in where according to our astrophysical
test of survival probability, the radiation must be non-standard photons.
Remarkably, experiments beyond EGRET and COMPTEL could observe an excess of
gamma photons in this predicted region, coming from a possible decay process of
the dark energy emission, which might constitute the possible smoking gun of a
late stationary Universe with the continuous creation of non-standard
radiation, an alternative approach to understand the current stages of the
Universe evolution. | Yasmín B. Alcántara-Pérez, Miguel A. García-Aspeitia, H. Martíınez-Huerta, A. Hernández-Almada | 2023-09-20T21:49:58Z | http://arxiv.org/abs/2309.11659v2 | # MeV dark energy emission from a de Sitter Universe
###### Abstract
The evolution of a de Sitter Universe is the base for both the accelerated Universe and the late stationary Universe. So how do we differentiate between both universes? In this paper, we state that it is not possible to design an experiment using luminous or angular distances to distinguish between both cases because they are the same during the de Sitter phase. However, this equivalence allows us to predict a signal of _a constant dark energy emission_ with a signal peak around \(29.5\,\mathrm{MeV}\), where, according to our astrophysical test of survival probability, the radiation must be _non-standard photons_. Remarkably, experiments beyond EGRET and COMPTEL could observe an excess of gamma photons in this predicted region, coming from a possible decay process of the dark energy emission, which might constitute the possible smoking gun of a late stationary Universe with the continuous creation of non-standard radiation, an alternative approach to understand the current stages of the Universe evolution.
## I Introduction
The late acceleration of the Universe, first observed by the teams dedicated to the study of Type Ia supernovae (SNIa) [1; 2] and later confirmed by the WMAP and Planck satellites through the Cosmic Microwave Background Radiation (CMB) [3], is today an undisputed fact. Under the General Theory of Relativity (GR), and assuming a homogeneous and isotropic line element and a perfect fluid energy-momentum tensor, the scale factor for an accelerated Universe (\(z\lesssim 0.6\)) is expressed as \(a(t)=a_{0}\exp[\Lambda(t-t_{0})]\), where \(\Lambda\) is the cosmological constant at \(z=0\), representing a de Sitter evolution. Additionally, the equation of state (EoS) of the fluid responsible for the acceleration must fulfill the inequality \(\omega<-1/3\). Under these demands, the cosmological constant (CC) is one of the explanations, and through this approach, it is possible to construct the standard paradigm for cosmology, also known as the \(\Lambda\)-Cold Dark Matter (\(\Lambda\)CDM) model, which is in agreement with modern observations. However, let us not lose sight of the profound unsolved problems afflicting the CC [4; 5], which can be attributed to possible modifications of GR or to a misinterpretation of the CC from the quantum field theory point of view, in which the ultraviolet divergence of the energy density arises.
From the experimental approach, we know that we are in an accelerated phase because the observed luminous distances from SNIa (and confirmed by the CMB [3]) coincide with the theoretical expression expected for an accelerated Universe under a background model. The luminous distance expression is \(d_{L}(z)\approx H_{0}^{-1}\Omega_{\Lambda}^{-1/2}(1+z)z\), where \(\Omega_{\Lambda}\) is the density parameter of the CC, and it is valid only when \(\Omega_{\Lambda}\) dominates over the other components (matter and radiation)1. A similar situation happens, for example, with other observations like Baryon Acoustic Oscillations (BAO) when we explore the final stages of the Universe evolution where \(\tilde{\Omega}_{\Lambda}\) dominates; in this case, the angular distance can be reduced to \(d_{A}(z)\approx zH_{0}^{-1}\Omega_{\Lambda}^{-1/2}(z+1)^{-1}\). This confirms that observations and theory fit only if the Universe is considered to be in an accelerated stage today. Both measurements (and others) are important probes that the Universe is transiting to a de Sitter phase in its last stages, and two of its main physical observables are the luminous and angular distances.
Footnote 1: This assumption is valid because \(\Omega_{\Lambda}=0.68\) according to [3] and eventually \(\tilde{\Omega}_{\Lambda}=1\) while the other components tend to dilute.
On the other hand, the steady-state model (SSM) demands a _perfect cosmological principle_ (PCP) [6], generalizing the isotropy not only in all directions at cosmological scales but also at all times. Indeed, the SSM demands \(\dot{a}/a=H_{0}\), where \(H_{0}\) is the Hubble constant, implying \(a=a_{0}\exp[H_{0}(t-t_{0})]\) with a deceleration parameter \(q\simeq-1\) and a luminous distance
that coincides precisely with the one obtained from the de Sitter evolution in the \(\Lambda\)CDM model in its last stages. Notice that the SSM was initially discarded because of its inability to predict the CMB radiation and its black body spectrum [7]. Additionally, when the SSM was proposed, there was no observational evidence regarding the transition of our Universe into an accelerated stage; thus, the best option at that epoch was a universe dominated by matter at \(z\sim 0\), which is no longer the case today. Nowadays, the SSM has evolved into the so-called _matter/radiation creation model_[8; 9; 10; 11; 12], in which the de Sitter evolution is not caused by a cosmological constant; instead, a continuous creation of matter/radiation is produced through a diffusion term in the continuity equations, driving the observed de Sitter evolution.
Evidence shows that our Universe is an evolving system rising from a Big Bang 13.7 billion years ago, with different transitions, and in particular with a transition at \(z\lesssim 0.6\) that coincides with the mathematical description of both an accelerated Universe and a _late steady state universe_ (LSS). Additionally, based on the \(\Lambda\)CDM model, the Universe eventually will tend to \(q\rightarrow-1\) when \(z\rightarrow-1\), i.e. in the far future. The word _late_ is used because the steady state is not valid for all epochs of the Universe evolution; previous to this epoch (accelerated/stationary), the \(\Lambda\)CDM model is still the cornerstone. This affirmation arises from the mathematical equivalence between both models. Thus, how can we ensure which model for the Universe is the ideal interpretation for \(z\leqslant 0.6\)? Or are both conditions equivalent, with no way to differentiate between an accelerated or a steady state Universe for \(z\leqslant 0.6\)? If there is an equivalence, maybe the interpretation in one condition is easier than in the other one (Accelerated \(\leftrightarrow\) LSS), as happens with the equivalence principle or the equivalence between an Anti-de Sitter space and a Conformal Field Theory (AdS/CFT).
Thus, this paper is dedicated to responding to these questions and following all their consequences. The outline of the paper is as follows: Sec. II is dedicated to discussing the LSS model and is divided as follows: in Subsection II.1 we present the predictions related to the continuous emission of radiation at MeV energy. Sec. III tackles the consequences of the MeV emission at astrophysical scales and concludes that this radiation must belong to the dark sector. Sec. IV revisits the underlying classical field theory and its consequences in cosmology. Finally, in Sec. V, we present our discussions and conclusions.
## II The LSS model
LSS requires a continuous creation of matter and radiation in order to have a steady state condition, implying the violation of the conservation of the energy-momentum tensor, \(\nabla^{\mu}T_{\mu\nu}\neq 0\), which is incompatible with GR; thus, we need to have a matter creation at a rate of \(\sim 3H\) per existing accumulation of matter in the Universe, i.e. we need to maintain \(\dot{\rho}=\dot{p}=0\) for the dominant components. On the other hand, notice that the CC shows similar complications; in this case we also need to fulfill the condition \(\dot{\rho}_{\Lambda}=\dot{p}_{\Lambda}=0\), pushing GR to its limits due to the continuous creation of energy/matter to maintain the CC constant2. In the SSM vein, Hoyle [14; 15] introduced the C-field in the Einstein equations as \(G_{\mu\nu}+C_{\mu\nu}=8\pi GT_{\mu\nu}\) in order to have a solution for the conundrum of the continuous creation. From a more recent perspective on this \(C\)-field, we use unimodular gravity (UG) [16; 17; 18], expressed as \(G_{\mu\nu}+\frac{1}{4}(R+8\pi GT)g_{\mu\nu}=8\pi GT_{\mu\nu}\), which contains an extra term equivalent to the C-field proposed by Hoyle, but arising from a natural deduction. Keep in mind that UG is a model that naturally emerges from a Lagrangian, only demanding the invariance of its volume element \(\sqrt{-g}=\xi\), where \(\xi\) is a constant. In this context, the vacuum energy has no direct gravitational effects and \(\Lambda\) is only an integration constant, implying that it is possible to choose a small value (or even \(\Lambda=0\)).
Footnote 2: This is a consequence of introducing quantum vacuum fluctuations to explain this continuous creation of energy to maintain \(\rho_{\Lambda}=cte\). [13].
Connecting all these arguments, we raise the following proposition: _we state that we are not in a position3 to know whether the universe is already transiting to an accelerated or a steady state phase; no matter whether we use the luminosity distance \(d_{L}(z)=H^{-1}z(z+1)\) or the angular distance \(d_{A}(z)=H^{-1}z(z+1)^{-1}\) for \(z\lesssim 0.6\), we cannot conclude through an experiment/observation whether it is transiting to an accelerated or a late steady state Universe._
Footnote 3: The detection of the radiation discussed hereafter could be evidence of LSS instead of an accelerated Universe. However, the radiation acts as an equivalent to the CC in the standard approach.
### The MeV radiation
Thus, given the previous statement about the accelerated and LSS descriptions, it is possible to follow its consequences.
The traditional view of the CC needs to deal with quantum vacuum fluctuations afflicted by ultraviolet divergences that grow as \(k^{4}\) when the energy density is calculated (see [13] and references therein). The reinterpretation is to use a continuous creation of radiation in concordance with previous calculations for the SSM, which can be deduced through the expression proposed previously by Weinberg [7]
\[n(\nu)=8\pi\nu^{2}\int_{t_{1}}^{t_{0}}\exp\Big{\{}-\int_{t}^{t_ {0}}\Big{[}\sigma\left(\nu\frac{a(t_{0})}{a(t^{\prime})},t^{\prime}\right)\] \[-\zeta\left(\nu\frac{a(t_{0})}{a(t^{\prime})},t^{\prime}\right) \Big{]}dt^{\prime}\Big{\}}\zeta\left(\nu\frac{a(t_{0})}{a(t)},t\right)dt, \tag{1}\]
where \(n(\nu)\) is the number function, \(\sigma(\nu)\) is the absorption rate of radiation of frequency \(\nu\), \(\zeta(\nu)\) is the emission rate,
and \(8\pi\nu^{2}\zeta(\nu)d\nu\) is the emission rate per unit volume of radiation between frequencies \(\nu\) and \(\nu+d\nu\), with \(t_{0}\) and \(t_{1}\) arbitrarily chosen and \(t_{1}\rightarrow-\infty\). Our hypothesis also considers that the radiation could consist of standard photons, but it is not restricted to them; other particles with behavior similar to standard photons, like axions and dark photons, among others, could play this role.
In the de Sitter phase, expression (1) can be written as [7]
\[n(\nu)=8\pi\nu^{2}\int_{-\infty}^{t_{0}}\exp\Big{\{}-\int_{t}^{t_ {0}}\Big{[}\sigma\left(\nu\exp(H_{0}[t_{0}-t^{\prime}])\right)\] \[-\zeta\left(\nu\exp(H_{0}[t_{0}-t^{\prime}])\right)\Big{]}dt^{ \prime}\Big{\}}\zeta\left(\nu\exp(H_{0}[t_{0}-t])\right)dt, \tag{2}\]
where it is assumed that during this de Sitter phase \(\dot{a}/a\approx H_{0}\). With an appropriate change of variables to avoid dependence on \(t_{0}\) we have
\[n(\nu)=8\pi\nu^{2}\int_{\nu}^{\infty}\frac{\zeta(\nu^{\prime})} {H_{0}\nu^{\prime}}\exp\Big{(}-\int_{\nu}^{\nu^{\prime}}\frac{d\nu^{*}}{H_{0} \nu^{*}}\Big{[}\sigma(\nu^{*})\] \[-\zeta(\nu^{*})\Big{]}\Big{)}d\nu^{\prime}, \tag{3}\]
and differentiating with respect to \(\nu\) we arrive at the following expression
\[\zeta(\nu)=\frac{n(\nu)\sigma(\nu)}{8\pi\nu^{2}+n(\nu)}+\left[2n(\nu)-\nu\frac {dn(\nu)}{d\nu}\right]\frac{H_{0}}{8\pi\nu^{2}+n(\nu)}, \tag{4}\]
where we separate the terms that depend on the Hubble constant \(H_{0}\). Thus, if we maintain our hypothesis that the radiation (standard or dark emission) follows a Planckian number distribution, in order to have homogeneity and isotropy,
\[n(\nu)=8\pi\nu^{2}[\exp(\nu/\mathcal{T})-1]^{-1}, \tag{5}\]
where \(\mathcal{T}\) is a fixed temperature. Therefore we obtain from Eq. (4)
\[\zeta(\nu)=\exp(-\nu/\mathcal{T})\sigma(\nu)+\zeta(\nu)_{\rm CMV}, \tag{6}\]
where
\[\zeta(\nu)_{\rm CMV}=H_{0}\left(\frac{\nu}{\mathcal{T}}\right)[\exp(\nu/ \mathcal{T})-1]^{-1}, \tag{7}\]
Notice that \(\zeta(\nu)_{\rm CMV}\) is a constant emission of radiation that is independent of the absorption \(\sigma(\nu)\) and can be expressed as a function of the Hubble parameter; the subscript CMV indicates the Cosmic MeV dark energy emission. The first term on the r.h.s. of (6) is the classical emission-absorption term for radiation, implying interactions4 with baryonic matter and astrophysical background lights, such as the Extragalactic Background Light (EBL) and the Cosmic Microwave Background (CMB).
Footnote 4: This interaction depends on which particle we are dealing with; for example, in the dark sector, interactions are weaker than those associated with standard photons.
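As a cross-check of the step from Eq. (4) to Eqs. (6)-(7), the substitution of the Planckian distribution (5) into (4) can be verified symbolically; a minimal sketch (assuming SymPy is available; the absorption rate is kept as an opaque function):

```python
import sympy as sp

nu, T, H0 = sp.symbols('nu T H0', positive=True)
sigma = sp.Function('sigma')(nu)                    # absorption rate, left generic
n = 8 * sp.pi * nu**2 / (sp.exp(nu / T) - 1)        # Planckian distribution, eq. (5)

# emission rate from eq. (4)
zeta = (n * sigma / (8 * sp.pi * nu**2 + n)
        + (2 * n - nu * sp.diff(n, nu)) * H0 / (8 * sp.pi * nu**2 + n))

# expected result, eqs. (6)-(7)
target = sp.exp(-nu / T) * sigma + H0 * (nu / T) / (sp.exp(nu / T) - 1)

print(sp.simplify(zeta - target))                   # expect 0
```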
Notice also that even in the case \(\nu\to 0\) we always have a non-negligible emission, of the form \(\zeta(0)_{\rm CMV}=H_{0}\). After integrating the energy density as \(\rho=8\pi\int_{0}^{\infty}\nu^{2}\zeta(\nu)_{\rm CMV}d\nu\) (which is convergent), we conclude
\[\rho_{\rm CMV}=\frac{\pi^{2}}{15}H_{0}\mathcal{T}^{3}. \tag{8}\]
Thus, due to the equivalence between an accelerated and a stationary Universe, it is possible to propose the identification \(\rho_{\rm CMV}=\rho_{\Lambda}\simeq 2.46\times 10^{-11}\rm eV^{4}\), where the last number is the value expected for the source of the Universe's acceleration. Consequently, we arrive at a radiation with energy
\[E_{\rm CMV}\simeq 29.5\,\rm MeV. \tag{9}\]
The energy is obtained by using the current value of the Hubble constant reported in [19] (\(E_{\rm CMV}\simeq 28.8\,\rm MeV\) using a local measurement of \(H_{0}\)[20]). For standard photons, this energy region is a minimum of the photon interaction cross-section, and the transition from Compton scattering to pair production as the dominant process makes evidence of new physics in this region particularly challenging to obtain. This would explain why it has not been detected yet. Consequently, in the following sections, we will investigate these possibilities.
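For completeness, the number quoted in Eq. (9) can be reproduced by inverting Eq. (8); a minimal numeric sketch, where the \(H_{0}\) value (a Planck-like 67.4 km/s/Mpc) and the unit conversions are our illustrative inputs:

```python
import numpy as np

rho_lambda = 2.46e-11                  # eV^4, identification rho_CMV = rho_Lambda
H0_per_s = 67.4 * 1e3 / 3.0857e22      # 67.4 km/s/Mpc in 1/s (illustrative value)
hbar_eVs = 6.582e-16                   # hbar in eV*s
H0_eV = H0_per_s * hbar_eVs            # Hubble constant in eV

# invert eq. (8): rho = (pi^2/15) * H0 * T^3  =>  T = (15 rho / (pi^2 H0))^(1/3)
T_eV = (15 * rho_lambda / (np.pi**2 * H0_eV)) ** (1 / 3)
print(f"E_CMV = T ~ {T_eV / 1e6:.1f} MeV")   # ~29.6 MeV (29.5 MeV in the text)
```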
## III Astrophysical tests
In a CMB-like scenario, the Universe acceleration forecasts an isotropic and homogeneous cosmic background of standard light radiation, such as the known CMB [3; 21]. Thus, if the \(E_{\rm CMV}\) is a CMB-like radiation, it will follow the energy density distribution given by Eq. (5), as black body radiation with the mean energy given in Eq. (9). As a background light, it is likely to interact with gamma rays and annihilate through the photon pair-production process (\(\gamma\ \gamma\ \rightarrow\ e^{+}\ e^{-}\)). The expected astrophysical outcome of this process is an attenuation of the expected astrophysical photon flux; for a CMB energy range, such attenuation is expected for the ultra-high-energy gamma rays. Lighter background lights, such as the EBL, attenuate the TeV gamma-ray flux of extragalactic point sources, such as blazars [22; 23; 24; 25]. In this line of thought, we find the survival probability (\(P_{surv}=e^{-\tau}\)) of standard photons, such as gamma rays, by finding the optical depth (\(\tau\)), but considering the presence of the \(E_{\rm CMV}\) radiation as a background light. The survival probability is given by
\[P_{surv}=\exp\left[-\int_{0}^{z}\frac{c\,dz}{H_{0}(1+z)h(z)}\int_{-1}^{1}d\cos\theta\,\frac{1-\cos\theta}{2}\int_{\epsilon_{\rm th}}^{\infty}d\epsilon\,n(\epsilon,z)\,\sigma(E_{\gamma},\epsilon,z)\right], \tag{10}\]
where \(H_{0}\) stands for the Hubble constant at the present time, \(h(z)=\sqrt{\Omega_{m}(1+z)^{3}+\Omega_{\rm CMV}}\) is the dimensionless expansion rate entering the distance element of an expanding universe, \(\sigma(E_{\gamma},\epsilon,z)\) is the Breit-Wheeler cross-section for the pair-production process \(\gamma\ \gamma\ \rightarrow\ e^{+}\ e^{-}\)[26], and \(n(\epsilon,z)\) is the background density given by Eq. (5), with \(\epsilon=\nu\).
The outcome of this is that, if the \(E_{CMV}\) radiation is CMB-like, any astrophysical photon above the keV energy range and from a distance beyond \(\sim 10^{-26}\) Mpc will be attenuated by the presence of the \(E_{CMV}\) emission. In Fig. 1, we show the survival probability of gamma rays from different \(z\)'s to us on Earth. For comparison, we include the gamma-ray survival probability due to the CMB; the shaded area represents the attenuated region for gamma rays from \(z=1\), from which the CMB allows ultra-high-energy photon propagation from \(z=1\) to Earth. Once again, we conclude that no standard photons of order keV would survive the \(E_{CMV}\) radiation, which contradicts the observations. Therefore, if the CMV emission exists, it cannot consist of standard photons.
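A quick kinematic estimate (ours, not taken from the source) makes the keV scale explicit: a head-on collision with a background photon of energy \(\epsilon\simeq E_{\rm CMV}\) allows pair production \(\gamma\gamma\to e^{+}e^{-}\) when \(E_{\gamma}\,\epsilon\geq m_{e}^{2}c^{4}\), so the threshold is

\[E_{\gamma}^{\rm th}\simeq\frac{m_{e}^{2}c^{4}}{E_{\rm CMV}}\simeq\frac{(0.511\,{\rm MeV})^{2}}{29.5\,{\rm MeV}}\simeq 8.9\,{\rm keV},\]

consistent with the keV attenuation scale stated above.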
Thus, considering the unknown nature of the CMV and assuming that it can decay into some standard photons, it may be probed through astrophysical tests. Experiments like the Compton Telescope (COMPTEL) [27] and the Energetic Gamma-Ray Experiment Telescope (EGRET) [28] have reported measurements of the photon flux in the energy region of our interest [29].
Therefore, in order to test the predicted signal, we model the EGRET and COMPTEL energy-flux data extracted from [27; 28] in the energy range \(1.12<E<15.28\times 10^{3}\) MeV. Based on [30], we considered the background modeled by the sum of two components, \(\mathrm{Bkg}_{1}(E)+\mathrm{Bkg}_{2}(E)\), where
\[\mathrm{Bkg}_{1,2}(E)=\frac{C_{1,2}}{(E/E_{b})^{\Gamma_{1,3}}+(E/E_{b})^{ \Gamma_{2,4}}}, \tag{11}\]
where \(C_{1,2}\) are normalization constants, \(E_{b}\) is the energy peak, and \(\Gamma_{1}\) and \(\Gamma_{2}\) are constants. For the \(\mathrm{Bkg}_{1}\) component we fix \(\Gamma_{1}=1.32\), \(\Gamma_{2}=2.88\), \(E_{b}=25\) keV, and for the \(\mathrm{Bkg}_{2}\) component, \(\Gamma_{3}=1.0\), \(\Gamma_{4}=2.41\) and \(E_{b}=20\) MeV [30]; we allow both \(C_{1}\) and \(C_{2}\) to vary. The full model is obtained by adding the signal presented in Eq. (7), and will be named \(\mathrm{Bkg}+\mathrm{Signal}\). Fig. 2 shows the fit obtained: the solid blue line corresponds to \(\mathrm{Bkg}+\mathrm{Signal}\) and the solid red line is the total background.
We compare both models, \(\mathrm{Bkg}\) and \(\mathrm{Bkg}+\mathrm{Signal}\), statistically through the corrected Akaike information criterion (AICc) [31; 32] for small samples, defined as \(\mathrm{AICc}=\chi^{2}_{min}+2k+(2k^{2}+2k)/(N-k-1)\), where \(\chi^{2}_{min}\) is the minimum of the \(\chi^{2}\)-function, \(k\) is the number of free parameters and \(N\) is the size of the sample. In this criterion the model with the lower value of \(\mathrm{AICc}\) is the one preferred by the data. When the difference \(\Delta\mathrm{AICc}\) between a given model and the best one is \(\Delta\mathrm{AICc}<4\), both models are equally supported by the data. For the range \(4<\Delta\mathrm{AICc}<10\) the data still support the given model, but less than the preferred one, and for \(\Delta\mathrm{AICc}>10\) the given model is not supported. Our result gives \(\Delta\mathrm{AICc}=\mathrm{AICc}(\mathrm{Bkg}+\mathrm{Signal})-\mathrm{AICc}(\mathrm{Bkg})=-118.8\), which indicates that the model \(\mathrm{Bkg}+\mathrm{Signal}\) is preferred by the EGRET and COMPTEL datasets. Additionally, we find a signal amplitude \(A=0.151\pm 0.014\,\mathrm{MeV}\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{sr}^{-1}\).
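The criterion itself can be reproduced in a few lines once the fits are done; a minimal sketch, where the \(\chi^{2}_{min}\) inputs below are placeholders rather than the fitted values of this analysis:

```python
def aicc(chi2_min: float, k: int, n: int) -> float:
    """Corrected Akaike information criterion for small samples."""
    return chi2_min + 2 * k + (2 * k**2 + 2 * k) / (n - k - 1)

# hypothetical inputs, for illustration only
n_data = 50
delta_aicc = aicc(chi2_min=45.0, k=3, n=n_data) - aicc(chi2_min=170.0, k=2, n=n_data)
print(delta_aicc)   # negative values favor the first model (here Bkg+Signal)
```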
As can be seen, the expected energy lies at the extremes of both data sets, at the maximum of COMPTEL and the minimum of EGRET, which also causes difficulties for its detection in recent data compilations (see Fig. 2). Future experiments could improve the signal-to-noise ratio in the MeV region. They might be able to confirm whether the CMV emission exists and whether it is the cause of the current de Sitter transition.
Figure 1: Photon survival probability due to interaction with the 30 MeV radiation, treated as standard photons, from a given \(z\). For comparison, the survival probability due to the CMB is included. Everything above such lines is attenuated; we illustrate this as the shaded area in the CMB case.
Figure 2: Energy flux vs energy. Black and grey markers are COMPTEL and EGRET data, respectively. The solid blue line corresponds to the \(\mathrm{Bkg}+\mathrm{Signal}\) fit and the solid red line to \(\mathrm{Bkg}\), composed of \(\mathrm{Bkg}_{1}\) (dot-dashed red line) and \(\mathrm{Bkg}_{2}\) (dotted red line).
## IV The underlying field theory: a revision
The continuous creation of energy is, in principle, incompatible with GR because \(\nabla^{\mu}T_{\mu\nu}\neq 0\), unless we accept fluids with negative pressures. One of the best field-equation approaches allowing a non-conservation of the energy-momentum tensor is unimodular gravity (UG), whose equations are given by
\[R_{\mu\nu}-\frac{1}{4}g_{\mu\nu}R=8\pi G\left(T_{\mu\nu}-\frac{1}{4}g_{\mu\nu}T \right), \tag{12}\]
which is clearly traceless [16]; \(R_{\mu\nu}\) and \(R\) are the Ricci tensor and scalar, respectively, \(T_{\mu\nu}\) is the energy-momentum tensor and \(G\) is Newton's gravitational constant. In this context a CC is unnecessary, and the traditional fluids can produce the evolution \(a(t)=\exp\Lambda(t-t_{0})\) only under the restriction \(\dot{\rho}^{fluid}=0\), caused naturally by
\[32\pi G\nabla^{\mu}T_{\mu\nu}=\nabla^{\mu}(R+8\pi GT)g_{\mu\nu}. \tag{13}\]
Notice that this approach is similar to the stationary model [15; 6; 14], where a continuous creation of matter/energy produces a de Sitter behavior, with a scale factor indistinguishable from that of an accelerated Universe. Some studies [33; 34] suggest that a continuous creation of radiation (relativistic particles) under the UG approach could resolve the problem of the observed de Sitter phase.
Assuming a Friedmann-Lemaitre-Robertson-Walker (FLRW) metric and a perfect fluid energy-momentum tensor we arrive at the following equation
\[\dot{H}=-4\pi G\sum_{i}(\rho_{i}+p_{i}), \tag{14}\]
where \(\rho_{i}\), \(p_{i}\) are the density and pressure of the fluids, respectively, \(H\equiv\dot{a}/a\) is the Hubble parameter, \(a\) is the scale factor and the dot stands for the time derivative. On the other hand, solving Eq. (13) we have
\[\sum_{i}[(\dot{\rho}_{i}+\dot{p}_{i})+3H(\rho_{i}+p_{i})]=\frac{H^{3}}{4\pi G} (1-j), \tag{15}\]
where \(j\equiv\dddot{a}/aH^{3}\) is known as the jerk parameter [35], the triple dot standing for the third-order time derivative.
Thus, after some manipulation using Eqs. (14) and (15), the dynamical equations for the cosmology in this context can be presented as follows
\[H^{2}=\frac{8\pi G}{3}\sum_{i}\rho_{i}+H_{UG}^{2}, \tag{16}\] \[\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}\sum_{i}(\rho_{i}+3p_{i})+H_ {UG}^{2},\] (17) \[H_{UG}^{2}=\frac{8\pi G}{3}\sum_{i}p_{i}+\frac{2}{3}\int_{a_{ini }}^{a}H(a^{\prime})^{2}[j(a^{\prime})-1]\frac{da^{\prime}}{a^{\prime}}. \tag{18}\]
According to [33; 34], the best choice of \(j\) to mimic the \(\Lambda\)CDM model, while providing clues about the cause of the Universe's acceleration, is \(j=\frac{9}{2}(1+w)wE(z)^{-2}\Omega_{0i}(z+1)^{3(w+1)}+1\), where \(w\) is the fluid equation of state, \(\Omega_{0i}\) is the density parameter, \(z=a^{-1}-1\) is the redshift and \(E(z)\equiv H(z)/H_{0}\). This choice naturally allows us to decouple matter with the standard equation of state, while radiation depends on the \(j\) parameter. Therefore radiation plays a central role in the Universe's acceleration due to the coupling with \(j\). Thus, demanding a continuity equation for the dark energy emission gives us the following Friedmann equation
\[E(z)^{2}=\Omega_{0m}(z+1)^{3}+\Omega_{0r}(z+1)^{4}+\omega_{\rm CMV}\Omega_{0 \rm CMV}(z_{ini}+1)^{4}, \tag{19}\]
where the last term acts like a cosmological constant but with an origin based on the term \(\frac{1}{2}(R+8\pi GT)g_{\mu\nu}\), which acts like a diffusion and can be interpreted as a CMV component plus effects of the UG dictated by the free parameter \(z_{ini}\) (see [33; 34] for details); \(\Omega_{0m}\) and \(\Omega_{0\rm CMV}\) are the matter and CMV density parameters, respectively, \(\omega_{\rm CMV}=1/3\), and \(z_{ini}=11.473^{+0.074}_{-0.073}\) is constrained through recent observations (see [33; 34]).
## V Discussion and conclusions
As we discussed previously, at the moment it is impossible to differentiate between an accelerated and a late stationary Universe, because the observables (\(d_{L}\) and \(d_{A}\)) of all the experiments that measure the Universe's evolution are equally compatible with both approaches. We emphasize that this argument is only valid for \(z\lesssim 0.6\), where the data confirm the transition. On the other hand, it is well known that both models (accelerated and stationary) need a continuous creation of energy/matter to maintain the energy density of some species in the Universe constant, which in turn pushes GR to its limits in its current form and suggests that changes are required to obtain, for instance, a non-conservation of the energy-momentum tensor. This equivalence suggests a new strategy to simultaneously tackle the energy density problem of the CC and the interpretation of the dark energy emission in the regime of \(\sim 29.5\) MeV, which in principle could be detected through a decay to standard photons. _Confirming this signal is crucial, so we encourage further astrophysical analysis in this scope._ Suggestively, the MeV region is also connected to several new physics proposals, such as Primordial Black Holes (PBH) [36; 37; 38] emitting radiation through the Hawking effect, self-annihilation of DM particle candidates with MeV masses that can produce gamma radiation [39; 40], and some quantum gravity effects (like Continuous Spontaneous Localization, compatible with UG [18], or the Diosi-Penrose model, DP [41; 42; 43]) that could also produce radiation emission at the same energy scale of tens of MeV. Identifying the event that causes the excess radiation is vital, because the theory presented here cannot by itself identify which event causes the previously mentioned excess. For example, PBH as DM implies decaying radiation at MeV energies generating the observed accelerated/stationary Universe, sustaining a unified framework that relates several events. In this case, the excess of radiation is a transitory effect; therefore, the accelerated/stationary Universe is also a transitory process. If, for example, the cause of the excess of radiation is the DP model with charged particles, the collapse of the wave function by gravitational effects will have significant consequences for the evolution of the Universe and its current stationary/accelerated stage. The radiation energy for this case is in the range \(\Delta E=(10-10^{5})\)keV, with the CMV inside the expected energy region. Therefore, both models could prove that we are dealing with an LSS instead of an accelerated Universe.
We need to remark that a stationary Universe is not a natural state (at least under the Friedmann-Lemaitre-Robertson-Walker line element) because maintaining a constant energy density requires the violation of the conservation of the energy-momentum tensor. As a consequence, we expect that this state will eventually stop, ending this particular condition. However, to address how we differentiate between both universes, we need to know what the source of the CMV radiation is, if it exists. In this work, we have shown that the survival probability of standard photons due to the CMV predicts an impossible opacity region, not allowing the detection of any keV astrophysical photons on Earth. This strongly constrains the hypothesis that this radiation exists as standard photons. Hence, we have also studied the possibility of an indirect signal of standard photons derived from a potential non-standard component of the CMV emission at 29.5 MeV. Our findings suggest that the model Bkg+Signal is preferred over the Bkg-only scenario, using the EGRET and COMPTEL datasets.
Finally, we recall that a change of variables in the de Sitter line element produces a stationary metric, \(ds^{2}=-(1-r^{2}/\alpha^{2})dt^{2}+(1-r^{2}/\alpha^{2})^{-1}dr^{2}+r^{2}d\Omega^{2}\), where \(\alpha\) is a nonzero constant of the hyperboloid of one sheet. Remarkably, a de Sitter evolution is thus analogous to a stationary Universe, as we discussed throughout the paper.
We encourage further and novel astrophysical experiments and studies to unravel the 29.5 MeV emission and to comprehend the current de Sitter stage.
###### Acknowledgements.
YBAP acknowledges the Ph.D. grant provided by CONACYT. MAGA acknowledges support from catedra Marcos Moshinsky and Universidad Iberoamericana for the support with the National Research System (SNI) grant. The numerical analysis was also carried out by the _Numerical Integration for Cosmological Theory and Experiments in High-energy Astrophysics_ (Nicte Ha) cluster at IBERO University, acquired through catedra MM support. A.H.A. thanks Luis Aguilar, Alejandro de Leon, Carlos Flores, and Jair Garcia of the Laboratorio Nacional de Visualizacion Cientifica Avanzada for their support. A.H.A. and M.A.G.-A. acknowledge partial support from project ANID Vinculacion Internacional FOVI220144. We are also grateful to Veronica Motta for revising this paper, suggesting changes and correcting misinterpretations in the text. This paper is in memory of my father Miguel Angel.
|
2309.09016 | Solitons and Normal Random Matrices | We discuss a general relation between the solitons and statistical mechanics
and show that the partition function of the normal random matrix model can be
obtained from the multi-soliton solutions of the two-dimensional Toda lattice
hierarchy in a special limit. | I. M. Loutsenko, V. P. Spiridonov, O. V. Yermolayeva | 2023-09-16T15:03:31Z | http://arxiv.org/abs/2309.09016v1 | ###### Abstract
We discuss a general relation between the solitons and statistical mechanics and show that the partition function of the normal random matrix model can be obtained from the multi-soliton solutions of the two-dimensional Toda lattice hierarchy in a special limit.
**Solitons and Normal Random Matrices**
I. M. Loutsenko\({}^{1}\), V. P. Spiridonov\({}^{2,3}\) and O. V. Yermolayeva\({}^{1}\)
\({}^{1}\) Laboratoire de Physique Mathematique, Centre de Recherches Mathematiques,
Universite de Montreal, Montreal
\({}^{2}\) Laboratory of Theoretical Physics, Joint Institute for Nuclear Research, Dubna
\({}^{3}\) National Research University "Higher School of Economics", Moscow
## 1 Introduction
The present paper is devoted to intersections between the theory of solitons and the statistical mechanics of two-dimensional "Coulomb-Dyson gases", or normal random matrices. The literature on random matrix models and related Coulomb gases is quite extensive due to the links to various important problems of mathematics and physics. For an introduction to matrix models see, e.g., [6, 20]. The normal matrix model was introduced in [3] in connection with the quantum Hall effect and investigated further in detail in [4]. The connection between the two-dimensional Toda lattice (2DTL) hierarchy and normal random matrices was established in [4]; in [21] it emerged in the context of the theory of Hele-Shaw flows, or Laplacian growth (for a review see, e.g., [19]).
It is a well-known fact that there is a deep connection between the theory of various matrix ensembles and that of tau-functions of integrable hierarchies, see, e.g., [13, 14, 24] and references therein. Our "solitonic" models of statistical mechanics (besides the normal matrix model discussed here, one can mention a variety of ensembles considered in [16]) contain partition functions of these ensembles as particular or limiting cases and therefore generalize this connection: they introduce lattices and non-trivial boundary conditions into the Coulomb-gas models related to the matrix ensembles.
The Coulomb gas formalism is a powerful tool for treating two-dimensional problems. Its deep character is reflected in connecting seemingly different physical notions like two-dimensional statistical mechanics and quantum field theory models, random matrices and integrable systems [6]. It naturally emerges in the investigation of critical phenomena in two-dimensional spin lattice systems [23], with further output to two-dimensional conformal field theory. In this sense, the interpretation of solitonic tau-functions as lattice gas (Ising chain) partition functions described below is not an occasional coincidence, but yet another indication of the universality of this formalism with many physical manifestations.
We start from a brief introduction to the basic concepts of statistical mechanics of lattice gases and its connection to the theory of solitons established previously in [15, 16, 17, 18] (see also [31]). Statistical mechanics studies macroscopic properties of systems with a large number of
degrees of freedom. Such systems can exist in a discrete (but possibly infinite) set of microstates. A microstate defines the values of all possible microscopic variables.
Consider a lattice consisting of \(N\) points (sites) on the plane which can be occupied by some particles. In the lattice gas model no more than one particle can occupy each site, i.e. the number of particles at the \(i\)-th site can be \(\nu_{i}=0\) or \(\nu_{i}=1\). These filling numbers \(\nu_{1},\nu_{2},\ldots,\nu_{N}\) constitute a set of microscopic variables. Then, a microstate of the lattice gas corresponds to a binary string (i.e., a sequence of zeros and ones) of the length \(N\). The total number of microstates of the gas equals \(2^{N}\). Statistical mechanics considers a macroscopic system, that is the \(N\to\infty\) "thermodynamic" limit, and investigates the corresponding behaviour of the various macroscopic variables, for example, that of the total number of particles
\[n=\sum_{i=1}^{N}\nu_{i}. \tag{1.1}\]
This number \(n\) can vary from \(0\), when the lattice is empty, to \(N\) when the lattice is fully occupied by particles.
The probability to find system in the microscopic state \(\nu\) fixed by \(N\) filling numbers \(\nu=\{\nu_{1},\nu_{2},\ldots,\nu_{N}\}\) equals
\[p(\nu)=\frac{1}{Z}e^{-\beta(E(\nu)-\mu n)}, \tag{1.2}\]
where \(E(\nu)=E(\nu_{1},\ldots,\nu_{N})\) is the system energy and \(n\) is defined in (1.1). The parameter \(\beta>0\) is called the inverse temperature, and the parameter \(\mu\) is the chemical potential. Since the sum of probabilities of all possible micro-states of the system equals to \(1\), the value \(Z\) in the normalization factor in (1.2) equals to the following sum over all possible system states:
\[Z=\sum_{\nu\in{\cal S}}e^{-\beta(E(\nu)-\mu n)}. \tag{1.3}\]
This sum is called the partition function of the grand canonical ensemble. Here \({\cal S}\) stands for the set of all possible microscopic configurations of the lattice gas given by \(2^{N}\) binary strings \(\nu\) of the length \(N\).
From definitions (1.2) and (1.3), it follows that
\[\frac{\partial\log Z}{\partial\beta}=-\langle E\rangle+\mu\langle n\rangle, \quad\frac{\partial\log Z}{\partial\mu}=\beta\langle n\rangle,\]
where
\[\langle E\rangle=\sum_{\nu\in{\cal S}}p(\nu)E(\nu),\quad\langle n\rangle=\sum _{\nu\in{\cal S}}p(\nu)n(\nu)\]
are the average energy of the system and the average number of particles in the system respectively. The quantities \(\langle E\rangle\) and \(\langle n\rangle\) describe the results of measurements of the corresponding macroscopic variables.
One can split the set of microstates of our system \({\cal S}\) into \(N+1\) disjoint sets: \({\cal S}={\cal S}_{0}\cup{\cal S}_{1}\cup{\cal S}_{2}\cup\cdots\cup{\cal S}_{N}\), where \({\cal S}_{i}\cap{\cal S}_{j}=\emptyset\) if \(i\neq j\). Here, the set \({\cal S}_{0}\) corresponds to the states with \(n=0\) particles on the lattice (empty lattice), and the set \({\cal S}_{k}\) corresponds to the states with \(n=k\)
particles, with \(\mathcal{S}_{N}\) being the state of the fully occupied lattice (\(n=N\), i.e. all \(\nu_{i}=1\)). Then from (1.3) we have \(Z=\sum_{n=0}^{N}\mathcal{Z}_{n}e^{\beta\mu n}\), where
\[\mathcal{Z}_{n}=\sum_{\nu\in\mathcal{S}_{n}}e^{-\beta E(\nu)} \tag{1.4}\]
is the partition function for the system with fixed number of particles \(n\). The quantity (1.4) is called the partition function of the canonical ensemble, or the \(n\)-particle partition function. For the canonical ensemble of \(n\) particles we have
\[\langle E\rangle=-\frac{\partial\log\mathcal{Z}_{n}}{\partial\beta}=\frac{ \partial\beta F}{\partial\beta},\quad F=-\frac{1}{\beta}\log\mathcal{Z}_{n},\]
where \(F\) is called the "free energy" of the \(n\)-particle system. In the thermodynamic limit \(n\to\infty\), one is usually interested in the asymptotics of the free energy per particle \(F/n\).
Let \(\zeta_{i}\) be the complex coordinate of the \(i\)-th site of the lattice. We consider a system where particles interact pairwise through the two-particle potential \(V(z,z^{\prime})=V(z^{\prime},z)\), and also interact with external fields through a one-particle potential \(w(z)\). In other words, the energy of interaction of particles occupying the \(i\)-th and \(j\)-th sites equals \(V_{ij}=V(\zeta_{i},\zeta_{j})\). We consider only two-particle interactions with \(V(z,z)=+\infty\), so that the condition that the filling factor \(\nu_{i}\) cannot be greater than \(1\) is fulfilled automatically (see below). The energy of interaction of the \(i\)-th site's particle with the external fields is \(w_{i}=w(\zeta_{i})\). The total energy of the gas then equals
\[E=\sum_{1\leq i<j\leq N}V_{ij}\nu_{i}\nu_{j}+\sum_{i=1}^{N}w_{i}\nu_{i}. \tag{1.5}\]
This function, relations (1.1) and (1.3) define the grand partition function of the gas
\[Z=\sum_{\nu_{1}=0,1}\cdots\sum_{\nu_{N}=0,1}e^{-\beta\left(\sum_{1\leq i<j \leq N}V_{ij}\nu_{i}\nu_{j}+\sum_{i=1}^{N}(w_{i}-\mu)\nu_{i}\right)}. \tag{1.6}\]
Note that the number \(N\) stands here for the number of the lattice sites and not for the number of particles \(n\), which is not fixed in the grand canonical ensemble (one should not confuse \(Z\) with the \(N\)-particle partition function \(\mathcal{Z}_{N}\)).
For the \(n\)-particle partition function, defined by (1.4) and (1.5), we obtain
\[\mathcal{Z}_{n}=\frac{1}{n!}\sum_{z_{1}\in\zeta}\cdots\sum_{z_{n}\in\zeta}e^{ -\beta(\sum_{1\leq i<j\leq n}V(z_{i},z_{j})+\sum_{i=1}^{n}w(z_{i}))}, \tag{1.7}\]
where \(\zeta\) stands for the set of lattice points \(\zeta=\{\zeta_{1},\zeta_{2},\ldots,\zeta_{N}\}\). Note that, since \(V(z,z)=+\infty\), we need not worry about configurations in (1.7) with filling numbers exceeding \(1\), because all summands with \(z_{i}=z_{j}\) vanish there. The factor \(1/n!\) appears in (1.7) because, for a given configuration \(\nu\) with \(n\) particles, these \(n\) particles are identical in the sum (1.5), while this is not so in (1.7).
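The splitting of the grand partition function into canonical ones, \(Z=\sum_{n=0}^{N}\mathcal{Z}_{n}e^{\beta\mu n}\), is easy to verify by brute force for a small lattice; a minimal Python sketch, with an arbitrary (illustrative) potential matrix and one-particle potential:

```python
import numpy as np
from itertools import product, combinations

rng = np.random.default_rng(1)
N, beta, mu = 5, 2.0, 0.3
V = rng.normal(size=(N, N)); V = (V + V.T) / 2      # symmetric two-body potentials V_ij
w = rng.normal(size=N)                              # one-particle potentials w_i

def Z_grand():
    """Grand partition function (1.6): sum over all 2^N filling configurations."""
    total = 0.0
    for nu in product((0, 1), repeat=N):
        E = sum(V[i, j] * nu[i] * nu[j] for i, j in combinations(range(N), 2))
        E += sum((w[i] - mu) * nu[i] for i in range(N))
        total += np.exp(-beta * E)
    return total

def Z_n(n):
    """n-particle partition function (1.4)-(1.5), as a sum over n-site subsets."""
    return sum(np.exp(-beta * (sum(V[i, j] for i, j in combinations(S, 2))
                               + sum(w[i] for i in S)))
               for S in combinations(range(N), n))

lhs = Z_grand()
rhs = sum(Z_n(n) * np.exp(beta * mu * n) for n in range(N + 1))
print(lhs, rhs)    # the two numbers coincide
```

Here \(\mathcal{Z}_{n}\) is implemented as a subset sum, which equals the ordered sum (1.7) with its \(1/n!\) factor.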
A crucial observation on the connection of lattice statistical mechanics models with solitons was made in [15, 16, 17]. Namely, soliton \(\tau\)-functions of integrable hierarchies and partition functions of particular Ising models or lattice gas models have an identical structure.
More precisely, the \(N\)-soliton \(\tau\)-function of many integrable hierarchies can be written in the following Hirota form [1, 10]
\[\tau_{N}=\sum_{\nu_{1}=0,1}\cdots\sum_{\nu_{N}=0,1}\exp\left(\sum_{1\leq i<j\leq N }A_{ij}\nu_{i}\nu_{j}+\sum_{i=1}^{N}\phi_{i}\nu_{i}\right), \tag{1.8}\]
where the sum is performed over the \(2^{N}\) configurations of \(N\) discrete variables \(\nu_{i}\), each \(\nu_{i}\) taking the values \(0\) or \(1\). In equation (1.8), \(A_{ij}\) stands for the phase shift acquired due to the interaction of the \(i\)-th and \(j\)-th solitons, while \(\phi_{i}\) stands for the \(i\)-th soliton phase. The phase shift \(A_{ij}\) depends on the momenta of the corresponding solitons, and the phase \(\phi_{i}\) depends on the momentum of the \(i\)-th soliton and the full set of the integrable hierarchy times.
In the expression for the \(N\)-soliton \(\tau\)-function (1.8) one can recognize the grand partition function of the gas model (1.6) on a lattice consisting of \(N\) sites: the variable \(\nu_{i}\) is the filling factor of the \(i\)-th lattice site and \(\phi_{i}\) is proportional to the sum of the chemical potential and the external potential at the \(i\)-th site. In this picture, the phase shift \(A_{ij}\) is proportional to the two-body interaction potential between particles occupying the \(i\)-th and the \(j\)-th sites:
\[A_{ij}=-\beta V_{ij},\quad\phi_{i}=-\beta\left(w_{i}-\mu\right). \tag{1.9}\]
An important point is that this correspondence between the \(\tau\)-functions and the partition functions has a restriction -- the inverse temperature \(\beta\) is fixed at a specific value in the \(\tau\)-functions (see below).
Before proceeding to the main topic, we mention that some connections between soliton solutions of the integrable hierarchies and certain matrix models have been established in [16]. In particular, the soliton solutions of the KP hierarchy corresponding to the circular lattice are related to the Gaudin matrix models, which interpolate between the Dyson and uniform unitary ensembles. Also, soliton solutions of the KP and BKP hierarchies corresponding to one-dimensional exponential lattices are related to one-dimensional translationally invariant Ising models with non-local interactions. The simplest models of this type are described by the self-similar potentials of the one-dimensional Schrodinger equation [28] and are related to special infinite-soliton solutions of the KdV hierarchy [15]. Despite the fact that the inverse temperature \(\beta\) is fixed in these models, by tuning the model parameters it is possible to go to the zero temperature limit exhibiting a particular critical behavior of the corresponding Ising chains in the thermodynamic limit [18].
In this paper we consider Coulomb gases on general two-dimensional lattices and discuss their relation to normal random matrices. Namely, we derive the partition function of a discretized \(n\times n\) normal matrix model (or a Coulomb-Dyson gas of \(n\) charges on \(N\) sites on the plane) from the \(N\)-soliton \(\tau\)-function of the two-dimensional Toda lattice (2DTL) hierarchy.
## 2 Coulomb-Dyson Gases
Coulomb gas is a statistical ensemble of charged particles on the plane interacting through the Coulomb potential \(V\). On the \((x,y)\)-plane the potential created by the particle, placed at the point \((x^{\prime},y^{\prime})\), is proportional to the Green function of the 2D Laplace operator
\[\Delta V(x,y;x^{\prime},y^{\prime})=-2\pi\delta(x-x^{\prime})\delta(y-y^{ \prime}),\quad V(x,y;x^{\prime},y^{\prime})=V(x^{\prime},y^{\prime};x,y), \tag{2.1}\]
with certain boundary conditions imposed on \(V\). For the plane without boundaries
\[V(z,z^{\prime})=-\log|z-z^{\prime}|=-\frac{1}{2}\left(\log(z-z^{\prime})+\log( \bar{z}-\bar{z}^{\prime})\right),\]
where the complex variables notation \(z=x+{\rm i}y\), \(\bar{z}=x-{\rm i}y\) is used. If the system has an ideal dielectric boundary \(\Gamma\), the normal to \(\Gamma\) component of the gradient of \(V\) vanishes on this boundary, i.e. \((n_{x}\partial_{x}+n_{y}\partial_{y})V(x,y,x^{\prime},y^{\prime})=0\), where \((n_{x},n_{y})\) is the normal to \(\Gamma\) at the point \((x,y)\in\Gamma\). The tangential component of the gradient vanishes at \(\Gamma\), if it is an ideal conductor boundary, i.e. at fixed \(z^{\prime}\) the conductor boundary \(\Gamma\) is an equipotential surface of \(V\). A useful way of solving (2.1) with such boundary conditions is provided by the method of images. In what follows, we consider systems where every charge has a finite number of images created by the boundaries.
The electrostatic energy of a system of \(n\) particles with charges \({\cal Q}_{i}\), \(i=1,2,\ldots,n\), is
\[E_{n}=\sum_{1\leq i<j\leq n}{\cal Q}_{i}{\cal Q}_{j}V(z_{i},z_{j})+\sum_{1\leq i \leq n}{\cal Q}_{i}^{2}\tilde{V}(z_{i})+\sum_{1\leq i\leq n}{\cal Q}_{i}W(z_{ i}), \tag{2.2}\]
where \(z_{i}=x_{i}+{\rm i}y_{i}\) are the coordinates of the particles. The first term in (2.2) is the energy of interaction between different charges. The second term is the sum of self-energies: the self-energy of a charge is the energy of interaction between the charge and its own images. The third term describes an interaction of charges with external fields.
The Dyson gas is a one-component (i.e., all particles have equal charges) Coulomb gas at certain fixed temperatures corresponding to different random matrix models. Here, we consider the case \(\beta{\cal Q}_{i}=2\). Then, without loss of generality, we can set all \({\cal Q}_{i}\) in (2.2) equal to unity and let the inverse temperature be \(\beta=2\). Next, we consider the lattice version of the Dyson gas, where the particles occupy a set of points \(\zeta=\{\zeta_{1},\ldots,\zeta_{N}\}\) on an \(N\)-site lattice (i.e., \(z_{i}\in\zeta\)) and no more than one particle can occupy each site. Then, the grand partition function of the system corresponding to the energy (2.2) equals
\[Z=\sum_{\nu_{1}=0,1}\cdots\sum_{\nu_{N}=0,1}e^{-\beta\left(\frac{1}{2}\sum_{1 \leq i\neq j\leq N}V(\zeta_{i},\zeta_{j})\nu_{i}\nu_{j}+\sum_{i=1}^{N}(w(\zeta _{i})-\mu)\nu_{i}\right)},\]
where \(w(z)=\tilde{V}(z)+W(z)\) and \(\mu\) stands for the chemical potential.
## 3 Ensembles Related to KP, BKP and 2DTL Hierarchies
It turned out that the Coulomb interaction \(V(\zeta_{i},\zeta_{j})\) can be identified with the phase shifts of different integrable hierarchies [17]. In this section we describe connections between the ensembles of the Coulomb-Dyson gases and soliton solutions of the KP (including KdV hierarchy as a subcase) and the BKP hierarchies established in [17] and of the 2DTL hierarchy found in [31].
### KP Hierarchy
The \(N\)-soliton \(\tau\)-function of the Kadomtsev-Petviashvili (KP) hierarchy can be written in the Hirota form (1.8) [10] with
\[A_{ij}=\log\frac{(a_{i}-a_{j})(b_{i}-b_{j})}{(a_{i}+b_{j})(b_{i}+a_{j})}, \tag{3.1}\]
\[\phi_{i}=\varphi_{i}+\sum_{n=1}^{\infty}\left(a_{i}^{n}-(-b_{i})^{n}\right)t_{n},\]
where \(t_{n}\), \(n=1,2,3,\dots\), is an infinite set of independent variables called the hierarchy "times", \((a_{i},b_{i})\) is the two-dimensional momentum of the \(i\)-th soliton and \(\varphi_{i}\) is its initial phase. The first non-trivial equation of the hierarchy, the celebrated KP-equation, involves three independent variables \(x=t_{1}\), \(y=t_{2}\) and \(t=t_{3}\) and has the form
\[3\frac{\partial^{2}u}{\partial y^{2}}=\frac{\partial}{\partial x}\left(4\frac {\partial u}{\partial t}+6u\frac{\partial u}{\partial x}-\frac{\partial^{3}u} {\partial x^{3}}\right),\quad u=-2\partial_{x}^{2}\log\tau. \tag{3.2}\]
All equations of the hierarchy can be encoded in the single bi-linear Hirota residue equation for the \(\tau\)-function
\[\oint_{z=0}\frac{dz}{2\pi\mathrm{i}}e^{\xi(\boldsymbol{t}^{\prime}-\boldsymbol {t},z)}\tau(\boldsymbol{t}^{\prime}-[z^{-1}])\tau(\boldsymbol{t}+[z^{-1}])=0, \tag{3.3}\]
where
\[\boldsymbol{t}=\{t_{1},t_{2},t_{3}\dots\},\quad\xi(\boldsymbol{t},z)=t_{1}z+ t_{2}z^{2}+t_{3}z^{3}+\dots,\quad[z^{-1}]=\{z^{-1},\tfrac{1}{2}z^{-2},\tfrac{1}{ 3}z^{-3},\dots\}, \tag{3.4}\]
and \(\oint_{z=0}dz/(2\pi\mathrm{i})\) (also denoted as \(\,\mathrm{res}\,_{z=0}\)) means taking residue at \(z=0\), i.e. a coefficient of \(z^{-1}\) in the Laurent series in \(z\). A full (infinite) set of the bilinear differential Hirota equations can be obtained from equation (3.3) as the conditions of vanishing of various coefficients of the Taylor series in \(\varepsilon_{i}=t_{i}^{\prime}-t_{i}\).
Substituting
\[a_{i}=\zeta_{i},\quad b_{i}=-\bar{\zeta}_{i},\quad\mathrm{Im}\,\zeta_{i}>0, \quad i=1,2,\dots,N, \tag{3.5}\]
in (3.1), we obtain
\[V(z,z^{\prime})=-\log|z-z^{\prime}|+\log|\bar{z}-z^{\prime}|.\]
This potential satisfies equation (2.1). For a charge placed at a point \(z^{\prime}\), the real line \(\mathrm{Im}\,z=0\) is an equipotential surface of the potential \(V(z,z^{\prime})\) created by this charge. Therefore, \(V(z,z^{\prime})\) is the Coulomb potential in the upper half-plane with an ideal conducting boundary along the real line. Equivalently, we may say that this is a potential created by a positive charge at the point \(z^{\prime}\) in the upper half-plane and its reflection image of the opposite charge located in the lower half-plane at \(\bar{z}^{\prime}\). Thus, the \(N\)-soliton solution of the KP hierarchy with the momenta defined by (3.5) and the phases
\[\phi_{i}=-\beta\left(w(\zeta_{i})-\mu\right)=-\beta\left(\log|\bar{\zeta}_{i}- \zeta_{i}|+W(\zeta_{i})-\mu\right), \tag{3.6}\]
describes a Coulomb gas at the inverse temperature \(\beta=2\) on a lattice in the upper half-plane with an ideally conducting boundary along the \(x\)-axis (see the first picture on Figure 1).
In (3.6), the potential \(w(z)\) is a sum of the self-interaction potential, corresponding to the "charge-image" interaction,
\[\tilde{V}(z)=\log|\bar{z}-z|,\]
and of the external potential \(W(z)\). The latter is a sum of a confining potential \(U(z)\), keeping particles in the compact plane domain and determining the initial phases of solitons
\[\varphi_{i}=-\beta(U(\zeta_{i})-\mu),\]
and of a harmonic function, determined by the KP "times", that corresponds to the electric field created by some distant external charges, i.e.,
\[W(z)=U(z)-\frac{1}{2}\sum_{p=1}^{\infty}(z^{p}-\bar{z}^{p})t_{p}.\]
We draw attention to the fact that in order for the external potential to be real all hierarchy "times" must be purely imaginary. The real axis is the equipotential surface of the above harmonic function.
Putting all Coulomb particles on the vertical axis we come to the condition that \(a_{i}=b_{i}\) for the KP soliton momenta, which corresponds to the KdV-hierarchy solitons. This system has a natural interpretation as a nonlocal Ising chain with exactly computable free energy per spin in the translationally invariant cases [15, 18].
### BKP Hierarchy
Soliton solutions of the BKP hierarchy in the Hirota form (1.8) are determined by \(N\) two-dimensional momenta \((a_{i},b_{i})\), an infinite number of odd BKP times \(t_{2p-1}\), \(p=1,2,3,\dots\), and \(N\) initial phases of the solitons \(\varphi_{i}\), so that [11]
\[A_{ij}=\log\frac{(a_{i}-a_{j})(b_{i}-b_{j})(a_{i}-b_{j})(b_{i}-a_{j})}{(a_{i}+ a_{j})(b_{i}+b_{j})(a_{i}+b_{j})(b_{i}+a_{j})}, \tag{3.7}\]
\[\phi_{i}=\varphi_{i}+\sum_{p=1}^{\infty}(a_{i}^{2p-1}+b_{i}^{2p-1})t_{2p-1}. \tag{3.8}\]
Figure 1: From the left to the right. KP hierarchy: a two-dimensional Coulomb gas above an ideal conductor. BKP hierarchy: a two-dimensional Coulomb gas in the corner between an ideal dielectric (the horizontal axis) and an ideal conductor (the vertical axis). 2DTL hierarchy: a two-dimensional Coulomb gas in the exterior of the unit disc with ideally conducting boundary. Positive charges are shown as white squares while their negative images are shown as black squares. The interactions between different charges are shown by dashed lines, while the interactions between charges and their own images are shown by solid lines.
The first non-trivial equation in the hierarchy (the BKP equation) involves three independent variables \(t_{1}\), \(t_{3}\) and \(t_{5}\). Similar to the KP hierarchy (3.3) and (3.4), the BKP hierarchy can be encoded in a single Hirota residue equation:
\[\oint_{z=0}\frac{dz}{2\pi{\rm i}z}e^{\xi(\boldsymbol{t}^{\prime}-\boldsymbol{t},z)}\tau(\boldsymbol{t}^{\prime}-2[z^{-1}])\tau(\boldsymbol{t}+2[z^{-1}])= \tau(\boldsymbol{t})\tau(\boldsymbol{t}^{\prime}),\]
with only odd times involved, i.e. \(\boldsymbol{t}=\{t_{1},t_{3},t_{5},\dots\}\) and \([z^{-1}]=\{z^{-1},\frac{1}{3}z^{-3},\frac{1}{5}z^{-5},\dots\}\).
Choosing soliton momenta as follows
\[a_{i}=\zeta_{i},\quad b_{i}=\bar{\zeta}_{i},\quad\operatorname{Re}\zeta_{i}>0,\]
the BKP phase shifts (3.7) yield the following two-particle interaction potential
\[V(z,z^{\prime})=-\log|z-z^{\prime}|-\log|\bar{z}-z^{\prime}|+\log|z+z^{\prime }|+\log|\bar{z}+z^{\prime}|,\quad\beta=2,\]
which is a Coulomb potential in a quarter of the plane. For convenience, let us choose the right upper corner of the plane, \(\operatorname{Re}z>0\), \(\operatorname{Im}z>0\). Then the horizontal boundary of the corner \(y=0\) is the ideal dielectric, while the vertical boundary \(x=0\) is the ideal conductor. Each charge has now three images: one of the same sign and two of the opposite signs, see picture 2 on Figure 1.
The phases of solitons have the form
\[\phi_{i}=-\beta\left(\tilde{V}(\zeta_{i})+W(\zeta_{i})-\mu\right),\]
where the self-interaction potential is
\[\tilde{V}(z)=\log|\bar{z}-z|-\log|\bar{z}+z|-\log|2z|,\]
and the external potential \(W\) is a sum of the confining potential \(U\) and a harmonic function determined by the BKP times (see equations (3.8) and (3.5))
\[W(z)=U(z)-\frac{1}{2}\sum_{p=1}^{\infty}(z^{2p-1}+\bar{z}^{2p-1})t_{2p-1}.\]
This harmonic function satisfies all the necessary conditions on the boundaries of the corner.
After setting \(a_{i}=b_{i}\) in the phase shifts of the BKP hierarchy, we obtain the KdV soliton phase shifts multiplied by 2. This corresponds to the same lattice gas or Ising chain as in the plain KdV case, but for twice the value of the inverse temperature, \(\beta=4\)[17]. Again, this value of the temperature corresponds to particular random matrix ensembles.
### 2DTL Hierarchy
The two-dimensional Toda lattice (2DTL) hierarchy consists of difference-differential equations [29]. It involves an infinite number of continuous independent variables (2DTL times) \(t_{p}\), where \(p\) runs over all positive and negative integers, excluding zero, as well as a discrete variable \(m\in\mathbb{Z}\). The simplest equation in the hierarchy is the 2DTL equation:
\[\frac{\partial^{2}u(m)}{\partial t_{1}\partial t_{-1}}=e^{-u(m+1)}+e^{-u(m-1) }-2e^{-u(m)}, \tag{3.9}\]
where \(u(m)\) depend on \(t_{p}\), \(p=\pm 1,\pm 2,\pm 3,\dots\). In terms of the \(\tau\)-function the 2DTL equation reads as
\[\frac{\partial\tau(m)}{\partial t_{1}}\frac{\partial\tau(m)}{\partial t_{-1}}- \tau(m)\frac{\partial^{2}\tau(m)}{\partial t_{1}\partial t_{-1}}=\tau(m-1) \tau(m+1),\quad u(m)=\log\frac{\tau(m)^{2}}{\tau(m+1)\tau(m-1)}. \tag{3.10}\]
The \(N\)-soliton \(\tau\) function has the form (1.8) with the phase shifts
\[A_{ij}=\log\frac{(a_{i}-a_{j})(b_{i}-b_{j})}{(a_{i}-b_{j})(b_{i}-a_{j})}, \tag{3.11}\]
which are essentially the KP phase shifts (one should replace \(b_{i}\) by \(-b_{i}\) in (3.1)), and the soliton phases
\[\phi_{i}=\varphi_{i}+m(\log a_{i}-\log b_{i})+\sum_{p=-\infty,p\neq 0}^{ \infty}\left(a_{i}^{p}-b_{i}^{p}\right)t_{p}.\]
Choosing soliton momenta as follows
\[a_{i}=\zeta_{i}/R,\quad b_{i}=R/\bar{\zeta}_{i},\quad|\zeta_{i}|>R,\]
where \(R\) is a positive real number, we come to the interaction potential
\[V(z,z^{\prime})=-\log|z-z^{\prime}|+\log|\bar{z}z^{\prime}-R^{2}|-\log R,\quad \beta=2. \tag{3.12}\]
We now set \(R=1\) and obtain
\[V(z,z^{\prime})=-\log|z-z^{\prime}|+\log|\bar{z}z^{\prime}-1|. \tag{3.13}\]
Then \(V(z,z^{\prime})\) satisfies equation (2.1) in the exterior of the ideally conducting unit disc. In other words, for a charge placed at \(z^{\prime}\), the disc boundary (i.e., the unit circle \(|z|=1\)) is the equipotential surface of (3.13). Equivalently, one can say that this is a potential created by a positive charge at the point \(z^{\prime}\), in the exterior of the disc, and its reflection (where reflection is the inversion with respect to the unit circle) image of the opposite charge located inside the disc at \(1/\bar{z}^{\prime}\), see the third picture on Figure 1.
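Each of the three interaction potentials above (KP, BKP, 2DTL) encodes its boundary conditions, and these can be checked numerically; a small sketch (source and test points are arbitrary choices), where a conducting boundary gives \(V=0\) and a dielectric boundary gives a vanishing normal derivative:

```python
import numpy as np

zp = 1.7 + 0.9j            # source charge in the upper-right quadrant, |zp| > 1

def V_kp(z, zp):           # KP: upper half-plane, conductor along the real axis
    return -np.log(abs(z - zp)) + np.log(abs(np.conj(z) - zp))

def V_bkp(z, zp):          # BKP: quarter plane, dielectric at y = 0, conductor at x = 0
    return (-np.log(abs(z - zp)) - np.log(abs(np.conj(z) - zp))
            + np.log(abs(z + zp)) + np.log(abs(np.conj(z) + zp)))

def V_2dtl(z, zp):         # 2DTL: exterior of the unit disc with conducting boundary
    return -np.log(abs(z - zp)) + np.log(abs(np.conj(z) * zp - 1))

# conductor checks: the potential vanishes on the corresponding boundary
print([V_kp(x + 0j, zp) for x in (-1.2, 0.3, 5.0)])            # real axis
print([V_bkp(1j * y, zp) for y in (0.5, 1.1, 3.0)])            # imaginary axis
print([V_2dtl(np.exp(1j * t), zp) for t in (0.1, 1.0, 2.5)])   # unit circle

# dielectric check for BKP: the normal derivative vanishes at y = 0
h, x = 1e-6, 0.8
print((V_bkp(x + 1j * h, zp) - V_bkp(x - 1j * h, zp)) / (2 * h))   # ~ 0
```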
The self-interaction potential in the 2DTL case is
\[\tilde{V}(z)=\log\left|z-\bar{z}^{-1}\right|,\]
while the external potential is a sum of the confining potential \(U\) and the harmonic function determined by the 2DTL times and the discrete variable \(m\):
\[W(z)=U(z)-m\log|z|-\frac{1}{2}\sum_{p=1}^{\infty}\left(\left(z^{p}-\bar{z}^{- p}\right)t_{p}+\left(\bar{z}^{p}-z^{-p}\right)\bar{t}_{p}\right).\]
Note that we have redefined the negative times as \(t_{-p}=-\bar{t}_{p}\), \(p>0\), in order for the potential to be real. Obviously, the unit circle is an equipotential surface of the harmonic field.
Concluding this section we note that the Coulomb interaction potential and the boundary can be conformally transformed. Indeed, let \(f(z)\) be a conformal mapping from the exterior of a
simple compact domain \(\Omega\) to the exterior of the unit circle \(|z|=1\); then \(\tilde{V}(z,z^{\prime})=V(f(z),f(z^{\prime}))\) is also a Coulomb potential whose equipotential surface is \(\partial\Omega\). Thus the \(N\)-soliton solution with momenta \(a_{i}=f(\zeta_{i})\) and \(b_{i}=\bar{f}(\bar{\zeta}_{i})\) corresponds to the grand partition function of the lattice Coulomb gas in the exterior of the conducting domain \(\Omega\).
The simplest mapping \(f(z)=z/R\) transforms (3.13) to (3.12). As an example of a less trivial mapping one can take \(f(z)=z/2-\sqrt{z^{2}/4-1}\) (i.e. the mapping inverse to \(z\to z+1/z\)) from the exterior of the real segment \([-2,2]\) to that of the unit disc. In this case, the ideal conductor is placed along that segment.
## 4 2DTL Solitons and Normal Random Matrices
Let us now remove the constraint \(R=1\), and consider a Coulomb gas in the exterior of the disc of an arbitrary radius \(R\). Note that the 2DTL hierarchy is invariant under the transformation
\[m\to m-j,\quad t_{p}\to R^{p}t_{p},\quad\bar{t}_{p}\to R^{p}\bar{t}_{p},\quad \tau(m)\to R^{m^{2}+cm+C}\tau(m),\]
where \(c,C\) and \(j\in\mathbb{Z}\) are arbitrary constants. In terms of the dependent variable \(u\) this transformation produces the shift \(u\to u-2\log R\), see (3.10). In particular, the \(\tau\)-function
\[\tilde{\tau}_{N}(m,t_{1},\bar{t}_{1},t_{2},\bar{t}_{2},\dots)=R^{m^{2}}\tau_{ N}(m-1,Rt_{1},R\bar{t}_{1},R^{2}t_{2},R^{2}\bar{t}_{2},\dots)\]
is also a solution of the hierarchy. This follows from the bilinear equation (notations are defined in (3.4))
\[\oint_{z=0}\frac{dz}{2\pi{\rm i}}z^{m^{\prime}-m}e^{\xi({\boldsymbol{t}}^{ \prime}-{\boldsymbol{t}},z)}\tau(m^{\prime},{\boldsymbol{t}}^{\prime}-[z^{-1} ],\boldsymbol{\bar{t}}^{\prime})\tau(m,{\boldsymbol{t}}+[z^{-1}],\boldsymbol {\bar{t}})=\]
\[=\oint_{z=0}\frac{dz}{2\pi{\rm i}}z^{m^{\prime}-m}e^{\xi(\boldsymbol{\bar{t}} ^{\prime}-\boldsymbol{\bar{t}},z^{-1})}\tau(m^{\prime}+1,{\boldsymbol{t}}^{ \prime},\boldsymbol{\bar{t}}^{\prime}-[z])\tau(m-1,{\boldsymbol{t}}, \boldsymbol{\bar{t}}+[z])\]
which encodes the whole hierarchy (for a review see, e.g., [29]).
Repeating the above computations for an arbitrary \(R\), we get the transformed \(N\)-soliton \(\tau\)-function
\[\tilde{\tau}_{N}(m)=R^{m^{2}}\sum_{\nu_{1}=0,1}\cdots\sum_{\nu_{N}=0,1}e^{- \beta E(\nu_{1},\dots,\nu_{N})},\quad\beta=2, \tag{4.1}\]
where
\[E=-\sum_{1\leq i<j\leq N}\left(\log|\zeta_{i}-\zeta_{j}|-\log|\zeta_{i}\bar{ \zeta}_{j}-R^{2}|+\log R\right)\nu_{i}\nu_{j}\]
\[-\sum_{i=1}^{N}\left(\frac{\varphi_{i}}{2}+(m-1)(\log|\zeta_{i}|-\log R)+{ \cal U}(\zeta_{i})\right)\nu_{i}\]
and \({\cal U}\) is the harmonic function of the form
\[{\cal U}(z)=-\frac{1}{2}\sum_{p=1}^{\infty}\left(\left(z^{p}-\frac{R^{2p}}{ \bar{z}^{p}}\right)t_{p}+\left(\bar{z}^{p}-\frac{R^{2p}}{z^{p}}\right)\bar{t}_ {p}\right).\]
Fixing the initial phases of solitons as
\[\varphi_{i}=-\beta U(\zeta_{i})-\log R,\quad\beta=2, \tag{4.2}\]
where \(U(z)\) is a confining potential, we then take the limit of small \(R\). In this limit, when the disc contracts to a point, the harmonic function becomes analytic at \(z=0\)
\[{\cal U}(z)=-\frac{1}{2}\sum_{p=1}^{\infty}\left(z^{p}t_{p}+\bar{z}^{p}\bar{t}_{ p}\right),\quad R\to 0, \tag{4.3}\]
and the energy of the gas becomes
\[E=-\sum_{1\leq i<j\leq N}\nu_{i}\nu_{j}\log|\zeta_{i}-\zeta_{j}|-\sum_{i=1}^{N} \left(U(\zeta_{i})+{\cal U}(\zeta_{i})+(m-n)\log|\zeta_{i}|\right)\nu_{i}-n \left(\frac{n}{2}-m\right)\log R. \tag{4.4}\]
Here
\[n=\sum_{i=1}^{N}\nu_{i}\]
is the number of particles. In the \(R\to 0\) limit the last term in (4.4), i.e. \(-n\left(\frac{n}{2}-m\right)\log R\), is the dominant one. Combining it with the factor \(R^{m^{2}}\) in (4.1), we obtain the common factor of the \(n\)-particle terms in the \(\tilde{\tau}_{N}(m)\)-function
\[R^{m^{2}}e^{\beta n\left(\frac{n}{2}-m\right)\log R}=R^{(m-n)^{2}},\]
since \(\beta=2\). We see that only the \(n\)-particle terms with \(n=\sum_{i}\nu_{i}=m\) are finite for \(R\to 0\), while the other terms vanish. Also, since the total number of gas particles \(n\) cannot be negative or exceed the number of lattice sites \(N\), the slowest rate of vanishing of \(\tilde{\tau}_{N}(m)\) for \(m\leq 0\) is reached at \(n=0\), while for \(m>N\) it is reached at \(n=N\) (i.e., at one of the ends of the interval \(0\leq n\leq N\)), so that all the terms in (4.1) with \(m\) outside this interval vanish. Thus, in the limit \(R\to 0\), we obtain
\[\tilde{\tau}_{N}(m)={\cal Z}_{m},\quad 0\leq m\leq N,\quad\tilde{\tau}_{N}(m<0)= \tilde{\tau}_{N}(m>N)=0,\]
where \({\cal Z}_{m}\) is the partition function of the gas of \(m\) Coulomb particles on the lattice \(\zeta=\{\zeta_{1},\zeta_{2},\ldots,\zeta_{N}\}\) in the complex plane without boundaries. No more than one particle can occupy a lattice site and
\[{\cal Z}_{m}=\frac{1}{m!}\sum_{z_{1}\in\zeta}\cdots\sum_{z_{m}\in\zeta}e^{- \beta E_{m}(z_{1},\ldots,z_{m})},\quad\beta=2, \tag{4.5}\]
with the gas energy being
\[E_{m}(z_{1},\ldots,z_{m})=-\sum_{1\leq i<j\leq m}\log|z_{i}-z_{j}|+\sum_{1\leq i \leq m}\left(U(z_{i})+{\cal U}(z_{i})\right). \tag{4.6}\]
Finally, we can write \({\cal Z}_{0}=1\) and
\[{\cal Z}_{m}=\frac{1}{m!}\sum_{\ell_{1}=1}^{N}\cdots\sum_{\ell_{m}=1}^{N}\prod _{1\leq i<j\leq m}\left|\zeta_{\ell_{i}}-\zeta_{\ell_{j}}\right|^{2}\prod_{i= 1}^{m}\exp\left(-2U(\zeta_{\ell_{i}})-2{\cal U}(\zeta_{\ell_{i}})\right),\quad m >0. \tag{4.7}\]
Thus, we have shown that the \(m\)-particle partition function \({\cal Z}_{m}\), determined in (4.5), (4.6) or (4.7), with the harmonic external potential \({\cal U}\) fixed in (4.3), is a \(\tau\)-function of the two-dimensional Toda lattice of the length \(N-1\). In other words, the function
\[\tau(m,t_{1},\bar{t}_{1},t_{2},\bar{t}_{2},\dots)=\left\{\begin{array}{cc}{ \cal Z}_{m}(t_{1},\bar{t}_{1},t_{2},\bar{t}_{2},\dots),&\mbox{if}\quad m=0,1,2, \dots,N,\\ 0,&\mbox{otherwise},\end{array}\right. \tag{4.8}\]
is a solution of the 2DTL hierarchy. In terms of \(u(m)\), the 2DTL equation (3.9) becomes a system of \(N-1\) differential equations for \(u(1),\dots,u(N-1)\). For \(m=2,3,\dots,N-2\), the equations are fixed in (3.9), while for \(m=1\) and \(m=N-1\) we have \(\partial_{t_{1}}\partial_{t_{-1}}u(1)=e^{-u(2)}-2e^{-u(1)}\) and \(\partial_{t_{1}}\partial_{t_{-1}}u(N-1)=e^{-u(N-2)}-2e^{-u(N-1)}\), respectively.
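The limiting procedure of this section can be tested numerically on a small lattice by comparing (4.1), with the phases (4.2) and all times switched off, against (4.7); a minimal sketch, with an illustrative confining potential:

```python
import numpy as np
from itertools import product, combinations

rng = np.random.default_rng(0)
N, m = 4, 2
zeta = rng.normal(size=N) + 1j * rng.normal(size=N) + 3.0   # lattice sites with |zeta_i| > R
U = lambda z: 0.25 * abs(z) ** 2                            # illustrative confining potential

def tau_tilde(m, R):
    """R^{m^2} times the soliton sum (4.1), with all times t_p set to zero."""
    logR = np.log(R)
    total = 0.0
    for nu in product((0, 1), repeat=N):
        E = 0.0
        for i, j in combinations(range(N), 2):
            E -= (np.log(abs(zeta[i] - zeta[j]))
                  - np.log(abs(zeta[i] * np.conj(zeta[j]) - R**2)) + logR) * nu[i] * nu[j]
        for i in range(N):
            phi = -2.0 * U(zeta[i]) - logR                  # initial phases, eq. (4.2)
            E -= (phi / 2 + (m - 1) * (np.log(abs(zeta[i])) - logR)) * nu[i]
        total += np.exp(-2.0 * E)
    return R ** (m**2) * total

def Z_m(m):
    """m-particle partition function (4.7) on the same lattice (as a subset sum)."""
    return sum(np.prod([abs(zeta[i] - zeta[j]) ** 2 for i, j in combinations(S, 2)])
               * np.exp(-2.0 * sum(U(zeta[i]) for i in S))
               for S in combinations(range(N), m))

for R in (1e-1, 1e-2, 1e-3):
    print(R, tau_tilde(m, R))    # approaches Z_m as R -> 0
print("Z_m =", Z_m(m))
```

As \(R\) decreases, \(\tilde{\tau}_{N}(m)\) settles on \(\mathcal{Z}_{m}\), while the configurations with \(n\neq m\) die out as \(R^{(m-n)^{2}}\), in line with the argument above.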
The \(m\)-particle partition function (4.7) (or, equivalently, (4.5), (4.6)) can be rewritten in the form
\[{\cal Z}_{m}=\frac{1}{m!}\int\prod_{1\leq i<j\leq m}|z_{i}-z_{j}|^{2}\prod_{j =1}^{m}e^{-2{\cal U}(z_{j})}\rho(z_{j})dx_{j}dy_{j}, \tag{4.9}\]
where
\[\rho(z)=\sum_{i=1}^{N}e^{-2U(z)}\delta(x-X_{i})\delta(y-Y_{i}),\quad\zeta_{i}= X_{i}+{\rm i}Y_{i}.\]
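Before passing to the continuum limit, one can also check the bilinear equation (3.10) directly for the discrete \(\tau\)-function (4.8). Keeping only the \(p=1\) times, each configuration weight carries a factor \(e^{z t_{1}+\bar{z}\bar{t}_{1}}\) per particle, so \(\partial_{t_{1}}\) inserts \(\sum_{i}z_{i}\) and \(\partial_{t_{-1}}=-\partial_{\bar{t}_{1}}\) inserts \(-\sum_{i}\bar{z}_{i}\); a minimal sketch with illustrative data:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
N, m = 4, 2
z = rng.normal(size=N) + 1j * rng.normal(size=N)    # lattice points (illustrative)
t1 = 0.13 + 0.07j                                   # only t_1 (and its conjugate) nonzero
U = lambda w: 0.3 * abs(w) ** 2                     # illustrative confining potential

def weight(S):
    """Configuration weight: squared Vandermonde times one-particle factors."""
    vdm = np.prod([abs(z[i] - z[j]) ** 2 for i, j in combinations(S, 2)])
    one = np.prod([np.exp(-2 * U(z[i]) + 2 * np.real(z[i] * t1)) for i in S])
    return vdm * one

def moments(k):
    """tau(k) and its t_1-derivative data via insertions of a = sum_i z_i."""
    tau, d1, d11 = 0.0, 0.0 + 0j, 0.0
    for S in combinations(range(N), k):
        w, a = weight(S), sum(z[i] for i in S)
        tau += w
        d1 += a * w              # d tau / d t_1
        d11 += abs(a) ** 2 * w   # insertion for the mixed derivative
    return tau, d1, d11

tau_m, d1, d11 = moments(m)
tau_minus = moments(m - 1)[0]
tau_plus = moments(m + 1)[0]

# eq. (3.10), using tau_{t_{-1}} = -conj(d1) and tau_{t_1 t_{-1}} = -d11:
lhs = (tau_m * d11 - d1 * np.conj(d1)).real
print(lhs, tau_minus * tau_plus)    # the two numbers coincide
```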
In the continuous \(N\to\infty\) limit, the spacing between sites tends to zero, i.e. \(\zeta_{i}=\epsilon\xi_{i}\), \(\epsilon\to 0\), while the area of the lattice, which is of the order of \(\epsilon^{2}N\), remains finite. In this limit the measure \(\rho\) tends to the continuous measure
\[\rho(z)=\epsilon^{-2}e^{-2U(z)}\varrho(z), \tag{4.10}\]
where \(\varrho(z)\) is a normalized finite density of the lattice sites. We recall that the \(\tau\)-function is defined modulo a gauge factor, and, in particular, it can be multiplied by any constant. Therefore, the diverging factor \(\epsilon^{-2}\) can be discarded in the above equality.
The \(\tau\)-function (4.9) is the partition function of the random normal \(m\times m\) matrix model. Depending on the measure, we obtain either a continuous or a discrete version of the model. Note that the standard hermitian matrix model is a special case of the normal matrix model, when the measure \(\rho\) is concentrated on the real line, i.e. this model enters our considerations as well.
By choosing \(\varphi_{i}=-\beta U(\zeta_{i})+(2\ell-1)\log R\), \(\ell\in\mathbb{Z}\), instead of (4.2), we will get a partition function of the Coulomb gas in the presence of a fixed point charge of the value \(\ell\) placed at the point \(z=0\). Determinantal representations of the partition functions of gases with several fixed charges were given, e.g., in [4, 9].
Note that the partition function (4.9) of the normal matrix model with an arbitrary (continuous and/or discrete) measure can be obtained without the lattice scaling procedure, by applying our limiting procedure directly to the well-known solution of the hierarchy (see, e.g., [12, 31])
\[\tau=\sum_{n\geq 0}\frac{1}{n!}\int\cdots\int\prod_{1\leq i<j\leq n}\frac{(p_{i}- p_{j})(q_{i}-q_{j})}{(p_{i}-q_{j})(q_{i}-p_{j})}\prod_{i=1}^{n}e^{m(\log p_{i}- \log q_{i})+\sum_{k\neq 0}^{\infty}\left(p_{i}^{k}-q_{i}^{k}\right)t_{k}}r(p_{i},q _{i})dp_{i}dq_{i},\]
for which the \(N\)-soliton solution is a particular case corresponding to \(r(p,q)=\sum_{i=1}^{N}r_{i}\delta(p-a_{i})\delta(q-b_{i})\). The procedure is essentially the same as in the case of the \(N\)-soliton solutions, except that the continuum limit at the final stage is not needed.
## 5 Conclusions
The grand partition function of the Coulomb-Dyson gas and the \(m\)-particle partition function related to the normal matrix model are usually obtained by using different "group-like" elements in the free-fermion approach to integrable hierarchies [31]. In this article, we have shown that the latter can be obtained from the former by a special limiting procedure.
It is well known that the partition function of the standard normal random matrix model having the continuous measure emerges in the dispersionless limit of the 2DTL hierarchy corresponding to a special \(m\to\infty\) scaling limit. In the dispersionless limit one introduces the scaled ("slow") times \(T_{i}\), such that \(T_{\pm i}=ht_{\pm i}\), and \(T_{0}=hm\). Here, one considers the double scaling limit, such that \(h\to 0\), \(m\to\infty\) with \(T_{0}\) remaining finite. The scaling parameter \(h\) plays the role of spacing on the Toda lattice (i.e., the minimal step in "time" \(T_{0}\), not to be confused with spacing on the soliton momentum lattice). When \(h\to 0\), the time \(T_{0}\) becomes the continuous variable for slowly varying solutions. For instance, in this limit the Toda lattice equation (3.9) becomes
\[\frac{\partial^{2}u}{\partial T_{1}\partial T_{-1}}=\frac{\partial^{2}e^{-u}}{ \partial T_{0}^{2}}.\]
Commutators in the Lax representation of the hierarchy degenerate to the Poisson brackets and the Lax operators become functions, i.e. the dispersionless limit is a kind of quasiclassical limit, with \(h\) playing the role of the "Planck constant" (for more details see e.g. [30]).
From the point of view of the matrix models, introduction of the "slow" times corresponds to the scaling of the harmonic potential \({\cal U}(z,t)={\cal U}(z,T)/h\) (see eq. (4.3)). Also, by making the measure (4.10) dependent on the parameter \(h\) through the \(h\)-dependence of the confining potential \(U(z,h)=v(z)/h\), we get the matrix model with the scaled total potential \(U(z,h)+{\cal U}(z,t)=\frac{1}{h}\left(v(z)+{\cal U}(z,T)\right)\). The potential diverges as \(h\to 0\), while \(hm\) (where \(m\) is the size of the matrix) remains finite. In this limit, for a wide class of potentials, the eigenvalues of random normal matrices occupy a compact domain in the complex plane, called a "droplet". In the case of the confining potential created by the uniform neutralizing background, the evolution of boundaries of this droplet with respect to the matrix size, i.e. with respect to the time \(T_{0}\), is a solution to the Laplacian growth, or Hele-Shaw, moving boundary problem of 2D fluid dynamics (with the area of the droplet proportional to \(T_{0}\), see, e.g., [8, 19, 22, 21]).
The Laplacian growth is a model of evolution of a droplet whose boundary is driven by a harmonic field being a potential for the growth velocity. The field is a Green function of the exterior of the droplet which vanishes on the droplet boundary. The normal velocity of the boundary is proportional to the normal derivative of the field. The problem in which the droplet is expanding is linearly unstable and ill-posed for almost any initial condition (see, e.g. [7] and references therein). The finite \(m\) normal matrix model provides a sort of integrable regularization of the above ill-posed problem. The discretization of the matrix model, related to \(N\)-soliton solutions, could provide another type of regularization, a kind of lattice regularization. In this case, the continuous Laplacian growth is recovered in a pair of double scaling limits \(N\to\infty,\epsilon\to 0\), \(m\to\infty,h\to 0\).
It is worth mentioning that apart from the standard normal matrix model, which is a generalization of the hermitian random matrix model to normal matrices, non-hermitian generalizations of the symmetric and quaternion-real models were considered in the literature as well, see [24] and references therein. These models, called "generalized Ginibre ensembles",
turn out to be related to the so-called "large BKP" and "large 2-BKP" hierarchies. It would be interesting to study solitonic \(\tau\)-functions of these hierarchies in the context of the statistical mechanics of Coulomb gases with boundaries. Some soliton solutions of the large ("fermionic") BKP hierarchy were written down in [26] and this may be useful for further studies.
It would also be interesting to try to apply our method to the Mehta-Pandey interpolating ensembles, to circular \(\beta=1,4\) ensembles [20], and also to certain new solvable ensembles of random matrices such as those recently considered in [2] and [25].
Note that since the KP and 2DTL phase shifts are essentially the same, the Coulomb potential related to the 2DTL could already be obtained from the KP phase shifts (3.1) by setting \(a_{i}=\zeta_{i}/R\), \(b_{i}=-R/\zeta_{i}\). One might try to apply this choice to the BKP phase shifts (3.7). However, the corresponding interaction potential has no clear physical meaning in this case: here not only the images obtained by reflections \(\zeta_{i}\to R^{2}/\bar{\zeta}_{i}\) with respect to the circle, but also the inversions \(\zeta_{i}\to-\zeta_{i}\) with respect to the origin are present (see Figure 2).
Concluding this article we would like to remind the reader that, while useful from the point of view of random matrices and Dyson gases, the \(\tau\)-function approach has an essential drawback in the framework of general statistical mechanics. Namely, the partition functions derived from the \(\tau\)-functions of integrable hierarchies correspond to Coulomb gas (or Ising [15, 16]) models at fixed (inverse) temperature \(\beta=2\). For the translationally invariant Ising models with non-local interaction in one dimension, related to the self-similar potentials [28] and certain soliton solutions of the KdV and BKP hierarchies [15, 16], we have temperatures \(\beta=2\) and \(\beta=4\), respectively. Restriction to fixed temperatures is a consequence of the fact that the integrable hierarchies are nothing but the Plücker relations on an infinite dimensional Grassmannian [5, 12, 27] that can be obtained in the framework of the free-fermion formalism.
**Acknowledgments.** This study has been partially funded within the framework of the HSE University Basic Research Program. I. Loutsenko and O. Yermolaeva would like to thank Centre de Recherches Mathematiques for support. The authors also thank the referees for their helpful remarks.
Figure 2: Positive charges/images are shown as white squares while negative images are shown as black squares. The interactions between different charges are shown by dashed lines, while the interactions between charges and their own images are shown by solid lines. |
2309.06325 | Distributed Precoding for Satellite-Terrestrial Integrated Networks
Without Sharing CSIT: A Rate-Splitting Approach | Satellite-terrestrial integrated networks (STINs) are a promising architecture
for providing global coverage. In STINs, full frequency reuse between a
satellite and a terrestrial base station (BS) is encouraged for aggressive
spectrum reuse, which induces non-negligible amount of interference. To address
the interference management problem in STINs, this paper proposes a novel
distributed precoding method. Key features of our method are: i) a
rate-splitting (RS) strategy is incorporated for efficient interference
management and ii) the precoders are designed in a distributed way without
sharing channel state information between a satellite and a terrestrial BS.
Specifically, to design the precoders in a distributed fashion, we put forth a
spectral efficiency decoupling technique, which disentangles the total spectral
efficiency function into two distinct terms, each of which is dependent solely
on the satellite's precoder and the terrestrial BS's precoder, respectively.
Then, to resolve the non-smoothness raised by the RS strategy, we approximate
the spectral efficiency expression as a smooth function by using the LogSumExp
technique; thereafter we develop a generalized power iteration inspired
optimization algorithm built based on the first-order optimality condition.
Simulation results demonstrate that the proposed method offers considerable
spectral efficiency gains compared to the existing methods. | Doseon Kim, Sungyoon Cho, Wonjae Shin, Jeonghun Park, Dong Ku Kim | 2023-09-12T15:37:44Z | http://arxiv.org/abs/2309.06325v4 | # Distributed Precoding for Satellite-Terrestrial Integrated Networks Without Sharing CSIT:
###### Abstract
Satellite-terrestrial integrated networks (STINs) are a promising architecture for providing global coverage. In STINs, full frequency reuse between a satellite and a terrestrial base station (BS) is encouraged for enhancing spectral efficiency, which incurs a non-negligible amount of interference. To address the interference management problem in STINs, this paper proposes a novel distributed precoding method. Key features of our method are: i) a rate-splitting (RS) strategy is incorporated for efficient interference management, and ii) precoders are designed in a distributed way without sharing channel state information between a satellite and a terrestrial BS. Specifically, to design precoders in a distributed fashion, we put forth a spectral efficiency decoupling technique. This technique disentangles the total spectral efficiency into two distinct terms, each dependent solely on the satellite's precoder and the terrestrial BS's precoder, respectively. Then, to resolve the non-smoothness raised by adopting the RS strategy, we approximate the spectral efficiency expression as a smooth function; thereafter we develop a generalized power iteration inspired optimization algorithm built on the first-order optimality condition. Simulation results demonstrate that the proposed method improves the spectral efficiency (by around \(20\sim 29\%\)) compared to existing distributed precoding schemes.
Satellite-terrestrial integrated networks, rate-splitting, distributed precoding, generalized power iteration
## I Introduction
Managing inter-cell interference (ICI) is one of the long-standing problems in cellular networks. Due to an inherent characteristic of resource reuse, ICI fundamentally limits the rate performance [1]; thus, efficiently handling the interference is key to achieving high spectral efficiency in cellular networks. So far, numerous studies have been conducted to mitigate ICI by using multi-cell multiple-input multiple-output (MIMO) cooperative transmission [2]. In most MIMO cooperation techniques, a key principle is designing precoders in a coordinated fashion (i.e., coordinated precoding), where multiple base stations (BSs) share channel state information at transmitters (CSIT), then obtain their precoders by jointly exploiting the shared CSIT.
Recently, with the growing interest in satellite communications, satellite-terrestrial integrated networks (STINs) have gained significant attention. One promising scenario of operating STINs is using satellites to serve alienated users left outside of a terrestrial coverage region [3]. In such a scenario, assuming full frequency reuse (FFR) for aggressive spectrum reuse [4], the interference stemming from a satellite is considered a significant factor that deteriorates the spectral efficiency of STINs [5]. To handle this interference, one may be tempted to apply the widely used coordinated precoding approaches from previous research that require sharing CSIT between a satellite and a terrestrial BS. Nonetheless, CSIT sharing between a satellite and a terrestrial BS is difficult. Unlike terrestrial BSs, which are connected via a wireline transport network such as the X2 interface, satellites require extra wireless resources to connect to a terrestrial gateway [5]. For this reason, sharing CSIT in STINs causes significant overheads, which hinders the use of coordinated precoding methods. Accordingly, a desirable way to address the interference in STINs is a distributed precoding method, where the satellite and the terrestrial BS each determine their precoders individually without sharing CSIT. Motivated by this, we propose a novel distributed precoding method for efficiently managing the interference in STINs.
### _Related Work_
In the design of distributed downlink precoding, it is infeasible to compute the exact signal-to-interference-plus-noise ratio (SINR) because CSIT sharing is not allowed. To address this, in [6], the signal-to-leakage-plus-noise ratio (SLNR) was considered, which is computed as the ratio between the signal power and the sum of the leakage interference and the noise power. Here, the leakage interference power indicates the amount of transmit power that a BS leaks into neighboring cells. By doing this, one can readily come up with downlink precoding vectors by taking advantage of the equivalence between the SLNR and the uplink SINR, wherein the optimal combiner is well known to be the minimum mean-squared error (MMSE) combiner. In [7, 8, 9], virtual SINR (VSINR) was proposed and studied, which is an extension of SLNR to multi-point joint transmission setups. Especially, in [9], the Pareto optimal boundary of linear precoding was characterized using VSINR.
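For concreteness, a minimal sketch of an SLNR-based distributed precoder in the single-antenna-user case follows; this is the generic construction implied by the discussion above (for a rank-one signal term, the SLNR-optimal beamformer reduces to an inverse-matrix-times-vector form), not code from [6].

```python
import numpy as np

def slnr_precoders(H, sigma2):
    """H: N x K matrix whose k-th column is user k's channel. For each user,
    maximize |h_k^H w|^2 / (w^H (sigma2*I + sum_{j!=k} h_j h_j^H) w); the
    maximizer is B^{-1} h_k up to scaling. No other-cell CSIT is needed."""
    N, K = H.shape
    W = np.zeros((N, K), dtype=complex)
    for k in range(K):
        leak = np.delete(H, k, axis=1)                  # channels leakage flows into
        B = sigma2 * np.eye(N) + leak @ leak.conj().T   # leakage-plus-noise matrix
        w = np.linalg.solve(B, H[:, k])
        W[:, k] = w / np.linalg.norm(w)                 # unit-norm beamformer
    return W

rng = np.random.default_rng(1)
H = (rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))) / np.sqrt(2)
W = slnr_precoders(H, sigma2=0.1)
```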
Besides the Pareto optimality analysis, practical distributed precoding schemes were also actively investigated. In [10], assuming heterogeneous network scenarios, a per-cell energy efficiency maximization precoding was developed based on weighted MMSE, while restricting the maximum amount of leakage to other cells. SLNR was also introduced to develop precoding methods for maximizing spectral efficiency and
energy efficiency [11]. In [12], the inter-user interference (IUI) term and the other-cell interference term are separated from the sum spectral efficiency by assuming high signal-to-noise ratio (SNR) approximation, by which the distributed precoding scheme was developed for jointly mitigating the IUI and the leakage. In [13], a new metric called signal-to-interference-plus-leakage-plus-noise ratio (SILNR) was proposed, which primarily differs from SLNR in how it handles the IUI while considering the product term of leakage interference. [13] showed that for a large number of antennas at a BS, SILNR-based distributed precoding provides an equivalent sum spectral efficiency to the coordinated precoding scheme. In [14], a distributed precoding method with network virtualization was proposed in a semi-closed form.
Despite the abundant prior work on distributed precoding methods, the existing schemes are inherently limited in an information-theoretic sense. To be specific, the existing methods implicitly assume a treating-interference-as-noise (TIN) decoding strategy, wherein each user only decodes its own signal without any advanced treatment of interference. From an information-theoretic perspective, TIN is optimal in a weak interference regime [15], where it achieves quasi-optimal spectral efficiency [16]. However, in STINs, it is infeasible for a terrestrial receiver to perfectly estimate the satellite interference channels due to the large propagation distance and severe randomness, such as large Doppler effects. As a result, it cannot sufficiently suppress interference from the satellite and may remain in a strong interference regime. In such a regime, a terrestrial receiver using TIN loses its optimality and needs a proactive decoding strategy, such as rate-splitting (RS) [17].
From such a motivation, there exists some prior work that employed RS in STINs based on satellite-terrestrial BS coordination. For instance, in [5], two approaches were proposed: one is a coordinated approach that shares CSIT, and the other is a cooperative approach that shares not only CSIT but also transmitted data. For both approaches, a precoding scheme incorporating RS was proposed to mitigate the interference in STINs. In [18], a multi-layer interference management scheme was proposed in the multiple-satellite network, where the RS strategy is implemented across different satellites through CSIT sharing. To use such methods [5, 18], however, CSIT or transmitted data should be shared between a satellite and a terrestrial BS. As mentioned above, this sharing is not easy in STINs due to the lack of a dedicated link between a satellite and a terrestrial BS. As a result, a precoding method that i) incorporates RS and ii) requires no CSIT sharing needs to be developed. This serves as a primary motivation of our work.
### _Contributions_
In this paper, we consider the downlink system of STINs. In our scenario, a terrestrial BS serves terrestrial users (TUs) located within its coverage, while a low-earth-orbit (LEO) satellite serves satellite users (SUs) situated beyond the coverage range of the terrestrial BS. For this reason, the terrestrial BS does not impose any interference on the SUs. On the contrary, the satellite can incur interference to certain TUs in regions reachable by the LEO satellite signal. Given this setup, we propose a distributed precoding method to efficiently handle the interference without sharing CSIT. Key features of our method are summarized as follows.
* **RS strategy**: Our precoding method incorporates the RS strategy. The RS strategy enables the SUs and a part of the TUs to mitigate the interference coming from the satellite by using successive interference cancellation (SIC). To this end, the messages intended for the SUs are split into a common part and a private part, wherein the common parts are jointly encoded into a common stream. The common stream is constructed to be decoded by the SUs and some of the TUs affected by the LEO satellite signal. To be specific, the SUs and a part of the TUs first decode the common stream while treating the other streams as noise, eliminate the common stream with SIC, and thereafter decode the private stream. Upon this decoding process of the RS strategy, we derive a lower bound on the spectral efficiencies by considering the effects of the imperfect CSIT and the RS decoding condition. Subsequently, we formulate the sum spectral efficiency maximization problem.
* **Spectral efficiency decoupling**: Addressing the posed problem necessitates a coordinated approach involving CSIT sharing between the LEO satellite and the terrestrial BS. To solve this problem in a distributed fashion, we introduce a novel spectral efficiency decoupling technique. Specifically, we take the average of the IUI terms over the randomness related to the incomplete knowledge of the channel fading process. This allows us to decouple the spectral efficiencies into two separated terms, each of which is only associated with the terrestrial BS precoding vector and the satellite precoding vector, respectively. Based on this decoupling, we transform the original problem into a distributed precoding optimization problem.
* **Distributed precoding optimization**: Even after transforming the original problem into the distributed problem, finding its solution is challenging since the problem is non-convex and non-smooth. To address this, we first approximate the objective function by using the LogSumExp (LSE) technique (a small numerical sketch of this smoothing is given after this list). Then we characterize the first-order Karush-Kuhn-Tucker (KKT) conditions and cast them as generalized nonlinear eigenvalue problems. We show that finding the principal eigenvector of these nonlinear eigenvalue problems is equivalent to finding a local optimal solution. Leveraging this, we propose an iterative algorithm named STIN-generalized power iteration (STIN-GPI).
* **Simulations**: Through link-level and system-level simulations, we numerically demonstrate that the proposed STIN-GPI algorithm outperforms the current state-of-the-art distributed precoding methods.
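As referenced in the list above, the min over users that defines the common-stream rate is non-smooth; the LSE technique replaces it by a smooth surrogate. A minimal sketch follows (the smoothing parameter \(\alpha\) and the natural logarithm are illustrative choices, not values from the paper):

```python
import numpy as np

def lse_min(x, alpha):
    """Smooth approximation of min(x): -(1/alpha) * log(sum(exp(-alpha*x))).
    It lower-bounds min(x) and approaches it from below as alpha grows."""
    return -np.log(np.sum(np.exp(-alpha * np.asarray(x)))) / alpha

rates = np.array([1.3, 0.9, 2.1])
for alpha in (1.0, 10.0, 100.0):
    print(alpha, lse_min(rates, alpha))   # tends to 0.9 as alpha increases
```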
## II System Models
### _Network Model_
We consider a STIN that consists of a LEO satellite and a terrestrial BS. We focus on the downlink system with FFR,
wherein both the LEO satellite and the terrestrial BS use the same frequency band. As illustrated in Fig. 1, there exist \(K_{s}\) SUs and \(K_{t}\) TUs, and the total number of users is \(K=K_{s}+K_{t}\). We denote \(\mathcal{K}_{s}=\{1,\cdots,K_{s}\}\) as a set of the SUs with \(|\mathcal{K}_{s}|=K_{s}\) and \(\mathcal{K}_{t}=\{1,\cdots,K_{t}\}\) as a set of the TUs with \(|\mathcal{K}_{t}|=K_{t}\). We assume that the TUs are associated with the terrestrial BS and the SUs are connected to the LEO satellite. In our setup, the terrestrial BS does not incur any interference to the SUs since all the SUs are located outside of the coverage region of the terrestrial BS. On the contrary, the LEO satellite can incur interference to some TUs1 when they are located within the coverage region of the LEO satellite [5] as shown in Fig. 1. We define a subset of the TUs that experiences the interference from the LEO satellite as \(\mathcal{K}_{t}^{\text{int}}\subseteq\mathcal{K}_{t}\), where \(|\mathcal{K}_{t}^{\text{int}}|=K_{t}^{\text{int}}\). The total number of users in the LEO satellite coverage is \(K^{\text{sat}}=K_{t}^{\text{int}}+K_{s}\). If \(K_{t}=K_{s}=1\) and \(\mathcal{K}_{t}^{\text{int}}=\mathcal{K}_{t}\), the considered setup corresponds to the \(Z\) channel [19]. Our interference environment is an extension of the Z channel for \(K_{t}\geq 1\), \(K_{s}\geq 1\), and \(\mathcal{K}_{t}^{\text{int}}\subseteq\mathcal{K}_{t}\).
Footnote 1: If SUs enter the terrestrial coverage region, they change their association to the terrestrial BS and become TUs [5].
The LEO satellite is equipped with the uniform planar arrays (UPAs) with \(M_{1}\) and \(M_{2}\) array elements in the \(x\)-axis and \(y\)-axis, respectively. The total number of antennas at the LEO satellite is \(M\triangleq M_{1}M_{2}\). The terrestrial BS is also equipped with \(N\triangleq N_{1}N_{2}\) number of UPAs, where it consists of \(N_{1}\) and \(N_{2}\) array elements in the \(x\)-axis and \(y\)-axis, respectively. Furthermore, we assume that all users are equipped with a single antenna.
### _Channel Model_
#### II-B1 Satellite Channel
For modeling the satellite channel, we use a widely adopted multi-path channel model. Using ray-tracing based modeling, the complex baseband channel impulse response of the downlink satellite channel vector for SU \(u,\forall u\in\mathcal{K}_{s}\), denoted by \(\mathbf{g}_{u}(t,\tau)\in\mathbb{C}^{M}\), is written as
\[\mathbf{g}_{u}(t,\tau)=\frac{1}{\sqrt{L_{s}}}\sum_{\ell=0}^{L_{s}-1}g_{u}e^{j 2\pi\nu_{u,\ell}t}\delta\left(\tau-\tau_{u,\ell}\right)\mathbf{a}\left(\theta _{u,\ell}^{\text{sat}},\varphi_{u,\ell}^{\text{sat}},\mathcal{D}^{\text{sat} }\right), \tag{1}\]
where \(L_{s}\) is the number of propagation paths, \(g_{u}\) is the complex channel gain, \(\nu_{u,\ell}\) is the Doppler shift, \(\tau_{u,\ell}\) is the propagation delay, and \(\mathbf{a}(\cdot,\cdot,\cdot)\) is the array response vector corresponding to the \(\ell\)-th path of SU \(u\)'s channel. Note that \(\theta_{u,\ell}^{\text{sat}}\) and \(\varphi_{u,\ell}^{\text{sat}}\) represent the vertical and horizontal angle-of-departure (AoD), respectively, and \(\mathcal{D}^{\text{sat}}=\{d_{1}^{\text{sat}},d_{2}^{\text{sat}}\}\) is a set of the satellite's UPA inter-element spacing in the \(x\)-axis and \(y\)-axis, respectively. Without loss of generality, we assume that \(\ell=0\) indicates the first arriving path and \(\ell=L_{s}-1\) indicates the last arriving path at the satellite, resulting in \(\tau_{u,0}\leq\tau_{u,1}\leq\cdots\leq\tau_{u,L_{s}-1}\).
**Doppler shift**: We first explain the Doppler shift \(\nu_{u,\ell}\). Typically, the Doppler shift for the LEO satellite channel is much larger than in the terrestrial channel because of the high mobility of the LEO satellite. Fortunately, we can exploit a favorable characteristic of the Doppler shift in the LEO satellite channel: each path has an almost identical Doppler shift, i.e., \(\nu_{u,\ell}=\nu_{u}\). This is because the traveling distances of the paths can be assumed to be approximately the same due to the high altitude of the LEO satellite.2 This allows the Doppler shift to be compensated by multiplying the received signal by \(e^{-j2\pi\nu_{u}t}\), as presented in [20, 21, 22, 23].
Footnote 2: In practice, the Doppler shift can be varied over time due to moving direction of the satellite keeps changing. Taking into account this variation of Doppler shift is, however, beyond the scope of our paper. Thus we assume perfect knowledge on the Doppler shift.
**Delay**: The propagation delay is also an important issue in LEO satellite communications because of the long propagation distance. Thanks to the line-of-sight (LoS)-like characteristic of LEO satellite channels [20, 21], it is not difficult to compensate the delay effects. Specifically, denoting the delay spread as \(\tau^{\text{sp}}=\tau_{u,L_{s}-1}-\tau_{u,0}\), \(\tau^{\text{sp}}\) is much smaller than in terrestrial communications since the traveling distances of the propagation paths are very similar, which makes the LEO channel rather LoS-like. This is also confirmed by the measurement results [24, 25]. Therefore, if the receiver achieves symbol synchronization by shifting time by the minimum delay \(\tau_{u,0}\), the delay spread can be readily resolved by using the typical orthogonal frequency division multiplexing (OFDM) technique, as reported in [20, 21, 22, 23].
**Array response vector**: Incorporating the LoS-like characteristic of a LEO satellite channel, we have \(\mathbf{a}(\theta_{u,\ell}^{\text{sat}},\varphi_{u,\ell}^{\text{sat}},\mathcal{D}^{\text{sat}})\simeq\mathbf{a}(\theta_{u}^{\text{sat}},\varphi_{u}^{\text{sat}},\mathcal{D}^{\text{sat}})\) for all \(0\leq\ell\leq L_{s}-1\). As a result, the effective LEO satellite channel that SU \(u\) experiences in one OFDM symbol block is given by
\[\mathbf{g}_{u}=\bar{g}_{u}\mathbf{a}\left(\theta_{u}^{\text{sat}},\varphi_{u}^{ \text{sat}},\mathcal{D}^{\text{sat}}\right), \tag{2}\]
where \(\bar{g}_{u}\) is the effective channel gain after Doppler and delay compensation. Since the LEO satellite channel is nearly LoS, not much spatial randomness exists in \(\bar{g}_{u}\). To reflect this, we model \(\bar{g}_{u}\) by Rician fading. To be specific, \(\bar{g}_{u}\sim\mathcal{CN}\!\left(\sqrt{\frac{\kappa_{s}\alpha_{u}}{1+\kappa_{s}}},\frac{\alpha_{u}}{1+\kappa_{s}}\right)\), where \(\alpha_{u}=\frac{G_{\text{sat}}G_{u}}{k_{B}T_{n}B_{w}}\left(\frac{c}{4\pi f_{c}d_{0}^{\text{sat}}}\right)^{2}\) is the channel power of SU \(u\) considering free-space path loss. Here, \(c\) is the speed of light, \(f_{c}\) is the carrier frequency, \(d_{0}^{\text{sat}}\) denotes the altitude of the LEO satellite, \(k_{B}\) is the Boltzmann constant, \(T_{n}\) is the noise temperature and \(B_{w}\) is the system bandwidth, \(G_{\text{sat}}\) and \(G_{u}\) respectively represent the antenna gains of the transmitter and the receiver, and \(\kappa_{s}\) determines the ratio between the deterministic and random components. As \(\kappa_{s}\rightarrow\infty\), \(\bar{g}_{u}\) becomes a constant, while as \(\kappa_{s}\rightarrow 0\), \(\bar{g}_{u}\) is
Fig. 1: The system model of the STIN and the geometrical model of UPA.
distributed as Rayleigh fading. We note that all of our channel modeling assumptions correspond to [20, 21, 23, 26].
Now we elaborate on the array response vector. In (2), the array response vector \(\mathbf{a}(\theta_{u}^{\text{sat}},\varphi_{u}^{\text{sat}},\mathcal{D}^{\text{ sat}})\in\mathbb{C}^{M}\) is given by
\[\mathbf{a}\big{(}\theta_{u}^{\text{sat}},\varphi_{u}^{\text{sat}},\mathcal{D}^ {\text{sat}}\big{)}=\mathbf{a}_{h}\big{(}\theta_{u}^{\text{sat}},\varphi_{u}^{ \text{sat}},d_{1}^{\text{sat}}\big{)}\otimes\mathbf{a}_{v}\big{(}\theta_{u}^{ \text{sat}},d_{2}^{\text{sat}}\big{)}, \tag{3}\]
where \(\otimes\) denotes the Kronecker product, the horizontal steering vector \(\mathbf{a}_{h}(\theta_{u}^{\text{sat}},\varphi_{u}^{\text{sat}},d_{1}^{ \text{sat}})\in\mathbb{C}^{M_{1}}\) and the vertical steering vector \(\mathbf{a}_{v}(\theta_{u}^{\text{sat}},d_{2}^{\text{sat}})\in\mathbb{C}^{M_{2}}\) are
\[\mathbf{a}_{h}\left(\theta_{u}^{\text{sat}},\varphi_{u}^{\text{sat}},d_{1}^{\text{sat}}\right)=\left[e^{-j\frac{2\pi}{\lambda}\left(\frac{M_{1}-1}{2}\right)d_{1}^{\text{sat}}\sin\theta_{u}^{\text{sat}}\cos\varphi_{u}^{\text{sat}}},\cdots,e^{+j\frac{2\pi}{\lambda}\left(\frac{M_{1}-1}{2}\right)d_{1}^{\text{sat}}\sin\theta_{u}^{\text{sat}}\cos\varphi_{u}^{\text{sat}}}\right]^{\mathsf{T}}, \tag{4}\] \[\mathbf{a}_{v}\left(\theta_{u}^{\text{sat}},d_{2}^{\text{sat}}\right)=\left[e^{-j\frac{2\pi}{\lambda}\left(\frac{M_{2}-1}{2}\right)d_{2}^{\text{sat}}\cos\theta_{u}^{\text{sat}}},\cdots,e^{+j\frac{2\pi}{\lambda}\left(\frac{M_{2}-1}{2}\right)d_{2}^{\text{sat}}\cos\theta_{u}^{\text{sat}}}\right]^{\mathsf{T}}, \tag{5}\] where \(\lambda\) is the carrier wavelength.
The channel matrix between the LEO satellite and SUs is denoted by \(\mathbf{G}=\left[\mathbf{g}_{1},\cdots,\mathbf{g}_{K_{s}}\right]\in\mathbb{C}^ {M\times K_{s}}\).
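To make (2)-(5) concrete, the following sketch samples one effective LoS satellite channel: a Rician scalar gain times the Kronecker-structured UPA response. All numbers (wavelength, spacings, angles, \(\alpha_{u}\), \(\kappa_{s}\)) are illustrative placeholders.

```python
import numpy as np

def steering(theta, phi, M1, M2, d1, d2, lam):
    """UPA array response a = a_h kron a_v per (3)-(5); phases measured from
    the array center, spacings d1, d2 in the same units as the wavelength."""
    m = np.arange(M1) - (M1 - 1) / 2
    n = np.arange(M2) - (M2 - 1) / 2
    a_h = np.exp(1j * 2 * np.pi / lam * d1 * m * np.sin(theta) * np.cos(phi))
    a_v = np.exp(1j * 2 * np.pi / lam * d2 * n * np.cos(theta))
    return np.kron(a_h, a_v)

rng = np.random.default_rng(2)
lam = 0.15                                  # wavelength [m] (illustrative)
alpha_u, kappa_s = 1e-13, 10.0              # channel power, Rician factor (illustrative)
g_bar = (np.sqrt(kappa_s * alpha_u / (1 + kappa_s))          # deterministic mean
         + np.sqrt(alpha_u / (1 + kappa_s) / 2)
         * (rng.normal() + 1j * rng.normal()))               # CN scattered part
g_u = g_bar * steering(np.deg2rad(30), np.deg2rad(45), 4, 4, lam / 2, lam / 2, lam)
```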
Similar to the above, we can also characterize the interfering channel from the LEO satellite to TUs in \(\mathcal{K}_{t}^{\text{int}}\). Applying the equivalent treatments to the Doppler and the delay, the effective interference channel experienced by TU \(k,\forall k\in\mathcal{K}_{t}^{\text{int}}\), denoted by \(\mathbf{z}_{k}\in\mathbb{C}^{M}\), is presented by
\[\mathbf{z}_{k}(t,\tau) =\frac{1}{\sqrt{L_{s}}}\sum_{\ell=0}^{L_{s}-1}z_{k,\ell}e^{j2\pi\nu_{k,\ell}t}\delta\left(\tau-\tau_{k,\ell}\right)\mathbf{a}\left(\theta_{k,\ell}^{\text{sat}},\varphi_{k,\ell}^{\text{sat}},\mathcal{D}^{\text{sat}}\right)\] \[=\tilde{z}_{k}\mathbf{a}\left(\theta_{k}^{\text{sat}},\varphi_{k}^{\text{sat}},\mathcal{D}^{\text{sat}}\right)=\mathbf{z}_{k}, \tag{6}\]
where \(\tilde{z}_{k}\) is the effective channel gain, which follows \(\tilde{z}_{k}\sim\mathcal{CN}\left(\sqrt{\frac{\kappa_{s}\alpha_{k}}{1+\kappa_{s}}},\frac{\alpha_{k}}{1+\kappa_{s}}\right)\). The interfering channel matrix between the LEO satellite and TUs in \(\mathcal{K}_{t}^{\text{int}}\) is defined as \(\mathbf{Z}=\left[\mathbf{z}_{1},\cdots,\mathbf{z}_{K_{t}^{\text{int}}}\right]\in\mathbb{C}^{M\times K_{t}^{\text{int}}}\).
#### II-B2 Terrestrial Channel
In the terrestrial channel, we assume there is no LoS component by considering dense urban scenarios where large-scale blockages such as buildings are densely placed. Assuming that there are \(L_{t}\) scatters contributing the Non-LoS components, the channel vector \(\mathbf{h}_{k}\in\mathbb{C}^{N}\) between the terrestrial BS and TU \(k\) is given by
\[\mathbf{h}_{k}=\frac{1}{\sqrt{L_{t}}}\sum_{\ell=1}^{L_{t}}h_{k,\ell}\mathbf{a }_{h}\left(\theta_{k,\ell}^{\text{bs}},\varphi_{k,\ell}^{\text{bs}},d_{1}^{ \text{bs}}\right)\otimes\mathbf{a}_{v}\left(\theta_{k,\ell}^{\text{bs}},d_{2}^ {\text{bs}}\right), \tag{7}\]
where \(h_{k,\ell}\) is the complex channel gain of the \(\ell\)-th path, which follows \(\mathcal{CN}\left(0,\beta_{k}\right)\) with the channel power \(\beta_{k}=\frac{G_{\text{bs}}G_{k}}{k_{B}T_{n}B_{w}}\left(\frac{c}{4\pi f_{c}}\right)^{2}\left(\frac{1}{d_{k}^{\text{bs}}}\right)^{\rho}\). Here, \(d_{k}^{\text{bs}}\) is the distance between the terrestrial BS and TU \(k\), and \(\rho\) is the path loss exponent. We denote \(\theta_{k,\ell}^{\text{bs}}\) and \(\varphi_{k,\ell}^{\text{bs}}\) as the vertical AoD and the horizontal AoD for the \(\ell\)-th path, and \(\mathcal{D}^{\text{bs}}=\{d_{1}^{\text{bs}},d_{2}^{\text{bs}}\}\) as the set of the terrestrial BS's UPA inter-element spacings in the \(x\)-axis and \(y\)-axis, respectively. Here, the horizontal steering vector \(\mathbf{a}_{h}(\theta_{k,\ell}^{\text{bs}},\varphi_{k,\ell}^{\text{bs}},d_{1}^{\text{bs}})\in\mathbb{C}^{N_{1}}\) and the vertical steering vector \(\mathbf{a}_{v}(\theta_{k,\ell}^{\text{bs}},d_{2}^{\text{bs}})\in\mathbb{C}^{N_{2}}\) are written as
\[\mathbf{a}_{h}\left(\theta_{k,\ell}^{\text{bs}},\varphi_{k,\ell}^{\text{bs}},d_{1}^{\text{bs}}\right)=\left[e^{-j\frac{2\pi}{\lambda}\left(\frac{N_{1}-1}{2}\right)d_{1}^{\text{bs}}\sin\theta_{k,\ell}^{\text{bs}}\cos\varphi_{k,\ell}^{\text{bs}}},\cdots,e^{+j\frac{2\pi}{\lambda}\left(\frac{N_{1}-1}{2}\right)d_{1}^{\text{bs}}\sin\theta_{k,\ell}^{\text{bs}}\cos\varphi_{k,\ell}^{\text{bs}}}\right]^{\mathsf{T}}, \tag{8}\] \[\mathbf{a}_{v}\left(\theta_{k,\ell}^{\text{bs}},d_{2}^{\text{bs}}\right)=\left[e^{-j\frac{2\pi}{\lambda}\left(\frac{N_{2}-1}{2}\right)d_{2}^{\text{bs}}\cos\theta_{k,\ell}^{\text{bs}}},\cdots,e^{+j\frac{2\pi}{\lambda}\left(\frac{N_{2}-1}{2}\right)d_{2}^{\text{bs}}\cos\theta_{k,\ell}^{\text{bs}}}\right]^{\mathsf{T}}. \tag{9}\]
The terrestrial channel matrix is \(\mathbf{H}=\left[\mathbf{h}_{1},\cdots,\mathbf{h}_{K_{t}}\right]\in\mathbb{C}^{N\times K_{t}}\). As mentioned above, the channels from the terrestrial BS to the SUs are negligible because the SUs are located outside of the terrestrial BS coverage region.
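A matching sketch for the NLoS terrestrial channel (7): a normalized sum of \(L_{t}\) paths, each an i.i.d. complex Gaussian gain on a UPA response at a random angle; \(\beta_{k}\) and the geometry are again placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
N1, N2, L_t, beta_k = 4, 2, 8, 1e-9
lam, d1, d2 = 0.15, 0.075, 0.075

def upa(theta, phi):
    """Terrestrial UPA response per (8)-(9), centered phase reference."""
    m = np.arange(N1) - (N1 - 1) / 2
    n = np.arange(N2) - (N2 - 1) / 2
    a_h = np.exp(1j * 2 * np.pi / lam * d1 * m * np.sin(theta) * np.cos(phi))
    a_v = np.exp(1j * 2 * np.pi / lam * d2 * n * np.cos(theta))
    return np.kron(a_h, a_v)

h_k = np.zeros(N1 * N2, dtype=complex)
for _ in range(L_t):
    gain = np.sqrt(beta_k / 2) * (rng.normal() + 1j * rng.normal())  # CN(0, beta_k)
    h_k += gain * upa(rng.uniform(0, np.pi / 2), rng.uniform(0, 2 * np.pi))
h_k /= np.sqrt(L_t)                          # the 1/sqrt(L_t) normalization in (7)
```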
#### II-B3 CSIT Estimation
In this subsection, we explain the CSIT estimation. We assume that perfect CSI is available at the users (including the SUs and the TUs), i.e., perfect CSI at the receiver (CSIR).3 Assuming time division duplex (TDD), which allows for channel reciprocity, each user sends a pilot sequence to estimate the uplink channel. Thanks to the reciprocity, the estimated uplink CSI is reused in the design of the downlink precoders. In this case, the CSIT estimation error is modeled based on the finite uplink pilot power and sequence length [27]. To incorporate the CSIT imperfection, we assume that CSIT is estimated via linear MMSE at both the LEO satellite and the terrestrial BS. Specifically, we denote the estimated channel vectors \(\{\mathbf{\hat{g}}_{u},\mathbf{\hat{z}}_{k}\}\in\mathbb{C}^{M}\) and \(\mathbf{\hat{h}}_{k}\in\mathbb{C}^{N}\) as
Footnote 3: In practice, channel estimation errors can occur at receivers, too. For the sake of conciseness, this paper does not consider the case of imperfect CSIR. Incorporating imperfect CSIR is promising as future research.
\[\mathbf{\hat{g}}_{u}=\mathbf{g}_{u}-\mathbf{q}_{u}^{\text{sat}},\ \mathbf{\hat{z}}_{k}=\mathbf{z}_{k}-\mathbf{e}_{k}^{\text{sat}},\ \mathbf{\hat{h}}_{k}=\mathbf{h}_{k}-\mathbf{e}_{k}^{\text{bs}}, \tag{10}\]
where \(\{\mathbf{q}_{u}^{\text{sat}},\mathbf{e}_{k}^{\text{sat}}\}\in\mathbb{C}^{M}\) and \(\mathbf{e}_{k}^{\text{bs}}\in\mathbb{C}^{N}\) are the CSIT estimation error vectors. The estimated error covariance matrices for each channel are given by a function
by mobility is represented by an auto-regressive function (i.e., first-order Gauss-Markov process) as in [28, 29].
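For reference, a minimal sketch of the first-order Gauss-Markov (AR(1)) aging model mentioned above; the correlation coefficient \(\eta\) is an assumed value, not one from [28, 29].

```python
import numpy as np

rng = np.random.default_rng(4)
N, eta = 16, 0.95                        # eta: assumed temporal correlation

def age(h, eta, rng):
    """One AR(1) step: h' = eta*h + sqrt(1-eta^2)*w with w ~ CN(0, I);
    the marginal distribution of h is preserved across steps."""
    w = (rng.normal(size=h.shape) + 1j * rng.normal(size=h.shape)) / np.sqrt(2)
    return eta * h + np.sqrt(1 - eta ** 2) * w

h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
h_next = age(h, eta, rng)
```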
Our model generalizes the previous satellite CSIT acquisition model used in [20]. In [20], two cases of CSIT acquisition were considered, instantaneous CSI (iCSI) and statistical CSI (sCSI). In iCSI, the CSIT is perfectly known at a LEO satellite, while in sCSI only the long-term channel covariance is available without any instantaneous channel knowledge. These are two extreme cases in our model, where the iCSI case corresponds to \(\tau_{p}P^{\text{pi}}=\infty\) and the sCSI case corresponds to \(\tau_{p}P^{\text{pi}}=0\). In practice, it is more plausible that the CSIT is partially known at a satellite with some error depending on the used estimation methods. In this sense, our assumption is more general and realistic.
Lastly, we emphasize that our distributed precoding method is applicable under any CSIT estimation model. To be specific, the proposed method can be developed provided that the CSIT estimation error covariance can be calculated at the transmitters as a function of the long-term channel covariance. We will provide a more detailed explanation of this in Section IV.
### _Signal Model_
**Satellite transmit signal**: To effectively handle the interference of the LEO satellite without sharing CSIT, we exploit the RS strategy. To be specific, following the principle of 1-layer RS [30], the messages \(M_{1},\cdots,M_{K_{s}}\) intended for the SUs are split into common parts and private parts, i.e., \(M_{u}\to\{M_{c,u},M_{p,u}\}\). The common parts \(\{M_{c,1},\cdots,M_{c,K_{s}}\}\) are jointly combined and encoded into a common stream \(s_{c}\). The private parts \(M_{p,u}\) are independently encoded into the private stream \(s_{p,u}\). For the common stream \(s_{c}\), the corresponding codebook is shared with both the SUs in \(\mathcal{K}_{s}\) and the TUs in \(\mathcal{K}_{t}^{\text{int}}\), who experience the interference from the LEO satellite. The codebook for the private stream \(s_{p,u}\) is only given to SU \(u\). Accordingly, \(s_{c}\) is decodable by the users in \(\mathcal{K}_{s}\) and \(\mathcal{K}_{t}^{\text{int}}\), while \(s_{p,u}\) is decodable only by SU \(u\). Note that \(s_{c}\) and \(s_{p,u}\) are drawn from independent Gaussian codebooks, i.e., \(s_{c},s_{p,u}\sim\mathcal{CN}(0,P_{s})\) where \(P_{s}\) is the total transmit power of the LEO satellite.
At the LEO satellite, the common stream \(s_{c}\) and the private streams \(s_{p,u}\) are linearly combined with the precoding vectors \(\mathbf{f}_{c}\in\mathbb{C}^{M}\) and \(\mathbf{f}_{p,u}\in\mathbb{C}^{M}\). Then the transmit signal vector of the LEO satellite \(\mathbf{x}^{\text{sat}}\in\mathbb{C}^{M}\) is
\[\mathbf{x}^{\text{sat}}=\mathbf{f}_{c}s_{c}+\sum_{i=1}^{K_{s}} \mathbf{f}_{p,i}s_{p,i}. \tag{14}\]
We define \(\mathbf{F}=\left[\mathbf{f}_{c},\mathbf{f}_{p,1},\cdots,\mathbf{f}_{p,K_{s}}\right]\in\mathbb{C}^{M\times(K_{s}+1)}\) as the precoding matrix of the LEO satellite. For the transmit power constraint, we assume \(\text{tr}(\mathbf{F}\mathbf{F}^{\mathsf{H}})\leq 1\), by which the total transmit power is constrained by \(P_{s}\).
**Terrestrial BS transmit signal**: In the terrestrial BS, the unicast messages \(W_{1},\cdots,W_{K_{t}}\) are directly encoded into the streams \(m_{1},\cdots,m_{K_{t}}\) without RS. Each stream \(m_{k}\) is drawn from an independent Gaussian codebook, i.e., \(m_{k}\sim\mathcal{CN}\left(0,P_{t}\right)\) where \(P_{t}\) is the total transmit power of the terrestrial BS. Then, the stream \(m_{k}\) is linearly combined with the precoding vector \(\mathbf{v}_{k}\in\mathbb{C}^{N}\). Accordingly, the transmit signal of the terrestrial BS \(\mathbf{x}^{\text{bs}}\in\mathbb{C}^{N}\) is given by
\[\mathbf{x}^{\text{bs}}=\sum_{j=1}^{K_{t}}\mathbf{v}_{j}m_{j}. \tag{15}\]
The precoding matrix of the terrestrial BS is \(\mathbf{V}=\left[\mathbf{v}_{1},\cdots,\mathbf{v}_{K_{t}}\right]\in\mathbb{C}^{N\times K_{t}}\) and the power constraint of the terrestrial BS is \(\text{tr}(\mathbf{V}\mathbf{V}^{\mathsf{H}})\leq 1\).
**Received signal5** : The received signal at TU \(k\in\mathcal{K}_{t}^{\text{int}}\) is
Footnote 5: With compensation for Doppler and delay at each user, we assume perfect synchronization in both time and frequency between the terrestrial BS and TUs \(\forall_{k}\in\mathcal{K}_{t}\), as well as between the LEO satellite and both SUs \(\forall_{k}\in\mathcal{K}_{s}\) and TUs \(\forall_{k}\in\mathcal{K}_{t}^{\text{int}}\)[20, 21].
\[y_{k}=\mathbf{h}_{k}^{\mathsf{H}}\sum_{j=1}^{K_{t}}\mathbf{v}_{j}m_{j}+\mathbf{z}_{k}^{\mathsf{H}}\left(\mathbf{f}_{c}s_{c}+\sum_{i=1}^{K_{s}}\mathbf{f}_{p,i}s_{p,i}\right)+n_{k}, \tag{16}\]
where \(n_{k}\) is the additive white Gaussian noise (AWGN) with variance \(\sigma^{2}\) and \(\mathbf{z}_{k}\) is the interference channel from the LEO satellite to TU \(k\). If TU \(k\not\in\mathcal{K}_{t}^{\text{int}}\), \(\mathbf{z}_{k}=\mathbf{0}\). The received signal of SU \(u\) is
\[y_{u}=\mathbf{g}_{u}^{\text{H}}\left(\mathbf{f}_{c}s_{c}+\sum_{i=1}^{K_{s}} \mathbf{f}_{p,i}s_{p,i}\right)+n_{u}, \tag{17}\]
where \(n_{u}\sim\mathcal{CN}(0,\sigma^{2})\) is AWGN. We note that there is no interference coming from the terrestrial BS to the SUs due to the limited coverage of a terrestrial BS.
### _Performance Metrics and Problem Formulation_
#### II-D1 Spectral Efficiency Characterization
Before formulating our main problem, we explain the decoding process in the considered RS strategy. Each user (including the SUs in \(\mathcal{K}_{s}\) and TUs in \(\mathcal{K}_{t}^{\text{int}}\)) first decodes the common stream \(s_{c}\) by treating all the other private streams as noise. Provided that the code rate of \(s_{c}\) is properly determined, it is guaranteed that the common stream is decoded without any error. After successfully decoding the common stream, each user removes the common stream from the received signal by using SIC. Then, the SUs and the TUs decode \(s_{p,u}\) and \(m_{k}\) by using single-user decoding.
Given perfect CSIT, the SINRs of the common stream \(s_{c}\) at TU \(k\in\mathcal{K}_{t}^{\text{int}}\) and at SU \(u\) are respectively given by
\[\text{SINR}_{c,k} =\frac{\frac{P_{s}}{P_{t}}\left|\mathbf{z}_{k}^{\mathsf{H}}\mathbf{f}_{c}\right|^{2}}{\sum_{j=1}^{K_{t}}\left|\mathbf{h}_{k}^{\mathsf{H}}\mathbf{v}_{j}\right|^{2}+\frac{P_{s}}{P_{t}}\sum_{i=1}^{K_{s}}\left|\mathbf{z}_{k}^{\mathsf{H}}\mathbf{f}_{p,i}\right|^{2}+\frac{\sigma^{2}}{P_{t}}},\] \[\text{SINR}_{c,u} =\frac{\left|\mathbf{g}_{u}^{\mathsf{H}}\mathbf{f}_{c}\right|^{2}}{\sum_{i=1}^{K_{s}}\left|\mathbf{g}_{u}^{\mathsf{H}}\mathbf{f}_{p,i}\right|^{2}+\frac{\sigma^{2}}{P_{s}}}. \tag{18}\]
Note that, in \(\text{SINR}_{c,u}\) of (18), there is no signal from the terrestrial BS due to the coverage restriction. The spectral efficiencies are denoted as \(R_{c,k}\left(\mathbf{F},\mathbf{V}\right)=\log_{2}\left(1+\text{SINR}_{c,k}\right)\) and \(R_{c,u}\left(\mathbf{F}\right)=\log_{2}\left(1+\text{SINR}_{c,u}\right)\), respectively.
To guarantee that the users in \(\mathcal{K}_{s}\) and \(\mathcal{K}_{t}^{\text{int}}\) can decode the common stream \(s_{c}\), the code rate of the common stream should be determined as the minimum value of the spectral
efficiencies among the users in \(\mathcal{K}_{i}^{\text{int}}\) and \(\mathcal{K}_{s}\); thus, \(R_{c}\left(\mathbf{F},\mathbf{V}\right)=\min_{k\in\mathcal{K}_{u}^{\text{int}},u \in\mathcal{K}_{s}}\left\{R_{c,k}\left(\mathbf{F},\mathbf{V}\right),R_{c,u} \left(\mathbf{F}\right)\right\}\). After cancelling \(s_{c}\), TU \(k\in\mathcal{K}_{t}\) and SU \(u\in\mathcal{K}_{s}\) decode their desirable private streams, \(m_{k}\) and \(s_{p,u}\). For TU \(k\), the SINR of the private stream \(m_{k}\) is
\[\text{SINR}_{p,k}=\frac{\left|\mathbf{h}_{k}^{\mathsf{H}}\mathbf{v}_{k}\right|^{2}}{\sum_{j=1,j\neq k}^{K_{t}}\left|\mathbf{h}_{k}^{\mathsf{H}}\mathbf{v}_{j}\right|^{2}+\frac{P_{s}}{P_{t}}\sum_{i=1}^{K_{s}}\left|\mathbf{z}_{k}^{\mathsf{H}}\mathbf{f}_{p,i}\right|^{2}+\frac{\sigma^{2}}{P_{t}}}. \tag{19}\]
The SINR of the private stream \(s_{p,u}\) for SU \(u\) is
\[\text{SINR}_{p,u}=\frac{\left|\mathbf{g}_{u}^{\text{H}}\mathbf{f}_{p,u}\right| ^{2}}{\sum_{i=1,i\neq u}^{K_{s}}\left|\mathbf{g}_{u}^{\text{H}}\mathbf{f}_{p, i}\right|^{2}+\frac{\sigma^{2}}{P_{s}}}. \tag{20}\]
The spectral efficiencies of the private streams \(m_{k}\) and \(s_{p,u}\) are respectively \(R_{p,k}\left(\mathbf{F},\mathbf{V}\right)=\log_{2}\left(1+\text{SINR}_{p,k}\right)\) and \(R_{p,u}\left(\mathbf{F}\right)=\log_{2}\left(1+\text{SINR}_{p,u}\right)\).
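The following sketch walks through this decoding order numerically for the interfered TUs, using the \(P_{t}\)-normalized SINR forms (18)-(19) written above. The \(P_{s}/P_{t}\) bookkeeping there is our reconstruction of a garbled source and should be treated as an assumption to be checked against the original paper.

```python
import numpy as np

def rs_rates(H, V, Z, F, Ps, Pt, sigma2):
    """H: N x K_t terrestrial channels, V: N x K_t BS precoders,
    Z: M x K_t^int satellite interference channels, F: M x (K_s+1) satellite
    precoders (column 0 = common stream). Returns common-stream SINRs and
    private rates for the interfered TUs, per (18)-(19)."""
    r = Ps / Pt
    fc, Fp = F[:, 0], F[:, 1:]
    sinr_c, rate_p = [], []
    for k in range(Z.shape[1]):
        bs = np.abs(H[:, k].conj() @ V) ** 2             # BS stream powers at TU k
        sat_p = r * np.abs(Z[:, k].conj() @ Fp) ** 2     # satellite private leakage
        sinr_c.append(r * np.abs(Z[:, k].conj() @ fc) ** 2
                      / (bs.sum() + sat_p.sum() + sigma2 / Pt))
        den = bs.sum() - bs[k] + sat_p.sum() + sigma2 / Pt
        rate_p.append(np.log2(1 + bs[k] / den))          # after SIC of s_c
    return np.array(sinr_c), np.array(rate_p)

rng = np.random.default_rng(5)
H = (rng.normal(size=(8, 3)) + 1j * rng.normal(size=(8, 3))) / np.sqrt(2)
Z = (rng.normal(size=(16, 3)) + 1j * rng.normal(size=(16, 3))) / np.sqrt(2)
V = H / np.linalg.norm(H)                                # tr(V V^H) <= 1
F = rng.normal(size=(16, 4)) + 1j * rng.normal(size=(16, 4))
F /= np.linalg.norm(F)                                   # tr(F F^H) <= 1
sinr_c, rate_p = rs_rates(H, V, Z, F, Ps=10.0, Pt=1.0, sigma2=1.0)
Rc_tu = np.log2(1 + sinr_c.min())    # TU part of the min defining the common rate
```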
Unfortunately, however, the spectral efficiencies \(R_{c}\left(\mathbf{F},\mathbf{V}\right),R_{p,k}\left(\mathbf{F},\mathbf{V}\right)\) and \(R_{p,u}\left(\mathbf{F}\right)\) cannot be computed under an imperfect CSIT setup. To address this, we derive a lower bound on the spectral efficiency. Specifically, we rewrite the received signals (16) and (17) with the estimated CSIT, i.e., (10), which yields
\[y_{k} =\left(\hat{\mathbf{h}}_{k}+\mathbf{e}_{k}^{\text{bs}}\right)^{\mathsf{H}}\sum_{j=1}^{K_{t}}\mathbf{v}_{j}m_{j}+\left(\hat{\mathbf{z}}_{k}+\mathbf{e}_{k}^{\text{sat}}\right)^{\mathsf{H}}\left(\mathbf{f}_{c}s_{c}+\sum_{i=1}^{K_{s}}\mathbf{f}_{p,i}s_{p,i}\right)+n_{k}, \tag{21}\] \[y_{u} =\left(\hat{\mathbf{g}}_{u}+\mathbf{q}_{u}^{\text{sat}}\right)^{\mathsf{H}}\left(\mathbf{f}_{c}s_{c}+\sum_{i=1}^{K_{s}}\mathbf{f}_{p,i}s_{p,i}\right)+n_{u}. \tag{22}\]
Applying a generalized mutual information technique [31], we treat the CSIT estimation error as independent Gaussian noise with appropriate moment matching. This yields a lower bound on \(R_{c,k}\left(\mathbf{F},\mathbf{V}\right)\) as follows:
\[R_{c,k}\left(\mathbf{F},\mathbf{V}\right)\overset{(a)}{\geq}\mathbb{E}\left[\log_{2}\left(1+\frac{\frac{P_{s}}{P_{t}}\left|\hat{\mathbf{z}}_{k}^{\mathsf{H}}\mathbf{f}_{c}\right|^{2}}{\sum_{j=1}^{K_{t}}\left|\left(\hat{\mathbf{h}}_{k}+\mathbf{e}_{k}^{\text{bs}}\right)^{\mathsf{H}}\mathbf{v}_{j}\right|^{2}+\frac{P_{s}}{P_{t}}\sum_{i=1}^{K_{s}}\left|\left(\hat{\mathbf{z}}_{k}+\mathbf{e}_{k}^{\text{sat}}\right)^{\mathsf{H}}\mathbf{f}_{p,i}\right|^{2}+\frac{P_{s}}{P_{t}}\left|\left(\mathbf{e}_{k}^{\text{sat}}\right)^{\mathsf{H}}\mathbf{f}_{c}\right|^{2}+\frac{\sigma^{2}}{P_{t}}}\right)\right]\] \[\overset{(b)}{\geq}\log_{2}\left(1+\frac{\frac{P_{s}}{P_{t}}\left|\hat{\mathbf{z}}_{k}^{\mathsf{H}}\mathbf{f}_{c}\right|^{2}}{I_{c,k}^{U}+I_{c,k}^{C}+\frac{P_{s}}{P_{t}}\mathbf{f}_{c}^{\mathsf{H}}\mathbb{E}\left[\mathbf{e}_{k}^{\text{sat}}(\mathbf{e}_{k}^{\text{sat}})^{\mathsf{H}}\right]\mathbf{f}_{c}+\frac{\sigma^{2}}{P_{t}}}\right). \tag{23}\]
In \((a)\), the expectation is taken over the randomness associated with the CSIT estimation errors \(\{\mathbf{e}_{k}^{\text{bs}},\mathbf{e}_{k}^{\text{sat}}\}\). We clarify that this lower bound technique was used in the prior work [32, 30]. \((b)\) is obtained by applying Jensen's inequality, where \(I_{c,k}^{U}=\sum_{j=1}^{K_{t}}(|\hat{\mathbf{h}}_{k}^{\mathsf{H}}\mathbf{v}_{j}|^{2}+\mathbf{v}_{j}^{\mathsf{H}}\mathbb{E}[\mathbf{e}_{k}^{\text{bs}}(\mathbf{e}_{k}^{\text{bs}})^{\mathsf{H}}]\mathbf{v}_{j})\) and \(I_{c,k}^{C}=\frac{P_{s}}{P_{t}}\sum_{i=1}^{K_{s}}\left(|\hat{\mathbf{z}}_{k}^{\mathsf{H}}\mathbf{f}_{p,i}|^{2}+\mathbf{f}_{p,i}^{\mathsf{H}}\mathbb{E}[\mathbf{e}_{k}^{\text{sat}}(\mathbf{e}_{k}^{\text{sat}})^{\mathsf{H}}]\mathbf{f}_{p,i}\right)\). By applying (12) and (13) to (23), the lower bound on \(R_{c,k}\left(\mathbf{F},\mathbf{V}\right)\) is given by
\[R_{c,k}\left(\mathbf{F},\mathbf{V}\right)\geq\log_{2}\left(1+\frac{\frac{P_{s}}{P_{t}}\left|\hat{\mathbf{z}}_{k}^{\mathsf{H}}\mathbf{f}_{c}\right|^{2}}{\sum_{j=1}^{K_{t}}\left(|\hat{\mathbf{h}}_{k}^{\mathsf{H}}\mathbf{v}_{j}|^{2}+\mathbf{v}_{j}^{\mathsf{H}}\mathbf{\Phi}_{k}^{\text{bs}}\mathbf{v}_{j}\right)+\frac{P_{s}}{P_{t}}\sum_{i=1}^{K_{s}}\left(|\hat{\mathbf{z}}_{k}^{\mathsf{H}}\mathbf{f}_{p,i}|^{2}+\mathbf{f}_{p,i}^{\mathsf{H}}\mathbf{\Phi}_{k}^{\text{sat}}\mathbf{f}_{p,i}\right)+\frac{P_{s}}{P_{t}}\mathbf{f}_{c}^{\mathsf{H}}\mathbf{\Phi}_{k}^{\text{sat}}\mathbf{f}_{c}+\frac{\sigma^{2}}{P_{t}}}\right)=\bar{R}_{c,k}\left(\mathbf{F},\mathbf{V}\right). \tag{24}\]
Since the error covariance matrices \(\mathbf{\Phi}_{k}^{\text{bs}}\) and \(\mathbf{\Phi}_{k}^{\text{sat}}\) are assumed to be known at the LEO satellite, we are able to calculate (24) as a closed-form. Likewise, a lower bound on the spectral efficiency of the common stream at SU \(u\) is acquired as follows.
\[R_{c,u}\left(\mathbf{F}\right)\geq\log_{2}\left(1+\frac{\left|\hat{\mathbf{g}}_{u}^{\mathsf{H}}\mathbf{f}_{c}\right|^{2}}{\sum_{i=1}^{K_{s}}\left(|\hat{\mathbf{g}}_{u}^{\mathsf{H}}\mathbf{f}_{p,i}|^{2}+\mathbf{f}_{p,i}^{\mathsf{H}}\mathbf{\Phi}_{u}^{\text{sat}}\mathbf{f}_{p,i}\right)+\mathbf{f}_{c}^{\mathsf{H}}\mathbf{\Phi}_{u}^{\text{sat}}\mathbf{f}_{c}+\frac{\sigma^{2}}{P_{s}}}\right)=\bar{R}_{c,u}\left(\mathbf{F}\right). \tag{25}\]
We obtain a lower bound on the spectral efficiency of the private stream for TU \(k\) and SU \(u\) as
\[R_{p,k}\left(\mathbf{F},\mathbf{V}\right)\geq\log_{2}\left(1+\frac{\left|\hat{\mathbf{h}}_{k}^{\mathsf{H}}\mathbf{v}_{k}\right|^{2}}{I_{p,k}^{U}+I_{p,k}^{C}+\mathbf{v}_{k}^{\mathsf{H}}\mathbf{\Phi}_{k}^{\text{bs}}\mathbf{v}_{k}+\frac{\sigma^{2}}{P_{t}}}\right)=\bar{R}_{p,k}\left(\mathbf{F},\mathbf{V}\right), \tag{26}\] \[R_{p,u}\left(\mathbf{F}\right)\geq\log_{2}\left(1+\frac{\left|\hat{\mathbf{g}}_{u}^{\mathsf{H}}\mathbf{f}_{p,u}\right|^{2}}{\sum_{i=1,i\neq u}^{K_{s}}\left(|\hat{\mathbf{g}}_{u}^{\mathsf{H}}\mathbf{f}_{p,i}|^{2}+\mathbf{f}_{p,i}^{\mathsf{H}}\mathbf{\Phi}_{u}^{\text{sat}}\mathbf{f}_{p,i}\right)+\mathbf{f}_{p,u}^{\mathsf{H}}\mathbf{\Phi}_{u}^{\text{sat}}\mathbf{f}_{p,u}+\frac{\sigma^{2}}{P_{s}}}\right)=\bar{R}_{p,u}\left(\mathbf{F}\right), \tag{27}\]
where \(I_{p,k}^{U}=\sum_{j=1,j\neq k}^{K_{t}}\left(|\hat{\mathbf{h}}_{k}^{\mathsf{H}}\mathbf{v}_{j}|^{2}+\mathbf{v}_{j}^{\mathsf{H}}\mathbf{\Phi}_{k}^{\text{bs}}\mathbf{v}_{j}\right)\) and \(I_{p,k}^{C}=\frac{P_{s}}{P_{t}}\sum_{i=1}^{K_{s}}\left(|\hat{\mathbf{z}}_{k}^{\mathsf{H}}\mathbf{f}_{p,i}|^{2}+\mathbf{f}_{p,i}^{\mathsf{H}}\mathbf{\Phi}_{k}^{\text{sat}}\mathbf{f}_{p,i}\right)\).
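Every quantity in the bound (26) is computable at the transmitter once the error covariances are known; a minimal sketch follows, with random stand-ins for \(\mathbf{\Phi}_{k}^{\text{bs}}\), \(\mathbf{\Phi}_{k}^{\text{sat}}\), and the channels, and with the \(P_{s}/P_{t}\) ratio following the reconstruction above (an assumption).

```python
import numpy as np

def rate_p_lb(h_hat, V, k, z_hat, Fp, Phi_bs, Phi_sat, Ps, Pt, sigma2):
    """Closed-form lower bound (26): signal |h_hat^H v_k|^2 over IUI plus
    satellite leakage, error-covariance penalties, and scaled noise."""
    sig = np.abs(h_hat.conj() @ V[:, k]) ** 2
    iui = sum(np.abs(h_hat.conj() @ V[:, j]) ** 2
              + V[:, j].conj() @ Phi_bs @ V[:, j]
              for j in range(V.shape[1]) if j != k)
    leak = (Ps / Pt) * sum(np.abs(z_hat.conj() @ Fp[:, i]) ** 2
                           + Fp[:, i].conj() @ Phi_sat @ Fp[:, i]
                           for i in range(Fp.shape[1]))
    own = V[:, k].conj() @ Phi_bs @ V[:, k]
    return np.log2(1 + sig / np.real(iui + leak + own + sigma2 / Pt))

rng = np.random.default_rng(8)
N, M, K_t, K_s = 8, 16, 3, 2
H_hat = (rng.normal(size=(N, K_t)) + 1j * rng.normal(size=(N, K_t))) / np.sqrt(2)
V = H_hat / np.linalg.norm(H_hat)
z_hat = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)
Fp = rng.normal(size=(M, K_s)) + 1j * rng.normal(size=(M, K_s))
Fp /= np.linalg.norm(Fp) * np.sqrt(2)       # leave power for the common stream
Phi_bs, Phi_sat = 0.01 * np.eye(N), 0.01 * np.eye(M)
print(rate_p_lb(H_hat[:, 0], V, 0, z_hat, Fp, Phi_bs, Phi_sat, 10.0, 1.0, 1.0))
```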
We also define a unit vector whose \(k\)-th element is \(1\) and the rest of the elements are zero as \(\mathbf{u}_{k}=[0,\cdots,1,\cdots,0]^{\mathsf{T}}\in\mathbb{R}^{K_{t}}\), and a unit vector whose \((u+1)\)-th element is \(1\) and the rest of the elements are zero as \(\mathbf{w}_{u}=[0,0,\cdots,1,\cdots,0]^{\mathsf{T}}\in\mathbb{R}^{(K_{s}+1)}\).
With (30), the spectral efficiency for the common stream of TU \(k\) (24) is expressed by
\[\tilde{R}_{c,k}\left(\bar{\mathbf{f}},\bar{\mathbf{v}}\right)= \log_{2}\left(\frac{\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{c,k}^{\mathrm{bs} }\bar{\mathbf{v}}+\bar{\mathbf{f}}^{\mathsf{H}}\mathbf{(}\mathbf{S}_{c,k}^{ \mathrm{bs}}+\mathbf{C}_{c,k}^{\mathrm{bs}})\bar{\mathbf{f}}}{\bar{\mathbf{v}} ^{\mathsf{H}}\mathbf{U}_{c,k}^{\mathrm{bs}}\bar{\mathbf{v}}+\bar{\mathbf{f}}^ {\mathsf{H}}\mathbf{C}_{c,k}^{\mathrm{bs}}\bar{\mathbf{f}}}\right), \tag{31}\]
where \(\mathbf{S}_{c,k}^{\text{bs}}\in\mathbb{C}^{M(K_{s}+1)\times M(K_{s}+1)}\), \(\mathbf{U}_{c,k}^{\text{bs}}\in\mathbb{C}^{NK_{t}\times NK_{t}}\) and \(\mathbf{C}_{c,k}^{\text{bs}}\in\mathbb{C}^{M(K_{s}+1)\times M(K_{s}+1)}\) are defined as
\[\mathbf{S}_{c,k}^{\text{bs}}=\frac{P_{s}}{P_{t}}\,\mathbf{w}_{0}\mathbf{w}_{0}^{\mathsf{H}}\otimes\hat{\mathbf{z}}_{k}\hat{\mathbf{z}}_{k}^{\mathsf{H}}, \tag{32}\] \[\mathbf{U}_{c,k}^{\text{bs}}=\mathbf{I}_{K_{t}}\otimes\left(\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}_{k}^{\mathsf{H}}+\mathbf{\Phi}_{k}^{\text{bs}}\right), \tag{33}\] \[\mathbf{C}_{c,k}^{\text{bs}}=\frac{P_{s}}{P_{t}}\left(\left(\mathbf{I}_{K_{s}+1}-\mathbf{w}_{0}\mathbf{w}_{0}^{\mathsf{H}}\right)\otimes\left(\hat{\mathbf{z}}_{k}\hat{\mathbf{z}}_{k}^{\mathsf{H}}+\mathbf{\Phi}_{k}^{\text{sat}}\right)+\mathbf{w}_{0}\mathbf{w}_{0}^{\mathsf{H}}\otimes\mathbf{\Phi}_{k}^{\text{sat}}\right)+\frac{\sigma^{2}}{P_{t}}\mathbf{I}_{M(K_{s}+1)}. \tag{34}\]
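The lifting behind (31)-(34) rests on the block-diagonal identity \(\bar{\mathbf{v}}^{\mathsf{H}}(\mathbf{I}_{K}\otimes\mathbf{A})\bar{\mathbf{v}}=\sum_{j}\mathbf{v}_{j}^{\mathsf{H}}\mathbf{A}\mathbf{v}_{j}\), which the following few lines verify numerically:

```python
import numpy as np

rng = np.random.default_rng(6)
N, K = 4, 3
A0 = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = A0 @ A0.conj().T                        # Hermitian PSD, like h h^H + Phi
V = rng.normal(size=(N, K)) + 1j * rng.normal(size=(N, K))
v_bar = V.T.reshape(-1)                     # stack columns: [v_1; ...; v_K]
lhs = v_bar.conj() @ np.kron(np.eye(K), A) @ v_bar
rhs = sum(V[:, j].conj() @ A @ V[:, j] for j in range(K))
assert np.isclose(lhs, rhs)                 # per-user sums = one quadratic form
```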
We also write the spectral efficiency for the common stream of SU \(u\) (25) as
\[\tilde{R}_{c,u}\left(\bar{\mathbf{f}}\right)=\log_{2}\left(\frac{\bar{\mathbf{f}}^{\mathsf{H}}\left(\mathbf{S}_{c,u}^{\text{sat}}+\mathbf{U}_{c,u}^{\text{sat}}\right)\bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\mathsf{H}}\mathbf{U}_{c,u}^{\text{sat}}\bar{\mathbf{f}}}\right), \tag{35}\]
where \(\mathbf{S}_{c,u}^{\text{sat}}\in\mathbb{C}^{M(K_{s}+1)\times M(K_{s}+1)}\) and \(\mathbf{U}_{c,u}^{\text{sat}}\in\mathbb{C}^{M(K_{s}+1)\times M(K_{s}+1)}\) are given by
\[\mathbf{S}_{c,u}^{\text{sat}}=\mathbf{w}_{0}\mathbf{w}_{0}^{\mathsf{H}}\otimes\hat{\mathbf{g}}_{u}\hat{\mathbf{g}}_{u}^{\mathsf{H}}, \tag{36}\] \[\mathbf{U}_{c,u}^{\text{sat}}=\left(\mathbf{I}_{K_{s}+1}-\mathbf{w}_{0}\mathbf{w}_{0}^{\mathsf{H}}\right)\otimes\left(\hat{\mathbf{g}}_{u}\hat{\mathbf{g}}_{u}^{\mathsf{H}}+\mathbf{\Phi}_{u}^{\text{sat}}\right)+\mathbf{w}_{0}\mathbf{w}_{0}^{\mathsf{H}}\otimes\mathbf{\Phi}_{u}^{\text{sat}}+\frac{\sigma^{2}}{P_{s}}\mathbf{I}_{M(K_{s}+1)}. \tag{37}\]
Similar to this, the spectral efficiency for the private stream of TU \(k\) (26) is also written as
\[\tilde{R}_{p,k}\left(\bar{\mathbf{f}},\bar{\mathbf{v}}\right)=\log_{2}\left(\frac{\bar{\mathbf{v}}^{\mathsf{H}}\left(\mathbf{S}_{p,k}^{\text{bs}}+\mathbf{U}_{p,k}^{\text{bs}}\right)\bar{\mathbf{v}}+\bar{\mathbf{f}}^{\mathsf{H}}\mathbf{C}_{p,k}^{\text{bs}}\bar{\mathbf{f}}}{\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{p,k}^{\text{bs}}\bar{\mathbf{v}}+\bar{\mathbf{f}}^{\mathsf{H}}\mathbf{C}_{p,k}^{\text{bs}}\bar{\mathbf{f}}}\right), \tag{38}\]
where \(\mathbf{S}_{p,k}^{\text{bs}}\in\mathbb{C}^{NK_{t}\times NK_{t}}\), \(\mathbf{U}_{p,k}^{\text{bs}}\in\mathbb{C}^{NK_{t}\times NK_{t}}\) and \(\mathbf{C}_{p,k}^{\text{bs}}\in\mathbb{C}^{M(K_{s}+1)\times M(K_{s}+1)}\) are
\[\mathbf{S}_{p,k}^{\text{bs}}=\mathbf{u}_{k}\mathbf{u}_{k}^{\mathsf{H}}\otimes\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}_{k}^{\mathsf{H}}, \tag{39}\] \[\mathbf{U}_{p,k}^{\text{bs}}=\left(\mathbf{I}_{K_{t}}-\mathbf{u}_{k}\mathbf{u}_{k}^{\mathsf{H}}\right)\otimes\left(\hat{\mathbf{h}}_{k}\hat{\mathbf{h}}_{k}^{\mathsf{H}}+\mathbf{\Phi}_{k}^{\text{bs}}\right)+\mathbf{u}_{k}\mathbf{u}_{k}^{\mathsf{H}}\otimes\mathbf{\Phi}_{k}^{\text{bs}}, \tag{40}\] \[\mathbf{C}_{p,k}^{\text{bs}}=\frac{P_{s}}{P_{t}}\left(\mathbf{I}_{K_{s}+1}-\mathbf{w}_{0}\mathbf{w}_{0}^{\mathsf{H}}\right)\otimes\left(\hat{\mathbf{z}}_{k}\hat{\mathbf{z}}_{k}^{\mathsf{H}}+\mathbf{\Phi}_{k}^{\text{sat}}\right)+\frac{\sigma^{2}}{P_{t}}\mathbf{I}_{M(K_{s}+1)}. \tag{41}\]
term into \(\bar{R}_{p,i}(\bar{\mathbf{f}})\) as shown in (\(b\)). In (47), the private stream spectral efficiency is reshaped into two decoupled terms, wherein the first term is a function of \(\bar{\mathbf{f}}\) and the second term is a function of \(\bar{\mathbf{v}}\). This enables us to tackle the problem in a distributed fashion. Nonetheless, we observe in (\(c\)) that the IUI term vanishes since we take the average and pull it out of \(\bar{R}_{p,k}(\bar{\mathbf{f}},\bar{\mathbf{v}})\) for the decoupling in (46). To compensate this, we restore the vanished IUI term in (\(c\)) as follows:
\[\sum_{j=1}^{K_{t}}\mathbb{E}\left[\log_{2}\left(\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{S}_{p,j}^{\text{bs}}\bar{\mathbf{v}}\right)\right] =\mathbb{E}\left[\log_{2}\left(\prod_{j=1}^{K_{t}}\frac{\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{S}_{p,j}^{\text{bs}}\bar{\mathbf{v}}\cdot\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{p,j}^{\text{bs}}\bar{\mathbf{v}}}{\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{p,j}^{\text{bs}}\bar{\mathbf{v}}}\right)\right]=\mathbb{E}\left[\log_{2}\left(\prod_{j=1}^{K_{t}}\frac{\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{S}_{p,j}^{\text{bs}}\bar{\mathbf{v}}}{\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{p,j}^{\text{bs}}\bar{\mathbf{v}}}\right)\right]+\hat{\epsilon}, \tag{48}\]
where \(\hat{\epsilon}=\mathbb{E}\big{[}\sum_{j=1}^{K_{t}}\log_{2}\left(\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{p,j}^{\text{bs}}\bar{\mathbf{v}}\right)\big{]}\). Inserting \(\hat{\epsilon}\) into the \(\bar{\mathbf{f}}\)-related term, we get
\[\sum_{i=1}^{K_{s}}\mathbb{E}\left[\bar{R}_{p,i}\left(\bar{\mathbf{f}}\right)\right]+\sum_{j=1}^{K_{t}}\mathbb{E}\left[\bar{R}_{p,j}\left(\bar{\mathbf{f}},\bar{\mathbf{v}}\right)\right]\gtrsim\sum_{i=1}^{K_{s}}\mathbb{E}\left[\bar{\Gamma}_{p,i}\left(\bar{\mathbf{f}}\right)\right]+\sum_{j=1}^{K_{t}}\mathbb{E}\left[\bar{\Gamma}_{p,j}\left(\bar{\mathbf{v}}\right)\right], \tag{49}\]
where
\[\mathbb{E}\big{[}\bar{\Gamma}_{p,u}\left(\bar{\mathbf{f}}\right)\big{]}=\mathbb{E}\left[\log_{2}\left(\frac{2^{\frac{\hat{\epsilon}}{K_{s}}}\,\bar{\mathbf{f}}^{\mathsf{H}}\left(\mathbf{S}_{p,u}^{\text{sat}}+\mathbf{U}_{p,u}^{\text{sat}}\right)\bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\mathsf{H}}\mathbf{U}_{p,u}^{\text{sat}}\bar{\mathbf{f}}\cdot\left\{\prod_{j=1}^{K_{t}}\bar{\mathbf{f}}^{\mathsf{H}}\left(\mathbf{C}_{p,j}^{\text{bs}}+\epsilon_{j}\mathbf{I}_{M(K_{s}+1)}\right)\bar{\mathbf{f}}\right\}^{\frac{1}{K_{s}}}}\right)\right], \tag{50}\]
It can be seen that \(\bar{\Gamma}_{p,u}\left(\bar{\mathbf{f}}\right)\) and \(\bar{\Gamma}_{p,k}\left(\bar{\mathbf{v}}\right)\) are functions of only \(\bar{\mathbf{f}}\) and \(\bar{\mathbf{v}}\), respectively; thus each can be maximized in a distributed fashion without CSIT sharing. Note that the LEO satellite is required to have knowledge of \(\epsilon_{k}\) and \(\hat{\epsilon}\) to maximize \(\bar{\Gamma}_{p,u}\left(\bar{\mathbf{f}}\right)\). We emphasize that \(\epsilon_{k}\) and \(\hat{\epsilon}\) are constants that do not vary over channel realizations. This is because \(\epsilon_{k}\) and \(\hat{\epsilon}\) are obtained by averaging over channel realizations and the designed precoders. For this reason, it is very easy to deliver these values to the LEO satellite, for instance by embedding them into the control channels without incurring much overhead.
**Common stream spectral efficiency decoupling**: We reformulate the common stream spectral efficiency \(\bar{R}_{c,k}(\bar{\mathbf{f}},\bar{\mathbf{v}})\) in (31) in a distributed way. A critical obstacle to decoupling \(\bar{R}_{c,k}(\bar{\mathbf{f}},\bar{\mathbf{v}})\) is that \(\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{c,k}^{\text{bs}}\bar{\mathbf{v}}\) appears in the denominator of \(\bar{R}_{c,k}(\bar{\mathbf{f}},\bar{\mathbf{v}})\). To resolve this, we consider the ergodic spectral efficiency of the common stream for TU \(k\) and its lower bound as follows:
\[\mathbb{E}\left[\bar{R}_{c,k}\left(\bar{\mathbf{f}},\bar{\mathbf{ v}}\right)\right]=\mathbb{E}\left[\log_{2}\left(1+\frac{\bar{\mathbf{f}}^{ \mathsf{H}}\mathbf{S}_{c,k}^{\text{bs}}\bar{\mathbf{f}}}{\bar{\mathbf{v}}^{ \mathsf{H}}\mathbf{U}_{c,k}^{\text{bs}}\bar{\mathbf{v}}+\bar{\mathbf{f}}^{ \mathsf{H}}\mathbf{U}_{c,k}^{\text{bs}}\bar{\mathbf{f}}}\right)\right]\] \[\overset{(a)}{\geq}\mathbb{E}\left[\log_{2}\left(1+\frac{\bar{ \mathbf{f}}^{\mathsf{H}}\mathbf{S}_{c,k}^{\text{bs}}\bar{\mathbf{f}}}{\mathbb{E} \big{[}\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{c,k}^{\text{bs}}\bar{\mathbf{v}} \big{]}+\bar{\mathbf{f}}^{\mathsf{H}}\mathbf{C}_{c,k}^{\text{bs}}\bar{\mathbf{f} }}\right)\right], \tag{52}\]
where the expectation is taken over the randomness associated with the imperfect knowledge of the channel fading process, and \((a)\) is the lower bound obtained by applying Jensen's inequality. In (52), the averaged IUI term becomes a constant, i.e., \(\mathbb{E}\big[\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{c,k}^{\text{bs}}\bar{\mathbf{v}}\big]=\omega_{k}\), which allows us to transform \(\bar{R}_{c,k}\left(\bar{\mathbf{f}},\bar{\mathbf{v}}\right)\), a joint function of \(\bar{\mathbf{f}}\) and \(\bar{\mathbf{v}}\), into \(\bar{\Gamma}_{c,k}\left(\bar{\mathbf{f}}\right)\), a function of \(\bar{\mathbf{f}}\) alone:
\[\mathbb{E}\left[\bar{R}_{c,k}\left(\bar{\mathbf{f}},\bar{\mathbf{v}}\right)\right]\geq\mathbb{E}\left[\bar{\Gamma}_{c,k}\left(\bar{\mathbf{f}}\right)\right]=\mathbb{E}\left[\log_{2}\left(\frac{\bar{\mathbf{f}}^{\mathsf{H}}\left(\mathbf{S}_{c,k}^{\text{bs}}+\mathbf{C}_{c,k}^{\text{bs}}+\omega_{k}\mathbf{I}_{M\left(K_{s}+1\right)}\right)\bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\mathsf{H}}\left(\mathbf{C}_{c,k}^{\text{bs}}+\omega_{k}\mathbf{I}_{M\left(K_{s}+1\right)}\right)\bar{\mathbf{f}}}\right)\right]. \tag{53}\]
By doing this, provided that \(\omega_{k}\) is known to the LEO satellite, it is feasible to maximize the common stream spectral efficiency at the LEO satellite without sharing CSIT. Similar to \(\epsilon_{k}\) and \(\hat{\epsilon}\), \(\omega_{k}\) is a constant; thereby it is relatively effortless to deliver \(\omega_{k}\) to the LEO satellite.
**Distributed problem formulation**: Up to this point, we have transformed \(\mathbb{E}\left[\bar{R}_{c,k}(\bar{\mathbf{f}},\bar{\mathbf{v}})\right]\) and \(\mathbb{E}\left[\bar{R}_{p,k}\left(\bar{\mathbf{f}},\bar{\mathbf{v}}\right)\right]\) into \(\mathbb{E}\left[\bar{\Gamma}_{c,k}\left(\bar{\mathbf{f}}\right)\right]\) and \(\mathbb{E}\left[\bar{\Gamma}_{p,k}\left(\bar{\mathbf{v}}\right)\right]\), for which distributed precoding optimization is feasible thanks to our spectral efficiency decoupling. Based on this, we reformulate our main problem \(\mathcal{P}_{1}\) in (28). We first recall that the expectation considered in the ergodic spectral efficiencies (49) and (53) is associated with the randomness of the imperfect knowledge of the channel fading process, i.e., \(\mathbb{E}_{\{\bar{\mathbf{g}}_{u},\hat{\mathbf{z}}_{k},\hat{\mathbf{h}}_{k}\}}\left[\cdot\right]\). Since the LEO satellite and the terrestrial BS are assumed to be aware of the estimated CSIT \(\bar{\mathbf{g}}_{u},\hat{\mathbf{z}}_{k}\) (the satellite channels) and \(\hat{\mathbf{h}}_{k}\) (the terrestrial channel), maximizing the ergodic spectral efficiency is equivalent to maximizing the spectral efficiency using the estimated CSIT given for each channel block. For this reason, we can omit \(\mathbb{E}\left[\cdot\right]\) in (49) and (53). We note that the same concept was presented in [30, 33]. Consequently, we formulate the distributed precoding optimization problem as
\[\mathcal{P}_{2}:\quad\underset{\bar{\mathbf{f}},\bar{\mathbf{v}}}{\text{maximize}}\quad\min_{k\in\mathcal{K}_{t}^{\text{int}},u\in\mathcal{K}_{s}}\left\{\bar{\Gamma}_{c,k}\left(\bar{\mathbf{f}}\right),\bar{R}_{c,u}\left(\bar{\mathbf{f}}\right)\right\}+\sum_{i=1}^{K_{s}}\bar{\Gamma}_{p,i}\left(\bar{\mathbf{f}}\right)+\sum_{j=1}^{K_{t}}\bar{\Gamma}_{p,j}\left(\bar{\mathbf{v}}\right) \tag{54}\]
\[\text{subject to}\quad\|\bar{\mathbf{f}}\|^{2}=1,\;\|\bar{\mathbf{v}}\|^{2}=1. \tag{55}\]
It is noteworthy that in \(\mathcal{P}_{2}\), all the spectral efficiencies in the objective function are determined exclusively by \(\bar{\mathbf{f}}\) and \(\bar{\mathbf{v}}\), respectively. This eliminates the necessity of CSIT sharing in solving \(\mathcal{P}_{2}\). Furthermore, the precoding vectors \(\bar{\mathbf{f}},\bar{\mathbf{v}}\) can be normalized by dividing the numerator and denominator of each Rayleigh quotient by \(\|\bar{\mathbf{f}}\|^{2}\) and \(\|\bar{\mathbf{v}}\|^{2}\), respectively, without affecting the objective function. For this reason, the constraint (55) can be omitted when solving (54).
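The scale-invariance argument can also be checked numerically. The following minimal Python sketch is illustrative only: the random positive-definite matrices stand in for the \(\mathbf{S}\)- and \(\mathbf{U}\)-type matrices of this paper, and the dimension is an arbitrary placeholder.

```
import numpy as np

rng = np.random.default_rng(0)
n = 8  # placeholder dimension for the stacked precoder

def rand_pd(n):
    # Random Hermitian positive-definite matrix (stand-in for an S- or U-type matrix).
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A @ A.conj().T + n * np.eye(n)

S, U = rand_pd(n), rand_pd(n)
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def rate(f):
    # log2 of a Rayleigh quotient, the form taken by each decoupled spectral-efficiency term.
    num = np.real(f.conj() @ (S + U) @ f)
    den = np.real(f.conj() @ U @ f)
    return np.log2(num / den)

# Rescaling f leaves the objective unchanged, so the unit-norm constraint is inactive.
print(np.isclose(rate(f), rate(3.7 * f)))  # True
```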
**Remark 1**.: (Interference report mechanism) To solve \(\mathcal{P}_{2}\), \(\epsilon_{k}\), \(\hat{\epsilon}\), and \(\omega_{k}\) must be reported to the satellite. Depending on the reporting frequency, we consider the following three cases.
i) **Average report mechanism**: In this case, TUs report \(\epsilon_{k}\), \(\hat{\epsilon}\), and \(\omega_{k}\) to the LEO satellite once. To this end, we first generate \(\mathbf{U}_{p,k}^{\text{bs}}\) and \(\mathbf{U}_{c,k}^{\text{bs}}\) in a Monte-Carlo fashion. Then, we design \(\bar{\mathbf{v}}\) accordingly and calculate \(\mathbb{E}\left[\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{p,k}^{\text{bs}}\bar{\mathbf{v}}\right]=\epsilon_{k}\) and \(\mathbb{E}\left[\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{c,k}^{\text{bs}}\bar{\mathbf{v}}\right]=\omega_{k}\) by averaging the results (a minimal numerical sketch of this averaging step is given after this remark). Finally, these averaged values are reported to the LEO satellite. This method corresponds to the case that we mainly explain in Section III.
ii) **Instantaneous report mechanism**: In this case, the report is carried out every channel coherence block. Specifically, in the Average report mechanism, a performance loss is inevitable since the precoding vector \(\bar{\mathbf{f}}\) cannot reflect the instantaneous interference environment and only exploits the averaged value. To compensate for this, we report the instantaneous received signal power \(\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{c,k}^{\text{bs}}\bar{\mathbf{v}}\) to the LEO satellite. Since the instantaneous received signal power \(\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{c,k}^{\text{bs}}\bar{\mathbf{v}}\) depends on the instantaneous channel realization, the TUs are required to deliver it every channel block, which consumes more overhead compared to the Average report mechanism.
iii) **Zero report mechanism**: In this case, no report is made, and we set \(\epsilon_{k}=0\) and \(\omega_{k}=0\). While this mechanism has the advantage of not consuming overhead, it comes with the drawback of not managing interference effectively, leading to some performance degradation.
In Section V, we numerically compare the spectral efficiency performance of the three report mechanisms.
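As referenced in case i) above, the following toy Python sketch illustrates how a constant of the form \(\epsilon_{k}=\mathbb{E}\big[\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{p,k}^{\text{bs}}\bar{\mathbf{v}}\big]\) could be estimated by Monte-Carlo averaging. The Rayleigh channel model, the MRT precoder, and the dimensions below are placeholder assumptions for illustration, not the exact constructions of this paper.

```
import numpy as np

rng = np.random.default_rng(1)
n, num_samples = 6, 1000  # placeholder precoder dimension and sample count

acc = 0.0
for _ in range(num_samples):
    # Draw a channel realization and build a toy interference matrix U = h h^H.
    h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    U = np.outer(h, h.conj())
    # Design a unit-norm precoder from the same realization (MRT, for simplicity).
    v = h / np.linalg.norm(h)
    acc += np.real(v.conj() @ U @ v)

eps_k = acc / num_samples  # a single scalar, reported to the satellite once
print(eps_k)
```

Since the output is a single scalar per user, the associated reporting overhead is independent of the channel coherence time.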
## IV Precoder Optimization with Generalized Power Iteration
In this section, we propose a GPI-based algorithm to solve \(\mathcal{P}_{2}\).
### _LSE Approximation_
One challenge in solving \(\mathcal{P}_{2}\) is the non-smoothness of the minimum function \(\min_{k\in\mathcal{K}_{t}^{\text{int}},u\in\mathcal{K}_{s}}\left\{\bar{\Gamma}_{c,k}\left(\bar{\mathbf{f}}\right),\bar{R}_{c,u}\left(\bar{\mathbf{f}}\right)\right\}\). To resolve this, we approximate the non-smooth minimum function in (54) by a smooth function using the LSE technique [34]:
\[\min_{k\in\mathcal{K}_{t}^{\text{int}},u\in\mathcal{K}_{s}}\left\{\bar{\Gamma}_{c,k}\left(\bar{\mathbf{f}}\right),\bar{R}_{c,u}\left(\bar{\mathbf{f}}\right)\right\}\simeq-\mu\log\left[\frac{1}{K^{\text{sat}}}\left\{\sum_{k\in\mathcal{K}_{t}^{\text{int}}}\exp\left(\frac{\bar{\Gamma}_{c,k}\left(\bar{\mathbf{f}}\right)}{-\mu}\right)+\sum_{u\in\mathcal{K}_{s}}\exp\left(\frac{\bar{R}_{c,u}\left(\bar{\mathbf{f}}\right)}{-\mu}\right)\right\}\right], \tag{56}\]
where \(\mu\) controls the accuracy of the LSE approximation: as \(\mu\) decreases, (56) becomes tighter, at the cost of a less smooth objective.
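This tightness behavior is easy to verify numerically. The short Python sketch below (the rate values are arbitrary placeholders) evaluates the smooth minimum of (56) for decreasing \(\mu\):

```
import numpy as np

x = np.array([1.0, 2.5, 4.0])  # placeholder per-user common-stream rates

def lse_min(x, mu):
    # Smooth approximation of min(x) as in (56); the 1/len(x) factor plays
    # the role of the 1/K^sat normalization.
    return -mu * np.log(np.mean(np.exp(x / -mu)))

for mu in (2.0, 0.5, 0.1, 0.01):
    print(mu, lse_min(x, mu))  # approaches min(x) = 1.0 as mu decreases
```

This leads to the following problem: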
\[\mathcal{P}_{3}:\quad\underset{\bar{\mathbf{f}},\bar{\mathbf{v}}}{\text{maximize}}\quad-\mu\log\left[\frac{1}{K^{\text{sat}}}\left\{\sum_{k\in\mathcal{K}_{t}^{\text{int}}}\exp\left(\frac{\bar{\Gamma}_{c,k}\left(\bar{\mathbf{f}}\right)}{-\mu}\right)+\sum_{u\in\mathcal{K}_{s}}\exp\left(\frac{\bar{R}_{c,u}\left(\bar{\mathbf{f}}\right)}{-\mu}\right)\right\}\right]+\sum_{i=1}^{K_{s}}\bar{\Gamma}_{p,i}\left(\bar{\mathbf{f}}\right)+\sum_{j=1}^{K_{t}}\bar{\Gamma}_{p,j}\left(\bar{\mathbf{v}}\right). \tag{57}\]
We denote the objective of \(\mathcal{P}_{3}\) as the Lagrangian function
\[f\left(\bar{\mathbf{f}},\bar{\mathbf{v}}\right)=\underbrace{-\mu\log\left[\frac{1}{K^{\text{sat}}}\left(\sum_{k\in\mathcal{K}_{t}^{\text{int}}}\exp\left(\frac{\bar{\Gamma}_{c,k}\left(\bar{\mathbf{f}}\right)}{-\mu}\right)+\sum_{u\in\mathcal{K}_{s}}\exp\left(\frac{\bar{R}_{c,u}\left(\bar{\mathbf{f}}\right)}{-\mu}\right)\right)\right]+\sum_{i=1}^{K_{s}}\bar{\Gamma}_{p,i}\left(\bar{\mathbf{f}}\right)}_{f_{1}\left(\bar{\mathbf{f}}\right)}+\underbrace{\sum_{j=1}^{K_{t}}\bar{\Gamma}_{p,j}\left(\bar{\mathbf{v}}\right)}_{f_{2}\left(\bar{\mathbf{v}}\right)}. \tag{58}\]
_First, for \(\bar{\mathbf{f}}\), we calculate the partial derivative of \(f_{1}\left(\bar{\mathbf{f}}\right)\) with respect to \(\bar{\mathbf{f}}\):_
\[\frac{\partial f_{1}\left(\bar{\mathbf{f}}\right)}{\partial\bar{\mathbf{f}}}=\frac{1}{\log 2}\sum_{u=1}^{K_{s}}\left(\frac{\left(\mathbf{S}_{p,u}^{\text{sat}}+\mathbf{U}_{p,u}^{\text{sat}}\right)\bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\mathsf{H}}\left(\mathbf{S}_{p,u}^{\text{sat}}+\mathbf{U}_{p,u}^{\text{sat}}\right)\bar{\mathbf{f}}}-\frac{\mathbf{U}_{p,u}^{\text{sat}}\bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\mathsf{H}}\mathbf{U}_{p,u}^{\text{sat}}\bar{\mathbf{f}}}-\frac{1}{K_{s}}\sum_{j=1}^{K_{s}}\frac{\left(\mathbf{U}_{p,j}^{\text{sat}}+\epsilon_{j}\mathbf{I}_{M\left(K_{s}+1\right)}\right)\bar{\mathbf{f}}}{\bar{\mathbf{f}}^{\mathsf{H}}\left(\mathbf{U}_{p,j}^{\text{sat}}+\epsilon_{j}\mathbf{I}_{M\left(K_{s}+1\right)}\right)\bar{\mathbf{f}}}\right)+\frac{\sum_{k\in\mathcal{K}_{t}^{\text{int}}}\exp\left(\frac{\bar{\Gamma}_{c,k}\left(\bar{\mathbf{f}}\right)}{-\mu}\right)\frac{\partial\bar{\Gamma}_{c,k}\left(\bar{\mathbf{f}}\right)}{\partial\bar{\mathbf{f}}}+\sum_{u\in\mathcal{K}_{s}}\exp\left(\frac{\bar{R}_{c,u}\left(\bar{\mathbf{f}}\right)}{-\mu}\right)\frac{\partial\bar{R}_{c,u}\left(\bar{\mathbf{f}}\right)}{\partial\bar{\mathbf{f}}}}{\sum_{j\in\mathcal{K}_{t}^{\text{int}}}\exp\left(\frac{\bar{\Gamma}_{c,j}\left(\bar{\mathbf{f}}\right)}{-\mu}\right)+\sum_{i\in\mathcal{K}_{s}}\exp\left(\frac{\bar{R}_{c,i}\left(\bar{\mathbf{f}}\right)}{-\mu}\right)}\;, \tag{59}\]
_As a result, the first-order KKT condition of \(f_{1}\left(\bar{\mathbf{f}}\right)\) regarding \(\bar{\mathbf{f}}\) is satisfied if the following holds:_
\[\mathbf{A}\left(\bar{\mathbf{f}}\right)\bar{\mathbf{f}}=\lambda^{\text{sat}}\left(\bar{\mathbf{f}}\right)\mathbf{B}\left(\bar{\mathbf{f}}\right)\bar{\mathbf{f}}\;\Leftrightarrow\;\mathbf{B}^{-1}\left(\bar{\mathbf{f}}\right)\mathbf{A}\left(\bar{\mathbf{f}}\right)\bar{\mathbf{f}}=\lambda^{\text{sat}}\left(\bar{\mathbf{f}}\right)\bar{\mathbf{f}}. \tag{60}\]
_Second, for \(\bar{\mathbf{v}}\), we follow a similar approach as above. We calculate the partial derivative of \(f_{2}\left(\bar{\mathbf{v}}\right)\), and then, use the first-order KKT condition, i.e., \(\frac{\partial f_{2}\left(\bar{\mathbf{v}}\right)}{\partial\bar{\mathbf{v}}}=0\). As a result, the first-order KKT condition of \(f_{2}\left(\bar{\mathbf{v}}\right)\) regarding \(\bar{\mathbf{v}}\) is satisfied if the following holds:_
\[\mathbf{D}^{-1}\left(\bar{\mathbf{v}}\right)\mathbf{C}\left(\bar{\mathbf{v}} \right)\bar{\mathbf{v}}=\lambda^{bs}\left(\bar{\mathbf{v}}\right)\bar{\mathbf{ v}}, \tag{61}\]
_where \(\lambda^{bs}\left(\bar{\mathbf{v}}\right)\), \(\mathbf{C}\left(\bar{\mathbf{v}}\right)\) and \(\mathbf{D}\left(\bar{\mathbf{v}}\right)\) are given by_
\[\lambda^{\text{bs}}\left(\bar{\mathbf{v}}\right)=\prod_{j=1}^{K_{t}}2^{\bar{\Gamma}_{p,j}\left(\bar{\mathbf{v}}\right)}=\frac{\lambda_{1}^{\text{bs}}\left(\bar{\mathbf{v}}\right)}{\lambda_{2}^{\text{bs}}\left(\bar{\mathbf{v}}\right)}, \tag{62}\]
\[\mathbf{C}\left(\bar{\mathbf{v}}\right)=\lambda_{1}^{\text{bs}}\left(\bar{\mathbf{v}}\right)\sum_{j=1}^{K_{t}}\left(\frac{\mathbf{S}_{p,j}^{\text{bs}}}{\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{S}_{p,j}^{\text{bs}}\bar{\mathbf{v}}}\right),\quad\mathbf{D}\left(\bar{\mathbf{v}}\right)=\lambda_{2}^{\text{bs}}\left(\bar{\mathbf{v}}\right)\sum_{j=1}^{K_{t}}\left(\frac{\mathbf{U}_{p,j}^{\text{bs}}}{\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{p,j}^{\text{bs}}\bar{\mathbf{v}}}\right). \tag{63}\]
In Lemma 1, the first important observation is that the optimality conditions are decoupled into two parts, each of which is solely related to \(\bar{\mathbf{f}}\) and \(\bar{\mathbf{v}}\), respectively. To be specific, differentiating the Lagrangian function \(f\left(\bar{\mathbf{f}},\bar{\mathbf{v}}\right)\) in (58) with respect to \(\bar{\mathbf{f}}\), we have \(\frac{\partial f\left(\bar{\mathbf{f}},\bar{\mathbf{v}}\right)}{\partial\bar{\mathbf{f}}}=g_{1}\left(\bar{\mathbf{f}}\right)\), where \(g_{1}\left(\bar{\mathbf{f}}\right)\) is independent of \(\bar{\mathbf{v}}\). Similarly, we have \(\frac{\partial f\left(\bar{\mathbf{f}},\bar{\mathbf{v}}\right)}{\partial\bar{\mathbf{v}}}=g_{2}\left(\bar{\mathbf{v}}\right)\), where \(g_{2}\left(\bar{\mathbf{v}}\right)\) is independent of \(\bar{\mathbf{f}}\). This is a consequence of our decoupling process described in Section III. Thanks to this decoupling, the functions related to \(\bar{\mathbf{f}}\) and \(\bar{\mathbf{v}}\) can be independently computed in a distributed way.
Next we explain how to reach a local optimal point for each of \(\bar{\mathbf{f}}\) and \(\bar{\mathbf{v}}\). Note that the first-order KKT conditions (60) (regarding \(\bar{\mathbf{f}}\)) and (61) (regarding \(\bar{\mathbf{v}}\)) are cast in the form of generalized nonlinear eigenvalue problems. Specifically, in (60), \(\bar{\mathbf{f}}\) behaves as an eigenvector of the eigenvector-dependent matrix \(\mathbf{B}^{-1}\left(\bar{\mathbf{f}}\right)\mathbf{A}\left(\bar{\mathbf{f}}\right)\), and in (61), \(\bar{\mathbf{v}}\) acts as an eigenvector of the eigenvector-dependent matrix \(\mathbf{D}^{-1}\left(\bar{\mathbf{v}}\right)\mathbf{C}\left(\bar{\mathbf{v}}\right)\). In this relation, the eigenvalue \(\lambda^{\text{sat}}\left(\bar{\mathbf{f}}\right)\) corresponds to the \(\bar{\mathbf{f}}\)-related term \(f_{1}\left(\bar{\mathbf{f}}\right)\) of the Lagrangian function, while the eigenvalue \(\lambda^{\text{bs}}\left(\bar{\mathbf{v}}\right)\) given by (62) corresponds to the \(\bar{\mathbf{v}}\)-related term \(f_{2}\left(\bar{\mathbf{v}}\right)\). As a result, if we find the principal eigenvectors of (60) and (61), then we reach the maximum stationary point. By doing this, we maximize the objective function of (57). Note that this generalized nonlinear eigenvalue problem differs from the classical eigenvalue problem in that the matrix depends on its eigenvector itself. In what follows, we propose an algorithm to find the principal eigenvectors of (60) and (61).
### _STIN-GPI_
We propose a STIN-GPI algorithm based on the method developed in [30]. The proposed algorithm iteratively updates \(\bar{\mathbf{f}}\) and \(\bar{\mathbf{v}}\) by using the power iteration, while the matrices are computed with the previously obtained \(\bar{\mathbf{f}}\) and \(\bar{\mathbf{v}}\). In our method, STIN-GPI consists of two stages, where the first stage is performed at the LEO satellite for designing \(\bar{\mathbf{f}}\) and the second stage is performed at the terrestrial BS for designing \(\bar{\mathbf{v}}\). We mention that an improper choice of the LSE parameter \(\mu\) may distort the smoothed objective, resulting in slow convergence. To address this, we adjust \(\mu\) to facilitate convergence. For instance, we can incrementally decrease \(\mu\) if the algorithm does not converge within a predetermined number of iterations.
**Remark 2**.: (Algorithm complexity) In a big-O sense, the total computational complexity of the proposed STIN-GPI algorithm is dominated by the calculation of \(\mathbf{B}^{-1}(\bar{\mathbf{f}})\) and \(\mathbf{D}^{-1}(\bar{\mathbf{v}})\). The matrix \(\mathbf{B}(\bar{\mathbf{f}})\) is expressed as the sum of the block diagonal matrices of \(\{\mathbf{S}_{c,k}^{\text{bs}},\mathbf{C}_{c,k}^{\text{bs}},\mathbf{S}_{c,t}^{\text{sat}},\mathbf{U}_{c,t}^{\text{sat}}\}\in\mathbb{C}^{M(K_{s}+1)\times M(K_{s}+1)}\). Since \(\mathbf{B}^{-1}(\bar{\mathbf{f}})\) can be calculated using the inverse of each submatrix, the total computational complexity is of order \(\mathcal{O}\big(\frac{1}{3}\left(K_{s}+1\right)M^{3}\big)\). Similarly, in stage 2, the computational complexity of the inverse matrix \(\mathbf{D}^{-1}\left(\bar{\mathbf{v}}\right)\) is \(\mathcal{O}\big(\frac{1}{3}K_{t}N^{3}\big)\). For this reason, the complexity of the proposed STIN-GPI per iteration scales with the order of \(\mathcal{O}\big(\frac{1}{3}\left(K_{s}M^{3}+K_{t}N^{3}\right)\big)\) when \(K_{s},M,K_{t}\) and \(N\) increase with the same order.
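The complexity claim rests on the fact that a block-diagonal matrix can be inverted block by block. The short Python sketch below illustrates this; the block count and size are hypothetical, and scipy is used only to assemble the example.

```
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(2)
K, M = 4, 50  # hypothetical number of blocks and per-block size

blocks = []
for _ in range(K):
    A = rng.standard_normal((M, M))
    blocks.append(A @ A.T + M * np.eye(M))  # positive-definite block

B = block_diag(*blocks)  # (K*M) x (K*M) block-diagonal matrix

# Inverting block-by-block costs O(K * M^3) instead of O((K*M)^3).
B_inv_blockwise = block_diag(*[np.linalg.inv(b) for b in blocks])

print(np.allclose(B_inv_blockwise, np.linalg.inv(B)))  # True
```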
**Remark 3**.: (Generalization to multi-layer RS strategy) Even though we only consider the 1-layer RS strategy, it is possible to generalize it to the multi-layer RS strategy. For instance, we can construct three types of common streams, such as \(s_{c}\), \(s_{c}^{\text{sat}}\) and \(s_{c}^{\text{bs}}\). Then, we let \(s_{c}^{\text{sat}}\) be decoded by the SUs in \(\mathcal{K}_{s}\) and \(s_{c}^{\text{bs}}\) be decoded by the TUs in \(\mathcal{K}_{t}\) (in addition to \(s_{c}\)). By doing this, we efficiently mitigate the IUI among users as well as the ICI, which leads to improved performance. To design the precoders in such multi-layer RS strategy cases, we can extend our scenario by additionally constructing \(\mathbf{f}_{c}^{\text{sat}}\in\mathbb{C}^{M}\) and \(\mathbf{f}_{c}^{\text{bs}}\in\mathbb{C}^{N}\), and then reassembling them into a higher-dimensional vector as described in (30). For conciseness, we only focus on the 1-layer RS strategy.
```
Initialize: \bar{\mathbf{f}}^{(0)} = MRT, \bar{\mathbf{v}}^{(0)} = MRT, t = 1
repeat
  Stage 1. LEO Satellite Beamforming Design
  repeat
    Calculate \mathbf{A}(\bar{\mathbf{f}}^{(t-1)}), \mathbf{B}(\bar{\mathbf{f}}^{(t-1)})
    Obtain \bar{\mathbf{f}}^{(t)} \leftarrow \mathbf{B}^{-1}(\bar{\mathbf{f}}^{(t-1)})\mathbf{A}(\bar{\mathbf{f}}^{(t-1)})\bar{\mathbf{f}}^{(t-1)} / \|\mathbf{B}^{-1}(\bar{\mathbf{f}}^{(t-1)})\mathbf{A}(\bar{\mathbf{f}}^{(t-1)})\bar{\mathbf{f}}^{(t-1)}\|
  until \|\bar{\mathbf{f}}^{(t)} - \bar{\mathbf{f}}^{(t-1)}\| < \zeta
  Stage 2. Terrestrial BS Beamforming Design
  repeat
    Calculate \mathbf{C}(\bar{\mathbf{v}}^{(t-1)}), \mathbf{D}(\bar{\mathbf{v}}^{(t-1)})
    Obtain \bar{\mathbf{v}}^{(t)} \leftarrow \mathbf{D}^{-1}(\bar{\mathbf{v}}^{(t-1)})\mathbf{C}(\bar{\mathbf{v}}^{(t-1)})\bar{\mathbf{v}}^{(t-1)} / \|\mathbf{D}^{-1}(\bar{\mathbf{v}}^{(t-1)})\mathbf{C}(\bar{\mathbf{v}}^{(t-1)})\bar{\mathbf{v}}^{(t-1)}\|
  until \|\bar{\mathbf{v}}^{(t)} - \bar{\mathbf{v}}^{(t-1)}\| < \zeta
  t \leftarrow t + 1
until t = t^{max}
return \bar{\mathbf{f}}^{(t^{max})}, \bar{\mathbf{v}}^{(t^{max})}
```
**Algorithm 1** STIN-GPI algorithm
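To make the update rule of Algorithm 1 concrete, here is a minimal Python sketch of one GPI stage. The eigenvector-dependent matrices below are toy stand-ins for \(\mathbf{A}(\bar{\mathbf{f}})\) and \(\mathbf{B}(\bar{\mathbf{f}})\), chosen only so that the fixed-point iteration is well defined; they are not the paper's actual constructions, which follow from (59).

```
import numpy as np

rng = np.random.default_rng(3)
n, zeta, t_max = 10, 1e-8, 1000  # placeholder dimension, tolerance, iteration cap

def rand_pd(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A @ A.conj().T + n * np.eye(n)

S, U = rand_pd(n), rand_pd(n)

def A_of(f):  # toy eigenvector-dependent "numerator" matrix
    return S / np.real(f.conj() @ S @ f) + np.eye(n)

def B_of(f):  # toy eigenvector-dependent "denominator" matrix
    return U / np.real(f.conj() @ U @ f) + np.eye(n)

f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
f /= np.linalg.norm(f)

for _ in range(t_max):
    f_new = np.linalg.solve(B_of(f), A_of(f) @ f)  # B^{-1}(f) A(f) f
    f_new /= np.linalg.norm(f_new)                 # normalize, as in Algorithm 1
    converged = np.linalg.norm(f_new - f) < zeta
    f = f_new
    if converged:
        break

print(f[:3])  # estimate of the principal eigenvector of B^{-1}(f) A(f)
```

The terrestrial-BS stage runs the same loop with \(\mathbf{D}^{-1}(\bar{\mathbf{v}})\mathbf{C}(\bar{\mathbf{v}})\) in place of \(\mathbf{B}^{-1}(\bar{\mathbf{f}})\mathbf{A}(\bar{\mathbf{f}})\).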
## V Numerical Results
In this section, we evaluate the spectral efficiency performance of the proposed method via numerical simulations. To this end, we consider the following STIN simulation scenario. The STIN uses the Ka-band as its operating band [21, 25, 35]. The radius of the LEO satellite coverage region is 500km, while the radius of the terrestrial BS coverage is 50km and the height of the terrestrial BS is 30m. SUs and TUs are uniformly distributed within their respective coverage areas. The simulation setups are: \(f_{c}=20\text{GHz}\), \(B_{w}=800\text{MHz}\), \(d_{c}^{\text{sat}}=1000\text{km}\), \(G_{\text{sat}}=6\text{dBi}\), \(G_{u}=0\text{dBi}\), \(d_{x}^{\text{sat}}=d_{y}^{\text{sat}}=\frac{\lambda}{2}\), \(L_{t}=10\), \(M_{1}=5\), \(M_{2}=5\), \(N_{1}=3\), \(N_{2}=3\), \(K_{s}=10\), \(K_{t}=3\), \(K_{t}^{\text{int}}=1\), \(\mu=0.1\), \(\tau^{\text{pi}}=2\), \(\rho=4\), \(\zeta=0.01\) and \(t^{\text{max}}=1000\), unless mentioned otherwise. As baseline methods, we consider the following:
* **Coordinated Precoding with RS (Coord-RS)**[5]: In this method, we find the optimal precoder to solve \(\mathcal{P}_{1}\) in a coordinated fashion, i.e., by sharing CSIT, while incorporating the RS strategy. The basic setup of Coord-RS corresponds to [5], except that [5] maximizes the minimum spectral efficiency while Coord-RS maximizes the sum spectral efficiency.
* **SILNR Max**[13]: This method adopts the SILNR instead of the exact SINR for distributed design.
* **IUI-ICI Separation**[12]: This method decouples the IUI and the ICI based on the high SNR assumption. Subsequently, the sum spectral efficiency maximization is performed without using the RS strategy.
* **SLNR Max**[6]: The SLNR is adopted as an alternative of the exact SINR.
* **Local ZF**[36]: This method not only mitigates the IUI using the classical zero-forcing (ZF) but also mitigates the ICI; this is done by projecting a precoding vector to null space of IUI and ICI by using the remaining spatial degrees-of-freedom.
* **Single-cell ZF** : This method is classical ZF that only suppresses the IUI.
We clarify that the proposed STIN-GPI, as well as SILNR Max, IUI-ICI Separation, SLNR Max, Local ZF, and Single-cell ZF, are distributed methods that do not require CSIT sharing, while Coord-RS is a coordinated method. For the proposed STIN-GPI method, we consider three different versions depending on the report mechanism:
* **STIN-GPI-Ins.** : As mentioned in Remark 1, the TUs in \(\mathcal{K}_{t}^{\text{int}}\) report the instantaneous received signal power \(\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{c,k}^{\text{bs}}\bar{\mathbf{v}}\) to the LEO satellite, so that the STIN-GPI exploits the exact instantaneous interference information.
* **STIN-GPI-Avg.** : In (46) and (53), we repeatedly draw the channel vector for TU \(k\) and use it to generate 1000 samples of \(\mathbf{U}_{p,k}^{\text{bs}}\) and \(\mathbf{U}_{c,k}^{\text{bs}}\). Then, we design \(\bar{\mathbf{v}}\), multiply it with \(\mathbf{U}_{p,k}^{\text{bs}}\) and \(\mathbf{U}_{c,k}^{\text{bs}}\), and finally average to obtain \(\mathbb{E}\big[\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{p,k}^{\text{bs}}\bar{\mathbf{v}}\big]=\epsilon_{k}\) and \(\mathbb{E}\big[\bar{\mathbf{v}}^{\mathsf{H}}\mathbf{U}_{c,k}^{\text{bs}}\bar{\mathbf{v}}\big]=\omega_{k}\). As a result, the STIN-GPI uses the averaged interference information.
* **STIN-GPI-Zero** : This method forcibly decouples the functions for \(\bar{\mathbf{f}}\) and \(\bar{\mathbf{v}}\) by assuming zero IUI (\(\epsilon_{k}=0\)) and zero ICI (\(\omega_{k}=0\)). In this case, the STIN-GPI does not use the interference information.
The simulation results are demonstrated as follows:
**Link-level simulation**: Fig. 2(a) shows the ergodic sum spectral efficiency versus SNR. We first observe that, among the proposed methods, the spectral efficiency performance order is STIN-GPI-Ins. \(>\) STIN-GPI-Avg. \(>\) STIN-GPI-Zero. This is reasonable because STIN-GPI-Ins. exploits exact instantaneous interference information, while STIN-GPI-Avg. only uses the averaged one and STIN-GPI-Zero does not use any information in designing precoders. However, STIN-GPI-Ins. requires instantaneous interference reporting from the TUs, incurring a substantial amount of overhead. Unlike this, STIN-GPI-Avg. merely needs to deliver the constants \(\epsilon_{k}\) and \(\omega_{k}\), which do not vary over channels, so the associated overheads are significantly smaller than those of STIN-GPI-Ins. Considering the trade-off between the performance gains and the associated overheads, STIN-GPI-Avg. is the most favorable option. Notably, Fig. 2(a) shows that STIN-GPI-Avg. offers around 20% and 29% gains at SNR = 30dB over the SILNR Max and the SLNR Max, respectively. These gains stem from two perspectives: i) Our method enables the TUs to perform SIC by using the RS strategy. If the interference from the LEO satellite persists due to imperfect CSIT estimation, the TUs encounter severe performance deterioration. Through the RS strategy, TUs can eliminate this interference by means of SIC, by which significant spectral efficiency gains are attainable. ii) In the decoupling process of our method, we take the average of the IUI term to disentangle the ICI term from the sum spectral efficiency. This careful treatment establishes a lower bound on the spectral efficiency that properly captures the influence of the interference. In the SLNR approach, only the leakage term is considered, which cannot properly account for the effect of the interference. This yields the spectral efficiency improvement of our method.
**System-level simulation**: We conduct system-level simulations. Fig. 2(b) shows the cumulative distribution function (CDF) of the per-user spectral efficiency. Compared to the SILNR Max, IUI-ICI Separation and SLNR Max methods, the 95-percentile user's spectral efficiency with STIN-GPI-Avg. increases by about \(37\%,40\%\) and \(44\%\), respectively. Furthermore, we observe a performance difference of approximately 5% in the 95-percentile user's spectral efficiency between Coord-RS and STIN-GPI-Avg. However, in the case of STIN-GPI-Ins., the performance difference in the 95-percentile user's spectral efficiency compared to the Coord-RS method reduces to 3%. This means that our method, assisted by instantaneous reporting, achieves performance comparable to that of the coordinated approach even in the absence of CSIT sharing.
**Per \(K_{t}^{\text{int}}\) and \(M\)**: Fig. 3(a) compares the sum spectral efficiency versus the number of TUs \(K_{t}^{\text{int}}\) that experience interference from the LEO satellite. By increasing the radius of the LEO satellite coverage area to 500km, 520km, and 550km, \(K_{t}^{\text{int}}\) is set to increase from 1 to 3. We observe that our proposed STIN-GPI-Ins. method worsens by 7%, while other methods (SILNR Max, IUI-ICI Separation, SLNR Max) that do not use the RS strategy worsen by 28%, 33%, and 39%, respectively. The rationale behind these gains lies in the RS strategy's capability to effectively mitigate ICI, making it robust against the increased ICI.
Fig. 2: Link-level and system-level simulations between the proposed distributed STIN-GPI and baseline methods. (a) Comparison of sum spectral efficiency per SNR. (b) Comparison of CDF per user’s spectral efficiency.
Fig. 3: Comparison of sum spectral efficiency among different strategies under the assumption of SNR = 15dB (a) per the number of TUs experiencing interference from the satellite (\(K_{t}^{\text{int}}\)), (b) per the number of LEO satellite antennas \(M=16,25,36,49,64\).
To shed light on the performance in the large-scale antenna regime, we show the sum spectral efficiency versus the number of LEO satellite antennas in Fig. 3(b). The figure shows that the proposed STIN-GPI-Avg. method outperforms the SILNR Max, IUI-ICI Separation and SLNR Max methods by around 16%, 22% and 29%, respectively. Furthermore, the proposed STIN-GPI-Ins. method, which applies the instantaneous reporting regime, achieves an additional performance gain of 8%. However, this gap gets smaller as the number of transmit antennas increases, owing to the increasing spatial degrees-of-freedom afforded by a larger number of antennas, which enhances the overall throughput of all methods. Nevertheless, our proposed STIN-GPI methods outperform the methods that do not use the RS strategy in all antenna regimes, indicating that our method is suitable for massive MIMO systems.
## VI Conclusions
In this paper, we propose a novel distributed precoding approach using the RS strategy. Our key idea is to decouple the sum spectral efficiency into two separate terms, each of which is a function only of the satellite's precoder or the terrestrial BS's precoder, respectively. Based on the resulting distributed optimization problem, we approximate the non-smooth objective function by using the LSE technique, and then develop the STIN-GPI algorithm that finds a local optimal point. The simulation results show that the proposed STIN-GPI method achieves around 20 \(\sim\) 29% spectral efficiency gains over the existing distributed precoding methods. In future work, we plan to extend this work by incorporating a multi-satellite and multi-BS environment.
|
2301.13678 | Boson mixing and flavor vacuum in the expanding Universe: a possible
candidate for the dark energy | We analyze the boson mixing in curved spacetime and compute the expectation
value of the energy-momentum tensor of bosons on the flavor vacuum in spatially
flat Friedmann-Lemaitre-Robertson-Walker metrics. We show that the
energy-momentum tensor of the flavor vacuum behaves as the effective
energy-momentum tensor of a perfect fluid. Assuming a fixed de Sitter
background, we show that the equation of state can assume values in the
interval [-1, 1] and, in the flat spacetime limit, takes the value -1, which is
that of the dark energy. The results presented here show that the vacuum of
mixed bosons, such as neutrino super-partners, can represent a possible component of
dark energy of the Universe. | Antonio Capolupo, Aniello Quaranta | 2023-01-31T14:52:13Z | http://arxiv.org/abs/2301.13678v1 | # Boson mixing and flavor vacuum in the expanding Universe: a possible candidate for the dark energy
###### Abstract
We analyze the boson mixing in curved spacetime and compute the expectation value of the energy-momentum tensor of bosons on the flavor vacuum in spatially flat Friedmann-Lemaitre-Robertson-Walker metrics. We show that the energy-momentum tensor of the flavor vacuum behaves as the effective energy-momentum tensor of a perfect fluid. Assuming a fixed de Sitter background, we show that the equation of state can assume values in the interval \([-1,1]\) and, in the flat spacetime limit, takes the value \(-1\), which is that of the dark energy. The results presented here show that the vacuum of mixed bosons, such as neutrino super-partners, can represent a possible component of the dark energy of the Universe.
## I Introduction
One of the most important discoveries of modern cosmology is the observation of the accelerated expansion of the universe [1; 2; 3; 4; 5; 6; 7]. In the context of general relativity, classical sources of matter generate only positive pressures, while an accelerating expansion of the universe requires negative pressures. Therefore, the understanding of cosmic acceleration turns out to be very complicated. This issue is commonly referred to as the dark energy problem. Recent measurements indicate that dark energy contributes 68% of the total energy in the present-day observable universe. In recent years, many different proposals have been suggested as theoretical frameworks for cosmic acceleration. They can basically be divided into three main categories [5]: a first one, in which the gravitational interaction is modified (see e.g. [8; 9; 10; 11; 12; 13; 14; 15; 16; 17]); a second one, in which the underlying geometry of the Universe is modified (see e.g. [18; 19; 20]); and a third one, in which the models for the matter sector of the gravitational field equations are refined and/or modified [21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. Other proposals rely on pure quantum field theoretical effects and the non-trivial structure of the vacuum of flavor particle mixing [31; 32; 33; 34; 35].
Particle mixing and oscillations, which in the fermion sector characterize the evolution of neutrinos, and in the boson sector affect the dynamics of neutral kaons, \(B^{0}\), \(D^{0}\), and the \(\eta-\eta^{\prime}\) system, represent important phenomena of physics beyond the standard model of particles. Indeed, neutrino oscillations and \(K^{0}\)-\(\overline{K}^{0}\) mixing play a crucial role in the analysis of CP violation and in testing the CPT symmetry. Moreover, neutrinos, axions and axion-like particles which oscillate with photons [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47] might constitute a component of the dark matter of the universe. It has also been shown that, in flat spacetime, the vacuum associated with the mixed particles, the flavor vacuum, behaves as a perfect fluid which has an equation of state typical of cold dark matter in the case of fermion mixing, and of dark energy in the case of boson mixing [31; 32]. The study of fermion mixing in curved space, together with the derivation of general oscillation formulae, has been carried out in Refs. [48; 49; 50; 51; 52], and the analysis of the neutrino mixing contribution to the dark matter in a curved background has recently been performed in Ref. [53].
Here, we consider mixed bosons in curved space. We extend the analysis presented in Refs. [31; 32; 33; 34; 35], in which the possible contribution of the flavor vacuum to the dark matter and energy was studied in Minkowski spacetime, and we analyze the behaviour of the bosonic flavor vacuum in the case of a curved background. In a previous work [54] the fundamentals of the quantum field theory of boson mixing in curved space were laid out. Building on this formalism, we consider a generic spatially flat Friedmann-Lemaitre-Robertson-Walker metric. We compute the expectation value of the energy-momentum tensor of free bosons on the vacuum for mixed fields and we show that this tensor is diagonal and satisfies the Bianchi identities. These results are independent of the specific scale factor employed. Therefore, the bosonic flavor vacuum can be effectively considered as a perfect fluid. In particular, assuming a fixed de Sitter background, we show that the adiabatic factor \(w^{(MIX)}\) due to the boson flavor mixing assumes values ranging from \(-1\) to \(1\). On the other hand, in the flat spacetime limit one has \(w^{(MIX)}=-1\), which corresponds to the equation of state of the dark energy. The analysis presented here gives an indication that the energy of the boson flavor vacuum may partially contribute to the cosmological dark energy. Elementary mixed bosons, leading to the aforementioned vacuum energy, may be represented by the neutrino super-partners.
The paper is organized as follows. In section II, we describe the properties of the Klein-Gordon equation in curved spacetime, focusing our attention on the Friedmann-Lemaitre-Robertson-Walker spacetime (FLRW). In section III, we quantize a boson field with definite mass, and we introduce the mixing of two flavor fields. The expectation value of the stress-energy tensor on the flavor vacuum is derived in section IV, and, in section V, we consider the de Sitter spacetime and derive the corresponding exact solution for the components of the stress-energy tensor. Considerations on its regularization are also presented. Section VI is devoted to the conclusions.
## II Klein-Gordon fields in flat FLRW spacetime
We study the cosmological effects of particle mixing in curved spacetime. In particular, we focus our attention on the spatially flat Friedmann-Lemaitre-Robertson-Walker spacetime (FLRW), \(ds^{2}=dt^{2}-C^{2}(t)d\mathbf{x}^{2},\) where \(C(t)\) is the scale factor, since the FLRW metric describes well the expansion of the universe in the present epoch. We start this section by summarizing the solutions of the Klein-Gordon equation in FLRW spacetimes, and use the \((+,-,-,-)\) signature, so that the metric tensor is \(g_{\mu\nu}=\text{diag}\left(1,-C^{2}(t),-C^{2}(t),-C^{2}(t)\right).\) For our purposes, it is useful to express this metric in terms of the conformal time \(\tau\), defined as \(d\tau=\frac{dt}{C(t)}\,.\) The range of the conformal time \(\tau\) corresponding to the coordinate time interval \(t\in\mathbb{R}\) depends on the specific scale factor \(C(t)\) considered. In terms of \(\tau\), the line element reads
\[ds^{2}=C^{2}(\tau)[d\tau^{2}-dx^{2}-dy^{2}-dz^{2}]\,. \tag{1}\]
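As a concrete example, anticipating the de Sitter background considered in section V, for \(C(t)=e^{Ht}\) with constant \(H>0\) the definition \(d\tau=dt/C(t)\) integrates to
\[\tau=-\frac{1}{H}e^{-Ht}\;,\qquad\tau\in(-\infty,0)\;,\]
so that the whole coordinate-time history \(t\in\mathbb{R}\) is mapped onto the negative half-line of conformal time, and \(C(\tau)=-1/(H\tau)\).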
We consider two free charged scalar fields \(\phi_{i}\) with masses \(m_{i}\), for \(i=1,2\), with the minimally coupled Lagrangian (\(\xi=0\))
\[\mathcal{L}=\sum_{i=1,2}\left\{\frac{1}{2}\sqrt{-g}\left[g^{\mu\nu}\partial_{ \mu}\phi_{i}^{\dagger}\partial_{\nu}\phi_{i}-m_{i}^{2}\phi_{i}^{\dagger}\phi_ {i}\right]\right\}\,, \tag{2}\]
from which the Klein-Gordon equations \((\Box+m_{i}^{2})\phi_{i}=0\) are derived. Here the symbol \(\Box\) stands for the curved space d'Alembertian. The scalar product between two solutions \(A\) and \(B\) of the Klein-Gordon equation is defined as
\[(A,B)_{\tau}=i\int_{\Sigma_{\tau}}d\ \Sigma^{\mu}\sqrt{-g}\left(A^{*}\partial_{ \mu}B-\left(\partial_{\mu}A^{*}\right)B\right)=i\int_{\Sigma_{\tau}}d^{3}x \sqrt{-g}g^{\tau\tau}\left(A^{*}\partial_{\tau}B-\left(\partial_{\tau}A^{*} \right)B\right)\, \tag{3}\]
where the last equality holds for hypersurfaces \(\Sigma_{\tau}\) of constant conformal time \(\tau\), on which the integration is to be performed. If \(A\) and \(B\) are solutions of the _same_ Klein-Gordon equation, the scalar product \((A,B)_{\tau}\) is independent of \(\tau\). In general this does not hold true for solutions to Klein-Gordon equations with distinct masses.
The energy-momentum tensor is obtained as usual by varying the action \(S=\int d^{4}x\mathcal{L}\) with respect to the metric; it is given by
\[T_{\mu\nu}=\frac{1}{2}\sum_{i=1,2}\left\{\partial_{\mu}\phi_{i}^{\dagger} \partial_{\nu}\phi_{i}+\partial_{\nu}\phi_{i}^{\dagger}\partial_{\mu}\phi_{i} -g_{\mu\nu}g^{\rho\sigma}\partial_{\rho}\phi_{i}^{\dagger}\partial_{\sigma} \phi_{i}+m_{i}^{2}g_{\mu\nu}\phi_{i}^{\dagger}\phi_{i}\right\} \tag{4}\]
## III Quantization of flavor fields
In the section, we first quantize a single boson field of definite mass, and then we analyze the main properties of two flavor mixed fields.
### Boson field
Let us consider two complete sets of solutions to the Klein-Gordon equations with masses \(m_{i}\)\(\{u_{\boldsymbol{p};i},u_{\boldsymbol{p};i}^{*}\}\). The notation here anticipates that, given the general form of the metric, the spatial part of the equations is solved by plane waves labelled by 3-vectors \(\boldsymbol{p}\). Any solution of the (linear) Klein-Gordon equations can be written as a linear combination of these modes. In particular, the boson fields can be expressed as: \(\phi_{i}(x)=\int d^{3}p\left(a_{\boldsymbol{p};i}u_{\boldsymbol{p};i}+b_{- \boldsymbol{p};i}^{*}u_{-\boldsymbol{p};i}^{*}\right)\), where the spacetime dependence is contained in the modes, while the coefficients are independent of the space and time coordinates. Quantization proceeds in analogy to the Minkowski space, by promoting the fields \(\phi_{i}\) to operators
\[\phi_{i}(x)=\int d^{3}p\left(a_{\boldsymbol{p};i}u_{\boldsymbol{p};i}+b_{- \boldsymbol{p};i}^{\dagger}u_{-\boldsymbol{p};i}^{*}\right)\, \tag{5}\]
and imposing the canonical commutation relations between \(\phi_{i}(x)\) and its conjugate momentum. The choice of the opposite label \((-\boldsymbol{p})\) for the antiparticles is just a matter of convention, but it gives to both the terms of the expansion in Eq. (5) the same spatial dependency.
This translates to the following commutation relations for the creation and annihilation operators:
\[\left[a_{\boldsymbol{p};i},a_{\boldsymbol{q};j}^{\dagger}\right]=\left[b_{ \boldsymbol{p};i},b_{\boldsymbol{q};j}^{\dagger}\right]=\delta_{ij}\delta^{3}( \boldsymbol{p}-\boldsymbol{q}), \tag{6}\]
with all the other commutators vanishing. The vacuum state \(\left|0\right\rangle\) is then defined as \(a_{\mathbf{p};i}\left|0\right\rangle=b_{\mathbf{p};i}\left|0\right\rangle=0\,\) for all \(\mathbf{p}\) and \(i\). By introducing the field expansion in Eq.(4), one obtains the quantized energy-momentum tensor of the boson fields:
\[T_{\mu\nu} = \sum_{i=1,2}\int d^{3}p\int d^{3}q\{a^{\dagger}_{\mathbf{p};i}a_{\mathbf{ q};i}L_{\mu\nu}(u_{\mathbf{p};i},u_{\mathbf{q};i})+a^{\dagger}_{\mathbf{p};i}b^{\dagger}_{- \mathbf{q};i}L_{\mu\nu}(u_{\mathbf{p};i},u^{*}_{-\mathbf{q};i}) \tag{7}\] \[+b_{-\mathbf{p};i}a_{\mathbf{q};i}L_{\mu\nu}(u^{*}_{-\mathbf{p};i},u_{\mathbf{q}; i})+b_{-\mathbf{p};i}b^{\dagger}_{-\mathbf{q};i}L_{\mu\nu}(u^{*}_{-\mathbf{p};i},u^{*}_{- \mathbf{q};i})\}\.\]
In Eq.(7) there is a neat separation between the operator part and the functional part, represented by the tensor functional \(L_{\mu\nu}(A,B)\), defined on the solutions \(A,B\) of the Klein Gordon equations. By definition one has
\[L_{\mu\nu}(A_{i},B_{i})=\frac{1}{2}\left\{\partial_{\mu}A^{*}_{i}\partial_{ \nu}B_{i}+\partial_{\nu}A^{*}_{i}\partial_{\mu}B_{i}-g_{\mu\nu}g^{\rho\sigma} \partial_{\rho}A^{*}_{i}\partial_{\sigma}B_{i}+m^{2}_{i}g_{\mu\nu}A^{*}_{i}B _{i}\right\} \tag{8}\]
for any two solutions \(A_{i},B_{i}\) of the Klein-Gordon equation with mass \(m_{i}\). Some elementary properties of \(L_{\mu\nu}\) can be read off the definition. Clearly \(L_{\mu\nu}\) is symmetric (\(=L_{\nu\mu}\)) for any argument and
\[L_{\mu\nu}(A_{i},B_{i})=L^{*}_{\mu\nu}(B_{i},A_{i}). \tag{9}\]
In particular Eq. (9) implies that \(L_{\mu\nu}(A,A)\) is real for any \(A\). Additional properties of \(L_{\mu\nu}\) are shown below.
### The flavor fields
We now move on to the quantization of the flavor fields, essentially following the treatment in [54]. As usual, the flavor fields are defined by means of the rotation
\[\phi_{A}(x) = \cos(\theta)\phi_{1}(x)+\sin(\theta)\phi_{2}(x)\] \[\phi_{B}(x) = \cos(\theta)\phi_{2}(x)-\sin(\theta)\phi_{1}(x)\, \tag{10}\]
where \(\theta\) is the mixing angle. The rotation can be recast in terms of the mixing generator
\[\mathcal{G}_{\theta}(\tau)=\exp\left\{\theta\left[(\phi_{1},\phi_{2})_{\tau}-(\phi_{2},\phi_{1})_{\tau}\right]\right\}\,, \tag{11}\]
where \((\phi_{2},\phi_{1})_{\tau}\) is the scalar product at the \(\tau\) hypersurface as defined in Eq.(3). Then, the flavor fields can be expressed as
\[\phi_{A}(x) = \mathcal{G}_{\theta}^{-1}\,\phi_{1}(x)\,\mathcal{G}_{\theta}\] \[\phi_{B}(x) = \mathcal{G}_{\theta}^{-1}\,\phi_{2}(x)\,\mathcal{G}_{\theta}. \tag{12}\]
In a similar way, the flavor annihilators are defined as \(a_{\mathbf{p};\sigma}=\mathcal{G}_{\theta}^{-1}\,a_{\mathbf{p};j}\,\mathcal{G}_{\theta}\), with \((\sigma,j)\in\{(A,1),(B,2)\}\), and similarly for the antiparticles. The flavor vacuum, annihilated by the flavor annihilators, is given by
\[\left|0_{F}(\tau)\right\rangle=\mathcal{G}_{\theta}^{-1}(\tau)\left|0\right\rangle\, \tag{13}\]
where \(\left|0\right\rangle\) is the vacuum defined by the mass annihilators. Notice that, \(\left|0_{F}(\tau)\right\rangle\) carries an explicit \(\tau\) dependence due to \(\mathcal{G}_{\theta}^{-1}(\tau)\).
## IV VEV of the energy-momentum tensor on the flavor vacuum
Here, we compute the contribution of the flavor vacuum to the energy and to the pressure. To do that, we calculate the expectation value of the energy-momentum tensor at time \(\tau\) on the flavor vacuum at a given fixed time, \(\left|0_{F}(\tau_{0})\right\rangle\), with \(\tau_{0}\) not necessarily coincident with the time argument \(\tau\) of the energy-momentum tensor. We consider a specific expansion of the mass fields, and thus a specific choice of the mass vacuum, as suggested by the form of the metric. Other mass representations are of course possible, and the effect of a change of mass representation on the flavor fields can be obtained by the appropriate transformations described in [54]. The quantity we wish to compute is
\[\mathbb{T}_{\mu\nu}=\left\langle 0_{F}(\tau_{0})\right|T_{\mu\nu}\left|0_{F}( \tau_{0})\right\rangle\, \tag{14}\]
where \(T_{\mu\nu}\) is given by Eq.(7). We start by considering the typical term in Eq.(14), which has the form
\[\left\langle 0_{F}(\tau_{0})\right|a^{\dagger}_{\mathbf{p};1}a_{\mathbf{q};1}\left|0_{F} (\tau_{0})\right\rangle. \tag{15}\]
By using the definition of the flavor vacuum such expectation value can be written as
\[\left\langle 0\right|\mathcal{G}_{\theta}(\tau_{0})a^{\dagger}_{ \mathbf{p};1}a_{\mathbf{q};1}\mathcal{G}^{-1}_{\theta}(\tau_{0})\left|0\right\rangle =\left\langle 0\right|\mathcal{G}_{\theta}(\tau_{0})a^{\dagger}_{ \mathbf{p};1}\mathcal{G}^{-1}_{\theta}(\tau_{0})\mathcal{G}_{\theta}(\tau_{0})a_{ \mathbf{q};1}\mathcal{G}^{-1}_{\theta}(\tau_{0})\left|0\right\rangle\] \[=\left\langle 0\right|\mathcal{G}^{-1}_{-\theta}(\tau_{0})a^{ \dagger}_{\mathbf{p};1}\mathcal{G}_{-\theta}(\tau_{0})\mathcal{G}^{-1}_{-\theta}( \tau_{0})a_{\mathbf{q};1}\mathcal{G}_{-\theta}(\tau_{0})\left|0\right\rangle \tag{16}\]
where we have used the relation (Eq. 11) \(\mathcal{G}^{-1}_{\theta}=\mathcal{G}_{-\theta}\). The transformed operator \(\mathcal{G}^{-1}_{-\theta}(\tau_{0})a_{\mathbf{p};1}\mathcal{G}_{-\theta}(\tau_{0})\) and the others relative to the mass 2 are just the mass annihilators transformed according to the mixing transformation with angle \(-\theta\). Such annihilators are given by:
\[\mathcal{G}^{-1}_{-\theta}(\tau_{0})a_{\mathbf{p};1}\mathcal{G}_{- \theta}(\tau_{0}) =\cos(\theta)a_{\mathbf{p};1}-\sin(\theta)\left(\Lambda^{*}_{\mathbf{p}} (\tau_{0})a_{\mathbf{p};2}+\Xi_{\mathbf{p}}(\tau_{0})b^{\dagger}_{-\mathbf{p};2}\right)\] \[\mathcal{G}^{-1}_{-\theta}(\tau_{0})a_{\mathbf{p};2}\mathcal{G}_{- \theta}(\tau_{0}) =\cos(\theta)a_{\mathbf{p};2}+\sin(\theta)\left(\Lambda_{\mathbf{p}}( \tau_{0})a_{\mathbf{p};1}-\Xi_{\mathbf{p}}(\tau_{0})b^{\dagger}_{-\mathbf{p};1}\right)\] \[\mathcal{G}^{-1}_{-\theta}(\tau_{0})b_{-\mathbf{p};1}\mathcal{G}_{- \theta}(\tau_{0}) =\cos(\theta)b_{-\mathbf{p};1}-\sin(\theta)\left(\Lambda^{*}_{\mathbf{p}} (\tau_{0})b_{-\mathbf{p};2}+\Xi_{\mathbf{p}}(\tau_{0})a^{\dagger}_{\mathbf{p};2}\right) \tag{17}\] \[\mathcal{G}^{-1}_{-\theta}(\tau_{0})b_{-\mathbf{p};2}\mathcal{G}_{- \theta}(\tau_{0}) =\cos(\theta)b_{-\mathbf{p};2}+\sin(\theta)\left(\Lambda_{\mathbf{p}}( \tau_{0})b_{-\mathbf{p};1}-\Xi_{\mathbf{p}}(\tau_{0})a^{\dagger}_{\mathbf{p};1}\right)\]
The Bogoliubov coefficients are defined by means of the inner products
\[\delta^{3}(\mathbf{p}-\mathbf{q})\Lambda_{\mathbf{p}}(\tau) =\left(u_{\mathbf{p};2},u_{\mathbf{q};1}\right)_{\tau} \tag{18}\] \[\delta^{3}(\mathbf{p}+\mathbf{q})\Xi_{\mathbf{p}}(\tau) =\left(u_{\mathbf{p};1},u^{*}_{\mathbf{q};2}\right)_{\tau}\,\]
where the delta function is absorbed by a corresponding momentum integration in the Eqs.(17). For distinct labels \(\mathbf{p},\mathbf{q}\), the inner products vanish. The coefficients in Eq.(18) satisfy the condition \(|\Lambda_{\mathbf{p}}|^{2}-|\Xi_{\mathbf{p}}|^{2}=1\) for all \(\mathbf{p},\tau\). By using Eqs.(17), we can compute the expectation values present in Eq.(14):
\[\begin{split}\left\langle 0_{F}(\tau_{0})\right|a^{\dagger}_{\mathbf{p};j}a_{\mathbf{q};j}\left|0_{F}(\tau_{0})\right\rangle&=\sin^{2}\theta\,|\Xi_{\mathbf{p}}(\tau_{0})|^{2}\,\delta^{3}(\mathbf{p}-\mathbf{q})\;,\quad\forall j\\ \left\langle 0_{F}(\tau_{0})\right|b^{\dagger}_{-\mathbf{p};j}b_{-\mathbf{q};j}\left|0_{F}(\tau_{0})\right\rangle&=\sin^{2}\theta\,|\Xi_{\mathbf{p}}(\tau_{0})|^{2}\,\delta^{3}(\mathbf{p}-\mathbf{q})\;,\quad\forall j\\ \left\langle 0_{F}(\tau_{0})\right|a^{\dagger}_{\mathbf{p};1}b^{\dagger}_{-\mathbf{q};1}\left|0_{F}(\tau_{0})\right\rangle&=\sin^{2}\theta\,\Xi^{*}_{\mathbf{p}}(\tau_{0})\Lambda_{\mathbf{p}}(\tau_{0})\,\delta^{3}(\mathbf{p}-\mathbf{q})\\ \left\langle 0_{F}(\tau_{0})\right|a^{\dagger}_{\mathbf{p};2}b^{\dagger}_{-\mathbf{q};2}\left|0_{F}(\tau_{0})\right\rangle&=-\sin^{2}\theta\,\Xi^{*}_{\mathbf{p}}(\tau_{0})\Lambda^{*}_{\mathbf{p}}(\tau_{0})\,\delta^{3}(\mathbf{p}-\mathbf{q})\\ \left\langle 0_{F}(\tau_{0})\right|b_{-\mathbf{p};1}a_{\mathbf{q};1}\left|0_{F}(\tau_{0})\right\rangle&=\sin^{2}\theta\,\Xi_{\mathbf{p}}(\tau_{0})\Lambda^{*}_{\mathbf{p}}(\tau_{0})\,\delta^{3}(\mathbf{p}-\mathbf{q})\\ \left\langle 0_{F}(\tau_{0})\right|b_{-\mathbf{p};2}a_{\mathbf{q};2}\left|0_{F}(\tau_{0})\right\rangle&=-\sin^{2}\theta\,\Xi_{\mathbf{p}}(\tau_{0})\Lambda_{\mathbf{p}}(\tau_{0})\,\delta^{3}(\mathbf{p}-\mathbf{q})\;.\end{split} \tag{19}\]
The expectation value of the energy momentum tensor is then given by two contributions as follows:
\[\mathbb{T}_{\mu\nu} =\mathbb{T}_{\mu\nu}^{(MIX)}+\mathbb{T}_{\mu\nu}^{(N)} \tag{20}\] \[\mathbb{T}_{\mu\nu}^{(MIX)} =\sin^{2}\theta\,\int d^{3}p\bigg{\{}|\Xi_{\mathbf{p}}(\tau_{0})|^{2} \sum_{j=1,2}\left(L_{\mu\nu}(u_{\mathbf{p};j},u_{\mathbf{p};j})+L_{\mu\nu}(u^{*}_{-\mathbf{ p};j},u^{*}_{-\mathbf{p};j})\right)\] \[+ \Xi_{\mathbf{p}}^{*}(\tau_{0})\Lambda_{\mathbf{p}}(\tau_{0})L_{\mu\nu}(u_ {\mathbf{p};1},u^{*}_{-\mathbf{p};1})+\Xi_{\mathbf{p}}(\tau_{0})\Lambda_{\mathbf{p}}^{*}(\tau_{0 })L_{\mu\nu}(u^{*}_{-\mathbf{p};1},u_{\mathbf{p};1})\] \[- \Xi_{\mathbf{p}}^{*}(\tau_{0})\Lambda_{\mathbf{p}}^{*}(\tau_{0})L_{\mu\nu }(u_{\mathbf{p};2},u^{*}_{-\mathbf{p};2})-\Xi_{\mathbf{p}}(\tau_{0})\Lambda_{\mathbf{p}}(\tau_{0 })L_{\mu\nu}(u^{*}_{-\mathbf{p};2},u_{\mathbf{p};2})\bigg{\}}\] (21) \[\mathbb{T}_{\mu\nu}^{(N)} =\sum_{j=1,2}\int d^{3}pL_{\mu\nu}(u^{*}_{-\mathbf{p};j},u^{*}_{-\bm {p};j}). \tag{22}\]
Here, the first term is exclusively due to the mixing; indeed, \(\mathbb{T}_{\mu\nu}^{(MIX)}\) depends on \(\sin^{2}\theta\) and vanishes for \(\theta=0\). The last term derives from the commutation relation \(\left[b_{-\mathbf{p};j},b^{\dagger}_{-\mathbf{q};j}\right]=\delta^{3}(\mathbf{p}-\mathbf{q})\) applied to the \(bb^{\dagger}\) term. \(\mathbb{T}_{\mu\nu}^{(N)}\) is the expectation value of the energy-momentum tensor on the _mass_ vacuum:
\[\mathbb{T}_{\mu\nu}^{(N)}=\left\langle 0\right|T_{\mu\nu}\left|0\right\rangle. \tag{23}\]
The \((0,0)\) component of this term corresponds to the diverging energy that is removed by normal ordering in flat space. Indeed, in flat space one has \(u^{*}_{-{\mathbf{p}};j}=\frac{1}{\sqrt{2(2\pi)^{3}\omega_{p;j}}}e^{i\omega_{p;j}t+i{\mathbf{p}}\cdot{\mathbf{x}}}\) with \(\omega_{p;j}=\sqrt{p^{2}+m_{j}^{2}}\), and the auxiliary tensor becomes
\[L_{\mu\nu}(u^{*}_{-{\mathbf{p}};j},u^{*}_{-{\mathbf{p}};j})=\partial_{\mu}u_{-{\mathbf{p}}; j}\partial_{\nu}u^{*}_{-{\mathbf{p}};j}+\partial_{\nu}u_{-{\mathbf{p}};j}\partial_{\mu}u^{* }_{-{\mathbf{p}};j}-\eta_{\mu\nu}\partial_{\rho}u_{-{\mathbf{p}};j}\partial^{\rho}u^{*} _{-{\mathbf{p}};j}+m^{2}\eta_{\mu\nu}|u_{-{\mathbf{p}};j}|^{2} \tag{24}\]
where the metric tensor and the derivatives are those of the ordinary flat space. For the \((0,0)\) component, we have
\[L_{00}(u^{*}_{-{\mathbf{p}};j},u^{*}_{-{\mathbf{p}};j})=\frac{\omega_{p;j}}{2}\, \tag{25}\]
then
\[\mathbb{T}^{(N)}_{00}=\sum_{j=1,2}\int d^{3}p\ \frac{\omega_{p;j}}{2}\ \ \ \ \ \ \ (>0). \tag{26}\]
Clearly this term has to be removed if the curved space energy-momentum tensor is to approach the normal ordered energy-momentum tensor of flat space in the appropriate limit. This property is also featured among Wald's axioms for the energy-momentum tensor in curved space [55]. We then define the renormalized energy-momentum tensor as
\[T^{r}_{\mu\nu}=T_{\mu\nu}-\mathbb{T}^{(N)}_{\mu\nu}\,. \tag{27}\]
Its expectation value is then
\[\left\langle 0_{F}(\tau_{0})\right|T^{r}_{\mu\nu}\left|0_{F}(\tau_{0})\right\rangle =\mathbb{T}^{(MIX)}_{\mu\nu}. \tag{28}\]
In the following we show the main properties of the Bogoliubov coefficients and we demonstrate that \(\mathbb{T}_{\mu\nu}\) corresponds to the energy momentum tensor of a perfect fluid.
### General properties of the Bogoliubov Coefficients
Given the form of the Klein-Gordon equations, we employ the following ansatz for the solutions:
\[u_{{\mathbf{p}};j}(\tau,{\mathbf{x}})=(2\pi)^{-\frac{3}{2}}e^{i{\mathbf{p}}\cdot{\mathbf{x}}} C^{-1}(\tau)\chi_{p,j}(\tau). \tag{29}\]
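Before proceeding, it is useful to record the equation obeyed by the reduced modes \(\chi_{p,j}(\tau)\): substituting the ansatz (29) into the Klein-Gordon equation for the minimally coupled field (\(\xi=0\)) yields, by a standard computation,
\[\ddot{\chi}_{p,j}(\tau)+\left(p^{2}+m_{j}^{2}C^{2}(\tau)-\frac{\ddot{C}(\tau)}{C(\tau)}\right)\chi_{p,j}(\tau)=0\;,\]
where the dots denote derivatives with respect to the conformal time \(\tau\).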
The functions \(\chi_{p,j}(\tau)\) depend only on the modulus of the momentum \(p=|{\mathbf{p}}|\) and on the conformal time \(\tau\). Inserting the ansatz (29) in the inner products defining the Bogoliubov coefficients of Eq. (18), and recalling the form of the metric in conformal time, we obtain
\[\Lambda_{p}(\tau) = i\left(\chi^{*}_{p;2}(\tau)\partial_{\tau}\chi_{p,1}(\tau)- \left(\partial_{\tau}\chi^{*}_{p,2}(\tau)\right)\chi_{p;1}(\tau)\right)\] \[\Xi_{p}(\tau) = i\left(\chi^{*}_{p;1}(\tau)\partial_{\tau}\chi^{*}_{p,2}(\tau)- \left(\partial_{\tau}\chi^{*}_{p,1}(\tau)\right)\chi_{p;2}(\tau)\right). \tag{30}\]
It can be easily checked that the fundamental property
\[|\Lambda_{p}(\tau)|^{2}-|\Xi_{p}(\tau)|^{2}=1\, \tag{31}\]
holds, provided that the normalization \((u_{{\mathbf{p}};i},u_{{\mathbf{q}};j})=\delta_{ij}\delta^{(3)}({\mathbf{p}}-{\mathbf{q}})=-( u^{*}_{{\mathbf{p}};i},u^{*}_{{\mathbf{q}};j})\) is used. For completeness, on the reduced modes the normalization condition reads
\[i\left(\chi^{*}_{p;j}(\tau)\partial_{\tau}\chi_{p,j}(\tau)-\left(\partial_{ \tau}\chi^{*}_{p,j}(\tau)\right)\chi_{p;j}(\tau)\right)=1\ ;\ \ \ \ \ \ \ \ \ \forall j=1,2. \tag{32}\]
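As a consistency check of Eqs. (30)-(32), the following sympy sketch verifies the relation (31) in the flat spacetime limit \(C(\tau)=1\), where the normalized reduced modes are \(\chi_{p;j}(\tau)=e^{-i\omega_{p;j}\tau}/\sqrt{2\omega_{p;j}}\) with \(\omega_{p;j}=\sqrt{p^{2}+m_{j}^{2}}\):

```
import sympy as sp

tau, p = sp.symbols('tau p', real=True)
m1, m2 = sp.symbols('m1 m2', positive=True)
w1, w2 = sp.sqrt(p**2 + m1**2), sp.sqrt(p**2 + m2**2)

# Flat-space reduced modes, normalized as in (32).
chi1 = sp.exp(-sp.I * w1 * tau) / sp.sqrt(2 * w1)
chi2 = sp.exp(-sp.I * w2 * tau) / sp.sqrt(2 * w2)

# Bogoliubov coefficients, Eq. (30).
Lam = sp.I * (sp.conjugate(chi2) * sp.diff(chi1, tau)
              - sp.diff(sp.conjugate(chi2), tau) * chi1)
Xi = sp.I * (sp.conjugate(chi1) * sp.diff(sp.conjugate(chi2), tau)
             - sp.diff(sp.conjugate(chi1), tau) * sp.conjugate(chi2))

# |Lambda|^2 - |Xi|^2 should simplify to 1, as in Eq. (31).
print(sp.simplify(Lam * sp.conjugate(Lam) - Xi * sp.conjugate(Xi)))  # 1
```

Explicitly, one finds \(|\Lambda_{p}|^{2}=\frac{(\omega_{p;1}+\omega_{p;2})^{2}}{4\omega_{p;1}\omega_{p;2}}\) and \(|\Xi_{p}|^{2}=\frac{(\omega_{p;1}-\omega_{p;2})^{2}}{4\omega_{p;1}\omega_{p;2}}\), recovering the flat-space Bogoliubov coefficients of boson mixing.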
### Diagonality of the energy-momentum tensor
We now show that \(\mathbb{T}_{\mu\nu}\) can be interpreted as the energy-momentum tensor of a perfect fluid. This result relies on the properties of the auxiliary tensor \(L_{\mu\nu}\) in a spatially flat and isotropic metric. We first show that
\[L_{\tau i}(u_{\mathbf{p};j},u_{\mathbf{p};j})=p_{i}f_{1}(p,\tau);\ \ \ \ L_{\tau i}(u_{\mathbf{p};j},u^{*}_{-\mathbf{p};j})=p_{i}f_{2}(p, \tau);\ \ \ \forall j=1,2;i=1,2,3 \tag{33}\]
where \(f_{1,2}\) are functions of the modulus \(p\) and of the conformal time \(\tau\) alone. Let us insert the ansatz (29) in the definition (8):
\[L_{\tau i}(u_{\mathbf{p};j},u_{\mathbf{p};j}) =\frac{1}{2}\left\{\partial_{\tau}u^{*}_{\mathbf{p};j}\partial_{i}u_{ \mathbf{p};j}+\partial_{i}u^{*}_{\mathbf{p};j}\partial_{\tau}u_{\mathbf{p};j}\right\}\] \[=\frac{1}{2(2\pi)^{3}}\left\{ip_{i}\left(\dot{C}C^{-3}\chi^{*}_{p,j}(\tau)+C^{-2}\dot{\chi}^{*}_{p;j}(\tau)\right)\chi_{p;j}(\tau)-ip_{i}\chi^{* }_{p;j}(\tau)\left(\dot{C}C^{-3}\chi_{p,j}(\tau)+C^{-2}\dot{\chi}_{p;j}(\tau) \right)\right\}\] \[=p_{i}\left\{\frac{i}{2(2\pi)^{3}}\left[\left(\dot{C}C^{-3}\chi^{ *}_{p,j}(\tau)+C^{-2}\dot{\chi}^{*}_{p;j}(\tau)\right)\chi_{p;j}(\tau)-\chi^{* }_{p;j}(\tau)\left(\dot{C}C^{-3}\chi_{p,j}(\tau)+C^{-2}\dot{\chi}_{p;j}(\tau) \right)\right]\right\}\.\]
The parenthetical term clearly depends only on \(p\) and \(\tau\). Here the dot denotes a derivative with respect to \(\tau\). Similarly
\[L_{\tau i}(u_{\mathbf{p};j},u^{*}_{-\mathbf{p};j})=p_{i}\left\{\frac{-i}{(2\pi)^{3}}\left[\left(\dot{C}C^{-3}\chi^{*}_{p,j}(\tau)+C^{-2}\dot{\chi}^{*}_{p;j}(\tau)\right)\chi^{*}_{p;j}(\tau)\right]\right\}\;.\]
Due to the basic property (9), the auxiliary tensor has the same form also for the other arguments. Because the Bogoliubov coefficients depend only on \(p\) and \(\tau\), the overall form of \(\mathbb{T}^{(MIX)}_{\tau i}\) is
\[\mathbb{T}^{(MIX)}_{\tau i}=\int d^{3}p\ p_{i}F(p,\tau) \tag{34}\]
for some function \(F\) of \(p\) and \(\tau\). This quantity vanishes by symmetry: the integrand is odd in \(p_{i}\) and the integral extends over the whole of \(\mathbb{R}^{3}\). A similar argument applies to the spatial components, for which
\[L_{ik}(u_{\mathbf{p};j},u_{\mathbf{p};j})=p_{i}p_{k}g_{1}(p,\tau);\ \ \ \ L_{ik}(u_{\mathbf{p};j},u^{*}_{-\mathbf{p};j})=p_{i}p_{k}g_{2}(p,\tau);\ \ \ \ \forall j=1,2;i,k=1,2,3\ i\neq k. \tag{35}\]
Indeed
\[L_{ik}(u_{\mathbf{p};j},u_{\mathbf{p};j}) =\frac{1}{2}\left\{\partial_{i}u^{*}_{\mathbf{p};j}\partial_{k}u_{\bm {p};j}+\partial_{k}u^{*}_{\mathbf{p};j}\partial_{i}u_{\mathbf{p};j}\right\}\] \[=\frac{1}{2(2\pi)^{3}}\left\{2p_{i}p_{k}C^{-2}|\chi_{p;j}(\tau)|^ {2}\right\}=p_{i}p_{k}\left\{\frac{|\chi_{p;j}(\tau)|^{2}}{(2\pi)^{3}C^{2}}\right\}\]
and
\[L_{ik}(u_{\mathbf{p};j},u^{*}_{-\mathbf{p};j})=p_{i}p_{k}\left\{\frac{(\chi^{*}_{p;j}(\tau))^{2}}{(2\pi)^{3}C^{2}}\right\}\]
so that overall
\[\mathbb{T}^{(MIX)}_{ik}=\int d^{3}p\ p_{i}p_{k}G(p,\tau) \tag{36}\]
for some function \(G(p,\tau)\) of \(p\) and \(\tau\) alone. For \(i\neq k\) the integral vanishes, since the integrand is odd in each of \(p_{i}\) and \(p_{k}\). This proves that \(\mathbb{T}_{\mu\nu}\) is diagonal. It is likewise easy to show that for \(i=k\) one has
\[\mathbb{T}^{(MIX)}_{ii}=\int d^{3}p\ H(p,\tau) \tag{37}\]
for some function \(H\) of \(p\) and \(\tau\). As an immediate consequence, and as expected from the isotropy of the metric, the tensor is also isotropic, i.e., \(\mathbb{T}^{(MIX)}_{ii}\) is the same for all \(i=1,2,3\). Below we compute explicitly the two non-zero components of \(\mathbb{T}_{\mu\nu}\). We shall see that the only residual dependence on the coordinates is on the conformal time \(\tau\), with no spatial dependence arising; the vacuum expectation value therefore respects the spatial translation symmetry of the metric.
### Energy density and pressure
Another elementary property of the auxiliary tensor, which can be read straight off the definition (8), is
\[L_{\mu\nu}(A_{i}^{*},B_{i}^{*})=L_{\mu\nu}^{*}(A_{i},B_{i})\implies L_{\mu\nu}(A_{i }^{*},A_{i}^{*})=L_{\mu\nu}^{*}(A_{i},A_{i})=L_{\mu\nu}(A_{i},A_{i}). \tag{38}\]
In the last equality we have taken into account the reality of \(L_{\mu\nu}(A_{i},A_{i})\) (see Eq. (9)). Together with Eq. (9) this allows us to write
\[\mathbb{T}_{\mu\nu}^{(MIX)} = \sin^{2}\theta\int d^{3}p\bigg\{2|\Xi_{p}(\tau_{0})|^{2}\sum_{j=1,2}L_{\mu\nu}(u_{\mathbf{p};j},u_{\mathbf{p};j})+\left[\Xi_{p}^{*}(\tau_{0})\Lambda_{p}(\tau_{0})L_{\mu\nu}(u_{\mathbf{p};1},u_{-\mathbf{p};1}^{*})+c.c.\right] \tag{39}\] \[- \left[\Xi_{p}(\tau_{0})\Lambda_{p}(\tau_{0})L_{\mu\nu}^{*}(u_{\mathbf{p};2},u_{-\mathbf{p};2}^{*})+c.c.\right]\bigg\}\.\]
As shown above only the diagonal components are non-zero, with the three spatial components identified. We then need only the following components of the auxiliary tensor:
\[L_{\tau\tau}(u_{\mathbf{p};j},u_{\mathbf{p};j})=\frac{1}{2(2\pi)^{3}}\left\{|\chi_{p;j} |^{2}\left(p^{2}C^{-2}+m_{j}^{2}+C^{-4}\dot{C}^{2}\right)+C^{-2}|\dot{\chi}_{ p;j}|^{2}-C^{-3}\dot{C}\left(\chi_{p;j}^{*}\dot{\chi}_{p;j}+\dot{\chi}_{p;j}^{*} \chi_{p;j}\right)\right\} \tag{40}\]
\[L_{\tau\tau}(u_{\mathbf{p};j},u_{-\mathbf{p};j}^{*})=\frac{1}{2(2\pi)^{3}}\left\{( \chi_{p;j}^{*})^{2}\left(p^{2}C^{-2}+m_{j}^{2}+C^{-4}\dot{C}^{2}\right)+C^{-2 }(\dot{\chi}_{p;j}^{*})^{2}-2C^{-3}\dot{C}\chi_{p;j}^{*}\dot{\chi}_{p;j}^{*} \right\} \tag{41}\]
\[L_{kk}(u_{\mathbf{p};j},u_{\mathbf{p};j})=\frac{1}{2(2\pi)^{3}}\left\{|\chi_{p;j}|^{2} \left(2p_{k}^{2}C^{-2}+C^{-4}\dot{C}^{2}-p^{2}C^{-2}-m_{j}^{2}\right)+C^{-2}| \dot{\chi}_{p;j}|^{2}-C^{-3}\dot{C}\left(\chi_{p;j}^{*}\dot{\chi}_{p;j}+\dot{ \chi}_{p;j}^{*}\chi_{p;j}\right)\right\} \tag{42}\]
\[L_{kk}(u_{\mathbf{p};j},u_{-\mathbf{p};j}^{*})=\frac{1}{2(2\pi)^{3}}\left\{(\chi_{p;j} ^{*})^{2}\left(2p_{k}^{2}C^{-2}+C^{-4}\dot{C}^{2}-p^{2}C^{-2}-m_{j}^{2}\right)+ C^{-2}(\dot{\chi}_{p;j}^{*})^{2}-2C^{-3}\dot{C}\chi_{p;j}^{*}\dot{\chi}_{p;j}^{*} \right\}. \tag{43}\]
Here \(k=1,2,3\) denotes any of the spatial indices. In all of the above equations it is understood that the mode functions \(\chi_{p;j}\equiv\chi_{p;j}(\tau)\) and the scale factor \(C\equiv C(\tau)\) depend on the conformal time \(\tau\) alone. Given that the Bogoliubov coefficients in Eq. (39) depend only on the reference time \(\tau_{0}\), it is clear, as claimed above, that the diagonal components of \(\mathbb{T}_{\mu\nu}^{(MIX)}\) depend only on the conformal time \(\tau\) and the reference time \(\tau_{0}\). Due to the form of \(\mathbb{T}_{\mu\nu}^{(MIX)}\) we can identify the energy density associated with the flavor vacuum \(\rho^{(MIX)}(\tau_{0},\tau)\) with the \(\tau\tau\) component of \(\mathbb{T}_{\mu}^{(MIX)\nu}\) and the corresponding pressure \(p^{(MIX)}(\tau_{0},\tau)\) with \(-\mathbb{T}_{k}^{(MIX)k}\):
\[\rho^{(MIX)}(\tau_{0},\tau)=\mathbb{T}_{\tau}^{(MIX)\tau}=g^{\tau\tau}\mathbb{ T}_{\tau\tau}^{(MIX)}=C^{-2}(\tau)\mathbb{T}_{\tau\tau}^{(MIX)}(\tau_{0},\tau) \tag{44}\]
\[p^{(MIX)}(\tau_{0},\tau)=-\mathbb{T}_{k}^{(MIX)k}=-g^{kk}\mathbb{T}_{kk}^{(MIX )}=C^{-2}(\tau)\mathbb{T}_{kk}^{(MIX)}(\tau_{0},\tau). \tag{45}\]
Here no sum over \(k\) is intended.
### Minkowskian Limit
Let us compute the energy density and pressure in the flat space limit \(C(t)\to 1\), \(\tau\to t\). The reduced modes of Eq. (29) take the usual form
\[\chi_{p;j}(t)=\frac{1}{\sqrt{2\omega_{p;j}}}e^{-i\omega_{p;j}t},\ \ \ \ \omega_{p;j}=\sqrt{p^{2}+m_{j}^{2}}. \tag{46}\]
Insertion in Eqs. (30) yields the known flat spacetime expressions
\[\Lambda_{p}(t_{0})=\frac{(\omega_{p;1}+\omega_{p;2})}{\sqrt{4\omega_{p;1}\omega_{p;2}}}e^{i(\omega_{p;2}-\omega_{p;1})t_{0}}\ ;\ \ \ \ \Xi_{p}(t_{0})=\frac{(\omega_{p;1}-\omega_{p;2})}{\sqrt{4\omega_{p;1}\omega_{p;2}}}e^{i(\omega_{p;2}+\omega_{p;1})t_{0}}. \tag{47}\]
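As a quick consistency check, the coefficients (47) satisfy the fundamental property (31) identically; a one-line verification, assuming SymPy:

```python
import sympy as sp

w1, w2 = sp.symbols('omega1 omega2', positive=True)
Lam2 = (w1 + w2)**2 / (4 * w1 * w2)  # |Lambda_p(t0)|^2 from Eq. (47)
Xi2 = (w1 - w2)**2 / (4 * w1 * w2)   # |Xi_p(t0)|^2 from Eq. (47)
print(sp.simplify(Lam2 - Xi2))       # -> 1, i.e., Eq. (31)
```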
Equations (40) to (43) become
\[L_{\tau\tau}(u_{\mathbf{p};j},u_{\mathbf{p};j}) =\frac{1}{2(2\pi)^{3}}\left\{\left|\chi_{p;j}\right|^{2}\left(p^{2} +m_{j}^{2}\right)+\left|\dot{\chi}_{p;j}\right|^{2}\right\}=\frac{\omega_{p;j}} {2(2\pi)^{3}}\] \[L_{\tau\tau}(u_{\mathbf{p};j},u_{\mathbf{-p};j}^{*}) =\frac{1}{2(2\pi)^{3}}\left\{\left(\chi_{p;j}^{*}\right)^{2} \left(p^{2}+m_{j}^{2}\right)+\left(\dot{\chi}_{p;j}^{*}\right)^{2}\right\}=0\] \[L_{kk}(u_{\mathbf{p};j},u_{\mathbf{p};j}) =\frac{1}{2(2\pi)^{3}}\left\{\left|\chi_{p;j}\right|^{2}\left(2p_ {k}^{2}-p^{2}-m_{j}^{2}\right)+\left|\dot{\chi}_{p;j}\right|^{2}\right\}=\frac {p_{k}^{2}}{2(2\pi)^{3}\omega_{p;j}}\] \[L_{kk}(u_{\mathbf{p};j},u_{\mathbf{-p};j}^{*}) =\frac{1}{2(2\pi)^{3}}\left\{\left(\chi_{p;j}^{*}\right)^{2} \left(2p_{k}^{2}-p^{2}-m_{j}^{2}\right)+\left(\dot{\chi}_{p;j}^{*}\right)^{2} \right\}=\frac{\left(p_{k}^{2}-\omega_{p;j}^{2}\right)}{2(2\pi)^{3}\omega_{p;j }}e^{2i\omega_{p;j}t}\.\]
Therefore
\[\mathbb{T}_{tt}^{(MIX)}=\frac{\sin^{2}\theta}{(2\pi)^{3}}\int d^{3}p\frac{ \left(\omega_{p;1}-\omega_{p;2}\right)^{2}\left(\omega_{p;1}+\omega_{p;2} \right)}{4\omega_{p;1}\omega_{p;2}}. \tag{48}\]
The \(kk\) component is slightly more involved:
\[\mathbb{T}_{kk}^{(MIX)} =\frac{\sin^{2}\theta}{(2\pi)^{3}}\int d^{3}p\Bigg{\{}\frac{ \left(\omega_{p;1}-\omega_{p;2}\right)^{2}}{4\omega_{p;1}\omega_{p;2}}\sum_{j =1,2}\frac{p_{k}^{2}}{\omega_{p;j}}+\left[\frac{\left(\omega_{p;1}^{2}-\omega _{p;2}^{2}\right)\left(p_{k}^{2}-\omega_{p;1}^{2}\right)}{8\omega_{p;1}^{2} \omega_{p;2}}e^{2i\omega_{p;1}(t-t_{0})}+c.c\right]\] \[-\left[\frac{\left(\omega_{p;1}^{2}-\omega_{p;2}^{2}\right)\left( p_{k}^{2}-\omega_{p;2}^{2}\right)}{8\omega_{p;1}\omega_{p;2}^{2}}e^{2i\omega_{p;2} (t_{0}-t)}+c.c\right]\Bigg{\}}. \tag{49}\]
In particular, when \(t_{0}=t\),
\[\mathbb{T}_{kk}^{(MIX)} =\frac{\sin^{2}\theta}{(2\pi)^{3}}\int d^{3}p\Bigg{\{}\frac{ \left(\omega_{p;1}-\omega_{p;2}\right)^{2}}{4\omega_{p;1}\omega_{p;2}}\left( \frac{p_{k}^{2}}{\omega_{p;1}}+\frac{p_{k}^{2}}{\omega_{p;2}}\right)+\frac{ \left(\omega_{p;1}^{2}-\omega_{p;2}^{2}\right)\left(p_{k}^{2}-\omega_{p;1}^{2 }\right)}{4\omega_{p;1}^{2}\omega_{p;2}}-\frac{\left(\omega_{p;1}^{2}-\omega _{p;2}^{2}\right)\left(p_{k}^{2}-\omega_{p;2}^{2}\right)}{4\omega_{p;1}\omega_ {p;2}^{2}}\Bigg{\}}\] \[=-\frac{\sin^{2}\theta}{(2\pi)^{3}}\int d^{3}p\frac{\left(\omega_{ p;1}-\omega_{p;2}\right)^{2}\left(\omega_{p;1}+\omega_{p;2}\right)}{4\omega_{p;1} \omega_{p;2}}\.\]
It follows that
\[\rho^{(MIX)}(t,t)=-p^{(MIX)}(t,t)\.\]
We have recovered the standard result [31; 32; 33; 34; 35] in flat spacetime, corresponding to the equation of state \(w^{(MIX)}(t,t)=\frac{p^{(MIX)}(t,t)}{\rho^{(MIX)}(t,t)}=-1\).
## V De Sitter expansion
We now move on to a non-trivial application of the formalism above. The general features of \(\mathbb{T}_{\mu\nu}^{(MIX)}\) analyzed in the previous section ensure that it may be regarded as the energy-momentum tensor of a perfect fluid. Since it also satisfies the Bianchi identity (see Appendix A), it represents a valid source term for the Einstein field equations, with a metric of the form given by Eq. (1). Nevertheless the simultaneous solution of the Einstein field equations with the source term \(\mathbb{T}_{\mu\nu}^{(MIX)}\) and of the mode equations
\[\ddot{\chi}_{p;j}+\left(p^{2}+m_{j}^{2}C^{2}-\ddot{C}C^{-1}\right)\chi_{p;j}=0 \tag{50}\]
is an extremely difficult analytical task. We postpone this kind of self-consistent computation to future work. Here we instead assume that the Einstein field equations are dominated by some other classical source term \(T_{\mu\nu}^{(C)}\), which forces a definite form of the scale factor \(C(t)\) and thus of the metric. As a first approximation we ignore the effect of \(\mathbb{T}_{\mu\nu}^{(MIX)}\) on the scale factor, and compute its value on the metric determined by \(T_{\mu\nu}^{(C)}\).
Here we consider a De Sitter expansion, assuming that the scale factor has the form \(C(t)=e^{H_{0}t}\) for some constant \(H_{0}\) with the dimensions of a mass.
### Mode functions
For the exponential scale factor \(C(t)=e^{H_{0}t}\) the conformal time is \(\tau=\frac{-e^{-H_{0}t}}{H_{0}}<0\), and \(C(\tau)=-(H_{0}\tau)^{-1}\). The mode equations (50) are
\[\ddot{\chi}_{p;j}+\left(p^{2}+\frac{m_{j}^{2}}{H_{0}^{2}\tau^{2}}-\frac{2}{\tau ^{2}}\right)\chi_{p;j}=0\, \tag{51}\]
or, introducing the positive variable \(s=-p\tau\)
\[\partial_{s}^{2}\chi_{p;j}+\left(1+\frac{\frac{m_{j}^{2}}{H_{0}^{2}}-2}{s^{2} }\right)\chi_{p;j}=0. \tag{52}\]
This is a Bessel-like equation and its general solution can be written as
\[\chi_{p;j}(s)=s^{\frac{1}{2}}\left(AH_{\nu_{j}}^{1}(s)+BH_{\nu_{j}}^{2}(s) \right)\ ;\qquad\nu_{j}=\sqrt{\frac{9}{4}-\frac{m_{j}^{2}}{H_{0}^{2}}} \tag{53}\]
where \(H_{\nu}^{1,2}\) are the Hankel functions of the first and second kind [56], while \(A,B\) are complex constants. We select the positive energy modes with respect to \(\partial_{\tau}\) by requiring that at early times (\(\tau\rightarrow-\infty\), \(s\rightarrow\infty\)) the mode functions are proportional to \(\chi_{p;j}\propto e^{-ip\tau}=e^{is}\). Given the asymptotic form of the Hankel functions [56] for \(s\rightarrow\infty\), \(H_{\nu}^{1}\simeq\sqrt{\frac{2}{\pi s}}e^{i\left(s-\frac{\nu\pi}{2}-\frac{\pi}{4}\right)}\) and \(H_{\nu}^{2}\simeq\sqrt{\frac{2}{\pi s}}e^{-i\left(s-\frac{\nu\pi}{2}-\frac{\pi}{4}\right)}\), we set \(B=0\). The remaining constant is determined by the normalization condition
\[i(\chi_{p;j}^{*}\dot{\chi}_{p;j}-\dot{\chi}_{p;j}^{*}\chi_{p;j})=\frac{4p|A|^{ 2}}{\pi}e^{\pi\,{\rm Im}(\nu_{j})}\stackrel{{!}}{{=}}1 \tag{54}\]
which implies \(A=e^{-\frac{\pi\,{\rm Im}(\nu_{j})}{2}}\sqrt{\frac{\pi}{4p}}\) up to a phase. The modes then take the form
\[\chi_{p;j}(\tau)=e^{-\frac{\pi\,{\rm Im}(\nu_{j})}{2}}\sqrt{-\frac{\pi\tau}{4}}H_{\nu_{j}}^{1}(-p\tau). \tag{55}\]
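As a numerical sanity check, the mode (55) can be verified to solve the mode equation (51) and to carry unit Wronskian, Eq. (54). A minimal sketch, assuming mpmath (which handles Hankel functions of complex order); the sample values of \(m/H_{0}\), \(p\) and \(\tau\), giving an imaginary index \(\nu_{j}\) as in the physically relevant regime discussed below, are ours:

```python
import mpmath as mp

mp.mp.dps = 30
H0, m, p = mp.mpf(1), mp.mpf(10), mp.mpf(2)  # assumed sample values, in units of H0
nu = mp.sqrt(mp.mpf(9) / 4 - (m / H0)**2)    # imaginary mode index, Eq. (53)

def chi(tau):  # normalized mode, Eq. (55)
    return mp.exp(-mp.pi * mp.im(nu) / 2) * mp.sqrt(-mp.pi * tau / 4) * mp.hankel1(nu, -p * tau)

tau = mp.mpf(-3) / 2
d1, d2 = mp.diff(chi, tau), mp.diff(chi, tau, 2)
# mode equation (51): chi'' + (p^2 + m^2/(H0 tau)^2 - 2/tau^2) chi = 0
print(mp.chop(d2 + (p**2 + m**2 / (H0 * tau)**2 - 2 / tau**2) * chi(tau)))  # ~ 0
# normalization (54): i (chi* chi' - chi'* chi) = 1
print(1j * (mp.conj(chi(tau)) * d1 - mp.conj(d1) * chi(tau)))               # ~ 1
```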
The mode index \(\nu_{j}\) turns out to be imaginary, at least in relatively recent epochs, since \(\frac{m_{j}}{H_{0}}\gg\frac{3}{2}\). To give an idea of the size of \(H_{0}\), consider that the current Hubble constant is of order \(H_{0}\sim 10^{-33}\)eV, far below the masses of the known particles. Even for very light masses, say \(m_{j}\simeq 10^{-2}\)eV, we can expect the condition \(\frac{m_{j}}{H_{0}}\gg\frac{3}{2}\) to break down only very close to the Big Bang, when the expansion rate blows up during inflation. Then \(\nu_{j}=i|\nu_{j}|=i\sqrt{\frac{m_{j}^{2}}{H_{0}^{2}}-\frac{9}{4}}\). We shall later need the asymptotic form of Eq. (55) at late times \(\tau\to 0^{-}\), \(s\to 0\). Because \({\rm Re}(\nu_{j})=0\), we cannot directly use Eq. (9.1.9) of [56] and must first express the Hankel functions in terms of Bessel functions. We quote the resulting expression for \(H_{\nu}^{1}\)
\[H_{\nu_{j}}^{1}(-p\tau\to 0)\simeq\frac{1}{\sinh\pi|\nu_{j}|}\left[\frac{e ^{\pi|\nu_{j}|}}{\Gamma(1+\nu_{j})}\left(\frac{-p\tau}{2}\right)^{\nu_{j}}- \frac{1}{\Gamma(1-\nu_{j})}\left(\frac{-p\tau}{2}\right)^{-\nu_{j}}\right] \tag{56}\]
where \(\Gamma(x)\) is the Euler Gamma function. We can immediately write down the components of the auxiliary tensor Eqs. (40) to (43) as
\[L_{\tau\tau}(u_{\mathbf{p};j},u_{\mathbf{p};j}) =\frac{\pi H_{0}^{2}\tau^{2}e^{-\pi|\nu_{j}|}}{8(2\pi)^{3}}\left\{ |H_{\nu_{j}}^{1}|^{2}\left(-p^{2}\tau-\frac{m_{j}^{2}}{H_{0}^{2}\tau}-\frac{1} {4\tau}\right)-\tau|\partial_{\tau}H_{\nu_{j}}^{1}|^{2}-\frac{1}{2}\left(H_{ \nu_{j}}^{1*}\partial_{\tau}H_{\nu_{j}}^{1}+H_{\nu_{j}}^{1}\partial_{\tau}H_{ \nu_{j}}^{1*}\right)\right\}\] \[L_{\tau\tau}(u_{\mathbf{p};j},u_{-\mathbf{p};j}^{*}) =\frac{\pi H_{0}^{2}\tau^{2}e^{-\pi|\nu_{j}|}}{8(2\pi)^{3}}\left\{ (H_{\nu_{j}}^{1*})^{2}\left(-p^{2}\tau-\frac{m_{j}^{2}}{H_{0}^{2}\tau}-\frac{1} {4\tau}\right)-\tau(\partial_{\tau}H_{\nu_{j}}^{1*})^{2}-\left(H_{\nu_{j}}^{1*} \partial_{\tau}H_{\nu_{j}}^{1*}\right)\right\}\] \[L_{kk}(u_{\mathbf{p};j},u_{\mathbf{p};j}) =\frac{\pi H_{0}^{2}\tau^{2}e^{-\pi|\nu_{j}|}}{8(2\pi)^{3}}\left\{ |H_{\nu_{j}}^{1}|^{2}\left(-2p_{k}^{2}\tau+p^{2}\tau+\frac{m_{j}^{2}}{H_{0}^{2} \tau}-\frac{9}{4\tau}\right)-\tau|\partial_{\tau}H_{\nu_{j}}^{1}|^{2}-\frac{3} {2}\left(H_{\nu_{j}}^{1*}\partial_{\tau}H_{\nu_{j}}^{1}+H_{\nu_{j}}^{1} \partial_{\tau}H_{\nu_{j}}^{1*}\right)\right\}\] \[L_{kk}(u_{\mathbf{p};j},u_{-\mathbf{p};j}^{*}) =\frac{\pi H_{0}^{2}\tau^{2}e^{-\pi|\nu_{j}|}}{8(2\pi)^{3}}\left\{ (H_{\nu_{j}}^{1*})^{2}\left(-2p_{k}^{2}\tau+p^{2}\tau+\frac{m_{j}^{2}}{H_{0}^{2} \tau}-\frac{9}{4\tau}\right)-\tau(\partial_{\tau}H_{\nu_{j}}^{1*})^{2}-3 \left(H_{\nu_{j}}^{1*}\partial_{\tau}H_{\nu_{j}}^{1*}\right)\right\}. \tag{57}\]
Here we have suppressed the argument \(-p\tau\) of the Hankel functions. Likewise we can compute the Bogoliubov coefficients of flavor mixing by inserting the solutions of Eq. (55) in the definition of Eq. (30):
\[\Lambda_{p}(\tau) =ie^{-\frac{\pi(|\nu_{2}|+|\nu_{1}|)}{2}}\left(\frac{-\pi\tau}{4} \right)\left(H_{\nu_{2}}^{1*}(-p\tau)\partial_{\tau}H_{\nu_{1}}^{1}(-p\tau)-H_ {\nu_{1}}^{1}(-p\tau)\partial_{\tau}H_{\nu_{2}}^{1*}(-p\tau)\right)\] \[\Xi_{p}(\tau) =ie^{-\frac{\pi(|\nu_{2}|+|\nu_{1}|)}{2}}\left(\frac{-\pi\tau}{4} \right)\left(H_{\nu_{1}}^{1*}(-p\tau)\partial_{\tau}H_{\nu_{2}}^{1*}(-p\tau)-H _{\nu_{2}}^{1*}(-p\tau)\partial_{\tau}H_{\nu_{1}}^{1*}(-p\tau)\right) \tag{58}\]
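Both the small-argument form (56) and the property (31) for the coefficients (58) lend themselves to quick numerical checks; a sketch, again assuming mpmath, with illustrative parameter values of our choosing:

```python
import mpmath as mp

mp.mp.dps = 30
# (a) the small-argument form (56) against the exact Hankel function
nu, z = mp.mpc(0, 3), mp.mpf(10)**-6
approx = (mp.exp(mp.pi * abs(nu)) / mp.gamma(1 + nu) * (z / 2)**nu
          - (z / 2)**(-nu) / mp.gamma(1 - nu)) / mp.sinh(mp.pi * abs(nu))
print(approx / mp.hankel1(nu, z))  # -> 1 up to O(z^2) corrections

# (b) |Lambda_p|^2 - |Xi_p|^2 = 1 for the De Sitter modes (55)
H0, m1, m2, p, tau = 1, 8, 13, 2, mp.mpf(-2)  # assumed sample values
nus = [mp.sqrt(mp.mpf(9) / 4 - (m / H0)**2) for m in (m1, m2)]

def chi(j, t):  # Eq. (55)
    return mp.exp(-mp.pi * mp.im(nus[j]) / 2) * mp.sqrt(-mp.pi * t / 4) * mp.hankel1(nus[j], -p * t)

c = [chi(0, tau), chi(1, tau)]
d = [mp.diff(lambda t: chi(0, t), tau), mp.diff(lambda t: chi(1, t), tau)]
Lam = 1j * (mp.conj(c[1]) * d[0] - mp.conj(d[1]) * c[0])  # Eq. (30)
Xi = 1j * (mp.conj(c[0]) * mp.conj(d[1]) - mp.conj(d[0]) * mp.conj(c[1]))
print(abs(Lam)**2 - abs(Xi)**2)                           # -> 1, Eq. (31)
```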
### The late time approximation
Above we have obtained fairly simple analytical expressions for the various quantities that appear in the energy-momentum tensor \(\mathbb{T}_{\mu\nu}^{(MIX)}\). The exact value of \(\rho^{(MIX)}\) and \(p^{(MIX)}\) for given masses and conformal time can be obtained by inserting the mixing Bogoliubov coefficients of Eq. (58) and the auxiliary tensor of Eq. (57) in Eq. (39) for the corresponding components. At this point it is not possible to proceed analytically in the computation of the final \(\int d^{3}p\) integral, which involves a non-trivial combination of \(p\)-dependent Hankel functions. The integral may, however, be evaluated numerically for given sets of parameters.
We prefer here to pursue a different route, and gain insight into \(\rho^{(MIX)}\) and \(p^{(MIX)}\) by invoking some approximations and handling the approximate integral analytically. We focus on the late time approximation, in which both the flavor vacuum and the energy-momentum tensor operator are considered for late conformal time arguments, \(\tau_{0}\to 0^{-}\) and \(\tau\to 0^{-}\) respectively. As we wish to compute the energy density and the pressure at a given time \(\tau\) associated with the flavor vacuum defined at a previous (or at most coincident) time, we shall always consider \(\tau\geq\tau_{0}\). The late time approximation has the advantage of turning the Hankel function integrals into integrals over polynomials in \(p\), which can be evaluated straightforwardly. With a tedious but simple computation we determine the late time form of \(\mathbb{T}_{\tau\tau}^{(MIX)}\) and \(\mathbb{T}_{kk}^{(MIX)}\). The computation employs the asymptotic form of the Hankel functions of Eq. (56) and makes use of the properties of the \(\Gamma\) functions in several intermediate steps. We show the result in Appendix B. The expressions derived in Eqs. (B3), (B4) and (B5) are still quite cumbersome to deal with. Yet we can invoke another approximation regarding the size of the mass parameters \(m_{1},m_{2}\). As argued above, except for epochs extremely close to the Big Bang, even very small masses \(\sim 10^{-2}\)eV are several orders of magnitude greater than the expansion rate \(H_{0}\). Then we can safely consider the high mass limit
\[\frac{m_{j}}{H_{0}}\gg 1\Longrightarrow|\nu_{j}|\gg 1\, \tag{59}\]
and take the \(|\nu_{j}|\rightarrow\infty\) asymptotic expression for each of Eqs. (B3), (B4) and (B5). In this limit many of the terms are suppressed by hyperbolic functions \(\sinh(\pi|\nu_{j}|)\) in the denominator and only a few terms survive. We find
\[\mathbb{T}_{\tau\tau}^{(MIX)} \simeq\frac{\sin^{2}\theta H_{0}^{2}\tau}{(2\pi)^{3}}\int d^{3}p \Bigg{\{}\left(\frac{(|\nu_{1}|^{2}+|\nu_{2}|^{2})\coth(\pi|\nu_{1}|)\coth( \pi|\nu_{2}|)}{2|\nu_{1}||\nu_{2}|}-1\right)\sum_{j}\frac{\coth(\pi|\nu_{j}|)}{ |\nu_{j}|}\left(\frac{1}{2}-\frac{m_{j}^{2}}{2H_{0}^{2}}\right)+\] \[-\frac{5(|\nu_{2}|^{2}-|\nu_{1}|^{2})\coth(\pi|\nu_{1}|)}{32|\nu_ {1}||\nu_{2}|^{2}}\left(1+\coth^{2}(\pi|\nu_{2}|)\right)\cos\left(2|\nu_{2}| \log\left(\frac{\tau}{\tau_{0}}\right)\right)+\] \[-\frac{5(|\nu_{1}|^{2}-|\nu_{2}|^{2})\coth(\pi|\nu_{2}|)}{32|\nu_ {1}|^{2}|\nu_{2}|}\left(1+\coth^{2}(\pi|\nu_{1}|)\right)\cos\left(2|\nu_{1}| \log\left(\frac{\tau}{\tau_{0}}\right)\right)+\] \[+\frac{\coth(\pi|\nu_{1}|)(|\nu_{2}|^{2}-|\nu_{1}|^{2})}{16|\nu_ {1}||\nu_{2}|}\left(1+\coth^{2}(\pi|\nu_{2}|)\right)\sin\left(2|\nu_{2}|\log \left(\frac{\tau}{\tau_{0}}\right)\right)+\] \[+\frac{\coth(\pi|\nu_{2}|)(|\nu_{1}|^{2}-|\nu_{2}|^{2})}{16|\nu_ {1}||\nu_{2}|}\left(1+\coth^{2}(\pi|\nu_{1}|)\right)\sin\left(2|\nu_{1}|\log \left(\frac{\tau}{\tau_{0}}\right)\right)\Bigg{\}}. \tag{60}\]
and
\[\mathbb{T}^{(MIX)}_{kk} \simeq \frac{\sin^{2}\theta H_{0}^{2}\tau}{(2\pi)^{3}}\int d^{3}p\Bigg{\{} \frac{\coth(\pi|\nu_{1}|)(|\nu_{2}|^{2}-|\nu_{1}|^{2})}{8|\nu_{1}|}\left(1+\coth^ {2}(\pi|\nu_{2}|)\right)\cos\left(2|\nu_{2}|\log\left(\frac{\tau}{\tau_{0}} \right)\right)+ \tag{61}\] \[+ \frac{\coth(\pi|\nu_{2}|)(|\nu_{1}|^{2}-|\nu_{2}|^{2})}{8|\nu_{2} |}\left(1+\coth^{2}(\pi|\nu_{1}|)\right)\cos\left(2|\nu_{1}|\log\left(\frac{ \tau}{\tau_{0}}\right)\right)\] \[+ \frac{3\coth(\pi|\nu_{1}|)(|\nu_{2}|^{2}-|\nu_{1}|^{2})}{16|\nu_{ 1}||\nu_{2}|}\left(1+\coth^{2}(\pi|\nu_{2}|)\right)\sin\left(2|\nu_{2}|\log \left(\frac{\tau}{\tau_{0}}\right)\right)\] \[+ \frac{3\coth(\pi|\nu_{2}|)(|\nu_{1}|^{2}-|\nu_{2}|^{2})}{16|\nu_{ 1}||\nu_{2}|}\left(1+\coth^{2}(\pi|\nu_{1}|)\right)\sin\left(2|\nu_{1}|\log \left(\frac{\tau}{\tau_{0}}\right)\right)\Bigg{\}}\.\]
We can see from the above equations that both the \(\tau\tau\) component and the \(kk\) component depend only on the ratio \(\frac{\tau}{\tau_{0}}\) between the instantaneous \(\tau\) and the reference conformal time \(\tau_{0}\). They exhibit a simple oscillating behaviour weighted by the mode indices \(\nu_{j}\). The integral over \(d^{3}p\) is trivial but divergent. In order to obtain a finite result we need a regularization. Given that the particle mixing phenomena are generally suppressed at high momenta, that is, when the mass terms responsible for mixing become negligible, we employ an ultraviolet cutoff \(\mathcal{P}_{0}\). For instance, if the bosons which are being mixed are the supersymmetric partners of neutrinos, the natural choice is a cutoff of the order of the electroweak scale \(\mathcal{P}_{0}\simeq 246\ \text{GeV}\). We must now recall that the mode label \(\mathbf{p}\) is _not_ the actual momentum carried by the particles, which is instead the physical momentum \(\mathbf{p}_{PHYS}=\frac{\mathbf{p}}{C(\tau)}\). If the cutoff is to be imposed on the physical momentum of the particles \(p_{PHYS}^{CUTOFF}=\mathcal{P}_{0}\), then the cutoff for the mode label is the comoving cutoff
\[p^{CUTOFF}=p_{PHYS}^{CUTOFF}C(\tau)=-\frac{\mathcal{P}_{0}}{H_{0}\tau}\equiv \mathcal{P}(\tau)\, \tag{62}\]
where of course, given that \(\tau<0\), \(\mathcal{P}(\tau)\) is strictly positive. Moving to polar coordinates, evaluating the angular integral and performing the \(dp\) integral with the cutoff \(\mathcal{P}(\tau)\), we find
\[\mathbb{T}^{(MIX)}_{\tau\tau} \simeq \frac{\sin^{2}\theta H_{0}^{2}\tau\mathcal{P}^{3}(\tau)}{6\pi^{2} }\Bigg{\{}\left(\frac{(|\nu_{1}|^{2}+|\nu_{2}|^{2})\coth(\pi|\nu_{1}|)\coth(\pi| \nu_{2}|)}{2|\nu_{1}||\nu_{2}|}-1\right)\sum_{j}\frac{\coth(\pi|\nu_{j}|)}{| \nu_{j}|}\left(\frac{1}{2}-\frac{m_{j}^{2}}{2H_{0}^{2}}\right)+ \tag{63}\] \[- \frac{5(|\nu_{2}|^{2}-|\nu_{1}|^{2})\coth(\pi|\nu_{1}|)}{32|\nu_{ 1}||\nu_{2}|^{2}}\left(1+\coth^{2}(\pi|\nu_{2}|)\right)\cos\left(2|\nu_{2}| \log\left(\frac{\tau}{\tau_{0}}\right)\right)+\] \[- \frac{5(|\nu_{1}|^{2}-|\nu_{2}|^{2})\coth(\pi|\nu_{2}|)}{32|\nu_{ 1}|^{2}|\nu_{2}|}\left(1+\coth^{2}(\pi|\nu_{1}|)\right)\cos\left(2|\nu_{1}| \log\left(\frac{\tau}{\tau_{0}}\right)\right)+\] \[+ \frac{\coth(\pi|\nu_{1}|)(|\nu_{2}|^{2}-|\nu_{1}|^{2})}{16|\nu_{ 1}||\nu_{2}|}\left(1+\coth^{2}(\pi|\nu_{2}|)\right)\sin\left(2|\nu_{2}|\log \left(\frac{\tau}{\tau_{0}}\right)\right)+\] \[+ \frac{\coth(\pi|\nu_{2}|)(|\nu_{1}|^{2}-|\nu_{2}|^{2})}{16|\nu_{ 1}||\nu_{2}|}\left(1+\coth^{2}(\pi|\nu_{1}|)\right)\sin\left(2|\nu_{1}|\log \left(\frac{\tau}{\tau_{0}}\right)\right)\Bigg{\}}\.\]
and
\[\mathbb{T}^{(MIX)}_{kk} \simeq \frac{\sin^{2}\theta H_{0}^{2}\tau\mathcal{P}^{3}(\tau)}{6\pi^{2 }}\Bigg{\{}\frac{\coth(\pi|\nu_{1}|)(|\nu_{2}|^{2}-|\nu_{1}|^{2})}{8|\nu_{1}|} \left(1+\coth^{2}(\pi|\nu_{2}|)\right)\cos\left(2|\nu_{2}|\log\left(\frac{\tau} {\tau_{0}}\right)\right)+ \tag{64}\] \[+ \frac{\coth(\pi|\nu_{2}|)(|\nu_{1}|^{2}-|\nu_{2}|^{2})}{8|\nu_{2} |}\left(1+\coth^{2}(\pi|\nu_{1}|)\right)\cos\left(2|\nu_{1}|\log\left(\frac{\tau} {\tau_{0}}\right)\right)\] \[+ \frac{3\coth(\pi|\nu_{1}|)(|\nu_{2}|^{2}-|\nu_{1}|^{2})}{16|\nu_{ 1}||\nu_{2}|}\left(1+\coth^{2}(\pi|\nu_{2}|)\right)\sin\left(2|\nu_{2}|\log \left(\frac{\tau}{\tau_{0}}\right)\right)\] \[+ \frac{3\coth(\pi|\nu_{2}|)(|\nu_{1}|^{2}-|\nu_{2}|^{2})}{16|\nu_{ 1}||\nu_{2}|}\left(1+\coth^{2}(\pi|\nu_{1}|)\right)\sin\left(2|\nu_{1}|\log \left(\frac{\tau}{\tau_{0}}\right)\right)\Bigg{\}}\.\]
Although the energy density and the pressure (which differ from Eqs. (63) and (64) by the multiplicative factor \(C^{-2}(\tau)=H_{0}^{2}\tau^{2}\)) depend on the cutoff, their ratio is cutoff-independent. We can therefore derive a cutoff-independent equation of state by dividing Eq. (64) by Eq. (63):
\[w^{(MIX)}(\tau_{0},\tau)=\frac{p^{(MIX)}(\tau_{0},\tau)}{\rho^{(MIX)}(\tau_{0}, \tau)}=\frac{\mathbb{T}_{kk}^{(MIX)}(\tau_{0},\tau)}{\mathbb{T}_{\tau\tau}^{( MIX)}(\tau_{0},\tau)}. \tag{65}\]
The behaviour of \(w^{(MIX)}\) is shown in Fig. 1 for sample values of the parameters. Interestingly, the adiabatic factor undergoes oscillations in the full range \([-1,1]\), periodically going through "dark energy" phases (\(w<-\frac{1}{3}\)), dust phases (\(w=0\)) and radiation phases (\(w=\frac{1}{3}\)). This is at odds with both the fermionic counterpart (\(w=0\) at all times) and the flat spacetime result (\(w=-1\) at all times). From the right panel of Fig. 1 one can see that all the curves approach the limiting value \(w=-1\) when \(\tau\rightarrow\tau_{0}\), i.e., as the ratio \(\frac{\tau}{\tau_{0}}\) tends to unity. This is nothing but a recovery of the flat space limit: the instantaneous adiabatic factor is the same for any choice of parameters and is equal to
\[w^{(MIX)}(\tau,\tau)=-1. \tag{66}\]
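The braces of Eqs. (63) and (64) are simple enough to evaluate directly. A minimal sketch, assuming NumPy (the helper name and the sample parameters are ours), computes the cutoff-independent ratio (65) and reproduces both the oscillations of Fig. 1 and the limit (66):

```python
import numpy as np

def w_mix(x, n1, n2, H0=1.0):
    """Adiabatic factor w(tau0, tau) from Eqs. (63)-(65); x = tau/tau0, n_j = |nu_j|."""
    m2 = [(n**2 + 9 / 4) * H0**2 for n in (n1, n2)]  # m_j^2 from Eq. (53)
    c1, c2 = 1 / np.tanh(np.pi * n1), 1 / np.tanh(np.pi * n2)
    L = np.log(x)
    # brace of Eq. (63), proportional to the energy density
    rho = ((n1**2 + n2**2) * c1 * c2 / (2 * n1 * n2) - 1) * sum(
        c / n * (0.5 - m / (2 * H0**2)) for c, n, m in zip((c1, c2), (n1, n2), m2))
    rho -= 5 * (n2**2 - n1**2) * c1 / (32 * n1 * n2**2) * (1 + c2**2) * np.cos(2 * n2 * L)
    rho -= 5 * (n1**2 - n2**2) * c2 / (32 * n1**2 * n2) * (1 + c1**2) * np.cos(2 * n1 * L)
    rho += c1 * (n2**2 - n1**2) / (16 * n1 * n2) * (1 + c2**2) * np.sin(2 * n2 * L)
    rho += c2 * (n1**2 - n2**2) / (16 * n1 * n2) * (1 + c1**2) * np.sin(2 * n1 * L)
    # brace of Eq. (64), proportional to the pressure
    pr = c1 * (n2**2 - n1**2) / (8 * n1) * (1 + c2**2) * np.cos(2 * n2 * L)
    pr += c2 * (n1**2 - n2**2) / (8 * n2) * (1 + c1**2) * np.cos(2 * n1 * L)
    pr += 3 * c1 * (n2**2 - n1**2) / (16 * n1 * n2) * (1 + c2**2) * np.sin(2 * n2 * L)
    pr += 3 * c2 * (n1**2 - n2**2) / (16 * n1 * n2) * (1 + c1**2) * np.sin(2 * n1 * L)
    return pr / rho

print(w_mix(1.0, 200.0, 15000.0))     # -> approx -1, cf. Eq. (66)
for x in (1.0001, 1.0002, 1.0005):    # oscillations within [-1, 1], cf. Fig. 1
    print(x, w_mix(x, 200.0, 15000.0))
```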
The behaviour of the energy density \(\rho^{(MIX)}(\tau_{0},\tau)\) for sample values of the parameters is shown in Fig. 2. We can see that within the approximations employed, the energy density is a constant for fixed values of the parameters.
## VI Conclusions
We have analyzed the possible contribution to the dark energy given by the quantum vacuum for mixed bosons, known as the flavor vacuum, by studying boson mixing in curved space. We have calculated the expectation value of the energy-momentum tensor \(T_{\mu\nu}^{MIX}\) of free bosons on the vacuum for mixed fields for a spatially flat FLRW metric, and we have shown that this tensor is diagonal (the off-diagonal elements all vanish); it therefore behaves as a classical perfect fluid whose properties depend on the space-time geometry. This result connects the non-trivial structure of the quantum flavor vacuum for mixed bosons with classical fluids. Moreover, \(T_{\mu\nu}^{MIX}\) also represents a valid source term for the Einstein field equations, since it satisfies the Bianchi identity. However, as an approximation, we have ignored the effect of \(T_{\mu\nu}^{MIX}\) on the scale factor and computed its value on the metric determined only by classical source terms.
We have analyzed the behaviour of \(T_{\mu\nu}^{MIX}\) in the case of a De Sitter background, and we have shown that the values of the adiabatic factor of the boson flavor mixing \(w^{(MIX)}\) lie in the interval \([-1,1]\). On the other hand, in the flat spacetime limit, the bosonic mixed vacuum satisfies the equation of state of dark energy, \(w^{(MIX)}=-1\). Therefore, there may be a strong link between purely quantum effects and phenomena on a cosmological scale, and the vacuum energy of mixed bosons, such as neutrino super-partners, may give a non-trivial contribution to the dark energy.
Figure 1: (color online) Plots of the adiabatic factor \(w^{(MIX)}(\tau_{0},\tau)\) for sample values of the parameters. Masses are expressed in units of \(H_{0}\) and conformal times in units of \(H_{0}^{-1}\). Masses are chosen so to satisfy the condition \(m_{j}\gg H_{0}\). Blue solid line: \(m_{1}=200,m_{2}=15000\); orange dashed line: \(m_{1}=100,m_{2}=5000\); dark green dotdashed line: \(m_{1}=150,m_{2}=20000\).
## Acknowledgements
A.C. and A.Q. acknowledge partial financial support from MUR and INFN, A.C. also acknowledges the COST Action CA1511 Cosmology and Astrophysics Network for Theoretical Advances and Training Actions (CANTATA).
## Appendix A Bianchi identity
In this appendix we prove explicitly the covariant conservation of the energy-momentum tensor associated with the flavor vacuum. We demonstrate the four equations
\[\nabla_{\mu}\mathbb{T}^{\mu\nu}=0 \tag{A1}\]
with \(\nabla_{\mu}\) denoting the covariant derivative. There is no need here to distinguish between \(\mathbb{T}^{(MIX)}_{\mu\nu}\) and \(\mathbb{T}^{(N)}_{\mu\nu}\), since both satisfy Eq. (A1), and so does the full energy-momentum tensor. Let us first compute the connection coefficients for the metric of Eq. (1). The non-zero coefficients are
\[\Gamma^{\tau}_{\tau\tau}=\Gamma^{\tau}_{ii}=\Gamma^{i}_{\tau i}=\Gamma^{i}_{i\tau}=\frac{\dot{C}}{C}. \tag{A2}\]
No sum is here intended over repeated indices. Notice that the coefficients depend only on \(\tau\). In terms of the connection coefficients, the covariant divergence reads
\[\nabla_{\mu}\mathbb{T}^{\mu\nu}=\partial_{\mu}\mathbb{T}^{\mu\nu}+\Gamma^{\mu}_{\mu\sigma}\mathbb{T}^{\sigma\nu}+\Gamma^{\nu}_{\mu\sigma}\mathbb{T}^{\mu\sigma}. \tag{A3}\]
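The coefficients (A2) can be checked symbolically for the conformally flat metric of Eq. (1), here taken in the form \(ds^{2}=C^{2}(\tau)\left(d\tau^{2}-d\mathbf{x}^{2}\right)\); a SymPy sketch (the index ordering \(0=\tau\) is ours):

```python
import sympy as sp

tau, x, y, z = sp.symbols('tau x y z')
C = sp.Function('C')(tau)
g = sp.diag(C**2, -C**2, -C**2, -C**2)  # metric of Eq. (1), signature (+,-,-,-)
ginv, X = g.inv(), (tau, x, y, z)

def Gamma(a, b, c):  # Christoffel symbol Gamma^a_{bc} of the metric g
    return sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], X[c]) + sp.diff(g[d, c], X[b])
                                         - sp.diff(g[b, c], X[d])) / 2 for d in range(4)))

# each of these evaluates to C'(tau)/C(tau), in agreement with Eq. (A2)
print(Gamma(0, 0, 0), Gamma(0, 1, 1), Gamma(1, 0, 1), Gamma(1, 1, 0))
```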
* (\(\nu=i\)) For \(\nu=i\), with \(i=1,2,3\), equation (A3) becomes \[\nabla_{\mu}\mathbb{T}^{\mu i}=\partial_{\mu}\mathbb{T}^{\mu i}+\Gamma^{\mu}_{\mu\sigma}\mathbb{T}^{\sigma i}+\Gamma^{i}_{\mu\sigma}\mathbb{T}^{\mu\sigma}\.\] (A4) From the diagonality of \(\mathbb{T}^{\mu\nu}\) proved above, we can write \[\nabla_{\mu}\mathbb{T}^{\mu i}=\partial_{i}\mathbb{T}^{ii}+\sum_{\mu}\Gamma^{\mu}_{\mu i}\mathbb{T}^{ii}+\sum_{\mu}\Gamma^{i}_{\mu\mu}\mathbb{T}^{\mu\mu}\,\] (A5)
Figure 2: (color online) Plots of the energy density \(\rho^{(MIX)}(\tau_{0},\tau)\) for sample values of the parameters. Masses are expressed in units of \(H_{0}\) and conformal times in units of \(H_{0}^{-1}\). We have considered \(\sin^{2}\theta=0.307\), \(\mathcal{P}_{0}=246\)GeV, \(H_{0}=10^{-33}\)eV and \(\tau_{0}=-0.01\). Masses are chosen so to satisfy the condition \(m_{j}\gg H_{0}\) and conformal times to suite the late time approximation. Blue solid line: \(m_{1}=200,m_{2}=15000\); orange dashed line: \(m_{1}=100,m_{2}=5000\); dark green dotdashed line: \(m_{1}=150,m_{2}=20000\).
where no sum is intended over repeated indices and the summations are written out explicitly to avoid confusion. The first term on the right hand side of Eq. (A5) is zero, since \(\mathbb{T}^{\mu\nu}\) depends only on \(\tau\). Similarly, from Eq. (A2) we know that \(\Gamma^{\mu}_{\mu i}=0=\Gamma^{i}_{\mu\mu}\) for each \(\mu=0,1,2,3\), so that also the second and the third term on the right hand side of Eq. (A5) vanish. Therefore \[\nabla_{\mu}\mathbb{T}^{\mu i}=0\ \ \ \ \forall i\.\] (A6)
* (\(\nu=\tau\)) Only a slightly longer calculation is needed to prove the statement for \(\nu=\tau\). Starting from equation (A3) we have \[\nabla_{\mu}\mathbb{T}^{\mu\tau} =\partial_{\mu}\mathbb{T}^{\mu\tau}+\Gamma^{\mu}_{\mu\sigma}\mathbb{T}^{\sigma\tau}+\Gamma^{\tau}_{\mu\sigma}\mathbb{T}^{\mu\sigma}\] \[=\partial_{\tau}\mathbb{T}^{\tau\tau}+\left(\Gamma^{\tau}_{\tau\tau}+\sum_{i}\Gamma^{i}_{i\tau}\right)\mathbb{T}^{\tau\tau}+\Gamma^{\tau}_{\tau\tau}\mathbb{T}^{\tau\tau}+\sum_{i}\Gamma^{\tau}_{ii}\mathbb{T}^{ii}\] \[=\partial_{\tau}\mathbb{T}^{\tau\tau}+5\Gamma^{\tau}_{\tau\tau}\mathbb{T}^{\tau\tau}+3\Gamma^{\tau}_{\tau\tau}\mathbb{T}^{ii}\,\] (A7) where we have used the diagonality and isotropy of \(\mathbb{T}^{\mu\nu}\) and Eqs. (A2). It is convenient to express Eq. (A7) in terms of the covariant components by means of the metric of Eq. (1), so as to get \[\nabla_{\mu}\mathbb{T}^{\mu\tau}=\partial_{\tau}\left(C^{-4}\mathbb{T}_{\tau\tau}\right)+5C^{-5}\dot{C}\mathbb{T}_{\tau\tau}+3C^{-5}\dot{C}\mathbb{T}_{ii}\.\] (A8) From equation (14), we know that each of the terms above is the integral of the auxiliary tensor components \(L_{\tau\tau},L_{ii}\) weighted by \(\tau\)-independent coefficients (because the Bogoliubov coefficients are evaluated at the fixed reference time \(\tau_{0}\)). It is then sufficient to prove that \[\partial_{\tau}\left(C^{-4}L_{\tau\tau}(A,B)\right)+5C^{-5}\dot{C}L_{\tau\tau}(A,B)+3C^{-5}\dot{C}L_{ii}(A,B)=0\] (A9) for each \(A,B=u_{\mathbf{p};j},u^{*}_{-\mathbf{p};j}\), to show that the divergence (A8) vanishes. It is understood that the equality (A9) has to hold, multiplied by the appropriate Bogoliubov coefficients, under the integral sign \(\int d^{3}p\). We can prove Eq. (A9) by direct computation, inserting Eqs. (40) to (43) and using Eq. (50).
We have \[\partial_{\tau}\left(C^{-4}L_{\tau\tau}(u_{\mathbf{p};j},u_{\mathbf{p};j})\right)+5C^{-5}\dot{C}L_{\tau\tau}(u_{\mathbf{p};j},u_{\mathbf{p};j})+3C^{-5}\dot{C}L_{ii}(u_{\mathbf{p};j},u_{\mathbf{p};j})=\] \[C^{-4}\left\{\partial_{\tau}\left(L_{\tau\tau}(u_{\mathbf{p};j},u_{\mathbf{p};j})\right)+\dot{C}C^{-1}L_{\tau\tau}(u_{\mathbf{p};j},u_{\mathbf{p};j})+3\dot{C}C^{-1}L_{ii}(u_{\mathbf{p};j},u_{\mathbf{p};j})\right\}=\] \[\frac{C^{-4}}{2(2\pi)^{3}}\Bigg\{\left(\dot{\chi}^{*}_{p;j}\chi_{p;j}+\chi^{*}_{p;j}\dot{\chi}_{p;j}\right)\left(p^{2}C^{-2}+m_{j}^{2}+4C^{-4}\dot{C}^{2}-C^{-3}\ddot{C}\right)+|\chi_{p;j}|^{2}\left(-2p^{2}C^{-3}\dot{C}-4C^{-5}\dot{C}^{3}+2C^{-4}\dot{C}\ddot{C}\right)\] \[+|\dot{\chi}_{p;j}|^{2}\left(-4C^{-3}\dot{C}\right)+\left(\chi^{*}_{p;j}\ddot{\chi}_{p;j}+\ddot{\chi}^{*}_{p;j}\chi_{p;j}\right)\left(-C^{-3}\dot{C}\right)+\left(\ddot{\chi}^{*}_{p;j}\dot{\chi}_{p;j}+\dot{\chi}^{*}_{p;j}\ddot{\chi}_{p;j}\right)C^{-2}\Bigg\}\] \[+\frac{C^{-4}}{2(2\pi)^{3}}\Bigg\{|\chi_{p;j}|^{2}\left(p^{2}C^{-3}\dot{C}+m_{j}^{2}C^{-1}\dot{C}+C^{-5}\dot{C}^{3}\right)+|\dot{\chi}_{p;j}|^{2}C^{-3}\dot{C}-C^{-4}\dot{C}^{2}\left(\chi^{*}_{p;j}\dot{\chi}_{p;j}+\dot{\chi}^{*}_{p;j}\chi_{p;j}\right)\Bigg\}\] \[+\frac{C^{-4}}{2(2\pi)^{3}}\Bigg\{|\chi_{p;j}|^{2}\left(-4p^{2}C^{-3}\dot{C}-2m_{j}^{2}C^{-1}\dot{C}+6p_{i}^{2}C^{-3}\dot{C}+2C^{-4}\dot{C}\ddot{C}\right)+\left(\dot{\chi}^{*}_{p;j}\chi_{p;j}+\chi^{*}_{p;j}\dot{\chi}_{p;j}\right)\Big{(}p^{2}C^{-2}+m_{j}^{2}-C^{-3}\ddot{C}\Big{)}\] \[+\left(\chi^{*}_{p;j}\ddot{\chi}_{p;j}+\ddot{\chi}^{*}_{p;j}\chi_{p;j}\right)\left(-C^{-3}\dot{C}\right)+\left(\ddot{\chi}^{*}_{p;j}\dot{\chi}_{p;j}+\dot{\chi}^{*}_{p;j}\ddot{\chi}_{p;j}\right)C^{-2}\Bigg\}=\frac{C^{-7}\dot{C}}{2(2\pi)^{3}}\left[|\chi_{p;j}|^{2}\left(6p_{i}^{2}-2p^{2}\right)\right]\.\] In the last equality we have made use of Eq. (50) and its complex conjugate. Recalling that this quantity multiplies a function of \(p^{2}\) under the integral sign \(\int d^{3}p\), we can make the substitution \(p_{i}^{2}\to\frac{p^{2}}{3}\), and so \[\partial_{\tau}\left(C^{-4}L_{\tau\tau}(u_{\mathbf{p};j},u_{\mathbf{p};j})\right)+5C^{-5}\dot{C}L_{\tau\tau}(u_{\mathbf{p};j},u_{\mathbf{p};j})+3C^{-5}\dot{C}L_{ii}(u_{\mathbf{p};j},u_{\mathbf{p};j})=0\.\] (A10)
Likewise it is easy to show that
\[\partial_{\tau}\left(C^{-4}L_{\tau\tau}(u_{\mathbf{p};j},u^{*}_{-\mathbf{p};j})\right)+5C^{-5}\dot{C}L_{\tau\tau}(u_{\mathbf{p};j},u^{*}_{-\mathbf{p};j})+3C^{-5}\dot{C}L_{ii}(u_{\mathbf{p};j},u^{*}_{-\mathbf{p};j})=\frac{C^{-7}\dot{C}}{2(2\pi)^{3}}\left[(\chi^{*}_{p;j})^{2}\left(6p_{i}^{2}-2p^{2}\right)\right]\equiv 0. \tag{A11}\]
This proves the statement for \(\nu=\tau\).
## Appendix B Energy-momentum tensor in the late time approximation
In this appendix we show the late time form of \(\mathbb{T}^{(MIX)}_{\mu\nu}\). It is convenient to split the auxiliary tensor components into their mass and kinetic parts. We define (see Eqs. (40) to (43))
\[L^{(MASS)}_{\tau\tau}(u_{\mathbf{p};j},u_{\mathbf{p};j}) = \frac{m_{j}^{2}}{2(2\pi)^{3}}|\chi_{p;j}|^{2}\ ;\ \ \ L^{(MASS)}_{\tau\tau}(u_{\mathbf{p};j},u^{*}_{-\mathbf{p};j})=\frac{m_{j}^{2}}{2(2 \pi)^{3}}(\chi^{*}_{p;j})^{2}\ ;\] \[L^{(MASS)}_{kk}(u_{\mathbf{p};j},u_{\mathbf{p};j}) = -L^{(MASS)}_{\tau\tau}(u_{\mathbf{p};j},u_{\mathbf{p};j})\ ;\ \ \ L^{(MASS)}_{kk}(u_{\mathbf{p};j},u^{*}_{-\mathbf{p};j})=-L^{(MASS)}_{\tau\tau}(u_{ \mathbf{p};j},u^{*}_{-\mathbf{p};j})\]
and the "kinetic" parts as \(L^{(KIN)}_{\mu\mu}(A,B)=L_{\mu\mu}(A,B)-L^{(MASS)}_{\mu\mu}(A,B)\), for \(\mu=\tau,k\) and \(A,B=u_{\mathbf{p};j},u^{*}_{-\mathbf{p};j}\). Letting \((a)=(MASS),(KIN)\), we also set
\[\mathbb{T}^{(a)}_{\mu\mu} = \sin^{2}\theta\int d^{3}p\bigg\{2|\Xi_{p}(\tau_{0})|^{2}\sum_{j=1,2}L^{(a)}_{\mu\mu}(u_{\mathbf{p};j},u_{\mathbf{p};j})+\left[\Xi^{*}_{p}(\tau_{0})\Lambda_{p}(\tau_{0})L^{(a)}_{\mu\mu}(u_{\mathbf{p};1},u^{*}_{-\mathbf{p};1})+c.c.\right] \tag{B1}\] \[- \left[\Xi_{p}(\tau_{0})\Lambda_{p}(\tau_{0})L^{(a)*}_{\mu\mu}(u_{\mathbf{p};2},u^{*}_{-\mathbf{p};2})+c.c.\right]\bigg\}\.\]
Evidently
\[\mathbb{T}^{(MIX)}_{\mu\mu}=\mathbb{T}^{(MASS)}_{\mu\mu}+\mathbb{T}^{(KIN)}_{\mu\mu} \tag{B2}\]
for \(\mu=\tau,k\). Of course no sum is involved over the repeated index. At the lowest order in \(\tau\) we have
\[\mathbb{T}_{\tau\tau}^{(MASS)}(\tau_{0},\tau)=-\mathbb{T}_{kk}^{(MASS)}(\tau_{0},\tau)=\frac{\sin^{2}\theta}{(2\pi)^{3}}\int d^{3}p\Bigg\{\frac{m_{1}^{2}\tau\coth\pi|\nu_{2}|}{8|\nu_{1}|^{2}|\nu_{2}|}\Bigg[(|\nu_{1}|^{2}+|\nu_{2}|^{2})+\] \[\frac{(|\nu_{1}|^{2}-|\nu_{2}|^{2})\left(\sinh^{2}(\pi|\nu_{1}|)+\cosh^{2}(\pi|\nu_{1}|)\right)}{4\sinh^{2}(\pi|\nu_{1}|)}\left(\left(\frac{\tau}{\tau_{0}}\right)^{2\nu_{1}}+\left(\frac{\tau_{0}}{\tau}\right)^{2\nu_{1}}\right)\Bigg]+\frac{m_{2}^{2}\tau\coth\pi|\nu_{1}|}{8|\nu_{1}||\nu_{2}|^{2}}\Bigg[(|\nu_{1}|^{2}+|\nu_{2}|^{2})+\] \[\Bigg\{\left(\frac{p}{2}\right)^{2(\nu_{1}-\nu_{2})}\frac{\pi^{2}}{8\sinh^{2}(\pi|\nu_{1}|)\sinh^{2}(\pi|\nu_{2}|)\Gamma^{2}(1+\nu_{1})\Gamma^{2}(1-\nu_{2})}\Bigg[(-\tau_{0})^{2(\nu_{1}-\nu_{2})}(\nu_{1}+\nu_{2})^{2}\Bigg(\frac{m_{1}^{2}\tau(\coth(\pi|\nu_{1}|)-1)}{4|\nu_{j}|}\Bigg)\] \[+(-\tau_{0})^{-2\nu_{2}}(-\tau)^{2\nu_{1}}(|\nu_{1}|^{2}-|\nu_{2}|^{2})\left(\frac{m_{1}^{2}\tau\coth(\pi|\nu_{1}|)}{4|\nu_{1}|}-\frac{m_{1}^{2}\tau(1+e^{\pi|\nu_{1}|})}{8|\nu_{1}|\sinh(\pi|\nu_{1}|)}\right)+(-\tau_{0})^{2\nu_{1}}(-\tau)^{-2\nu_{2}}(|\nu_{2}|^{2}-|\nu_{1}|^{2})\times\] \[\Bigg(\frac{m_{2}^{2}\tau\coth(\pi|\nu_{2}|)}{4|\nu_{2}|}-\frac{m_{2}^{2}\tau(1+e^{\pi|\nu_{2}|})}{8|\nu_{2}|\sinh(\pi|\nu_{2}|)}\Bigg)\Bigg]+c.c.\Bigg\}+\Bigg\{\left(\frac{p}{2}\right)^{2(\nu_{1}+\nu_{2})}\frac{\pi^{2}(-\tau_{0})^{2\nu_{2}}(-\tau)^{2\nu_{1}}(\nu_{2}-\nu_{1})^{2}}{8\sinh^{2}(\pi|\nu_{1}|)\sinh^{2}(\pi|\nu_{2}|)\Gamma^{2}(1+\nu_{1})\Gamma^{2}(1+\nu_{2})}\times\] \[\Bigg[\frac{m_{1}^{2}\tau\coth(\pi|\nu_{1}|)}{4|\nu_{1}|}-\frac{m_{1}^{2}\tau(1+e^{\pi|\nu_{1}|})}{8|\nu_{1}|\sinh(\pi|\nu_{1}|)}-\frac{m_{2}^{2}\tau\coth(\pi|\nu_{2}|)}{4|\nu_{2}|}+\frac{m_{2}^{2}\tau(1+e^{\pi|\nu_{2}|})}{8|\nu_{2}|\sinh(\pi|\nu_{2}|)}\Bigg]+c.c.\Bigg\}+\] \[\Bigg\{\left(\frac{p}{2}\right)^{2\nu_{1}}\frac{\pi}{4|\nu_{2}|\sinh^{2}(\pi|\nu_{1}|)\Gamma^{2}(1+\nu_{1})}\Bigg[(-\tau_{0})^{2\nu_{1}}(|\nu_{1}|^{2}-|\nu_{2}|^{2})\left(\frac{m_{2}^{2}\tau}{4|\nu_{2}|}\right)+(-\tau_{0})^{2(\nu_{1}-\nu_{2})}(-\tau)^{2\nu_{2}}(\nu_{1}+\nu_{2})^{2}\times\] \[\left(\frac{m_{2}^{2}\tau}{16|\nu_{2}|\sinh^{2}(\pi|\nu_{2}|)}\right)\left(e^{2\pi|\nu_{2}|}+e^{-\pi|\nu_{2}|}-1\right)+(-\tau_{0})^{2(\nu_{1}+\nu_{2})}(-\tau)^{-2\nu_{2}}(\nu_{2}-\nu_{1})^{2}\left(\frac{m_{2}^{2}\tau}{16|\nu_{2}|\sinh^{2}(\pi|\nu_{2}|)}\right)\times\] \[\left(e^{2\pi|\nu_{2}|}+e^{-\pi|\nu_{2}|}-1\right)\Bigg]+c.c.\Bigg\}+\Bigg\{\left(\frac{p}{2}\right)^{2\nu_{2}}\frac{\pi}{4|\nu_{1}|\sinh^{2}(\pi|\nu_{2}|)\Gamma^{2}(1+\nu_{2})}\Bigg[(-\tau_{0})^{2\nu_{2}}(|\nu_{2}|^{2}-|\nu_{1}|^{2})\left(\frac{m_{1}^{2}\tau}{4|\nu_{1}|}\right)+\] \[(-\tau_{0})^{2(\nu_{2}-\nu_{1})}(-\tau)^{2\nu_{1}}(\nu_{1}+\nu_{2})^{2}\left(\frac{m_{1}^{2}\tau}{16|\nu_{1}|\sinh^{2}(\pi|\nu_{1}|)}\right)\left(e^{2\pi|\nu_{1}|}+e^{-\pi|\nu_{1}|}-1\right)+(-\tau_{0})^{2(\nu_{1}+\nu_{2})}(-\tau)^{-2\nu_{1}}(\nu_{2}-\nu_{1})^{2}\times\] \[\left(\frac{m_{1}^{2}\tau}{16|\nu_{1}|\sinh^{2}(\pi|\nu_{1}|)}\right)\left(e^{2\pi|\nu_{1}|}+e^{-\pi|\nu_{1}|}-1\right)\Bigg]+c.c.\Bigg\}\Bigg\}\;, \tag{B3}\]
\[\mathbb{T}^{(KIN)}_{\tau\tau}=\frac{\sin^{2}\theta H_{0}^{2}\tau}{(2\pi)^{3}}\int d ^{3}p\Bigg{\{}\left(\frac{\coth(\pi|\nu_{1}|)\coth(\pi|\nu_{2}|)(|\nu_{1}|^{2}+| \nu_{2}|^{2})}{2|\nu_{1}||\nu_{2}|}-1\right)\sum_{j}\frac{\coth(\pi|\nu_{j}|)}{| \nu_{j}|}\left(\frac{1}{2}-\frac{m_{j}^{2}}{4H_{0}^{2}}\right)+\]
\[\frac{\coth(\pi|\nu_{2}|)(|\nu_{1}|^{2}+|\nu_{2}|^{2})}{4|\nu_{1}|^{2}|\nu_{2}| \sinh^{2}(\pi|\nu_{1}|)}\left(\frac{m_{1}^{2}}{4H_{0}^{2}}-\frac{1}{2}\right) +\frac{\coth(\pi|\nu_{1}|)(|\nu_{1}|^{2}+|\nu_{2}|^{2})}{4|\nu_{1}||\nu_{2}|^{ 2}\sinh^{2}(\pi|\nu_{2}|)}\left(\frac{m_{2}^{2}}{4H_{0}^{2}}-\frac{1}{2}\right) +\Bigg{[}\left(\frac{\tau}{\tau_{0}}\right)^{2\nu_{2}}\frac{\coth(\pi|\nu_{1}|) (|\nu_{2}|^{2}-|\nu_{1}|^{2})}{8|\nu_{1}||\nu_{2}|^{2}\sinh^{2}(\pi|\nu_{2}|)} \times\]
\[\left(\left(\frac{5}{8}-\frac{m_{1}^{2}}{4H_{0}^{2}}-\frac{\nu_{1}}{4}\right) +\cosh(2\pi|\nu_{1}|)\left(\frac{-5}{8}+\frac{m_{1}^{2}}{4H_{0}^{2}}-\frac{ \nu_{1}}{4}\right)\right)+c.c.\Bigg{]}+\]
\[\left[\left(\frac{p}{2}\right)^{2(\nu_{1}-\nu_{2})}\frac{\pi^{2}}{16\sinh^{2} (\pi|\nu_{1}|)\sinh^{2}(\pi|\nu_{2}|)\Gamma^{2}(1+\nu_{1})\Gamma^{2}(1-\nu_{2} )}\Bigg{(}(-\tau_{0})^{2\nu_{1}}(-\tau)^{-2\nu_{2}}(|\nu_{1}|^{2}-|\nu_{2}|^{ 2})\coth(\pi|\nu_{2}|)\nu_{2}-\right.\]
\[\left.\left(-\tau_{0}\right)^{-2\nu_{2}}(-\tau)^{2\nu_{1}}(|\nu_{2}|^{2}-|\nu _{1}|^{2})\coth(\pi|\nu_{1}|)\nu_{1}\right)+c.c.\Bigg{]}+\]
\[\left[\left(\frac{p}{2}\right)^{2(\nu_{1}+\nu_{2})}\frac{\pi^{2}}{16\sinh^{2} (\pi|\nu_{1}|)\sinh^{2}(\pi|\nu_{2}|)\Gamma^{2}(1+\nu_{1})\Gamma^{2}(1+\nu_{2} )}\Bigg{(}-(-\tau_{0})^{2\nu_{1}}(-\tau)^{2\nu_{2}}(|\nu_{1}|^{2}-|\nu_{2}|^{ 2})\coth(\pi|\nu_{2}|)\nu_{2}-\right.\]
\[\left.\left(-\tau_{0}\right)^{2\nu_{2}}(-\tau)^{2\nu_{1}}(|\nu_{2}|^{2}-|\nu_{ 1}|^{2})\coth(\pi|\nu_{1}|)\nu_{1}\right)+c.c.\Bigg{]}+\]
\[\left[\left(\frac{p}{2}\right)^{2\nu_{1}}\frac{\pi}{8|\nu_{2}|\sinh^{2}(\pi| \nu_{1}|)\sinh^{2}(\pi|\nu_{2}|)\Gamma^{2}(1+\nu_{1})}\Bigg{(}2(\tau_{0})^{2 \nu_{1}}\frac{(|\nu_{1}|^{2}-|\nu_{2}|^{2})}{|\nu_{2}|}\left(\frac{1}{2}-\frac {m_{2}^{2}}{4H_{0}^{2}}\right)\sinh^{2}(\pi|\nu_{2}|)+\right.\]
\[\left.\left(-\tau_{0}\right)^{2(\nu_{1}-\nu_{2})}(-\tau)^{2\nu_{2}}\frac{(\nu_{ 1}+\nu_{2})^{2}}{2|\nu_{2}|}\left((1-2\cosh(2\pi|\nu_{2}|))\left(\frac{5}{8}- \frac{m_{2}^{2}}{4H_{0}^{2}}\right)-(1+2\cosh(2\pi|\nu_{2}|))\,\frac{\nu_{2}}{ 4}\right)+\]
\[\left.\left(-\tau_{0}\right)^{2(\nu_{1}+\nu_{2})}(-\tau)^{-2\nu_{2}}\frac{(\nu_ {1}-\nu_{2})^{2}}{2|\nu_{2}|}\left((1-2\cosh(2\pi|\nu_{2}|))\left(\frac{5}{8}- \frac{m_{2}^{2}}{4H_{0}^{2}}\right)+(1+2\cosh(2\pi|\nu_{2}|))\,\frac{\nu_{2}}{ 4}\right)-\]
\[\left.\left(-\tau\right)^{2\nu_{2}}\frac{4}{|\nu_{1}|}\left(|\nu_{1}||\nu_{2}| \sinh^{2}(\pi|\nu_{2}|)\left(\frac{5}{8}-\frac{m_{1}^{2}}{4H_{0}^{2}}-\frac{ \nu_{1}}{4}\right)+\frac{\nu_{1}(|\nu_{1}|^{2}+|\nu_{2}|^{2})}{4}\coth(\pi| \nu_{1}|)\cosh(\pi|\nu_{2}|)\sinh(\pi|\nu_{2}|)\right)\right)+c.c.\Bigg{]}+\]
\[\left[\left(\frac{p}{2}\right)^{2\nu_{2}}\frac{\pi}{8|\nu_{1}|\sinh^{2}(\pi| \nu_{1}|)\sinh^{2}(\pi|\nu_{2}|)\Gamma^{2}(1+\nu_{2})}\Bigg{(}2(\tau_{0})^{2 \nu_{2}}\frac{(|\nu_{2}|^{2}-|\nu_{1}|^{2})}{|\nu_{1}|}\left(\frac{1}{2}- \frac{m_{1}^{2}}{4H_{0}^{2}}\right)\sinh^{2}(\pi|\nu_{1}|)+\right.\]
\[\left.\left(-\tau_{0}\right)^{2(\nu_{2}-\nu_{1})}(-\tau)^{2\nu_{1}}\frac{(\nu_{ 1}+\nu_{2})^{2}}{2|\nu_{1}|}\left((1-2\cosh(2\pi|\nu_{1}|))\left(\frac{5}{8}- \frac{m_{1}^{2}}{4H_{0}^{2}}\right)-(1+2\cosh(2\pi|\nu_{1}|))\,\frac{\nu_{1}}{ 4}\right)+\right.\]
\[\left.\left(-\tau_{0}\right)^{2(\nu_{1}+\nu_{2})}(-\tau)^{-2\nu_{1}}\frac{(\nu_{ 1}-\nu_{2})^{2}}{2|\nu_{1}|}\left((1-2\cosh(2\pi|\nu_{1}|))\left(\frac{5}{8}- \frac{m_{1}^{2}}{4H_{0}^{2}}\right)+(1+2\cosh(2\pi|\nu_{1}|))\,\frac{\nu_{1}}{ 4}\right)-\right.\]
\[\left.\left(-\tau\right)^{2\nu_{1}}\frac{4}{|\nu_{2}|}\left(|\nu_{1}||\nu_{2}|\sinh^{2}(\pi|\nu_{1}|)\left(\frac{5}{8}-\frac{m_{2}^{2}}{4H_{0}^{2}}-\frac{\nu_{2}}{4}\right)+\frac{\nu_{2}(|\nu_{1}|^{2}+|\nu_{2}|^{2})}{4}\coth(\pi|\nu_{2}|)\cosh(\pi|\nu_{1}|)\sinh(\pi|\nu_{1}|)\right)\right)+c.c.\Bigg{]}\Bigg{\}}\. \tag{B4}\]
\[\mathbb{T}_{kk}^{(KIN)}=\frac{\sin^{2}\theta}{(2\pi)^{3}}\int d^{3}p\Bigg\{\sum_{j}\frac{m_{j}^{2}\tau\coth(\pi|\nu_{j}|)}{4|\nu_{j}|}+\Bigg[\left(\frac{\tau}{\tau_{0}}\right)^{2\nu_{2}}\frac{H_{0}^{2}\tau\coth(\pi|\nu_{1}|)(|\nu_{2}|^{2}-|\nu_{1}|^{2})}{8|\nu_{1}||\nu_{2}|^{2}\sinh^{2}(\pi|\nu_{2}|)}\left(\frac{9}{8}-\frac{m_{2}^{2}}{4H_{0}^{2}}+\frac{3\nu_{2}}{4}\right)\times\] \[(1-\cosh(2\pi|\nu_{2}|))+c.c.\Bigg]+\Bigg[\left(\frac{\tau}{\tau_{0}}\right)^{2\nu_{1}}\frac{H_{0}^{2}\tau\coth(\pi|\nu_{2}|)(|\nu_{1}|^{2}-|\nu_{2}|^{2})}{8|\nu_{2}||\nu_{1}|^{2}\sinh^{2}(\pi|\nu_{1}|)}\left(\frac{9}{8}-\frac{m_{1}^{2}}{4H_{0}^{2}}+\frac{3\nu_{1}}{4}\right)(1-\cosh(2\pi|\nu_{1}|))+c.c.\Bigg]+\] \[\Bigg[\left(\frac{p}{2}\right)^{2\nu_{1}}\frac{\pi}{4|\nu_{2}|\sinh^{2}(\pi|\nu_{1}|)\Gamma^{2}(1+\nu_{1})}\Bigg((-\tau_{0})^{2\nu_{1}}\frac{m_{2}^{2}\tau(|\nu_{2}|^{2}-|\nu_{1}|^{2})}{4|\nu_{2}|}+(-\tau_{0})^{2(\nu_{1}-\nu_{2})}(-\tau)^{2\nu_{2}}\frac{(\nu_{1}+\nu_{2})^{2}H_{0}^{2}\tau}{4|\nu_{2}|\sinh^{2}(\pi|\nu_{2}|)}\times\] \[\left(\frac{9}{8}-\frac{m_{2}^{2}}{4H_{0}^{2}}+\frac{3\nu_{2}}{4}\right)(1-\cosh(2\pi|\nu_{2}|))+(-\tau_{0})^{2(\nu_{1}+\nu_{2})}(-\tau)^{-2\nu_{2}}\frac{(\nu_{2}-\nu_{1})^{2}H_{0}^{2}\tau}{4|\nu_{2}|\sinh^{2}(\pi|\nu_{2}|)}\left(\frac{9}{8}-\frac{m_{2}^{2}}{4H_{0}^{2}}-\frac{3\nu_{2}}{4}\right)\times\] \[(1-\cosh(2\pi|\nu_{2}|))-2(-\tau)^{2\nu_{1}}H_{0}^{2}\tau|\nu_{2}|\left(\frac{9}{8}-\frac{m_{1}^{2}}{4H_{0}^{2}}+\frac{3\nu_{1}}{4}\right)\Bigg)+c.c\Bigg]+\] \[\Bigg[\left(\frac{p}{2}\right)^{2\nu_{2}}\frac{\pi}{4|\nu_{1}|\sinh^{2}(\pi|\nu_{2}|)\Gamma^{2}(1+\nu_{2})}\Bigg((-\tau_{0})^{2\nu_{2}}\frac{m_{1}^{2}\tau(|\nu_{1}|^{2}-|\nu_{2}|^{2})}{4|\nu_{1}|}+(-\tau_{0})^{2(\nu_{2}-\nu_{1})}(-\tau)^{2\nu_{1}}\frac{(\nu_{1}+\nu_{2})^{2}H_{0}^{2}\tau}{4|\nu_{1}|\sinh^{2}(\pi|\nu_{1}|)}\times\] \[\left(\frac{9}{8}-\frac{m_{1}^{2}}{4H_{0}^{2}}+\frac{3\nu_{1}}{4}\right)(1-\cosh(2\pi|\nu_{1}|))+(-\tau_{0})^{2(\nu_{1}+\nu_{2})}(-\tau)^{-2\nu_{1}}\frac{(\nu_{2}-\nu_{1})^{2}H_{0}^{2}\tau}{4|\nu_{1}|\sinh^{2}(\pi|\nu_{1}|)}\left(\frac{9}{8}-\frac{m_{1}^{2}}{4H_{0}^{2}}-\frac{3\nu_{1}}{4}\right)\times\] \[(1-\cosh(2\pi|\nu_{1}|))-2(-\tau)^{2\nu_{2}}H_{0}^{2}\tau|\nu_{1}|\left(\frac{9}{8}-\frac{m_{2}^{2}}{4H_{0}^{2}}+\frac{3\nu_{2}}{4}\right)\Bigg)+c.c\Bigg]\Bigg\}. \tag{B5}\]
|
2309.05638 | Errors are Robustly Tamed in Cumulative Knowledge Processes | We study processes of societal knowledge accumulation, where the validity of
a new unit of knowledge depends both on the correctness of its derivation and
on the validity of the units it depends on. A fundamental question in this
setting is: If a constant fraction of the new derivations is wrong, can
investing a constant fraction, bounded away from one, of effort ensure that a
constant fraction of knowledge in society is valid? Ben-Eliezer, Mikulincer,
Mossel, and Sudan (ITCS 2023) introduced a concrete probabilistic model to
analyze such questions and showed an affirmative answer to this question. Their
study, however, focuses on the simple case where each new unit depends on just
one existing unit, and units attach according to a $\textit{preferential
attachment rule}$.
In this work, we consider much more general families of cumulative knowledge
processes, where new units may attach according to varied attachment mechanisms
and depend on multiple existing units. We also allow a (random) fraction of
insertions of adversarial nodes.
We give a robust affirmative answer to the above question by showing that for
$\textit{all}$ of these models, as long as many of the units follow simple
heuristics for checking a bounded number of units they depend on, all errors
will be eventually eliminated. Our results indicate that preserving the quality
of large interdependent collections of units of knowledge is feasible, as long
as careful but not too costly checks are performed when new units are
derived/deposited. | Anna Brandenberger, Cassandra Marcussen, Elchanan Mossel, Madhu Sudan | 2023-09-11T17:29:28Z | http://arxiv.org/abs/2309.05638v3 | # Combinative Cumulative Knowledge Processes
###### Abstract
We analyze Cumulative Knowledge Processes, introduced by Ben-Eliezer, Mikulincer, Mossel, and Sudan (ITCS 2023), in the setting of "directed acyclic graphs", i.e., when new units of knowledge may be derived by combining multiple previous units of knowledge. The main considerations in this model are the role of errors (when new units may be erroneous) and local checking (where a few antecedent units of knowledge are checked when a new unit of knowledge is discovered). The aforementioned work defined this model but only analyzed an idealized and simplified "tree-like" setting, i.e., a setting where new units of knowledge only depended directly on one previously generated unit of knowledge.
The main goal of our work is to understand when the general process is safe, i.e., when the effect of errors remains under control. We provide some necessary and some sufficient conditions for safety. As in the earlier work, we demonstrate that the frequency of checking as well as the depth of the checks play a crucial role in determining safety. A key new parameter in the current work is the _combination factor_ which is the distribution of the number of units \(M\) of old knowledge that a new unit of knowledge depends on. Our results indicate that a large combination factor can compensate for a small depth of checking. The dependency of the safety on the combination factor is far from trivial. Indeed some of our main results are stated in terms of \(\mathbf{E}\{1/M\}\) while others depend on \(\mathbf{E}\{M\}\).
## 1 Introduction
Understanding and reducing the effect of errors in systems is a central goal of theoretical computer science. Recent work by [1] initiated the study of the robustness of _societal knowledge accumulation_ to errors. This model studies errors and correction in knowledge accumulation process -- i.e., processes in which correctness of a newly gained piece of knowledge depends on the correctness of previous units of knowledge. In this work, we investigate the setting of this model in which new units may depend on _multiple_ previous units of knowledge and analyze the effect of errors in this new setting.
The model in [1] is motivated by several key examples in which each new unit of knowledge relies on previous units and may introduce error into the system. First is the example of knowledge accumulation in science as it is accumulated in scientific publications. Other examples include software development (where knowledge accumulates in software packages) and knowledge accumulation in web-content (in which knowledge accumulates in web-pages).
A major concern regarding these processes is that, over time, a significant fraction of the units of knowledge are incorrect; see [14, 15, 16, 17] for the example of scientific publication networks. Of particular concern is that incorrect knowledge may result in even more incorrect knowledge in the future or a slowdown in the scientific progress in an area, see e.g. [13] and the discussion in [1], in particular the recent example discussed there of [18, 19].
**The model.** To study knowledge accumulation, [1] proposed the following abstraction: each new unit of knowledge relies on pre-existing units of knowledge. Each new unit may also be erroneous, either because it introduces error or relies on an erroneous unit of knowledge. Without methods of checking for errors, the erroneous units of knowledge can overwhelm a knowledge accumulation process. The error-checking mechanisms studied for knowledge accumulation processes should not be too time- or resource-intensive (as to not hinder the pace of discovery of knowledge), but should be effective enough to ensure that the system reaches a stable state of limited errors over time.
[1] represents the body of knowledge as a _directed acyclic graph_ (DAG), in which units of knowledge are represented as nodes, and an edge from \(u\) to \(v\) indicates that \(v\) "builds upon" or "inherits from" \(u\). While the authors introduced the general DAG model, they only prove results for the simpler "tree-like" case, where each node has a single parent. The parent is chosen according to the preferential attachment model, as is common to do [1, 2]. In their model, they assume that each new node performs a check to depth \(k\) with probability \(p\), and they show that if \(p\) and \(k\) are large then errors are eliminated, while if \(p\) is small they are not.
**The challenge of multiple parents.** Of course many (if not all) of the motivating examples are not tree processes, since new units of knowledge depend on many prior units of knowledge. Our goal in this paper is to flesh out the model (especially in regards to how checks are performed) and analyze these knowledge accumulation processes with multiple parents. Of particular interest is understanding how error elimination and survival depend on the distribution of the number of parents chosen.
[1] suggests various possibilities for how new nodes should connect to existing nodes in a general DAG model, including connecting according to the _relevance_ or the _importance and impact_ of the pre-existing nodes. We follow the latter idea, and one of our contributions is formalizing a natural and analytically tractable extension of the CKP model to DAGs. New nodes connect to existing nodes according to the preferential attachment model, where parents are chosen uniformly at random. This is a standard and well-motivated choice, given the difficulty of modeling attachment based on the relevance of the previous nodes to the new node (which is correlated with latent features such as topics, language, or goals) and the fact that nodes are likely to copy parents from their parents. Along with deciding how nodes should connect to the existing DAG, there are also many possible extensions of a checking mechanism up to depth \(k\). We propose several local checking mechanisms that are both analytically tractable and feasible for different practical contexts.
Our results also show that when checking probability \(p\) and depth \(k\) are large, errors are eliminated, and when \(p\) is small they are not, but there are some important qualitative differences between our results and those of [1].
Indeed, in the tree-like case, they show that for \(p\) small enough, errors can remain undetected forever for
any \(k\geq 1\). They also show that when \(k\) is small (\(k=1\) or \(2\)), this can hold no matter how large the checking probability \(p\) is. Conversely, they also show that errors are eliminated when \(k\geq 4\) provided \(p\) is large enough.
In this paper, we explore this analogous question in the general DAG setting. We show that if the checking probability \(p\) is small enough, then again error effects survive no matter how large \(k\) is, thus vastly extending the first result mentioned above. Interestingly, our results indicate that in the other extreme of small checking depth \(k\), errors can be eliminated even when \(k=2\), provided that the number of parents is large. This result is stated in terms of \(\mathbf{E}\{1/M\}\), where \(M\) is the random variable determining the number of parents new nodes connect to. This suggests that models where each unit depends on more units of knowledge require shallower checks than models where each unit only depends on a small number of parents.
A challenging open problem even in the tree case is to prove monotonicity in the model with respect to the natural parameters (\(p,k,\) and \(M\)). Another contribution of the current paper is proving monotonicity in a special case of trees.
### Formal Definition of The Model
We now give a formal definition of the CKP model that we just described.
**Definition 1.1** (Cumulative Knowledge Process).: The DAG Cumulative Knowledge Process (CKP) consists of a sequence \(X_{0},X_{1},X_{2},\dots\) where at each time \(t\), \(X_{t}=(\mathcal{G}_{t},L_{t})\) is a DAG \(\mathcal{G}_{t}\) of size \(t+1\), where each node is labelled from the set \(\mathcal{L}=\{\mathsf{PF},\mathsf{CF},\mathsf{CT}\}\), i.e., \(L_{t}\in\mathcal{L}^{t+1}\).
**Notation.** The \(\mathsf{PF},\mathsf{CF}\) and \(\mathsf{CT}\) hidden labels respectively denote "proclaimed false", "conditionally false" and "conditionally true". Publicly, the labels \(\mathsf{CF}\) and \(\mathsf{CT}\) are collapsed into a general \(\mathsf{PT}\) "proclaimed true" state; the node's true label is revealed if it is _checked_. A node is \(\mathsf{True}\) if it and all of its ancestors are \(\mathsf{CT}\), and \(\mathsf{False}\) otherwise. For any node \(v\) and label \(L\in\mathcal{L}\cup\{\mathsf{PT}\}\), denote by \(\deg_{L}(v)\) the number of children of \(v\) labelled \(L\); the most important of these is \(\deg_{\mathsf{PT}}(v)\). We additionally define
* the \(\mathsf{PT}\), \(\mathsf{PF}\), \(\mathsf{CF}\) and \(\mathsf{CT}\) sub-DAGs, denoted \(\mathcal{G}_{t}^{L}\) for each \(L\in\mathcal{L}=\{\mathsf{PF},\mathsf{CF},\mathsf{CT}\}\)
* the sub-DAG of **roots**\(\mathcal{G}_{t}^{R}\): \(\mathsf{CT}\) nodes with one or more \(\mathsf{PF}\) parents.
* the **minimal false** nodes \(\mathcal{F}_{t}=\mathcal{G}_{t}^{\mathsf{CF}}\cup\mathcal{G}_{t}^{R}\), the set of \(\mathsf{CF}\) nodes and roots.
The size \(|\cdot|\) of any of these sub-DAGs is the number of nodes that it contains.
**Parameters.** A CKP depends on four parameters: A bounded random variable \(1\leq M<\infty\), the combination factor, which determines the number of parents each new node will connect to, \(\varepsilon\in[0,1]\) the error probability, \(p\in[0,1]\) the checking probability, and \(k\in\mathbf{N}\) the checking height. In principle, \(k\) could also be a random variable, but we restrict it to be a fixed constant in our analysis. Note that setting \(M\) to be the constant \(1\) reduces our model to the tree-like CKP analyzed in [1].
**Evolution.** For an \((M,p,k,\varepsilon)\)-CKP at state \(X_{t}=(\mathcal{G}_{t},L_{t})\), the next state \(X_{t+1}\) is determined as follows; a code sketch of one step appears after the list.
1. **Choosing the number of parents.** Draw \(m\sim M\) according to the combination factor.
2. **Selecting parents.** If the DAG is entirely \(\mathsf{PF}\), i.e., \(L_{t}=(\mathsf{PF},\dots,\mathsf{PF})\), then the process stops and \(X_{t+1}=X_{t}\). Otherwise, \(m\) random \(\mathsf{PT}\) nodes \(u_{i}\in\mathcal{G}_{t}^{\mathsf{PT}}\) are chosen with probability proportional to \(1+\deg_{\mathsf{PT}}(u_{i})\), analogously to the preferential attachment model.
3. **Creating a new node.** A new node \(v\) is created, with parents \(u_{1},\dots,u_{m}\).
4. **Error introduction.** Label \(v\) as \(\mathsf{CF}\) with probability \(\varepsilon\) and \(\mathsf{CT}\) with probability \(1-\varepsilon\).
5. **Error checking.** With probability depending on \(p\), a given checking mechanism is applied to \(v\). We list several options at the end of this subsection.
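To make the dynamics concrete, here is a minimal Python sketch of one evolution step. All names (`evolve_step`, `sample_m`, `run_check`) and the dictionary-based DAG representation are our own illustrative choices, not part of the model's definition; parents are sampled with replacement, consistent with the "potentially with replacement" convention used in Section 2.

```python
import random

# A minimal sketch of steps (i)-(v), assuming: graph maps each node to its
# list of parents, labels maps each node to "PF"/"CF"/"CT", and deg_pt[v]
# counts the PT children of v.
def evolve_step(graph, labels, deg_pt, sample_m, eps, run_check):
    pt_nodes = [v for v in graph if labels[v] in ("CF", "CT")]
    if not pt_nodes:                     # DAG entirely PF: the process stops
        return None
    m = sample_m()                       # step (i): draw m ~ M
    weights = [1 + deg_pt[v] for v in pt_nodes]
    parents = random.choices(pt_nodes, weights=weights, k=m)   # step (ii)
    v = len(graph)                       # step (iii): fresh node id
    graph[v] = parents
    labels[v] = "CF" if random.random() < eps else "CT"        # step (iv)
    deg_pt[v] = 0
    for u in parents:                    # v is PT, so each chosen parent's
        deg_pt[u] += 1                   # PT-degree grows
    run_check(graph, labels, deg_pt, v)  # step (v): a checking mechanism
    return v
```

Starting from `graph = {0: []}`, `labels = {0: "CT"}`, `deg_pt = {0: 0}` and iterating this step gives the general CKP; starting from a single \(\mathsf{CF}\) node with `eps = 0` gives the simple CKP.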
**General CKP initial state.** In the general CKP with \(\varepsilon>0\), the starting state \(X_{0}\) consists of a \(\mathsf{CT}\) node (though our analysis could allow for multiple \(\mathsf{CT}\) nodes in \(X_{0}\)). Thus, \(\mathsf{False}\) nodes are those with a \(\mathsf{CF}\) ancestor added at some error introduction phase. See Figure 1 for an example of an evolution step.
Several results in this paper involve the following simplification of the model, in which there is no error introduction (\(\varepsilon=0\)), and we begin with a \(\mathsf{CF}\) node to avoid trivialities. In this simplification, the entire DAG is \(\mathsf{False}\).
**Simple CKP.** An \((M,p,k)\)-simple CKP sets \(\varepsilon=0\), and \(X_{0}\) consists of one \(\mathsf{CF}\) node. See Figure 2.
**Checking mechanisms.** All of the checking mechanisms that we consider for step (v) of the evolution are _local_ in the sense that a new node conducts a check on itself and on ancestor nodes within a certain distance \(k\). We analyze the following checking mechanisms, listed roughly in order of the number of nodes that can be deleted in one checking step; a code sketch of the BFS variant follows the list. For nodes \(u\) and \(v\), let \(\operatorname{dist}(u,v)\) denote the number of edges on the shortest path between \(u\) and \(v\).
1. Stringy: With probability \(p\), check one random path of height \(k\) up from \(v\), stopping at the first CF/PF node reached. If one is found, flag it and its descendants along the checked path as PF.
2. BFS: With probability \(p\), perform a breadth-first search (BFS) starting from node \(v\) up to depth \(k\) (i.e., all nodes \(u\) checked have \(\operatorname{dist}(u,v)\leq k\)), stopping at the first minimal false node reached. Flag as PF this node and its descendants that were seen along the BFS from \(v\).
3. Exhaustive BFS: For each parent \(u_{i}\) of \(v\), one at a time, with probability \(p\), check \(v\) and then perform a BFS from \(u_{i}\) up to depth \(k-1\) (i.e., all checked nodes \(u\) have \(\operatorname{dist}(u,u_{i})\leq k-1\)). Stop the checking procedure at the first minimal false node reached. Flag as PF this node and its descendants seen along the BFS from \(u_{i}\).
4. Parent-wise BFS: Do the same thing as in the Exhaustive BFS, but instead of stopping the check completely as soon as a minimal false node is found, restart the check for the next parent \(u_{i+1}\). A maximum of \(m\) minimal false nodes can now be found (one per parent \(u_{i}\)), rather than one.
Figure 1: One evolution step of an \((M=3,p,k=3,\varepsilon)\)-CKP with checking mechanism Stringy. Node labels PF, CF and CT are respectively represented by crossed, empty and filled circles. (a) the initial CKP state \(X_{t}\); (b) the result of steps (i)-(iv) in which a new CT node is added; (c) step (v), a random path of length \(k=3\) is checked and stops at a CF node; (d) all visited descendants of the CF node are marked PF; this is \(X_{t+1}\).
Figure 2: One evolution step of an \((M=3,p,k=4)\)-simple CKP with Exhaustive BFS checking mechanism. Steps (a) and (b) are as in Figure 1; (c) the check for the first parent is triggered (an event of probability \(p\)) and a BFS is performed, which finds the original CF node and stops; (d) all visited descendants of the CF node are marked PF. Note in this case, the Parent-wise BFS and Complete checks would have marked the same set of nodes as PF.
5. Complete: For each parent \(u_{i}\) of \(v\), with probability \(p\), check \(v\) and all nodes that are within distance \(k-1\) of \(u_{i}\). This is equivalent to doing the Exhaustive BFS but continuing after finding any minimal false node. Now, if each parent performs a check, all CF and root nodes within distance \(k\) can be found.
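As an illustration, the following Python sketch implements the BFS check (item 2); the function name and DAG representation are ours. For brevity, the sketch flags only the found node and the BFS-tree path back to \(v\), whereas the mechanism as defined flags all descendants of the found node seen during the BFS.

```python
from collections import deque

# Sketch of the BFS check from a new node v up to depth k, stopping at the
# first minimal false node (a CF node, or a CT node with a PF parent).
# graph[u] lists the parents of u; the caller invokes this with probability p.
def bfs_check(graph, labels, v, k):
    seen = {v: 0}
    back = {}                      # back[parent] = child it was reached from
    queue = deque([v])
    while queue:
        u = queue.popleft()
        is_root = labels[u] == "CT" and any(labels[w] == "PF" for w in graph[u])
        if labels[u] == "CF" or is_root:
            w = u                  # flag the found node and the path of its
            while w is not None:   # descendants traversed on the way from v
                labels[w] = "PF"
                w = back.get(w)
            return True
        if seen[u] < k:            # only expand while within distance k of v
            for w in graph[u]:
                if w not in seen and labels[w] != "PF":
                    seen[w] = seen[u] + 1
                    back[w] = u
                    queue.append(w)
    return False
```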
In this paper, we study _error elimination_ and _error survival_ for CKP processes where new nodes connect to multiple parent nodes in the DAG. Formally, these terms are defined as follows.
**Definition 1.2** (Error elimination).: The error effect in an \((M,p,k,\varepsilon)\)-CKP is completely eliminated if for all CF nodes \(u\), there exists a time \(t^{\prime}\) such that for all \(t\geq t^{\prime}\), the sub-DAG \(\mathcal{G}_{t}[u]\) rooted at \(u\) is completely marked as PF.
**Definition 1.3** (Error survival).: The error effect in an \((M,p,k,\varepsilon)\)-CKP survives if with some positive probability, there exists a CF node \(u\) such that for all time steps \(t\), there exists at least one PT node in the sub-DAG \(\mathcal{G}_{t}[u]\) rooted at \(u\).
### Main results
We now present our main results regarding error elimination and survival in the CKP process. We present simplified versions of our results for the simple CKP model. For the statements of the results in full generality, see Sections 2 and 3.
First, we present an error elimination result for the simple CKP. As the combination factor increases (more formally, as \(\mathbf{E}\{1/M\}\) decreases), we find that error elimination holds for a wider range of values of the checking probability \(p\).
Our error elimination results in Section 2 yield the following Theorem when specific values of the checking depth \(k\) are set. This Theorem exhibits a contrast between the tree-CKP case and the case where nodes connect to a larger number of parents. It was proven in [1] that when each node connects to one parent node and the checks are shallow (meaning \(k=2\)), for any \(0\leq p<1\) the error effect in the \((M=1,p,k=2)\)-simple CKP survives with positive probability. On the other hand, we prove that if \(\mathbf{E}\{1/M\}\leq 1/3\), for sufficiently large values of \(p\), shallow checks eliminate the error effect completely. We also obtain an error elimination result for the case of \(k\geq 3\). The authors of [1] raised the question of whether the error effect is eliminated or survives in the case of \(k=3\). We answer this question when \(\mathbf{E}\{1/M\}\leq 1/2\).
**Theorem 1.1** (Error Elimination in the Simple CKP).: _The error effect in the \((M,p,k)\)-simple CKP with Exhaustive BFS, Parent-wise BFS and Complete checking mechanisms is eliminated completely when the parameters satisfy one of the following:_
* \(k\geq 2\)_,_ \(p\geq 7/8\)_, and_ \(\mathbf{E}\{1/M\}\leq 1/4\)_, or_
Figure 3: Visualizations of various checking mechanisms. (i) Stringy with \(k=3\). (ii) BFS with \(k=1\). (iii) Exhaustive BFS with \(k=3\), which we note stopped as soon as it found the CF node. (iv) Parent-wise BFS with \(k=2\), which found one more node than the Exhaustive check. (v) Complete with \(k=2\).
* \(k\geq 2\)_,_ \(p=1\) _and_ \(\mathbf{E}\{1/M\}\leq 1/3\)_, or_
* \(k\geq 3\)_,_ \(p\geq 0.7895\)_, and_ \(\mathbf{E}\{1/M\}\leq 1/2\)_._
By decreasing \(\mathbf{E}\{1/M\}\), one can obtain lower values of \(p\) (e.g. \(p\geq 2/3\) for \(\mathbf{E}\{1/M\}\leq 1/9\) and \(k=2\)) that yield error elimination; however, we state the bound on \(p\) that works for a broader range of combination factor variables \(M\) (less restrictive \(\mathbf{E}\{1/M\}\) requirement) for simplicity. See Theorem 2.1 for the full parameter dependence statement. Furthermore, Figure 4 shows the regions of the \((p,k)\) parameter space in which we have error elimination, for \(M\) fixed to be a constant \(m=1\) (tree-CKP setting), \(m=2\) or \(m=5\).
**Sharpness in \(k\).** While stated in terms of a BFS checking mechanism that stops at any minimal false node (CF, PF, or roots), these exact elimination results (Theorems 1.1, 2.1) also hold for a BFS checking mechanism which stops only at the first CF or PF node seen. For such checking mechanisms, if \(k=1\), for any parameters \(M\), \(p\), and \(\varepsilon\), the error effect in the \((M,p,k=1,\varepsilon)\)-CKP survives with positive probability. Therefore, our error elimination results are sharp in \(k\) for these checking mechanisms that stop only at CF or PF nodes.
Indeed, the error surviving with positive probability if \(k=1\) can be seen through the following simple argument: with positive probability, the CKP contains a CF node with a CT child. This CT node is False. However, since any future new node added only discovers the label of its parent, this CT node will never be labelled PF and will keep accumulating False children.
We also prove error survival results for combinative Cumulative Knowledge Processes. The existence of error survival results demonstrates that error checking is nontrivial for combinative CKPs (i.e., there are some settings of parameters for which error is not eliminated). It also provides a fuller understanding of the parameter thresholds for which errors persist or are eliminated. For example, in the combinative simple-CKP model, if each new node forms 2 edges to parent nodes, the error effect is eliminated for \(p\geq 0.7895\) and \(k\geq 3\) and survives for \(p\leq 0.25\) for all \(k\) (see Figure 4, \(m=2\)).
Our error survival bounds depend on the _minimum_ number of parent nodes that a new node may connect to. This is formally reflected in the following Theorem, which is presented for the simple CKP setting. See Section 3 for the statement of the error survival result for general CKPs. Let \(\min(M)\geq 1\) denote the minimum value of the random variable \(M\).
**Theorem 1.2** (Error Survival in the Simple CKP).: _If \(M\geq 1\) and \(p\in[0,1)\) satisfy_
\[\frac{\mathbf{E}\{M\}}{\min(M)+1}<1\ \ \text{and}\ \ p\leq\frac{1}{2}\left(1-\frac{\mathbf{E}\{M\}}{\min(M)+1}\right), \tag{1}\]
_then for any \(k\geq 1\), the error effect survives with positive probability in the \((M,p,k)\)-simple CKP with the Stringy and BFS checking mechanisms. For the Exhaustive BFS and Parent-wise BFS checking mechanisms, this holds if the parameters satisfy (1) with \(p\) replaced by \(p\mathbf{E}\{M\}\)._
As in the tree-like CKP model, the analysis of error survival depends on the continued existence of leaf nodes, illustrating that nodes with uncaught errors and low preferential attachment weight are contributors to the propagation of error effects.
See again Figure 4 for some visualizations of regions of the \((p,k)\)-parameter space in which this Theorem holds, for \(M\) set to specific constant values.
We also prove results about the general CKP model. These have a more complex behavior due to the introduction of the error probability \(\varepsilon\in[0,1]\). For quantitative bounds on \(p\) and \(k\) which depend on \(\varepsilon\) and (in the case of survival) \(M\), we refer to the full Theorem statements in Sections 2 and 3.
**Theorem 1.3** (Error Elimination and Survival in the General CKP).: _Consider the Exhaustive BFS and Parent-wise BFS checking mechanisms._
* _For every_ \(\varepsilon\in(0,1)\) _and bounded_ \(M\geq 1\)_, there exists_ \(p_{E}\in(0,1)\) _and_ \(k_{E}\in\mathbf{N}\) _such that for any_ \(p\geq p_{E}\) _and_ \(k\geq k_{E}\)_, the error effect is eliminated in the_ \((M,p,k,\varepsilon)\)_-general CKP._
* _For every_ \(\varepsilon\in(0,1)\) _and_ \(M\geq 1\) _bounded satisfying_ \(\mathbf{E}\{M\}/(\min M+1)<1\)_, there exists_ \(p_{S}\in(0,1)\) _such that for any_ \(k\in\mathbf{N}\) _and any_ \(p\leq p_{S}\)_, the error effect survives with positive probability for the_ \((M,p,k,\varepsilon)\)_-CKP._
Note that these results, while natural, require some work to show, due to the locality of the process and our inability to rely on the structure of the DAG over all probabilistic events. Indeed, even simple things like monotonicity in the checking parameters \(p\) and \(k\) (it should be easier to eliminate errors with higher checking probability or higher checking depth) are not obvious to show. In this paper, we prove that error elimination is _monotonic_ with respect to the checking parameters (checking probability and checking depth) in a case where each node connects to one parent node. This answers an open question presented in [1]. We conjecture that monotonicity holds for the case when nodes can connect to more than one parent node, and it remains a compelling open question to prove this. Understanding the monotonicity of the checking parameters is an important component of the study of local checking mechanisms in processes modeling societal knowledge. This is because the existence of monotonicity certifies that the model and checking procedure is natural and aligns with our intuition that more checking (higher probability or greater depth) can only improve our chances of eliminating all of the error effect. We prove the following result related to monotonicity.
**Theorem 1.4** (Monotonicity of error elimination for checking parameters on trees).: _For all checking probabilities \(p_{2}\geq p_{1}\) and \(k_{2}\geq k_{1}\), if the error effect in the \((M=1,p_{1},k_{1})\)-simple CKP is eliminated completely, then the error effect in the \((M=1,p_{2},k_{2})\)-simple CKP is eliminated completely._
### Related work
Our work analyzes the DAG version of the model introduced in [1]. As mentioned earlier, prior results only deal with the tree case. We now highlight a range of other related works.
**Models of noisy computation.** The model studied in [1] is a model of noisy computation, in the sense that it models the errors in and dependencies between units of knowledge (which may be viewed as computational units). There is a vast literature on noisy computation, see e.g. [22, 23, 24, 25]. As mentioned in [1], the main difference in the setting is that in noisy computation models one is allowed to design the structure of the network, while the design choices here are rather limited. In the example of a network of scientific publications, perhaps we can ask authors to perform more frequent or intensive checks of previous papers; however, we cannot change which papers they depend on, motivating the use of a probabilistic preferential attachment network in the CKP model.
There are other models of noisy propagation of information on a given network with error correction where the information consists of one bit, see e.g. [1, 12, 25]. But in these models there is no accumulation; it is just about being able to remember a bit in noisy broadcast with error correction.
See also [1] for discussion of the models above.
**Error-resilience of preferential attachment networks.** A range of papers study error resilience of preferential attachment networks. These papers, e.g. [1, 13, 14, 25, 26], focus on the retention or loss of aspects of the preferential attachment network's structure, e.g. its scale-free property, degree distribution, or large components, under various assumptions about deletions in the network. A deletion can be compared to marking a False node as PF in our paper. In contrast, our results do not rely on the retention of structural properties of preferential attachment networks (and we instead conjecture that much of this structure is lost in the CKP). We also consider a model of deletion that happens while the graph is being generated and which deletes nodes along a _path_, contrasting the aforementioned papers. Thus the results on error-resilience of preferential attachment networks shed no light on knowledge accumulation as defined in this work.
**Multitype Preferential Attachment Networks.** We may consider a different model, where nodes connect in a preferential attachment fashion and the probability that a unit of knowledge is correct depends on the type (e.g. correct/incorrect) of the nodes it connects to. For such models it is natural to study the fraction of correct nodes in the limit. Such models are analyzed in [1, 1].
### Open problems
We will highlight a set of compelling open problems related to the questions explored in this paper.
**Monotonicity with respect to the checking parameters.** We conjecture that error elimination is monotone with respect to the checking parameters \(p\) and \(k\) for the \((M,p,k,\varepsilon)\)-CKP. That is, for \(p_{2}\geq p_{1}\) and \(k_{2}\geq k_{1}\), if the \((M,p_{1},k_{1},\varepsilon)\)-CKP eliminates all errors then the \((M,p_{2},k_{2},\varepsilon)\)-CKP eliminates all errors. We have proven such a monotonicity result on trees in the simple case, i.e. the case where \(M=1\) and \(\varepsilon=0\).
When \(M\) is a fixed constant, we also conjecture that error elimination is monotone with respect to the number of parents. That is, for \(m_{2}>m_{1}\), if the \((M=m_{1},p,k,\varepsilon)\)-CKP eliminates all errors then the \((M=m_{2},p,k,\varepsilon)\)-CKP eliminates all errors.
See Figure 4 for a few simulations of CKPs with combination factor \(M\) set to various constants which suggest that monotonicity holds in both contexts mentioned. More figures and information can also be found in the appendix A.
**Models where units join similar parents.** In this paper, we considered a model of societal knowledge where each new unit of knowledge connects to \(m\) existing units according to preferential attachment, where \(m\) is distributed according to a random variable \(M\) such that \(\mathbf{E}\{1/M\}\) is fixed and bounded. This model makes sense in a variety of contexts, for example where the combinative Cumulative Knowledge Process is meant to model a cohesive body of knowledge (for example, theorems proven in a certain field of study). More generally, one may ask about errors in societal knowledge beyond those in a cohesive body of knowledge. It would be compelling to explore models where parents are connected to according to a measurement of _relevance_. For example, one may want to consider CKP models where new nodes connect to existing nodes according to a geometric preferential attachment model ([11, 12]) in which nodes connect to existing nodes according to a combination of proximity and preferential attachment.
**Structural properties of the CKP.** Based on simulations, we conjecture that for any \(M\geq 1\), \(k\geq 2\), and \(p\) sufficiently far away from \(0\), the \((M,p,k,\varepsilon)\)-CKP loses its preferential attachment structure. We also conjecture that if the error survives in \((M,p,k,\varepsilon)\)-CKP for some parameters, and \(p\) is sufficiently far away from \(0\), the CKP is shallow, meaning every node is close in distance to all of its ancestor nodes. (For \(p\) very close to \(0\), the CKP should retain its preferential attachment structure, due to the robustness of preferential attachment under small amounts of adversarial deletion [1, 11, 12, 13].)
**Parameters.** We conjecture that there is a phase transition between error survival and elimination with respect to \(p\) and \(k\) (which may vary with the combination factor \(M\)). Understanding the precise parameters that yield error survival and elimination requires further study.
Figure 4: Simulations of the \((M=m,p,k)\)-simple CKP with Exhaustive BFS checking mechanism, for fixed \(m=1,2,5\). The parameter regime in which we prove that errors survive with positive probability is shaded in purple, while the proven error extinction region is shaded in orange. The heat map displays the percentage of trials that survived until time step 2000. We run 20 simulations for each \((m,p,k)\) choice, with an initialization of a chain of 25 nodes (one CF followed by 24 CT nodes) with \(m\) edges between each node and its parent.
We created and ran CKP simulations for the special case where the combination factor \(M=m\) is constant, i.e., where each node connects to \(m\) parent nodes. Experiments on these simulations indicate where the conjectured phase transition takes place with respect to \(p\) and \(k\) for different values of \(m\). See Figure 4 for the simple CKP, and Appendix A for the general CKP and further plots of both cases.
### Proof sketches
We outline the key ideas used in the proofs of our main results.
**Proof of Theorem 1.1 (Error elimination).** The full proofs can be found in Section 2.
We prove error elimination for simple and general CKPs where the number of edges between new nodes and existing nodes is determined by a random variable \(M\). In this proof sketch, we focus on the simple CKP setting, for which we obtain error elimination results that depend on \(\mathbf{E}\{1/M\}\). To prove our error elimination results, we carefully design a potential over steps \(X_{t}=(\mathcal{G}_{t},L_{t})\) of the CKP which on one hand we can prove forms a super-martingale converging to \(0\), and on the other hand also upper bounds the number of false nodes in \(\mathcal{G}_{t}\) (in the simple case, this is \(|\mathcal{G}_{t}|\)). We can then conclude that there exists a time \(t^{\prime}\) such that for all \(t\geq t^{\prime}\), the error effect in \(X_{t}\) is completely eliminated.
The main challenge in creating a suitable potential is that this potential needs to have symmetry between the cases where the new node performs a check or doesn't perform a check. As opposed to the tree-like CKP from [1], the DAG CKP possesses asymmetry in the sense that if no check is performed, the new node connects to many parents; however, if a check is performed it may only remove one parent and a corresponding path. Without careful consideration, this asymmetry presents difficulties in extending the analysis from the tree CKP to the DAG CKP setting.
We define a potential that possesses this symmetry and captures a very natural quantity in the CKP: the distance between nodes and their closest minimal false ancestor node. Given a directed acyclic graph \(\mathcal{G}_{t}\) with labels \(L_{t}\) of the vertices of the graph, and \(c>1\), we define the _minimum-distance potential_\(\Phi(\mathcal{G}_{t})\) as follows:
\[\Phi(\mathcal{G}_{t})=\sum_{v\in\mathcal{G}_{t}^{\mathsf{PT},F}}d(v)\cdot c^{ |v|},\]
where \(\mathcal{G}_{t}^{\mathsf{PT},F}=\mathcal{G}_{t}^{\mathsf{PT}}\) is the sub-DAG of \(\mathsf{PT}\) False nodes in \(\mathcal{G}_{t}\), \(d(v):=\deg_{\mathsf{PT}}(v)+1\), and \(|v|\) is the number of edges on the shortest path from \(v\) to a minimal false node. Note that if \(v\) is itself a minimal false node, then \(|v|=0\).
We prove error elimination under the Exhaustive BFS checking mechanism, which has a natural correspondence with the minimum-distance potential (and extends to the Parent-wise BFS and Complete checks). For each parent, this mechanism searches for its closest minimal false node (up to distance \(k\) away from the new node). The minimum-distance potential measures the distance to this node. Let \(m\sim M\) be drawn according to the combination factor. In the analysis, we consider the following cases:
* If the new node is \(\mathsf{CT}\) and performs a successful check, then we can identify an entire sub-DAG of nodes whose minimum-distance values all reduce by at least one.
* If the new node is \(\mathsf{CF}\) and performs a successful check, only the new node gets removed.
* If the new node does not perform a check or the check is unsuccessful, the new node contributes to the potential. Suppose the new node connects to parent nodes \(u_{1},\ldots,u_{m}\). Then \(|v|=1+\min_{i\in[m]}|u_{i}|\). To obtain a dependence on \(M\) in our error elimination result, we use that the minimum is at most the average: \(1+\min_{i\in[m]}|u_{i}|\leq 1+\frac{1}{m}\sum_{i\in[m]}|u_{i}|\).
We show that the expected change in the potential is negative at each time step in the CKP when the potential is not zero. The potential forming a super-martingale then allows us to show error elimination.
**Proof of Theorem 1.2 (Error survival).** The full proofs for error survival can be found in Section 3.
We study error survival for both simple and general CKPs in the setting where we have a lower bound \(\min(M)\) on the combination factor. To prove error survival, we identify some key structural components of the CKP that propagate the error effect. These are:
* \(\mathsf{CF}\) nodes. These introduce errors to the CKP.
* Root nodes: \(\mathsf{CT}\) nodes with a \(\mathsf{PF}\) parent. These nodes (and the respective sub-DAGs rooted at these nodes) should be marked \(\mathsf{False}\) because they have a \(\mathsf{False}\) parent.
* Leaf nodes. These propagate error because there is a relatively low probability of connecting to a leaf node and checking if it has a \(\mathsf{False}\) ancestor node. We use that the expected number of non-root leaf nodes that a new node with \(m\sim M\) parents connects to is less than \(1\). On the other hand, every new node is a leaf node, and thus the expected number of leaves grows over time.
Using these three structural components, we define a potential on DAGs which incorporates the number of minimal false (\(\mathsf{CF}\) and root) nodes and leaf nodes. We show the expected change in the potential is positive at each time step in the CKP, which we then prove implies error survival.
**Proof of Theorem 1.4 (Monotonicity for simple tree-CKPs).** The full proofs for monotonicity can be found in Section 4.
We separate the proof of the monotonicity of error elimination into two parts: monotonicity with respect to \(p\), and monotonicity with respect to \(k\). While it may be possible to combine the analysis, separating the two is clearer to analyze and present. Since the two parts are similar, we now focus on the proof of monotonicity with respect to \(p\).
For the proof, we construct a coupling of the \((p_{1},k)\) simple CKP and \((p_{2},k)\) simple CKP process, where \(p_{2}\geq p_{1}\), such that the \((p_{2},k)\)-CKP (which runs checks with higher probability and thus has fewer nodes) is embedded inside the coupled \((p_{1},k)\)-CKP. At a high level, we ensure that all nodes in the \((p_{1},k)\)-CKP are also nodes in the \((p_{2},k)\)-CKP or have already been removed by the \((p_{2},k)\)-CKP. When a new node is added to the \((p_{1},k)\)-CKP, if it connects to a node that is still alive (not caught to be \(\mathsf{False}\)) in the \((p_{2},k)\)-CKP, we also add it to the \((p_{2},k)\)-CKP. If it connects to a node that was found to be \(\mathsf{False}\) in the \((p_{2},k)\)-CKP, it is not added to the \((p_{2},k)\)-CKP and the \((p_{2},k)\)-CKP does not update.
At each time step of the coupling, with probability \(p_{1}\), both CKPs perform checks. With probability \(p_{2}-p_{1}\), only the \((p_{2},k)\)-CKP performs a check. We keep track of the nodes that are alive in the \((p_{1},k)\)-CKP but dead (caught to be \(\mathsf{False}\)) in the \((p_{2},k)\)-CKP, which we call "zombie" nodes.
Due to the fact that the \((p_{2},k)\)-CKP may not update when the \((p_{1},k)\)-CKP does, the two CKPs may evolve at different rates, but we can still always pair time steps in one process with the other to ensure that error elimination in the \((p_{1},k)\)-CKP implies error elimination in the corresponding \((p_{2},k)\)-CKP. It is possible to generate every \((p_{2},k)\)-CKP through the mechanism considered in this coupling. Therefore, assuming the coupled \((p_{1},k)\)-CKP process eliminates all error, we can conclude that error elimination is monotonic with respect to the checking probability.
## 2 Proof of Error Elimination
In this section, we study error elimination for the DAG Cumulative Knowledge Process. Our results will imply Theorem 1.1 for specific parameter settings. We first describe the potential that is used to prove both the simple and general CKP extinction results.
**Minimum-distance potential.** To specify the potential, we first define the sub-DAG of \(\mathsf{PT}\) \(\mathsf{False}\) nodes \(\mathcal{G}_{t}^{\mathsf{PT},F}\): it consists of the \(\mathsf{CF}\) nodes and the \(\mathsf{CT}\) nodes with some \(\mathsf{CF}\) or \(\mathsf{PF}\) ancestor (i.e., also some root ancestor). Now consider the following potential \(\Phi\) on a CKP state \(X_{t}=(\mathcal{G}_{t},L_{t})\), which satisfies \(\Phi(\mathcal{G}_{t})\geq|\mathcal{G}_{t}^{\mathsf{PT},F}|\):
\[\Phi(\mathcal{G}_{t})=\sum_{v\in\mathcal{G}_{t}^{\mathsf{PT},F}}d(v)\cdot c^{ |v|}, \tag{2}\]
where \(d(v):=\deg_{\mathsf{PT}}(v)+1\), \(|v|\) is the number of edges in the shortest path from \(v\) to a minimal false node, and \(c>1\) (we fix it in the proofs of Lemmas 2.2 and 2.4). Note that in the simple CKP case, \(\mathcal{G}_{t}^{\mathsf{PT},F}=\mathcal{G}_{t}^{\mathsf{PT}}\) since all nodes are \(\mathsf{False}\). Note that if \(v\) is itself a minimal false node, then \(|v|=0\).
**BFS-components.** The reason we choose this potential is that we can partition the \(\mathsf{PT}\) False sub-DAG into disjoint components according to their closest minimal false node, and split up \(\Phi\) accordingly. The partitioning is thus indexed over \(\mathcal{F}_{t}\), and we denote the components \(\{\mathcal{G}_{t}[u]\}_{u\in\mathcal{F}_{t}}\).
We now describe the assignment of \(\mathsf{PT}\) False nodes into BFS-components \(\{\mathcal{G}_{t}[u]\}_{u\in\mathcal{F}_{t}}\). For each \(v\in\mathcal{G}_{t}^{\mathsf{PT},F}\), let \(u_{F}\in\mathcal{F}_{t}\) be the closest \(\mathsf{CF}\) or root node
\[\operatorname*{arg\,min}_{u\in\mathcal{F}_{t}}\,\operatorname{dist}(u,v),\]
chosen uniquely by breaking ties according to the BFS ordering, i.e., if this minimum is attained for several \(u\in\mathcal{F}_{t}\), we set \(u_{F}\) to be the first one reached in a BFS search starting from \(v\). See Figure 5 for visualizations of this partition.
We note that each \(\mathcal{G}_{t}[u]\) is connected, because if some \(w\) is in \(\mathcal{G}_{t}[u]\), all of its ancestor nodes along its shortest path to \(u\) must also be in \(\mathcal{G}_{t}[u]\). Also, \(|\mathcal{G}_{t}[u]|\geq 1\) for each \(u\in\mathcal{F}_{t}\), as it at least contains \(u\) itself.
We can now write the minimum-distance potential equivalently as
\[\Phi(\mathcal{G}_{t})=\sum_{u\in\mathcal{F}_{t}}\Phi(\mathcal{G}_{t}[u])\ \ \text{where}\ \ \Phi(\mathcal{G}_{t}[u])=\sum_{v\in\mathcal{G}_{t}[u]}d(v)\cdot c^{ \operatorname{dist}(u,v)}. \tag{3}\]
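For concreteness, the quantities in (2) and (3) can be computed by a single multi-source BFS from \(\mathcal{F}_{t}\) along child edges, as in the following Python sketch (our own names and representation). Recording which source first reaches each node would additionally recover the BFS-component partition, with ties broken by the traversal order rather than the exact per-node rule above.

```python
from collections import deque

# Sketch: compute |v| for every PT False node and the potential of eq. (2).
# children[u] lists the PT False children of u; minimal_false is F_t;
# deg_pt[v] is the PT out-degree of v, so d(v) = deg_pt[v] + 1.
def min_distance_potential(children, deg_pt, minimal_false, c=3.0):
    dist, queue = {}, deque()
    for u in minimal_false:        # BFS sources, each with |u| = 0
        dist[u] = 0
        queue.append(u)
    while queue:
        u = queue.popleft()
        for w in children[u]:
            if w not in dist:      # first visit = shortest distance |w|
                dist[w] = dist[u] + 1
                queue.append(w)
    return sum((deg_pt[v] + 1) * c ** d for v, d in dist.items())
```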
### Simple CKP
We begin by proving the error elimination result in the simple CKP setting, which highlights all the key ideas of the analysis.
**Theorem 2.1** (Simple CKP Error Elimination).: _For all bounded random variables \(M\geq 1\) and parameters \(k\geq 2\), \(p\in(0,1]\) satisfying_
\[\max\left(\frac{1+3\mathbf{E}\{1/M\}}{1+3\mathbf{E}\{1/M\}+2/3},\frac{1+3 \mathbf{E}\{1/M\}}{(2k-1)(2/3)}\right)\leq p\leq 1, \tag{4}\]
_the error effect is eliminated in the \((M,p,k)\)-simple CKP with the Exhaustive BFS, Parent-wise BFS and Complete checking mechanisms._
Note that \(p\geq 6/7\) upper bounds the first term in the maximum for all \(M\geq 1\). As for the second term, we have a non-trivial (\(\leq 1\)) upper bound if \(k\geq 2\) and \(\mathbf{E}\{1/M\}\leq 1/3\); and if \(k\geq 3\) and \(\mathbf{E}\{1/M\}\leq 7/9\approx 1/1.3\). Note that these are satisfied respectively when we have the deterministic lower bounds \(M\geq 3\) and \(M\geq 2\).
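To make the trade-off in (4) concrete, the small Python check below (ours, purely illustrative) evaluates the resulting threshold on \(p\) as a function of \(k\) and \(u=\mathbf{E}\{1/M\}\):

```python
# Numeric check of condition (4): the smallest admissible checking
# probability p for a given depth k and u = E{1/M}.
def p_threshold(k, u):
    first = (1 + 3 * u) / (1 + 3 * u + 2 / 3)
    second = (1 + 3 * u) / ((2 * k - 1) * (2 / 3))
    return max(first, second)

print(p_threshold(2, 1 / 4))   # 0.875 = 7/8, the first bullet of Theorem 1.1
print(p_threshold(3, 1 / 2))   # ~0.7895, the third bullet of Theorem 1.1
```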
In the following lemma, we prove that the sequence of minimum-distance potential values over time forms a super-martingale, for certain parameter settings.
Figure 5: A partition of a general (left) and simple (right) CKP state into BFS-components, denoted in the figure by different colors. The circled nodes are the minimal false nodes in \(\mathcal{F}_{t}\). Notice the tie-breaking based on the BFS left-to-right ordering.
**Lemma 2.2**.: _Let \(M\), \(k\) and \(p\) satisfy all conditions from Theorem 2.1. Let \(X_{t}=(\mathcal{G}_{t},L_{t})\) be the state of the \((M,p,k)\)-simple CKP at time \(t\). If \(\Phi(\mathcal{G}_{t})>0\), then \(\Delta_{t}=\Phi(\mathcal{G}_{t+1})-\Phi(\mathcal{G}_{t})\) satisfies \(\mathbf{E}\left\{\Delta_{t}\mid\mathcal{G}_{t}\right\}<0\)._
Proof.: Fixing a sequence \(S=\{\mathcal{G}_{t}^{\mathsf{PT}}[u_{1}],\mathcal{G}_{t}^{\mathsf{PT}}[u_{2}],\dots\}\) of BFS-components of \(\mathcal{G}_{t}^{\mathsf{PT}}\), we define \(E_{S}\) to be the event that a new node added to the CKP \(\mathcal{G}_{t}^{\mathsf{PT}}\) connects to components in the order given by \(S\). To complete the proof, it suffices to show that for an arbitrary \(S\), \(\mathbf{E}\left\{\Delta_{t}\mid E_{S},\mathcal{G}_{t}\right\}<0\).
We denote by \(\alpha_{i}\) the probability that an edge from a new node \(v\) to the \(i\)-th BFS-component \(\mathcal{G}_{t}^{\mathsf{PT}}[u_{i}]\) connects within distance \(k\) of \(u_{i}\), i.e., \(\mathrm{dist}(v,u_{i})\leq k\). If \(|\mathcal{G}_{t}^{\mathsf{PT}}[u_{i}]|<k\), then this probability must be \(1\), and otherwise, in the worst case \(\mathcal{G}_{t}^{\mathsf{PT}}[u_{i}]\) consists of a chain of length \(k\) with a hanging sub-component; such a chain carries preferential-attachment weight at least \(2k-1\) (one for each of its \(k\) nodes, plus one for each of its \(k-1\) edges). Combining these cases, we have
\[\alpha_{i}\geq\min(1,(2k-1)/Z_{i}), \tag{5}\]
where \(Z_{i}=\sum_{v\in\mathcal{G}_{t}^{\mathsf{PT}}[u_{i}]}(\deg_{\mathsf{PT}}(v)+1)\).
In the simple CKP process, a new \(\mathsf{CT}\) node \(v\) is added. If a check is successful, the potential of one of its parents' BFS-component decreases. On the other hand, if no check is successful, the potential increases, from two contributions:
* the degree of each of the parents \(p_{i}\) increases by \(1\), and
* the new node \(v\) adds a \(c^{|v|}\) term, where \(|v|=\min_{i}|p_{i}|+1\).
We would like to break up this change in potential into BFS-components, i.e., the change in component \(i\)'s potential \(\Phi_{i}\coloneqq\Phi(\mathcal{G}_{t}^{\mathsf{PT}}[u_{i}])\). To do this for the contribution of \(v\), we observe that, if \(v\) has \(M=m\) parents (we later take an expected value over this combination factor),
\[c^{|v|}=\min_{i=1}^{m}c^{|p_{i}|+1}\leq\sum_{i=1}^{m}c^{|p_{i}|+1}/m. \tag{6}\]
We can now analyze component-by-component the change in \(\Phi_{i}\), which we denote by \(\Delta_{t}^{[i,m]}\). Set the checking mechanism to be Exhaustive BFS. For each parent edge \(i\geq 1\), the checking procedure is still ongoing (has not yet been successful) with probability \((1-p\alpha_{1})\cdots(1-p\alpha_{i-1})\). Now, there are two possible events for the corresponding BFS-component \(\Phi_{i}\):
1. A check is performed and successful, which happens with probability \(p\alpha_{i}\). In this case, \(u_{i}\) is deleted from \(\mathcal{G}_{t}^{\mathsf{PT}}[u_{i}]\). At time \(t+1\), \(\mathcal{G}_{t}^{\mathsf{PT}}[u_{i}]\) loses nodes and is partitioned into new components \(\{\mathcal{G}_{t+1}^{\mathsf{PT}}[u_{ij}]\}_{j}\), where the \(u_{ij}\) are children of \(u_{i}\). In each of these new components, every node \(v^{\prime}\in\mathcal{G}_{t+1}^{\mathsf{PT}}[u_{ij}]\) is closer to \(u_{ij}\) than it was to \(u_{i}\), specifically, \(\mathrm{dist}(u_{ij},v^{\prime})\leq\mathrm{dist}(u_{i},v^{\prime})-1\). Therefore, \[\sum_{j}\Phi(\mathcal{G}_{t+1}^{\mathsf{PT}}[u_{ij}])<c^{-1}\Phi_{i}\] and so \[\mathbf{E}\big{\{}\Delta_{t}^{[i,m]}\mathbb{1}_{[i\text{th check success}]}\bigm{|}E_{S},\mathcal{G}_{t}\big{\}}<p\alpha_{i}(c^{-1}-1)\Phi_{i}.\] (7) The potentials of the other BFS-components either remain unchanged or decrease.
2. Otherwise, the check fails and we move on to the next edge, with probability \(1-p\alpha_{i}\). We denote the new component resulting from the connection by \(\mathcal{G}_{t+1}^{\mathsf{PT}}[u_{i}]\). In the worst case, no check for any future edge is successful and we have two contributions to the change in potential: the parent \(w\) chosen by this edge gains \(1\) to its degree, contributing \(c^{|w|}\), and the new node contributes \(c^{|w|+1}/m\), by (6). Therefore, \[\mathbf{E}\big{\{}\Delta_{t}^{[i,m]}\mathbb{1}_{[i\text{th check fail}]}\bigm{|}E_{S},\mathcal{G}_{t}\big{\}}\leq\mathbf{E}\big{\{}\Delta_{t}^{[i,m]}\mathbb{1}_{[i\text{th check and all future checks fail}]}\bigm{|}E_{S},\mathcal{G}_{t}\big{\}}\leq\sum_{w\in\mathcal{G}_{t}^{\mathsf{PT}}[u_{i}]}\frac{d(w)}{Z_{i}}\cdot\left(\frac{c^{|w|+1}}{m}+c^{|w|}\right)=\frac{1+c/m}{Z_{i}}\sum_{w\in\mathcal{G}_{t}^{\mathsf{PT}}[u_{i}]}d(w)\cdot c^{|w|}=\frac{1+c/m}{Z_{i}}\Phi_{i}.\] (8)
Combining cases (i) and (ii), for each BFS-component, on the event that the check is still ongoing,
\[\mathbf{E}\big{\{}\Delta_{t}^{[i,m]}\ \big{|}\ E_{S},\mathcal{G}_{t}\big{\}}<p \alpha_{i}(c^{-1}-1)\Phi_{i}+(1-p\alpha_{i})\frac{1+c/m}{Z_{i}}\Phi_{i}. \tag{9}\]
We can now compute \(\mathbf{E}\{\Delta_{t}\mid E_{S},\mathcal{G}_{t}\}\), taking an expectation over the combination factor \(M\). Let \(M\) be equal to \(m\) with probability \(q_{m}\) for each \(m\geq 1\). We have
\[\mathbf{E}\{\Delta_{t}\mid E_{S},\mathcal{G}_{t}\}\leq q_{1}\mathbf{E}\big{\{}\Delta_{t}^{[1,1]}\ \big{|}\ E_{S},\mathcal{G}_{t}\big{\}}+q_{2}\mathbf{E}\big{\{}\Delta_{t}^{[1,2]}+\mathbb{1}_{[e=1\text{ check fail}]}\Delta_{t}^{[2,2]}\ \big{|}\ E_{S},\mathcal{G}_{t}\big{\}}+\cdots=\sum_{m\geq 1}q_{m}\sum_{i=1}^{m}\Big{(}\prod_{j=1}^{i-1}(1-p\alpha_{j})\Big{)}\mathbf{E}\big{\{}\Delta_{t}^{[i,m]}\ \big{|}\ E_{S},\mathcal{G}_{t}\big{\}}, \tag{10}\]
where we used \(\mathbf{E}\{\mathbb{1}_{[e=i\text{ check fail}]}\}=1-p\alpha_{i}\). Inverting the sums over \(m\) and \(i\), we rewrite this as
\[\mathbf{E}\{\Delta_{t}\mid E_{s},\mathcal{G}_{t}\} \leq\sum_{i\geq 1}\Big{(}\prod_{j=1}^{i-1}(1-p\alpha_{j})\Big{)} \sum_{m\geq i}q_{m}\mathbf{E}\big{\{}\Delta_{t}^{[i,m]}\ \big{|}\ E_{S},\mathcal{G}_{t}\big{\}}\] \[<\sum_{i\geq 1}\Big{(}\prod_{j=1}^{i-1}(1-p\alpha_{j})\Big{)}\sum_{m \geq i}q_{m}\left(p\alpha_{i}(c^{-1}-1)\Phi_{i}+(1-p\alpha_{i})\frac{1+c/m}{Z_ {i}}\Phi_{i}\right)\] \[\leq\sum_{i\geq 1}\sum_{m\geq 1}q_{m}\left(p\alpha_{i}(c^{-1}-1) \Phi_{i}+(1-p\alpha_{i})\frac{1+c/m}{Z_{i}}\Phi_{i}\right)\] \[=\sum_{i\geq 1}\bigg{(}p\alpha_{i}(c^{-1}-1)+\frac{1-p\alpha_{i}}{Z _{i}}\Big{(}1+c\sum_{m\geq 1}\frac{q_{m}}{m}\Big{)}\bigg{)}\Phi_{i}.\]
This term is negative if for each \(i\geq 1\),
\[p\alpha_{i}(c^{-1}-1)+\frac{1-p\alpha_{i}}{Z_{i}}\Big{(}1+c\cdot\mathbf{E}\{ 1/M\}\Big{)}\leq 0. \tag{11}\]
Set \(c=3\). If \(\alpha_{i}=1\), as \(Z_{i}\geq 1\), this is satisfied for
\[p\geq\frac{1+3\cdot\mathbf{E}\{1/M\}}{1+3\cdot\mathbf{E}\{1/M\}+2/3},\]
which we note works for \(p\geq 6/7\) for all \(M\geq 1\). On the other hand, if \(\alpha_{i}\leq(2k-1)/Z_{i}<1\), then (11) holds if
\[-p(2k-1)(2/3)+(1+3\cdot\mathbf{E}\{1/M\})\leq 0,\]
i.e., for all \(p\) satisfying \((2k-1)p\geq\frac{3}{2}(1+3\cdot\mathbf{E}\{1/M\})\).
_Remark 1_.: Note that for the Parent-wise BFS and Complete BFS checking mechanisms, we can only delete more at each step if a check occurs, and add the same amount if no check occurs. Therefore both (9) and (10) (in fact without the \(\mathbb{1}_{[e=i\text{ check fail}]}\) indicators) still hold.
We now obtain Theorem 2.1 from Lemma 2.2 with the following proof.
Proof of Theorem 2.1 given Lemma 2.2.: This is the same as the proof of [1, Theorem 2.1] and we provide the proof for completeness. We show that for a CKP given by \(\{\mathcal{G}_{t}\}_{t=1}^{\infty}\) with the conditions from Theorem 2.1, the sequence of potential values \(\{\Phi(\mathcal{G}_{t})\}_{t=1}^{\infty}\) converges to \(0\) almost surely. Through Lemma 2.2, we have shown that \(\{\Phi(\mathcal{G}_{t})\}_{t=1}^{\infty}\) is a positive super-martingale. The martingale convergence theorem tells us that as \(t\to\infty\), \(\Phi(\mathcal{G}_{t})\) converges almost surely to a limit \(Y\). Combining this with the fact that \(|\Phi(\mathcal{G}_{t})-\Phi(\mathcal{G}_{t+1})|\geq 1\) when \(\Phi(\mathcal{G}_{t})>0\), we can conclude that this limit \(Y\) must be \(0\), i.e., \(\mathbf{P}\{Y=0\}=1\). We previously noted that \(|\mathcal{G}_{t}^{\mathsf{PT},F}|\leq\Phi(\mathcal{G}_{t})\), which implies that \(|\mathcal{G}_{t}^{\mathsf{PT},F}|\) also converges to \(0\) almost surely, yielding error elimination.
### General CKP
We now prove the following result for general CKPs.
**Theorem 2.3** (General CKP error elimination).: _If \(\varepsilon,p\in(0,1)\), \(M\geq 1\) bounded, and \(k\geq 2\) satisfy_
\[(1-\varepsilon)\max\left(-\frac{1}{2}(2k-1)p+3,-\frac{1}{2}p+3(1-p)\right)+2 \varepsilon(1-p)<0,\]
_then the error effect is eliminated in the \((M,p,k,\varepsilon)\)-CKP with the Exhaustive BFS, Parent-wise BFS and Complete checking mechanisms._
We recover the same bound as in the \(M=1\) case for general CKPs, for which we need \(k\geq 4\) to obtain error elimination. However, extending this result to the DAG CKP setting still requires some work. Based on the simulations in Appendix A, we conjecture that for general CKPs, the true error elimination threshold does vary with \(M\), and we leave open what this dependence on \(M\) should be.
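As a sanity check on the role of \(k\geq 4\), the condition of Theorem 2.3 is easy to evaluate directly; the snippet below (ours) confirms that for a small error probability the expression turns negative at \(k=4\) but not at \(k=3\), for the same large \(p\):

```python
# Evaluate the Theorem 2.3 expression; True means error elimination.
def eliminates(eps, p, k):
    inner = max(-0.5 * (2 * k - 1) * p + 3, -0.5 * p + 3 * (1 - p))
    return (1 - eps) * inner + 2 * eps * (1 - p) < 0

print(eliminates(0.05, 0.95, 4))   # True
print(eliminates(0.05, 0.95, 3))   # False: the first term in the max is positive
```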
As in the simple case, we prove this theorem by showing that the sequence of minimum-distance potential values forms a super-martingale.
**Lemma 2.4**.: _Let \(M\), \(\varepsilon\), \(k\), \(p\) satisfy all conditions from Theorem 2.3. Let \(X_{t}=(\mathcal{G}_{t},L_{t})\) be the state of the \((M,p,k,\varepsilon)\)-general CKP at time \(t\). If \(\Phi(\mathcal{G}_{t})>0\), then \(\Delta_{t}=\Phi(\mathcal{G}_{t+1})-\Phi(\mathcal{G}_{t})\) satisfies \(\mathbf{E}\{\Delta_{t}\mid\mathcal{G}_{t}\}<0\)._
Proof of Lemma 2.4.: Recall that for the simple CKP, all \(\mathsf{PT}\) nodes are \(\mathsf{False}\), and so the minimum-distance potential sums over all \(\mathsf{PT}\) nodes. In particular, a new node \(v\) must connect to \(M\) BFS-components (potentially with replacement). In the general CKP however, a new node \(v\) connects to some number \(0\leq M^{\prime}\leq M\) of BFS-components, and \(M-M^{\prime}\) \(\mathsf{True}\) nodes.
If \(v\) is \(\mathsf{True}\), there is no change in the potential \(\Phi(\mathcal{G}_{t})\), so we assume that \(M^{\prime}\geq 1\).
We can begin with the same analysis as in the proof of Lemma 2.2. If \(v\) is \(\mathsf{CT}\), we have the same two events (i) and (ii), so we have for each \(i=1,\ldots,M^{\prime}\), on the event that the check is still ongoing,
\[\mathbf{E}\big{\{}\Delta_{t}^{[i,M^{\prime}]}\mathbbm{1}_{[v\text{ is }\mathsf{CT}]}\mid E_{S},\mathcal{G}_{t}\big{\}}<(1- \varepsilon)\left(p\alpha_{i}(c^{-1}-1)+(1-p\alpha_{i})\frac{1+c/M^{\prime}}{ Z_{i}}\right)\Phi_{i}. \tag{12}\]
On the other hand, if \(v\) is \(\mathsf{CF}\), (i) if a check is performed for any edge, then there is no change in potential as \(v\) is simply removed, and (ii) if there is no check, for each \(i=1,\ldots,M^{\prime}\),
\[\mathbf{E}\big{\{}\Delta_{t}^{[i,M^{\prime}]}\mathbbm{1}_{[v\text{ is }\mathsf{CF},\text{ no check}]}\mid E_{S},\mathcal{G}_{t}\big{\}} \leq\varepsilon(1-p)^{M^{\prime}}\bigg{(}\frac{1}{M^{\prime}}+ \sum_{w\in\mathcal{G}_{t}^{\mathsf{PT}}[u_{i}]}\frac{d(w)}{Z_{i}}c^{|w|}\bigg{)} =\varepsilon(1-p)^{M^{\prime}}\left(\frac{1}{M^{\prime}}+\frac{1}{Z_{i}}\Phi_ {i}\right)\] \[\leq\varepsilon(1-p)^{M^{\prime}}\frac{1+1/M^{\prime}}{Z_{i}}\Phi _{i}. \tag{13}\]
The worst case for both (12) and (13) is when \(M^{\prime}=1\), so we have \(\mathbf{E}\{\Delta_{t}\mid E_{s},\mathcal{G}_{t}\}<0\) if
\[(1-\varepsilon)\max\left(-\frac{1}{2}(2k-1)p+3,-\frac{1}{2}p+3(1-p)\right)+2 \varepsilon(1-p)<0, \tag{14}\]
picking \(c=2\).
To obtain Theorem 2.3 from Lemma 2.4, we use the same exact proof as in the simple CKP setting.
## 3 Proof of Error Survival
We now shift our focus to studying _error survival_ for DAG Cumulative Knowledge Processes. We ask the question: for what parameters \(M\), \(p\), \(k\), and \(\varepsilon\) does error survive in the CKP? Our results will depend on \(\mathbf{E}\{M\}\) and \(\min(M)\).
### Simple CKP
In this section, we prove the following results about error survival in the simple CKP process. Let \(\min(M)\) denote the minimum value that the random variable \(M\) can take.
**Theorem 3.1** (Simple CKP error survival).: _If \(M\geq 1\) and \(p\in[0,1)\) satisfy_
\[\frac{\mathbf{E}\{M\}}{\min(M)+1}<1\ \ \text{and}\ \ p\leq\frac{1}{2}\left(1- \frac{\mathbf{E}\{M\}}{\min(M)+1}\right), \tag{15}\]
_then for any \(k\geq 1\), the error effect survives with positive probability in the \((M,p,k)\)-simple CKP with the Stringy and BFS checking mechanisms. Similarly, for the Exhaustive BFS and Parent-wise BFS checking mechanisms, survival holds if the parameters satisfy (15) with \(p\) replaced by \(p\mathbf{E}\{M\}\)._
This proof in fact works for any checking mechanism that has a bounded total checking probability (e.g., \(p\) in the Stringy and BFS cases) and bounded number of possible deleted minimal False nodes, and we could obtain a bound on \(p\) in terms of these parameters.
We again prove this result using the potential method, by constructing a potential and showing that its sequence of values forms a super-martingale.
**Minimal-False and Leaves Potential.** We first recall that \(\mathcal{F}_{t}\) denotes the minimal false nodes, i.e., roots and CF nodes. For the checks stopping at the first minimal false node seen, note that we verify if a node is a root by checking if any parent node is \(\mathsf{PF}\), stopping at the first such parent node found.
For the potential, we now additionally define **non-root leaves** \(\mathcal{G}_{t}^{L}\), which are nodes with no \(\mathsf{PF}\) parents and out-degree zero. Specifically, we let \(\mathcal{L}_{t}\subset\mathcal{G}_{t}^{L}\) be the \(\mathsf{CT}\) **non-root leaves**. We define
\[\Phi_{F,L}(\mathcal{G}_{t})=|\mathcal{F}_{t}|+|\mathcal{L}_{t}|, \tag{16}\]
and note that \(\Phi_{F,L}(\mathcal{G}_{t})\leq|\mathcal{G}_{t}^{\mathsf{PT},F}|=|\mathcal{G}_{t}^{\mathsf{PT}}|\), the number of \(\mathsf{PT}\) False nodes.
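Before analyzing its drift, we note that this potential is straightforward to read off a CKP state; the following Python sketch (our own names and representation) mirrors definition (16):

```python
# Sketch: Phi_{F,L} = |F_t| + |L_t| for the simple CKP, where graph maps
# each node to its parent list and children maps each node to its child list.
def survival_potential(graph, labels, children):
    minimal_false = leaves = 0
    for v, parents in graph.items():
        if labels[v] == "PF":
            continue
        has_pf_parent = any(labels[u] == "PF" for u in parents)
        if labels[v] == "CF" or (labels[v] == "CT" and has_pf_parent):
            minimal_false += 1          # CF node or root
        elif labels[v] == "CT" and not children[v]:
            leaves += 1                 # CT non-root leaf
    return minimal_false + leaves
```

We prove the following property about the expected change in the potential in a time step.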
**Lemma 3.2**.: _Let \(M\) and \(p\) satisfy (15) and let \(X_{t}=(\mathcal{G}_{t},L_{t})\) be the state of the \((M,p,k)\)-simple CKP at time \(t\). Let \(\Delta_{t}=\Phi_{F,L}(\mathcal{G}_{t+1})-\Phi_{F,L}(\mathcal{G}_{t})\). If \(\Phi_{F,L}(\mathcal{G}_{t})>0\), we have \(\mathbf{E}\{\Delta_{t}\mid\mathcal{G}_{t}\}>0\) if_
\[p\leq\frac{1}{2}\left(1-\frac{\mathbf{E}\{M\}}{\min(M)+1}\right) \tag{17}\]
_when the CKP uses the Stringy or BFS checking mechanism. When the CKP uses the Exhaustive or Parent-wise BFS checking mechanism, we have \(\mathbf{E}\{\Delta_{t}\mid\mathcal{G}_{t}\}>0\) if (17) holds with \(p\) replaced by \(p\mathbf{E}\{M\}\)._
When \(\Phi_{F,L}(\mathcal{G}_{t})=0\), then for all \(t^{\prime}>t\), we also have \(\Phi_{F,L}(\mathcal{G}_{t^{\prime}})=0\) because the process has stopped (i.e., all errors have been eliminated).
Proof of Lemma 3.2 for Stringy and BFS.: First, let \(\beta\) be the probability that when an edge from a new node to \(\mathcal{G}_{t}^{\mathsf{PT}}\) is created, it is connected to a \(\mathsf{CT}\) non-root leaf. Letting \(Z\coloneqq\sum_{v\in\mathcal{G}_{t}^{\mathsf{PT}}}\deg_{\mathsf{PT}}(v)+1\), we have that
\[\beta=\frac{|\mathcal{L}_{t}|}{Z}<\frac{1}{\min(M)+1}. \tag{18}\]
Indeed, since a non-root leaf has \(\mathsf{PT}\)-indegree at least \(\min(M)\), it contributes at least \(\min(M)+1\) to \(Z\): \(\min(M)\) for the outdegrees of its \(\mathsf{PT}\) parent nodes, and \(1\) for the node itself. Thus we have \(Z>(\min(M)+1)|\mathcal{L}_{t}|\), yielding the stated bound. Notice that the worst-case structure is a rooted star DAG.
Let \(v\) be the new node added at time \(t+1\), with \(m\sim M\) edges created between \(v\) and \(\mathcal{G}_{t}^{\mathsf{PT}}\). We break the analysis into the change in the minimal false nodes \(|\mathcal{F}_{t}|\) and the non-root leaves \(|\mathcal{L}_{t}|\), denoted by \(\Delta_{|\mathcal{F}_{t}|}\) and \(\Delta_{|\mathcal{L}_{t}|}\), respectively. The following holds for both the Stringy and BFS checking mechanisms, which perform an overall check with probability \(p\).
1. \(|\mathcal{F}_{t}|\) can only change if a check is performed and successful. Since we can remove at most one minimal false node, we have \(\Delta_{|\mathcal{F}_{t}|}=-1\) with probability at most \(p\).
2. \(|\mathcal{L}_{t}|\) experiences an increase from the new node \(v\), which is a non-root leaf, and a decrease from existing leaves gaining a child.
   * With probability at least \(1-p\), there is no successful check and \(v\) contributes \(+1\) to \(\Delta_{|\mathcal{L}_{t}|}\).
   * Each edge connects to a non-root leaf parent with probability \(\beta\), contributing \(-1\) to \(\Delta_{|\mathcal{L}_{t}|}\).
Combining,
\[\mathbf{E}\{\Delta_{t}\mid M=m\}\geq-p+(1-p)+m\beta(-1)>1-2p-\frac{m}{\min(M)+ 1},\]
and taking an expectation over \(M\) yields the bound
\[\mathbf{E}\{\Delta_{t}\mid\mathcal{G}_{t}\}>1-2p-\frac{\mathbf{E}\{M\}}{\min( M)+1}. \tag{19}\]
For \(M=m\) a fixed constant, this is positive if \(p\leq 1/(2m+2)\).
Proof of Lemma 3.2 for Exhaustive and Parent-wise BFS.: The analysis is very similar to the previous case. The differences in (i) and (ii), which hold for both checking mechanisms, are as follows:
1. Now, each of the \(m\) edges performs a separate check with probability \(p\), potentially removing a minimal false node and contributing \(-1\) to \(\Delta_{|\mathcal{F}_{t}|}\).
2. As for the change in \(|\mathcal{L}_{t}|\):
   * With probability at least \(1-mp\), there is no successful check and \(v\) contributes \(+1\) to \(\Delta_{|\mathcal{L}_{t}|}\).
   * Each edge once again connects to a non-root leaf parent with probability \(\beta\).
Combining,
\[\mathbf{E}\{\Delta_{t}\mid M=m\}>-mp+(1-mp)-\frac{m}{\min(M)+1},\]
which gives
\[\mathbf{E}\{\Delta_{t}\mid\mathcal{G}_{t}\}\geq 1-2p\mathbf{E}\{M\}-\frac{ \mathbf{E}\{M\}}{\min(M)+1} \tag{20}\]
after the expectation over \(M\). To obtain a non-trivial bound, we need \(\mathbf{E}\{M\}/(\min(M)+1)<1\).
We now use the properties of the minimal-false and leaves potential to prove Theorem 3.1. First, we recall a Lemma about sub-martingales from [1]. We then consider a truncated version of our potential to ultimately prove error survival.
**Lemma 3.3** ([1], Lemma 3.6).: _Let \(\{X_{t}\}_{t\geq 0}\) be a non-negative sub-martingale with \(X_{0}>0\). Assume there exist constants \(c_{1},c_{2}>0\) such that, for every \(t\geq 0\), when \(X_{t}\neq 0\):_
1. \(|X_{t+1}-X_{t}|\leq c_{1}\) _almost surely, and_
2. \(\mathbf{E}\{X_{t+1}-X_{t}|X_{t}\}>c_{2}\)_._
_With positive probability, \(X_{t}>0\) for all \(t\in\mathbf{N}\)._
Proof of Theorem 3.1 given Lemma 3.2.: For any large constant \(C\), there exists some time \(t_{0}\) such that \(\Phi_{F,L}(\mathcal{G}_{t_{0}})\geq C\) with probability bounded away from \(0\). We condition on this event, and consider an upper-bounded version \(Y_{t}\) of the process \(X_{t}=\Phi_{F,L}(\mathcal{G}_{t_{0}+t})\). To do so, we define \(Y_{0}=\Phi_{F,L}(\mathcal{G}_{t_{0}})\) and for each \(t\geq 0\), let \(Y_{t+1}=\max(Y_{t}+\min(X_{t+1}-X_{t},1),0)\). Note that since \(X_{t}\geq 0\), we have \(Y_{t}\leq X_{t}\).
Then, \(Y_{t}\) is a non-negative sub-martingale satisfying the conditions of Lemma 3.3 with \(c_{1}=2\) (in the Stringy case), \(c_{1}=1+\max(M)\) (in the BFS and Exhaustive BFS cases), or \(c_{1}=2\max(M)\) (in the Parent-wise BFS case), and with \(c_{2}\) being the right hand side of either (19) or (20). So we have
\[\mathbf{P}\{\min_{t\geq 0}X_{t}>0\}\geq\mathbf{P}\{\min_{t\geq 0}Y_{t}>0\}>0\]
by Lemma 3.3. Since \(X_{t}=\Phi_{F,L}(\mathcal{G}_{t})\leq|\mathcal{G}_{t}^{\mathsf{PT},F}|=| \mathcal{G}_{t}^{\mathsf{PT}}|\), this proves error survival.
### General CKP
We now study error survival for the general CKP model. We first note that the error survival bound for simple CKPs carries over to the general CKP setting. As in the simple setting, every new node contributes to the count of the number of non-root leaves. If the new node is \(\mathsf{CT}\) and there is a check, we experience the same expected change in potential as in the simple case. If the new node is \(\mathsf{CF}\) and there is a check, only the new node gets removed; however, the expected change in potential is still lower-bounded by the bound from the simple case. In this section, we prove stronger bounds on error survival, obtaining a bound that depends on both \(M\) and \(\varepsilon\). We prove the following Theorem.
**Theorem 3.4** (General CKP error survival).: _Let \(M\geq 1\) be a bounded random variable such that \(\mathbf{E}\{M\}/(\min M+1)<1\). If \(M\) and \(p,\varepsilon\in[0,1]\) satisfy either_
\[p<\varepsilon\ \ \text{or}\ \ p\left(-2+\varepsilon\left(1+\frac{1}{2}\frac{ \mathbf{E}\{M\}}{\min(M)+1}\right)\right)+\left(1+\frac{\mathbf{E}\{M\}}{\min( M)+1}\left(-1+\frac{\varepsilon}{2}\right)\right)\geq 0, \tag{21}\]
_the error effect survives with positive probability in the \((M,p,k,\varepsilon)\)-general CKP with the Stringy and BFS checking mechanisms, for any \(k\geq 1\). Similarly, survival holds with positive probability for the \((M,p,k,\varepsilon)\)-CKP with Exhaustive BFS and Parent-wise BFS checking mechanisms if the parameters satisfy (21) with \(p\) replaced by \(p\mathbf{E}\{M\}\)._
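To see which \((p,\varepsilon)\) pairs condition (21) covers, it can be evaluated numerically; the snippet below (ours, specialized to constant \(M=m\)) tests either branch of the condition:

```python
# Evaluate condition (21) for constant M = m; True means our survival
# bound (Stringy/BFS version) applies.
def survives(eps, p, m):
    r = m / (m + 1)                 # E{M} / (min(M) + 1)
    lhs = p * (-2 + eps * (1 + 0.5 * r)) + (1 + r * (-1 + eps / 2))
    return p < eps or lhs >= 0

print(survives(0.1, 0.05, 2))   # True via the first branch, p < eps
print(survives(0.1, 0.12, 2))   # True via the second branch
```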
We define a potential and show that for the range of parameters in the Theorem above, the expected change in the potential is positive. We can then apply the same steps as in the proof of Theorem 3.1 to obtain error survival from a positive expected change in potential. To apply the same proof, we will also need that the change in potential is bounded below, which still holds in this context (with a lower bound of \(-\max(M)\)).
We define and analyze two different potentials and find the range of parameters for which the expected change in potential is positive. While the second of the two potentials often yields a better range of parameters, they can be combined to obtain overall stronger dependencies on the parameters.
**Minimal-False potential.** We define
\[\Phi_{F}(\mathcal{G}_{t})=|\mathcal{F}_{t}|.\]
**Lemma 3.5**.: _Consider parameters \(M\), \(k\geq 2\), \(p,\varepsilon\in[0,1]\) and an \((M,p,k,\varepsilon)\)-general CKP process. Let \(t\in\mathbf{N}\). Let \(\Delta_{t}=\Phi_{F}(\mathcal{G}_{t+1})-\Phi_{F}(\mathcal{G}_{t})\). When \(\Phi_{F}(\mathcal{G}_{t})>0\) then \(\mathbf{E}\{\Delta_{t}\mid\mathcal{G}_{t}\}\geq\varepsilon-p\) if the CKP uses Stringy or BFS checking mechanisms, and \(\mathbf{E}\{\Delta_{t}\mid\mathcal{G}_{t}\}\geq\varepsilon-\mathbf{E}\{M\}p\) if the CKP uses Exhaustive or Parent-wise BFS._
Proof.: Start with the Stringy and BFS checking mechanisms. If the new node is \(\mathsf{CT}\), the potential only changes if there is a check, and \(\Delta_{t}\geq-1\), because the check stops at the first minimal false node found. If it is \(\mathsf{CF}\), it only changes if there is no check, in which case \(\Delta_{t}=1\). Combining all cases yields the stated bound, which is positive for \(p<\varepsilon\). For the Exhaustive and Parent-wise BFS checking mechanisms, the same analysis can be performed edge-by-edge.
We now prove a result involving \(\varepsilon\) and \(M\) for both types of checks, by adapting the minimal-false and leaf potential from the simple CKP case. For this potential, we revise the definition of leaf nodes to be nodes with \(\deg_{\mathsf{CT}}=0\). A node \(u\) is thus now a **non-root leaf** if it is a \(\mathsf{CT}\) node with no \(\mathsf{PF}\) parents and \(\deg_{\mathsf{CT}}(u)=0\). Note that this is still consistent with the simple CKP setting since every node other than the root is \(\mathsf{CT}\).
**General Minimal-False and Leaves potential.** We define
\[\Phi_{F,L}(\mathcal{G}_{t})=|\mathcal{F}_{t}|+\sum_{u\in\mathcal{L}_{t}}\frac{ 1}{1+\deg_{\mathsf{CF}}(u)}. \tag{22}\]
The reason we modify the potential so that each \(u\in\mathcal{L}_{t}\) contributes \(1/(1+\deg_{\mathsf{CF}}(u))\) is that, with our new definition of non-root leaves, we can no longer upper-bound the probability of connecting to a non-root leaf
as in the simple CKP case. However, we can still upper-bound \(|\mathcal{L}_{t}|/Z\) by \(1/(\min(M)+1)\), and making this change to the potential yields a \(|\mathcal{L}_{t}|/Z\) term in the analysis.
We prove the following Lemma regarding this potential.
**Lemma 3.6**.: _Consider a bounded random variable \(M\geq 1\), parameters \(k\geq 2\), \(p,\varepsilon\in[0,1]\) and an \((M,p,k,\varepsilon)\)-general CKP process with the Stringy or BFS checking mechanism. Let \(t\in\mathbf{N}\) and \(\Delta_{t}=\Phi_{F,L}(\mathcal{G}_{t+1})-\Phi_{F,L}(\mathcal{G}_{t})\). When \(\Phi_{F,L}(\mathcal{G}_{t})>0\) then_
\[\mathbf{E}\{\Delta_{t}\mid\mathcal{G}_{t}\}>p\left(-2+\varepsilon\left(1+\frac {1}{2}\frac{\mathbf{E}\{M\}}{\min(M)+1}\right)\right)+\left(1-\frac{\mathbf{E} \{M\}}{\min(M)+1}\left(1-\frac{\varepsilon}{2}\right)\right). \tag{23}\]
Proof.: We first prove this lemma for the checking mechanisms that perform an overall check with probability \(p\). Let \(\beta\) again be the probability that an edge from the new node connects to a CT non-root leaf. Note that
\[\beta=\sum_{u\in\mathcal{L}_{t}}\frac{1+\deg_{\mathsf{CF}}(u)}{Z},\]
where \(Z=\sum_{w\in\mathcal{G}_{t}^{\mathsf{PT}}}\left(1+\deg_{\mathsf{CF}}(w)\right)\).
We break the analysis into four cases, depending on whether the new node \(v\) is CT or CF and on whether it runs a check. Recall that the number of edges \(m\) that the new node creates is chosen according to the random variable \(M\).
1. \(v\) is CT and no check: Since the new node \(v\) is a CT non-root leaf, it contributes \(+1\) to the potential. For each edge, with probability \(\beta\) the edge connects to a non-root leaf. Call this non-root leaf \(u\); in this case, there is a change of \(-(1+\deg_{\mathsf{CF}}(u))^{-1}\) to the potential because adding the CT node to the leaf node makes it no longer a leaf. So \[\mathbf{E}\{\Delta_{t}\mathbb{1}_{\left[\text{\sc CT, no check} \right]}\mid\mathcal{G}_{t}\} =(1-\varepsilon)(1-p)\left(1+\mathbf{E}\{M\}\cdot\sum_{u\in \mathcal{L}_{t}}\frac{1+\deg_{\mathsf{CF}}(u)}{Z}\left(-\frac{1}{1+\deg_{ \mathsf{CF}}(u)}\right)\right)\] \[=(1-\varepsilon)(1-p)\left(1-\mathbf{E}\{M\}\frac{|\mathcal{L}_ {t}|}{Z}\right).\] Now, \(|\mathcal{L}_{t}|/Z<1/(\min(M)+1)\) because every non-root leaf has at least \(\min(M)\) parent nodes for which it contributes \(+1\) to the potential. Therefore we obtain \[\mathbf{E}\{\Delta_{t}\mathbb{1}_{\left[\text{\sc CT, no check} \right]}\mid\mathcal{G}_{t}\}>(1-\varepsilon)(1-p)\left(1-\frac{\mathbf{E} \{M\}}{\min(M)+1}\right).\]
2. \(v\) is CT and check: We assume the check is successful, since if \(\frac{\mathbf{E}\{M\}}{\min(M)+1}<1\), an unsuccessful check will result in an expected increase in the potential, while a successful check will result in a decrease. We can remove at most one minimal false node, and additionally, if any of the parent nodes is a non-root leaf, it could be removed. Therefore, \[\mathbf{E}\{\Delta_{t}\mathbb{1}_{\left[\text{\sc CT, check}\right]}\mid\mathcal{G}_{t}\} =(1-\varepsilon)p\left(-1+\mathbf{E}\{M\}\sum_{u\in\mathcal{L}_{t }}\frac{1+\deg_{\mathsf{CF}}(u)}{Z}\left(-\frac{1}{1+\deg_{\mathsf{CF}}(u)} \right)\right)\] \[>-(1-\varepsilon)p\left(1+\frac{\mathbf{E}\{M\}}{\min(M)+1} \right).\]
3. \(v\) is CF and no check: The new node contributes \(+1\) to the count of CF nodes in the potential. For each edge, with probability \(\beta\) the edge connects to a non-root leaf, say \(u\). In this case, the potential changes by \[\frac{1}{2+\deg_{\mathsf{CF}}(u)}-\frac{1}{1+\deg_{\mathsf{CF}}(u)}\geq-\frac{ 1/2}{1+\deg_{\mathsf{CF}}(u)}\]
as a leaf node with an additional CF child node is still a leaf. Therefore, \[\mathbf{E}\{\Delta_{t}\mathbbm{1}_{\left[\text{CF, no check}\right]} \mid\mathcal{G}_{t}\} =\varepsilon(1-p)\left(1+\mathbf{E}\{M\}\sum_{u\in\mathcal{L}_{t}} \frac{1+\deg_{\text{CF}}(u)}{Z}\left(-\frac{1/2}{1+\deg_{\text{CF}}(u)}\right)\right)\] \[>\varepsilon(1-p)\left(1-\frac{1}{2}\cdot\left(\frac{\mathbf{E}\{ M\}}{\min(M)+1}\right)\right)\]
4. \(v\) is CF and check: The check removes only the new node, and \(\mathbf{E}\{\Delta_{t}\mathbbm{1}_{\left[\text{CF, check}\right]}\mid\mathcal{G}_{t}\}=0\).
Combining all four cases yields (23) and the associated bound on \(p\).
We now state the comparable Lemma for the Exhaustive BFS and Parent-wise BFS checking mechanisms, which are the ones that run separate checks with probability \(p\) for each parent node.
**Lemma 3.7**.: _Consider a bounded random variable \(M\geq 1\), parameters \(k\geq 2\), \(p,\varepsilon\in[0,1]\) and an \((M,p,k,\varepsilon)\)-general CKP process with the Exhaustive BFS or Parent-wise BFS checking mechanism. Let \(t\in\mathbf{N}\) and \(\Delta_{t}=\Phi_{F,L}(\mathcal{G}_{t+1})-\Phi_{F,L}(\mathcal{G}_{t})\). When \(\Phi_{F,L}(\mathcal{G}_{t})>0\) then_
\[\mathbf{E}\{\Delta_{t}\mid\mathcal{G}_{t}\}>p\mathbf{E}\{M\}\left(-2+ \varepsilon\left(1+\frac{1}{2}\frac{\mathbf{E}\{M\}}{\min(M)+1}\right)\right)+ \left(1-\frac{\mathbf{E}\{M\}}{\min(M)+1}\left(1-\frac{\varepsilon}{2}\right) \right). \tag{24}\]
Proof.: As for Lemma 3.2 in the simple CKP case, we can modify the previous proof by arguing about the change in potential one edge at a time. We obtain similar expressions for the various cases in the proof of Lemma 3.6 for the new node being CT or CF, and writing the expressions out gives (24).
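Since the right-hand sides of (23) and (24) are affine in \(p\) (respectively \(p\,\mathbf{E}\{M\}\)), they can be inverted for the largest checking probability below which the drift bounds remain nonnegative. A small helper sketch (the names are ours):

```python
def p_threshold(eps: float, EM: float, minM: float, per_edge: bool = False) -> float:
    """Largest p for which the lower bound (23) (or (24)) stays >= 0; 0 if B <= 0.

    The bound has the form p*A + B with A = -2 + eps*(1 + X/2) and
    B = 1 - X*(1 - eps/2), where X = E{M}/(min(M)+1); A < 0 for eps <= 1.
    For (24), p is replaced by p*E{M}, hence the per_edge flag.
    """
    X = EM / (minM + 1.0)
    A = -2.0 + eps * (1.0 + 0.5 * X)   # coefficient of p; negative here
    B = 1.0 - X * (1.0 - 0.5 * eps)    # constant term
    if B <= 0.0:
        return 0.0
    thr = B / (-A)
    return thr / EM if per_edge else thr

print(p_threshold(eps=0.3, EM=2.0, minM=1.0))             # Stringy / BFS
print(p_threshold(eps=0.3, EM=2.0, minM=1.0, per_edge=True))  # Exhaustive / Parent-wise BFS
```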
## 4 Monotonicity of error elimination for the tree-like simple CKP
We now present couplings of tree-like simple CKP processes with different \(p\)-values or \(k\)-values, through which we obtain the following monotonicity result with respect to the checking probability \(p\) or the checking depth \(k\). In the sections below, \(M=1\) always, and so we just list the parameters \((p,k)\) when specifying the simple CKP. We prove the following theorem.
**Theorem 4.1** (Monotonicity of error elimination for checking parameters).: _For all checking probabilities \(p_{2}\geq p_{1}\) and checking depths \(k_{2}\geq k_{1}\), if the error effect in the \((p_{1},k_{1})\)-simple CKP is eliminated completely, then the error effect in the \((p_{2},k_{2})\)-simple CKP is eliminated completely._
Therefore, if the error effect in the \((p_{2},k_{2})\)-simple CKP survives with positive probability, then the error effect in the \((p_{1},k_{1})\)-simple CKP survives as well with positive probability.
We prove this theorem by separately fixing \(p\) and \(k\) in Lemmas 4.5 and 4.8, noting that we can then chain error elimination from the \((p_{1},k_{1})\)-simple CKP to the \((p_{2},k_{1})\)-simple CKP and finally to the \((p_{2},k_{2})\)-simple CKP.
Remark that, while we can obtain monotonicity results for each of \(p\) and \(k\), there is no monotonicity with respect to the product \(p\cdot k\). That is, there exist parameters satisfying \(p_{2}k_{2}>p_{1}k_{1}\) such that the \((p_{1},k_{1})\)-simple CKP eliminates all errors but errors survive with positive probability in the \((p_{2},k_{2})\)-simple CKP. For example, [1, Theorems 2.1, 2.2] give that the error effect in the \((6/7,4)\)-CKP is completely eliminated, while the error effect in the \((1/4,20)\)-CKP survives with positive probability.
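The monotonicity (and its failure with respect to the product \(p\cdot k\)) can also be explored empirically. Below is a toy simulator of the \((p,k)\)-simple CKP, written by us from the process description recalled in this section (root CF, every new node CT, preferential attachment with weight \(\deg+1\) among non-PF nodes, depth-\(k\) ancestor checks); it reports finite-horizon elimination frequencies, which are only suggestive of the almost-sure behaviour.

```python
import random

def run_simple_ckp(p, k, max_nodes=600, rng=None):
    """Return True if the error is eliminated (all nodes PF) before max_nodes."""
    rng = rng or random.Random()
    labels = ["CF"]              # node 0 is the root error; all later nodes are CT
    parent = [None]
    pt_kids = [0]                # number of non-PF children (preferential weights)
    while len(labels) < max_nodes:
        weights = [0 if lab == "PF" else pt_kids[u] + 1
                   for u, lab in enumerate(labels)]
        if sum(weights) == 0:
            return True          # every node is PF: error eliminated completely
        u = rng.choices(range(len(labels)), weights)[0]
        v = len(labels)
        labels.append("CT"); parent.append(u); pt_kids.append(0)
        pt_kids[u] += 1
        if rng.random() < p:     # depth-k ancestor check from the new node
            path, w = [v], v
            for _ in range(k):
                w = parent[w]
                if w is None:
                    break
                path.append(w)
                if labels[w] in ("CF", "PF"):   # first False node found
                    for x in path:              # the whole checked path dies
                        if labels[x] != "PF":
                            labels[x] = "PF"
                            if parent[x] is not None:
                                pt_kids[parent[x]] -= 1
                    break
    return False                 # error still alive at the horizon

for p, k in [(0.25, 4), (0.5, 4), (0.5, 8)]:
    rng = random.Random(0)
    freq = sum(run_simple_ckp(p, k, rng=rng) for _ in range(50)) / 50
    print(f"p={p}, k={k}: eliminated in {freq:.0%} of runs")
```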
### Overview of the couplings
We first describe the motivation and high-level ideas of the coupling. This coupling is inspired by the coupling given in Section 5 of [10], which couples two preferential attachment processes: one with adversarial deletions at each time step \(G_{s}\) and one with adversarial deletions at a fixed ending time step \(G_{s}^{*}\). The process keeps track of which nodes and edges are "alive" in both processes, and which ones are alive only in \(G_{s}^{*}\) and not \(G_{s}\), to form their coupling and argue about the impact of adversarially deleted vertices and edges. Here,
we also keep track of nodes that are "alive" in either both processes or only one, and similarly ensure that the two processes are each individually generated according to preferential attachment.
First, recall that a tree-like CKP with checking probability \(p\) and checking depth \(k\) is defined as a series of states \(X_{t}=(\mathcal{T}_{t},L_{t})\) where each \(\mathcal{T}_{t}\) is a finite rooted tree and labels are chosen from \(\mathcal{L}=\{\mathsf{CT},\mathsf{CF},\mathsf{PF}\}\).
Consider the coupling with \(k\) fixed. Then, \(p\) is our only variable so we can simplify the notation and write \(p_{1}\)-CKP and \(p_{2}\)-CKP. For the purpose of the coupling, we define a new process that is given as a series of states \(Z_{0},Z_{1},\ldots\), where each \(Z_{t}\) is given by a finite rooted tree \(\mathcal{T}_{t}\) with each of the vertices labelled from \(\mathcal{L}^{\prime}=\{\mathsf{CT},\mathsf{CF},\mathsf{PF},\mathsf{ZCF}, \mathsf{ZCT},\mathsf{ZNCT},\mathsf{ZNPF}\}\):
* \(\mathsf{ZCF}\) and \(\mathsf{ZCT}\) stand for zombie-\(\mathsf{CF}\) and zombie-\(\mathsf{CT}\): these nodes are alive (not yet found to be \(\mathsf{False}\)) in the \(p_{1}\)-CKP but are dead (found to be \(\mathsf{False}\)) in the \(p_{2}\)-CKP.
* \(\mathsf{ZNCT}\) nodes are nodes that should never have been added to the \(p_{2}\)-CKP (as their parent was already found to be \(\mathsf{False}\)). They are also zombie nodes: alive in the \(p_{1}\)-CKP but dead in the \(p_{2}\)-CKP. These nodes will ultimately be removed in the construction of the \(p_{2}\)-CKP. Note that there are no \(\mathsf{ZNCF}\) nodes since we are in the simple process and all added nodes are \(\mathsf{CT}\).
* \(\mathsf{ZNPF}\) nodes are those that were once \(\mathsf{ZNCT}\), then are found to be \(\mathsf{False}\) in the \(p_{1}\)-CKP. These are also removed in the \(p_{2}\)-CKP as they never existed in the first place.
See Figure 6 for a visualization. At a high level, we design this coupling such that
* the \(p_{1}\)-CKP is generated by relabelling \(\mathsf{ZCF}\Rightarrow\mathsf{CF}\) and \(\{\mathsf{ZCT},\mathsf{ZNCT}\}\Rightarrow\mathsf{CT}\).
* the \(p_{2}\)-CKP is generated as a "slowed down" subset of \(\{Z_{i}\}\) by relabelling \(\{\mathsf{ZCF},\mathsf{ZCT}\}\Rightarrow\mathsf{PF}\), and removing all \(\mathsf{ZNCT}\) and \(\mathsf{ZNPF}\) vertices.
This coupling allows us to track the error elimination in both processes together. The coupling with \(p\) fixed and \(k\) varying is similar, and in fact uses all the same labels in \(\mathcal{L}^{\prime}\).
We note that this proof only works for the simple tree-CKP because of issues that arise when \(\mathsf{CT}\) or \(\mathsf{CF}\) nodes have \(\mathsf{CF}\) children (for extending from simple to general) or when \(\mathsf{CF}\) nodes have multiple paths to various \(\mathsf{False}\) ancestors (for extending to the DAG CKP). We leave as an open question whether a variation of this technique can be used to prove monotonicity more generally.
Figure 6: Evolution of \(Z_{t}\) and the coupled processes \(X_{t}\) and \(Y_{s}\) when performing a check in \(Y_{s}\sim(p_{2},k)\)-CKP but not in \(X_{t}\sim(p_{1},k)\)-CKP, i.e., \(U\in(p_{1},p_{2}]\). Recall that the \(Y\) process does not update at each time step of the \(Z\) process; let \(s=\textsc{Y-time}(t)\) be the time step of the \(Y\) process corresponding to time \(t\) in the \(Z\) process. If at a future time, a new node picks one of the red nodes in \(Z_{t+1}\) as its parent, it will then be labelled \(\mathsf{ZNCT}\).
### Monotonicity with respect to \(p\)
We describe the coupling of the \((p_{1},k)\)-simple CKP and \((p_{2},k)\)-simple CKP, where \(p_{2}\geq p_{1}\). Let the \(p_{1}\)-CKP be given by the series of states \(\{X_{i}\}\), and the \(p_{2}\)-CKP be given by the series of states \(\{Y_{i}\}\). We define the coupling \((\{Z_{i}\},\{X_{i}\},\{Y_{i}\})\) as follows. Each \(Z_{t}\) is given by a finite rooted tree, labels to each of the vertices, and a mapping \(\textsc{Y-time}(t)\) that indicates the time step in the \(Y\) process corresponding to time step \(t\) in the \(Z\) process.
**Initial state.**\(Z_{0}\) is given by a single \(\mathsf{CF}\) node. Let \(X_{0}=Y_{0}=Z_{0}\), and \(\textsc{Y-time}(0)=0\).
**State evolution.** Given \(\{Z_{0},Z_{1},\ldots,Z_{t}\}\), we define \(Z_{t+1}\) through the following series of steps.
* **Choose parent node:** If every node in \(\mathcal{T}_{t}\) is \(\mathsf{PF}\), the process stops. Set \(Z_{t+1}=Z_{t}\). Otherwise, choose a node \(u\in\mathcal{T}_{t}\) according to preferential attachment among nodes with label in \(T=\{\mathsf{CT},\mathsf{CF},\mathsf{ZCT},\mathsf{ZCF},\mathsf{ZNCT}\}\), i.e., with probability proportional to \(\deg_{T}(u)+1\), where \(\deg_{T}(u)\) is the number of children of \(u\) with label in \(T\).
* **Add new node:** Let \(\mathcal{T}_{t+1}\) be \(\mathcal{T}_{t}\) with the new leaf node \(v\) connected to parent \(u\).
* **Specify the Update value:** If the parent \(u\) has label in \(\{\mathsf{CT},\mathsf{CF}\}\), set \(\textsc{Update}\gets 1\). This means that the \(Y\) process updates. Otherwise, if \(u\) has label in \(\{\mathsf{ZCT},\mathsf{ZCF},\mathsf{ZNCT}\}\), set \(\textsc{Update}\gets 0\).
* **Label the new node:** If the parent \(u\) has label in \(\{\mathsf{CT},\mathsf{CF}\}\), label \(v\) as \(\mathsf{CT}\). Otherwise, if \(u\) has label in \(\{\mathsf{ZCT},\mathsf{ZCF},\mathsf{ZNCT}\}\), label \(v\) as \(\mathsf{ZNCT}\).
* **Error detection phase:** Choose \(U\sim\mathrm{Unif}([0,1])\). We denote \(v_{0}=v\), and let \(v_{i+1}\) be the parent of \(v_{i}\) for each \(i=0,\ldots,k\).
  1. If \(U\in[0,p_{1}]\): a check is performed in both CKPs. If \(v\) has label \(\mathsf{CT}\), then perform the following check of a path with up to \(k\) edges. Define and set a variable \(\textsc{Zombie}\gets 0\). For \(i=0,\ldots,k\):
     * If \(v_{i}\) is \(\mathsf{PF}\), \(\mathsf{CF}\) or \(\mathsf{ZCF}\) (\(\mathsf{ZCF}\) being \(\mathsf{CF}\) in the \(X\) process and \(\mathsf{PF}\) in the \(Y\) process), relabel all of the nodes \(\{v_{0},\ldots,v_{i}\}\) as \(\mathsf{PF}\) and end the error detection phase.
     * If \(\textsc{Zombie}=0\), and if \(v_{i}\) is \(\mathsf{ZCT}\) (\(\mathsf{CT}\) in the \(X\) process and \(\mathsf{PF}\) in the \(Y\) process), relabel the nodes \(\{v_{0},\ldots,v_{i}\}\) by replacing \(\mathsf{CT}\) labels with \(\mathsf{ZCT}\) labels. (Note that by construction, all nodes encountered in this case up until the \(\mathsf{ZCT}\) node must have been \(\mathsf{CT}\).) Set \(\textsc{Zombie}\gets 1\) and continue the for loop.
     Otherwise, if \(v\) has label \(\mathsf{ZNCT}\), for \(i=0,\ldots,k\):
     * If \(v_{i}\) is \(\mathsf{CF}\), \(\mathsf{ZCF}\), or \(\mathsf{PF}\), relabel \(\{v_{0},\ldots,v_{i}\}\) as \(\mathsf{PF}\) and end the error detection phase.
     * If \(v_{i}\) is \(\mathsf{ZNPF}\), then all nodes on the path checked so far were \(\mathsf{ZNCT}\). Relabel all of these nodes as \(\mathsf{ZNPF}\) and end the error detection phase.
  2. Otherwise, if \(U\in(p_{1},p_{2}]\): a check is performed in the \(p_{2}\)-CKP but not in the \(p_{1}\)-CKP. If \(v\) has label \(\mathsf{CT}\), then for \(i=0,\ldots,k\):
     * If \(v_{i}\) is in \(\{\mathsf{CF},\mathsf{ZCF},\mathsf{ZCT},\mathsf{PF}\}\), relabel the nodes \(\{v_{0},\ldots,v_{i}\}\) by replacing \(\mathsf{CF}\) labels with \(\mathsf{ZCF}\) labels and replacing \(\mathsf{CT}\) labels with \(\mathsf{ZCT}\) labels. End the error detection phase.
     Otherwise, if \(v\) has label \(\mathsf{ZNCT}\), end the error detection phase.
  3. Finally, if \(U\in(p_{2},1]\): no check in either CKP, and we do not relabel anything.
**Updating \(\{X_{i}\}\), \(\{Y_{i}\}\) from \(Z_{t+1}\).** We also update the mapping \(\textsc{Y-time}\).
1. \(X_{t+1}\) is obtained from \(Z_{t+1}\) by relabelling \(\mathsf{ZCF}\Rightarrow\mathsf{CF}\), \(\{\mathsf{ZCT},\mathsf{ZNCT}\}\Rightarrow\mathsf{CT}\) and \(\mathsf{ZNPF}\Rightarrow\mathsf{PF}\).
2. If \(\textsc{Update}=0\), we do not add a state to the list of \(\{Y_{i}\}\). Let \(\textsc{Y-time}(t+1)=\textsc{Y-time}(t)\). If \(\textsc{Update}=1\), let \(\textsc{Y-time}(t+1)=\textsc{Y-time}(t)+1\). Add to the \(Y\)-sequence a new state \(Y_{\textsc{Y-time}(t+1)}\), which is defined from \(Z_{t+1}\) by relabelling \(\{\mathsf{ZCT},\mathsf{ZCF}\}\Rightarrow\mathsf{PF}\) and removing all \(\mathsf{ZNCT}\) and \(\mathsf{ZNPF}\) nodes. Note that if a node is labelled \(\mathsf{ZNCT}\) or \(\mathsf{ZNPF}\), then so are all of the vertices in the subtree rooted at this node, so the removal deletes complete subtrees.
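The two relabelling rules can be summarized as plain label maps; the following sketch (helper names are ours) makes the derived \(X\)- and \(Y\)-views explicit.

```python
# Relabelling maps that read X_{t+1} and the Y-state off Z_{t+1}.
X_MAP = {"ZCF": "CF", "ZCT": "CT", "ZNCT": "CT", "ZNPF": "PF"}
Y_MAP = {"ZCT": "PF", "ZCF": "PF"}

def x_view(z_labels):
    """X process: zombies come back to life; ZNPF counts as PF."""
    return {v: X_MAP.get(lab, lab) for v, lab in z_labels.items()}

def y_view(z_labels):
    """Y process: zombies become PF; ZN* nodes are dropped. Since a ZN*
    node's entire subtree is ZN* (see the note above), filtering by label
    removes complete subtrees."""
    return {v: Y_MAP.get(lab, lab)
            for v, lab in z_labels.items() if lab not in ("ZNCT", "ZNPF")}

# e.g. x_view({0: "PF", 1: "ZCT", 2: "ZNCT"}) == {0: "PF", 1: "CT", 2: "CT"}
```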
We now prove that this coupling has the correct marginals, i.e., the two processes \(\{X_{i}\}\) and \(\{Y_{j}\}\) are indeed CKPs with the correct parameters.
**Lemma 4.2**.: _Let \(\{X_{i}\}\) be generated according to the process above. Then \(\{X_{i}\}\sim(p_{1},k)\)-simple CKP._
Proof.: The initial state is consistent with the \((p_{1},k)\)-simple CKP process. We now verify this for the evolution, where we note that unlike \(\{Y_{i}\}\), for every time step that the \(Z\)-process is updated, the \(X\)-process is also updated.
* **Choose parent node:** We note that \(Z_{t}^{T}\), the set of \(T=\{\mathsf{CT},\mathsf{CF},\mathsf{ZCT},\mathsf{ZCF},\mathsf{ZNCT}\}\) nodes in \(Z_{t}\), is the same as \(X_{t}^{\mathsf{PT}}\), the set of \(\mathsf{PT}\) nodes in \(X_{t}\). This also implies that the \(T\)-degree of a node \(u\) in \(Z_{t}\) is equal to the \(\mathsf{PT}\)-degree of this node in \(X_{t}\): \(\deg_{T}^{Z_{t}}(u)=\deg_{\mathsf{PT}}^{X_{t}}(u)\). Therefore, a parent \(u\) is chosen among \(Z_{t}^{T}=X_{t}^{\mathsf{PT}}\) with probability proportional to \(1+\deg_{T}^{Z_{t}}(u)=1+\deg_{\mathsf{PT}}^{X_{t}}(u)\), as required.
* **Add and label the new node:** Same process as the \((p_{1},k)\)-simple CKP.
* **Error detection phase:** In the \(\{Z_{j}\}\) process, with probability \(p_{1}\), a check is performed which checks a path of length up to \(k\) and stops at the first node found that is labelled \(\mathsf{PF}\), \(\mathsf{CF}\), \(\mathsf{ZCF}\), or \(\mathsf{ZNPF}\) (i.e., \(\mathsf{PF}\) or \(\mathsf{CF}\) in the \(\{X_{i}\}\) process); once such a node is found the entire path is labelled \(\mathsf{PF}\). Along the way, nodes may be relabelled from \(\mathsf{CT}\) to \(\mathsf{ZCT}\), but this does not change the labelling of nodes in the \(\{X_{i}\}\) process. Therefore, with probability \(p_{1}\), a check is performed in the \(\{X_{i}\}\) process that is consistent with how a check is performed in a \((p_{1},k)\)-simple CKP, as required. With probability \(p_{2}-p_{1}\), the \(\{Z_{j}\}\) process also performs a check. However, this check can only re-label \(\mathsf{CT}\) nodes as \(\mathsf{ZCT}\) nodes and \(\mathsf{CF}\) nodes as \(\mathsf{ZCF}\) nodes, which does not change the labelling of the coupled \(\{X_{i}\}\) process.
We have verified that the initialization and state evolution with the checking procedure all align with the \((p_{1},k)\)-simple CKP process.
We now show that \(\{Y_{j}\}\sim\mathsf{CKP}(p_{2},k)\). Recall that for any time step \(t\) of the \(\{Z_{i}\}\) process, \(\textsc{Y-time}(t)\) is equal to the number of steps \(i\leq t\) for which \(\textsc{Update}=1\). That is, the states \(Z_{t}\) and \(X_{t}\) correspond to the state \(Y_{s}\) in the coupling, for \(s:=\textsc{Y-time}(t)\). To justify the correctness of the checking procedure for the \(\{Y_{j}\}\) process, we first state the following lemma that follows simply by construction.
**Lemma 4.3**.: _If a node \(v\) is labelled \(\mathsf{ZCT}\) or \(\mathsf{ZCF}\) in \(Z_{t}\), all ancestor nodes of \(v\) in the path to the nearest \(\mathsf{PF}\) node must be labelled \(\mathsf{ZCT}\) or \(\mathsf{ZCF}\). Furthermore, if a node \(v\) is labelled \(\mathsf{ZNCT}\), there exists an \(\ell\in\mathbf{N}\) such that the closest \(\ell\) ancestor nodes from \(v\) are labelled \(\mathsf{ZNCT}\) and all subsequent ancestor nodes of \(v\) on the path to the nearest \(\mathsf{PF}\) node are labelled \(\mathsf{ZCT}\) or \(\mathsf{ZCF}\)._
Proof.: If a node \(v\) is \(\mathsf{ZCT}\) or \(\mathsf{ZCF}\) in \(Z_{t}\), then it was originally created \(\mathsf{CT}\) or \(\mathsf{CF}\) in \(\{Z_{i}\}\). By construction, \(v\) became a zombie when a check through it found a \(\mathsf{ZCT}\) node (if \(U\in[0,p_{1}]\)) or a \(\mathsf{CF}\), \(\mathsf{ZCF}\), \(\mathsf{ZCT}\), or \(\mathsf{PF}\) node (if \(U\in(p_{1},p_{2}]\)). When this occurs, all nodes along the path to the node found are marked \(\mathsf{ZCT}\) or \(\mathsf{ZCF}\), so all ancestor nodes of \(v\) on the path to the nearest \(\mathsf{PF}\) node must be labelled \(\mathsf{ZCT}\) or \(\mathsf{ZCF}\). A similar argument can be given if \(v\) is \(\mathsf{ZNCT}\): it was then connected to a \(\mathsf{ZCT}\), \(\mathsf{ZCF}\), or \(\mathsf{ZNCT}\) parent when created.
**Lemma 4.4**.: _Let \(\{Y_{i}\}\) be generated according to the process above. Then \(\{Y_{i}\}\sim(p_{2},k)\)-simple CKP._
Proof.: The initial state is consistent with the \((p_{2},k)\)-simple CKP process. We verify that this is also the case for state evolution.
For a time step \(i\in[0,t]\) in which \(Z_{i}\) is updated, the \(Y\)-process is only updated when \(\textsc{Update}=1\). Indeed, \(\textsc{Update}=0\) when the parent node is in \(\{\mathsf{ZCT},\mathsf{ZCF},\mathsf{ZNCT}\}\); the new node added is therefore \(\mathsf{ZNCT}\), and all these nodes are removed in the coupled \(Y\)-process. Therefore, we only need to focus on the evolution of the \(Y\)-process between two steps in the \(Z\)-process when \(\textsc{Update}=1\); all steps where \(\textsc{Update}=0\) do not impact the resulting \(Y\)-process. Call these two steps \(Y_{s}\) and \(Y_{s+1}\), where \(s=\textsc{Y-time}(t)\). Recall that to go from the \(Z\) to \(Y\) process, we relabel \(\{\mathsf{ZCT},\mathsf{ZCF}\}\Rightarrow\mathsf{PF}\) and remove all \(\mathsf{ZNCT}\) and \(\mathsf{ZNPF}\) nodes.
* **Choose parent node:** When \(\textsc{Update}=1\), the new node connects to a \(\mathsf{CT}\) or \(\mathsf{CF}\) parent node. We want to verify that this connection is distributed according to preferential attachment _in the graph corresponding to \(Y_{s}\)_. Note that since all children of a \(\mathsf{CT}\) or \(\mathsf{CF}\) node in \(\{Z_{i}\}\) must also be \(\mathsf{CT}\) or \(\mathsf{CF}\), the degree of a \(\{\mathsf{CT},\mathsf{CF}\}\) node in \(Z_{t}\) equals the degree of the corresponding node in \(Y_{s}\). Summing over the preferential attachment weights of the \(\mathsf{PT}\) nodes in \(Y_{s}\), we get \[Z:=\sum_{u\in Y_{s}^{\mathsf{PT}}}\left(\deg_{\mathsf{PT}}^{Y_{s}}(u)+1\right)=\sum_{w\in Z_{t}^{\mathsf{CF}}}(\deg_{T}^{Z_{t}}(w)+1)+\sum_{w\in Z_{t}^{\mathsf{CT}}}(\deg_{T}^{Z_{t}}(w)+1).\] Therefore, conditioning on the parent node being \(\mathsf{CF}\) or \(\mathsf{CT}\) (when \(\textsc{Update}=1\)), we find that \(u\) is chosen as the parent node of the new node in \(Y_{s}\) with probability \((\deg_{T}^{Z_{t}}(u)+1)/Z=(\deg_{\mathsf{PT}}^{Y_{s}}(u)+1)/Z\) as required.
* **Add and label the new node:** Same as in the \((p_{2},k)\)-simple CKP.
* **Error detection phase:** Recall that if a node is labelled \(\mathsf{PF}\) in \(Y_{s}\), it must have a label among \(\{\mathsf{PF},\mathsf{ZCT},\mathsf{ZCF}\}\) in \(Z_{t}\). With probability \(p_{1}\), if the new node \(v\) is labelled \(\mathsf{CT}\), a check is performed which checks a path of length up to \(k\) and stops at the first node that is labelled \(\mathsf{PF}\), \(\mathsf{CF}\), or \(\mathsf{ZCF}\) in \(Z_{t}\) (i.e., \(\mathsf{PF}\) or \(\mathsf{CF}\) in \(Y_{s}\)); if such a node is found, the entire path is labelled \(\mathsf{PF}\). Along the way, if a \(\mathsf{ZCT}\) ancestor is found, nodes on the path may be relabelled from \(\mathsf{CT}\) to \(\mathsf{ZCT}\); in \(Y_{s}\), this corresponds to finding a \(\mathsf{PF}\) ancestor and relabelling the corresponding nodes from \(\mathsf{CT}\) to \(\mathsf{PF}\). Even though the check continues in \(Z_{t}\) after the \(\mathsf{ZCT}\) ancestor was found, Lemma 4.3 guarantees that all ancestor nodes of this \(\mathsf{ZCT}\) ancestor are themselves \(\mathsf{ZCT}\) or \(\mathsf{ZCF}\), so they are already marked \(\mathsf{PF}\) in \(Y_{s}\). We can disregard the case where the new node in \(Z_{t}\) has label \(\mathsf{ZNCT}\), as \(\textsc{Update}=0\) and these nodes are removed in \(Y_{s}\). Therefore this check has the desired behaviour. With probability \(p_{2}-p_{1}\), if \(v\) is \(\mathsf{CT}\), we also have a check which stops at the first \(\mathsf{CF}\), \(\mathsf{ZCF}\), \(\mathsf{ZCT}\), or \(\mathsf{PF}\) node (i.e., \(\mathsf{CF}\) or \(\mathsf{PF}\) in \(Y_{s}\)), and makes nodes along the checked path either \(\mathsf{ZCF}\) or \(\mathsf{ZCT}\), i.e., \(\mathsf{PF}\) in \(Y_{s}\). This is also consistent with the \((p_{2},k)\)-simple CKP.
We have verified that the initialization and state evolution with the checking procedure all align with the \((p_{2},k)\)-simple CKP. \(\blacksquare\)
**Proof of monotonicity with respect to \(p\).** Using this coupling, we can now prove the following required lemma for error elimination with \(k\) fixed.
**Lemma 4.5**.: _For all \(p_{2}\geq p_{1}\) and all \(k\), if the error effect in the \((p_{1},k)\)-simple CKP is eliminated completely, then the error effect in the \((p_{2},k)\)-simple CKP is eliminated completely._
Proof.: Assume that for every \((p_{1},k)\)-simple CKP, the error effect is eliminated completely: there exists some time \(t\) such that the sub-tree of the first node, i.e., the entire tree \(\mathcal{T}_{t}\), is entirely marked \(\mathsf{PF}\). Given a \((p_{2},k)\)-simple CKP, we consider its generation according to the \((\{Z_{i}\},\{X_{i}\},\{Y_{i}\})\) coupling defined at the start of Section 4.2. Let \(t\) be the time at which the \(X_{t}=(\mathcal{T}_{t},L_{t})\) state is entirely marked \(\mathsf{PF}\). Then, let \(s=\textsc{Y-time}(t)\) be the corresponding time step of the \(Y\)-process. Since every \(\mathsf{PF}\) node in \(X_{t}\) is either \(\mathsf{PF}\) or \(\mathsf{ZNPF}\) in \(Z_{t}\), it consequently is either \(\mathsf{PF}\) or does not exist in \(Y_{s}\). Therefore, by construction \(Y_{s}\) is also entirely marked \(\mathsf{PF}\), and the error is eliminated. \(\blacksquare\)
### Monotonicity with respect to \(k\)
We now present the coupling of the \((p,k_{1})\)-simple CKP and \((p,k_{2})\)-simple CKP processes for \(k_{1}\leq k_{2}\). This coupling follows the same overarching structure as the coupling used to show monotonicity with respect to \(p\), and so we will highlight the components of the coupling that differ in this setting. The key difference arises in the _checking procedure_ utilized in the constructed \(Z\)-process, which must handle the checking for different values of \(k\) in the coupled processes differently than how the checking for different \(p\) values was handled.
Let the \(k_{1}\)-CKP be given by the series of states \(\{X_{i}\}\), and let the \(k_{2}\)-CKP be given by the series of states \(\{Y_{i}\}\). We once again define the coupling \((\{Z_{i}\},\{X_{i}\},\{Y_{i}\})\) and the mapping \(\textsc{Y-time}(t)\), with the key difference in the state evolution as follows.
**State evolution.** Given \(\{Z_{0},Z_{1},\ldots,Z_{t}\}\), we define \(Z_{t+1}\) through the following series of steps. The first four steps are the same as those in Section 4.2: choose a parent node, add the new node, specify the \(\textsc{Update}\) value, and label the new node. The error detection phase is different in this setting. Let \(v_{0}=v\) be the new node added, and for every \(v_{i}\) in the path checked, let \(v_{i+1}\) denote its parent.
* **Error detection phase:** With probability \(p\), a check is performed, as follows.
  * If the new node is labelled \(\mathsf{CT}\), perform the following procedure. Set an indicator variable \(\textsc{Zombie}\gets 0\). For \(i=0,\ldots,k_{1}\):
    * If \(v_{i}\) is \(\mathsf{CF}\), \(\mathsf{ZCF}\), or \(\mathsf{PF}\), label all the nodes \(\{v_{0},\ldots,v_{i}\}\) as \(\mathsf{PF}\) and stop the check. Else, continue the checking procedure.
    * If \(\textsc{Zombie}=0\), and if \(v_{i}\) is \(\mathsf{ZCT}\), relabel all the nodes \(\{v_{0},\ldots,v_{i}\}\) from \(\mathsf{CT}\) and \(\mathsf{CF}\) to \(\mathsf{ZCT}\) and \(\mathsf{ZCF}\) correspondingly. Set \(\textsc{Zombie}\gets 1\). Continue the checking procedure.
    Next, for \(i\) from \(k_{1}+1\) to \(k_{2}\):
    * If \(v_{i}\) is \(\mathsf{CF}\), \(\mathsf{ZCF}\), \(\mathsf{ZCT}\), or \(\mathsf{PF}\), relabel all the nodes \(\{v_{0},\ldots,v_{i}\}\) from \(\mathsf{CT}\) and \(\mathsf{CF}\) to \(\mathsf{ZCT}\) and \(\mathsf{ZCF}\) correspondingly, and stop the check. Else, continue the checking procedure.
  * Otherwise, if the new node is labelled \(\mathsf{ZNCT}\), then for \(i=0,\ldots,k_{1}\):
    * If \(v_{i}\) is \(\mathsf{CF}\), \(\mathsf{ZCF}\), or \(\mathsf{PF}\), label all the nodes \(\{v_{0},\ldots,v_{i}\}\) as \(\mathsf{PF}\) and stop the check.
    * If \(v_{i}\) is \(\mathsf{ZNPF}\), then all nodes on the path checked so far were \(\mathsf{ZNCT}\); relabel all of these nodes as \(\mathsf{ZNPF}\) and stop the check. Else, continue the checking procedure.
Finally, the processes of creating the state \(Z_{t+1}\) and \(\textsc{Y-time}\), and of forming \(\{X_{i}\}\) and \(\{Y_{j}\}\) from \(\{Z_{i}\}\), are the same as in Section 4.2.
We now state the lemmas regarding the correctness of the \(\{X_{i}\}\) and \(\{Y_{j}\}\) processes.
**Lemma 4.6**.: _Let \(\{X_{i}\}\) be generated according to the process above. Then \(\{X_{i}\}\sim(p,k_{1})\)-simple CKP._
Proof.: The only component of the \(\{X_{i}\}\) process that needs to be analyzed is the error detection phase of the state evolution. All other components of the correctness proof follow from the proof of Lemma 4.2, due to the similarities between the couplings for \(p\) values and for \(k\) values. Recall that if a node is labelled \(\mathsf{CF}\) in \(X_{t}\), it must have had a label from the set \(\{\mathsf{CF},\mathsf{ZCF}\}\) in \(Z_{t}\) (there are no \(\mathsf{ZNCF}\) nodes in the simple process). If a node is labelled \(\mathsf{PF}\) in \(X_{t}\), it must have been labelled \(\mathsf{PF}\) or \(\mathsf{ZNPF}\) in \(Z_{t}\).
With probability \(p\), a check is performed which checks a path of length up to \(k_{1}\) and stops at the first node found that is labelled \(\mathsf{PF}\), \(\mathsf{CF}\), \(\mathsf{ZCF}\) or \(\mathsf{ZNPF}\) (so, \(\mathsf{PF}\) or \(\mathsf{CF}\) in the \(\{X_{i}\}\) process); once such a node is found the entire path is labelled \(\mathsf{PF}\). Along the way, nodes may be relabelled from \(\mathsf{CT}\) to \(\mathsf{ZCT}\), but this does not change the labelling of nodes in the \(\{X_{i}\}\) process. The check also examines the following \(k_{2}-k_{1}\) nodes, but this process only relabels nodes from \(\mathsf{CT}\) to \(\mathsf{ZCT}\) and from \(\mathsf{CF}\) to \(\mathsf{ZCF}\), which does not change the labelling of nodes in the \(\{X_{i}\}\) process.
Therefore, with probability \(p\), a check is performed in the \(\{X_{i}\}\) process that is consistent with how a check is performed in a \((p,k_{1})\)-simple CKP. We can conclude that \(\{X_{i}\}\sim(p,k_{1})\)-simple CKP.
**Lemma 4.7**.: _Let \(\{Y_{j}\}\) be generated according to the process above. Then \(\{Y_{j}\}\sim(p,k_{2})\)-simple CKP._
Proof.: We again only need to analyze the error detection phase of the state evolution of the \(\{Y_{j}\}\) process. To align with a check in a \((p,k_{2})\)-simple CKP, we need to show that the error detection phase performs a check with probability \(p\), and that such a check examines the path of ancestor nodes of up to length \(k_{2}\), stopping at the first node labelled \(\mathsf{CF}\) or \(\mathsf{PF}\) in \(Y_{j}\). Suppose that \(j=\textsc{Y-time}(t)\). Recall that if a node is labelled \(\mathsf{CF}\) in \(Y_{j}\), it must have had a label of \(\mathsf{CF}\) in \(Z_{t}\). If a node is labelled \(\mathsf{PF}\) in \(Y_{j}\), it must have had a label in the set \(\{\mathsf{PF},\mathsf{ZCT},\mathsf{ZCF}\}\) in \(Z_{t}\).
With probability \(p\), if the new node is labelled \(\mathsf{CT}\), a check is performed which checks a path of length up to \(k_{1}\) and stops at the first node that is labelled \(\mathsf{PF}\), \(\mathsf{CF}\), or \(\mathsf{ZCF}\) (so, \(\mathsf{PF}\) or \(\mathsf{CF}\) in the \(\{Y_{j}\}\) process); once such a node is found the entire path is labelled \(\mathsf{PF}\). Along the way, nodes may be relabelled from \(\mathsf{CT}\) to \(\mathsf{ZCT}\); in \(Y_{j}\), this relabels the corresponding nodes from \(\mathsf{CT}\) to \(\mathsf{PF}\). This occurs when a \(\mathsf{ZCT}\) ancestor of a node has been found (i.e., a \(\mathsf{PF}\) ancestor node in \(Y_{j}\)), aligning with the desired checking mechanism for \(Y_{j}\).
Even though the check continues in \(Z_{t}\) after the \(\mathsf{ZCT}\) ancestor was found, Lemma 4.3 guarantees that all ancestor nodes of this \(\mathsf{ZCT}\) ancestor are themselves \(\mathsf{ZCT}\) or \(\mathsf{ZCF}\), so they are already marked \(\mathsf{PF}\) in \(Y_{j}\). We can disregard the case where the new node has label \(\mathsf{ZNCT}\) (i.e., its parent had a label in \(\{\mathsf{ZCT},\mathsf{ZCF},\mathsf{ZNCT}\}\)), as these nodes are removed when moving from \(Z_{t}\) to \(Y_{j}\).
For the remaining \(k_{2}-k_{1}\) steps of the check, nodes along the path are checked, stopping at the first node that is labelled \(\mathsf{CF}\), \(\mathsf{ZCT}\), \(\mathsf{ZCF}\), or \(\mathsf{PF}\) (i.e., \(\mathsf{CF}\) or \(\mathsf{PF}\) in \(Y_{j}\)). This makes nodes along the checked path either \(\mathsf{ZCF}\) or \(\mathsf{ZCT}\), i.e., \(\mathsf{PF}\) in \(Y_{j}\).
Altogether, the error detection phase is consistent with that of the \((p,k_{2})\)-simple CKP defined by \(Y_{j}\). We can conclude that \(\{Y_{j}\}\sim(p,k_{2})\)-simple CKP.
**Proof of monotonicity with respect to \(k\).** We prove the following lemma regarding the monotonicity of error elimination with respect to the checking parameter \(k\).
**Lemma 4.8**.: _For all \(k_{2}\geq k_{1}\) and all \(p\), if the error effect in the \((p,k_{1})\)-simple CKP is eliminated completely, then the error effect in the \((p,k_{2})\)-simple CKP is eliminated completely._
Proof.: The proof is exactly the same as the proof of Lemma 4.5, since the argument entirely relied on comparing the labelling of nodes in the coupled \(X\) and \(Y\) processes, which carries over to this setting.
|
2301.00162 | Diagnosis of ultrafast ultraintense laser pulse characteristics by
machine-learning-assisted electron spin | Rapid development of ultrafast ultraintense laser technologies continues to
create opportunities for studying strong-field physics under extreme
conditions. However, accurate determination of the spatial and temporal
characteristics of a laser pulse is still a great challenge, especially when
laser powers higher than hundreds of terawatts are involved. In this paper, by
utilizing the radiative spin-flip effect, we find that the spin depolarization
of an electron beam can be employed to diagnose characteristics of ultrafast
ultraintense lasers with peak intensities around $10^{20}$-$10^{22}$~W/cm$^2$.
With three shots, our machine-learning-assisted model can predict,
simultaneously, the pulse duration, peak intensity, and focal radius of a
focused Gaussian ultrafast ultraintense laser (in principle, the profile can be
arbitrary) with relative errors of $0.1\%$-$10\%$. The underlying physics and
an alternative diagnosis method (without the assistance of machine learning)
are revealed by the asymptotic approximation of the final spin degree of
polarization. Our proposed scheme exhibits robustness and detection accuracy
with respect to fluctuations in the electron beam parameters. Accurate
measurements of the ultrafast ultraintense laser parameters will lead to much
higher precision in, for example, laser nuclear physics investigations and
laboratory astrophysics studies. Robust machine learning techniques may also
find applications in more general strong-field physics scenarios. | Zhi-Wei Lu, Xin-Di Hou, Feng Wan, Yousef I. Salamin, Chong Lv, Bo Zhang, Fei Wang, Zhong-Feng Xu, Jian-Xing Li | 2022-12-31T09:17:07Z | http://arxiv.org/abs/2301.00162v1 | Diagnosis of ultrafast ultraintense laser pulse characteristics by machine-learning-assisted electron spin
###### Abstract
Rapid development of ultrafast ultraintense laser technologies continues to create opportunities for studying strong-field physics under extreme conditions. However, accurate determination of the spatial and temporal characteristics of a laser pulse is still a great challenge, especially when laser powers higher than hundreds of terawatts are involved. In this paper, by utilizing the radiative spin-flip effect, we find that the spin depolarization of an electron beam can be employed to diagnose characteristics of ultrafast ultraintense lasers with peak intensities around \(10^{20}\)-\(10^{22}\) W/cm\({}^{2}\). With three shots, our machine-learning-assisted model can predict, simultaneously, the pulse duration, peak intensity, and focal radius of a focused Gaussian ultrafast ultraintense laser (in principle, the profile can be arbitrary) with relative errors of 0.1%-10%. The underlying physics and an alternative diagnosis method (without the assistance of machine learning) are revealed by the asymptotic approximation of the final spin degree of polarization. Our proposed scheme exhibits robustness and detection accuracy with respect to fluctuations in the electron beam parameters. Accurate measurements of the ultrafast ultraintense laser parameters will lead to much higher precision in, for example, laser nuclear physics investigations and laboratory astrophysics studies. Robust machine learning techniques may also find applications in more general strong-field physics scenarios.
Footnote †: These authors have contributed equally to this work.
## I Introduction
Recent rapid advances in ultrafast ultraintense laser technology [1; 2] have opened up broad prospects for vital investigations in laser-plasma physics [3; 4; 5], laser nuclear physics [6; 7], laboratory astrophysics [8; 9] and particle physics [10; 11]. In particular, laser systems of peak intensities in the hundreds of terawatts to multi-petawatts have achieved laboratory intensities of the order of \(10^{20}-10^{22}\) W/cm\({}^{2}\), recently even reaching \(\sim 10^{23}\) W/cm\({}^{2}\) with a pulse duration of tens-of-femtoseconds [12]. These achievements are paving the way for explorations of strong-field quantum electrodynamics (SF-QED), among other significant applications. Meanwhile, the unprecedented laser intensities not only cause large fluctuations in the laser output (\(\sim 1\%-20\%\) in the peak power [12]) but also make accurate determination of the laser parameters increasingly difficult. These parameters play key roles throughout the laser-driven physical processes. For instance, in detection of the quantum radiation reaction effects, energy loss of the scattered electron beam serves as the SF-QED signal and is highly correlated with the laser intensity and pulse duration [13; 14]. In the fast ignition of inertial confinement fusion, specific and precise pulse duration and intensity (\(\sim 10^{20}\) W/cm\({}^{2}\)) of the ignition laser are required for improving the energy conversion from laser to fuel and suppressing uncertainties in the laser-plasma interactions [6; 15]. In laser-plasma acceleration, the peak intensity and pulse duration affect the electron and proton acceleration efficiency and stability [16; 17; 18]. Uncertainties in the focal spot, pulse duration, and intensity of the laser pulse can lead to significant deviations from the parameters present in experiments. Thus, accurate determination of the spatiotemporal properties of the ultrafast ultraintense laser pulses is a fundamental concern for today's laser-matter interaction experiments.
Current schemes to measure the laser spatiotemporal characteristics are based on separate measurements of the focal spot radius (spatial) and the pulse duration (temporal) under low pulse energy, which can minimize the damage to the optical instruments used, followed by extrapolation of the results to the case of full laser power [19; 20; 21]. Due to the nonlinear effects in the amplification and focusing systems, however, the laser intensity obtained with this method may significantly deviate from the exact value [22; 23; 24]. By comparison, more reliable parameter diagnosis may be achieved via laser-matter interactions, making it possible to directly extract the spatial and temporal information of the ultrafast ultraintense (\(I_{0}\gtrsim 10^{20}\) W/cm\({}^{2}\)) laser pulses. Three mainstream diagnostic mechanisms are
currently in use. First, atomic tunneling ionization, in which the nonlinear dependence of the multiple-tunneling-ionization rate on the field strength can only be used to diagnose the laser peak intensity, with an accuracy of the order of \(30\%-50\%\). Moreover, the barrier suppression effect degrades the accuracy, and the atom species must be carefully chosen to match the laser intensity requirements [20; 25; 26]. Second, vacuum acceleration of charged particles, in which the laser peak intensity, focal spot size, and pulse duration can be retrieved from the particle spectral analysis. Here, though, the prepulse and plasma effects and the low statistics substantially influence the final spectra and, therefore, one still needs more elaborate considerations [19; 27; 28; 29; 30]. Third, SF-QED effects, which, e.g., predict the laser intensity and pulse duration separately via analysis of the spectra of electrons [31; 32], photons [33; 34; 35; 21; 32] and positrons [36], with detection accuracy of the order of \(10\%-50\%\) for laser intensities within the range of \(10^{20}-10^{23}\) W/cm\({}^{2}\). Apparently, these methods either require separate diagnoses or can only measure low-precision laser parameter values (the inaccuracy can reach \(\simeq 50\%\)). Thus, new detection methods, which can achieve high accuracy and simultaneously diagnose the laser intensity, pulse duration, and focal information, are still in great demand.
Recent studies have indicated that spin polarization of the electrons is sensitive to the field strength and profile of the intense laser pulse and, thus, can be manipulated by a laser pulse via the radiative spin-flip effect [37; 38; 39]. These findings have motivated us to explore the possibilities of decoding the pulse information from the spin-polarization of the laser-scattered electron beam.
For decades now, machine learning (ML) techniques have been widely used in particle physics [40] and astrophysics [41], with their impact continuously growing on multiscale, highly nonlinear physics such as condensed matter physics and quantum materials science [42; 43; 44]. ML-assisted methods are more specialized in comprehending multi-modal data (acoustic, visual, and numerical) and optimizing nonlinear extreme physical systems than humans [45] and, thus, can save much time and human effort when integrated into working practices [46; 47]. In particular, the data-driven methods are reshaping our exploration of extreme physical systems, e.g., interaction of the ultrafast ultraintense laser with materials [48]. These extreme conditions in the laboratory millimeter-sized plasmas are epitomes of astrophysical scenarios [49]. Large quantities of data from such experiments or simulations need to be systematically managed. For instance, around 150 GB of data can be generated in each shot of the National Ignition Facility (NIF) and over 70 GB per minute in the Linac-Coherent-Light-Source (LCLS) [50]. Handling data this size, from both experiments and simulations, is reaching the limits of conventional methods and can obscure the physics behind them. By contrast, the ML-assisted method can be data-driven and run in parallel on large-scale CPU or GPU platforms to extract internal correlations between the desired physical quantities.
In this paper, we propose an ML-assisted method to directly diagnose the spatiotemporal characteristics (peak intensity, focal spot size, and pulse duration) of a linearly polarized (LP) laser pulse, based on the spin-analysis of nonlinear Compton-scattered electron beams. The interaction scenario and framework of the ML-assisted diagnosis method are shown in Fig. 1. When a transversely polarized (probe) beam of electrons (mean energy \(\varepsilon_{i}\), beam radius \(w_{e}\), and degree of polarization \(\bar{S}_{i}\)) propagates along the \(z\) direction and collides with the ultrafast ultraintense laser pulse to be diagnosed, electrons can undergo strong nonlinear Compton scattering (NCS) [10]. Due to the radiative spin-flip effect [37; 38; 51], the degree of polarization changes from an initial \(\bar{S}_{i}\) to a final \(\bar{S}_{f}\). The differences (i.e., degree of depolarization) \(\delta\bar{S}=\bar{S}_{i}-\bar{S}_{f}\) from three different beams may be used to determine the laser pulse parameters: normalized intensity \(\xi\equiv eE_{0}/m\omega_{0}\), focal radius \(w_{0}\), and pulse duration \(\tau\), where \(-e\) and \(m\) are the charge and mass of the electron, \(E_{0}\) and \(\omega_{0}\) are the electric field strength and frequency of the laser field, respectively. Relativistic units with \(c=\hbar=1\) will be used throughout. In addition to those fixed laser parameters, \(\delta\bar{S}\) is related to the spatial distribution (beam radius \(w_{e}\)), average energy \(\varepsilon_{i}\), and initial degree of polarization \(\bar{S}_{i}\) of the electron beam. However, a one-to-one mapping between the beam parameters \((\varepsilon_{i},w_{e},\bar{S}_{i},\bar{S}_{f})\) and the laser parameters \((\xi,w_{0},\tau)\) can be a formidable task, because only one output is of relevance, i.e., \(\bar{S}_{f}\). In order to determine the three unknown laser parameters \((\xi,w_{0},\tau)\) simultaneously, at least three sets of output values of \(\bar{S}_{f}\) are required. Therefore, three independent beams with different parameter combinations are employed here. These complex multidimensional relationships can be properly handled by the Neural Network topology shown in Fig. 1. Note that this method can induce a spin depolarization of \(\simeq 30\%\) for 1-GeV electrons, and \(\simeq 40\%\) for 2-GeV ones (laser parameters \(\xi\simeq 80\) and \(\tau=14T_{0}\)). Currently, available spin polarimetries for electrons are based on Mott scattering [52], Moller scattering [53], linear Compton scattering [54], or more efficient NCS [55]. Some recent studies indicate that the detection precision of NCS-based polarimetry can reach about \(0.3\%\)[55], which qualifies the spin-based method as a new type of high-accuracy diagnostic scheme for ultrafast ultraintense laser pulses.
In Sec. II, a brief description of the Monte-Carlo (MC) simulation method of spin-resolved NCS will be given, together
Figure 1: Left: Three different electron beams with parameters \((\varepsilon_{i},w_{e},\bar{S}_{i})\) scatter off the same laser pulse and produce final spin degrees of polarization \(\bar{S}_{f}\). Right: Topology of the back-propagation neural network (BPNN) used for the parameter prediction which takes the \(\vec{I}_{j,j=1,2,3}=\{\varepsilon_{i},w_{e},S_{i},\bar{S}_{f},\ln(\bar{S}_{f}/ \bar{S}_{i})\}\) as input data, and produces \((\xi,w_{0},\tau)\) as output; see details in Sec. II.2.
with the simulation parameters. This is followed by introducing our laser-parameter retrieval technique based on the ML algorithms (see Fig. 1) and the associated asymptotic formulas. Numerical results and a brief discussion will be given in Sec. III. Our conclusions will be presented in Sec. IV.
## II Spin-based laser-parameter diagnostic methods
As an illustrative example, diagnosis of a tightly focused laser with a double-Gaussian (spatial and temporal) distribution is considered. In principle, the envelope of the laser can be arbitrary, but should be predetermined via experimental methods, for instance, from a low-power splitting beam. Once the envelope form is known, the following methods can be used to retrieve the laser pulse parameters from the spin diagnosis of the scattered electrons.
### Spin-resolved NCS and interaction scenario
Our analysis of the radiative spin-flip effect is based on the MC simulation method proposed in [37; 56], in which the spin-resolved probability of NCS in the laser-beam scattering is considered in the local constant field approximation (LCFA) [37; 57]. After emission of a photon, the electron spin state collapses into one of its basis states defined with respect to an instantaneous spin quantization axis (SQA) chosen along the magnetic field in the rest frame of the electron. In Fig. 1, the laser is linearly polarized along the \(x\)-direction, so its magnetic field component is \(B_{y}\). The SQA tends to be anti-parallel to the magnetic field in the rest frame of the electron. Depolarization amounts to the electron spin acquiring a certain spin polarization in the \(y\)-direction, which gets cancelled from the net polarization by the periodic magnetic field, i.e., \(\bar{S}_{f,y}\approx 0\). Therefore, we focus our analysis, in what follows, on the electron polarization in the \(x\)-direction. In NCS, the invariant parameter characterizing the quantum effects is \(\chi\equiv e\sqrt{-(F_{\mu\nu}p^{\nu})^{2}}/m^{3}\)[57; 58], where \(F_{\mu\nu}\) and \(p^{\nu}\) denote the electromagnetic field tensor and the four-momentum of the electron, respectively. In a colliding geometry, \(\chi\approx 2\xi\gamma_{e}\omega_{0}/m\), where \(\gamma_{e}\) denotes the electron's Lorentz factor. To excite the radiative spin-flip process, \(\chi\) should be in the range of \(0.01\) to \(1\), over which the nonlinear Breit-Wheeler pair-production can be suppressed.
The LP laser parameter set for the training data includes: wavelength \(\lambda_{0}=0.8\)\(\mu\)m, focal radius \(w_{0}=[2,3,4,5]\lambda_{0}\), peak intensity \(\xi=[10,15,20,30,40,45,60,80]\), and pulse duration \(\tau=[2,6,10,14]T_{0}\), with \(T_{0}\) denoting the laser period. The probe electron beam has a polar angle \(\theta_{e}=\pi\), azimuthal angle \(\phi_{e}=0\), and angular divergence \(\sigma_{\theta}=0.3\) mrad. The initial kinetic energies are \(\varepsilon_{i}=[0.5,1,1.5,2]\) GeV, with relative energy spread \(\sigma_{e}/\varepsilon_{i}=0.05\), and initial average degree of spin-polarization along the \(x\)-direction \(\bar{S}_{i,x}=[0.6,0.8,1.0]\) (here, \(\chi_{\rm max}\lesssim 1\), i.e., the pair-production effect on the final electron distribution is negligible for the present parameters). The beam radius \(w_{e}=[1,2,3,4]\lambda_{0}\), beam length \(L_{e}=5\lambda_{0}\), and the total number of electrons is \(5\times 10^{5}\) with transversely Gaussian and longitudinally uniform distributions, attainable by current laser wakefield accelerators [3].
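As a sanity check of these parameter ranges, the plane-wave estimate \(\chi\approx 2\xi\gamma_{e}\omega_{0}/m\) can be evaluated over the training grid. The quick sketch below is ours; since the estimate ignores focusing, the most extreme grid corner slightly exceeds \(\chi=1\).

```python
# Evaluate chi ~ 2 * xi * gamma_e * omega_0 / m over the training grid.
HBAR_OMEGA0_EV = 12398.0 / 8000.0      # photon energy for lambda_0 = 0.8 um [eV]
M_E_EV = 0.511e6                       # electron rest energy [eV]
W0_OVER_M = HBAR_OMEGA0_EV / M_E_EV    # omega_0 / m ~ 3.0e-6

for eps_gev in (0.5, 1.0, 1.5, 2.0):
    gamma_e = eps_gev * 1e9 / M_E_EV
    for xi in (10, 40, 80):
        chi = 2 * xi * gamma_e * W0_OVER_M
        print(f"eps_i = {eps_gev} GeV, xi = {xi}: chi ~ {chi:.3f}")
```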
### Neural Network assisted diagnosis
Decoding the spatiotemporal characteristics of the ultrafast ultraintense laser from information carried by the scattered electron beam is an inverse transformation that requires multi-dimensional input and output. To make full use of the electron beam data, we build a standard BPNN via PyTorch to train and predict the scattering laser parameters [59]. The input data is composed of the energy, beam radius, initial and final average spins, and logarithm of the ratio of final spin to initial spin of the electron beam, in the vector \(\vec{I}\equiv[\varepsilon_{i},w_{e},\bar{S}_{i},\bar{S}_{f},\ln(\bar{S}_{f}/\bar{S}_{i})]\); see Fig. 1. About \(1000\) sets of input data are obtained via the MC simulation and rearranged/recombined to about \(3\times 10^{4}\) sets for training. Then the input data \((\vec{I}_{1},\vec{I}_{2},\vec{I}_{3})\) is normalized via the StandardScaler function. After random permutation, the input information is preprocessed by the second-order polynomial feature function (PolynomialFeatures) to construct implicit connections between them.
In our BPNN, we choose eight fully connected hidden layers and the corresponding numbers of nodes are (128, 256, 512, 512, 512, 512, 128). The numbers of hidden layers and nodes here ensure adequate prediction accuracy and appropriate computing resources. The activation functions alternatively use tanh and PReLU between different layers. Mean squared error (MSELoss) is used as the loss function, and the stochastic gradient descent (SGD) method is used as the optimizer. After each training iteration, the optimizer clears old gradients, and losses are back-propagated for the calculation of new gradients. Finally, the network parameters are updated according to the new gradients. The initial learning rate is set as \(0.3\) and the adjustment factor of the exponential learning rate (ExponentialLR) scheduler is set as \(0.9\). In our calculations, the total number of training iterations is \(4\times 10^{4}\). In order to enhance the learning efficiency of the model on the laser pulse duration \(\tau\), we set the learning ratios of \(\xi\), \(w_{0}\) and \(\tau\) as 1:1:1
Figure 2: (a) and (b): Training loss (mean squared errors for all training samples) evolutions of \(\xi\), \(w_{0}\), \(\tau\) and the total loss (tot.) vs training times. Learning ratios of \(\xi\),\(w_{0}\) and \(\tau\) are 1:1:1 in (a), and 1:1:2 in (b).
and 1:1:2; see Figs. 2(a) and 2(b) for the two separate models, respectively. Note that the training loss measures the training efficiency of the model. The training loss may increase due to an inappropriate network structure design and will decrease due to effective learning. In the final stable stage, there may be over-fitting to the training data. However, the over-fitting can be restrained by using a technique such as a weight upper limit [60] or dropout [61]. For instance, the losses of \(\xi\), \(w_{0}\), and \(\tau\) are reduced for the learning ratios of 1:1:2, and further increasing the ratio of \(\tau\) will produce larger losses in the other parameters. This BPNN model will be used in the predictions that follow. In principle, the ML-assisted method is not limited to the current application, but can also be used for other inverse problems.
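A minimal PyTorch sketch of the BPNN described above follows, with assumptions flagged: the text lists seven hidden widths for the stated eight layers, so the seven listed widths are used as-is; "StandardScaler" is taken to be the scikit-learn scaler; the weighted mean-squared loss is one way to realize the 1:1:2 learning ratios; and the data here are random placeholders for the MC datasets.

```python
import torch
import torch.nn as nn
from sklearn.preprocessing import StandardScaler, PolynomialFeatures

widths = [128, 256, 512, 512, 512, 512, 128]  # hidden widths as listed in the text
n_raw = 15                                    # (eps_i, w_e, S_i, S_f, ln ratio) x 3 beams

# Preprocessing: standardize, then second-order polynomial features.
X_raw = torch.randn(256, n_raw).numpy()       # placeholder for the MC input sets
X = torch.tensor(PolynomialFeatures(degree=2).fit_transform(
        StandardScaler().fit_transform(X_raw)), dtype=torch.float32)
y = torch.randn(256, 3)                       # placeholder targets (xi, w_0, tau)

# Fully connected layers with alternating tanh / PReLU activations.
layers, d = [], X.shape[1]
for i, w in enumerate(widths):
    layers += [nn.Linear(d, w), nn.Tanh() if i % 2 == 0 else nn.PReLU()]
    d = w
layers.append(nn.Linear(d, 3))                # outputs (xi, w_0, tau)
model = nn.Sequential(*layers)

opt = torch.optim.SGD(model.parameters(), lr=0.3)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9)
loss_weights = torch.tensor([1.0, 1.0, 2.0])  # 1:1:2 learning ratio (our realization)

for step in range(100):                       # the paper trains for 4e4 iterations
    opt.zero_grad()                           # clear old gradients
    pred = model(X)
    loss = (loss_weights * (pred - y) ** 2).mean()  # weighted MSE
    loss.backward()                           # back-propagate the loss
    opt.step()                                # update network parameters
    if step % 10 == 9:
        sched.step()                          # decay the learning rate (our schedule)
```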
### Analytical asymptotic models
Asymptotic estimation of the depolarization effect is done below analytically from the radiative equations of motion for the dynamics [Landau-Lifshitz (LL) equation [62]] and the spin [modified Thomas-Bargmann-Michel-Telegdi (T-BMT) equation [63]]. Dependence of the spin dynamics on the electron energy follows assuming weak radiation. Then the quantum-corrected LL equation is used to obtain the approximated electron energy, which is then plugged into the solution for spin dynamics.
The radiative spin evolution is composed of the Thomas precession (subscript "T") and radiative correction terms (subscript "R") [63]. That evolution is governed by
\[\frac{\mathrm{d}\mathbf{S}}{\mathrm{d}\eta}=\left(\frac{\mathrm{d}\mathbf{S}}{\mathrm{d}\eta}\right)_{T}+\left(\frac{\mathrm{d}\mathbf{S}}{\mathrm{d}\eta}\right)_{R}, \tag{1a}\] \[\left(\frac{\mathrm{d}\mathbf{S}}{\mathrm{d}\eta}\right)_{T}=\frac{e\gamma_{e}}{(k\cdot p_{i})}\mathbf{S}\times\left[-\left(\frac{g}{2}-1\right)\frac{\gamma_{e}}{\gamma_{e}+1}\left(\mathbf{\beta}\cdot\mathbf{B}\right)\mathbf{\beta}+\left(\frac{g}{2}-1+\frac{1}{\gamma_{e}}\right)\mathbf{B}-\left(\frac{g}{2}-\frac{\gamma_{e}}{\gamma_{e}+1}\right)\mathbf{\beta}\times\mathbf{E}\right], \tag{1b}\] \[\left(\frac{\mathrm{d}\mathbf{S}}{\mathrm{d}\eta}\right)_{R}=-P\left[\psi_{1}(\chi)\mathbf{S}+\psi_{2}(\chi)(\mathbf{S}\cdot\mathbf{\beta})\mathbf{\beta}+\psi_{3}(\chi)\mathbf{n}_{B}\right], \tag{1c}\]
where \(\mathbf{E}\) and \(\mathbf{B}\) are the laser electric and magnetic fields, respectively, \(p_{i}\), \(k\), \(\eta\) and \(g\) are the electron momentum 4-vector, the laser wavevector, the laser phase, and the electron gyromagnetic ratio, respectively, \(P=\alpha_{f}m^{2}/[\sqrt{3}\pi(k\cdot p_{i})]\), \(\psi_{1}(\chi)=\int_{0}^{\infty}u^{\prime\prime}\mathrm{d}u\,\mathbf{K}_{\frac{2}{3}}(u^{\prime})\), \(\psi_{2}(\chi)=\int_{0}^{\infty}u^{\prime\prime}\mathrm{d}u\int_{u^{\prime}}^{\infty}\mathrm{d}x\,\mathbf{K}_{\frac{1}{3}}(x)-\psi_{1}(\chi)\), \(\psi_{3}(\chi)=\int_{0}^{\infty}u^{\prime\prime}\mathrm{d}u\,\mathbf{K}_{\frac{1}{3}}(u^{\prime})\), \(u^{\prime}=2u/3\chi\), \(u^{\prime\prime}=u^{2}/(1+u)^{3}\), \(u=\varepsilon_{\gamma}/\left(\varepsilon_{0}-\varepsilon_{\gamma}\right)\), \(\varepsilon_{0}\) and \(\varepsilon_{\gamma}\) are the electron energy before radiation and the emitted photon energy, respectively, \(\mathbf{K}_{n}\) is the \(n\)th-order modified Bessel function of the second kind, and \(\alpha_{f}\approx 1/137\) is the fine structure constant. The SQA is chosen along the magnetic field \(\mathbf{n}_{B}=\mathbf{\beta}\times\mathbf{\hat{a}}\), with \(\mathbf{\beta}=\mathbf{v}/c\) the scaled electron velocity and \(\mathbf{\hat{a}}=\mathbf{a}/|\mathbf{a}|\) a unit vector along the electron acceleration \(\mathbf{a}\).
To facilitate theoretical analysis and extract analytical formulas, some approximations will be made with current laser and electron beam parameters in mind, i.e., a GeV electron beam interacting with an LP laser (\(\xi<100\)) and \(0.1\lesssim\chi\lesssim 1\). Due to laser defocusing, the Thomas-term-induced variation \(\delta S_{T}\) is \(\lesssim 10^{-4}\), and only the dominant term, i.e., the radiative correction, will be considered. Furthermore, the initial velocity of the electron beam is along the \(z\) direction, with \(\beta_{z}\gg\beta_{x}(\beta_{y})\), thus the \(\psi_{2}\)-term is negligible for initially transversely spin-polarized (TSP) electrons. Moreover, due to the periodic nature of the magnetic field, the contribution of the \(\psi_{3}\)-term vanishes on average within one laser period. Hence, approximate evolution of the spin components may be obtained from
\[\frac{\mathrm{d}S_{x}}{\mathrm{d}\eta}\simeq\frac{C\psi_{1}(\chi)} {\gamma_{e}}S_{x}, \tag{2a}\] \[\frac{\mathrm{d}S_{y}}{\mathrm{d}\eta}\simeq\frac{C\psi_{1}(\chi) }{\gamma_{e}}S_{y},\] (2b) \[\frac{\mathrm{d}S_{z}}{\mathrm{d}\eta}\simeq\frac{C(\psi_{1}(\chi) +\psi_{2}(\chi))}{\gamma_{e}}S_{z}, \tag{2c}\]
where \(C=-\frac{\alpha_{f}}{2\sqrt{3}\pi}\frac{\omega_{0}}{m}\). Because \(\psi_{1}(\chi)>0\) and \(\psi_{2}(\chi)<0\), depolarization in the \(x\)- and \(y\)-directions is faster than in the \(z\)-direction. For instance, for a laser with \(\xi=60\), \(\tau=8T_{0}\), and \(w_{0}=5\lambda_{0}\), and the electron beam of Fig. 4 (a), the final average spin degrees of polarization are \(\bar{S}_{f,x}\approx 0.8201\), \(\bar{S}_{f,y}\approx 0.8211\) and \(\bar{S}_{f,z}\approx 0.8741\) for initial polarizations \(\bar{S}_{i,x}=1\), \(\bar{S}_{i,y}=1\), and \(\bar{S}_{i,z}=1\), respectively. Thus, in this paper, we take the electron beam initially polarized along the \(x\)-direction for a larger detection signal.
Under the assumption of weak radiation loss, \(\frac{\mathrm{d}\gamma_{e}}{\mathrm{d}\eta}\simeq 0\) and \(\chi(\eta)\simeq 2\frac{\omega_{0}}{m}\gamma_{e}\xi\sin^{2}(\eta)\), one obtains, to leading-order approximation, \(\psi_{1}(\chi)\simeq f_{1}\chi^{2}\) for \(0.1\lesssim\chi\lesssim 1\), where \(f_{1}\approx 0.25\) is obtained via curve fitting. Integrating Eq. (2a), the asymptotic \(\bar{S}_{f,x}\), in terms of the laser-beam parameters, is given by
\[\ln\frac{\bar{S}_{f,x}(\tau)}{\bar{S}_{i,x}(0)}\simeq M_{1}\gamma_{e}\xi^{2}\tau, \tag{3}\]
with the factor \(M_{1}=-\frac{\sqrt{3}}{2}\alpha_{f}\frac{\omega_{0}}{m}f_{1}\approx-4.81\times 10^{-9}\), where \(\tau\) is the pulse duration in units of the laser period \(T_{0}\).
To be precise, the radiated photon energy (radiation loss \(\overline{\varepsilon}_{\gamma}\)) should be taken into account for \(0.1\lesssim\chi\lesssim 1\). Here, we use the quantum-corrected LL equation to include the radiation loss [64] via
\[\frac{\mathrm{d}\mathbf{P}}{\mathrm{d}t}=\mathbf{F}_{L}+\mathbf{F }_{\mathrm{rad}}, \tag{4a}\] \[\mathbf{F}_{\mathrm{rad}}=-C^{\prime}\chi^{2}\mathcal{G}(\chi)\mathbf{ \beta}/(\mathbf{\beta}^{2}), \tag{4b}\]
where \(\mathbf{F}_{L}\equiv q(\mathbf{E}+\mathbf{v}\times\mathbf{B})\) denotes the Lorentz force and \(\mathbf{F}_{\mathrm{rad}}\) the radiation reaction force, \(C^{\prime}=2\alpha_{f}^{2}m/(3r_{e})\), with \(r_{e}\) the classical electron radius, and \(\mathcal{G}(\chi)\simeq[1+4.8(1+\chi)\ln(1+1.7\chi)+2.44\chi^{2}]^{-2/3}\) the quantum correction function [65]. For \(0.1\lesssim\chi\lesssim 1\), assuming \(\chi(\eta)\simeq 2\frac{\omega_{0}}{m}\gamma_{e}\xi\sin^{2}(\eta)\) and making the approximation \(\chi^{2}\mathcal{G}(\chi)\simeq f_{2}\chi^{2}\) (with a fitting factor \(f_{2}\simeq 0.077\)), the radiation loss (averaged over all electrons, i.e., ignoring the stochastic effect) is given by \(\overline{\varepsilon}_{\gamma}=\int_{0}^{\eta}\left|\mathbf{F}_{\mathrm{rad}}\cdot\mathbf{v}\right|\frac{\mathrm{d}t}{\mathrm{d}\eta}\,\mathrm{d}\eta\simeq M_{2}\tau\gamma_{e}^{2}\xi^{2}\), where \(M_{2}=\pi\alpha_{f}\frac{\omega_{0}}{m}f_{2}\approx 5.36\times 10^{-9}\). Then, replacing \(\gamma_{e}\) in Eq. (3) with \(\gamma_{e}-\overline{\varepsilon}_{\gamma}\), the analytical asymptotic estimate of the final spin \(\bar{S}_{f,x}\) is given by
\[\ln\frac{\bar{S}_{f,x}(\tau)}{\bar{S}_{i,x}(0)}\simeq M_{1}\gamma_{e}\xi^{2} \tau(1-M_{2}\gamma_{e}\xi^{2}\tau). \tag{5}\]
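As a quick sanity check, Eq. (5) can be evaluated numerically. The following minimal Python sketch assumes an initial 1 GeV beam (the beam energy is our assumption here; \(M_{1}\) and \(M_{2}\) are the factors quoted above) and yields a final polarization of \(\approx 0.83\) for \(\xi=60\), \(\tau=8T_{0}\), close to the MC value \(\bar{S}_{f,x}\approx 0.8201\) quoted above.

```python
import math

M1 = -4.81e-9  # factor in Eq. (3)
M2 = 5.36e-9   # radiation-loss factor entering Eq. (5)

def final_polarization(gamma_e, xi, tau, s_i=1.0):
    """Asymptotic final spin polarization from Eq. (5).

    gamma_e: initial Lorentz factor; xi: normalized peak intensity;
    tau: pulse duration in laser periods T_0; s_i: initial polarization.
    """
    log_ratio = M1 * gamma_e * xi**2 * tau * (1.0 - M2 * gamma_e * xi**2 * tau)
    return s_i * math.exp(log_ratio)

gamma_e = 1.0e9 / 0.511e6  # 1 GeV electrons (assumed)
print(final_polarization(gamma_e, xi=60, tau=8))  # ~0.83, cf. MC value ~0.82
```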
## III Results and discussions
To demonstrate the efficiency of the proposed diagnosis method, operational parameters of petawatt-scale lasers at a number of international facilities are used; see Table 1. The corresponding depolarization processes, investigated via MC simulations, indicate that the relative errors between the predicted and input parameters are of order 0.1% to 10%; see Fig. 3 (a). After consecutive training, the BPNN model grasps the pattern of the radiative spin-flip effect and is therefore capable of accurately predicting the laser characteristics, i.e., (\(\xi\), \(\tau\), \(w_{0}\)), simultaneously. Due to the limited training data and cycles, the relative prediction errors for \(\xi\), \(\tau\), and \(w_{0}\) (predicted simultaneously) are of the order of \(\mathcal{R}\lesssim 10\%\); see Figs. 3 (b) and (c). Compared with cases of \(w_{0}\gtrsim 3\lambda_{0}\), the number of electrons scattered by a tightly focused laser (\(w_{0}\lesssim 3\lambda_{0}\)) is lower due to the small Rayleigh range (\(z_{R}=\pi w_{0}^{2}/\lambda\)). Thus, the beam-averaged spin-flip effect is relatively more sensitive to variations in the electron beam parameters, and the relative error \(\mathcal{R}_{w}\) is larger for \(w_{0}\lesssim 3\lambda_{0}\). For a laser radius \(w_{0}\gtrsim 5\lambda_{0}\), already beyond the current training range, certain overfitting is expected; for SILEX-II, for example, the relative error \(\mathcal{R}_{w}\sim 15\%\). By comparison, the prediction error for the laser pulse duration is \(\mathcal{R}_{\tau}\lesssim 5\%\), and for most regions \(\mathcal{R}_{\tau}\sim 1\%\), i.e., the prediction of \(\tau\) is more accurate than that of \(w_{0}\); see Figs. 3 (a) and (c). The scheme is quite stable with respect to fluctuations in the electron beam parameters, as shown in Fig. 5.
The physical essence of the ML-assisted pulse-information decoding method can be revealed by our analytical asymptotic estimation based on Eq. (5), which is in good agreement with the numerical MC results over a wide range of laser parameter values; see Figs. 4 (a)-(c). The distributions of \(\delta\bar{S}_{x}^{\mathrm{MC}}\) and \(\delta\bar{S}_{x}^{\mathrm{AE}}\) with respect to \(\xi\) and \(\tau\) are shown in Figs. 4 (a) and (b), where the superscripts "MC" and "AE" denote the results from the MC and analytical asymptotic estimation (AE) methods, respectively. As expected, \(\delta\bar{S}_{x}\) increases as \(\xi\) and \(\tau\) increase, and a specific spin change \(\delta\bar{S}_{x}\) determines a curve that binds \(\xi\) with
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline Project & \(E_{L}\) (J) & \(\lambda\) (\(\mu\)m) & \(I_{0}\) (W/cm\({}^{2}\)); \(\xi\) & \(\tau\) (fs); (\(T_{0}\)) & \(w_{0}\) (\(\lambda\)) \\ \hline ELI-NP[66] & 20 & 0.82 & \(5.6\times 10^{21}\); 52.43 & 18.75; 6.86 & 3.63 \\ \hline J-KAREN[67] & 28.4 & 0.8 & \(3.8\times 10^{21}\); 42.14 & 32.9; 12.33 & 4.75 \\ \hline GIST[68] & 44.5 & 0.81 & \(10^{22}\); 69.21 & 30; 11.1 & 3.79 \\ \hline SILEX-II[69] & 30 & 0.8 & \(5\times 10^{20}\); 15.28 & 30; 11.24 & 6.16 \\ \hline APOLLON[70] & 10 & 0.815 & \(2\times 10^{21}\); 31.14 & 24; 8.83 & 2.92 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Operational parameters of some international ultrafast ultraintense laser facilities: total energy \(E_{L}\), central wavelength \(\lambda\), peak intensity \(I_{0}\), pulse duration \(\tau\), and focal radius \(w_{0}\).
\(\tau\) (or a hyperplane for \(\xi\), \(\tau\) and \(w_{0}\)), i.e., the NCS acts as a nonlinear function \(\mathcal{F}(\cdot,\cdot)\) which maps the laser pulse parameters \((\xi,\tau)\) to a degree of depolarization of the electron beam, \(\mathcal{F}(\xi,\tau)\rightarrow\delta\bar{S}_{x}\). Quite remarkably, the corresponding relative error \(\mathcal{R}_{s}\) in the parameter ranges \(\xi\in(10,60)\) or \(\tau\in(2,6)T_{0}\) is \(\mathcal{R}_{s}\simeq 1\%\); see Fig. 4 (c). Since the analytical AE is derived subject to the condition \(0.1\lesssim\chi\lesssim 1\), for \(\xi>60\) and \(\tau>6T_{0}\) the low-order estimation deviates from the MC result, due to nonlinear radiative effects. The ML-assisted method, however, is data-driven, i.e., the algorithms can still grasp the correlations between the laser pulse parameters and the depolarization of the electron beam without artificial restrictions; see the prediction accuracy (total relative error \(\mathcal{R}_{\tau}\sim 1\%\)) for high laser intensity and long pulse duration in Fig. 3 (c).
Figure 4 (d) illustrates how to determine \(\xi\) and \(w_{0}\) via AE for a specific set of parameters (\(\xi=50\), \(\tau=6T_{0}\) and \(w_{0}=5\lambda_{0}\)), marked as white circles \(P_{1}\) in Figs. 4 (a)-(c). Here, the pulse duration \(\tau\) is pre-acquired with other diagnostics, for instance, from the low-power mode of detection; this is a restriction not encountered in the ML-assisted method. Then, a sub-micrometer probe is made to collide with the laser pulse, from which one obtains \(\delta\bar{S}_{1}\); see the solid line labelled "\(1\lambda_{0}\)" in Fig. 4 (d), obtained from Eq. (5). After that, a second probe with beam radius \(w_{e}=4\lambda_{0}\) produces \(\delta\bar{S}_{2}\), the dot-dashed line labelled "\(4\lambda_{0}\)" in Fig. 4 (d). According to Eq. (5), two average intensities \(\bar{\xi}_{1}\) and \(\bar{\xi}_{2}\) can be determined from \(\delta\bar{S}_{1}\) and \(\delta\bar{S}_{2}\), corresponding to the two beam radii, respectively. Since \(w_{0}\gg w_{e}\) for the sub-micrometer probe, the average laser intensity it senses can approximately serve as the peak intensity in the focusing region. Thus, \(\bar{\xi}_{1}=51.62\) is identified as the peak intensity of the laser pulse, with a relative error of \(3.2\%\). In contrast, \(\bar{\xi}_{2}=42.96\), obtained with \(w_{e}=4\lambda_{0}\), is taken as the average intensity within the probe radius, i.e., \(\bar{\xi}_{2}=\frac{\bar{\xi}_{1}}{2w_{e}}\int_{-w_{e}}^{w_{e}}e^{-r^{2}/w_{0}^{2}}\mathrm{d}r\). Numerical calculation then gives the focal radius \(w_{0}=5.18\lambda_{0}\), with a relative error of \(3.6\%\). Note that, in Eq. (5), once \(\tau\) (or \(\xi\)) is given, the map between \(\delta\bar{S}\) and the other parameter is uniquely fixed. For instance, once \(\xi\) is fixed (points \(P_{2}\) and \(P_{3}\) in Figs. 4 (a)-(c)), there is only one intersection (at the final phase \(\eta_{f}\)) between Eq. (5) and the temporal evolution of the average spin, where \(\bar{S}(\eta_{f})\) is the final degree of polarization of the electron beam; see Fig. 4 (e). Conversely, once \(\tau\) is fixed (points \(P_{4}\) and \(P_{5}\) in Figs. 4 (a)-(c)), the MC results evolve to a unique \(\xi\) value; see Fig. 4 (f).
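Numerically, the last step of this two-probe procedure reduces to a one-dimensional root-finding problem for \(w_{0}\). Below is a minimal sketch, assuming the Gaussian transverse profile \(e^{-r^{2}/w_{0}^{2}}\) used above; the intensity values are those quoted in the text, and SciPy supplies the error function and the root finder.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import erf

xi1, xi2 = 51.62, 42.96  # intensities recovered from the two probes
w_e = 4.0                # radius of the second probe, in units of lambda_0

def intensity_ratio(w0):
    # Average over the probe radius for a Gaussian focus of radius w0:
    # (1/(2 w_e)) * int_{-w_e}^{w_e} exp(-r^2/w0^2) dr
    return w0 * np.sqrt(np.pi) / (2.0 * w_e) * erf(w_e / w0)

w0 = brentq(lambda w: intensity_ratio(w) - xi2 / xi1, 1.0, 50.0)
print(w0)  # ~5.18, matching the focal radius w_0 = 5.18 lambda_0 reported above
```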
Compared with the signals from dynamical statistics, the degree of spin polarization is more accurate and more robust with respect to fluctuations in the energy and angular spread of the electron beam probe; see Figs. 5 (a)-(d). As the initial energy spread \(\sigma_{e}/\varepsilon_{i}\) varies from \(1\%\) to \(30\%\), the average energy (\(\bar{\varepsilon}_{f}\sim 500\) MeV) of the final electron beam (\(w_{e}=1\lambda_{0},\varepsilon_{i}=1\) GeV) changes by \(\sim 1\%\); see Fig. 5 (a). However, the effect of the energy spread on the final spin polarization \(\bar{S}_{f,x}\) is only \(\sim 0.3\%\); see Fig. 5 (c). According to the analytical AE of Eq. (5), \(S_{f}\sim e^{-k_{1}\gamma_{e}}\) and \(\delta S_{f}\sim\delta\gamma_{e}k_{1}e^{-k_{1}\gamma_{e}}\), which shows that spin variations induced by the dynamics are exponentially suppressed. Similarly, while the initial angular spread \(\sigma_{\theta}\) changes from \(0.3\) to \(100\) mrad, the normalized variation of the angular spread \(N_{\theta}\) is \(\sim 30\%\), whereas the effect on the spin \(\bar{S}_{f,x}\) is only \(\sim 0.2\%\). In short, the detection accuracy of the spin signal is one to two orders of magnitude higher than that of the dynamic signal. Relative errors \(\mathcal{R}\) of the analytical AE and ML-assisted spin signals are shown in Figs. 5 (e) and (f). In the presence of angular and energy spread, the relative errors \(\mathcal{R}\) of the analytical AE for \(\xi\) and \(w_{0}\) are both kept within \(5\%\), while the ML-assisted method can simultaneously predict all three parameters \(\xi\), \(w_{0}\) and \(\tau\) with relative errors \(\mathcal{R}\lesssim 10\%\). For \(w_{0}\) in particular, the accuracy of the ML-assisted method is at least twice as good as that of the analytical prediction.
## IV Conclusion
We have put forward an ML-assisted method to diagnose the spatiotemporal properties of an ultrafast ultraintense laser pulse, namely, the pulse duration \(\tau\), peak intensity \(\xi\), and focal spot size \(w_{0}\), based on the radiative spin-flip effect experienced by electrons undergoing strong NCS. Our trained BPNN
Figure 5: Impact of the probe electron beam parameters on the detection signals. (a) Final average kinetic energies \(\bar{\varepsilon}_{f}\) vs the initial energy spreads \(\sigma_{e}/\varepsilon_{i}\) of the probe electron beams (\(\sigma_{\theta}=0.3\) mrad). Lines marked with triangles, circles, and diamonds denote probe electrons with different beam radii and energies. The initial spin polarization is \(\bar{S}_{i,x}=1\) and the laser parameters are the same as in Fig. 4 (d). (b) Relative changes of the angular spread \(N_{\theta}=(\Delta\theta_{f,x}-\Delta\theta_{i,x})/\Delta\theta_{f,x}\) vs the initial angular spread \(\sigma_{\theta}\) (\(\sigma_{e}/\varepsilon_{i}=0.05\)) of the probe electron beams, where \(\Delta\theta_{i,x}\) and \(\Delta\theta_{f,x}\) denote the full-width-at-half-maximum (FWHM) of the initial and final angular spectra along the \(x\)-direction, and \(\theta_{x}=\arctan(p_{x}/p_{z})\). (c) and (d): Final transverse spin degrees of polarization of the scattered electron beams \(\bar{S}_{f,x}\) vs \(\sigma_{e}/\varepsilon_{i}\) and \(\sigma_{\theta}\), respectively. (e) and (f): Relative errors \(\mathcal{R}\) vs \(\sigma_{e}/\varepsilon_{i}\) and \(\sigma_{\theta}\), respectively. The red and blue lines are the relative errors from analytical asymptotic estimation and BPNN, respectively. Lines marked with triangles, circles, and diamonds, denote \(\mathcal{R}\) of \(\xi\), \(w_{0}\), and \(\tau\), respectively.
can accurately predict the spatiotemporal characteristics of petawatt-level laser systems with relative errors \(\lesssim 10\%\). The proposed method is accurate and robust with respect to fluctuations in the electron beam parameters, and can be suitably deployed to currently running or planned multi-petawatt-scale laser facilities. Accurate measurement of the ultrafast ultraintense laser parameters may pave the way for future strong-field experiments, of importance to laser nuclear physics investigations, laboratory astrophysics studies, and other fields.
## V Acknowledgement
This work is supported by the National Natural Science Foundation of China (Grants Nos. 11874295, 12022506, U2267204, 11905169, 12275209, 11875219, and 12171383), the Open Fund of the State Key Laboratory of High Field Laser Physics (Shanghai Institute of Optics and Fine Mechanics), and the foundation of science and technology on plasma physics laboratory (no. JCKYS2021212008). The work of YIS is supported by an American University of Sharjah Faculty Research Grant (FRG21).
|
2309.09764 | Application-driven Validation of Posteriors in Inverse Problems | Current deep learning-based solutions for image analysis tasks are commonly
incapable of handling problems to which multiple different plausible solutions
exist. In response, posterior-based methods such as conditional Diffusion
Models and Invertible Neural Networks have emerged; however, their translation
is hampered by a lack of research on adequate validation. In other words, the
way progress is measured often does not reflect the needs of the driving
practical application. Closing this gap in the literature, we present the first
systematic framework for the application-driven validation of posterior-based
methods in inverse problems. As a methodological novelty, it adopts key
principles from the field of object detection validation, which has a long
history of addressing the question of how to locate and match multiple object
instances in an image. Treating modes as instances enables us to perform
mode-centric validation, using well-interpretable metrics from the application
perspective. We demonstrate the value of our framework through instantiations
for a synthetic toy example and two medical vision use cases: pose estimation
in surgery and imaging-based quantification of functional tissue parameters for
diagnostics. Our framework offers key advantages over common approaches to
posterior validation in all three examples and could thus revolutionize
performance assessment in inverse problems. | Tim J. Adler, Jan-Hinrich Nölke, Annika Reinke, Minu Dietlinde Tizabi, Sebastian Gruber, Dasha Trofimova, Lynton Ardizzone, Paul F. Jaeger, Florian Buettner, Ullrich Köthe, Lena Maier-Hein | 2023-09-18T13:44:36Z | http://arxiv.org/abs/2309.09764v1 | # Application-driven Validation of Posteriors in Inverse Problems
###### Abstract
Current deep learning-based solutions for image analysis tasks are commonly incapable of handling problems to which multiple different plausible solutions exist. In response, posterior-based methods such as conditional Diffusion Models and Invertible Neural Networks have emerged; however, their translation is hampered by a lack of research on adequate validation. In other words, the way progress is measured often does not reflect the needs of the driving practical application. Closing this gap in the literature, we present the first systematic framework for the application-driven validation of posterior-based methods in inverse problems. As a methodological novelty, it adopts key principles from the field of object detection validation, which has a long history of addressing the question of how to locate and match multiple object instances in an image. Treating modes as instances enables us to perform mode-centric validation, using well-interpretable metrics from the application perspective. We demonstrate the value of our framework through instantiations for a synthetic toy example and two medical vision use cases: pose estimation in surgery and imaging-based quantification of functional tissue parameters for diagnostics. Our framework offers key advantages over common approaches to posterior validation in all three examples and could thus revolutionize performance assessment in inverse problems.
Validation, Metrics, Posterior, Deep Learning, Inverse Problems.
## I Introduction
Deep learning has led to breakthrough successes in various areas of image analysis. State-of-the-art approaches, however, commonly lack the capacity to represent the fact that multiple (substantially different) plausible solutions may exist. One medical vision example is the registration of two-dimensional (2D) X-ray images with preoperative computed tomography (CT) images in intraoperative surgical guidance systems (Fig. 1). To this end, the pose of the X-ray modality relative to the patient coordinate system (given by the preoperative three-dimensional (3D) image) has to be inferred from the 2D X-ray images. Standard methods compute a single point estimate based on the input images, thereby ignoring the fact that multiple different solutions may exist. Posterior-based methods, such as conditional Diffusion Models [1, 2] and Invertible Neural Networks [3, 4, 5], overcome this bottleneck by converting the input to a 'posterior' - a full probability distribution conditioned on the input, capable of capturing several plausible solutions via multiple modes. While the field is currently experiencing much progress on a methodological level, little attention is given to the adequate validation of posterior-based methods, impeding their translation into practice.
Most commonly, methods are either validated by extracting the maximum a posteriori (MAP) probability location and using it as a point estimate [3], by feeding the posteriors into the forward model (if available) and choosing suitable metrics in the 'observable space' [3, 6, 1], or through more qualitative validation schemes as, for example, in the task of image generation [4, 7]. However, these validation approaches are often inadequate as they neglect the requirements imposed by the underlying application. For instance, providing the actual posteriors to the users of a system that can handle ambiguous solutions is typically not useful (lack of interpretability) and may not even be feasible (due to high dimensionality). Here, validation should rather focus on the assessment of the modes themselves, as this is what users base their decisions on in practice. In the aforementioned clinical example (Fig. 1), images may commonly be acquired with the patient in supine position, i.e., lying on the back. As the prior influences the posterior, the biggest mode would correspond to the standard position. The standard validation procedure based on MAP estimates would ignore the small mode corresponding to a 180\({}^{\circ}\) rotated pose. Ignoring the smaller mode(s) does not reflect clinical needs as a clinician could easily choose between a small set of modes and even benefit from the model
information if a surprising mode appears.
In this paper, we therefore propose choosing a validation approach that reflects the requirements of the driving application. Specifically, we argue that most applications require a mode-centric validation, reflecting the fact that domain experts (e.g., clinicians) work with concrete decisions/solutions rather than probability distributions. In this vein, we propose metrics that go beyond common regression errors and directly compare (multiple) predicted to (multiple) reference modes. While this may sound trivial at first glance, the specific implementation is not straightforward (How exactly are modes localized? What should be done in the case of mode assignment ambiguities? etc.), which may be one reason why the topic has - to our knowledge - not yet been addressed in the literature. Closing this gap, our novel approach takes inspiration from the object detection community (Fig. 2). By adopting the principles of a localization criterion and assignment strategies, we are able to perform a mode-centric validation with much more meaningful and better interpretable metrics from an application perspective. In the above example, this approach would require defining the localization criterion (and its hyperparameters) such that the augmented reality visualization of surrounding structures in the intraoperative X-ray can be achieved with acceptable accuracy through the pose estimation. Classification-based performance metrics would then be well-interpretable by the domain expert: The False Positives Per Image (FPPI) at the computed Recall, for example, would inform the clinician that they would need to select the most plausible pose from an average of about FPPI+1 options during surgery. While this would not be a problem for FPPI = 1, it would be infeasible to choose from, for example, ten different options.
Although it would be desirable to validate posteriors as comprehensively as possible with both distribution-based and mode-based metrics, this may not always be possible in real
Fig. 1: Example of an inverse problem in medical vision. The task is to recover the pose of an intraoperative X-ray system relative to the 3D patient coordinate system to enable augmented reality visualization during medical intervention. The ambiguity of the problem can be captured with an invertible architecture, which represents multiple solutions (here: two) via modes in a posterior distribution. Used abbreviation: Convolutional Neural Network (CNN).
Fig. 2: Object detection validation methodology lends itself well to posterior validation. This validation is subdivided into the steps of instance localization, assignment, and computing of classification metrics. These steps have natural analogs in the posterior validation case. Used abbreviations: Average Precision (AP), True Positive (TP), False Positive (FP), False Negative (FN), Standard Deviation (STD).
world scenarios. In many cases, for example, a ground truth posterior (required for distribution-based comparison) may not be available. Moreover, the set of reference solutions may be non-exhaustive and, for example, only contain one out of possibly multiple plausible solutions. We address this challenge with a problem fingerprint that abstracts from the specific problem by capturing key problem characteristics and available data in a structured format. Guided by this fingerprint, metrics are then recommended via a decision tree. The specific contributions of this paper are:
1. Object detection analogy: To our knowledge, we are the first to uncover an analogy between validation practice in an object detection setting and validation of posteriors.
2. Application-driven framework: Based on this analogy, we propose a posterior validation framework that takes into account both the requirements of the underlying application as well as the mathematical restrictions enforced by the available validation data.
3. Use case instantiation: An instantiation of the framework for three complementary use cases reveals flaws in common validation practices and showcases the benefit of a mode-centric approach.
## II Related Work
Prior work on recommendations for posterior validation is extremely sparse. While recent efforts have focused on recommendations in the context of classification, segmentation, and object detection [8], we have not found any framework dedicated to the validation of posteriors in inverse problems. Our analysis of the literature revealed the following common validation principles: (1) Use of the MAP as a point estimate and application of classic regression metrics. This validation scheme is extended by regression metrics computed on resimulations (i.e., computing the forward model on the posteriors), if available. Furthermore, statistical distances to reference posteriors are commonly computed if the reference is actually given as a posterior. Lastly, visual inspection and qualitative analyses of the posterior (or interesting marginals) are also common practice (e.g., in [3, 9, 10, 11, 12]).
(2) In the context of conditional image generation, particular focus is put on the quality of the generated images and their diversity. This is reflected by commonly applied metrics such as peak signal-to-noise ratio (PSNR) or measures of variability (e.g., variance or standard deviation (STD)) of the generated images. At the same time, distribution-based metrics such as the Frechet Inception Distance (FID) are also common but rarely applied to posteriors as a reference posterior is often lacking. Instead, validation or test images are interpreted as samples from an unconditional distribution and compared to samples drawn from the image generator. Depending on the exact image generation task, direct resimulation (e.g., in super-resolution tasks) or'resimulation via a downstream task' (e.g., using an image classifier for class-conditioned image generators) might be an option, in which case such metrics are often reported (e.g., under the name of Consistency) [4, 13, 7, 6].
To the best of our knowledge, a mode-centric validation has not been proposed before. Consequently, there is no prior work on using object detection validation methodology on posterior-based inverse problem solvers.
## III Methods
This section presents our posterior validation framework (Sec. III-A - III-C) as well as the conditional Invertible Neural Network (cINN)-based architectures [3, 4] that we developed to instantiate the framework for medical vision problems (Sec. III-D).
Our validation framework features three main components to guide a user through the process of application-relevant metric selection. First, to enable an application-driven, modality-agnostic metric recommendation approach that generalizes over domains, we encapsulate validation-relevant characteristics of a given problem in a problem fingerprint. To this end, the parameters listed in Tab. I are instantiated according to the domain of interest. In a second step, suitable metrics are selected based on this problem fingerprint (Fig. 3). A key novelty in this step is the mode-centric validation perspective inspired by the field of object detection (Fig. 5). Finally, as this process can result in a pool of suitable metric candidates, the third step involves the traversal of decision guides to help users understand the tradeoffs and choose between different candidates, wherever necessary.
### _Problem fingerprint_
The fingerprint is summarized in Tab. I. While we assume the method to be validated to provide a posterior distribution, the framework can handle different types of references. Therefore, the most central fingerprint item is _P1: Reference granularity_ as it is the prerequisite for deciding whether distribution-based metrics and/or object-inspired metrics should be used for validation. Specifically, we distinguish four main formats in which the reference may be available (corresponding to the colored paths in Fig. 3 and 4): posteriors with or without explicitly labeled modes, or a discrete set of modes that may either be exhaustive or non-exhaustive. Note that a non-exhaustive set of modes is very common in inverse problems because validation data is often generated with a forward model for which the underlying input serves as the (only) reference even if other inputs could have generated the same output (see Fig. 1). Further properties will be detailed in the following.
### _Metric selection_
The workflow for metric selection, guided by the fingerprint, is provided in Fig. 3. The two main steps are:
**Selection of distribution-based metrics**
If reference posteriors are provided (Property P1), distribution-based metrics can be selected. The decision tree for selecting such a metric is depicted in Fig. 4. The following properties are relevant in this context:
* _P4: Prediction density:_ Generative models can be categorized by whether they give access to the underlying
density of the distribution they model (e.g., cINNs) or not (e.g., classic Generative Adversarial Networks (GANs) [14]). There is also a grey area where the models provide bounds on the density (e.g., Variational Autoencoders (VAEs) [15]). If the density is available, we can exploit it to gauge the mismatch between the predicted and reference distribution using the Cross Entropy [16]. The Cross Entropy needs access to the prediction density, but the reference density can be given as a sample. We propose the usage of Cross Entropy as it optimally exploits the availability of the density where it is accessible, whereas the other metrics make no explicit use of its existence.
* _P5: Natural discretization scale:_ Many problems allow for natural discretization, for instance, where there is a maximum necessary resolution for an application (e.g., 1 percentage point (pp) oxygenation resolution might be sufficient), and the range of the values is known. In such cases, the predicted and reference posteriors can be binned with acceptable discretization errors. Hence, the densities become mass functions, and the (discrete)
Fig. 3: Overview of metric selection framework for posterior validation. Depending on the reference granularity (reference posterior with/without labeled modes, exhaustive or non-exhaustive list of reference modes), the user follows the correspondingly colored path in the decision tree. When a tree branches, the fingerprint items determine which exact path to take. Recommendations for distribution-based metrics (Subprocess S1) are provided in Fig. 4. The main novelty of the proposal relates to the selection of object detection-inspired metrics, which is presented in a separate Subprocess S2 (Fig. 5). The notation Metric1@Metric2 refers to providing the value for Metric1 for a specific target value (e.g. Recall = 0.95) of Metric 2.
Fig. 4: Subprocess S1 for selecting distribution-based metrics. Based on the exact representation of the predicted posterior and the dimensionality of the problem, different metrics become available.
Kullback-Leibler (KL) Divergence [17] is accessible. We propose this metric due to its lack of hyperparameters (except for the discretization parameters). However, if the solution space to the inverse problem is high-dimensional, meaningful discretization is difficult due to the curse of dimensionality. In this case, we would encounter many empty bins and/or bins containing only a single sample. Such a binning is inadequate to estimate the probability mass function, and we discourage the use of the KL Divergence.
* _P6: Univariate posterior:_ In some rare cases, the solution to the inverse problem is a single variable of interest. If this is the case, the posterior will be univariate, and there are statistical distances that are tailored specifically to this setting. One example is the Kolmogorov-Smirnov (KS) statistic [18, 19], which gauges the difference between two univariate distributions based on their cumulative distribution function. The statistic itself can be used as a distance measure. Additionally, the KS statistic is the basis of a classic hypothesis test, which allows testing whether the posteriors significantly differ given some \(\alpha\)-level. An alternative to the KS statistic is the Wasserstein Distance [20], which is defined for arbitrary dimensions but is computationally expensive for higher dimensions. Both distances have in common that they are almost free of hyperparameters (the KS test has the \(\alpha\)-level, and for the Wasserstein Distance, we have to choose the underlying L\({}_{p}\)-norm with a tendency to choose \(p=1\) because the formula is particularly simple), which relieves us of the necessity to 'optimize' the metrics on a validation data set. The Wasserstein Distance defines a metric (in the mathematical sense) on the space of distributions but does not directly lead to a hypothesis test in the same way the KS statistic does.
* _P7: Accurate uncertainty required:_ In contrast to the previous properties, which relate to "hard facts" about the inverse problem (such as the dimension of the solution space), this property is application-driven. In other words, whether we are interested in accurate uncertainty quantification does not depend on the underlying inverse problem but on the target application that requires solving the inverse problem. While not directly visible in the decision trees, the need for uncertainty quantification will influence the metric selection, for example, at the localization criterion (where we can decide to take a measure of variability of the modes into account). The influence is
Fig. 5: Subprocess S2 for selecting object detection-inspired metrics, comprising the steps of selecting the localization criterion, the assignment strategy, and the actual classification metric(s). The notation Metric1@Metric2 refers to providing the value for Metric1 for a specific target value (e.g. Recall = 0.95) of Metric 2. Decision guides for selecting a suitable option from a list of candidates are provided in section III-C. Used abbreviations: Average Precision (AP), Free-response Receiver Operating Characteristic (FROC), False Positives Per Image (FPPI).
elaborated in the decision guides below. Additionally, the need for accurate uncertainty will inform the importance of the Calibration curve suggested as a metric for discrete reference modes in Fig. 3.
If none of the properties P4 - P6 lead to a suitable distribution-based metric, the user is left with two options (see Fig. 4). The first is the Wasserstein Distance already introduced in the previous section. Its disadvantage is the computational cost in higher dimensions. A pragmatic solution is to apply the Wasserstein Distance to all 1D marginals individually and aggregate the results. This reduces the expressiveness of the Wasserstein Distance because there are distinct distributions with identical marginals, which could not be distinguished by this heuristic Wasserstein Distance. The other option is Maximum Mean Discrepancy (MMD) [21], which is a kernel method that introduces a metric on the space of distributions (at least for suitable kernels) and whose computational costs are acceptable. Its main downside is the sensitivity of the metric scores to the choice of the kernel (both the family and the hyperparameters parametrizing the family). This sensitivity often results in a separate validation set being required to optimize the hyperparameters of the metric and also reduces the interpretability of MMD.
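Where P5 or P6 do apply, the recommended distances are available off the shelf. The following minimal sketch compares two univariate posterior samples; the Gaussian placeholder samples and the 1 pp binning are illustrative assumptions loosely inspired by the oxygenation example above.

```python
import numpy as np
from scipy.stats import entropy, ks_2samp, wasserstein_distance

rng = np.random.default_rng(0)
pred = rng.normal(0.70, 0.05, size=5000)  # samples from the predicted posterior
ref = rng.normal(0.72, 0.06, size=5000)   # samples from the reference posterior

# P6 (univariate posterior): KS statistic/test and 1D Wasserstein Distance.
ks_stat, p_value = ks_2samp(pred, ref)
w1 = wasserstein_distance(pred, ref)

# P5 (natural discretization): bin both samples, e.g., at 1 pp resolution,
# then compute the discrete KL Divergence (epsilon guards against empty bins).
bins = np.linspace(0.0, 1.0, 101)
p_hist, _ = np.histogram(pred, bins=bins)
q_hist, _ = np.histogram(ref, bins=bins)
eps = 1e-12
kl = entropy(p_hist / p_hist.sum() + eps, q_hist / q_hist.sum() + eps)

print(f"KS = {ks_stat:.3f} (p = {p_value:.3g}), W1 = {w1:.4f}, KL = {kl:.4f}")
```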
Note that distribution-based metrics can also be used as a localization criterion when using object detection-inspired metrics which will be described in the following paragraph as depicted in Fig. 5.
**Selection of object detection-inspired metrics**
If the reference comes with explicit modes, the quality of the modes should be explicitly assessed, possibly irrespective of the shape of the posterior (which is heavily influenced by the prior). We take inspiration from object detection validation by regarding predicted and reference modes as instances and transferring object detection principles to our setting. Our proposal is summarized in Fig. 2.
* Localization criterion: To decide whether a mode matches the reference, a criterion incorporating the location and (optionally) the shape of both reference and prediction is needed. Based on the application and goal, hyperparameters can be used to control the strictness of the criterion.
* Assignment strategy: To match the correct prediction/reference pairs, an adequate assignment strategy must be chosen. In this way, the matching of multiple predictions to one reference mode or vice versa is avoided.
* Classification metrics: Based on the preceding localization and assignment steps, classification metrics such as Precision and Recall can be computed, treating modes - and thus the potential solutions to a problem - as the central objects of interest.
Note that treating modes as instances introduces a hierarchy, where each posterior consists of one or more modes, and the data set consists of posteriors. This hierarchy should be respected during metric aggregation [8].
To choose metrics for object-centric validation (if any), the following properties are of key importance:
* _P2: Resimulation (available/unavailable):_ While the set of reference modes may be incomplete, it may be possible to verify whether a given mode (of the prediction) is another plausible solution to the problem. This can be achieved by applying the forward process (_resimulation available_) to the given mode and choosing suitable metrics in the 'observable space'. The resimulation allows to decide whether a detected mode is a True Positive (TP) or False Positive (FP). With this information, the Precision, a highly relevant classification metric, can be computed.
* _P3: Confidence score (available/unavailable):_ Object detection metrics operating on the confusion matrix (e.g. the F\({}_{1}\) Score) are highly sensitive to the method chosen to convert (fuzzy) algorithm output to actual decisions [22]. Multi-threshold metrics, such as Average Precision (AP), overcome the need to decide on specific hyperparameters with ranking-based approaches. Transferring these principles to posterior validation requires the ability to rank the modes according to their likelihood of actually being a mode. This property should be set to true if the predicted modes come with a score that gauges the certainty of the model that the mode actually exists. While our framework is agnostic to the source of the score, we provide possible instantiations in our use cases in section III-D.
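Given such a confidence score, ranking-based metrics can be computed without committing to a decision threshold. The sketch below implements one common definition of AP (precision summed over recall increments); the TP/FP labels are assumed to come from a preceding matching step.

```python
import numpy as np

def average_precision(is_tp, scores, n_ref):
    """AP from per-mode TP/FP labels, confidence scores, and reference count."""
    order = np.argsort(-np.asarray(scores))          # most confident first
    tp_cum = np.cumsum(np.asarray(is_tp)[order])
    precision = tp_cum / (np.arange(len(order)) + 1)
    recall = tp_cum / n_ref
    d_recall = np.diff(np.concatenate([[0.0], recall]))
    return float(np.sum(precision * d_recall))

# Four predicted modes, three of which match a reference; four references total.
print(average_precision([1, 0, 1, 1], [0.9, 0.8, 0.7, 0.4], n_ref=4))  # ~0.60
```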
### _Decision guides_
Our framework may result in users obtaining a pool of applicable metric candidates instead of only a single candidate. The decision guides presented in this section aim to help the user understand the tradeoffs between different metrics and select the most suitable candidate for their underlying problem. As many of the metrics are based on the observed object detection analogy, there are many parallels to the recommendations in [8]. The following paragraphs contain the decision guides for the ambiguous parts of the framework.
* _Localization criterion:_ The localization criterion is used to gauge the agreement between pairs of predicted and reference modes. The choice of the localization criterion mainly depends on two properties: first, the granularity of the reference (P1, which is already covered in Subprocess S2 in Fig. 5) and second, whether an accurate uncertainty is required (P7). If uncertainty quantification is important, the shape of the posterior modes should be taken into account when computing the mode localization. For a reference given as a mode location (without a spread or similar), this could take the form of computing the Mahalanobis Distance [23], which takes the covariance of the predicted mode into account. This is an instance of the "Centroid Distance" category. The advantage of this metric is that it provides a continuous distance. On the other hand, the predicted mode could be used to construct a confidence ellipsoid (or a more general confidence region) to a given confidence level, and a match could be performed based on whether the reference location falls
within the confidence ellipsoid ("Point inside Confidence Ellipsoid" category). This approach also takes uncertainty into account but leads to a binary score. If the reference is given as a distribution and accurate uncertainty is important, distribution-based metrics should be considered as these do not only match the mode location but incorporate the shape of the predicted and reference mode. If accurate uncertainty estimation is less important, the localization criterion should focus on the correct location of the mode centers. In this case, the predicted and reference mode should be collapsed to their centers, and a distance on these centers should be computed ("Centroid Distance" category). The exact distance should be chosen according to the application. Examples could be an L\({}_{p}\)-norm for translation parameters, the cosine similarity [24] for rotational variables, or structural similarity index [25] for images.
* _Assignment strategy:_ Whenever an uncertainty score is available, greedy matching via the (confidence) score [26, 8] should be applied; a minimal sketch of this strategy, combined with a localization criterion, is given after this list. The rationale behind this recommendation is that models that confidently predict wrong or far-off modes should be penalized. If no confidence score is available, there are multiple complementary options. Greedy matching via the localization criterion [8] has the advantage of being methodologically simple and computationally fast. Furthermore, depending on the application, it can be sensible to match the closest modes first. An alternative would be to apply Hungarian matching [27, 8], which finds an optimal matching that minimizes the total mode distances. Such a matching can lead to a predicted mode not being matched with its closest reference mode. Hungarian matching can be suitable for a more theory-focused validation or method comparison (independent of a downstream application). However, as elaborated in [8], Hungarian matching can lead to overly optimistic assignments, artificially reducing the number of FNs and FPs. Lastly, assigning modes via a fixed localization threshold ("Matching via Localization > Fixed Threshold") can be useful if the application requires an exact number of predicted modes but places less focus on the precise localization of the modes. An example downstream task would be to count the occurrence of certain structures.
* _Distance aggregation:_ An important aspect of distance aggregation is to respect the hierarchical structure of the data, as elaborated in [8]. In this posterior-based inverse problem setting, a data set consists of data points, where each data point corresponds to a set of reference modes and a set of predicted modes. This two-stage hierarchy implies that first, the distances between modes per posterior should be aggregated before these per-data-point distances should be aggregated over the whole data set. In Fig. 3, we explicitly mention mean, median, STD, and Interquartile Range (IQR) as aggregation methods for distance aggregation. However, these solely represent examples of common choices. Depending on the application, it might be advantageous to report other quantiles of the distribution (instead of IQR) or weight the data points in the mean. Overall, it should be noted that quantile-based aggregates (such as median or IQR) are more robust to noise and outliers, which might make them superior to mean and STD, as many models produce rather noisy posteriors.
* _Classification metrics:_ If a confidence score is available, we recommend multi-threshold metrics such as AP or FPPI in almost all cases. They address the problem of noisy modes due to imperfections in the posterior generation and/or clustering methods. Metric@(TargetMetric=TargetValue), as introduced in [8], is a notation to report the value of a metric while a target metric is optimized on a dedicated validation split to conform to the target value. An example would be Precision@(Recall=0.95). This type of metric should be chosen if the application requires certain bounds, e.g., on the frequency of FPs, as might for instance be derived from regulatory requirements. Reporting of this form is also common practice in clinically-focused communities. F\({}_{\beta}\)[28, 29, 8] aggregates both Precision and Recall and can be useful if there is no target value for either one, but instead, the model (hyper-)parameters are optimized (on an additional validation set) to maximize F\({}_{\beta}\).
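To make the object detection analogy concrete, the following sketch combines the greedy-by-score assignment recommended above with a Centroid Distance localization criterion and derives Precision and Recall from the resulting matching. It is an illustration under simplified assumptions (Euclidean mode centers, a fixed distance threshold), not a reference implementation.

```python
import numpy as np

def match_modes(pred, scores, ref, dist_thresh):
    """Greedy-by-score matching of predicted to reference mode centers.

    pred: (P, d) predicted centers; scores: (P,) confidences;
    ref: (R, d) reference centers; dist_thresh: localization threshold.
    """
    order = np.argsort(-np.asarray(scores))          # most confident first
    unmatched_ref = set(range(len(ref)))
    tp, fp = [], []
    for i in order:
        if unmatched_ref:
            dists = {j: np.linalg.norm(pred[i] - ref[j]) for j in unmatched_ref}
            j_best = min(dists, key=dists.get)
            if dists[j_best] <= dist_thresh:
                tp.append((i, j_best))
                unmatched_ref.remove(j_best)
                continue
        fp.append(i)                                  # confident but unmatched
    return tp, fp, sorted(unmatched_ref)              # FN = leftover references

pred = np.array([[0.0, 0.1], [3.0, 3.0], [0.2, -0.1]])
scores = [0.9, 0.8, 0.3]
ref = np.array([[0.0, 0.0], [5.0, 5.0]])
tp, fp, fn = match_modes(pred, scores, ref, dist_thresh=0.5)
precision = len(tp) / (len(tp) + len(fp))             # 1/3 in this toy setup
recall = len(tp) / (len(tp) + len(fn))                # 1/2 in this toy setup
```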
### _Conditional Invertible Neural Networks for ambiguous problems_
To showcase the benefit of our framework, we investigate three complementary inverse problems that feature inherent ambiguity (see Figs. 6 - 8). In the following, we present these use cases along with the methods whose performance is to be assessed with our framework. Further implementation details can be found in Appendix A.
#### III-D1 Toy example
As a toy example, we chose a well-understood, but ambiguous, inverse problem, namely finding the \(n\)-th roots of a complex number \(w\) for varying \(n\) (cf. Fig. 6 (a), left). The input to the inverse problem is the complex number \(w\) for which to find the root(s) and the integer \(n\) describing the order of the root. We considered two models: (1) A multi-layer perceptron (MLP) (based on [30]) as a naive baseline, which, given \(n\) and \(w\), produces a Gaussian posterior represented by a mean and a diagonal covariance matrix. (2) A cINN [4], which, given \(n\) and \(w\), produces a posterior distribution over \(z\) by sampling a latent space. As a mode detection algorithm, we used the clustering algorithm Density-Based Spatial Clustering of Applications with Noise (DBSCAN) [31]. To estimate the MAP probability location, we used the mean of the largest cluster.
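A minimal sketch of this clustering-based mode detection is given below; the eps and min_samples values are illustrative placeholders rather than the settings used in our experiments.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def detect_modes(posterior_samples, eps=0.1, min_samples=20):
    """Cluster posterior samples into modes and return centers plus a MAP proxy.

    posterior_samples: (N, d) array of samples drawn from the posterior.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(posterior_samples)
    centers, sizes = [], []
    for lbl in set(labels) - {-1}:                    # label -1 marks noise points
        cluster = posterior_samples[labels == lbl]
        centers.append(cluster.mean(axis=0))
        sizes.append(len(cluster))
    map_proxy = centers[int(np.argmax(sizes))] if centers else None
    return centers, map_proxy                         # MAP proxy: mean of largest cluster
```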
#### III-D2 cINNs for pose estimation in intraoperative 2D/3D registration
Image registration is the basis for many applications in the fields of medical image computing and computer-assisted interventions. One example is the registration of 2D X-ray images with preoperative 3D CT images in intraoperative surgical guidance systems, as illustrated in Fig. 1. Previously proposed methods [32, 33, 34, 35] lack the capacity to represent the inherent ambiguity a registration problem may contain, i.e., they cannot handle a situation where multiple substantially different solutions exist. We address this lack with
cINNs, by representing the possible solutions to a registration problem through a non-parametric probability distribution that encodes different plausible solutions via multiple modes. The challenge of detecting modes in high-dimensional parameter space is tackled by interpreting the task as a clustering problem performed on the samples defining the posterior. The neural network architecture is illustrated in Fig. 1. The input images are passed through a conditioning network such that a relatively low-dimensional vector (here: 256) can be used for conditioning the actual invertible net.
#### III-D3 cINNs for quantification of functional tissue parameters
Photoacoustic imaging is an emerging modality that enables the recovery of functional tissue parameters. However, the underlying inverse problems are ill-posed (Fig. 8). Specifically, the problem might have ambiguous solutions, meaning that different tissue compositions could lead to the same photoacoustic measurement. We address this ambiguity with a cINN-based architecture as proposed in [5]. As a naive baseline, we chose the state-of-the-art method "Learned Spectral Decoloring" (LSD) [36] based on a fully connected neural network architecture, which provides a single point estimate as a prediction. We optimized the architecture and training procedure for better performance. For clustering, we used the UniDip Clustering algorithm [37], which is based on the Hartigan-Dip test for unimodality [38]. It provides robust estimations with respect to resampling of the posterior and is basically parameter-free (apart from a statistical significance level).
## IV Experiments & Results
The purpose of the experiments was to instantiate our framework for several use cases and showcase the added value by means of examples. Note that we did not aim to optimize the models for the use cases or solve the underlying tasks. Instead, the focus was on the insights that can be derived from the proposed validation scheme.
### _Synthetic toy example_
The purpose of the toy experiment was to validate the framework branch for a reference with an exhaustive list of modes. As described in Sec. III, the task was to, given an integer \(n\) and a complex number \(w=R\cdot e^{i\phi}\), compute the \(n\)th root of \(w\). The distinct solutions (assuming \(w\neq 0\)) to this inverse problem can be explicitly enumerated as \(z_{k}=\sqrt[n]{R}\cdot e^{i\frac{\phi+2\pi k}{n}}\) for \(k=0,\ldots,n-1\). The training data consisted of tuples \((z,n,w)\), such that \(z^{n}=w\). To highlight the pitfalls of treating such a problem as a simple regression task, we trained an MLP and a cINN to estimate \(z\) from \(w\) and \(n\) and evaluated their performance using the absolute error. Additionally, we instantiated our framework, which provided us with additional metrics taking the number of modes and the matching process into account. These additional metrics were Precision, Recall, \(F_{\beta}\) (we report \(\beta=1\)), AP, and the absolute error computed for matched modes and aggregated per posterior. The ranking of modes, required for the computation of AP, was achieved by bootstrapping the posteriors. More specifically, we resampled each posterior two times and computed the Intersection over Union (IoU) of the new clusters with the original clustering. The average IoU per cluster was used as the confidence score.
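One possible reading of this bootstrap procedure in code is sketched below. It resamples the posterior with replacement so that sample indices remain comparable between the original and the bootstrap clusterings; the DBSCAN hyperparameters are placeholders.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def bootstrap_mode_confidence(samples, eps=0.1, min_samples=20, n_boot=2, seed=0):
    """Confidence per mode: mean best IoU of each original cluster with the
    clusters found on bootstrap resamples of the posterior."""
    rng = np.random.default_rng(seed)
    base = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(samples)
    base_clusters = {l: set(np.flatnonzero(base == l)) for l in set(base) - {-1}}
    conf = {l: [] for l in base_clusters}
    for _ in range(n_boot):
        idx = rng.integers(0, len(samples), size=len(samples))
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(samples[idx])
        boot_clusters = [set(idx[labels == l]) for l in set(labels) - {-1}]
        for l, members in base_clusters.items():
            ious = [len(members & b) / len(members | b) for b in boot_clusters]
            conf[l].append(max(ious) if ious else 0.0)
    return {l: float(np.mean(v)) for l, v in conf.items()}
```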
To instantiate our training and testing sets, we drew \(n\) uniformly from the set \(\{1,2,3\}\), \(z\) uniformly from an annulus centered at \(0\) with inner radius \(0.8\) and outer radius \(1.2\), and 'simulated' the forward process via \(w=z^{n}\). The training set consisted of \(10^{6}\) samples, and the testing set of \(10^{5}\) samples. \(n\) was one-hot encoded, and \(z\) and \(w\) were represented using their real and imaginary parts, respectively. As a localization criterion, we chose the mode center distance. The predicted and reference modes were matched via the greedy assignment strategy.
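This data generation step can be reproduced in a few lines. In the sketch below, the radius is drawn uniformly in \(r\) (an assumption made for simplicity; area-uniform sampling over the annulus would weight the radii differently), and the one-hot encoded \(n\) is returned separately as the conditioning input.

```python
import numpy as np

def sample_toy_data(num_samples, rng):
    """Draw tuples (z, n, w) with w = z**n, following the setup above."""
    n = rng.integers(1, 4, size=num_samples)          # n in {1, 2, 3}
    radius = rng.uniform(0.8, 1.2, size=num_samples)  # annulus radii
    phase = rng.uniform(0.0, 2.0 * np.pi, size=num_samples)
    z = radius * np.exp(1j * phase)
    w = z**n                                          # forward process
    n_onehot = np.eye(3)[n - 1]
    features = np.stack([w.real, w.imag], axis=1)     # network input
    targets = np.stack([z.real, z.imag], axis=1)      # regression target
    return features, n_onehot, targets

X, n_oh, y = sample_toy_data(10**6, np.random.default_rng(42))
```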
Fig. 6 (b) left shows the classic absolute error distribution of the two models, indicating that while both models perform poorly, the MLP might be superior to the cINN. This is in contrast to qualitative observations (example shown in Fig. 6 (a)), which suggest good performance of the cINN, while the MLP seems to predict the mean of the ambiguous solutions (which is 0).
The framework metrics unmask this performance difference between the models. While both models perform similarly regarding Precision, the cINN outperforms the MLP in terms of Recall, which is due to the fact that the cINN is capable of predicting multiple modes, while the MLP is restricted to a single mode. The absolute error of the matched modes underlines that for higher-order roots, the cINN correctly identifies the locations of the roots, while the MLP predictions are only close to the ground truth in the unambiguous case of \(n=1\). The cINN achieved an AP of approximately \(1\).
### _Medical vision use case 1: Pose estimation_
To showcase the potential of the framework for model optimization, we picked a surgical use case (Fig. 1). In this setting, ambiguity in pose estimation results from the general symmetry of the spine. To generate a validation data set with reliable references, we simulated X-ray images taken by a C-Arm with multiple orientations using the principle of digitally reconstructed radiographs (DRRs) [39]. As our experimental data set, we used the UWSpine data set [40, 41], which comprises spine-focused CT volumes of 125 patients. We transformed the volumes to a homogeneous voxel spacing and discarded images smaller than 128x256x128 as well as patients with an asymmetric spine. For every CT volume, we sampled a set of different poses of the C-Arm device and computed corresponding DRRs.
From an application perspective, the conversion of posteriors to modes (i.e., the actual solutions of interest) is a crucial step in the system. Often, mode detection algorithms can be configured to provide either higher Recall (at the cost of more FPs) or higher Precision (at the cost of more False Negatives (FNs)). To address this tradeoff, we applied our framework for hyperparameter tuning. Based on the suggested mode matching, we plotted the Recall (using only the reference modes provided by the simulation) as a function of the (upper bound of the) FPPI for different hyperparameters of the mode clustering algorithm. Note that we speak of an upper bound because of the non-exhaustive list of modes. We varied
the minimum samples parameter of the DBSCAN algorithm. Given that we only worked with symmetric spines, we regarded a mode corresponding to a left anterior oblique (LAO) angle of \(\text{LAO}_{\text{ref}}+180^{\circ}\) as a TP. Based on the recommendation framework (Figs. 3 / 4), we chose the Centroid Distance as the localization criterion (threshold \(20^{\circ}\)) and Greedy by Localization as the assignment strategy. Fig. 7 reveals that the cluster algorithm hyperparameters corresponding to an FPPI of approximately 0.35 provide the best tradeoff. This analysis was enabled by the detection-driven validation approach.
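In code, one sweep point of this analysis can look as follows. This is a simplified sketch that treats pose parameters as plain Euclidean vectors and omits both the angular wrap-around and the 180° LAO symmetry rule described above; the per-image mode lists would come from a detector such as the DBSCAN sketch in Sec. III, evaluated for each value of its minimum-samples parameter.

```python
import numpy as np

def greedy_match_by_distance(pred, ref, thresh):
    """Greedy matching via the localization criterion: closest pairs first."""
    pairs = sorted((np.linalg.norm(np.asarray(p) - np.asarray(r)), i, j)
                   for i, p in enumerate(pred) for j, r in enumerate(ref))
    used_p, used_r, n_tp = set(), set(), 0
    for d, i, j in pairs:
        if d > thresh:
            break                                     # remaining pairs are farther
        if i not in used_p and j not in used_r:
            used_p.add(i)
            used_r.add(j)
            n_tp += 1
    return n_tp, len(pred) - n_tp, len(ref) - n_tp    # TP, FP, FN

def recall_and_fppi(all_pred_modes, all_ref_modes, thresh=20.0):
    """Data-set-level Recall and FPPI upper bound for one clustering setting."""
    n_tp = n_fp = n_ref = 0
    for pred, ref in zip(all_pred_modes, all_ref_modes):
        tp, fp, _ = greedy_match_by_distance(pred, ref, thresh)
        n_tp, n_fp, n_ref = n_tp + tp, n_fp + fp, n_ref + len(ref)
    return n_tp / max(n_ref, 1), n_fp / max(len(all_pred_modes), 1)
```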
### _Medical vision use case 2: Functional tissue parameter estimation_
The second medical vision use case is illustrated in Fig. 8 and concerns the quantification of tissue oxygenation from photoacoustic measurements. The purpose of this experiment was to demonstrate that common validation methods are not well-suited for application-driven performance assessment. To this end, we trained two models (see Sec. III) for the given use case - a naive baseline that treats the problem as a regression task with a unique solution, as well as a solution based on cINNs. Since ground truth tissue properties are unavailable for in vivo photoacoustic measurements, we simulated a synthetic data set of human forearm images using the Monte Carlo method for light propagation (Monte Carlo eXtreme (MCX)). Our digital tissue model was inspired by the work of [42] and consists of different tissue types (skin, muscle background, arteries, veins, ultrasound gel, membrane, and water). The anatomic and optical properties are based on knowledge from the literature. The whole simulation was implemented using the _Simulation and Image Processing for Photonics and Acoustics_ (SIMPA) toolkit [43]. For our validation, we focused on samples that were detected to be multimodal by our cINN. As a first naive validation approach, we compared the MAP estimates of the two models; for the cINN, we used the median of the largest cluster as the point estimate. As can be seen in Fig. 8 (b) left, both methods seem to perform equally well. Note that for bimodal posterior distributions, the MAP is not necessarily the best solution, as sometimes the smaller mode might correspond to the reference. Point prediction methods such as LSD usually predict a value that is either close to the largest mode of the cINN or lies between the two modes (as in the toy example). Our framework addresses this issue.
Fig. 6: Results for the synthetic toy example. (a) The task consists of computing the \(n\)th root(s) (\(n=1,2,3\)) of a non-zero complex number. While the conditional Invertible Neural Network (cINN) captures the ambiguity of the problem via multiple modes in the posterior, a classical multi-layer perceptron (MLP) typically outputs the mean of plausible solutions. (b) Left: The superiority of the cINN is not captured by classical validation methods that treat the problem as a regression task (with a unique solution) using the maximum a posteriori probability as the cINN estimate. Right: The explicit mode localization and assignment offered by our framework enables the computation of classification metrics and regression metrics applied on matched modes. These reveal the poor performance of the MLP compared to the cINN.
Fig. 7: Use case: pose estimation in surgery. Mode detection algorithms can be configured to provide either higher Recall (at the cost of more False Positives (FPs)) or higher Precision (at the cost of more False Negatives (FNs)). Our framework captures this tradeoff by performing explicit mode localization/matching and recommending the plotting of the Recall as a function of the FPs per image (FPPI).
Following the recommendation framework, we first performed mode matching (Greedy by Localization) with a threshold of 5 percentage points (pp) sO\({}_{2}\) difference to enable object detection-inspired metrics. In analogy to the previous example, we then computed the Recall and the FPPI upper bound. The cINN method outperforms LSD in terms of Recall (90% vs. 71%). However, this comes at the cost of a higher FPPI.
## V Discussion and conclusion
Validation of deep learning-based methods attempting to solve inverse problems is key both for measuring progress and for their eventual translation to real-world applications. Currently, however, common validation practice frequently neglects the requirements of the underlying application, so that the resulting metric scores often do not reflect the actual needs. This holds especially true for posterior-based methods tackling inverse problems for which multiple different but plausible solutions exist.
Currently, inverse problem solvers, whether using a posterior or classical representation, are often validated in an ad hoc manner specifically tailored to the problem at hand. Our posterior validation framework takes a step back and proposes key properties that allow us to abstract from the specific inverse problem and advance toward a unified, generic inverse problem validation methodology. As we argue that flaws in common validation practice can largely be attributed to a lack of best practices, dedicating efforts to improving common practice becomes imperative to advance the field. Our framework provides a first step towards structured and standardized validation practice. We hope that a corresponding shift in research focus, exemplified by our work, sparks further research on how to best validate inverse problem methods and allows for better and more meaningful comparisons of algorithms.
With the proposed framework, we are - to the best of our knowledge - the first to systematically address this problem. A particular novelty is the leveraging of object detection-inspired metrics for posterior validation, which enables a mode-centric validation. The mode-centric view aligns naturally with applications, for example in the medical domain, where interpretation of a full posterior distribution might be infeasible, but the scanning of a (short) list of plausible solutions might provide a benefit over a point prediction both in terms of predictive performance and uncertainty quantification.
While a direct evaluation of our proposed framework is not possible, we instead demonstrated its value in various medical vision use cases. As demonstrating this value was the primary goal of the paper, we did not focus on actually solving a specific clinical problem; accordingly, neither the models used nor the experimental data were optimized for the particular use case.
Multi-threshold metrics such as AP are widely used in object detection and, as such, are also included in our framework. However, it must be noted that a critical requirement for their computation is the availability of a confidence score. Natural choices for confidence scores, such as the relative mass of a mode, have disadvantages - for instance, the score depends on the number of detected modes. Future work should thus be directed toward developing alternative confidence scores that overcome this limitation and enable the use of these robust metrics. Also, a future implementation of the metrics in a library will be useful in providing the community with a standardized and reliable resource for validation, given that previous work highlighted the problems of non-standardized metric implementations [8, 44]. On a further note, the pool of available metrics in the case of non-exhaustive reference modes is currently rather limited. We hope that the clear structure using the inverse problem
Fig. 8: Use case: functional tissue parameter estimation. (a) The task is to estimate blood oxygenation (sO\({}_{2}\)) from multispectral photoacoustic imaging data. The potential ambiguity of the problem for a given location (e.g., a vessel) can be resolved by changing the pose of the image modality (pose 1: unique solution; pose 2: multiple plausible solutions). (b) Left: The superiority of the conditional Invertible Neural Network (cINN) over a state-of-the-art point estimation network _Learned Spectral Decoding_ (LSD) is not captured by classical validation methods based on maximum a posteriori estimates. Right: The explicit mode localization and assignment offered by our framework enable the computation of classification metrics. These reveal the application-relevant properties of the methods, namely the Recall and the False Positives Per Image (FPPI).
fingerprints will spark a fruitful discussion on new metric candidates suitable for this setting.
In conclusion, our experiments clearly demonstrate the added value of mode-centric validation compared to the standard validation approach. Our framework could thus evolve as an important tool for posterior validation in inverse problems.
## Acknowledgements
The authors would like to thank Melanie Schellenberg for her contribution to figure design.
## References
* [1] G. Batzolis, J. Stanczuk, C.-B. Schonlieb, and C. Etmann, "Conditional image generation with score-based diffusion models," _arXiv preprint arXiv:2111.13606_, 2021.
* [2] H. Chung, B. Sim, D. Ryu, and J. C. Ye, "Improving diffusion models for inverse problems using manifold constraints," _Advances in Neural Information Processing Systems_, vol. 35, pp. 25683-25696, 2022.
* [4] C. D. Manning, P. Raghavan, and H. Schutze, _Introduction to Information Retrieval_. Cambridge: Cambridge University Press, 2008.
* [41] _Medical Image Computing and Computer-Assisted Intervention - MICCAI 2012_, N. Ayache, H. Delingette, P. Golland, and K. Mori, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012, pp. 590-598.
* [42] M. Schellenberg, J. Grohl, K. K. Dreher, J.-H. Nolke, N. Holzwarth, M. D. Tizabi, A. Seitel, and L. Maier-Hein, "Photoacoustic image synthesis with generative adversarial networks," _Photoacoustics_, vol. 28, p. 100402, 2022.
* [43] J. Grohl, K. K. Dreher, M. Schellenberg, T. Rix, N. Holzwarth, P. Vieten, L. Ayala, S. E. Bohndiek, A. Seitel, and L. Maier-Hein, "SIMPA: an open-source toolkit for simulation and image processing for photonics and acoustics," _Journal of Biomedical Optics_, vol. 27, no. 8, p. 083010, 2022.
* [44] A. Reinke, M. D. Tizabi, M. Baumgartner, M. Eisenmann, D. Heckmann-Notzel, A. E. Kavur, T. Radsch, C. H. Sudre, L. Acion, M. Antonelli _et al._, "Understanding metric-related pitfalls in image analysis validation," _arXiv_, 2023.
* [48] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," _arXiv preprint arXiv:1312.6114_, 2013.
* [49] H. W. Kuhn, "The Hungarian method for the assignment problem," _Naval Research Logistics Quarterly_, vol. 2, no. 1-2, pp. 83-97, 1955.
* [52] S. Kullback and R. A. Leibler, "On information and sufficiency," _The Annals of Mathematical Statistics_, vol. 22, no. 1, pp. 79-86, 1951.
* [53] S. M. Salehi, S. Khan, D. Erdogmus, and A. Gholipour, "Real-time deep registration with geodesic loss," _arXiv preprint arXiv:1803.05982_, 2018.
Tim J. Adler received his Ph.D. degree in computer science from Heidelberg University in 2023. He did his thesis at the division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ). His work focused on uncertainty quantification in multispectral and photoacoustic imaging. He currently holds a position as Senior Data & Applied Scientist at hema.to, where he works on data-driven methods for leukemia diagnosis using flow cytometry.

Jan-Hinrich Nolke received his M.Sc. degree in Physics from Heidelberg University in 2021. He is currently pursuing an interdisciplinary Ph.D. at the division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ). His research focuses on deep learning-based uncertainty quantification in medical imaging.

Annika Reinke received her Ph.D. degree in computer science from Heidelberg University in 2023. She currently holds a position as a postdoctoral researcher at the division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), leading the group "Validation of Intelligent Systems". She further serves as an active member of several international groups such as the MICCAI Special Interest Group on biomedical challenges.

Minu Dietlinde Tizabi received her Doctorate of Medicine from Heidelberg University in 2017. She is a physician, scientist and writer in the division of Intelligent Medical Systems (IMSY) at the German Cancer Research Center (DKFZ).

Sebastian Gruber earned his M.Sc. in Physics from Heidelberg University in 2022. He completed both his Bachelor's and Master's theses at the division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ). He is presently employed as a Machine Learning Engineer at Onlinedoctor, a Swiss-based health technology startup, specializing in skin disease classification.

Dasha Trofimova received her Ph.D. in Physics from Heidelberg University in 2015. After working as a data scientist in industry, she joined the division of Intelligent Medical Systems at the German Cancer Research Center (DKFZ), focusing on medical image analysis. Her goal is to use machine learning to make a real impact in healthcare, especially diagnostics.

Lynton Ardizzone completed his M.Sc. degree in Physics in 2018 at Heidelberg University. Until 2022, he conducted research at the Visual Learning Lab in Heidelberg for his pending Ph.D. degree in computer science. Since 2022, he has served as Head of Machine Learning at CORRESENCE AG.

Paul F. Jaeger is a principal investigator at the Interactive Machine Learning Group at the German Cancer Research Center and Helmholtz Imaging. After studying in Karlsruhe, Stockholm, Melbourne, and Montreal, he received his Ph.D. in Computer Science from the Karlsruhe Institute of Technology. His research focuses on image analysis algorithms, with particular emphasis on human interaction.

Florian Buettner is a professor at Goethe University Frankfurt (Germany) and head of department at the German Cancer Research Center (DKFZ)/German Cancer Consortium (DKTK). His research focuses on the application-driven development of novel machine learning algorithms in oncology.

Ullrich Kothe received the diploma degree in physics from the University of Rostock, Rostock, Germany, and the Ph.D. degree in computer science from the University of Hamburg, Hamburg, Germany. He is currently a professor of computer science in the Interdisciplinary Center for Scientific Computing at Heidelberg University. His research focuses on the connection between machine learning and the sciences. He is particularly interested in interpretable generative model architectures, learning methods, and applications.

Lena Maier-Hein is a full professor at Heidelberg University (Germany) and division head at the German Cancer Research Center (DKFZ). She is managing director of the National Center for Tumor Diseases (NCT) Heidelberg and of the DKFZ Data Science and Digital Oncology cross-topic program. Her research concentrates on machine learning-based biomedical image analysis with a specific focus on surgical data science, computational biophotonics and validation of machine learning algorithms.
## Appendix
### _Use case implementation details_
The following sections describe the implementation details of the models used to solve the inverse problems in the use cases. All deep learning models were implemented using PyTorch. The cINNs further made use of the Framework for Easily Invertible Architectures (FrEIA) [1].
**Synthetic toy example**
* _Architecture:_ The MLP was implemented following the architecture proposed in [2]. More precisely, we implemented a six-layer fully-connected neural network with rectified linear unit (ReLU) activations, 128 dimensions for each hidden layer, and a dropout rate of 0.2. The output was four-dimensional, and we interpreted the first two dimensions as the mean and the second two dimensions as the logarithmic standard deviation of a Gaussian distribution with diagonal covariance over the solution space (i.e., the space of possible roots). The network was trained using maximum likelihood training under the Gaussian assumption, which corresponds to the loss \[L=\mathbb{E}_{(z,n,w)}\left[\frac{1}{2}\sum_{i=1}^{2}\left(e^{-2\delta_{i}}\cdot(z_{i}-\hat{z}_{i})^{2}+2\delta_{i}+\log(2\pi)\right)\right],\] where \(\hat{z},\delta=f_{\Theta}(w,n)\) are the model predictions. We applied Monte Carlo dropout at inference time, which led to multiple predictions \((\hat{z}(k),\delta(k))_{k=1,\dots,N}\), which we aggregated (see the sketch after this list) via \[\hat{z}=\frac{1}{N}\sum_{k=1}^{N}\hat{z}(k),\qquad\hat{\sigma}_{i}(k)=e^{\delta_{i}(k)},\qquad\hat{\sigma}_{i}=\sqrt{\frac{1}{N}\sum_{k=1}^{N}\hat{z}_{i}(k)^{2}-\hat{z}_{i}^{2}+\frac{1}{N}\sum_{k=1}^{N}\hat{\sigma}_{i}(k)^{2}},\] for \(i\in\{1,2\}\) denoting the axes of the standard identification of \(\mathbb{R}^{2}\) with the complex numbers via real and imaginary part. The model was trained for 1000 epochs using the AdamW [3] optimizer, with a learning rate of \(10^{-3}\), a weight decay parameter of \(10^{-5}\), and a batch size of 2048. For inference, we chose \(N=50\) following [2]. The cINN was implemented using affine coupling blocks [4, 5, 6, 7] followed by (fixed) random permutations and a global affine transformation (i.e., an affine transformation with learnable parameters but independent of the input to it) [7, 8]. We used 20 affine coupling blocks with shallow fully-connected subnetworks with a single hidden layer with 256 dimensions and ReLU activations. The scaling of the affine coupling was soft-clamped with a clamping constant of 2.0, and we initialized the global affine transformation with the scaling parameter 0.7 (in FrEIA this parameter is called global_affine_init). The cINN works by transforming the solution of the inverse problem (\(z\) in our case) into a latent space, conditioned on the observables (\(w\) and \(n\)), i.e., the cINN is a map \(g(z;w,n)\) that is invertible with regard to \(z\), given \(w\) and \(n\). During training, a Gaussian distribution on the latent space is enforced via maximum likelihood training: \[L=\mathbb{E}_{(z,n,w)}\left[\frac{1}{2}\sum_{i=1}^{2}g_{i}(z;n,w)^{2}-\log|\det Jg(z;n,w)|\right],\] where \(Jg\) denotes the Jacobi matrix of \(g\). The architecture of the cINN is chosen in such a way that the Jacobi matrix is triangular, such that the log-determinant is efficiently computable. At inference time, we draw samples in the latent space and transform them to the solution space via \(g^{-1}\) (given \(w\) and \(n\)). The cINN was trained for 1000 epochs using the AdamW optimizer with a learning rate of \(10^{-2}\), which was reduced by a factor of 10 after epochs 200, 500, and 900. We used a weight decay parameter of \(10^{-5}\) and a batch size of 2048. In this experiment, we used 1024 latent samples to build the posterior during inference. Before training, \(z\) and \(w\) were normalized to zero mean and unit variance. The one-hot encoded \(n\) was left unchanged.
Furthermore, we applied noise augmentation with a standard deviation of 0.02 to the normalized \(z\) and \(w\) dimensions.
* _Mode Processing:_ For mode detection of the cINN posteriors, we applied the DBSCAN [9] clustering algorithm using the scikit-learn library. DBSCAN was applied to the denormalized data with a minimum sample size of 20 and \(\varepsilon=0.2\).
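For concreteness, the Monte Carlo dropout aggregation formulas above translate into the following minimal numpy sketch; the array contents are random stand-ins and the variable names (`z_hat`, `delta`) are ours, not the authors' code:

```python
import numpy as np

# Hypothetical MC-dropout outputs: N forward passes, each predicting a 2D mean
# z_hat(k) and a 2D logarithmic standard deviation delta(k); shapes (N, 2).
N = 50
z_hat = np.random.randn(N, 2)   # stand-in for the N predicted means
delta = np.random.randn(N, 2)   # stand-in for the N predicted log-sigmas

z_bar = z_hat.mean(axis=0)      # aggregated mean over the N passes
sigma = np.exp(delta)           # per-pass standard deviations
# Variance of the equal-weight Gaussian mixture: spread of the means
# plus the average of the per-pass variances.
sigma_bar = np.sqrt((z_hat**2).mean(axis=0) - z_bar**2 + (sigma**2).mean(axis=0))
```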
**Medical vision use case 1: Pose estimation**
* _Data Set:_ For every CT volume, we sampled 100 different poses of the C-Arm device and computed corresponding DRRs. The virtual C-Arm poses (relative to the 3D volume coordinate system) were determined as follows: The translation along the sagittal, longitudinal, and transverse axis was randomly sampled from a continuous uniform distribution with range [-20 mm, 20 mm]. The two angles representing the rotation around the longitudinal (LAO) and transverse (CRAN) axis of the patient were sampled from a discrete uniform distribution with range [-20\({}^{\circ}\), 20\({}^{\circ}\)] and a step size of 1\({}^{\circ}\). With a probability of 0.5, the LAO angle was shifted by 180\({}^{\circ}\) to capture a possible ambiguity in the projections. We split the data into a disjoint training and test data set (no overlap between patients) with 131,900 and 2,700 samples, respectively. For our validation, we only considered samples with a highly symmetric spine, which resulted in 196 samples.
* _Architecture:_ To eliminate the need for the affine coupling blocks to learn the complete representation of the input images, a conditioning network was applied that transformed the two input images into an intermediate representation. The choice of the architecture of the conditioning network was inspired by [10], where core elements of the registration network are blocks with convolutional layers followed by batch normalization,
dropout layers (\(p\) = 0.2), and ReLU activations. In the first stage of the training, we pre-trained the conditioning network with a mean squared error loss to predict the pose parameters. The cINN consisted of three affine coupling blocks, each followed by a (fixed) random permutation. The subnetworks were implemented as fully-connected networks with a single hidden layer with 128 dimensions, dropout layers \(p\) = 0.02, and tanh activations. Soft clamping was applied with a constant of 1.9. The cINN was trained with a maximum likelihood loss, batch size of 32, and noise and contrast augmentation for both CT volume and 2D projections. The model was trained for 3000 epochs with the Adam optimizer with a weight decay of 10\({}^{-4}\) and an initial learning rate of 10\({}^{-2}\). Every 200 epochs, the learning rate was reduced by a factor of two. During the training of the cINN, the conditioning network was further optimized.
* _Mode Processing:_ At test time, CT volume and 2D projection serve as conditioning input, and repeated sampling from the latent space (here: 1028 samples) results in a full posterior over the five-dimensional parameter space. For mode detection, the DBSCAN clustering algorithm, as implemented in the scikit-learn library, was used. We fixed the parameter \(\varepsilon\) = 0.19 and varied the minimum sample size between 3 and 500 for hyperparameter optimization. For the localization criterion and the assignment strategy, we solely considered the LAO angle, as this is the dimension with expected ambiguous solutions.
**Medical vision use case 2: Functional tissue parameter estimation**
* _Data Set:_ For the functional tissue parameter quantification use case, a total of 1100 synthetic photoacoustic images of the human forearm were simulated (Train:Val:Test; 900:100:100 images) [11]. The simulations were performed on 16 equidistant wavelengths between 700 and 850 nm. The optical Monte Carlo simulation was performed with \(5\cdot 10^{8}\) photons with a spatial resolution of 0.15625mm. The volumes were of dimension: 75mm (transducer dim) x 20mm (planar dim) x 20mm (height). The simulated 3D images were cropped, and additive and multiplicative Gaussian noise components were added to match the contrast of real photoacoustic images. Finally, the spectra of the tissue classes artery and vein were extracted, L\({}_{1}\)-normalized, and used as input for our models.
* _Architecture:_ The original architecture of our baseline method (LSD) was adapted, resulting in a fully connected network with two hidden layers of size 256, dropout (\(p\) = 0.5), and ReLU activations. For the cINN, 20 coupling blocks and (fixed) random permutations were used. The subnetworks were implemented as fully connected networks with one hidden layer of size 1024, dropout (\(p\) = 0.5), and ReLU activations. Soft clamping was applied with \(\alpha\) = 1.0. As the coupling blocks require a minimum channel dimension of two due to the internal dimension splitting, a second dummy dimension with standard Gaussian noise was concatenated to the one-dimensional quantity of interest (oxygenation). Both models were trained with a batch size of 1024 for 100 epochs. The AdamW optimizer was used with a learning rate of \(10^{-3}\) and weight decay of 0.01. After epochs 80 and 90, the learning rate was reduced by a factor of ten. For the cINN, 5000 posterior samples were drawn during inference time. The UniDip clustering algorithm [12] was used with a statistical significance level of \(\alpha\) = 0.5.
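As an illustration of the baseline architecture described above, here is a minimal PyTorch sketch. The input dimension (16, one value per wavelength of the L\({}_{1}\)-normalized spectrum) and all names are our own illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the adapted LSD baseline: a fully connected regression
# network mapping an L1-normalized 16-wavelength spectrum to one sO2 estimate.
class LSDBaseline(nn.Module):
    def __init__(self, n_wavelengths: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_wavelengths, 256), nn.ReLU(), nn.Dropout(p=0.5),
            nn.Linear(256, 256), nn.ReLU(), nn.Dropout(p=0.5),
            nn.Linear(256, 1),
        )

    def forward(self, spectrum: torch.Tensor) -> torch.Tensor:
        return self.net(spectrum)

model = LSDBaseline()
spectra = torch.rand(8, 16)
spectra = spectra / spectra.sum(dim=1, keepdim=True)  # L1 normalization
print(model(spectra).shape)  # torch.Size([8, 1])
```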
|
2309.07991 | Franks' dichotomy for toric manifolds, Hofer-Zehnder conjecture, and
gauged linear sigma model | We prove that for any compact toric symplectic manifold, if a Hamiltonian
diffeomorphism admits more fixed points, counted homologically, than the total
Betti number, then it has infinitely many simple periodic points. This provides
a vast generalization of Franks' famous two or infinity dichotomy for periodic
orbits of area-preserving diffeomorphisms on the two-sphere, and establishes a
conjecture attributed to Hofer-Zehnder in the case of toric manifolds. The key
novelty is the application of gauged linear sigma model and its bulk
deformations to the study of Hamiltonian dynamics of symplectic quotients. | Shaoyun Bai, Guangbo Xu | 2023-09-14T19:07:46Z | http://arxiv.org/abs/2309.07991v2 | # Hofer-Zehnder conjecture for toric manifolds
###### Abstract.
We prove that for any compact toric symplectic manifold, if a Hamiltonian diffeomorphism admits more fixed points, counted homologically, than the total Betti number, then it has infinitely many simple periodic points. This provides a vast generalization of Franks' famous two or infinity dichotomy for periodic orbits of area-preserving diffeomorphisms on the two-sphere, and establishes a conjecture attributed to Hofer-Zehnder in the case of toric manifolds. The key novelty is the application of gauged linear sigma model and its bulk deformations to the study of Hamiltonian dynamics of symplectic quotients.
The second author is supported by NSF DMS-2345030.
Here a point \(x\in X\) is called a simple periodic point (of period \(k\)) if \(\phi^{k}(x)=x\) for some positive integer \(k\) and \(\phi^{l}(x)\neq x\) for all \(l<k\). The number \(N(\phi,\mathbb{Q})\) can be viewed as a quantity which measures the number of fixed points of a generic small Hamiltonian perturbation of \(\phi\). In particular, the following statement holds because \(\dim_{\mathbb{Q}}\mathit{HF}^{\mathrm{loc}}(\phi,x;\mathbb{Q})=1\) if \(x\) is nondegenerate.
**Corollary 1.1**.: _If all the fixed points of \(\phi\) are nondegenerate, i.e., for any \(x\in\mathrm{Fix}(\phi)\), we have \(\det(D\phi_{x}-id)\neq 0\), and the inequality_
\[\#\mathrm{Fix}(\phi)>\sum_{i=0}^{2n}\dim_{\mathbb{Q}}H_{i}(X;\mathbb{Q})\]
_holds, then \(\phi\) has infinitely many simple periodic points._
Theorem A is a high-dimensional generalization of a famous result due to Franks [10, 11]: any area-preserving homeomorphism of \(S^{2}\) has either two or infinitely many simple periodic points. Just as in Franks' theorem, the assumption (1.1) is necessary. Indeed, for a toric manifold \((X,\omega)\) with a Hamiltonian \(T^{n}\)-action, a generic element of the torus \(T^{n}\), which can be regarded as a higher-dimensional analogue of an irrational rotation on \(S^{2}\), has exactly \(\sum_{i=0}^{2n}\dim_{\mathbb{Q}}H_{i}(X;\mathbb{Q})\) many simple periodic points given by the fixed points of the \(T^{n}\)-action. In fact, using the Anosov-Katok method, the paper [12] constructed a Hamiltonian diffeomorphism on \(X\) with exactly \(1+\sum_{i=0}^{2n}\dim_{\mathbb{Q}}H_{i}(X;\mathbb{Q})\) ergodic measures, which come from the measure induced by the volume form \(\omega^{n}\) and the Dirac measures supported at the toric fixed points.
Theorem A broadens the scope of the recent investigation of generalizations of Franks' dichotomy to higher dimensional symplectic manifolds initiated by Shelukhin [14], who proved a similar result on the existence of infinitely many periodic orbits of Hamiltonian diffeomorphisms defined over monotone symplectic manifolds with semisimple quantum homology. Our main theorem resolves, in the case of toric manifolds, a visionary conjecture set forth by Hofer and Zehnder [15, Page 263], which asserts the existence of infinitely many simple periodic points of a Hamiltonian diffeomorphism \(\phi\) of a compact symplectic manifold whenever \(\#\mathrm{Fix}(\phi)\) exceeds the lower bound provided by the Arnold conjecture.
### Context from Hamiltonian dynamics and symplectic geometry
As mentioned in the statement of the Hofer-Zehnder conjecture, one rigidity aspect of Hamiltonian diffeomorphisms is governed by the _Arnold conjecture_[1] (see various proofs in [11][12][13][14][15][16][17][18][19][20][21]), which implies that the inequality \(N(\phi,\mathbb{Q})\geq\sum_{i=0}^{2n}\dim_{\mathbb{Q}}H_{i}(X;\mathbb{Q})\) is always true. Notice that one does not detect simple periodic points of higher periods by applying the Arnold conjecture to \(\phi^{k}\), as fixed points of \(\phi\) are automatically fixed points of \(\phi^{k}\).
On the other hand, Hamiltonian diffeomorphisms "tend" to have infinitely many simple periodic points. The _Conley conjecture_ asserts the infinitude of the number of simple periodic points for all Hamiltonian diffeomorphisms on a certain class of symplectic manifolds (originally only conjectured for the torus [13]). This conjecture has been proved for wider and wider classes of manifolds, see [14][15][16][17, 18, 19] and the survey article [18]. However, the Conley conjecture does not hold for all symplectic manifolds, as easily seen from the case of irrational rotations on \(S^{2}\) (or general toric manifolds).
While the Conley conjecture is an unconditional statement about Hamiltonian diffeomorphisms on certain symplectic manifolds, the Hofer-Zehnder conjecture identifies a simple condition on the Hamiltonian diffeomorphism responsible for the infinitude of periodic points: the number of fixed points (counted homologically) is strictly greater than the "Arnold lower bound". A broader interpretation of this condition is the existence of "unnecessary" fixed points, such
as non-contractible ones. In contrast to the intensive research activity surrounding the Conley conjecture, the Hofer-Zehnder conjecture has only been understood in limited cases. Except for the cases covered in [10] (including projective spaces, Grassmannians, and their monotone products), the Hofer-Zehnder conjecture is known for weighted projective spaces [11], some cases related to non-contractible orbits ([12][13][14][15, 16]), or hyperbolic fixed points ([13]). Our Theorem A covers a large family of new instances of the original homological version of the Hofer-Zehnder conjecture. Very interestingly, our proof actually indicates a surprising connection between mirror symmetry and Hamiltonian dynamics; see the discussions in Section 1.3.
The holomorphic curve method, most notably the package of Floer homology, has been a dominant tool in the study of Hamiltonian dynamics. The developments around the Conley and Hofer-Zehnder conjectures have shown that holomorphic curves exert a more subtle influence on Hamiltonian dynamics than merely serving as a tool. For example, the Conley conjecture is true for Calabi-Yau or negatively monotone manifolds. These manifolds do not have "very many" holomorphic curves. Recent progress [14][15] further reveals a close connection between the failure of the Conley conjecture and the effect of holomorphic curves (which is also related to the _Chance-McDuff conjecture_). The semisimplicity condition in the proof of the Hofer-Zehnder conjecture [10] can also be viewed as a characterization of the abundance of holomorphic curves. Note that toric manifolds underlie rational algebraic varieties, so they provide examples demonstrating this phenomenon. It will be very interesting to have a more precise and systematic formulation of such a mechanism for general symplectic manifolds.
### Key ingredient: GLSM
While inspired by the method of Shelukhin [10], our resolution of the Hofer-Zehnder conjecture for toric symplectic manifolds is largely based on introducing a new player in Hamiltonian dynamics: _gauged linear sigma model (GLSM)_.
Gauged linear sigma model was originally introduced by Witten [16] in the physics context in much greater generality. In the current situation, the basic usage of the GLSM is to replace holomorphic curves in a toric manifold \(X\) by certain gauge-theoretic objects called _vortices_. This is possible because \(X\) is the symplectic quotient (or GIT quotient) of a vector space \(V\cong\mathbb{C}^{N}\) by a torus \(K\cong(S^{1})^{N-n}\) with a moment map \(\mu\). In this situation, a vortex over any Riemann surface \(\Sigma\) consists of a principal \(K\)-bundle \(P\to\Sigma\), a connection \(A\in\mathcal{A}(P)\), and a section \(u\) of the associated vector bundle \(P(V):=(P\times V)/K\), solving the _vortex equation_
\[\overline{\partial}_{A}u=0,\qquad *F_{A}+\mu(u)=0.\]
Mathematically, the general symplectic vortex equation was first introduced by Cieliebak-Gaio-Salamon [11] and Mundet [16, 17], with many related technical works by Cieliebak-Gaio-Mundet-Salamon [14], Ott [18], Mundet-Tian [19], the second author [15], Ziltener [11, 12, 13], Venugopalan [20], etc. In particular, one can use the Hamiltonian perturbed vortex equation over surfaces with cylindrical ends to develop the _vortex Floer theory_ (see Frauenfelder [10, 11] and the second author [15]).
Many aspects of ordinary Hamiltonian Floer theory have counterparts in vortex Hamiltonian Floer theory, including continuation maps and the energy filtration. Accordingly, recent advances in quantitative Floer theory, especially the theory of persistence modules [12, 13], can be adapted to the vortex context. For readers who are not familiar with this variant of Floer theory, just keep in mind that the chain complex underlying the vortex Hamiltonian Floer homology is still freely generated by \(1\)-periodic orbits of the given Hamiltonian diffeomorphism, and the differentials are defined by counting solutions to Hamiltonian-perturbed vortex equations instead of Floer equations, provided that all the \(1\)-periodic orbits are nondegenerate. For the general
isolated degenerate case, the theory of _local Floer homology_ carries over to the vortex context without much difficulty; therefore all of our results hold in this generality.
One remarkable feature of the vortex Floer theory is that we can define Floer theories over the _integers_ in our setting. Indeed, as the target space \(V\) is a symplectic vector space, the Uhlenbeck-Gromov-Floer compactification of moduli spaces of solutions to vortex equations does not require adding configurations with sphere bubbles, much as in the case of symplectically aspherical manifolds. Besides simplifying the technical arguments for achieving transversality, the ability of reducing to characteristic \(p\) allows us to extend the scope of applicability of _symplectic Smith theory_ [12, 13] beyond the exact or semi-positive setting.
With the above explanation, our main results concerning the structural aspects of (filtered) vortex Hamiltonian Floer theory can be summarized as follows. Given a commutative ring \(R\), let \(\Lambda=\Lambda_{R}\) be the upward Novikov ring
\[\Lambda_{R}=\Big{\{}\sum_{i=1}^{\infty}a_{i}T^{g_{i}}\ |\ g_{i}\in\mathbb{R},\ a_ {i}\in R,\ \lim_{i\to\infty}g_{i}=+\infty\Big{\}}.\]
Denote by \(\mathbf{z}_{1},\ldots,\mathbf{z}_{N}\) the \(K\)-equivariant degree 2 cohomology classes dual to the coordinate hyperplanes \(V_{1},\cdots,V_{N}\) in \(V\).
**Theorem B**.: _There exists a bulk-deformation of the form_
\[\mathfrak{b}=\sum_{j=1}^{N}\log c_{j}\cdot\mathbf{z}_{j}\ \mathrm{where}\ c_{j} \in\mathbb{Z}[\mathbf{i}] \tag{1.2}\]
_satisfying the following properties._
1. _The_ \(\mathfrak{b}\)_-deformed vortex quantum homology algebra_ \(\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\overline{\mathbb{Q}}})\) _is semisimple over_ \(\Lambda_{\overline{\mathbb{Q}}}\)_, with the number of idempotent summands, all of which are 1-dimensional, equal to_ \(\sum_{i=0}^{2n}\dim_{\mathbb{Q}}H_{i}(X;\mathbb{Q})\)_._
2. _The operator_ \(\mathbb{E}_{\mathfrak{b}}:\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{ \overline{\mathbb{Q}}})\to\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{ \overline{\mathbb{Q}}})\) _given by the quantum multiplication with the equivariant first Chern class has distinct nonzero eigenvalues._
_Remark 1.2_.: The reader may wonder about the legitimacy of taking the logarithm of elements of \(\mathbb{Z}[\mathbf{i}]\). In reality, we will take the exponential of the intersection number between the bulk \(\mathfrak{b}\) and Riemann surfaces to deform Floer-theoretic operations in the spirit of the divisor axiom in Gromov-Witten theory, and we take the above formal expression for the sake of conciseness.
We explain the central ingredient in the proof of Theorem B: the GLSM version of the _closed string mirror symmetry_. To find such a bulk and calculate the quantum homology ring, we develop _closed-open field theory_, in particular, the _closed-open map_
\[\mathrm{CO}^{\mathfrak{b}}:\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{ \overline{\mathbb{Q}}})\to\mathit{HH}^{\bullet}(\mathcal{F}^{K}_{\mathfrak{b} }(V),\mathcal{F}^{K}_{\mathfrak{b}}(V))\]
where \(\mathcal{F}^{K}_{\mathfrak{b}}(V)\) is an equivariant version of the Fukaya category: its objects are roughly Lagrangians in the toric manifold \(X\), and its structural maps are defined via equivariant counts of holomorphic disks upstairs in \(V\). Hence we reduce the calculation of the quantum homology algebra to the determination of Hochschild cohomology by showing that the closed-open map is a unital ring isomorphism. Moreover, the structural coefficients of the \(A_{\infty}\) operations are governed by the mirror superpotential, which is a Laurent polynomial \(W:(\mathbb{C}^{*})^{N}\to\mathbb{C}\). The Hochschild cohomology can accordingly be computed as the Jacobian ring of \(W\), and the bulk-deformations can be understood as taking certain unfoldings of \(W\) by adding elements from the Jacobian ring. It is well-known that a generic unfolding of \(W\) has only nondegenerate critical points, and such an unfolding can be
realized by adjusting the bulk-deformation, implying that the quantum homology is generically semisimple. As for the statement on the first Chern class, it follows from the folklore principle, usually attributed to Auroux-Kontsevich-Seidel (see [1, Section 6] and [1, Lemma 2.7]), that the generalized eigenvalues of quantum multiplication with the first Chern class are in one-to-one correspondence with the critical values of the mirror superpotential.
Such a closed-string mirror symmetry statement has been established using ordinary pseudo-holomorphic curves and Floer theory. For general toric manifolds, the symplectic version of the mirror superpotential is defined by counting _stable_ pseudoholomorphic disks, and usually has infinitely many terms (see [1, 1, 2, 3]). To Morsify such a superpotential, one usually needs very general bulk deformations. This in turn demonstrates another advantage of the GLSM: the mirror superpotential takes a rather simple form. As shown by Woodward [10], the mirror superpotential in GLSM agrees with the mirror superpotential given by Givental [11] and Hori-Vafa [12]. One can Morsify this superpotential (called the Givental-Hori-Vafa potential) by using only "small" bulk deformations, i.e., divisor classes.
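To illustrate with a standard example (stated here for the reader's convenience and not taken from the body of this paper): for \(X=\mathbb{CP}^{n}\), the Givental-Hori-Vafa potential is
\[W(z_{1},\ldots,z_{n})=z_{1}+\cdots+z_{n}+\frac{q}{z_{1}\cdots z_{n}},\]
defined on \((\mathbb{C}^{*})^{n}\), where \(q\) records the symplectic area of a line. The critical point equations \(\partial_{z_{i}}W=1-\frac{q}{z_{1}\cdots z_{n}z_{i}}=0\) force \(z_{1}=\cdots=z_{n}=\zeta q^{\frac{1}{n+1}}\) with \(\zeta^{n+1}=1\), giving \(n+1\) nondegenerate critical points with distinct critical values \((n+1)\zeta q^{\frac{1}{n+1}}\). Hence \(W\) is already Morse, its Jacobian ring is semisimple of rank \(n+1=\sum_{i}\dim_{\mathbb{Q}}H_{i}(\mathbb{CP}^{n};\mathbb{Q})\), and no bulk deformation is needed in this monotone case.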
### Proof of the main theorem
Once the crucial Theorem B is established, the rest of the proof can be streamlined in the same way as [13]. Many key arguments are in fact algebraic, while the other, geometric arguments need nontrivial but straightforward extensions to the vortex setting.
The first step is to take mod \(p\) reductions of the vortex quantum homology algebra. Notice that as the coefficients of \(\mathfrak{b}\) are integral, the mod \(p\) deformed counts also define a vortex Hamiltonian Floer homology \(\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\overline{\mathbb{F}}_{p}})\), where \(\overline{\mathbb{F}}_{p}\) is the algebraic closure of \(\mathbb{F}_{p}\cong\mathbb{Z}/p\mathbb{Z}\). By a purely algebraic argument (see Theorem 3.9), one obtains the following corollary of Theorem B.
**Corollary 1.3**.: _There exists \(p_{0}>0\) such that for all primes \(p\geq p_{0}\), the vortex quantum homology algebra \(\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\overline{\mathbb{F}}_{p}})\) is semisimple with the number of idempotent summands, all of which are 1-dimensional, equal to \(\sum_{i=0}^{2n}\dim_{\mathbb{Q}}H_{i}(X;\mathbb{Q})\)._
We also need two quantitative results about the filtered theories in finite characteristics. Recall that to each Floer-Novikov complex such as the bulk-deformed vortex Floer complex, one can associate a **barcode**. In our case one needs the general formulation by Usher-Zhang [14]. We consider two associated numerical invariants: the **boundary depth**, which is the length of the longest finite bar, and the **total bar length**, which is the sum of lengths of all finite bars.
**Theorem C**.: _Let \(\mathfrak{b}\) be a bulk-deformation satisfying Theorem B and \(p_{0}\) be the one from Corollary 1.3. Then there exists \(C>0\) satisfying the following condition. Let \(\mathit{VCF}^{\mathfrak{b}}_{\bullet}(H;\Lambda_{\overline{\mathbb{F}}_{p}})\) be the \(\mathfrak{b}\)-deformed filtered vortex Floer chain complex associated to a nondegenerate Hamiltonian \(H\) on \(X\) and let \(\beta^{\mathfrak{b}}_{(p)}(H)\) be its boundary depth; then for all \(p\geq p_{0}\),_
\[\beta^{\mathfrak{b}}_{(p)}(H)\leq C. \tag{1.3}\]
The barcodes (with bottleneck distance) have a Lipschitz dependence on Hamiltonian diffeomorphisms (with the Hofer metric). Hence the above uniform bound on boundary depth extends to all Hamiltonians on the toric manifold \(X\). On the other hand, the total bar length, denoted by \(\tau^{\mathfrak{b}}_{(p)}(-)\) in characteristic \(p\), can be extended to Hamiltonian diffeomorphisms with isolated fixed points. In particular, the barcode of a possibly degenerate Hamiltonian diffeomorphism \(\phi\) with
isolated fixed points is still finite, and the number of bar ends agrees with the homological count of fixed points \(N(\phi;\overline{\mathbb{F}}_{p})\) (see Theorem 5.9).
The last key input concerns the growth of the total bar length under prime iterations of Hamiltonian diffeomorphisms. Suppose \(\phi:X\to X\) is a Hamiltonian diffeomorphism such that all prime iterations of \(\phi\) have isolated fixed points. If \(\phi\) is the time-\(1\) map of a Hamiltonian \(H:S^{1}\times X\to\mathbb{R}\), then the \(p\)-fold iteration \(\phi^{p}\) is the time-\(1\) map of \(H^{(p)}_{t}:=pH_{pt}\).
**Theorem D**.: _For any bulk \(\mathfrak{b}\) for the form (1.2) and any odd prime \(p\), we have the inequality_
\[\tau^{\mathfrak{b}}_{(p)}(H^{(p)})\geq p\cdot\tau^{\mathfrak{b}}_{(p)}(H). \tag{1.4}\]
_Remark 1.4_.: The above inequality should also hold for \(p=2\), but we do not give a full treatment of this case because it would introduce extra notation in the discussion of equivariant Floer theory. In fact, for the proof, we do not need the \(p=2\) version of (1.4) anyway.
With the above technical ingredients, establishing the Hofer-Zehnder conjecture for toric manifolds is a matter of elementary arguments.
Proof of Theorem A.: Let \(\phi:X\to X\) be a Hamiltonian diffeomorphism satisfying (1.1). By Proposition 5.2, one can replace the ordinary local Floer homology by the (bulk-deformed) local vortex Floer homology. One also knows from Proposition 4.8 and Theorem 4.12 that the total rank of rational homology of \(X\) agrees with the rank of the bulk-deformed vortex Floer homology of \(V\). Hence (1.1) can be rewritten as
\[N(\phi,\mathbb{Q})=\sum_{x\in\operatorname{Fix}(\phi)}\dim_{\mathbb{Q}}\mathit{VHF}^{\operatorname{loc}}(\phi,x;\mathbb{Q})>\dim_{\Lambda_{\mathbb{Q}}}\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\mathbb{Q}}).\]
Because the (local) vortex Floer homologies are defined over the integers (see Section 5), by the universal coefficient theorem, for \(p\) sufficiently large,
\[\sum_{x\in\operatorname{Fix}(\phi)}\dim_{\overline{\mathbb{F}}_{p}}\mathit{ VHF}^{\operatorname{loc}}(\phi,x;\overline{\mathbb{F}}_{p})>\dim_{\Lambda_{ \overline{\mathbb{F}}_{p}}}\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{ \overline{\mathbb{F}}_{p}}).\]
Suppose on the contrary that \(\phi\) has only finitely many periodic points. Then for any sufficiently large prime \(p\) and all \(k\geq 1\), \(\operatorname{Fix}(\phi^{p^{k}})=\operatorname{Fix}(\phi)\). Then by Theorem 5.1, one has
\[\sum_{x\in\operatorname{Fix}(\phi^{p^{k}})}\dim_{\overline{\mathbb{F}}_{p}} \mathit{VHF}^{\operatorname{loc}}(\phi^{p^{k}},x;\overline{\mathbb{F}}_{p})= \sum_{x\in\operatorname{Fix}(\phi)}\dim_{\overline{\mathbb{F}}_{p}}\mathit{ VHF}^{\operatorname{loc}}(\phi,x;\overline{\mathbb{F}}_{p})>\dim_{\Lambda_{ \overline{\mathbb{F}}_{p}}}\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{ \overline{\mathbb{F}}_{p}}).\]
Consider the barcode of \(\phi^{p^{k}}\) coming from the bulk-deformed vortex Floer theory (over the Novikov field \(\Lambda_{\overline{\mathbb{F}}_{p}}\)). The above implies that the number of finite bars is positive and independent of the iteration \(p^{k}\). The uniform bound on the boundary depth (length of the longest finite bar) given by Theorem C implies that the total bar length \(\tau^{\mathfrak{b}}_{(p)}(\phi^{p^{k}})\) is uniformly bounded.
On the other hand, by Theorem D, for any \(k\geq 1\), the total bar length grows as
\[\tau^{\mathfrak{b}}_{(p)}(\phi^{p^{k}})\geq p^{k}\cdot\tau^{\mathfrak{b}}_{(p )}(\phi)\geq Cp^{k}>0.\]
This is a contradiction. Hence \(\phi\) must have infinitely many periodic points.
Because the above argument works for any \(p\geq p_{0}\), we know that the number of periodic points of \(\phi\) grows like \(\frac{k}{\log(k)}\) as \(k\to\infty\), as a result of the prime number theorem.
_Remark 1.5_.: Arguments of the above form first appeared in [20, Section 8], which we reproduce in our context for completeness. As noted above, Shelukhin's result on the Hofer-Zehnder conjecture relies on the assumptions that the ambient symplectic manifold is monotone and that the quantum homology is semisimple, which respectively account for the inequalities (1.4) (the monotonicity condition allows one to define Floer theory integrally using classical methods) and (1.3) (which will be discussed in more detail in the body of this paper). For general toric symplectic manifolds, traditional Hamiltonian Floer homology is only defined over the rationals, which poses difficulties for establishing symplectic Smith-type inequalities. Moreover, the quantum homology of toric symplectic manifolds fails to be semisimple in general, which is already the case even for Fano/monotone toric manifolds [10].
### Outlook and speculations
We are very surprised to find that classical considerations from mirror symmetry can be quite useful for investigations in Hamiltonian dynamics. We expect such a connection could open up new avenues for future research. As mentioned above, GLSM can more generally be used to study the symplectic topology and Hamiltonian dynamics of other symplectic/GIT quotients or complete intersections in them, the latter of which requires studying the gauged Witten equation (see [14, 14]) with Hamiltonian perturbations. It is conceivable that one could resolve the Hofer-Zehnder conjecture for a broader class of symplectic quotients, provided that a certain form of closed string mirror symmetry can be established.
On a different note, besides deploying tools like GLSM, there are some recent advances [1, 2] on defining Hamiltonian Floer theory over the integers for general symplectic manifolds. The methods from _loc. cit._ are general enough for us to expect that a version of the symplectic Smith-type inequality should hold using such a theory. Deriving dynamical applications from such a toolkit, including proving the Hofer-Zehnder conjecture in more general settings, is another topic for future research.
### Outline of the paper
The following provides an outline of this paper.
* Basic notions related to toric manifolds are recalled in Section 2, which also includes an introduction to the symplectic vortex equations arising from GLSM.
* In Section 3, various algebraic preliminaries relevant for our purpose, including semisimple algebras over Novikov rings defined over fields with possibly positive characteristics, abstract setups for filtered Floer theories, persistence modules, and \(A_{\infty}\) algebras and their Hochschild cohomology, are recalled systematically.
* A (filtered) Hamiltonian Floer theory package in the vortex setting is recorded in Section 4. Most notably, we introduce bulk deformations in vortex Hamiltonian Floer theory which allow us to incorporate ideas from generic semisimplicity of quantum homology to derive applications in quantitative symplectic topology.
* In Section 5, we introduce local Floer theory in the vortex setting in order to establish Theorem A for Hamiltonian diffeomorphisms with isolated but degenerate fixed points.
* The main purpose of Section 6 is to prove Theorem 6.2 = Theorem C, which ensures a uniform upper bound on the boundary depth of the bulk-deformed vortex Hamiltonian Floer persistence module of any Hamiltonian diffeomorphism provided that the bulk-deformed vortex quantum homology is semisimple.
* In Section 7, we develop \(\mathbb{Z}/p\)-equivariant vortex Hamiltonian Floer theory by adapting the work [21, 22] in the GLSM setting. Theorem D = Theorem 7.1 is proven as a consequence by appealing to the work of Shelukhin [20].
* We turn our attention to Lagrangian Floer theory in Section 8. The key result is to demonstrate the existence of a "convenient" bulk deformation (cf. Definition 8.20) whose
associated Fukaya category (in the GLSM setting) takes a very simple form, such that its Hochschild cohomology is a semisimple algebra.
* Lastly, in Section 9, Theorem 9.1 = Theorem B is proven by showing that the closed-open string map is a unital ring isomorphism.
**Acknowledgements.** : We thank Marcelo Atallah, Hiroshi Iritani, Han Lou, Egor Shelukhin, Nick Sheridan, Michael Usher, and Chris Woodward for useful discussions and email correspondences. The first-named author is grateful to the Simons Center for Geometry and Physics for its warm hospitality during Spring 2023.
## 2. Geometric Preliminaries
We recall basic notions about toric symplectic manifolds and symplectic vortex equations.
### Toric manifolds as symplectic quotients
We recall the notion of symplectic reduction/quotients. Let \(K\) be a compact Lie group with Lie algebra \(\mathfrak{k}\). Let \((V,\omega_{V})\) be a symplectic manifold. A smooth \(K\)-action on \(V\) is called a **Hamiltonian action** if there exists a **moment map**
\[\mu:V\to\mathfrak{k}^{*}\]
satisfying
1. \(\mu\) is \(K\)-equivariant (with respect to the co-adjoint action on \(\mathfrak{k}^{*}\)).
2. For each \(\xi\in\mathfrak{k}\), let the infinitesimal action of \(\xi\) be \(\mathcal{X}_{\xi}\). Then \[\omega_{V}(\mathcal{X}_{\xi},\cdot)=d\langle\mu,\xi\rangle.\]
It follows that the level set \(\mu^{-1}(0)\) is \(K\)-invariant. Define the **symplectic reduction** of \(V\) (with respect to the \(K\)-action and the moment map) to be
\[X:=\mu^{-1}(0)/K.\]
We always assume that \(0\) is a regular value of \(\mu\) and the \(K\)-action on \(\mu^{-1}(0)\) is free. This assumption implies that \(X\) is a smooth manifold. In this case, \(X\) carries a canonically induced symplectic form \(\omega_{X}\).
When \(V\) has a \(K\)-invariant integrable almost complex structure \(J_{V}\), the \(K\)-action extends to a holomorphic action of the complexification \(K^{\mathbb{C}}\). In this case, under certain extra conditions, the Kempf-Ness theorem says that the symplectic reduction can be identified with the geometric invariant theory (GIT) quotient.
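As a standard orienting example, with the normalization of moment maps used below: let \(K=S^{1}\) act diagonally on \(V=\mathbb{C}^{N}\) with moment map \(\mu(x)=\pi(|x_{1}|^{2}+\cdots+|x_{N}|^{2})-\lambda\) for a fixed \(\lambda>0\). Then \(\mu^{-1}(0)\) is the sphere of radius \(\sqrt{\lambda/\pi}\), the \(S^{1}\)-action on it is free, and
\[X=\mu^{-1}(0)/S^{1}\cong\mathbb{CP}^{N-1},\]
which agrees with the GIT quotient \((\mathbb{C}^{N}\setminus\{0\})/\mathbb{C}^{*}\).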
#### 2.1.1. Compact symplectic toric manifolds
Symplectic toric manifolds can be realized as symplectic quotients of a vector space. We provide a minimal description of symplectic toric manifolds necessary for this paper. A \(2n\)-dimensional compact symplectic toric manifold \(X\) is described by a convex polytope \(P\subset\mathbb{R}^{n}\) satisfying the following conditions.
1. For each face \(\partial_{j}P\) of \(P\), there are \(\mathbf{v}_{j}\in\mathbb{Z}^{n}\) and \(\lambda_{j}\in\mathbb{R}\) such that \(\mathbf{v}_{j}\) is an inward normal vector and the face is defined by \[\partial_{j}P=\{\mathbf{u}\in\mathbb{R}^{n}\ |\ \langle\mathbf{v}_{j}, \mathbf{u}\rangle=\lambda_{j}\}.\]
2. For each vertex of \(P\), the normal vectors \(\mathbf{v}_{j_{1}},\dots,\mathbf{v}_{j_{n}}\) of all adjacent faces form a \(\mathbb{Z}\)-basis of \(\mathbb{Z}^{n}\).
In this paper, denote by \(N\) the number of faces of \(P\). We can realize \(X\) as the symplectic quotient of \(\mathbb{C}^{N}\) (with the standard symplectic form) by the \(N-n\) dimensional torus \(K=T^{N-n}=(S^{1})^{N-n}\). The collection of vectors \(\mathbf{v}_{1},\ldots,\mathbf{v}_{N}\) defines a linear map
\[\tilde{\pi}_{K}:\mathbb{R}^{N}\to\mathbb{R}^{n}\]
which sends \(\mathbb{Z}^{N}\) onto \(\mathbb{Z}^{n}\). Hence it induces a surjective group homomorphism
\[\pi_{K}:T^{N}\to T^{n}.\]
Let \(K\) be the kernel of \(\pi_{K}\). Then \(K\) acts on \(\mathbb{C}^{N}\) as a subgroup of \(T^{N}\). Notice that for the standard \(\widehat{K}=T^{N}\)-action, the moment map can be written as
\[\widehat{\mu}(x_{1},\ldots,x_{N})=\left(\pi|x_{1}|^{2}-\lambda_{1},\ldots,\pi| x_{N}|^{2}-\lambda_{N}\right)\in\mathbb{R}^{N}\cong\widehat{\mathfrak{k}}^{*}.\]
Then the moment map of the \(K\)-action is simply the composition
\[\mu=\iota^{*}\circ\widehat{\mu}:\mathbb{C}^{N}\to\mathfrak{k}^{*},\]
where \(\iota:\mathfrak{k}\hookrightarrow\widehat{\mathfrak{k}}\cong\mathbb{R}^{N}\) is the inclusion of Lie algebras.
On the other hand, there is a residual torus action on \(X\) by the quotient \(T^{n}\), which is the torus action that usually appears in discussions of toric manifolds. The associated moment map is denoted by
\[\pi_{X}:X\to\mathbb{R}^{n} \tag{2.1}\]
whose range is actually the moment polytope \(P\).
Notice that if one translates the moment polytope \(P\) in \(\mathbb{R}^{n}\) by a vector, then this does not change the moment map \(\mu\), and hence does not change the symplectic form on \(X\).
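For instance, for \(X=\mathbb{CP}^{2}\) equipped with a multiple \(\lambda>0\) of the Fubini-Study form, one can take
\[P=\{(u_{1},u_{2})\in\mathbb{R}^{2}\ |\ u_{1}\geq 0,\ u_{2}\geq 0,\ u_{1}+u_{2}\leq\lambda\},\]
with inward normals \(\mathbf{v}_{1}=(1,0)\), \(\mathbf{v}_{2}=(0,1)\), \(\mathbf{v}_{3}=(-1,-1)\) and \(\lambda_{1}=\lambda_{2}=0\), \(\lambda_{3}=-\lambda\). Here \(N=3\), \(n=2\), and \(K=\ker\pi_{K}\cong S^{1}\) is the diagonal circle in \(T^{3}\), recovering the presentation of \(\mathbb{CP}^{2}\) as a symplectic quotient of \(\mathbb{C}^{3}\).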
### Symplectic vortex equation
The symplectic vortex equation was originally introduced by Cieliebak-Gaio-Salamon [10] and Mundet [12]. It is a generalization of the pseudo-holomorphic curve equation to the equivariant setting. Here we briefly recall its setup and some basic analytical results.
#### 2.2.1. Gauged maps and vortex equation
Let \(V\) be the complex vector space acted on by the complex torus \(G=K^{\mathbb{C}}\) with moment map \(\mu\) under the \(K\)-action and symplectic quotient the toric manifold \(X\). Let \(\Sigma\) be a Riemann surface. A **gauged map** from \(\Sigma\) to \(V\) is a triple \(\mathfrak{u}=(P,A,u)\) where \(P\to\Sigma\) is a principal \(K\)-bundle, \(A\in\mathcal{A}(P)\) is a connection on \(P\), and \(u\) is a section of the associated vector bundle \(P(V):=P\times_{K}V\). The group of gauge transformations is \(\mathcal{G}(P)\), which, in the abelian case, is the group of smooth maps
\[g:\Sigma\to K,\]
and it acts on gauged maps by
\[g^{*}\mathfrak{u}=g^{*}(P,A,u)=(P,g^{*}A,g^{*}u)=(P,A+g^{-1}dg,g^{-1}u).\]
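Concretely, when \(K=S^{1}\) and \(P\) is the trivial bundle, a connection may be identified with a \(1\)-form \(a\in\Omega^{1}(\Sigma,i\mathbb{R})\), a section with a map \(u:\Sigma\to V\), and a gauge transformation with a map \(g=e^{i\theta}:\Sigma\to S^{1}\); the action above then reads \(g^{*}(a,u)=(a+i\,d\theta,\,g^{-1}u)\).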
We need three quantities to define the vortex equation. First the covariant derivative of \(u\) is a section
\[d_{A}u\in\Omega^{1}(P,u^{*}TV)\]
which descends to an element in \(\Omega^{1}(\Sigma,u^{*}TV/K)\). There are also the curvature and the moment potential
\[F_{A}\in\Omega^{2}(\Sigma,\mathrm{ad}P), \mu(u)\in\Omega^{0}(\Sigma,\mathrm{ad}P^{*}).\]
By choosing an invariant inner product on the Lie algebra \(\mathfrak{k}\) one can identify \(\mathrm{ad}P\cong\mathrm{ad}P^{*}\); by choosing a volume form \(\nu_{\Sigma}\) one can identify \(\Omega^{2}\cong\Omega^{0}\). The gauged map \(\mathfrak{u}\) is called a **vortex** if
\[\overline{\partial}_{A}u=0, *F_{A}+\mu(u)=0. \tag{2.2}\]
Here \(\overline{\partial}_{A}u\) is the \((0,1)\)-part of the covariant derivative \(d_{A}u\). Both equations are invariant under gauge transformations. The energy of a vortex is defined to be
\[E(\mathfrak{u})=\frac{1}{2}\int_{\Sigma}\left(\|d_{A}u\|^{2}+\|F_{A}\|^{2}+\| \mu(u)\|^{2}\right)\nu_{\Sigma}.\]
Analogous to pseudoholomorphic curves, vortices satisfy an energy identity. Suppose \(\Sigma\) is closed. Then each gauged map \(\mathfrak{u}\) represents an equivariant homology class \([\mathfrak{u}]\in H^{K}_{2}(V;\mathbb{Z})\) defined as follows. The section \(u:\Sigma\to P(V)\) can be identified with a \(K\)-equivariant map \(\tilde{u}:P\to V\). Let \(EK\to BK\) be the universal \(K\)-bundle. The classifying map of \(P\to\Sigma\) is a map \(\iota:\Sigma\to BK\) which is covered by a bundle map \(\tilde{\iota}:P\to EK\). Then the equivariant map \((\tilde{\iota},\tilde{u}):P\to EK\times V\) descends to a continuous map from \(\Sigma\) to \((EK\times V)/K\), which represents a class \([\mathfrak{u}]\in H^{K}_{2}(V;\mathbb{Z})\). In the toric case, this class is just the degree of the principal bundle \(P\to\Sigma\). Then for any gauged map \(\mathfrak{u}=(P,A,u)\), one has
\[E(\mathfrak{u})=\langle\omega^{K},[\mathfrak{u}]\rangle+\|\overline{\partial} _{A}u\|^{2}_{L^{2}(\Sigma)}+\|*F_{A}+\mu(u)\|^{2}_{L^{2}(\Sigma)}.\]
Here \(\omega^{K}\in H^{2}_{K}(V;\mathbb{R})\) is the equivariant class represented by the equivariant \(2\)-form \(\omega-\mu\) (see [1, Proposition 3.1] and [15, Lemma 14]).
_Remark 2.1_.: An important feature of the symplectic vortex equation in the toric setting is that no bubbling happens as the space \(V\) is symplectically aspherical. In general, energy concentration could cause bubbling of holomorphic spheres as shown in [15, 16, 17].
One can introduce Hamiltonian perturbations. Given a \(1\)-form
\[\mathcal{H}\in\Omega^{1}(\Sigma,C^{\infty}(V)^{K})\]
with coefficients in the space of \(K\)-invariant smooth functions on \(V\), we can define a family of Hamiltonian vector fields
\[X_{\mathcal{H}}\in\Gamma(\Sigma\times V,\pi_{\Sigma}^{*}T^{*}\Sigma\otimes TV)\]
which is \(K\)-invariant, where \(\pi_{\Sigma}:\Sigma\times V\to\Sigma\) is the projection to the first factor. Hence for any principal \(K\)-bundle \(\pi_{P}:P\to\Sigma\), the vector field \(X_{\mathcal{H}}\) induces a section on the total space of the vector bundle \(\pi_{P(V)}:P(V)\to\Sigma\)
\[X_{\mathcal{H}}\in\Gamma(P(V),\pi_{P(V)}^{*}T^{*}\Sigma\otimes P(TV)),\]
where \(P(TV):=P\times_{K}u^{*}TV\). The perturbed symplectic vortex equation is
\[\overline{\partial}_{A,\mathcal{H}}u=0,\qquad\qquad *F_{A}+\mu(u)=0, \tag{2.3}\]
where
\[\overline{\partial}_{A,\mathcal{H}}u=(d_{A}u)^{0,1}+(X_{\mathcal{H}}(u))^{0,1}.\]
For our applications, \(\mathcal{H}\) is obtained by extending the pullback of Hamiltonian connections in \(\Omega^{1}(\Sigma,C^{\infty}(X))=\Omega^{1}(\Sigma,C^{\infty}(\mu^{-1}(0)/K))\).
#### 2.2.2. Compactness
Although in aspherical targets vortices cannot bubble off holomorphic spheres, holomorphic curves can bubble off in general. This is the case when one considers Lagrangian boundary conditions. Let \(L\subset V\) be a \(K\)-invariant Lagrangian submanifold. One can impose the Lagrangian boundary condition for gauged maps \(\mathfrak{u}=(P,A,u)\) from \(\Sigma\) to \(V\) with \(u|_{\partial\Sigma}\subset P(L)\). Given a sequence of solutions \(\mathfrak{u}_{i}\) to the vortex equation on \(\Sigma\) subject to the Lagrangian boundary condition, even if the \(u_{i}\) have uniformly bounded images and the energies \(E(\mathfrak{u}_{i})\) are uniformly bounded, the energy density can still blow up near a boundary point. The boundedness of the images of \(u_{i}\) implies that the curvatures \(F_{A_{i}}\) do not blow up. Moreover, if one rescales by the rate of energy concentration, the sequence of connections \(A_{i}\) converges subsequentially (up to gauge transformation) to a flat connection. All Hamiltonian perturbations and variations of almost complex structures are also scaled off. Hence a subsequence can bubble off a (stable) holomorphic disk in \(V\) with boundary in \(L\) with respect to a fixed almost complex structure. See details in [20].
## 3. Algebraic preliminaries
### Novikov rings
We set up the notations for our coefficient rings. In this paper, \(R\) always denotes a commutative ring with a unit, hence comes with a canonical ring map
\[\mathbb{Z}\to R.\]
Let \(\Lambda=\Lambda_{R}\) be the (upward) **Novikov ring**
\[\Lambda_{R}=\Big{\{}\sum_{i=1}^{\infty}a_{i}T^{g_{i}}\ |\ g_{i}\in\mathbb{R},\ a_{ i}\in R,\ \lim_{i\to\infty}g_{i}=+\infty\Big{\}}\]
The **valuation** \(\mathfrak{v}:\Lambda\to\mathbb{R}\cup\{+\infty\}\) is defined by
\[\mathfrak{v}\left(\sum_{i=1}^{\infty}a_{i}T^{g_{i}}\right)=\inf\big{\{}g_{i} \ |\ a_{i}\neq 0\big{\}}\quad\text{and}\quad\mathfrak{v}(0)=+\infty.\]
We will also need the following version
\[\Lambda_{0,R}=\Big{\{}\sum_{i=1}^{\infty}a_{i}T^{g_{i}}\ |\ g_{i}\in\mathbb{R}_ {\geq 0},\ a_{i}\in R,\ \lim_{i\to\infty}g_{i}=+\infty\Big{\}},\]
which also comes with a valuation by restricting the above valuation. When \(R\) is a field, \(\Lambda_{R}\) is also a field, and it is the field of fractions of \(\Lambda_{0,R}\).
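To fix ideas, the following is a minimal computational sketch, illustrative only and not used elsewhere, of finitely supported elements of \(\Lambda_{R}\) together with the valuation \(\mathfrak{v}\) and multiplication; the names `valuation` and `mul` are our own.

```python
from math import inf

def valuation(f):
    # v(sum a_i T^{g_i}) = min{ g_i : a_i != 0 }, and v(0) = +infinity
    support = [g for g, a in f.items() if a != 0]
    return min(support) if support else inf

def mul(f, g):
    # T^a * T^b = T^{a+b}; a dictionary maps exponents g_i to coefficients a_i in R
    h = {}
    for ga, a in f.items():
        for gb, b in g.items():
            h[ga + gb] = h.get(ga + gb, 0) + a * b
    return h

f = {0.0: 1, 1.5: -2}                      # represents 1 - 2 T^{3/2}
g = {0.5: 3}                               # represents 3 T^{1/2}
print(valuation(f), valuation(mul(f, g)))  # prints: 0.0 0.5
```

When \(R\) is a domain one has \(\mathfrak{v}(fg)=\mathfrak{v}(f)+\mathfrak{v}(g)\), as the last line illustrates.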
In many cases we can restrict to a Novikov ring of series \(\sum a_{i}T^{g_{i}}\) where the \(g_{i}\) are restricted to a finitely generated additive group \(\Gamma\subsetneq\mathbb{R}\). In this paper \(\Gamma\) is fixed and is in fact determined by the GIT presentation of a toric manifold. Indeed, \(\Gamma\) is generated by the image in \(\mathbb{R}\) of the effective \(1\)-cycles under pairing with the cohomology class represented by the symplectic form, see Section 2.1.1. Denote
\[\Lambda_{R}^{\Gamma}:=\Big{\{}\sum_{i=1}^{\infty}a_{i}T^{g_{i}}\in\Lambda_{R} \ |\ g_{i}\in\Gamma\Big{\}}\]
and
\[\Lambda_{0,R}^{\Gamma}:=\Lambda_{0,R}\cap\Lambda_{R}^{\Gamma}.\]
However \(\Lambda_{R}^{\Gamma}\) does not enjoy certain algebraic properties of \(\Lambda_{R}\). For example, when \(R=\mathbb{K}\) is an algebraically closed field, \(\Lambda_{\mathbb{K}}\) is algebraically closed but \(\Lambda_{\mathbb{K}}^{\Gamma}\) is not.
#### 3.1.1. Modules and algebras over Novikov rings
**Definition 3.1**.: A **non-Archimedean normed free module** over \(\Lambda_{R}\) is a pair \((C,\ell)\) where \(C\) is a free \(\Lambda_{R}\)-module endowed with a function \(\ell:C\to\mathbb{R}\cup\{-\infty\}\) satisfying
1. **(Nondegeneracy)**\(\ell(x)=-\infty\) if and only if \(x=0\).
2. **(Homogeneity)** For all \(\lambda\in\Lambda_{R}\) and \(x\in C\), \(\ell(\lambda x)=\ell(x)-\mathfrak{v}(\lambda)\).
3. **(Subadditivity)** For \(x,y\in C\), \(\ell(x+y)\leq\max\{\ell(x),\ell(y)\}\); if \(\ell(x)\neq\ell(y)\), then \(\ell(x+y)=\max\{\ell(x),\ell(y)\}\).
Now suppose \(\mathbb{K}\) is an \(R\)-module which is also a field. Then one can extend the function \(\ell\) to \(C\otimes_{\Lambda_{R}}\Lambda_{\mathbb{K}}\) via **(Homogeneity)**. The resulting pair is a non-Archimedean normed vector space in the sense of [16, Definition 2.2] (except that the coefficient field there was \(\Lambda_{\mathbb{K}}^{\Gamma}\) rather than \(\Lambda_{\mathbb{K}}\)).
We also need to consider multiplicative structures compatible with the non-Archimedean norm.
**Definition 3.2**.: A **non-Archimedean normed algebra** over \(\Lambda_{R}\) is a non-Archimedean normed free module \((C,\ell)\) together with a \(\Lambda_{R}\)-algebra structure satisfying the following condition:
* **(Triangle inequality)** For all \(x,y\in C\), \[\ell(xy)\leq\ell(x)+\ell(y).\]
#### 3.1.2. Specific coefficients and mod \(p\) reductions
In this paper we need to use certain non-traditional coefficient rings and fields. Here we briefly summarize them and set up the notation. First, let \(\overline{\mathbb{Q}}\) be the algebraic closure of \(\mathbb{Q}\), which is viewed as a subfield of \(\mathbb{C}\). Inside \(\overline{\mathbb{Q}}\) there is the subring of algebraic integers \(\overline{\mathbb{Z}}\), the set of algebraic numbers which are roots of monic polynomials with integer coefficients. Further, in characteristic \(p\) (where \(p\) is an odd prime), let \(\mathbb{F}_{p}\cong\mathbb{Z}/p\mathbb{Z}\) be the smallest field with characteristic \(p\). Let \(\overline{\mathbb{F}}_{p}\) be the algebraic closure of \(\mathbb{F}_{p}\), which is only well-defined up to isomorphism of field extensions.
Notice that the notion of non-Archimedean normed algebras can be transferred between different coefficient rings via tensor products. A crucial feature of the geometric construction of this paper is that, if one has a counting theory over \(\mathbb{Z}\) (or \(\overline{\mathbb{Z}}\)), then it automatically induces a theory over any ring \(R\) (or \(\overline{\mathbb{Z}}\)-algebra). In particular, one needs to perform the "mod \(p\) reduction," which is roughly associated to the ring map \(\mathbb{Z}\to\mathbb{F}_{p}\). In our situation, one needs the corresponding extension to the algebraic closure of \(\mathbb{F}_{p}\).
**Lemma 3.3**.: _For each prime \(p\), there exists a unital ring map_
\[\overline{\pi}_{p}:\overline{\mathbb{Z}}\to\overline{\mathbb{F}}_{p}.\]
Proof.: (Following a MathOverflow post.) Since \(1/p\) is not an algebraic integer, \(p\overline{\mathbb{Z}}\neq\overline{\mathbb{Z}}\), so by Zorn's lemma there exists a maximal ideal \(\mathfrak{m}\subset\overline{\mathbb{Z}}\) containing \(p\). The quotient \(k:=\overline{\mathbb{Z}}/\mathfrak{m}\) is a field of characteristic \(p\), and every element of \(k\) is algebraic over \(\mathbb{F}_{p}\) because every element of \(\overline{\mathbb{Z}}\) is integral over \(\mathbb{Z}\). Moreover \(k\) is algebraically closed: a monic polynomial over \(k\) lifts to a monic polynomial over \(\overline{\mathbb{Z}}\), which splits into linear factors over \(\overline{\mathbb{Z}}\) since \(\overline{\mathbb{Q}}\) is algebraically closed and \(\overline{\mathbb{Z}}\) is integrally closed in \(\overline{\mathbb{Q}}\). Hence \(k\cong\overline{\mathbb{F}}_{p}\), and the projection \(\overline{\mathbb{Z}}\to k\) is the desired unital ring map. 
### Semisimple algebras over Novikov fields
In this paper we use a more restrictive notion of semisimplicity of algebras over Novikov fields.
**Definition 3.6**.: Let \(\mathbb{F}\) be a field. A unital \(\mathbb{F}\)-algebra \((A,*)\) is called **semisimple** if it splits as a direct sum of rings
\[A=F_{1}\oplus\cdots\oplus F_{k}\]
where \(F_{i}\cong\mathbb{F}\) as a ring. Each summand \(F_{i}\) is called an **idempotent summand** of \(A\) and the splitting is called the **idempotent splitting**.3
Footnote 3: It is easy to see that if \(A\) is semisimple, then the idempotent splitting is unique up to permuting idempotent summands.
_Remark 3.7_.: In many papers such as [1, 10, 11], the meaning of semisimplicity is more general: for example, each summand \(F_{i}\) is allowed to be a finite extension of the field \(\mathbb{F}\). The number of idempotent summands also depends on the choice of the field. In our situation, one can achieve the above stronger semisimplicity of a version of quantum cohomology algebra by turning on bulk deformations and taking a sufficiently large field.
Suppose \(A\) is semisimple. Then for each idempotent summand \(F_{i}\), there is a unique generator \(e_{i}\in F_{i}\) such that \(e_{i}*e_{i}=e_{i}\). We call \(e_{i}\) the **idempotent generator**. Then \((e_{1},\ldots,e_{k})\) is a basis of \(A\). Given any element \(\alpha=\lambda_{1}e_{1}+\cdots+\lambda_{k}e_{k}\), one can see that the linear map
\[\alpha*:A\to A\]
has eigenspace decomposition \(F_{1}\oplus\cdots\oplus F_{k}\) with eigenvalues \(\lambda_{1},\ldots,\lambda_{k}\). The following statement shows that the converse also holds under additional assumptions.
**Lemma 3.8**.: _Let \(A\) be a \(k\)-dimensional commutative unital \(\mathbb{F}\)-algebra and \(\alpha\in A\). Suppose \(\alpha*:A\to A\) has \(k\) distinct nonzero eigenvalues \(\lambda_{1},\ldots,\lambda_{k}\). Then \(A\) is semisimple._
Proof.: Let \((\varepsilon_{1},\ldots,\varepsilon_{k})\) be an eigen-basis of \(\alpha*\). Write
\[\alpha=\sum_{i=1}^{k}\mu_{i}\varepsilon_{i}.\]
Then we see
\[\alpha*(\varepsilon_{i}*\varepsilon_{j})=\lambda_{i}\varepsilon_{i}*\varepsilon _{j}=\lambda_{j}\varepsilon_{i}*\varepsilon_{j}.\]
As \(\lambda_{i}\) are all distinct, one has \(\varepsilon_{i}*\varepsilon_{j}=0\) whenever \(i\neq j\). Then one obtains
\[\alpha*\varepsilon_{i}=\mu_{i}\varepsilon_{i}*\varepsilon_{i}=\lambda_{i} \varepsilon_{i}.\]
As \(\lambda_{i}\neq 0\), one can see \(\mu_{i}\neq 0\). Define \(e_{i}=\lambda_{i}^{-1}\mu_{i}\varepsilon_{i}\). Then
\[e_{i}*e_{i}=(\lambda_{i}^{-1}\mu_{i})^{2}\varepsilon_{i}*\varepsilon_{i}= \lambda_{i}^{-1}\mu_{i}\varepsilon_{i}=e_{i}.\]
Hence \(A\) is semisimple.
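The proof above is effective. As a minimal numerical sketch, illustrative only, one can carry it out for \(A=\mathbb{Q}[x]/(x^{2}-1)\) with \(\alpha=x\), whose multiplication operator has the distinct nonzero eigenvalues \(\pm 1\).

```python
import numpy as np

def mult(u, v):
    # product in A = Q[x]/(x^2 - 1); coordinates (a, b) represent a + b*x
    a, b = u
    c, d = v
    return np.array([a * c + b * d, a * d + b * c])

alpha = np.array([0.0, 1.0])                              # alpha = x
M = np.column_stack([mult(alpha, e) for e in np.eye(2)])  # matrix of alpha * (-)
lams, vecs = np.linalg.eig(M)                             # eigenvalues +1 and -1

idempotents = []
for eps in vecs.T:                         # rows of vecs.T are eigenvectors
    i = int(np.argmax(np.abs(eps)))
    c = mult(eps, eps)[i] / eps[i]         # eps * eps = c * eps, as in the proof
    idempotents.append(eps / c)            # rescale to an idempotent

print(idempotents)  # [0.5, 0.5] and [0.5, -0.5], i.e. (1 + x)/2 and (1 - x)/2
```

The output is the expected idempotent splitting \(A\cong\mathbb{Q}\oplus\mathbb{Q}\) with \(e_{\pm}=(1\pm x)/2\).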
#### 3.2.1. Semi-simplicity and different characteristics
Here we prove a useful algebraic fact which allows us to derive semi-simplicity in finite characteristics from semi-simplicity in characteristic zero. We set up the problem as follows. Let \((A,\ell)\) be a non-Archimedean normed (free) algebra over the Novikov ring \(\Lambda_{\overline{\mathbb{Z}}}\). Denote
\[A_{(0)}:=A\otimes_{\Lambda_{\overline{\mathbb{Z}}}}\Lambda_{\overline{\mathbb{ Q}}}\]
and for each prime \(p\)
\[A_{(p)}:=A\otimes_{\Lambda_{\overline{\mathbb{Z}}}}\Lambda_{\overline{\mathbb{ F}}_{p}}.\]
Denote the induced valuations by
\[\ell_{0}:A_{(0)}\to\mathbb{R}\cup\{-\infty\}, \ell_{p}:A_{(p)}\to\mathbb{R}\cup\{-\infty\}.\]
Moreover, let \(\mathcal{U}\in A\) be a distinguished nonzero element (which will be the first Chern class in quantum homology in our later discussions), and let \(\mathcal{U}_{(0)}\in A_{(0)}\), \(\mathcal{U}_{(p)}\in A_{(p)}\) be the corresponding induced element. They induce linear operators
\[E_{(m)}:A_{(m)}\to A_{(m)},\ x\mapsto\mathcal{U}_{(m)}*x,\ m=0,p.\]
**Theorem 3.9**.: _Suppose \(A_{(0)}\) is semisimple over \(\Lambda_{\overline{\mathbb{Q}}}\) and all eigenvalues of \(E_{(0)}\) are nonzero and distinct. Then there exist \(p_{0}>0\) and \(C>0\) such that for all primes \(p\geq p_{0}\), the following conditions hold._
1. \(A_{(p)}\) _is semisimple over_ \(\Lambda_{\overline{\mathbb{F}}_{p}}\)_._
2. _If_ \(e_{1,(p)},\ldots,e_{m,(p)}\) _are the idempotent generators of_ \(A_{(p)}\)_, then_ \[\ell_{p}(e_{l,(p)})\leq C.\]
Proof of Theorem 3.9 (1).: Consider the operator \(E:A\to A\), \(x\mapsto\mathcal{U}*x\), and its characteristic polynomial \(f_{E}\), which has coefficients in \(\Lambda_{\overline{\mathbb{Z}}}\) and agrees with the characteristic polynomial of \(E_{(0)}\) over \(\Lambda_{\overline{\mathbb{Q}}}\). Since the eigenvalues of \(E_{(0)}\) are distinct by assumption, \(f_{E}\) has \(m\) distinct roots in \(\Lambda_{\overline{\mathbb{Q}}}\) and so the discriminant of \(f_{E}\), denoted by \(D(f_{E})\in\Lambda_{\overline{\mathbb{Z}}}\), is nonzero. Hence for sufficiently large primes \(p\), the discriminant of \(f_{E_{(p)}}\), which is the mod \(p\) reduction of \(D(f_{E})\), is nonzero. It follows that \(E_{(p)}\) also has \(m\) distinct eigenvalues. Moreover, as all eigenvalues of \(E_{(0)}\) are nonzero, \(f_{E}(0)\neq 0\). Hence \(f_{E_{(p)}}(0)\neq 0\) when \(p\) is sufficiently large. Hence \(E_{(p)}\) is invertible and has no zero eigenvalue. By Lemma 3.8, \(A_{(p)}\) is semisimple for sufficiently large \(p\).
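As a toy illustration of the argument, take \(A=\Lambda_{\overline{\mathbb{Z}}}[x]/(x^{2}-2)\) with \(\mathcal{U}=x\). Then \(f_{E}=t^{2}-2\) has the distinct nonzero roots \(\pm\sqrt{2}\) over \(\Lambda_{\overline{\mathbb{Q}}}\) and discriminant \(8\); for every prime \(p>2\) the reduction \(t^{2}-2\) remains separable with nonzero roots in \(\overline{\mathbb{F}}_{p}\), so \(A_{(p)}\) is semisimple for all \(p>2\).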
#### 3.2.2. Proof of Theorem 3.9 (2)
To prove the quantitative statement of Theorem 3.9, we introduce the notion of truncation. First, given an element
\[\lambda=\sum_{i=1}^{\infty}a_{i}T^{g_{i}}\in\Lambda_{\overline{\mathbb{C}}},\]
and \(Z\in\mathbb{R}\), define the \(Z\)-truncation of \(\lambda\) to be the element
\[\lambda^{Z}:=\sum_{g_{i}\leq Z}a_{i}T^{g_{i}},\]
which has only finitely many terms. Then it follows easily
\[\mathfrak{v}(\lambda-\lambda^{Z})\geq Z. \tag{3.1}\]
For an element in a module over \(\Lambda_{\overline{\mathbb{Q}}}\), its truncations are not canonically defined. We fix, throughout the proof, a basis \(\mathfrak{x}_{1},\ldots,\mathfrak{x}_{m}\) of the \(\Lambda_{\overline{\mathbb{Z}}}\)-module \(A\). Without loss of generality, we can choose the basis such that
\[\ell(\mathfrak{x}_{1})=\cdots=\ell(\mathfrak{x}_{m})=0.\]
By abuse of notation, denote the induced bases of \(A_{(0)}\) and \(A_{(p)}\) still by \(\mathfrak{x}_{1},\ldots,\mathfrak{x}_{m}\). Then for each \(\alpha\in A_{(0)}\), we can write
\[\alpha=\sum_{j=1}^{m}\alpha_{j}\mathfrak{x}_{j}\]
where \(\alpha_{j}\in\Lambda_{\overline{\mathbb{Q}}}\). Then define the \(Z\)-truncation
\[\alpha^{Z}=\sum_{j=1}^{m}\alpha_{j}^{Z}\mathfrak{x}_{j}.\]
Then by (3.1) we have the estimate
\[\ell_{0}(\alpha-\alpha^{Z})=\ell_{0}\Big(\sum_{l=1}^{m}(\alpha_{l}-\alpha_{l}^{Z})\mathfrak{x}_{l}\Big)\leq\max_{1\leq l\leq m}\ell_{0}\big((\alpha_{l}-\alpha_{l}^{Z})\mathfrak{x}_{l}\big)\\ =\max_{1\leq l\leq m}\big(\ell_{0}(\mathfrak{x}_{l})-\mathfrak{v}(\alpha_{l}-\alpha_{l}^{Z})\big)\leq\max_{1\leq j\leq m}\ell(\mathfrak{x}_{j})-Z=-Z. \tag{3.2}\]
**Running convention.** Within this proof, \(Z\) is a large real number which can be fixed from the beginning. The lower bound of \(p_{0}\) which is valid for the statement of Theorem 3.9 depends on the choice of \(Z\). The letter \(C>0\) denotes a real number which is independent of \(Z\) and \(p\geq p_{0}\) but whose value is allowed to change from line to line.
**Lemma 3.10**.: _Suppose \(e_{1,(0)},\ldots,e_{m,(0)}\) constitute the idempotent generators of \(A_{(0)}\), with corresponding eigenvalues \(\lambda_{1,(0)},\ldots,\lambda_{m,(0)}\) of \(E_{(0)}\). Then for \(Z\) sufficiently large, \(e_{1,(0)}^{Z},\ldots,e_{m,(0)}^{Z}\) form a basis of \(A_{(0)}\), and \(\lambda_{1,(0)}^{Z},\ldots,\lambda_{m,(0)}^{Z}\) are all nonzero and distinct._
Proof.: With respect to the basis \((\mathfrak{x}_{1},\ldots,\mathfrak{x}_{m})\) of \(A_{(0)}\), we identify \(e_{l,(0)}\) with its coordinate vector in \((\Lambda_{\overline{\mathbb{Q}}})^{m}\). Then the \(m\times m\) matrix with columns \(e_{l,(0)}\) is invertible, i.e., has nonzero determinant. Hence, when \(Z\) is sufficiently large, the corresponding determinant with \(e_{l,(0)}\) replaced by \(e_{l,(0)}^{Z}\) is also nonzero. On the other hand, as all the \(\lambda_{l,(0)}\) are nonzero and distinct, the \(\lambda_{l,(0)}^{Z}\) are nonzero and distinct when \(Z\) is large.
We would like to construct, for large primes \(p\), eigenvectors and eigenvalues of \(E_{(p)}\) over the field \(\Lambda_{\overline{\mathbb{F}}_{p}}\). The basic idea is to take the truncations \(e_{l,(0)}^{Z}\) of the idempotent generators, together with their mod \(p\) reductions, as approximate eigenvectors, and then to apply suitable corrections.
By Lemma 3.5, for each \(Z\in\mathbb{R}\), there exists \(m^{Z}\in\mathbb{Z}\) such that
\[m^{Z}\lambda_{l,(0)}^{Z}\in\Lambda_{\overline{\mathbb{Z}}},\qquad\qquad\qquad\qquad\qquad m^{Z}e_{l,(0)}^{Z}\in A.\]
This allows us to define the "mod \(p\) reduction" of \(\lambda_{l,(0)}^{Z}\) and \(e_{l,(0)}^{Z}\) as follows. Fixing \(m^{Z}\), by choosing a sufficiently large \(p\) so that it cannot divide \(m^{Z}\), the quantity \(m^{Z}\) has a nonzero reduction \([m^{Z}]_{p}\in\mathbb{F}_{p}\). Moreover, \(m^{Z}\lambda_{l,(0)}^{Z}\) has a mod \(p\) reduction \([m^{Z}\lambda_{l,(0)}^{Z}]_{p}\in\Lambda_{\overline{\mathbb{F}}_{p}}\) and \(m^{Z}e_{l,(0)}^{Z}\) has a mod \(p\) reduction \([m^{Z}e_{l,(0)}^{Z}]_{p}\in A_{(p)}\) (defined via the integral basis \(\mathfrak{r}_{1},\ldots,\mathfrak{r}_{m}\)). Then define
\[\lambda_{l,(p)}^{Z}:=[m^{Z}]_{p}^{-1}[m^{Z}\lambda_{l,(0)}^{Z}]_{p},\qquad \qquad\qquad e_{l,(p)}^{Z}:=[m^{Z}]_{p}^{-1}[m^{Z}e_{l,(0)}^{Z}]_{p}.\]
**Lemma 3.11**.: _There exists \(C>0\) such that for any sufficiently large \(Z\), upon choosing \(m^{Z}\) as above, there exists \(p^{Z}>0\) such that whenever \(p\geq p^{Z}\), the \(e_{l,(p)}^{Z}\) form a basis of \(A_{(p)}\) and the \(\lambda_{l,(p)}^{Z}\) are all nonzero and distinct. Moreover, one has_
\[\ell_{p}(e_{l,(p)}^{Z})\geq-C,\qquad\qquad\qquad\qquad\qquad\mathfrak{v}( \lambda_{l,(p)}^{Z})\leq C.\]
_Moreover, for all \(k\neq l\)_
\[\mathfrak{v}(\lambda_{l,(p)}^{Z}-\lambda_{k,(p)}^{Z})\leq C.\]
Proof.: Straightforward.
**Proposition 3.12**.: _There exists \(C>0\) such that given any sufficiently large \(Z\), for all sufficiently large prime \(p\), there exist eigenvectors \(\varepsilon_{l,(p)}\) of \(E_{(p)}\) with corresponding distinct eigenvalues \(\lambda_{l,(p)}\in\Lambda_{\overline{\mathbb{F}}_{p}}\) such that_
\[\ell_{p}(e_{l,(p)}^{Z}-\varepsilon_{l,(p)})\leq-Z+C\]
_and_
\[\mathfrak{v}(\lambda_{l,(p)}^{Z}-\lambda_{l,(p)})\geq Z-C.\]
Proof.: In \(A_{(0)}\), one has
\[(m^{Z})^{-1}E_{(0)}(m^{Z}e_{l,(0)})=((m^{Z})^{-1}\lambda_{l,(0)})(m^{Z}e_{l,(0)}).\]
Using (3.2), it follows that
\[\begin{split}&\
Here \(\rho_{l}\) denotes the inhomogeneous error term, which satisfies \(\ell_{p}(\rho_{l})\leq-Z+C\) by (3.4). To simplify notation, assume \(l=1\). Then, using the basis \(e^{Z}_{1,(p)},\dots,e^{Z}_{m,(p)}\), this equation is equivalent to the linear system
\[\left(\left[\begin{array}{cccc}1&0&\cdots&0\\ &&T^{\prime}_{(p)}\end{array}\right]-\left[\begin{array}{cccc}0&0&\cdots&0 \\ 0&\lambda^{Z}_{1,(p)}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&\lambda^{Z}_{1,(p)}\end{array}\right]\right)\left[\begin{array}{c} \delta\\ x_{2}\\ \vdots\\ x_{m}\end{array}\right]=Q(\delta,x_{2},\dots,x_{m})+\rho_{1}.\]
Here the left hand side is linear and \(Q\) is quadratic. Let the matrix on the left hand side be \(F_{1}\). Lemma 3.13 implies that \(F_{1}\) is invertible with
\[\mathfrak{v}(F_{1})\leq C\]
where \(C\) is independent of \(Z\) and \(p\). Then one can use an iteration argument to solve the equation term by term. The correction terms have valuation at least \(Z-C\) for some constant \(C\). This proves the proposition.
We continue the proof of (2) of Theorem 3.9. By the proof of Lemma 3.8, each idempotent generator of \(A_{(p)}\) is a multiple of \(\varepsilon_{l,(p)}\). Indeed, if
\[\varepsilon_{l,(p)}*\varepsilon_{l,(p)}=\mu_{l}\varepsilon_{l,(p)}\]
then the corresponding idempotent generator is
\[e_{l,(p)}=\mu_{l}^{-1}\varepsilon_{l,(p)}.\]
So we need to estimate the valuation of \(\mu_{l}\). In characteristic zero one has
\[e_{l,(0)}*e_{l,(0)}=e_{l,(0)}.\]
Taking truncation at \(Z\) one has
\[\ell_{0}\big{(}e^{Z}_{l,(0)}*e^{Z}_{l,(0)}-e^{Z}_{l,(0)}\big{)}\leq C-Z.\]
Taking mod \(p\) reduction, one obtains
\[\ell_{p}\big{(}e^{Z}_{l,(p)}*e^{Z}_{l,(p)}-e^{Z}_{l,(p)}\big{)}\leq C-Z.\]
Then
\[\ell_{p}\big{(}\mu_{l}\varepsilon_{l,(p)}-e^{Z}_{l,(p)}\big{)}= \ell_{p}\big{(}\varepsilon_{l,(p)}*\varepsilon_{l,(p)}-e^{Z}_{l,(p)}*e^{Z}_{l, (p)}+e^{Z}_{l,(p)}*e^{Z}_{l,(p)}-e^{Z}_{l,(p)}\big{)}\\ \leq\max\left\{\ell_{p}(\varepsilon_{l,(p)}-e^{Z}_{l,(p)})+\ell_ {p}(\varepsilon_{l,(p)}+e^{Z}_{l,(p)}),C-Z\right\}\leq C-Z.\]
As we have \(\ell_{p}(e^{Z}_{l,(p)})\geq-C\), it follows that \(\ell_{p}(\mu_{l}\varepsilon_{l,(p)})=\ell_{p}(e^{Z}_{l,(p)})=\ell_{p}( \varepsilon_{l,(p)})\). Hence \(\mathfrak{v}(\mu_{l})=0\) and hence
\[\ell_{p}(e_{l,(p)})=\ell_{p}(\mu_{l}^{-1}\varepsilon_{l,(p)})=\ell_{p}( \varepsilon_{l,(p)})+\mathfrak{v}(\mu_{l})=\ell_{p}(e^{Z}_{l,(p)})=\ell_{0}(e _{l,(0)})\]
which is independent of \(p\). This finishes the proof of Theorem 3.9.
### Floer-Novikov complexes
Let \(\Gamma\subsetneq\mathbb{R}\) be a proper additive subgroup.
**Definition 3.14** (Floer-Novikov complex).: (cf. [15, Definition 1.1]) A \(\mathbb{Z}_{2}\)**-graded Floer-Novikov package** over a commutative unital ring \(R\) consists of data
\[\mathfrak{c}=\Big{(}P,\mathcal{A},gr,n_{R}\Big{)}\]
where
1. \(P\) is a \(\Gamma\)-torsor such that the quotient \(\overline{P}:=P/\Gamma\) is finite.
2. \(\mathcal{A}:P\to\mathbb{R}\) is the "action functional" and \(gr:P\to\mathbb{Z}_{2}\) is the "grading."
3. For \(p\in P\) and \(g\in\Gamma\), one has \[\mathcal{A}(gp) =\mathcal{A}(p)-g, gr(gp) =gr(p)\]
4. \(n_{R}:P\times P\to R\) is a function such that * \(n_{R}(p,q)\neq 0\Longrightarrow gr(p)=gr(q)+1,\ \mathcal{A}(p)>\mathcal{A}(q)\); * for all \(p\in P\) and \(C\in\mathbb{R}\), the set \[\{q\in P\ |\ n_{R}(p,q)\neq 0,\ \mathcal{A}(q)\geq C\}\] is finite; * for any \(g\in\Gamma\), we have \(n_{R}(gp,gq)=n_{R}(p,q)\); * the \(\Lambda_{R}\)-linear map \(\partial\) defined in (3.5) satisfies \(\partial^{2}=0\).
Given a Floer-Novikov package one can construct the associated Floer chain complex. First, define
\[CF_{\bullet}(\mathfrak{c}) =\Big{\{}\sum_{p\in P}a_{p}p\ |\ a_{p}\in R,\ \forall C\in\mathbb{R},\#\{p\in P\ |\ a_{p}\neq 0,\ \mathcal{A}(p)\geq C\}<\infty\Big{\}}\]
which is \(\mathbb{Z}_{2}\)-graded. The \(\Lambda_{R}^{\Gamma}\)-module structure is induced from the \(\Gamma\)-action on \(P\). Define the differential
\[\partial:CF_{\bullet}(\mathfrak{c}) \to CF_{\bullet-1}(\mathfrak{c})\] by \[\partial\left(\sum_{p\in P}a_{p}p\right) =\sum_{q\in P}\left(\sum_{p\in P}a_{p}n_{R}(p,q)\right)q. \tag{3.5}\]
We also define the function
\[\ell:CF_{\bullet}(\mathfrak{c})\to\mathbb{R}\cup\{-\infty\},\qquad\ell\left(\sum_{p\in P}a_{p}p\right)=\sup\big\{\mathcal{A}(p)\ |\ a_{p}\neq 0\big\}. \tag{3.6}\]
Given a Floer-Novikov package \(\mathfrak{c}\) over \(R\), if \(\iota:R\to\widetilde{R}\) is a ring map, then one can extend \(\mathfrak{c}\) to a Floer-Novikov package \(\mathfrak{c}\otimes_{R}\widetilde{R}\) by simply defining \(n_{\widetilde{R}}:=\iota\circ n_{R}:P\times P\to\widetilde{R}\).
**Proposition 3.15**.: _If \(R=\mathbb{K}\) is a field, the triple \((CF_{\bullet}(\mathfrak{c}),\partial,\ell)\) is a Floer-type complex over \(\Lambda_{\mathbb{K}}^{\Gamma}\) in the sense of [16, Definition 4.1]._
Proof.: It follows directly from the definitions of Floer-type complexes. The proof serves rather as a brief clarification of this concept. First, for each \(k\in\mathbb{Z}_{2}\), the pair \((CF_{k}(\mathfrak{c}),\ell|_{CF_{k}(\mathfrak{c})})\) is a non-Archimedean normed vector space over \(\Lambda_{\mathbb{K}}^{\Gamma}\) (see [16, Definition 2.2]). In addition it is an orthogonalizable \(\Lambda_{\mathbb{K}}^{\Gamma}\)-space (see [16, Definition 2.7]). The last requirement for being a Floer-type complex is the inequality
\[\ell(\partial(x))\leq\ell(x)\ \forall x\in CF_{\bullet}(\mathfrak{c}),\]
which is a consequence of the properties of the function \(n_{\mathbb{K}}\) in the data \(\mathfrak{c}\).
#### 3.3.1. Spectral invariants
Following Usher [10], one can also define spectral invariants in an abstract way. First, define the "energy filtration" on the complex \(CF_{\bullet}(\mathfrak{c})\): for each \(\tau\in\mathbb{R}\), define
\[CF_{\bullet}^{\leq\tau}(\mathfrak{c}):=\left\{\sum_{p\in P}a_{p}p\in CF_{ \bullet}(\mathfrak{c})\ |\ a_{p}\neq 0\Longrightarrow\mathcal{A}(p)\leq\tau\right\}.\]
Then since the differential decreases the action, it is a subcomplex with homology
\[HF_{\bullet}^{\leq\tau}(\mathfrak{c})\]
and natural maps when \(\tau\leq\kappa\)
\[\iota^{\tau,\kappa}:HF_{\bullet}^{\leq\tau}(\mathfrak{c})\to HF_{ \bullet}^{\leq\kappa}(\mathfrak{c}). \tag{3.7}\]
For \(\alpha\in HF_{\bullet}(\mathfrak{c})\), define
\[\rho(\alpha):=\inf\left\{\tau\in\mathbb{R}\ |\ \alpha\in\operatorname{Im} \left(\iota^{\tau}:HF_{\bullet}^{\leq\tau}(\mathfrak{c})\to HF_{\bullet}( \mathfrak{c})\right)\right\}\in\mathbb{R}\cup\left\{-\infty\right\}\]
**Theorem 3.16**.: _[_10_, Theorem 1.3, 1.4]_ _Given a Floer-Novikov package \(\mathfrak{c}\) (over a Noetherian ring \(R\)) and \(\alpha\in HF(\mathfrak{c})\setminus\{0\}\), one has \(\rho(\alpha)>-\infty\) and \(\alpha\in\operatorname{Im}(\iota^{\rho(\alpha)})\)._
#### 3.3.2. Boundary depth
**Definition 3.17**.: [10] Let \(\mathfrak{c}\) be a Floer-Novikov package and let \(CF^{\leq\lambda}(\mathfrak{c})\) be the associated filtered Floer-Novikov complex over \(\Lambda^{\Gamma}_{\mathbb{K}}\). Then the **boundary depth** of the filtered complex is the infimum of \(\beta>0\) such that for all \(\lambda\in\mathbb{R}\)
\[CF^{\leq\lambda}(\mathfrak{c})\cap\operatorname{Im}\partial\subset\partial( CF^{\leq\lambda+\beta}(\mathfrak{c})).\]
**Theorem 3.18**.: _[_10_, Theorem 1.3]_ _Given a Floer-Novikov package \(\mathfrak{c}\), the boundary depth of the associated Floer-Novikov complex is finite._
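For instance, if \(\overline{P}\) has two elements, represented by \(p,q\in P\) with \(gr(p)=gr(q)+1\), \(\mathcal{A}(p)>\mathcal{A}(q)\), and \(n_{R}(p,q)=1\) (extended \(\Gamma\)-equivariantly), then for each \(g\in\Gamma\) the element \(gq\) lies in \(CF^{\leq\mathcal{A}(q)-g}(\mathfrak{c})\cap\operatorname{Im}\partial\), while its only \(\partial\)-preimages have level \(\mathcal{A}(p)-g\); hence the boundary depth equals \(\mathcal{A}(p)-\mathcal{A}(q)\).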
#### 3.3.3. Quasiequivalence distance
We rephrase the notion of quasiequivalences between Floer-Novikov complexes, which was originally introduced in [11] for the more general situation of Floer-type complexes.
**Definition 3.19**.: (cf. [11, Definition 1.3]) Let \((CF_{\bullet}(\mathfrak{c}_{i}),\partial_{i})\), \(i=1,2\), be two Floer-Novikov complexes associated to Floer-Novikov data \(\mathfrak{c}_{i}\) over a field \(\mathbb{K}\). Let \(\ell_{i}\) be the valuation function on the two complexes defined by (3.6). Let \(\delta\geq 0\). A \(\delta\)**-quasiequivalence** between \(CF_{\bullet}(\mathfrak{c}_{1})\) and \(CF_{\bullet}(\mathfrak{c}_{2})\) is a quadruple \((\Phi,\Psi,K_{C},K_{D})\) where
1. \(\Phi:CF_{\bullet}(\mathfrak{c}_{1})\to CF_{\bullet}(\mathfrak{c}_{2})\) and \(\Psi:CF_{\bullet}(\mathfrak{c}_{2})\to CF_{\bullet}(\mathfrak{c}_{1})\) are chain maps with \[\ell_{2}(\Phi(x_{1}))\leq\ell_{1}(x_{1})+\delta, \ell_{1}(\Psi(x_{2}))\leq\ell_{2}(x_{2})+\delta\] for all \(x_{1}\in CF_{\bullet}(\mathfrak{c}_{1})\) and \(x_{2}\in CF_{\bullet}(\mathfrak{c}_{2})\).
2. \(K_{i}:CF_{\bullet}(\mathfrak{c}_{i})\to CF_{\bullet+1}(\mathfrak{c}_{i})\), \(i=1,2\), obey the homotopy equations \[\Psi\circ\Phi-\operatorname{Id}_{CF_{\bullet}(\mathfrak{c}_{1})}=\partial_{1} K_{1}+K_{1}\partial_{1}, \Phi\circ\Psi-\operatorname{Id}_{CF_{\bullet}(\mathfrak{c}_{2})}=\partial_{2} K_{2}+K_{2}\partial_{2}\] and for all \(x_{i}\in CF_{\bullet}(\mathfrak{c}_{i})\), \(i=1,2\), one has \[\ell_{i}(K_{i}(x_{i}))\leq\ell_{i}(x_{i})+2\delta.\]
The **quasiequivalence distance** between \(CF_{\bullet}(\mathfrak{c}_{1})\) and \(CF_{\bullet}(\mathfrak{c}_{2})\), denoted by \(d_{Q}(CF_{\bullet}(\mathfrak{c}_{1}),CF_{\bullet}(\mathfrak{c}_{2}))\), is the infimum of \(\delta\) such that there exists a \(\delta\)-quasiequivalence between them.
### Persistence modules and stability of boundary depth
**Definition 3.20**.: Let \(\mathbb{K}\) be a field.
1. A **persistence module**\(\mathbf{V}\) is a family of \(\mathbb{K}\)-vector spaces \[\mathbf{V}=(V^{s})_{s\in\mathbb{R}}\] together with linear maps (called the **structural maps** of \(\mathbf{V}\)) \[\iota^{s,t}:=\iota^{s,t}_{\mathbf{V}}:V^{s}\to V^{t}\ \forall s\leq t\] such that \(\iota^{s,s}=\operatorname{Id}_{V^{s}}\) for all \(s\) and \(\iota^{t,r}\circ\iota^{s,t}=\iota^{s,r}\) for all \(s\leq t\leq r\).
2. Let \(\mathbf{V}\) be a persistence module and \(\delta\in\mathbb{R}\). The \(\delta\)-shift of \(\mathbf{V}\) is the persistence module \(\mathbf{V}[\delta]\) with \(V[\delta]^{s}=V^{s+\delta}\) and \(\iota[\delta]^{s,t}=\iota^{s+\delta,t+\delta}\).
3. Let \(\mathbf{V}\) and \(\mathbf{W}\) be two persistence modules. A morphism from \(\mathbf{V}\) to \(\mathbf{W}\) is a collection of linear maps \(\mathbf{f}=(f^{s}:V^{s}\to W^{s})_{s\in\mathbb{R}}\) such that for all \(s\leq t\) one has \(f^{t}\circ\iota^{s,t}_{\mathbf{V}}=\iota^{s,t}_{\mathbf{W}}\circ f^{s}\).
4. Let \(\delta\geq 0\). Two persistence modules \(\mathbf{V}\) and \(\mathbf{W}\) are **\(\delta\)-interleaved** if there exist morphisms \(\mathbf{f}:\mathbf{V}\to\mathbf{W}[\delta]\) and \(\mathbf{g}:\mathbf{W}\to\mathbf{V}[\delta]\) such that \(g^{s+\delta}\circ f^{s}=\iota^{s,s+2\delta}_{\mathbf{V}}\) and \(f^{s+\delta}\circ g^{s}=\iota^{s,s+2\delta}_{\mathbf{W}}\) for all \(s\in\mathbb{R}\).

Define the **interleaving distance** between \(\mathbf{V}\) and \(\mathbf{W}\) to be the infimum of all \(\delta\geq 0\) such that \(\mathbf{V}\) and \(\mathbf{W}\) are \(\delta\)-interleaved; if no such \(\delta\) exists, define the interleaving distance to be \(+\infty\).
#### 3.4.2. Boundary depth of persistence modules and stability
We observe that one can generalize the notion of boundary depth to persistence modules.
**Definition 3.23**.: Let \(V\) be a persistence module over \(\mathbb{K}\). The **boundary depth** of \(V\), denoted by \(\beta(V)\), is the infimum of \(\beta>0\) such that for all \(s\in\mathbb{R}\), \(x\in V^{s}\), if \(\iota^{s,t}(x)=0\) for some \(t>s\), then \(\iota^{s,s+\beta}(x)=0\).
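For example, an interval module \(\mathbf{V}\) with \(V^{s}=\mathbb{K}\) for \(s\in[a,b)\), \(V^{s}=0\) otherwise, and identity structural maps within \([a,b)\), has \(\beta(\mathbf{V})=b-a\): a nonzero \(x\in V^{a}\) dies exactly at time \(b\) and not before.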
As we allow persistence modules to be infinite-dimensional, we reprove the stability result for boundary depth.
**Proposition 3.24**.: _Suppose \(V\), \(W\) are \(\delta\)-interleaved persistence modules. Suppose \(V\) has finite boundary depth. Then \(W\) has finite boundary depth and_
\[\beta(W)\leq\beta(V)+2\delta.\]
Proof.: Suppose on the contrary that \(\beta(W)\geq\beta(V)+2\delta+2\epsilon\) for some \(\epsilon>0\). Then there exist \(s\in\mathbb{R}\) and \(x\in W^{s}\) such that \(\iota_{\mathbf{W}}^{s,t}(x)=0\) for some \(t>s\), while
\[\iota_{\mathbf{W}}^{s,s+\beta(V)+2\delta+\epsilon}(x)\neq 0.\]
Let \(\mathbf{f}:\mathbf{V}\to\mathbf{W}[\delta]\) and \(\mathbf{g}:\mathbf{W}\to\mathbf{V}[\delta]\) be a \(\delta\)-interleaving and set \(y:=g^{s}(x)\in V^{s+\delta}\). Since \(f^{s+\delta+\beta(V)+\epsilon}\circ\iota_{\mathbf{V}}^{s+\delta,s+\delta+\beta(V)+\epsilon}\circ g^{s}=\iota_{\mathbf{W}}^{s,s+\beta(V)+2\delta+\epsilon}\), one has
\[\iota_{\mathbf{V}}^{s+\delta,s+\delta+\beta(V)+\epsilon}(y)\neq 0.\]
On the other hand, \(\iota_{\mathbf{V}}^{s+\delta,t+\delta}(y)=g^{t}(\iota_{\mathbf{W}}^{s,t}(x))=0\), so \(y\) dies eventually. This contradicts the definition of \(\beta(V)\).
#### 3.4.3. Persistence modules associated to filtered Floer-Novikov complexes
Fix a field \(\mathbb{K}\). Let \(\mathfrak{c}\) be a Floer-Novikov package (see Definition 3.14) and \(CF_{\bullet}(\mathfrak{c})\) be the associated filtered Floer-Novikov complex. Then the collection of homology groups
\[V^{s}(\mathfrak{c}):=HF_{\bullet}^{\leq s}(\mathfrak{c};\Lambda_{\mathbb{K}}^ {\Gamma})\]
together with the natural maps \(\iota^{s,t}\) (cf. Equation (3.7)) is a persistence module over \(\mathbb{K}\), denoted by \(V(\mathfrak{c})\).
It is easy to derive from definitions the following stability results of persistence modules coming from Floer-Novikov complexes.
**Proposition 3.25**.: _Let \(CF_{\bullet}(\mathfrak{c}_{i})\), \(i=1,2\) be two Floer-Novikov complexes over a field \(\mathbb{K}\) and \(V(\mathfrak{c}_{i})\) be the associated persistence module. Then the interleaving distance between \(V(\mathfrak{c}_{1})\) and \(V(\mathfrak{c}_{2})\) is no greater than the quasiequivalence distance between \(CF_{\bullet}(\mathfrak{c}_{1})\) and \(CF_{\bullet}(\mathfrak{c}_{2})\)._
Moreover, the two notions of boundary depths (Definition 3.17 and Definition 3.23) agree.
**Proposition 3.26**.: _Let \(\mathfrak{c}\) be a Floer-Novikov package over \(\mathbb{K}\). Then the boundary depth of the filtered Floer-Novikov complex \(CF_{\bullet}(\mathfrak{c})\) and the boundary depth of the persistence module \(V(\mathfrak{c})\) coincide._
Proof.: Let \(\beta_{1}\) be the boundary depth of \(CF_{\bullet}(\mathfrak{c})\) and \(\beta_{2}\) be the boundary depth of \(V(\mathfrak{c})\). Suppose \([x]\in HF_{\bullet}^{\leq s}(\mathfrak{c})\) which does not survive eventually. Let \(x\in CF_{\bullet}^{\leq s}(\mathfrak{c})\) be a representative. Then \(x\) is exact. Then by Definition 3.17, for all \(\epsilon>0\), one has
\[x\in\partial(CF_{\bullet}^{\leq s+\beta_{1}+\epsilon}(\mathfrak{c})).\]
Hence \(\iota^{s,s+\beta_{1}+\epsilon}([x])=0\). As \(\epsilon\) is arbitrary, this implies that \(\beta_{2}\leq\beta_{1}\).
On the other hand, for all \(s\in\mathbb{R}\) and all exact \(x\in CF_{\bullet}^{\leq s}(\mathfrak{c})\), the class \([x]\in HF_{\bullet}^{\leq s}(\mathfrak{c})\) does not survive eventually. Then by Definition 3.23, for any \(\epsilon>0\), one has \(\iota^{s,s+\beta_{2}+\epsilon}([x])=0\). This implies that
\[x\in\partial(CF_{\bullet}^{\leq s+\beta_{2}+\epsilon}(\mathfrak{c})).\]
It follows that \(\beta_{1}\leq\beta_{2}\). Hence \(\beta_{1}=\beta_{2}\).
### Barcodes and reduced barcodes
In the symplectically aspherical or monotone case, the notion of barcodes is the same as the one used in topological data analysis. In more general situations, Usher-Zhang [20] gave a modification for arbitrary Floer-type complexes (in particular Floer-Novikov complexes) over any Novikov field \(\Lambda_{\mathbb{K}}^{\Gamma}\).
**Definition 3.27**.: (cf. [20, Definition 8.13, 8.14]) Fix a finitely generated subgroup \(\Gamma\subsetneq\mathbb{R}\).
1. A **barcode** is a finite multiset \(\tilde{\mathcal{B}}\) of elements of \((\mathbb{R}/\Gamma)\times(0,+\infty]\). A member of \(\tilde{\mathcal{B}}\), which is usually called a **bar**, is denoted by \(([a],L)\) where \([a]\in\mathbb{R}/\Gamma\) and \(L\in(0,+\infty]\).
2. A **reduced barcode** is a finite multiset \(\mathcal{B}\) of elements of \((0,+\infty]\). Although \(\mathcal{B}\) is not a set in general and a member \(L\in\mathcal{B}\) may appear multiple times, we still use the same notations as if \(L\) were an element of a set \(\mathcal{B}\), such as \(L\in\mathcal{B}\), without confusion. Let \(\mathcal{B}_{\mathrm{finite}}\subset\mathcal{B}\) denote the submultiset of finite bars, i.e., those with \(L<+\infty\). Notice that a barcode \(\tilde{\mathcal{B}}\) induces a reduced barcode \(\mathcal{B}\) by forgetting the first coordinates.
3. The **total bar length** of a reduced barcode \(\mathcal{B}\) is \[\tau(\mathcal{B}):=\sum_{L_{i}\in\mathcal{B}_{\mathrm{finite}}}L_{i}.\]
4. The **reduced bottleneck distance** between two reduced barcodes \(\mathcal{B}\) and \(\mathcal{B}^{\prime}\), denoted by \(d_{B}(\mathcal{B},\mathcal{B}^{\prime})\), is the infimum of \(\delta>0\) such that, after removing certain submultisets \(\mathcal{B}_{\mathrm{short}}\subset\mathcal{B}\) and \(\mathcal{B}^{\prime}_{\mathrm{short}}\subset\mathcal{B}^{\prime}\) whose members all have length at most \(2\delta\), there is a bijection between \(\mathcal{B}\setminus\mathcal{B}_{\mathrm{short}}\) and \(\mathcal{B}^{\prime}\setminus\mathcal{B}^{\prime}_{\mathrm{short}}\) such that the differences of the corresponding bar lengths are all bounded by \(\delta\).
The bottleneck distance is symmetric and satisfies the triangle inequality. It is not a metric in the usual sense as it may take infinite value. Indeed, \(d_{B}(\mathcal{B},\mathcal{B}^{\prime})<\infty\) if and only if \(\mathcal{B}\) and \(\mathcal{B}^{\prime}\) have the same number of infinite bars.
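For concreteness, here is a brute-force sketch, illustrative only, of the reduced bottleneck distance for small reduced barcodes consisting of finite bars; it uses the standard observation that the optimal \(\delta\) may be found among the pairwise differences of bar lengths and the half-lengths.

```python
from itertools import permutations

def matches(B1, B2, delta, tol=1e-12):
    # pad with None ("deleted bar", allowed when its length is <= 2*delta)
    n = len(B1) + len(B2)
    P1 = list(B1) + [None] * (n - len(B1))
    P2 = list(B2) + [None] * (n - len(B2))
    for perm in permutations(P2):          # exponential: small inputs only
        ok = True
        for L1, L2 in zip(P1, perm):
            if L1 is None and L2 is None:
                continue
            if L1 is None:
                ok = L2 <= 2 * delta + tol
            elif L2 is None:
                ok = L1 <= 2 * delta + tol
            else:
                ok = abs(L1 - L2) <= delta + tol
            if not ok:
                break
        if ok:
            return True
    return False

def bottleneck(B1, B2):
    # candidate optimal values: pairwise differences and half-lengths
    cands = sorted({0.0} | {abs(a - b) for a in B1 for b in B2}
                   | {L / 2 for L in B1 + B2})
    return next(d for d in cands if matches(B1, B2, d))

print(bottleneck([3.0, 1.0], [2.5]))  # 0.5: match 3.0 with 2.5, delete the bar 1.0
```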
**Proposition 3.28**.: _(cf. [19, Proposition 20]) For any \(k\geq 0\), the completion of the set of reduced barcodes having \(k\) infinite bars is the set of possibly infinite reduced barcodes (with \(k\) infinite bars) such that for all \(\epsilon>0\), the number of finite bars with length greater than \(\epsilon\) is finite._
#### 3.5.1. Barcodes associated to Floer-Novikov complexes
Usher-Zhang [20] defined, for each \(\mathbb{Z}_{2}\)-graded Floer-type complex over \(\Lambda_{\mathbb{K}}^{\Gamma}\) and each \(k\in\mathbb{Z}_{2}\), the associated barcode (which allows bars of length zero). As Floer-Novikov complexes are all Floer-type complexes, one has an associated barcode (and hence a reduced barcode). Let the reduced barcode associated to a Floer-Novikov complex \(CF_{\bullet}(\mathfrak{c})\) be \(\mathcal{B}(\mathfrak{c})\). As the differential strictly decreases the action, there are no bars of length zero (which were allowed in the abstract setting of [20]). We do not recall the details of the definition here.
**Proposition 3.29**.: _Let \(\mathbf{V}(\mathfrak{c})\) be the persistence module induced from a filtered Floer-Novikov complex \(CF_{\bullet}(\mathfrak{c})\) over \(\Lambda_{\mathbb{K}}^{\Gamma}\). Then the boundary depth of \(\mathbf{V}(\mathfrak{c})\) (see Definition 3.17 and Definition 3.23) coincides with the length of the longest finite bar in \(\mathcal{B}(\mathfrak{c})\). In particular, the boundary depth is zero if and only if \(\mathcal{B}(\mathfrak{c})\) has no finite bar._
Proof.: It follows from the definitions of boundary depth and barcodes (via singular value decompositions, see [10]). The details are left to the reader.
#### 3.5.2. Stability of barcodes
**Theorem 3.30**.: _[_10_, Theorem 8.17]_ _Let \((CF_{\bullet}(\mathfrak{c}_{1}),\partial_{1},\ell_{1})\) and \((CF_{\bullet}(\mathfrak{c}_{2}),\partial_{2},\ell_{2})\) be two Floer-Novikov complexes associated to Floer-Novikov data \(\mathfrak{c}_{1}\), \(\mathfrak{c}_{2}\) over a field \(\mathbb{K}\). Suppose the quasiequivalence distance between \(CF_{\bullet}(\mathfrak{c}_{1})\) and \(CF_{\bullet}(\mathfrak{c}_{2})\) is finite. Then_
\[d_{B}(\mathcal{B}(\mathfrak{c}_{1}),\mathcal{B}(\mathfrak{c}_{2}))\leq 2d_{Q} (CF_{\bullet}(\mathfrak{c}_{1}),CF_{\bullet}(\mathfrak{c}_{2})).\]
### \(A_{\infty}\) algebras and Hochschild cohomology
Let \(\mathbb{K}\) be a field of characteristic zero. We recall the notion of \(\mathbb{Z}_{2}\)-graded \(A_{\infty}\) algebras over the Novikov field \(\Lambda_{\mathbb{K}}\).
**Definition 3.31** (Curved \(A_{\infty}\) algebra).:
1. A \(\mathbb{Z}_{2}\)-graded **curved \(A_{\infty}\) algebra** over \(\Lambda_{\mathbb{K}}\) consists of a \(\mathbb{Z}_{2}\)-graded \(\Lambda_{\mathbb{K}}\)-vector space \(\mathcal{A}\) (the degree of a homogeneous element \(a\) is denoted by \(|a|\)) and, for all integers \(k\geq 0\), **higher composition maps** \[m_{k}:\mathcal{A}^{\otimes k}\to\mathcal{A}\text{ (where }m_{0}:\Lambda_{\mathbb{K}}\to\mathcal{A}\text{)}\] (which are \(\Lambda_{\mathbb{K}}\)-linear and have degree \(k\) mod \(2\)). The higher composition maps need to satisfy the following \(A_{\infty}\) composition law: for all \(k\geq 1\) and \(a_{k},\ldots,a_{1}\in\mathcal{A}\),4 Footnote 4: There are two different conventions: the variables are either ordered as \(a_{1},\ldots,a_{k}\) or ordered as \(a_{k},\ldots,a_{1}\). \[\sum_{i=0}^{k}\sum_{j=0}^{k-i}(-1)^{\bigstar_{1}^{j}}m_{k-i+1}\left(a_{k},\ldots,a_{i+j+1},m_{i}(a_{i+j},\ldots,a_{j+1}),a_{j},\ldots,a_{1}\right)=0\] where the symbol \(\bigstar_{a}^{b}\) for all \(a<b\) is defined as \[\bigstar_{a}^{b}=\sum_{a\leq i\leq b}\|a_{i}\|\quad\text{ where }\|a_{i}\|=|a_{i}|+1.\] (3.8)
2. The **curvature** of a curved \(A_{\infty}\) algebra is the element \[m_{0}(1)\in\mathcal{A}.\] If \(m_{0}=0\), then we say that the \(A_{\infty}\) algebra is **flat**.
3. Given a (curved or flat) \(A_{\infty}\) algebra \(\mathcal{A}\), a **cohomological unit** is an even element \(e\in\mathcal{A}\) such that \(m_{1}(e)=0\) and that for all homogeneous \(x\in\mathcal{A}\) \[(-1)^{|x|}m_{2}(e,x)=m_{2}(x,e)=x.\] \(e\) is called a **strict unit** if in addition \[m_{k}(\ldots,e,\ldots)=0\ \forall k\geq 3.\] In these two cases we call \((\mathcal{A},e)\) a cohomologically unital (resp. strictly unital) \(A_{\infty}\) algebra.
4. When \(\mathcal{A}\) is flat, the \(A_{\infty}\) composition law implies that \(m_{1}\circ m_{1}=0\). The **cohomology algebra** of \(\mathcal{A}\), denoted by \(H^{\bullet}(\mathcal{A})\), is the \(\mathbb{Z}_{2}\)-graded associative \(\Lambda_{\mathbb{K}}\)-algebra whose underlying space is \(H^{\bullet}(\mathcal{A})=\ker m_{1}/\operatorname{Im}m_{1}\) and whose multiplication is induced from \(m_{2}\).
Because of the bubbling of holomorphic disks, the \(A_{\infty}\) algebra associated to a Lagrangian brane is generally curved. There is a way to turn certain curved \(A_{\infty}\) algebras into flat ones.
**Definition 3.32**.: Let \((\mathcal{A},e)\) be a strictly unital \(A_{\infty}\) algebra. A **weakly bounding cochain** of \((\mathcal{A},e)\) is an odd element \(b\in\mathcal{A}^{\mathrm{odd}}\) such that
\[m(b):=\sum_{k\geq 0}m_{k}(b,\ldots,b)=W(b)e\text{ where }W(b)\in\Lambda_{\mathbb{K}}.\]
Suppose \(b\) is a weakly bounding cochain of \((\mathcal{A},e)\). Then define \(\mathcal{A}^{\flat}\) (which depends on the weakly bounding cochain \(b\)) to be the flat \(A_{\infty}\) algebra whose underlying space is the same as \(\mathcal{A}\) and whose composition maps \(m_{k}^{\flat}\) are defined by
\[m_{k}^{\flat}(x_{k},\ldots,x_{1}):=\sum_{l_{0},\ldots,l_{k}\geq 0}m_{k+l_{0}+\cdots+l_{k}}(\underbrace{b,\ldots,b}_{l_{k}},x_{k},\underbrace{b,\ldots,b}_{l_{k-1}},\ldots,\underbrace{b,\ldots,b}_{l_{1}},x_{1},\underbrace{b,\ldots,b}_{l_{0}}).\]
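For instance, if \(m_{k}=0\) for all \(k\geq 3\), the only contributions to \(m_{1}^{\flat}\) come from \((l_{0},l_{1})\in\{(0,0),(1,0),(0,1)\}\), giving the familiar deformed differential \(m_{1}^{\flat}(x)=m_{1}(x)+m_{2}(b,x)+m_{2}(x,b)\).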
**Lemma 3.33**.: \(\mathcal{A}^{\flat}\) _is a flat \(A_{\infty}\) algebra._
#### 3.6.1. Hochschild cohomology for associative algebras
Let \(A\) be a \(\mathbb{Z}_{2}\)-graded associative algebra over \(\mathbb{K}\). Hochschild cohomology \(\mathit{HH}^{\bullet}(A,M)\) can be defined for all \(\mathbb{Z}_{2}\)-graded bimodules \(M\) of \(A\). Here we only consider the case when \(M=A\). The Hochschild cochain complex (with coefficients in \(A\) itself) is defined by
\[\mathit{CC}^{\bullet,n}(A):=\mathit{CC}^{\bullet,n}(A,A):=\mathrm{Hom}^{ \bullet}_{\Lambda_{\mathbb{K}}}(A^{\otimes n},A[n]).\]
Here the bullet is the \(\mathbb{Z}_{2}\)-grading on linear maps and \(A[n]\) is the \(\mathbb{Z}_{2}\)-graded vector space \(A\) with the \(\mathbb{Z}_{2}\)-grading shifted by \(n\) (modulo 2). Denote the \(\mathbb{Z}_{2}\)-degree of a homogeneous element \(\phi\in\mathit{CC}^{\bullet,\bullet}(A)\) by \(|\phi|\in\mathbb{Z}_{2}\) and the _reduced_ degree by
\[\|\phi\|:=|\phi|+1\in\mathbb{Z}_{2}.\]
A Hochschild cochain is represented by a sequence \(\tau=(\tau_{n})_{n\geq 0}\) of such multi-linear maps. The differential \(\delta_{\mathit{CC}}\), which raises the length grading \(n\) by \(1\), is defined by
\[(\delta_{\mathit{CC}}(\tau))(x_{n+1},\ldots,x_{1})=x_{n+1}\tau_{n}(x_{n},\ldots,x_{1})+(-1)^{\|\tau\|\|x_{1}\|}\tau_{n}(x_{n+1},\ldots,x_{2})x_{1}\\ -\sum_{0\leq i<n}(-1)^{\|\tau\|+\bigstar_{1}^{i}}\tau_{n}(x_{n+1},\ldots,x_{i+2}x_{i+1},x_{i},\ldots,x_{1}). \tag{3.9}\]
The cohomology defined by \(\delta_{\mathit{CC}}\) is called the **Hochschild cohomology** of \(A\) (with coefficients in \(A\)). As the simplest example, via a straightforward calculation one obtains (for \(A=\mathbb{K}\) trivially graded)
\[\mathit{HH}^{\bullet,n}(\mathbb{K})=\left\{\begin{array}{ll}\mathbb{K},&n=0\text{ and }\bullet\text{ even},\\ 0,&\text{otherwise},\end{array}\right.\]
where the superscript \(n\) comes from the length filtration of Hochschild cochains.
_Remark 3.34_.: The formula (3.9) differs from the usual version of Hochschild differential, see for example [13, (1.5.1.1)]. Indeed, suppose \(A\) is ungraded, i.e. all elements are even. Then the \(\mathbb{Z}_{2}\)-grading of a length \(n\) cochain is \(n\) mod 2. In this case (3.9) reduces to
\[(\delta_{\mathit{CC}}(\tau))(x_{n+1},\ldots,x_{1})=x_{n+1}\tau_{n}(x_{n},\ldots,x_{1})+(-1)^{n+1}\tau_{n}(x_{n+1},\ldots,x_{2})x_{1}\\ +\sum_{0\leq i<n}(-1)^{n+i}\tau_{n}(x_{n+1},\ldots,x_{i+2}x_{i+1},x_{i},\ldots,x_{1}).\]
If we replace \(A\) by the opposite algebra \(A^{\mathrm{op}}\) (i.e., the same set with multiplication reversed) and identify a length \(n\) Hochschild cochain \(\tau\) on \(A\) with \(\tau^{\mathrm{op}}\) on \(A^{\mathrm{op}}\) defined by \(\tau^{\mathrm{op}}(x_{1},\ldots,x_{n})=\tau(x_{n},\ldots,x_{1})\), then the above formula differs from the standard Hochschild differential on \(\tau^{\mathrm{op}}\) up to a sign \((-1)^{n+1}\).
#### 3.6.2. Hochschild cohomology for \(A_{\infty}\) algebras
Now let \(\mathcal{A}^{\flat}\) be a flat \(A_{\infty}\) algebra. Define the length \(n\) part of the Hochschild cochain complex of \(\mathcal{A}^{\flat}\) to be
\[\mathit{CC}^{\bullet,n}(\mathcal{A}^{\flat})=\mathit{CC}^{\bullet,n}(\mathcal{A}^{\flat},\mathcal{A}^{\flat})=\mathrm{Hom}^{\bullet}_{\Lambda_{\mathbb{K}}}((\mathcal{A}^{\flat})^{\otimes n},\mathcal{A}^{\flat}[n]).\]
Here \(\bullet\) denotes the \(\mathbb{Z}_{2}\)-grading and \(\mathcal{A}^{\flat}[n]\) denote the super vector space \(\mathcal{A}^{\flat}\) with grading shifted by \(n\) (mod 2).
On the Hochschild cochain complex there is the **Gerstenhaber product** (which is graded with respect to the reduced grading \(\|\cdot\|\)) defined by
\[(\phi\circ\psi)(x_{s},\dots,x_{1})=\sum_{i+j+k=s}(-1)^{\|\psi\|\cdot\bigstar_{1}^{i}}\phi(x_{s},\dots,x_{i+j+1},\psi(x_{i+j},\dots,x_{i+1}),x_{i},\dots,x_{1})\]
as well as the **Gerstenhaber superbracket**
\[[\phi,\psi]:=\phi\circ\psi-(-1)^{\|\phi\|\cdot\|\psi\|}\psi\circ\phi.\]
Then the \(A_{\infty}\)-structure on \(\mathcal{A}^{\flat}\) is equivalent to an even Hochschild cochain \(m^{\flat}\) with \(m^{\flat}_{0}=0\) with the \(A_{\infty}\) relation being equivalent to
\[[m^{\flat},m^{\flat}]=2m^{\flat}\circ m^{\flat}=0.\]
We define the Hochschild differential \(\delta_{\mathit{CC}}\) on
\[\mathit{CC}^{\bullet}(\mathcal{A}^{\flat})=\prod_{n\geq 0}\mathit{CC}^{\bullet,n} (\mathcal{A}^{\flat})\]
by the formula
\[\delta_{\mathit{CC}}(\phi):=[m^{\flat},\phi].\]
Notice that if \(m^{\flat}_{k}\neq 0\) only when \(k=2\), then \(\mathcal{A}^{\flat}\) is a \(\mathbb{Z}_{2}\)-graded associative algebra and the Hochschild differential reduces to the differential (3.9). The Hochschild cohomology of \(\mathcal{A}^{\flat}\) is defined by
\[\mathit{HH}^{\bullet}(\mathcal{A}^{\flat}):=\ker\delta_{\mathit{CC}}/\mathrm{ im}\delta_{\mathit{CC}}.\]
On \(\mathit{CC}^{\bullet}(\mathcal{A}^{\flat})\) there is also an \(A_{\infty}\) structure whose composition maps start with \(\delta_{\mathit{CC}}\). We only need the 2-fold composition map, i.e., the Yoneda product.
**Definition 3.35**.: The **Yoneda product** on \(\mathit{CC}^{\bullet}(\mathcal{A}^{\flat})\), denoted by \(\star\), is defined by
\[(\phi\star\psi)(a_{k},\dots,a_{1})\\ =\sum(-1)^{\clubsuit}m^{\flat}\left(a_{k},\dots,\phi_{r}(a_{i+r},\dots,a_{i+1}),\cdots,\psi_{s}(a_{j+s},\dots,a_{j+1}),\dots,a_{1}\right) \tag{3.10}\]
where the sum is taken over all \(i,j,r,s\) such that each summand makes sense. The sign is defined by (see (3.8) for the definition of \(\bigstar\))
\[\clubsuit:=\|\phi\|\cdot\big{(}\bigstar_{1}^{i}+|\psi|\big{)}+\|\psi\|\cdot \bigstar_{1}^{j}. \tag{3.11}\]
As there are many inconsistent conventions in the literature (see for example [12, 13, 14]), we verify that the Yoneda product indeed induces a product on the cohomology. As recalled above, the Yoneda product can be extended to define an \(A_{\infty}\) structure on \(\mathit{CC}^{\bullet}(\mathcal{A}^{\flat})\), so the induced product on the Hochschild cohomology \(\mathit{HH}^{\bullet}(\mathcal{A}^{\flat})\) is associative.
**Proposition 3.36**.: _The map \(\star:\mathit{CC}^{\bullet}(\mathcal{A}^{\flat})\otimes\mathit{CC}^{\bullet}( \mathcal{A}^{\flat})\to\mathit{CC}^{\bullet}(\mathcal{A}^{\flat})\) is a cochain map of even degree._
Proof.: The fact that \(\star\) has even degree follows directly from the definition of the \(\mathbb{Z}_{2}\)-grading on the Hochschild cochain complex and the fact that \(m^{\flat}\) is even. We verify that \(\star\) is a chain map. To save notation, we assume that both \(\phi\) and \(\psi\) are odd; the general situation can be verified similarly. In this case, we need to prove that
\[\begin{split}&\big{(}m^{\flat}\circ(\phi\star\psi)\big{)}(a_{k},\ldots,a_{1})+\big{(}(\phi\star\psi)\circ m^{\flat}\big{)}(a_{k},\ldots,a_{1})\\ =&\big{(}(m^{\flat}\circ\phi-\phi\circ m^{\flat})\star\psi\big{)}(a_{k},\ldots,a_{1})\\ &+(-1)^{|\phi|}\big{(}\phi\star(m^{\flat}\circ\psi-\psi\circ m^{\flat})\big{)}(a_{k},\ldots,a_{1}).\end{split} \tag{3.12}\]
First we compute the left hand side, in which the involved sign \(\clubsuit\) (see (3.11)) in \(\phi\star\psi\) always vanishes and in which \(\|\phi\star\psi\|=1\). Then
\[\big{(}m^{\flat}\circ(\phi\star\psi)\big{)}(a_{k},\ldots,a_{1})\] \[= \sum(-1)^{\bigstar_{1}^{i}}m^{\flat}\big{(}a_{k},\ldots,(\phi\star\psi)(a_{i+r},\cdots,a_{i+1}),a_{i},\ldots,a_{1}\big{)}\] \[= \sum(-1)^{\bigstar_{1}^{i}}m^{\flat}\big{(}a_{k},\ldots,m^{\flat}(\cdots,\phi(\cdots),\cdots,\psi(\cdots),\cdots),a_{i},\ldots,a_{1}\big{)}.\]
Notice that this is a sum of \(4\)-fold compositions using two \(m^{\flat}\) with \(\phi\) and \(\psi\) such that \(\phi\) and \(\psi\) are contained in the interior \(m^{\flat}\), which can be abbreviated as
\[m^{\flat}(-,m^{\flat}(-,\phi(-),-,\psi(-),-),-).\]
We will make similar abbreviations in the following computations; moreover, we always assume that after the interior \(m^{\flat}\) the inputs start with \(a_{i}\). Then similar to above (remember \(\clubsuit=0\)), one has
\[\big{(}(\phi\star\psi)\circ m^{\flat}\big{)}(a_{k},\ldots,a_{1})\] \[= \sum(-1)^{\bigstar_{1}^{i}}(\phi\star\psi)(-,m^{\flat}(-),-)\] \[= \sum(-1)^{\bigstar_{1}^{i}}\left(m^{\flat}\big{(}-,\phi(-),-,\psi(-),-,m^{\flat}(-),-\big{)}\right.\] \[\left.+m^{\flat}\big{(}-,\phi(-),-,m^{\flat}(-),-,\psi(-),-\big{)}+m^{\flat}\big{(}-,m^{\flat}(-),-,\phi(-),-,\psi(-),-\big{)}\right)\] \[+\sum(-1)^{\bigstar_{1}^{i}}\left(m^{\flat}\big{(}-,\phi(-,m^{\flat}(-),-),-,\psi(-),-\big{)}+m^{\flat}\big{(}-,\phi(-),-,\psi(-,m^{\flat}(-),-),-\big{)}\right).\]
On the right hand side, the first part is a sum of \(4\)-fold compositions such that neither \(\phi\) nor \(\psi\) is contained in the interior \(m^{\flat}\), and the second part is a sum of \(4\)-fold compositions such that either \(\phi\) or \(\psi\) contains the interior \(m^{\flat}\). Now we can observe that the chain map property should be a consequence of the \(A_{\infty}\) relation \(m^{\flat}\circ m^{\flat}=0\). Notice that, to match the signs of the \(A_{\infty}\) relation, we see
\[\|\phi(a_{i+r},\cdots,a_{i+1})\|=|\phi|+|a_{i+r}|+\cdots+|a_{i+1}|+r+1=\|\phi\|+\bigstar_{i+1}^{i+r}=\bigstar_{i+1}^{i+r}\]
as \(\phi\) is assumed to be odd.
Now compute \((m^{\flat}\circ\phi)\star\psi\). Notice that in this computation, because \(m^{\flat}\circ\phi\) is even and \(\psi\) is odd, the sign \(\clubsuit\) (see (3.11)) is
\[\clubsuit=\|m^{\flat}\circ\phi\|\cdot\big{(}\bigstar_{1}^{i}+|\psi|\big{)}+\|\psi\|\cdot\bigstar_{1}^{i}=1+\bigstar_{1}^{i}.\]
Moreover, as \(\phi\) is odd, the signs appearing in \(m^{\flat}\circ\phi\) (in fact, all signs in all Gerstenhaber products until the end of the proof) vanish. Then
\[\big{(}(m^{\flat}\circ\phi)\star\psi\big{)}(a_{k},\dots,a_{1})\] \[= \sum(-1)^{\clubsuit}m^{\flat}\big{(}-,(m^{\flat}\circ\phi)(-),-,\psi(-),-\big{)}\] \[= \sum(-1)^{1+\bigstar_{1}^{i}}m^{\flat}\big{(}-,m^{\flat}(-,\phi(-),-),-,\psi(-),-\big{)}.\]
This is the sum of \(4\)-fold compositions in which the interior \(m^{\flat}\) contains only \(\phi\). Similarly, for computing \(\phi\star(m^{\flat}\circ\psi)\), one has
\[\clubsuit=\|\phi\|\cdot\big{(}\bigstar_{1}^{i}+|m^{\flat}\circ\psi|\big{)}+\|m^{\flat}\circ\psi\|\cdot\bigstar_{1}^{j}=\bigstar_{1}^{j}.\]
As our running convention is that after the second \(m^{\flat}\) the inputs start with \(a_{i}\), this sign is rewritten as \(\bigstar_{1}^{i}\) below. Then
\[(-1)^{|\phi|}\big{(}\phi\star(m^{\flat}\circ\psi)\big{)}(a_{k},\dots,a_{1})\] \[= \sum(-1)^{1+\bigstar_{1}^{i}}m^{\flat}\big{(}-,\phi(-),-,(m^{\flat}\circ\psi)(-),a_{i},\cdots,a_{1}\big{)}\] \[= \sum(-1)^{1+\bigstar_{1}^{i}}m^{\flat}\big{(}-,\phi(-),-,m^{\flat}(-,\psi(-),-),-\big{)}.\]
Now compute \(-(\phi\circ m^{\flat})\star\psi\).
\[-\big{(}(\phi\circ m^{\flat})\star\psi\big{)}(a_{k},\dots,a_{1})\] \[= \sum(-1)^{\bigstar_{1}^{j}}m^{\flat}\big{(}-,(\phi\circ m^{\flat})(-),a_{j},\cdots,\psi(-),-\big{)}\] \[= \sum(-1)^{\bigstar_{1}^{j}+\bigstar_{j+1}^{i}}m^{\flat}\big{(}-,\phi(-,m^{\flat}(-),a_{i},\cdots,a_{j+1}),a_{j},\cdots,\psi(-),-\big{)}\] \[= \sum(-1)^{\bigstar_{1}^{i}}m^{\flat}\big{(}-,\phi(-,m^{\flat}(-),-),-,\psi(-),-\big{)}.\]
Lastly we compute \(-(-1)^{|\phi|}\phi\star(\psi\circ m^{\flat})=\phi\star(\psi\circ m^{\flat})\), which is
\[\big{(}\phi\star(\psi\circ m^{\flat})\big{)}(a_{k},\dots,a_{1})\] \[= \sum(-1)^{\bigstar_{1}^{j}}m^{\flat}\big{(}-,\phi(-),-,(\psi\circ m^{\flat})(-),a_{j},\dots,a_{1}\big{)}\] \[= \sum(-1)^{\bigstar_{1}^{j}+\bigstar_{j+1}^{i}}m^{\flat}\big{(}-,\phi(-),-,\psi(-,m^{\flat}(-),a_{i},\dots,a_{j+1}),a_{j},\dots,a_{1}\big{)}\] \[= \sum(-1)^{\bigstar_{1}^{i}}m^{\flat}\big{(}-,\phi(-),-,\psi(-,m^{\flat}(-),-),-\big{)}.\]
Gathering all computations together, we see that (3.12) follows from the \(A_{\infty}\) relation \(m^{\flat}\circ m^{\flat}=0\).
Therefore the Yoneda product descends to Hochschild cohomology. We still call the induced product the Yoneda product and denote it by the same symbol \(\star\).
The Yoneda product has a chain-level unit in the strictly unital case.
**Proposition 3.37**.: _Suppose \(\mathcal{A}^{\flat}\) has a strict unit \(e\). Then the Hochschild cochain \(\mathbf{1}_{\mathcal{A}^{\flat}}\) defined by_
\[\mathbf{1}_{\mathcal{A}^{\flat}}(x_{k},\dots,x_{1})=\left\{\begin{array}{ ll}0,&\quad k\geq 1,\\ e,&\quad k=0.\end{array}\right.\]
_is a unit with respect to the Yoneda product._
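For reference in the computation below, the strict unit axioms take the following form in the present \(\mathbb{Z}_{2}\)-graded conventions (this is how they are used in the proof):

\[m^{\flat}_{2}(e,x)=(-1)^{|x|}x,\qquad m^{\flat}_{2}(x,e)=x,\qquad m^{\flat}_{k}(\dots,e,\dots)=0\ \text{ for }k\neq 2.\]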
Proof.: By the definition of strict unit and Yoneda product, for any Hochschild cochain \(\phi\), one has
\[(\mathbf{1}_{\mathcal{A}^{\flat}}\star\phi)(a_{k},\dots,a_{1})\] \[= \sum(-1)^{\clubsuit}m^{\flat}\big{(}a_{k},\cdots,e,a_{i},\cdots,\phi(a_{j+l},\cdots,a_{j+1}),\cdots\big{)}\] \[= (-1)^{\bigstar_{1}^{k}+|\phi|}m_{2}^{\flat}\big{(}e,\phi(a_{k},\dots,a_{1})\big{)}\] \[= (-1)^{|\phi(a_{k},\dots,a_{1})|}m_{2}^{\flat}\big{(}e,\phi(a_{k},\dots,a_{1})\big{)}\] \[= \phi(a_{k},\dots,a_{1}).\]
Similarly
\[(\phi\star\mathbf{1}_{\mathcal{A}^{\flat}})(a_{k},\dots,a_{1})\] \[= \sum(-1)^{\clubsuit}m^{\flat}\big{(}a_{k},\cdots,\phi(a_{j+l}, \cdots,a_{j+1}),\cdots,e,a_{i},\cdots\big{)}\] \[= (-1)^{\clubsuit}m_{2}^{\flat}\big{(}\phi(a_{k},\dots,a_{1}),e \big{)}\] \[= m_{2}^{\flat}\big{(}\phi(a_{k},\dots,a_{1}),e\big{)}\] \[= \phi(a_{k},\dots,a_{1}).\qed\]
Finally, we remark that the Yoneda product on \(\mathit{HH}^{\bullet}(\mathcal{A}^{\flat})\) is graded commutative. It is compatible with the Gerstenhaber bracket, which makes \(\mathit{HH}^{\bullet}(\mathcal{A}^{\flat})\) into a Gerstenhaber algebra.
#### 3.6.3. Clifford algebras
The Lagrangian Floer cohomology ring of a torus is often isomorphic to a Clifford algebra. Hence the Hochschild cohomology of Clifford algebras is one of the most important cases related to symplectic geometry and mirror symmetry. Recall that given a finite-dimensional \(\mathbb{K}\)-vector space \(W\) equipped with a quadratic form \(q\), the Clifford algebra \(Cl(W,q)\) is the tensor algebra of \(W\) modulo the relation
\[w\otimes w^{\prime}+w^{\prime}\otimes w+2q(w,w^{\prime})\mathrm{Id}=0.\]
We only care about the case when \(q\) is nondegenerate and \(\mathbb{K}\) is algebraically closed. In this case, all nondegenerate quadratic forms are equivalent to the standard one. When \(W\) has dimension \(n\), we abbreviate \(Cl(W,q)\) by \(Cl_{n}\).
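As a minimal example (assuming \(\operatorname{char}\mathbb{K}\neq 2\)): for \(n=1\), pick \(w\in W\) with \(q(w,w)=1\); the relation above with \(w=w^{\prime}\) gives \(w^{2}=-1\), so

\[Cl_{1}\cong\mathbb{K}[w]/(w^{2}+1).\]

Over an algebraically closed field the elements \(e_{\pm}=\frac{1}{2}(1\pm\sqrt{-1}\,w)\) are orthogonal idempotents, so \(Cl_{1}\cong\mathbb{K}\times\mathbb{K}\) as ungraded algebras; the parity involution \(w\mapsto-w\) swaps \(e_{+}\) and \(e_{-}\), so the \(\mathbb{Z}_{2}\)-grading is nontrivial.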
**Proposition 3.38**.: _For all \(n\geq 0\), the Hochschild cohomology of \(Cl_{n}\) is_
\[\mathit{HH}^{k}(Cl_{n},Cl_{n})=\left\{\begin{array}{ll}\mathbb{K},&\quad k= 0,\\ 0,&\quad k\geq 1.\end{array}\right.\]
_In particular, \(\mathit{HH}^{0}(Cl_{n},Cl_{n})\) is generated by the identity._
Proof.: The calculation was provided by Sheridan [10] and we recall it here. First, Hochschild cohomology is Morita invariant (see [1, 1.5.6]). Second, there are only two Morita equivalence classes among Clifford algebras, the even ones and the odd ones (Bott periodicity). Hence we only need to calculate for \(n=0\) and \(n=1\). When \(n=0\), \(Cl_{0}\cong\mathbb{K}\), giving \(\mathit{HH}^{\bullet}(\mathbb{K},\mathbb{K})=\mathbb{K}\). When \(n=1\), the calculation can be deduced from the more general case of J. Smith [14, Section 5] using reduced Hochschild cohomology.
When the Floer cohomology algebra of a Lagrangian brane is isomorphic to a Clifford algebra, the argument via _formality_ shows that the Hochschild cohomology of the corresponding \(A_{\infty}\) algebra is the same as the Hochschild cohomology of the cohomology algebra. Recall that an \(A_{\infty}\) algebra is called _formal_ if it is \(A_{\infty}\) quasi-isomorphic to its cohomology algebra. An associative algebra \(A\) is called _intrinsically formal_ if any \(\mathbb{Z}_{2}\)-graded \(A_{\infty}\) algebra whose cohomology algebra is isomorphic to \(A\) is formal. It was shown in [16, Corollary 6.4] that \(Cl_{n}\) is intrinsically formal. Due to the Morita invariance of Hochschild cohomology, the following statement is immediate.
**Corollary 3.39**.: _If \(\mathcal{A}^{\flat}\) is a flat \(A_{\infty}\) algebra over \(\mathbb{K}\) whose cohomology algebra is isomorphic to \(Cl_{n}\), then_
\[H\!H^{\bullet}(\mathcal{A}^{\flat})=\mathbb{K}.\]
Notice that if in addition \(\mathcal{A}^{\flat}\) is strictly unital, \(\mathbf{1}_{\mathcal{A}^{\flat}}\neq 0\) and it generates the Hochschild cohomology.
## 4. Vortex Hamiltonian Floer theory
We review the construction of vortex Hamiltonian Floer theory developed by the second author [17] following the proposal of Cieliebak-Gaio-Salamon [14].
### Floer chain complexes
#### 4.1.1. Equivariant action functional
Our convention for Hamiltonian vector field is fixed as follows. Let \((M,\omega)\) be a symplectic manifold and \(H:M\to\mathbb{R}\) be a smooth function. The associated Hamiltonian vector field \(X_{H}\) is specified by
\[dH=\omega(X_{H},\cdot).\]
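For instance (a sign check of this convention only): on \((\mathbb{R}^{2},\omega=dx\wedge dy)\), writing \(X_{H}=a\partial_{x}+b\partial_{y}\), the identity \(dH=\omega(X_{H},\cdot)\) forces \(a=\partial H/\partial y\) and \(b=-\partial H/\partial x\), i.e.

\[X_{H}=\frac{\partial H}{\partial y}\partial_{x}-\frac{\partial H}{\partial x}\partial_{y}.\]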
We would like to consider the Hamiltonian dynamics upstairs in the gauged linear sigma model. Let \(X\) be the toric manifold we are considering. Let \(H:S^{1}\times X\to\mathbb{R}\) be a smooth Hamiltonian function. Let \(\operatorname{Per}(H)\) be the set of \(1\)-periodic orbits\({}^{6}\) of \(H\), whose elements are maps \(x:S^{1}\to X\). The Hamiltonian \(H\) lifts to a \(K\)-invariant function on \(S^{1}\times\mu^{-1}(0)\). Choose an arbitrary \(K\)-invariant extension \(\widehat{H}:S^{1}\times V\to\mathbb{R}\) whose support is compact and disjoint from the unstable locus \(V^{\operatorname{us}}\) under the \(K^{\mathbb{C}}\)-action. Consider the set of **equivariant loops**
Footnote 6: They must be contractible as \(X\) is simply connected.
\[L^{K}(V):=\left\{\mathfrak{x}=(\widehat{x},\zeta):S^{1}\to V\times\mathfrak{k}\right\}. \tag{4.1}\]
Here the function \(\zeta:S^{1}\to\mathfrak{k}\) can be viewed as a gauge field on \(S^{1}\). Notice that as \(V\) is contractible, the loop \(\widehat{x}\) is contractible and there is only one homotopy class of cappings. The loop group \(LK\) acts on the set of equivariant loops by
\[g\cdot\mathfrak{x}=(g\cdot\widehat{x},g\cdot\zeta)\text{ where }(g\cdot\widehat{x})(t)=g(t)\widehat{x}(t),\ (g\cdot\zeta)(t)=\zeta(t)-\frac{d}{dt}\log g(t).\]
Define the action functional
\[\widehat{\mathcal{A}}_{H}:L^{K}(V)\to\mathbb{R},\ \mathfrak{x}\mapsto-\int_{ \mathbb{D}}u^{*}\omega_{V}+\int_{S^{1}}\left(\langle\mu(\widehat{x}(t)),\zeta (t)\rangle-\widehat{H}_{t}(\widehat{x}(t))\right)dt \tag{4.2}\]
where \(u:\mathbb{D}^{2}\to V\) is any capping. Critical points are solutions
\[\mu(\widehat{x}(t))\equiv 0,\qquad\qquad\qquad\widehat{x}^{\prime}(t)=X_{ \widehat{H}_{t}}(\widehat{x}(t))-\mathcal{X}_{\zeta(t)}(\widehat{x}(t)). \tag{4.3}\]
Here \(\mathcal{X}_{\zeta(t)}\) is the Hamiltonian vector field of the function \(\langle\mu,\zeta(t)\rangle\).
The action functional satisfies the following transformational law with respect to the loop group action. Indeed, for \(g\in LK\) and \(\mathfrak{x}\in L^{K}(V)\), one has
\[\widehat{\mathcal{A}}_{H}(g\cdot\mathfrak{x})=-\omega^{K}(g)+\widehat{\mathcal{A}}_{H}(\mathfrak{x}). \tag{4.4}\]
Denote
\[\mathfrak{L}^{K}(V):=L^{K}(V)/\mathrm{ker}\omega^{K}.\]
Its elements are denoted by \([\mathfrak{x}]\). Then \(\Gamma\cong LK/\mathrm{ker}\omega^{K}\) acts on \(\mathfrak{L}^{K}(V)\). We denote the action by \(g\cdot[\mathfrak{x}]\). Then \(\widehat{\mathcal{A}}_{H}\) induces a functional on \(\mathfrak{L}^{K}(V)\), denoted by
\[\mathcal{A}_{H}:\mathfrak{L}^{K}(V)\to\mathbb{R}.\]
Each critical point of \(\mathcal{A}_{H}\) is called an **equivariant 1-periodic Hamiltonian orbit**.
There is a correspondence between ordinary Hamiltonian orbits downstairs and equivariant Hamiltonian orbits upstairs. More precisely, let \(\widetilde{\mathrm{Per}}(H)\) be the covering of \(\mathrm{Per}(H)\) consisting of equivalence classes \([u,x]\) of capped 1-periodic orbits of \(H\): the equivalence relation \((u,x)\sim(u^{\prime},x^{\prime})\) is defined by the equality of action values:
\[(u,x)\sim(u^{\prime},x^{\prime})\Longleftrightarrow x=x^{\prime}\in\mathrm{ Per}(H)\text{ and }\int_{\mathbb{D}^{2}}u^{*}\omega_{X}=\int_{\mathbb{D}^{2}}(u^{\prime})^{*} \omega_{X}.\]
Then there is a map
\[\iota:\widetilde{\mathrm{Per}}(H)\to\mathrm{crit}\mathcal{A}_{H}\subset \mathfrak{L}^{K}(V). \tag{4.5}\]
Indeed, suppose \(x:S^{1}\to X\) is a contractible 1-periodic orbit of the Hamiltonian flow of \(H\) and \(u:\mathbb{D}^{2}\to X\) is a capping of \(x\). View \(\mu^{-1}(0)\to X\) as a principal \(K\)-bundle \(P\). The Euclidean metric on \(V\) induces a connection on \(P\) whose horizontal distribution is the orthogonal complement of tangent planes of \(K\)-orbits; equivalently, this gives a connection 1-form \(\theta\in\Omega^{1}(\mu^{-1}(0))\otimes\mathfrak{k}\). The pullback \(u^{*}P\to\mathbb{D}^{2}\) is trivial and different trivializations differ by a smooth map \(g:\mathbb{D}^{2}\to K\). Any trivialization of this pullback bundle induces a connection matrix \(u^{*}\theta\) whose boundary restriction is \(\zeta(t)dt\). A trivialization also induces a map \(\widehat{u}:\mathbb{D}^{2}\to\mu^{-1}(0)\) lifting \(u\). Let the boundary restriction of \(\widehat{u}\) be \(\widehat{x}\). Then \(\mathfrak{x}=(\widehat{x},\zeta)\) is an equivariant 1-periodic orbit, well-defined up to \(L_{0}K\)-actions. Furthermore, if \(u^{\prime}\) is a different capping with the same resp. different action value, then the correspondence we just described gives the same resp. different element in \(\mathfrak{L}^{K}(V)\).
**Lemma 4.1**.: _In the toric case the map (4.5) is bijective._
Proof.: Given any equivariant Hamiltonian orbit \(\mathfrak{x}\) upstairs, the map \(\widehat{x}:S^{1}\to\mu^{-1}(0)\) projects down to a 1-periodic orbit \(x:S^{1}\to X\). As \(X\) is simply connected, \(x\) is contractible. Choose a capping \(u:\mathbb{D}^{2}\to X\) and let \(\mathfrak{x}^{\prime}=(\widehat{x}^{\prime},\zeta^{\prime})\) be an equivariant Hamiltonian orbit lifting \([u,x]\). As the \(K\)-action on \(\mu^{-1}(0)\) is free, there is a gauge transformation on the circle making \(\widehat{x}^{\prime}=\widehat{x}\). The condition
\[\frac{d}{dt}\widehat{x}(t)=X_{\widehat{H}_{t}}(\widehat{x}(t))-\mathcal{X}_{\zeta^{\prime}(t)}(\widehat{x}(t))\]
implies that \(\zeta=\zeta^{\prime}\).
**Definition 4.2**.: The **Conley-Zehnder index** of an equivariant 1-periodic orbit \(\mathfrak{x}\in\mathrm{crit}\mathcal{A}_{H}\) is the usual Conley-Zehnder index of the capped 1-periodic orbit \(\iota^{-1}(\mathfrak{x})\in\widetilde{\mathrm{Per}}(H)\), denoted by \(\mathrm{CZ}(\mathfrak{x})\in\mathbb{Z}\).
#### 4.1.2. Floer trajectories
Similar to the standard Hamiltonian Floer theory, one considers the equation for the gradient flow of the equivariant action functional. Choose a 1-periodic \(K\)-invariant \(\omega_{V}\)-compatible almost complex structure \(\widehat{J}_{t}\) on \(V\). Formally the negative gradient flow equation of \(\widehat{\mathcal{A}}_{H}\) is the following equation for pairs \((u,\eta):\mathbb{R}\times S^{1}\to V\times\mathfrak{k}\)
\[\partial_{s}u+\widehat{J}_{t}\left(\partial_{t}u+\mathcal{X}_{\eta}(u)-X_{ \widehat{H}_{t}}(u)\right)=0,\qquad\qquad\partial_{s}\eta+\mu(u)=0.\]
This is in fact the symplectic vortex equation on the cylinder \(\mathbb{R}\times S^{1}\) for the trivial \(K\)-bundle and the standard cylindrical volume form, written in temporal gauge \(A=d+\eta dt\). In general, for \(A=d+\xi ds+\eta dt\), the vortex equation (2.2) reads
\[\partial_{s}u+\mathcal{X}_{\xi}(u)+\widehat{J}_{t}\left(\partial_{t}u+ \mathcal{X}_{\eta}(u)-X_{\widehat{H}_{t}}(u)\right)=0,\qquad\qquad\partial_ {s}\eta-\partial_{t}\xi+\mu(u)=0. \tag{4.6}\]
It was shown in [14] that any finite energy solution converges up to gauge transformation to critical points of \(\mathcal{A}_{H}\).
**Theorem 4.3**.: _[_14_, Theorem 3.1, Corollary 4.3]_
1. _Given a bounded solution_ \(\mathfrak{u}=(u,\xi,\eta)\) _(i.e. finite energy solution with_ \(u(\mathbb{R}\times S^{1})\) _bounded) to (_4.6_), there exists a gauge equivalent solution, still denoted by_ \((u,\xi,\eta)\)_, as well as equivariant 1-periodic orbits_ \(\mathfrak{x}_{\pm}=(\widehat{x}_{\pm},\zeta_{\pm})\) _such that uniformly for_ \(t\in S^{1}\)__ \[\lim_{s\to\pm\infty}(u(s,\cdot),\xi(s,\cdot),\eta(s,\cdot))=(\widehat{x}_{\pm},0,\zeta_{\pm}).\] (4.7)
2. _If_ \(\mathfrak{x}^{\prime}_{\pm}\) _are another pair of equivariant 1-periodic orbits satisfying (_4.7_) with_ \(\mathfrak{u}\) _replaced by any gauge equivalent solution, then there exists_ \(g_{\pm}\in LK\) _with_ \(g_{-}g_{+}^{-1}\in L_{0}K\) _such that_ \(\mathfrak{x}^{\prime}_{\pm}=g_{\pm}\mathfrak{x}_{\pm}\)_._
3. _If_ \(H\) _is a nondegenerate Hamiltonian downstairs, then one can make the convergence (_4.7_) exponentially fast by choosing suitable gauge equivalent solutions. More precisely, there exist_ \(C>0\) _and_ \(\delta>0\) _such that_ \[d_{V}(u(s,t),\widehat{x}_{\pm}(t))+|\xi(s,t)|+|\eta(s,t)-\zeta_{\pm}(t)|\leq Ce^{-\delta|s|}.\] _Here_ \(d_{V}\) _is the Euclidean distance on_ \(V\)_. Similar exponential decay estimates hold for covariant derivatives of arbitrary higher orders._\({}^{7}\) Footnote 7: Exponential decay type estimates for vortices can also be found in [10][21][22][23].
Therefore, one can use a pair of elements \(\mathfrak{x}_{\pm}\in\operatorname{crit}\!\mathcal{A}_{H}\subset\mathfrak{L}^ {K}(V)\) to label solutions. Let
\[\mathcal{M}(\mathfrak{x}_{-},\mathfrak{x}_{+})\]
be the set of gauge equivalence classes of bounded solutions \(\mathfrak{u}\) to (4.6) modulo the \(\mathbb{R}\)-translation. One has the energy identity ([14, Proposition 3.8])
\[E(\mathfrak{u})=\mathcal{A}_{H}(\mathfrak{x}_{-})-\mathcal{A}_{H}(\mathfrak{x }_{+}).\]
_Remark 4.4_.: To achieve transversality, one has to avoid certain "bad" \(K\)-equivariant lifts of a given Hamiltonian \(H\) downstairs and choose almost complex structures appropriately. In [14, Section 6] the second author used the notion of _admissible_ almost complex structures and _admissible_\(K\)-invariant lifts of a Hamiltonian downstairs. We briefly recall the precise meanings of them adapted to the toric case. First, in the stable locus \(V^{\mathrm{st}}\) there is the projection \(\pi:V^{\mathrm{st}}\to X\) which is invariant under the complex torus \(G\). Hence there is a splitting
\[TV|_{V^{\mathrm{st}}}\cong\pi^{*}TX\oplus(\mathfrak{k}\otimes\mathbb{C}).\]
Throughout this paper we fix a \(K\)-invariant (small) open neighborhood \(U\) of \(\mu^{-1}(0)\) and consider only \(K\)-invariant, \(\omega_{V}\)-compatible almost complex structures \(\widehat{J}\) on \(V\) which agrees with \(J_{V}\) outside
\(U\) (this is necessary to guarantee the \(C^{0}\)-compactness in [11]). Moreover, given a _nondegenerate_ Hamiltonian \(H:S^{1}\times X\to\mathbb{R}\), an \(S^{1}\)-family of almost complex structures \(\widehat{J}_{t}\) is said to be _admissible_ with respect to \(H\) downstairs if for any loop \(\widehat{x}:S^{1}\to\mu^{-1}(0)\) that projects to a 1-periodic orbit downstairs, one imposes some conditions on the 1-jet of \(\widehat{J}_{t}\) along \(\widehat{x}\) (see [11, Definition 6.2]). Then the notion of admissibility of \(K\)-invariant lifts of \(H\) was defined (see [11, Definition 6.5]), which is a condition on the infinitesimal behavior of the lifts \(\widehat{H}_{t}\) along 1-periodic orbits given in terms of the Hessian of the equivariant action functional.
**Theorem 4.5**.: _Given a nondegenerate Hamiltonian \(H_{t}\) downstairs, for a generic admissible pair \((\widehat{H}_{t},\widehat{J}_{t})\), the following is true._
1. _Each moduli space_ \(\mathcal{M}(\mathfrak{x},\mathfrak{y})\) _is regular and has dimension_ \(\mathrm{CZ}(\mathfrak{x})-\mathrm{CZ}(\mathfrak{y})-1\)_._
2. _Moduli spaces with bounded energy are compact up to breaking as in the usual setting for the Uhlenbeck-Gromov-Floer compactification._
3. _If_ \(\mathrm{CZ}(\mathfrak{x})-\mathrm{CZ}(\mathfrak{y})=1\)_, then the moduli space consists of finitely many points._
4. _When_ \(\mathrm{CZ}(\mathfrak{x})-\mathrm{CZ}(\mathfrak{y})=2\)_, the compactified moduli space is a compact 1-dimensional manifold with boundary._
We briefly explain the reason for transversality. Indeed, as the total energy is finite and the volume on the cylinder is infinite, near infinity any solution is contained in the neighborhood \(U\) of \(\mu^{-1}(0)\) fixed in Remark 4.4. Therefore, there is a nonempty open subset of the cylinder whose image is contained in the free locus of the \(K\)-action. Then using an equivariant version of the argument of Floer-Hofer-Salamon [10] one can achieve transversality by perturbing \(\widehat{J}_{t}\) in a neighborhood of \(\mu^{-1}(0)\).
It is a standard procedure to construct a coherent system of orientations on the moduli spaces (see [10]). Then for \(R=\mathbb{Z}\), there is a well-defined count
\[n(\mathfrak{x},\mathfrak{y})\in\mathbb{Z}\]
which is the signed count of the number of Floer trajectories in 0-dimensional components of \(\mathcal{M}(\mathfrak{x},\mathfrak{y})\). When \(R\) is any commutative ring with a unit, \(n(\mathfrak{x},\mathfrak{y})\) induces an element
\[n_{R}(\mathfrak{x},\mathfrak{y})\in R.\]
#### 4.1.3. Floer homology
We first define the Floer chain group for a smaller Novikov ring. Recall that one has the finitely generated abelian group
\[\Gamma:=LK/\mathrm{ker}\omega^{K}\]
which naturally embeds into \(\mathbb{R}\). For any commutative ring \(R\), introduce
\[\Lambda^{\Gamma}_{R}:=\Big{\{}\sum_{i=1}^{\infty}a_{i}T^{g_{i}}\in\Lambda_{R} \ |\ g_{i}\in\Gamma\Big{\}}.\]
We define the Floer chain group \(\mathit{VCF}_{\bullet}(\widehat{H})\) to be the "downward" completion:
\[\mathit{VCF}_{\bullet}(\widehat{H};\Lambda^{\Gamma}_{R})=\Big{\{}\sum_{i=1}^{\infty}b_{i}\mathfrak{x}_{i}\ |\ b_{i}\in R,\ \mathfrak{x}_{i}\in\mathrm{crit}\mathcal{A}_{H},\ \lim_{i\to\infty}\mathcal{A}_{H}(\mathfrak{x}_{i})=-\infty\Big{\}}.\]
It is graded by the Conley-Zehnder index (modulo 2). The \(\Lambda^{\Gamma}_{R}\)-module structure is defined by
\[\Big{(}\sum_{i=1}^{\infty}a_{i}T^{g_{i}}\Big{)}\Big{(}\sum_{j=1}^{\infty}b_{j}\mathfrak{x}_{j}\Big{)}=\sum_{i,j=1}^{\infty}a_{i}b_{j}(g_{i}\cdot\mathfrak{x}_{j}).\]
By (4.4), \(\mathcal{A}_{H}(g_{i}\cdot\mathfrak{x}_{j})=\mathcal{A}_{H}(\mathfrak{x}_{j})-\omega^{K}(g_{i})\), so the right hand side is in \(\mathit{VCF}_{\bullet}(\widehat{H};\Lambda_{R}^{\Gamma})\) and this is a well-defined action. Define
\[\mathit{VCF}_{\bullet}(\widehat{H};\Lambda_{R}):=\mathit{VCF}_{\bullet}( \widehat{H};\Lambda_{R}^{\Gamma})\otimes_{\Lambda_{R}^{\Gamma}}\Lambda_{R}.\]
The Floer differential \(\partial_{\widehat{J}}\colon\mathit{VCF}_{\bullet}(\widehat{H};\Lambda_{R}^{ \Gamma})\to\mathit{VCF}_{\bullet-1}(\widehat{H};\Lambda_{R}^{\Gamma})\) is defined by the counts \(n_{R}(\mathfrak{x},\mathfrak{y})\). More precisely, on generators,
\[\partial_{\widehat{J}}\mathfrak{x}=\sum_{\mathfrak{y}}n_{R}(\mathfrak{x}, \mathfrak{y})\mathfrak{y}.\]
One has \(\partial_{\widehat{J}}^{2}=0\), resulting in the **vortex Floer homology**
\[\mathit{VHF}_{\bullet}(\widehat{H},\widehat{J};\Lambda_{R}^{\Gamma}).\]
Notice that the differential \(\partial_{\widehat{J}}\)_decreases_ the action.
#### 4.1.4. Adiabatic limit
The adiabatic limit argument allows us to relate the gauged linear sigma model with holomorphic curves in the symplectic quotient. While we do not need a complete analysis of such a correspondence, we do need to consider the family of vortex equations related to the adiabatic limit argument. Indeed, if on the infinite cylinder we choose, instead of the standard area form \(dsdt\), a rescaled one \(\lambda^{2}dsdt\), then the corresponding vortex Floer equation reads
\[\partial_{s}u+\mathcal{X}_{\xi}(u)+\widehat{J}_{t}\left(\partial_{t}u+ \mathcal{X}_{\eta}(u)-X_{\widehat{H}_{t}}(u)\right)=0,\qquad\quad\partial_{s} \eta-\partial_{t}\xi+\lambda^{2}\mu(u)=0. \tag{4.8}\]
One can define a vortex Floer chain complex for the triple \((\lambda,\widehat{H},\widehat{J})\) in completely the same way as the \(\lambda=1\) case, once transversality holds, which can be achieved via perturbation. We denote the vortex Floer chain complex by \(\mathit{VCF}_{\bullet}^{\lambda}(\widehat{H},\widehat{J};\Lambda_{R}^{\Gamma})\). The corresponding homology is denoted by
\[\mathit{VHF}_{\bullet}^{\lambda}(\widehat{H},\widehat{J};\Lambda_{R}^{\Gamma}).\]
There are a few subtleties. First, given a nondegenerate Hamiltonian \(H\) downstairs on \(X\), the notion of admissible almost complex structures ([23, Definition 6.2]) is independent of \(\lambda\); the notion of admissible lifts, however, depends on \(\lambda\).
**Definition 4.6**.: A triple \((\lambda,\widehat{H},\widehat{J})\) is called a **regular triple** if
1. The descended Hamiltonian \(H\) on \(X\) is nondegenerate.
2. \((\widehat{H},\widehat{J})\) is admissible with respect to \(H\).
3. Moduli spaces of gauge equivalence classes of finite energy solutions to (4.8) are all regular.
#### 4.1.5. Continuation map
Given two regular triples \((\lambda_{\pm},\widehat{H}_{\pm},\widehat{J}_{\pm})\), one compares the two associated vortex Floer complexes via continuation maps. By an **interpolation** between these two triples, we mean a triple \((\lambda_{s},\widehat{H}_{s},\widehat{J}_{s})\) where \(\lambda_{s}\in\mathbb{R}_{+}\) is a smooth function in \(s\in\mathbb{R}\) which agrees with \(\lambda_{\pm}\) near \(\pm\infty\), \(\widehat{H}_{s}\) is a smooth family of \(K\)-invariant compactly supported functions parametrized by \((s,t)\in\mathbb{R}\times S^{1}\) which agrees with \(\widehat{H}_{\pm}\) near \(\pm\infty\), and \(\widehat{J}_{s}\) is a smooth family of \(K\)-invariant \(\omega_{V}\)-compatible almost complex structures parametrized by \((s,t)\in\mathbb{R}\times S^{1}\) which agrees with \(\widehat{J}_{\pm}\) near \(\pm\infty\).
Choosing a generic interpolation \((\lambda_{s},\widehat{H}_{s},\widehat{J}_{s})\), by considering moduli spaces of gauge equivalence classes of solutions to the equation
\[\partial_{s}u+\mathcal{X}_{\xi}+\widehat{J}_{s,t}\left(\partial_{t}u+\mathcal{ X}_{\eta}-X_{\widehat{H}_{s,t}}(u)\right)=0,\qquad\qquad\partial_{s}\eta- \partial_{t}\xi+\lambda_{s}^{2}\mu(u)=0, \tag{4.9}\]
one can define a continuation map
\[\mathsf{cont}:\mathit{VCF}_{\bullet}^{\lambda_{-}}(\widehat{H}_{-},\widehat{J }_{-};\Lambda_{R}^{\Gamma})\to\mathit{VCF}_{\bullet}^{\lambda_{+}}(\widehat{H} _{+},\widehat{J}_{+};\Lambda_{R}^{\Gamma})\]
completely analogous to the case of classical Hamiltonian Floer theory. The map \(\mathsf{cont}\) is a chain homotopy equivalence, inducing an isomorphism on Floer homology
\[\mathit{VHF}_{\bullet}^{\lambda_{-}}(\widehat{H}_{-},\widehat{J}_{-};\Lambda_{R }^{\Gamma})\cong\mathit{VHF}_{\bullet}^{\lambda_{+}}(\widehat{H}_{+},\widehat{ J}_{+};\Lambda_{R}^{\Gamma}).\]
Completely analogous to the classical situation, these isomorphisms are natural, hence the resulting homology groups define a common object called the **vortex Hamiltonian Floer homology** of \(V\), denoted by
\[\mathit{VHF}_{\bullet}(V;\Lambda_{R}^{\Gamma}). \tag{4.10}\]
Define
\[\mathit{VHF}_{\bullet}(V;\Lambda_{R}):=\mathit{VHF}_{\bullet}(V;\Lambda_{R}^{ \Gamma})\otimes_{\Lambda_{R}^{\Gamma}}\Lambda_{R}.\]
In order to consider effects on the filtered theories, one needs to estimate the energy of solutions contributing to the continuation maps.
**Proposition 4.7**.: _Given any solution \(\mathfrak{u}=(u,\xi,\eta)\) to (4.9) which converges to \(\mathfrak{x}_{\pm}\in\mathrm{crit}\mathcal{A}_{H_{\pm}}\) at \(\pm\infty\), one has_
\[\int_{\mathbb{R}\times S^{1}}\Big{(}|\partial_{s}u+\mathcal{X}_{\xi}(u)|^{2}+ \lambda_{s}^{2}|\mu(u)|^{2}\Big{)}dsdt=\mathcal{A}_{H_{-}}(\mathfrak{x}_{-})- \mathcal{A}_{H_{+}}(\mathfrak{x}_{+})-\int_{\mathbb{R}\times S^{1}}\frac{ \partial\widehat{H}_{s,t}}{\partial s}(u)dsdt. \tag{4.11}\]
_In particular, if \(\widehat{H}_{s,t}=(1-\chi(s))\widehat{H}_{-}+\chi(s)\widehat{H}_{+}\) for some non-decreasing function \(\chi:\mathbb{R}\to[0,1]\), then one has_
\[\mathcal{A}_{H_{+}}(\mathfrak{x}_{+})\leq\mathcal{A}_{H_{-}}(\mathfrak{x}_{-} )+\int_{0}^{1}\max_{V}\left(\widehat{H}_{-}-\widehat{H}_{+}\right)dt. \tag{4.12}\]
Proof.: When \(\lambda_{s}\) is a constant, (4.11) is [20, Proposition 7.5]. The general case is the same, as the area form on the domain does not affect the topological nature of the energy. As the left-hand side of (4.11) is nonnegative, (4.12) follows.
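To spell out the last step: with \(\widehat{H}_{s,t}=(1-\chi(s))\widehat{H}_{-}+\chi(s)\widehat{H}_{+}\) one has \(\partial_{s}\widehat{H}_{s,t}=\chi^{\prime}(s)(\widehat{H}_{+}-\widehat{H}_{-})\), so

\[-\int_{\mathbb{R}\times S^{1}}\frac{\partial\widehat{H}_{s,t}}{\partial s}(u)dsdt=\int_{\mathbb{R}\times S^{1}}\chi^{\prime}(s)\big{(}\widehat{H}_{-}-\widehat{H}_{+}\big{)}(u)dsdt\leq\int_{0}^{1}\max_{V}\big{(}\widehat{H}_{-}-\widehat{H}_{+}\big{)}dt,\]

using \(\chi^{\prime}\geq 0\) and \(\int_{\mathbb{R}}\chi^{\prime}(s)ds=1\).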
#### 4.1.6. Computation of \(\mathit{VHF}\)
It is expected that the vortex Floer homology is isomorphic to the Hamiltonian Floer homology of the symplectic quotient, and hence to its singular homology (with appropriate coefficients). However, such a calculation relies on involved technical constructions. The Piunikhin-Salamon-Schwarz (PSS) approach forces one to deal with multiple covers of equivariant Floer cylinders with \(H\equiv 0\) which may have negative equivariant Chern number. The adiabatic limit approach (similar to [10]) requires the study of affine vortices for a general toric manifold. In particular, for general symplectic quotients both approaches require the use of the virtual technique.
However, in the toric case, even without having the PSS map, it is rather easy to compute the rank of \(\mathit{VHF}(V)\) as one can find a perfect Morse function.
**Proposition 4.8**.: _For any commutative ring \(R\), as \(\Lambda_{R}^{\Gamma}\)-modules, \(\mathit{VHF}_{\bullet}(V;\Lambda_{R}^{\Gamma})\) is isomorphic to \(H_{\bullet}(X;\Lambda_{R}^{\Gamma})\) (with the reduced \(\mathbb{Z}_{2}\)-grading) up to a degree shifting._
Proof.: Recall that the \(2n\)-dimensional toric manifold \(X\) carries a Hamiltonian \(T^{n}\)-action. For a generic circle \(S^{1}\subset T^{n}\), the induced moment map \(f:X\to\mathbb{R}\) is a perfect Morse function whose critical points are the toric fixed points. In particular, the Morse indices are all even. Then for \(\epsilon\) small, \(\epsilon f\) is a nondegenerate time-independent Hamiltonian. After a small perturbation and a \(K\)-invariant lift to \(V\), the corresponding vortex Floer chain complex has no two generators with adjacent degrees. Hence \(\mathit{VHF}_{\bullet}(V;\Lambda_{R}^{\Gamma})\) has the same rank as \(H_{\bullet}(X;\Lambda_{R}^{\Gamma})\). Lastly, the usual
normalization of the Conley-Zehnder index is taken in such a way that if \(x\) is a critical point of \(\epsilon f\) viewed as a \(1\)-periodic orbit with a constant capping, then
\[\text{CZ}(x)=n-\text{index}_{f}(x)\]
where \(2n=\text{dim}X\) and \(\text{index}_{f}(x)\) is the Morse index of \(x\) (see [10, (12.1.7)]).
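To illustrate (an example for concreteness, not needed later): for \(X=\mathbb{P}^{1}\), so \(n=1\), a generic moment map \(f\) has two critical points, of Morse index \(0\) and \(2\), hence \(\mathrm{CZ}=1\) and \(\mathrm{CZ}=-1\). Both generators are odd with respect to the \(\mathbb{Z}_{2}\)-grading, so the differential vanishes for parity reasons and

\[\mathit{VHF}_{\bullet}(V;\Lambda_{R}^{\Gamma})\cong\Lambda_{R}^{\Gamma}\oplus\Lambda_{R}^{\Gamma}\cong H_{\bullet}(\mathbb{P}^{1};\Lambda_{R}^{\Gamma}).\]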
### Small bulk deformations
Here we define a family of deformations of the vortex Floer homology parametrized by "small" bulk deformations. Recall that the toric manifold \(X\) has \(N\) toric divisors \(D_{j}\) corresponding to the \(N\) faces of the moment polytope. These divisors are GIT quotients of the coordinate hyperplanes
\[V_{j}=\{(x_{1},\dots,x_{N})\in V\ |\ x_{j}=0\}.\]
Introduce a small bulk deformation of the form
\[\mathfrak{b}=\sum_{j=1}^{N}c_{j}V_{j}\text{ where }c_{j}\in\Lambda_{0,R}. \tag{4.13}\]
The \(\mathfrak{b}\)-deformed vortex Floer complex is the complex generated by equivariant \(1\)-periodic orbits upstairs whose differential counts gauge equivalence classes of solutions to the vortex equation in a different way: for each rigid (modulo gauge transformation) solution \(\mathfrak{u}=(u,\xi,\eta)\), we weight the count by the factor
\[\exp\left(\sum_{j=1}^{N}c_{j}(u\cap V_{j})\right)\in\Lambda_{R}\]
where \(u\cap V_{j}\) is the intersection number between the cylinder \(u\) and the divisor \(V_{j}\). Formally, this count coincides with the count of solutions on the cylinder with markings mapped to \(V_{j}\).
_Remark 4.9_.: The use of bulk deformations in Lagrangian Floer theory was invented by Fukaya-Oh-Ohta-Ono [1, 1] which resembles the notion of _big quantum cohomology_ in Gromov-Witten theory. Bulk deformations are adapted to Hamiltonian Floer theory in [13] and [1]. In gauged linear sigma model it was discussed in [14]. The term "small" used here comes from the terminology in Gromov-Witten theory where small means deforming Gromov-Witten invariants by divisor classes and "big" means deforming by classes with arbitrary degrees.
#### 4.2.1. Bulk-avoiding Hamiltonians
One can only have a well-defined topological intersection number between Floer cylinders and the divisors if periodic orbits do not intersect these toric divisors. We introduce the following type of Hamiltonians on the toric manifold.
**Definition 4.10** (Bulk-avoiding Hamiltonians).:
1. A Hamiltonian \(H\) on the toric manifold \(X\) is called **bulk-avoiding** if all \(1\)-periodic orbits of \(nH\) for all \(n\geq 1\) do not intersect the divisor \(D_{1}\cup\dots\cup D_{N}\).
2. Denote by \[\mathcal{H}_{K}^{**}(V)\subset\mathcal{H}_{K}^{*}(V)\] the space of admissible \(K\)-invariant Hamiltonians on \(V\) whose reductions are bulk-avoiding.
3. A bulk-avoiding admissible pair is an admissible pair \((\widehat{H},\widehat{J})\) such that \(\widehat{H}\) descends to a bulk-avoiding Hamiltonian downstairs.
It is easy to see that a generic \(C^{2}\)-small perturbation of any Hamiltonian is bulk-avoiding. Now we can define the topological intersection numbers. Let \(\mathfrak{u}=(u,\xi,\eta)\) be a solution to (4.6) which
converges to equivariant \(1\)-periodic orbits \(\mathfrak{x}\) resp. \(\mathfrak{y}\) at \(-\infty\) resp. \(+\infty\). Then a generic compactly supported perturbation \(\tilde{u}\) intersects transversely with \(V_{j}\). Define
\[[\mathfrak{u}]\cap V_{j}=\tilde{u}\cap V_{j}\in\mathbb{Z}\]
which counts transverse intersection points with signs. Notice that this number is well-defined: first, if \(\tilde{u}^{\prime}\) is another perturbation, then \(\tilde{u}\cap V_{j}=\tilde{u}^{\prime}\cap V_{j}\); second, if \(\mathfrak{u}^{\prime}=(u^{\prime},\xi^{\prime},\eta^{\prime})\) is gauge equivalent to \(\mathfrak{u}\) via a gauge transformation \(g\), then \(\tilde{u}^{\prime}:=g\tilde{u}\) is a perturbation of \(u^{\prime}\). As \(V_{j}\) is \(K\)-invariant, \(\tilde{u}^{\prime}\) still intersects transversely with \(V_{j}\) and the intersection number is the same.
#### 4.2.2. Bulk-deformed vortex Floer complex
For our application, we only consider small bulk deformations of the form
\[\mathfrak{b}=\sum_{j=1}^{N}\log c_{j}\ V_{j}\ \text{where}\ c_{j}\in\mathbb{Z}[ \mathfrak{i}]=\mathbb{Z}\oplus\mathfrak{i}\mathbb{Z}.\]
Here \(\mathfrak{i}=\sqrt{-1}\) and one can regard \(\mathbb{Z}[\mathfrak{i}]\subset\mathbb{C}\). The weighted counts eventually only depend on \(c_{j}\) so we allow \(c_{j}\) to be zero and the ambiguity of taking logarithm does not affect further discussions. Consider the vortex Floer chain complex
\[\mathit{VCF}_{\bullet}(\widehat{H},\widehat{J};\Lambda_{\mathbb{Z}[ \mathfrak{i}]}).\]
Due to the special behavior of the bulk \(\mathfrak{b}\), the weighted counts of cylinders are still integral. Define the bulk-deformed vortex differential
\[\partial^{\mathfrak{b}}:\mathit{VCF}_{\bullet}(\widehat{H},\widehat{J}; \Lambda_{\mathbb{Z}[\mathfrak{i}]})\to\mathit{VCF}_{\bullet}(\widehat{H}, \widehat{J};\Lambda_{\mathbb{Z}[\mathfrak{i}]})\]
by
\[\partial^{\mathfrak{b}}(\mathfrak{x})=\sum_{\mathfrak{y}\atop\mathrm{CZ}(\mathfrak{x})-\mathrm{CZ}(\mathfrak{y})=1}\left(\sum_{[\mathfrak{u}]\in\mathcal{M}^{\mathrm{cyl}}(\mathfrak{x},\mathfrak{y})}\epsilon([\mathfrak{u}])\exp\left(\sum_{j=1}^{N}\log c_{j}\ [\mathfrak{u}]\cap V_{j}\right)\right)\mathfrak{y}. \tag{4.14}\]
Here \(\epsilon([\mathfrak{u}])\in\{\pm 1\}\) is the sign of the rigid solution \([\mathfrak{u}]\). In particular, when \(\mathfrak{b}=0\), the above coincides with the original differential map \(\partial\).
**Lemma 4.11**.: \(\partial^{\mathfrak{b}}\) _is a legitimate linear map and \((\partial^{\mathfrak{b}})^{2}=0\)._
Proof.: First, as \(c_{j}\in\mathbb{Z}[\mathfrak{i}]\), the weights
\[\exp\left(\sum_{j=1}^{N}\log c_{j}\ [\mathfrak{u}]\cap V_{j}\right)=\prod_{j=1 }^{N}c_{j}^{[\mathfrak{u}]\cap V_{j}}\in\mathbb{Z}[\mathfrak{i}].\]
Hence the coefficients on the right hand side of (4.14) are still in \(\mathbb{Z}[\mathfrak{i}]\). Second, by Gromov compactness, the sum (4.14) is still in the module \(\mathit{VCF}_{\bullet}(\widehat{H},\widehat{J};\Lambda_{\mathbb{Z}[\mathfrak{i}]})\). Hence \(\partial^{\mathfrak{b}}\) is a well-defined linear map. To prove that its square is zero, consider, for each \(\mathfrak{x}\) and \(\mathfrak{z}\) with Conley-Zehnder indices differing by \(2\), the \(1\)-dimensional components of the moduli space \(\overline{\mathcal{M}^{\text{cyl}}(\mathfrak{x},\mathfrak{z})}\). It can be further decomposed into connected components. Within each connected component, the topological intersection number of each cylinder with each \(V_{j}\) is constant. Moreover, for the concatenation of two cylinders \([\mathfrak{u}_{1}]\) and \([\mathfrak{u}_{2}]\) which is in the boundary of such a component, this intersection number with \(V_{j}\) is equal to the sum \([\mathfrak{u}_{1}]\cap V_{j}+[\mathfrak{u}_{2}]\cap V_{j}\). It follows that \((\partial^{\mathfrak{b}})^{2}=0\).
Hence for each regular admissible bulk-avoiding pair \((\widehat{H},\widehat{J})\), one can define the \(\mathfrak{b}\)-deformed vortex Floer homology by
\[\mathit{VHF}_{\bullet}^{\mathfrak{b}}(\widehat{H},\widehat{J};\Lambda_{ \mathbb{Z}[\mathfrak{i}]}):=\text{ker}\partial^{\mathfrak{b}}/\text{im} \partial^{\mathfrak{b}}.\]
Below we summarize its properties.
**Theorem 4.12** (Properties of bulk-deformed vortex Floer complex).:
1. _For each regular bulk-avoiding admissible pair_ \((\widehat{H},\widehat{J})\)_, the complex_ \(\mathit{VCF}_{\bullet}(\widehat{H},\widehat{J};\Lambda_{\mathbb{Z}[\mathfrak{i}]})\) _with differential_ \(\partial^{\mathfrak{b}}\) _is a_ \(\mathbb{Z}_{2}\)_-graded filtered Floer-Novikov complex (see Definition 3.14)._
2. _For each two regular admissible bulk-avoiding pairs_ \((\widehat{H}_{1},\widehat{J}_{1})\) _and_ \((\widehat{H}_{2},\widehat{J}_{2})\)_, there is a continuation map_ \[\mathsf{cont}:\mathit{VCF}_{\bullet}^{\mathfrak{b}}(\widehat{H}_{1},\widehat{J}_{1};\Lambda_{\mathbb{Z}[\mathfrak{i}]})\to\mathit{VCF}_{\bullet}^{\mathfrak{b}}(\widehat{H}_{2},\widehat{J}_{2};\Lambda_{\mathbb{Z}[\mathfrak{i}]})\] _which is canonical up to chain homotopy. Hence there is a_ \(\mathbb{Z}_{2}\)_-graded_ \(\Lambda_{\mathbb{Z}[\mathfrak{i}]}\)_-module_ \(\mathit{VHF}_{\bullet}^{\mathfrak{b}}(V;\Lambda_{\mathbb{Z}[\mathfrak{i}]})\)_, called the_ \(\mathfrak{b}\)_-deformed vortex Floer homology, with canonical isomorphisms_ \[\mathit{VHF}_{\bullet}^{\mathfrak{b}}(\widehat{H},\widehat{J};\Lambda_{\mathbb{Z}[\mathfrak{i}]})\cong\mathit{VHF}_{\bullet}^{\mathfrak{b}}(V;\Lambda_{\mathbb{Z}[\mathfrak{i}]})\] _for all regular admissible bulk-avoiding pairs_ \((\widehat{H},\widehat{J})\)_._
3. _There is a linear isomorphism_ \[\mathit{VHF}_{\bullet}^{\mathfrak{b}}(V;\Lambda_{\mathbb{Z}[\mathfrak{i}]})\cong H_{\bullet}(X;\Lambda_{\mathbb{Z}[\mathfrak{i}]}).\]
#### 4.2.3. Poincare duality
In Morse-Floer theory one can define the Poincare duality on the chain-level by "reversing" the Morse function or the symplectic action functional. We recall this construction in the setting of vortex Floer theory. If \(\widehat{H}:S^{1}\times V\rightarrow\mathbb{R}\) is a \(K\)-invariant Hamiltonian, define \(\widehat{H}^{\text{op}}:S^{1}\times V\rightarrow\mathbb{R}\) by
\[\widehat{H}^{\text{op}}(t,v)=-\widehat{H}(-t,v).\]
Then similar to the case of the ordinary Floer homology (see [14, Section 12.3]), there is a one-to-one correspondence between \(\text{crit}\mathcal{A}_{H}\) and \(\text{crit}\mathcal{A}_{H^{\text{op}}}\). More precisely, if \(\mathfrak{x}=(\widehat{x},\eta)\in L^{K}(V)\) is an equivariant \(1\)-periodic orbit, then
\[\mathfrak{x}^{\text{op}}:=(\widehat{x}^{\text{op}},\eta^{\text{op}})\text{ where }\widehat{x}^{\text{op}}(t)=\widehat{x}(-t),\ \eta^{\text{op}}(t)=-\eta(-t)\]
solves
\[\frac{d}{dt}\widehat{x}^{\text{op}}(t)+\mathcal{X}_{\eta^{\text{op}}(t)}( \widehat{x}^{\text{op}}(t))-X_{\widehat{H}^{\text{op}}}(\widehat{x}^{\text{ op}}(t))=0\]
and hence is an equivariant \(1\)-periodic orbit for \(\widehat{H}^{\text{op}}\). The map \(\mathfrak{x}\mapsto\mathfrak{x}^{\text{op}}\) induces a one-to-one correspondence
\[\text{crit}\mathcal{A}_{H}\cong\text{crit}\mathcal{A}_{H^{\text{op}}}\]
with critical values and Conley-Zehnder indices reversed.
Similarly, if \(\widehat{J}_{t}\) is an \(S^{1}\)-family of \(K\)-invariant almost complex structures on \(V\), then define
\[(\widehat{J}^{\text{op}})_{t}=\widehat{J}_{-t}.\]
One can verify easily that if \((\widehat{H},\widehat{J})\) is admissible, so is \((\widehat{H}^{\text{op}},\widehat{J}^{\text{op}})\).
Now we define a Poincare pairing on the vortex Floer homology. Let \((\widehat{H}_{1},\widehat{J}_{1})\) and \((\widehat{H}_{2},\widehat{J}_{2})\) be two regular bulk-avoiding admissible pairs on \(V\). Consider the genus zero curve with two incoming cylindrical ends, denoted by \(\Sigma_{\supset}\). Choose an area form with cylindrical ends on \(\Sigma_{\supset}\). Define a \(K\)-invariant Hamiltonian perturbation \(\widehat{H}_{\supset}\) on \(\Sigma_{\supset}\) which is equal to \(\widehat{H}_{1}dt\) on the first cylindrical end and which is equal to \(\widehat{H}_{2}^{\text{op}}dt\) on the second cylindrical end. Choose a domain-dependent \(K\)-invariant almost complex structure \(\widehat{J}_{\supset}\) which agrees with \(\widehat{J}_{1}\) on the first cylindrical end and which is equal to \(\widehat{J}_{2}^{\text{op}}\) on the second cylindrical end. Consider the \(\widehat{H}_{\supset}\)-perturbed symplectic vortex equation on \(\Sigma_{\supset}\) with respect to the family of almost complex structures \(\widehat{J}_{\supset}\). Finite energy
solutions converge to critical points of \(\mathcal{A}_{H_{1}}\) resp. \(\mathcal{A}_{H_{2}^{\mathrm{op}}}\) at the two cylindrical ends. Then given \(\mathfrak{x}\in\mathrm{crit}\mathcal{A}_{H_{1}}\) and \(\mathfrak{y}^{\mathrm{op}}\in\mathrm{crit}\mathcal{A}_{H_{2}^{\mathrm{op}}} \cong\mathrm{crit}\mathcal{A}_{H_{2}}\), one can obtain a well-defined count
\[\mathfrak{n}_{\supset}^{\mathfrak{b}}(\mathfrak{x},\mathfrak{y})\in\mathbb{Z}\]
by looking at rigid solutions. Define a bilinear pairing
\[\langle\cdot,\cdot\rangle^{\mathfrak{b}}:\,\mathit{VCF}_{\bullet}^{\mathfrak{ b}}(\widehat{H}_{1},\widehat{J}_{1};\Lambda_{R}^{\Gamma})\otimes\mathit{VCF}_{\bullet}^{ \mathfrak{b}}(\widehat{H}_{2}^{\mathrm{op}},\widehat{J}_{2}^{\mathrm{op}}; \Lambda_{R}^{\Gamma})\to R\]
by
\[\langle\sum_{i=1}^{\infty}a_{i}\mathfrak{x}_{i},\sum_{j=1}^{\infty}b_{j}\mathfrak{y}_{j}^{\mathrm{op}}\rangle^{\mathfrak{b}}:=\sum_{i,j}a_{i}b_{j}\mathfrak{n}_{\supset}^{\mathfrak{b}}(\mathfrak{x}_{i},\mathfrak{y}_{j}^{\mathrm{op}}).\]
An argument via energy inequality shows that the above form is finite and well-defined; by considering 1-dimensional moduli spaces one can show that the above pairing descends to homology
\[\langle\cdot,\cdot\rangle^{\mathfrak{b}}:\,\mathit{VHF}_{\bullet}^{\mathfrak{ b}}(\widehat{H}_{1},\widehat{J}_{1};\Lambda_{R}^{\Gamma})\otimes\mathit{VHF}_{ \bullet}^{\mathfrak{b}}(\widehat{H}_{2},\widehat{J}_{2};\Lambda_{R}^{\Gamma}) \to R.\]
One can also show that the pairing is compatible with respect to the continuation map. Hence it induces a pairing
\[\langle\cdot,\cdot\rangle^{\mathfrak{b}}:\,\mathit{VHF}_{\bullet}^{\mathfrak{ b}}(V;\Lambda_{R}^{\Gamma})\otimes\mathit{VHF}_{\bullet}^{\mathfrak{b}}(V; \Lambda_{R}^{\Gamma})\to R.\]
Now we specialize to the case when \(\widehat{H}_{1}=\widehat{H}_{2}=\widehat{H}\) and \(\widehat{J}_{1}=\widehat{J}_{2}=\widehat{J}\). In this case the pairing takes a simple form on the chain level. Indeed, if we choose \(\widehat{H}_{\supset}\) and \(\widehat{J}_{\supset}\) to be the trivial ones, then the counting \(\mathfrak{n}_{\supset}^{\mathfrak{b}}(\mathfrak{x},\mathfrak{y}^{\mathrm{op}})\) equals 1 if \(\mathfrak{x}=\mathfrak{y}\) and zero otherwise. Then if
\[\alpha=\sum a_{i}\mathfrak{x}_{i}\in\mathit{VCF}_{\bullet}^{\mathfrak{b}}( \widehat{H},\widehat{J};\Lambda_{R}^{\Gamma}),\qquad\qquad\beta=\sum b_{j} \mathfrak{x}_{j}^{\mathrm{op}}\in\mathit{VCF}_{\bullet}^{\mathfrak{b}}( \widehat{H}^{\mathrm{op}},\widehat{J}^{\mathrm{op}};\Lambda_{R}^{\Gamma})\]
one has
\[\langle\alpha,\beta\rangle^{\mathfrak{b}}=\sum_{i}a_{i}b_{i}\in R.\]
This sum is finite as \(\mathcal{A}_{H}(\mathfrak{x}_{i})\to-\infty\) and \(\mathcal{A}_{H^{\mathrm{op}}}(\mathfrak{x}_{j}^{\mathrm{op}})=-\mathcal{A}_{H }(\mathfrak{x}_{j})\to-\infty\).
#### 4.2.4. Pair-of-pants products
A TQFT type construction allows us to define a multiplicative structure on the vortex Floer homology. In particular, using any volume form on the pair-of-pants with cylindrical ends, one can define the pair-of-pants product
\[*_{\mathfrak{b}}:\,\mathit{VHF}_{\bullet}^{\mathfrak{b}}(V;\Lambda_{R}^{ \Gamma})\otimes\mathit{VHF}_{\bullet}^{\mathfrak{b}}(V;\Lambda_{R}^{\Gamma} )\to\mathit{VHF}_{\bullet}^{\mathfrak{b}}(V;\Lambda_{R}^{\Gamma})[n]\]
which is associative. Here \(2n=\mathrm{dim}X\). The details were given in [10].
There is also an identity element in the vortex Floer homology. Fix a regular bulk-avoiding admissible pair \((\widehat{H},\widehat{J})\). Consider a once-punctured sphere \(\Sigma_{\mathrm{cigar}}\) which is biholomorphic to the complex plane. View the puncture as an output. Equip \(\Sigma_{\mathrm{cigar}}\) with a cylindrical volume form \(\nu_{\mathrm{cigar}}\) so that one has the isometric identification
\[\mathbb{C}\setminus B_{1}\cong[0,+\infty)\times S^{1}.\]
Turn on the Hamiltonian perturbation on this cylindrical end, meaning that one has a Hamiltonian perturbation
\[\mathcal{H}\in\Omega^{1}(\Sigma_{\mathrm{cigar}},C_{c}^{\infty}(V)^{K})\text{ s.t. }\mathcal{H}|_{[S,+\infty)\times S^{1}}=\widehat{H}_{t}dt\text{ for }S\gg 0.\]
Choose a domain-dependent \(K\)-invariant \(\omega_{V}\)-compatible almost complex structure \(\mathcal{J}\) parametrized by \(z\in\Sigma_{\mathrm{cigar}}\) such that over the cylindrical end it agrees with \(\widehat{J}_{t}\). Consider the Hamiltonian perturbed symplectic vortex equation
\[\overline{\partial}_{A,\mathcal{H}}u=0,\qquad\qquad F_{A}+\mu(u)\nu_{\mathrm{cigar}}=0.\]
Each finite energy solution \(\mathfrak{u}=(A,u)\) converges to an equivariant \(1\)-periodic orbit and hence represents an element \(\mathfrak{x}\in\operatorname{crit}\!\mathcal{A}_{H}\). Hence for each \(\mathfrak{x}\) there is a moduli space
\[\mathcal{M}^{\operatorname{cigar}}(\mathfrak{x}).\]
Elements in this moduli space have a uniform energy bound by \(-\mathcal{A}_{H}(\mathfrak{x})+C\) where \(C\) depends on the perturbation data on the cigar which is uniformly bounded. The virtual dimension is \(n-\operatorname{CZ}(\mathfrak{x})\). Counting elements (with signs) of index zero moduli spaces \(\mathcal{M}^{\operatorname{cigar}}(\mathfrak{x})\) defines an element
\[\mathbf{1}^{\operatorname{GLSM}}_{\mathfrak{b},\widehat{H}}=\sum_{\mathfrak{ x}}\mathfrak{n}^{\mathfrak{b}}_{\operatorname{cigar}}(\mathfrak{x})\mathfrak{x} \in\mathit{VCF}^{\mathfrak{b}}_{n}(\widehat{H},\widehat{J};\Lambda^{\Gamma}_{ R}).\]
Standard TQFT argument shows that \(\mathbf{1}^{\operatorname{GLSM}}_{\mathfrak{b},\widehat{H}}\) is closed, induces a well-defined element in \(\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda^{\Gamma}_{R})\), and is the multiplicative identity of \(\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda^{\Gamma}_{R})\). Denote this element by
\[\mathbf{1}^{\operatorname{GLSM}}_{\mathfrak{b}}\in\mathit{VHF}^{\mathfrak{b} }_{n}(V;\Lambda^{\Gamma}_{R}). \tag{4.15}\]
**Lemma 4.13**.: _The element \(\mathbf{1}^{\operatorname{GLSM}}_{\mathfrak{b}}\) is nonzero._
Proof.: In the undeformed case this was proved using the closed-open map in [14] and the fact that some Lagrangian Floer theory is nontrivial. Here, as we know that the algebra \(\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda^{\Gamma}_{R})\) is nonzero for any ring \(R\) (see Proposition 4.8), one must have \(\mathbf{1}^{\operatorname{GLSM}}_{\mathfrak{b}}\neq 0\).
**Lemma 4.14**.: _One has_
\[\langle\alpha,\beta\rangle^{\mathfrak{b}}\neq 0\Longrightarrow\langle\alpha \ast_{\mathfrak{b}}\beta,\mathbf{1}^{\operatorname{GLSM}}_{\mathfrak{b}} \rangle^{\mathfrak{b}}\neq 0.\]
Proof.: This follows from a standard TQFT and cobordism argument. See Figure 1. The details are left to the reader.
Before we end this part, we state a major step towards our proof of the Hofer-Zehnder conjecture.
**Theorem E**.: _There exists a bulk-deformation \(\mathfrak{b}\) of the form_
\[\mathfrak{b}=\sum_{j=1}^{N}\log c_{j}V_{j}\]
_with \(c_{j}\in\mathbb{Z}[\mathfrak{i}]\) such that the algebra \(\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\mathbb{K}})\) is semisimple in the sense of Definition 3.6._
The proof occupies Section 8 and Section 9, using the closed-open string map in the vortex setting.
### Bulk-deformed spectral invariants, persistence modules, and barcodes
We fit the bulk-deformed vortex Floer theory into the abstract packages developed by Usher and others. Let \(\mathfrak{b}\) be a bulk-deformation of the form (4.13).
**Proposition 4.15**.: _Given a regular bulk-avoiding pair \((\widehat{H},\widehat{J})\), the quadruple_
\[\mathfrak{c}^{\mathfrak{b}}(\widehat{H},\widehat{J}):=(P_{H},\mathcal{A}_{H}, \mathrm{CZ}_{(2)},n^{\mathfrak{b}})\]
_is a \(\mathbb{Z}_{2}\)-graded Floer-Novikov package over \(R\) (see Definition 3.14)._
Proof.: Straightforward.
Next we consider the quantitative dependence of the vortex Floer chain complex on the Hamiltonian. We restrict to the case where \(R=\mathbb{K}\) is a field. The vortex Floer chain complex \(\text{V\!C\!F}^{\mathfrak{b}}_{\bullet}(\widehat{H},\widehat{J};\Lambda^{ \Gamma}_{\mathbb{K}})\) is the associated Floer-Novikov complex.
**Proposition 4.16**.: _Given two regular bulk-avoiding pairs \((\widehat{H}_{1},\widehat{J}_{1})\) and \((\widehat{H}_{2},\widehat{J}_{2})\), the quasi-equivalence distance (see Definition 3.19) between \(\text{V\!C\!F}^{\mathfrak{b}}_{\bullet}(\widehat{H}_{1},\widehat{J}_{1}; \Lambda^{\Gamma}_{\mathbb{K}})\) and \(\text{V\!C\!F}^{\mathfrak{b}}_{\bullet}(\widehat{H}_{2},\widehat{J}_{2}; \Lambda^{\Gamma}_{\mathbb{K}})\) is no greater than the Hofer distance between the induced Hamiltonians \(H_{1},H_{2}\) downstairs, i.e._
\[d_{Q}\Big{(}\text{V\!C\!F}^{\mathfrak{b}}_{\bullet}(\widehat{H}_{1},\widehat{ J}_{1};\Lambda^{\Gamma}_{\mathbb{K}}),\text{V\!C\!F}^{\mathfrak{b}}_{ \bullet}(\widehat{H}_{2},\widehat{J}_{2};\Lambda^{\Gamma}_{\mathbb{K}})\Big{)} \leq\max\Big{\{}\int_{0}^{1}\max_{X}(H_{2}-H_{1})dt,\ \int_{0}^{1}\max_{X}(H_{1}-H_{2})dt\Big{\}}.\]
Proof.: This follows from the quantitative analysis of the continuation map. As the bulk \(\mathfrak{b}\) and the coefficient field are fixed, we drop them from the notation. To show that the complex only depends on the induced Hamiltonian downstairs (measured by the quasi-equivalence distance), we need to introduce the parameter \(\lambda\) (see (4.8)). For each regular bulk-avoiding triple \((\lambda,\widehat{H},\widehat{J})\), there is a Floer-Novikov package \(\mathfrak{c}^{\lambda}(\widehat{H},\widehat{J})\) defined from (\(\mathfrak{b}\)-deformed) counts of solutions to (4.8). Denote the associated Floer-Novikov complex by \(\mathit{VCF}^{\lambda}_{\bullet}(\widehat{H},\widehat{J};\Lambda^{\Gamma}_{\mathbb{K}})\) with valuation denoted by \(\ell^{\lambda}\).
**Lemma 4.17**.: _The quasi-equivalence distance between \(\text{V\!C\!F}^{\lambda_{1}}_{\bullet}(\widehat{H}_{1},\widehat{J}_{1})\) and \(\text{V\!C\!F}^{\lambda_{2}}_{\bullet}(\widehat{H}_{2},\widehat{J}_{2})\) is bounded by_
\[\widehat{d}_{\mathrm{Hofer}}(\widehat{H}_{1},\widehat{H}_{2}):=\max\left\{\int_{0}^{1}\max_{V}(\widehat{H}_{2}-\widehat{H}_{1})dt,\ \int_{0}^{1}\max_{V}(\widehat{H}_{1}-\widehat{H}_{2})dt\right\}.\]
Proof.: Indeed, this follows from the energy calculation for the continuation maps (see Proposition 4.7). One can construct chain homotopy equivalences \(\Phi\), \(\Psi\) between these two complexes and maps \(K_{1}\), \(K_{2}\) as in the diagram
The first item of Definition 3.19 follows directly from (4.12). Using the same method, the second item of Definition 3.19 can be verified for the maps \(K_{1}\), \(K_{2}\).
We fix the two regular bulk-avoiding pairs \((\widehat{H}_{\pm},\widehat{J}_{\pm})\). For each \(\epsilon>0\), one can find a \(K\)-invariant cut-off function \(\rho_{\epsilon}:V\to[0,1]\) supported near \(\mu^{-1}(0)\) such that if we define \(\widehat{H}_{\pm}^{\epsilon}:=\rho_{\epsilon}\widehat{H}_{\pm}\), then
\[\widehat{d}_{\mathrm{Hofer}}(\widehat{H}_{-}^{\epsilon},\widehat{H}_{+}^{ \epsilon})\leq d_{\mathrm{Hofer}}(H_{-},H_{+})+\epsilon.\]
Hence in view of Lemma 4.17 above, we only need to prove the following.
**Lemma 4.18**.: _Suppose \((\widehat{H}_{\pm},\widehat{J}_{\pm})\) are two regular bulk-avoiding pairs such that \(\widehat{H}_{+}\) and \(\widehat{H}_{-}\) descend to the same Hamiltonian \(H\) downstairs. Then_
\[d_{Q}(\text{VCF}_{\bullet}(\widehat{H}_{-},\widehat{J}_{-}),\text{VCF}_{ \bullet}(\widehat{H}_{+},\widehat{J}_{+}))=0.\]
Proof.: We prove that the quasi-equivalence distance is less than \(\epsilon\) for all \(\epsilon>0\). Notice that the potential failure of this assertion comes from the difference between \(\widehat{H}_{-}\) and \(\widehat{H}_{+}\) which is _a priori_ large outside \(\mu^{-1}(0)\). We use the adiabatic limit argument to push solutions contributing to the continuation maps near the level set \(\mu^{-1}(0)\).
Choose a sequence \(\lambda_{i}\to\infty\). For each \(\lambda_{i}\), one can choose a \(\lambda_{i}\)-admissible lift \(\widehat{H}_{\pm}^{\lambda_{i}}\) of \(H\). As the admissible condition is only about the infinitesimal behaviors of the lifts \(\widehat{H}_{\pm}^{\lambda_{i}}\) near lifts of \(1\)-periodic orbits of \(H\), we may require that
\[\|\widehat{H}_{\pm}^{\lambda_{i}}-\widehat{H}_{\pm}\|_{C^{0}}\leq\epsilon.\]
Hence by Lemma 4.17, one only needs to consider the quasi-equivalence distance
\[d_{Q}\left(\text{VCF}_{\bullet}^{\lambda_{i}}(\widehat{H}_{-}^{\lambda_{i}}, \widehat{J}_{-}^{\lambda_{i}}),\text{VCF}_{\bullet}^{\lambda_{i}}(\widehat{H} _{+}^{\lambda_{i}},\widehat{J}_{+}^{\lambda_{i}})\right).\]
We claim that the above sequence (in \(i\)) converges to zero.
We set up the moduli spaces for the continuation maps. Choose a cut-off function \(\chi:\mathbb{R}\to[0,1]\) which is non-decreasing, equals zero on \((-\infty,0]\), and equals \(1\) on \([1,+\infty)\). Consider equation (4.9) with
\[\widehat{H}_{s,t}^{\lambda_{i}}=(1-\chi(s))\widehat{H}_{-}^{\lambda_{i}}+\chi (s)\widehat{H}_{+}^{\lambda_{i}}.\]
We claim that, for all \(\epsilon>0\), there exists \(i_{\epsilon}>0\) such that when \(i\geq i_{\epsilon}\), for all finite energy solutions to (4.9), if the limits at \(\pm\infty\) are \(\mathfrak{x}_{\pm,i}\), then one has
\[\mathcal{A}_{H}(\mathfrak{x}_{+,i})-\mathcal{A}_{H}(\mathfrak{x}_{-,i})\leq\epsilon.\]
This would establish item (1) of Definition 3.19.
Suppose on the contrary that this is not true. Then there exist \(\delta>0\), a subsequence (still indexed by \(i\)), a sequence of solutions \(\mathfrak{u}_{i}=(u_{i},\xi_{i},\eta_{i})\) to the equation connecting \(\mathfrak{x}_{-,i}\) and \(\mathfrak{x}_{+,i}\) such that
\[\mathcal{A}_{H}(\mathfrak{x}_{+,i})-\mathcal{A}_{H}(\mathfrak{x}_{-,i})\geq \delta>0.\]
By the energy identity (4.11), one has a uniform bound which is independent of \(\lambda_{i}\):
\[E_{\lambda_{i}}(\mathfrak{u}_{i})=\mathcal{A}_{H}(\mathfrak{x}_{-,i})-\mathcal{ A}_{H}(\mathfrak{x}_{+,i})-\int_{[0,1]\times S^{1}}\partial_{s}\widehat{H}_{s,t}^{ \lambda_{i}}(u)dsdt\leq C.\]
Now one can apply the adiabatic limit argument. Notice that although we cannot guarantee the convergence of \(\widehat{H}_{s,t}^{\lambda_{i}}\), we may require that \(\widehat{J}_{s,t}^{\lambda_{i}}\) converges in sufficiently high order to a fixed almost complex structure \(\widehat{J}\) outside a compact subset of \(V\). In the \(\lambda_{i}\to\infty\) limit, _a priori_ there are three types of bubbles (see [1, Section 11]): holomorphic spheres in \(V\), holomorphic spheres in \(X\), and _affine vortices_, which are solutions to the vortex equation over \(\mathbb{C}\) (without Hamiltonian term). The three kinds of bubbles can be distinguished by the rate of energy concentration compared to the rate of the divergence \(\lambda_{i}\to\infty\). As there is a lower bound on the energy of these bubbles,
the uniform bound on energy implies that, after passing to a subsequence (still indexed by \(i\)), except near a finite subset \(Z\subset[0,1]\times S^{1}=:Q\) at which bubbling could occur, the energy density
\[|\partial_{s}u_{i}+\mathcal{X}_{\xi_{i}}(u_{i})|^{2}+\lambda_{i}^{2}|\mu(u_{i})| ^{2}\]
stays bounded. In particular, the map \(u_{i}|_{Q}\) stays arbitrarily close to \(\mu^{-1}(0)\) except near \(Z\) as \(i\to\infty\). More precisely, for any \(r>0\), there exists \(i_{r}>0\) such that for all \(i\geq i_{r}\),
\[\sup_{z\in[0,1]\times S^{1}\setminus B_{r}(Z)}|\mu(u_{i}(z))|\leq r. \tag{4.16}\]
Then one has
\[\mathcal{A}_{H}(\mathfrak{x}_{+,i})-\mathcal{A}_{H}(\mathfrak{x }_{-,i})\leq\int_{Q}|\partial_{s}\widehat{H}_{s,t}^{\lambda_{i}}(u_{i})|dsdt\\ \leq\int_{Q\setminus B_{r}(Z)}|\partial_{s}\widehat{H}_{s,t}^{ \lambda_{i}}(u_{i})|dsdt+\int_{B_{r}(Z)}|\partial_{s}\widehat{H}_{s,t}^{ \lambda_{i}}(u_{i})|dsdt.\]
As \(\widehat{H}_{-}^{\lambda_{i}}=\widehat{H}_{+}^{\lambda_{i}}\) on \(\mu^{-1}(0)\), the first term is bounded by \(Cr\); the second term is bounded by \(C\,\text{Area}(B_{r}(Z))\), which can be made arbitrarily small. This contradicts the assumption that \(\mathcal{A}_{H}(\mathfrak{x}_{+,i})-\mathcal{A}_{H}(\mathfrak{x}_{-,i})\geq \delta>0\).
Therefore, we have established item (1) of Definition 3.19. The case of item (2) is similar and hence omitted.
Now the proof of Proposition 4.16 is complete.
#### 4.3.1. Spectral invariants
Spectral numbers of Hamiltonian diffeomorphisms were introduced by Oh [10] and Schwarz [14], and enhanced by Entov-Polterovich [1, 2, 1]. In [23] Wu and the second author constructed the analogue in the vortex Floer theory.
By Theorem 3.16 and Proposition 4.15, one can define the spectral numbers
\[\rho^{\mathfrak{b}}(\alpha;\widehat{H},\widehat{J}):=\rho_{\mathfrak{c}^{ \mathfrak{b}}(\widehat{H},\widehat{J})}(\alpha)\in\mathbb{R}\cup\{-\infty\},\ \forall\alpha\in\text{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda^{\Gamma}_{ \mathbb{Z}[\mathfrak{i}]}).\]
One can establish the following properties of these spectral numbers, which were proved in [23] in the undeformed \((\mathfrak{b}=0)\) case.
**Theorem 4.19**.: _(cf. [23, Proposition 3.6]) The spectral numbers \(\rho^{\mathfrak{b}}(\alpha;\widehat{H},\widehat{J})\) have the following properties._
1. **(Independence of lifting and almost complex structure)**_The number_ \(\rho^{\mathfrak{b}}(\alpha;\widehat{H},\widehat{J})\) _only depends on the induced Hamiltonian_ \(H\) _downstairs. Denote this number by_ \[c^{\mathfrak{b}}(\alpha,H)\in\mathbb{R}.\]
2. **(Homogeneity)** _Given_ \(\alpha\in\text{VHF}(V;\Lambda^{\Gamma}_{\mathbb{Z}[\mathfrak{i}]})\) _and_ \(\lambda\in\Lambda^{\Gamma}_{\mathbb{Z}[\mathfrak{i}]}\)_, for any_ \(H\)_, one has_ \[c^{\mathfrak{b}}(\lambda\alpha,H)=c^{\mathfrak{b}}(\alpha,H)-\mathfrak{v}( \lambda).\] _One uses this formula to extend the spectral numbers to classes in_ \[\text{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\mathbb{Z}[\mathfrak{i}]})= \text{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda^{\Gamma}_{\mathbb{Z}[\mathfrak{ i}]})\otimes_{\Lambda^{\Gamma}_{\mathbb{Z}[\mathfrak{i}]}}\Lambda_{\mathbb{Z}[ \mathfrak{i}]}.\]
3. **(Lipschitz continuity)** _Given any two nondegenerate Hamiltonians_ \(H_{1},H_{2}\) _downstairs, one has_ \[\int_{S^{1}}\min_{X}(H_{1}-H_{2})dt\leq c^{\mathfrak{b}}(\alpha,H_{1})-c^{ \mathfrak{b}}(\alpha,H_{2})\leq\int_{S^{1}}\max_{X}(H_{1}-H_{2})dt.\] _This implies that_ \(c^{\mathfrak{b}}(\alpha,H)\) _is defined for all Hamiltonians._
4. **(Invariance)** \(c^{\mathfrak{b}}(\alpha,H)\) _only depends on the homotopy class of the Hamiltonian path_ \(\tilde{\phi}_{H}\) _on_ \(X\)_. Let_ \(\operatorname{Ham}(X)\) _be the group of Hamiltonian diffeomorphisms on_ \(X\) _and let_ \(\widetilde{\operatorname{Ham}}(X)\to\operatorname{Ham}(X)\) _be the covering formed by homotopy classes of Hamiltonian isotopies on_ \(X\)_. Then we can define_ \[c^{\mathfrak{b}}(\alpha,\tilde{\phi})\in\mathbb{R}\cup\{-\infty\}\ \forall\alpha\in\text{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{R}),\ \tilde{\phi}\in\widetilde{\operatorname{Ham}}(X).\]
5. **(Triangle inequality)** _For any_ \(\alpha_{1},\alpha_{2}\in\text{VHF}(V;\Lambda_{R})\) _and_ \(\tilde{\phi}_{1},\tilde{\phi}_{2}\in\widetilde{\operatorname{Ham}}(X)\) _one has_ \[c^{\mathfrak{b}}(\alpha_{1}*\alpha_{2},\tilde{\phi}_{1}\tilde{\phi}_{2})\leq c^{\mathfrak{b}}(\alpha_{1},\tilde{\phi}_{1})+c^{\mathfrak{b}}(\alpha_{2},\tilde{\phi}_{2}).\]
**Definition 4.20**.: The **valuation** of a class \(\alpha\in\text{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{R})\) is defined to be
\[\mathcal{A}^{\mathfrak{b}}(\alpha):=c^{\mathfrak{b}}(\alpha,\tilde{\mathrm{Id}})\in\mathbb{R}\cup\{-\infty\}.\]
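To illustrate the homogeneity property (item (2) of Theorem 4.19) together with this definition, assume the usual convention that the Novikov variable satisfies \(\mathfrak{v}(T^{a})=a\); then

\[c^{\mathfrak{b}}(T^{a}\alpha,H)=c^{\mathfrak{b}}(\alpha,H)-a,\qquad\mathcal{A}^{\mathfrak{b}}(T^{a}\alpha)=\mathcal{A}^{\mathfrak{b}}(\alpha)-a,\]

so rescaling a class by \(T^{a}\) lowers its valuation by exactly \(a\).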
#### 4.3.2. Poincare duality
One useful property of the spectral numbers is related to the Poincare duality map.
**Proposition 4.21**.: _Let \(\mathbb{K}\) be a field._
1. _For any_ \(\alpha\in\text{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\mathbb{K}})\) _and_ \(\tilde{\phi}\in\operatorname{Ham}(X)\)_, there holds_ \[c^{\mathfrak{b}}(\alpha,\tilde{\phi})=-\inf\Big{\{}c^{\mathfrak{b}}(\beta, \tilde{\phi}^{-1})\ |\ \langle\alpha,\beta\rangle^{\mathfrak{b}}\neq 0\Big{\}}.\]
2. _If_ \(\langle\alpha,\beta\rangle^{\mathfrak{b}}\neq 0\)_, then_ \[\mathcal{A}^{\mathfrak{b}}(\alpha)+\mathcal{A}^{\mathfrak{b}}(\beta)\geq 0.\]
Proof.: Notice that one only needs to prove this proposition for the coefficient field \(\Lambda^{\Gamma}_{\mathbb{K}}\). In the case of ordinary Hamiltonian Floer theory, the proof of (1) uses the PSS map and the correspondence between the pairing \(\langle\cdot,\cdot\rangle\) and the intersection pairing on the singular homology of the manifold (see [1][10][11]). It was pointed out in [12] that (1) holds for abstract filtered Floer-Novikov complexes. As the complex \(\text{VCF}^{\mathfrak{b}}_{\bullet}(\widehat{H},\widehat{J};\Lambda^{\Gamma}_{\mathbb{K}})\) is an abstract filtered Floer-Novikov complex over \(\Lambda^{\Gamma}_{\mathbb{K}}\) (see Proposition 4.15), (1) follows. For (2), take \(\tilde{\phi}=\mathrm{Id}\). Then
\[\mathcal{A}^{\mathfrak{b}}(\alpha)=c^{\mathfrak{b}}(\alpha,\mathrm{Id})=-\inf \Big{\{}\mathcal{A}^{\mathfrak{b}}(\beta)\ |\ \langle\alpha,\beta\rangle^{\mathfrak{b}}\neq 0\Big{\}}.\]
Hence if \(\langle\alpha,\beta\rangle^{\mathfrak{b}}\neq 0\), \(\mathcal{A}^{\mathfrak{b}}(\beta)\geq-\mathcal{A}^{\mathfrak{b}}(\alpha)\).
#### 4.3.3. Persistence modules and barcodes
Recall that (see Subsection 3.4) to any filtered Floer-Novikov complex \(CF_{\bullet}(\mathfrak{c})\) over the Novikov field \(\Lambda^{\Gamma}_{\mathbb{K}}\) one can associate a persistence module \(\mathbf{V}(\mathfrak{c})\). In particular, for each regular bulk-avoiding admissible pair \((\widehat{H},\widehat{J})\), the bulk-deformed vortex Floer complex \(\text{VCF}^{\mathfrak{b}}_{\bullet}(\widehat{H},\widehat{J};\Lambda^{\Gamma} _{\mathbb{K}})\) gives a persistence module, denoted by
\[\mathbf{V}^{\mathfrak{b}}(\widehat{H},\widehat{J};\Lambda^{\Gamma}_{\mathbb{K}}).\]
We omit the dependence on the bulk deformation \(\mathfrak{b}\) most of the time. One can check easily that we can extend the coefficient field to the universal Novikov field \(\Lambda_{\mathbb{K}}\), obtaining a persistence module \(\mathbf{V}(\widehat{H},\widehat{J};\Lambda_{\mathbb{K}})\) with
\[V^{s}(\widehat{H},\widehat{J};\Lambda_{\mathbb{K}}):=\text{VHF}^{\leq s}_{\bullet}(\widehat{H},\widehat{J};\Lambda^{\Gamma}_{0,\mathbb{K}})\otimes_{\Lambda^{\Gamma}_{0,\mathbb{K}}}\Lambda_{0,\mathbb{K}}.\]
When the ground field \(\mathbb{K}\) is clear from the context, we often abbreviate this persistence module by \(\mathbf{V}(\widehat{H},\widehat{J})\). One can prove, using the continuation map, that up to isomorphism this persistence module is independent of the choice of the almost complex structure \(\widehat{J}\). Hence denote the persistence module by \(\mathbf{V}(\widehat{H})\). One can also use the same idea as in the proof of Proposition 4.16 to show that, for different lifts \(\widehat{H}_{1},\widehat{H}_{2}\) of the same Hamiltonian \(H\) downstairs, the interleaving distance between
\(\boldsymbol{V}(\widehat{H}_{1})\) and \(\boldsymbol{V}(\widehat{H}_{2})\) is zero. Identifying persistence modules with zero interleaving distance, the persistence module only depends on the Hamiltonian path \(\tilde{\phi}\in\widetilde{\operatorname{Ham}}(X)\) generated by \(H\). Hence we loosely denote the object by \(\boldsymbol{V}(\tilde{\phi})\).
Recall also that to any Floer-Novikov complex one can associate a barcode (and hence a reduced barcode). The reduced barcode corresponding to a regular bulk-avoiding admissible pair \((\widehat{H},\widehat{J})\) is denoted by \(\mathcal{B}(\widehat{H},\widehat{J})\). One can prove that (similar to the case of ordinary Floer barcodes, see [13, Proposition 5.3]) the reduced barcode only depends on the time-\(1\) map \(\phi=\phi_{H}\) on the toric manifold \(X\). Hence we also denote it by \(\mathcal{B}(\phi)\).
## 5. Local Floer theory
To extend the Hofer-Zehnder conjecture to degenerate Hamiltonian diffeomorphisms, one needs to have a good notion of counts of fixed points. Following [11], we will use the rank of a local version of the vortex Floer homology (with bulk deformation), which is ultimately isomorphic to the local Floer homology in the classical sense, to define a homological count of fixed points. This section can be skipped at first reading, especially if the reader is mainly interested in the nondegenerate case. The following statements will be proved in this section.
**Theorem 5.1**.: _Let \(\mathbb{K}\) be a field. Let \(\phi:X\to X\) be a Hamiltonian diffeomorphism and \(p\in X\) be an isolated fixed point. Then there is a \(\mathbb{Z}_{2}\)-graded \(\mathbb{K}\)-vector space_
\[\text{VHF}^{\mathrm{loc}}(\phi,p;\mathbb{K})\]
_satisfying the following properties._
1. _If_ \(p\) _is a nondegenerate fixed point, then_ \(\text{VHF}^{\mathrm{loc}}(\phi,p;\mathbb{K})\) _has rank_ \(1\) _graded by the Conley-Zehnder index of_ \(p\) _(modulo_ \(2\)_)._
2. _If_ \(\phi^{s}\)_,_ \(s\in[0,1]\)_, is a smooth family of Hamiltonian diffeomorphisms such that_ \(p\) _is a uniformly isolated fixed point, i.e., there exists an open neighborhood of_ \(p\) _in which_ \(\phi^{s}\) _has_ \(p\) _as its only fixed point for all_ \(s\)_, then_ \(\text{VHF}^{\mathrm{loc}}(\phi^{0},p;\mathbb{K})\cong\text{VHF}^{\mathrm{loc}}(\phi^{1},p;\mathbb{K})\)_._
3. _If_ \(\phi^{\prime}\) _is a generic_ \(C^{2}\) _small perturbation of_ \(\phi\) _supported near_ \(p\)_, then the number of fixed points of_ \(\phi^{\prime}\) _near_ \(p\) _is at least_ \(\text{rank}\,\text{VHF}^{\mathrm{loc}}(\phi,p;\mathbb{K})\)_._
4. _If_ \(\phi^{k}\) _is an admissible iteration of_ \(\phi\) _at_ \(p\)_, meaning that_ \(\lambda^{k}\neq 1\) _for all eigenvalues_ \(\lambda\neq 1\) _of_ \(D\phi_{p}\)_, which implies that_ \(p\) _is also an isolated fixed point of_ \(\phi^{k}\)_, then_ \[\text{rank}\,\text{VHF}^{\mathrm{loc}}(\phi,p;\mathbb{K})=\text{rank}\,\text{VHF}^{\mathrm{loc}}(\phi^{k},p;\mathbb{K}).\]
The homology group \(\text{VHF}^{\mathrm{loc}}(\phi,p;\mathbb{K})\) is constructed via generating Hamiltonians of \(\phi\). For a Hamiltonian \(H_{t}:S^{1}\times X\to\mathbb{R}\) with \(x:S^{1}\to X\) being an isolated \(1\)-periodic orbit, we will define the local Floer homology group
\[\text{VHF}^{\mathrm{loc}}(H,x;\mathbb{K}).\]
It turns out that the vortex version of local Floer homology is in fact isomorphic to the classical one.
**Proposition 5.2**.: _There is an isomorphism_
\[\text{VHF}^{\mathrm{loc}}(H,x;\mathbb{K})\cong HF^{\mathrm{loc}}(H,x;\mathbb{ K}).\]
As we will use bulk-deformed vortex Floer theory, the right notion of local Floer homology may _a priori_ depend on the bulk deformation, when the fixed point is contained in the bulk divisor. However, we will prove (see Proposition 5.7) that the bulk-deformed local vortex Floer homology is (non-canonically) isomorphic to the undeformed one. Hence the homological count we defined here does not see the effect of bulk deformation _a posteriori_.
Now we use the rank of local Floer homology to define the so-called homological count of the number of fixed points.
**Definition 5.3**.: Let \(\mathbb{K}\) be a field. Given a Hamiltonian diffeomorphism \(\phi\) on \(X\) with only isolated fixed points, the **homological count** (over coefficient field \(\mathbb{K}\)) of the number of fixed points of \(\phi\) is
\[N(\phi,\mathbb{K}):=\sum_{p\in\operatorname{Fix}\phi}\dim_{\mathbb{K}}\text{{ VHF}}^{\operatorname{loc}}(\phi,p;\mathbb{K})=\sum_{p\in\operatorname{Fix}\phi} \dim_{\mathbb{K}}\text{{HF}}^{\operatorname{loc}}(\phi,p;\mathbb{K}). \tag{5.1}\]
By definition, when \(\phi\) is nondegenerate, we have
\[N(\phi,\mathbb{K})=\#\operatorname{Fix}(\phi).\]
On the other hand, if \(\phi\) is a Hamiltonian diffeomorphism on \(X\) with only isolated fixed points and \(\phi^{k}\) is an iteration which is admissible at every fixed point of \(\phi\) and has only isolated fixed points, then we know that
\[N(\phi,\mathbb{K})\leq N(\phi^{k},\mathbb{K}).\]
### Local Morse and Floer homology
#### 5.1.1. Local Morse homology
We follow the treatment of local Morse and Floer homology by Ginzburg [10]. First, let \(M\) be a smooth manifold and \(f:M\to\mathbb{R}\) be a smooth function. Suppose \(x\) is an isolated (but not necessarily nondegenerate) critical point. Then for any coefficient ring \(R\), there is an invariant, the _local Morse homology_
\[HM^{\operatorname{loc}}(f,x)\]
defined by taking the homology of a Morse-type complex over \(R\) of a small generic perturbation of \(f\) in a sufficiently small neighborhood of \(x\), which only takes into account critical points and gradient trajectories (for a generic Riemannian metric) contained in that neighborhood. We recall the details of the construction as this is the prototype of the local version of the (vortex) Floer homology.
First, choose a small neighborhood \(U\) of \(x\) which contains no other critical points of \(f\). Fix a reference Riemannian metric \(g_{U}\) on \(U\) to measure the relevant norms. Let \(U^{\prime}\) be a smaller neighborhood of \(x\) whose closure is contained in \(U\). Let \(f_{1}\) be an \(\epsilon\)-small perturbation supported in \(U^{\prime}\), i.e.
\[\operatorname{supp}(f_{1}-f)\subset U^{\prime}, \|f_{1}-f\|_{C^{2}}<\epsilon.\]
A generic such perturbation \(f_{1}\) is Morse inside \(U\), and \(\operatorname{crit}(f_{1}|_{U})\) is contained in \(U^{\prime}\). Then consider the Morse complex of \(f_{1}:U\to\mathbb{R}\), which is freely generated over \(R\) by the critical points of \(f_{1}|_{U}\), graded by the Morse index. To define the differential, consider an arbitrary Riemannian metric \(g_{1}\) on \(U\). Consider, for each pair of critical points \(p_{1},q_{1}\) of \(f_{1}\), the moduli space of negative gradient trajectories of \(f_{1}\) (with respect to \(g_{1}\)) that connect \(p_{1}\) and \(q_{1}\). Then by a compactness argument, for \(\epsilon\) sufficiently small, all trajectories connecting critical points of \(f_{1}|_{U}\) must stay in \(U^{\prime}\). After a small perturbation of the Riemannian metric \(g_{1}\) to achieve transversality, one can count rigid negative gradient trajectories over \(\mathbb{Z}_{2}\); choosing an orientation of \(U\) and orientations on the unstable manifolds of the critical points of \(f_{1}|_{U}\), one can define integral counts. Hence one obtains a chain complex whose homology is defined to be the local Morse homology \(HM^{\operatorname{loc}}(f,x)\).
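As a standard illustration (our example, not used in the sequel): on \(M=\mathbb{R}\), take \(f(x)=x^{3}\) with the isolated degenerate critical point \(x=0\). The perturbation \(f_{1}(x)=x^{3}-\epsilon x\) (cut off outside \(U^{\prime}\)) is Morse near \(0\), with a local maximum at \(-\sqrt{\epsilon/3}\) (index \(1\)) and a local minimum at \(\sqrt{\epsilon/3}\) (index \(0\)); the unique connecting negative gradient segment between them stays in \(U^{\prime}\) (the other trajectory leaving the maximum exits \(U\)), so

\[\partial\big[-\sqrt{\epsilon/3}\,\big]=\pm\big[\sqrt{\epsilon/3}\,\big],\qquad HM^{\mathrm{loc}}(x^{3},0)=0,\]

consistent with the fact that this critical point can be perturbed away entirely.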
Using the continuation map one can prove that the local Morse homology is an invariant which depends only on the infinitesimal behavior of \(f\) at \(x\). Indeed, fix \(U\), \(U^{\prime}\), \(g_{U}\) as above. Let \(f_{2}\) be another \(\epsilon\)-small perturbation supported in \(U^{\prime}\). Let \(g_{2}\) be another Riemannian metric on \(U\) for which the local Morse complex of \((f_{2},g_{2})\) is defined. Choose a homotopy \((f_{\chi(s)},g_{\chi(s)})\) between \((f_{1},g_{1})\) and
\((f_{2},g_{2})\) using a fixed cut-off function \(\chi:(-\infty,+\infty)\to[1,2]\). By possibly shrinking the value of \(\epsilon\), one can show that in this case all solutions to the continuation equation
\[\dot{x}(s)+\nabla^{g_{\chi(s)}}f_{\chi(s)}(x(s))=0\]
are contained in \(U^{\prime}\). By slightly perturbing the data \((f_{\chi(s)},g_{\chi(s)})\) one can achieve transversality for the moduli spaces of this equation and hence define a continuation map. The same type of argument shows that the continuation map is uniquely determined up to chain homotopy. As a result, in the same vein as the classical arguments in Floer theory, one can prove that the local Morse homology \(HM^{\mathrm{loc}}(f,x)\) is independent of the small perturbation of the function and of the Riemannian metric. One can also see that the homology depends neither on the neighborhoods \(U\), \(U^{\prime}\) nor on the reference metric \(g_{U}\). Hence \(HM^{\mathrm{loc}}(f,x)\) is an invariant of the germ of \(f\) at \(x\).
#### 5.1.2. Local Floer homology
One can similarly define local homology groups in the Floer setting, which admit extensions to the vortex setting. Note that the explicit construction depends on the \(1\)-periodic family of Hamiltonians, but we will prove that the homology only depends on the time-\(1\) map. Let \((M,\omega)\) be a symplectically aspherical manifold and \(H\) be a \(1\)-periodic family of Hamiltonians on \(M\) with time-\(1\) map \(\varphi_{H}:M\to M\). Choose a reference Riemannian metric \(g_{M}\) which induces a distance function \(d_{M}\) on \(M\).
**Lemma 5.4**.: _If \(q\in\mathrm{Fix}(\varphi_{H})\) is an isolated fixed point corresponding to a 1-periodic orbit \(x:S^{1}\to M\), then there exists \(r>0\) such that, for every smooth loop \(y:S^{1}\to M\) with_
\[\sup_{t\in S^{1}}d_{M}(y(t),x(t))<r, \tag{5.2}\]
_if \(y\) is a 1-periodic orbit of \(H\), then \(y\equiv x\)._
Proof.: This follows from the definition of isolated fixed point.
We say that a loop \(y:S^{1}\to M\) is \(r\)-close to \(x\) if \(y(t)\) satisfies (5.2). We may choose \(r\) smaller than the injectivity radius of \(g_{M}\).
Now we review the definition of the local Floer homology. Fix \(r\) as in Lemma 5.4. Let \(R\) be a coefficient ring such as \(\mathbb{Z}_{2}\) or \(\mathbb{Z}\). The local Floer homology \(HF^{\mathrm{loc}}(H,x)\) is a \(\mathbb{Z}_{2}\)-graded \(R\)-module. To define it, consider a perturbation \(H_{1,t}:S^{1}\times M\to\mathbb{R}\) satisfying
1. For all \(t\in S^{1}\), \(\mathrm{supp}(H_{1,t}-H_{t})\subset B_{r/2}(x(t))\).
2. \(\|H_{1}-H\|_{C^{2}(S^{1}\times M)}\leq\delta\).
We call such perturbations \(\delta\)-small perturbations.
**Lemma 5.5**.: _For any \(\rho>0\), there exists \(\delta>0\) such that for all \(\delta\)-small perturbation \(H_{1}\), if \(y\) is a 1-periodic orbit of \(H_{1}\) which is \(r\)-close to \(x\), then \(y\) is \(\rho\)-close to \(x\)._
Proof.: Suppose this is not true. Then there exist \(\rho>0\), a sequence \(\delta_{i}\to 0\), a sequence of \(\delta_{i}\)-small perturbations \(H_{i}\), and a sequence of 1-periodic orbits \(y_{i}\) of \(H_{i}\) with \(d_{M}(y_{i}(t_{i}),x(t_{i}))\geq\rho\) for some \(t_{i}\in S^{1}\). Then \(H_{i}\to H\) in \(C^{2}\). By choosing a subsequence, one may assume that \(y_{i}\) converges to a 1-periodic orbit \(y_{\infty}\) of \(H\). As \(y_{i}(0)\) is \(r\)-close to \(x(0)\), one sees that \(y_{\infty}(0)\) is \(r\)-close to \(x(0)\), hence is a fixed point of \(\varphi_{H}\) which is \(r\)-close to \(x(0)\). As \(x(0)\) is an isolated fixed point, \(y_{\infty}(0)=x(0)\) and hence \(y_{\infty}(t)\equiv x(t)\). This contradicts the assumption.
The following construction is analogous to the Morse case. For each loop \(\overline{x}:S^{1}\to M\) which is \(\frac{r}{2}\)-close to \(x\), one can define the action functional
\[\mathcal{A}^{\mathrm{loc}}_{H_{1}}(\overline{x})=-\int_{[0,1]\times S^{1}}u^{*} \omega+\int_{S^{1}}H_{1}(\overline{x}(t))dt.\]
Here \(u:[0,1]\times S^{1}\to M\) is a "small" cobordism connecting \(\overline{x}\) and \(x\) whose homotopy class is canonical as \(\overline{x}\) is sufficiently close to \(x\). Then for each pair of critical points \(x_{1},y_{1}\) of \(\mathcal{A}^{\mathrm{loc}}_{H_{1}}\), the difference
\[\mathcal{A}^{\mathrm{loc}}_{H_{1}}(x_{1})-\mathcal{A}^{\mathrm{loc}}_{H_{1}}( y_{1})\]
is sufficiently small; this is needed to run the compactness argument.
Consider the Floer complex generated over \(R\) by critical points of \(\mathcal{A}^{\mathrm{loc}}_{H_{1}}\), graded by the Conley-Zehnder index modulo \(2\). Notice that for any two generators \(x_{1},y_{1}\), there is a canonical homotopy class of (short) cylinders connecting them. To define the differential, choose a \(1\)-periodic family of \(\omega\)-compatible almost complex structures \(J_{1}\) and consider Floer trajectories (using \(J_{1}\)) connecting generators in the canonical homotopy class. Then the energy identity for Floer trajectories guarantees that when \(\rho\) and \(\delta\) are sufficiently small, the total energy of Floer trajectories can be arbitrarily small. More precisely, we consider Floer trajectories \(u:\mathbb{R}\times S^{1}\to M\) for \((H_{1},J_{1})\) satisfying
\[\sup_{s\in\mathbb{R}}\sup_{t\in S^{1}}d_{M}(u(s,t),x(t))<r.\]
The smallness of the total energy guarantees that the above supremum can be made arbitrarily small. One can hence ensure compactness (up to breaking) of such Floer trajectories, which stay sufficiently close to \(x(t)\), and define a chain complex. Coherent orientations can also be chosen if one wishes to define the complex over \(\mathbb{Z}\).
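For reference, the energy identity invoked here is the standard one (stated up to the sign conventions in force): for a Floer trajectory \(u\) for \((H_{1},J_{1})\) connecting \(x_{1}\) to \(y_{1}\) in the canonical homotopy class,

\[E(u)=\int_{\mathbb{R}\times S^{1}}|\partial_{s}u|^{2}\,ds\,dt=\mathcal{A}^{\mathrm{loc}}_{H_{1}}(x_{1})-\mathcal{A}^{\mathrm{loc}}_{H_{1}}(y_{1}),\]

so the smallness of the action gaps between the generators forces the total energy, and hence the diameter of the trajectories, to be small.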
One can prove using the continuation map that the local Floer homology is independent of the pair \((H_{1},J_{1})\). We omit the details. We denote the local Floer homology defined in this way by \(HF^{\mathrm{loc}}(H,x;R)\). Moreover, one can prove that the local Floer homology only depends on the fixed point \(q\) and the time-\(1\) map \(\varphi_{H}\in\mathrm{Ham}(M)\). Hence we denote the local Floer homology by
\[HF^{\mathrm{loc}}(\phi,q;R).\]
Among various properties of the local Floer homology we only recall the following one.
**Proposition 5.6**.: _[_10_, Theorem 1.1]_ _Let \(\mathbb{K}\) be a field. If \(q\) is an isolated fixed point of \(\phi\), and \(\phi^{k}\) is an iteration admissible at \(q\), then_
\[\mathrm{rank}_{\mathbb{K}}\mathit{HF}^{\mathrm{loc}}(\phi^{k},q;\mathbb{K})= \mathrm{rank}_{\mathbb{K}}\mathit{HF}^{\mathrm{loc}}(\phi,q;\mathbb{K}).\]
### Local vortex Floer homology with bulk
We adapt the definition of local Floer homology to the vortex setting, possibly with bulk deformations, and establish the analogues, listed in Theorem 5.1, of the statements in Proposition 5.6.
Let \(\mathfrak{b}\) be a small bulk deformation. Let \(\phi:X\to X\) be a Hamiltonian diffeomorphism and \(q\in X\) be an isolated fixed point. We would like to define a local invariant
\[\mathit{VHF}^{\mathfrak{b}}_{\mathrm{loc}}(\phi,q;\mathbb{K}).\]
Indeed, let \(H\) be a \(1\)-periodic family of Hamiltonians on \(X\) generating the Hamiltonian isotopy \(\phi_{t}\) with \(\phi_{1}=\phi\). Let \(x(t)=\phi_{t}(q)\) be the corresponding \(1\)-periodic orbit of \(H\). Notice that even if \(x(t)\) is nondegenerate, it may intersect the bulk divisor \(D\subset X\). Choose a small perturbation \(H_{1}\) of \(H\) supported near \(x(t)\) such that all nearby \(1\)-periodic orbits are nondegenerate and are disjoint from the bulk divisor \(D\).
Let \(\widehat{H}\) be a \(K\)-invariant lift of \(H\) and \(\widehat{H}_{1}\) be a \(K\)-invariant admissible lift of \(H_{1}\). Then the \(1\)-periodic orbit \(x(t)\) lifts to a gauge equivalence class of equivariant \(1\)-periodic orbits. Let \(\mathfrak{x}(t)=(x(t),\eta(t))\) be a representative. There are also gauge equivalence classes of equivariant \(1\)-periodic orbits of \(H_{1}\) which are near \(\mathfrak{x}\). Indeed, fixing the \(L_{0}K\)-orbit of \(\mathfrak{x}(t)\), there are well-defined \(L_{0}K\)-orbits of equivariant \(1\)-periodic orbits nearby. Then for each pair of nearby equivariant \(1\)-periodic orbits \(\mathfrak{x}_{1}\), \(\mathfrak{y}_{1}\) of \(\widehat{H}_{1}\), there is a canonical homotopy class of (small) cylinders connecting \(\mathfrak{x}_{1}\) and \(\mathfrak{y}_{1}\). Consider the moduli space of solutions to the vortex equation over the cylinder connecting \(\mathfrak{x}_{1}\) and \(\mathfrak{y}_{1}\). The energy of these solutions is
\[\mathcal{A}_{H_{1}}(\mathfrak{x}_{1})-\mathcal{A}_{H_{1}}(\mathfrak{y}_{1})\]
which is arbitrarily small. Then, as in the case of ordinary local Floer homology, these moduli spaces can be used to define a chain complex over any coefficient field \(\mathbb{K}\). As the orbits are disjoint from \(D\), one can also use topological intersection numbers with the bulk divisor and the associated weighted counts to define the bulk-deformed version. Denote the resulting homology by
\[\mathit{VHF}^{\mathfrak{b}}_{\mathrm{loc}}(H,x;\mathbb{K}).\]
The continuation argument shows that the homology is independent of the data \((\widehat{H}_{1},\widehat{J}_{1})\). On the other hand, _a priori_ the homology depends on the bulk \(\mathfrak{b}\). When \(\mathfrak{b}=0\), denote this homology by \(\mathit{VHF}_{\mathrm{loc}}(H,x;\mathbb{K})\).
**Proposition 5.7**.: _One has_
\[\mathit{VHF}^{\mathfrak{b}}_{\mathrm{loc}}(H,x;\mathbb{K})\cong\mathit{VHF}^ {\mathrm{loc}}(H,x;\mathbb{K}). \tag{5.3}\]
Proof.: First, suppose \(x\) does not intersect the bulk divisor \(D\subset X\). Then for a small perturbation of \(H\), all cylinders contributing to the definition of the local Floer homology have zero topological intersection number with the divisor upstairs. Hence (5.3) holds in this case.
Now suppose \(x\) intersects the bulk divisor \(D\). One can find a loop of Hamiltonian diffeomorphisms \(\psi(t)\) supported near \(x(t)\) such that \(y(t):=\psi(t)(x(t))\) is disjoint from \(D\). Moreover, writing

\[y(t)=(\psi(t)\phi_{t}\psi(0)^{-1})(\psi(0)(q))=(\psi(t)\phi_{t}\psi(0)^{-1})(y(0)),\]

we see that \(y(t)\) is a \(1\)-periodic orbit of the Hamiltonian isotopy \(\psi(t)\phi_{t}\psi(0)^{-1}\). Let the generating Hamiltonian function of this new family be \(G\), which can be made sufficiently close to \(H\). Then \(y(t)\) is also an isolated \(1\)-periodic orbit of \(G\), and a generic perturbation of \(G\) also serves as a perturbation of \(H\). Hence
\[\mathit{VHF}^{\mathfrak{b}}_{\mathrm{loc}}(H,x;\mathbb{K})\cong\mathit{VHF}^{ \mathfrak{b}}_{\mathrm{loc}}(G,y;\mathbb{K}).\]
However, as \(y\) is disjoint from \(D\), the right hand side is isomorphic to \(\mathit{VHF}^{\mathrm{loc}}(G,y;\mathbb{K})\), which is also isomorphic to \(\mathit{VHF}^{\mathrm{loc}}(H,x;\mathbb{K})\).
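For the record, the \(1\)-periodicity of \(y\) used above is immediate from \(\psi(1)=\psi(0)\) (as \(\psi\) is a loop) and \(\phi_{1}(q)=q\):

\[y(1)=\psi(1)\big(\phi_{1}(q)\big)=\psi(0)(q)=y(0).\]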
Now we prove that the local vortex Floer homology is isomorphic to the local Floer homology inside the symplectic quotient.
Proof of Proposition 5.2.: It follows from the adiabatic limit argument in the same spirit as in [11, 12] and [13]. Let \(H_{1}\) be a nondegenerate Hamiltonian on \(X\) which is arbitrarily close to \(H\). Let \(\widehat{H}_{1}\) be an admissible lift and let \(\widehat{J}_{1}\) be a generic time-dependent almost complex structure. Consider the local vortex Floer homology defined by critical points of \(\mathcal{A}_{H_{1}}\) close to the fixed point \(x\in\mathrm{Fix}(\varphi_{H})\), whose differential counts rigid solutions to equation (4.8) (with \((\widehat{H},\widehat{J})\) replaced by \((\widehat{H}_{1},\widehat{J}_{1})\)). Using continuation maps we can show that the resulting homology is independent of \(\lambda\). Moreover, the energy of the relevant solutions can be made arbitrarily small. Then consider the \(\lambda\to\infty\) limit. For any sequence \(\lambda_{i}\to\infty\) and any sequence of solutions to (4.8) for
\(\lambda=\lambda_{i}\) which contribute to the local vortex Floer differential, there is a uniform upper bound on the energy of these solutions. Then by the adiabatic limit compactness theorem (see [10][11] in similar settings) a subsequence converges, modulo bubbling, to a possibly broken ordinary Floer trajectory inside \(X\). As there is a lower bound on the energy of bubbles, we can choose the perturbation \(H_{1}\) sufficiently close to \(H\) so that bubbles are ruled out. Moreover, we may assume that the pair \((H_{1},J_{1})\) on the symplectic quotient \(X\) induced from the pair \((\widehat{H}_{1},\widehat{J}_{1})\) makes the local Floer complex well-defined (i.e. the moduli spaces are transverse). If we consider the zero-dimensional moduli spaces, the possible limits must be unbroken trajectories in \(X\). Now we claim that for \(\lambda\) sufficiently large, there is an orientation-preserving bijection between index zero solutions to (4.8) (modulo gauge transformation) and index zero solutions to the ordinary Floer equation in \(X\). Indeed, using the same kind of estimates as in [10][10] (and the much simpler case in [10]) one can construct a gluing map from the limiting moduli space to the vortex moduli space with sufficiently large parameter \(\lambda\). The compactness result explained above shows that the gluing map is surjective, while the implicit function theorem shows that it is injective. The fact that the gluing map preserves orientation follows from an explicit comparison of the linearized Fredholm operators (they differ by, roughly speaking, an invertible operator).
In view of Proposition 5.2 and the properties of local Floer homology as proved in, e.g., [12], the assertions in Theorem 5.1 are straightforward. The following is also immediate.
**Corollary 5.8**.: _The local vortex Floer homology has the following properties._
1. _(Up to isomorphism)_ \(\text{VHF}^{\text{\rm loc}}(H,x;\mathbb{K})\) _only depends on the fixed point_ \(q\) _and the time-1 map_ \(\phi\in\operatorname{Ham}(X)\)_. Hence we denote the (bulk-deformed) local vortex Floer homology by_ \[\text{VHF}^{\mathfrak{b}}_{\text{\rm loc}}(\phi,q;\mathbb{K}).\]
2. _If_ \(\phi^{k}\) _is an admissible iteration of_ \(\phi\) _at_ \(q\)_, then_ \[\text{VHF}^{\mathfrak{b}}_{\text{\rm loc}}(\phi,q;\mathbb{K})\cong\text{VHF} ^{\mathfrak{b}}_{\text{\rm loc}}(\phi^{k},q;\mathbb{K}).\]
### Barcodes of degenerate Hamiltonians
Recall that one can associate to each nondegenerate Hamiltonian on a closed symplectic manifold a (finite) barcode. As this association is Lipschitz continuous with respect to the bottleneck distance on barcodes and the Hofer metric on Hamiltonians, one would hope to define barcodes for all Hamiltonians using this Lipschitz continuity. However, the bottleneck distance is not complete; therefore, _a priori_, the barcode of a general Hamiltonian only exists in the completion.
**Theorem 5.9**.: _Let \(\mathbb{K}\) be a field. Let \(\phi\in\operatorname{Ham}(X)\) be a Hamiltonian diffeomorphism with isolated fixed points. Let \(\mathcal{B}(\phi)\) be the (a priori infinite) reduced barcode of \(\phi\) (with coefficient field \(\Lambda^{\Gamma}_{\mathbb{K}}\)). Then \(\mathcal{B}(\phi)\) has finitely many bars, and the total number of their endpoints equals \(N(\phi,\mathbb{K})\)._
**Corollary 5.10**.: _The total bar length is defined for all \(\phi\in\operatorname{Ham}(X)\) with isolated fixed points._
Now we prove Theorem 5.9. Suppose \(\phi\in\operatorname{Ham}(X)\) has only isolated fixed points. Let \(H\) be a Hamiltonian whose time-one map is \(\phi\). Let \(\widehat{H}\) be any \(K\)-invariant lift of \(H\) and let \(\widehat{J}\) be a \(K\)-invariant \(\omega_{V}\)-compatible almost complex structure. Notice that in general \((\widehat{H},\widehat{J})\) is not an admissible pair, so it does not have a vortex Floer complex. However, one can still consider the vortex equation with the data \((\widehat{H},\widehat{J})\).
**Lemma 5.11**.: _There exists \(\delta>0\) which only depends on \((\widehat{H},\widehat{J})\) satisfying the following condition. Let \(x(t)\neq y(t)\) be two different 1-periodic orbits of \(H\) downstairs. Let \(\mathfrak{u}\) be a possibly broken
solution to (4.6) which connects \(x(t)\) and \(y(t)\) (without conditions on capping). Then the energy of \(\mathfrak{u}\) is at least \(\delta\)._
Proof.: For admissible \((\widehat{H},\widehat{J})\) this statement is proved in [16, Proposition 5.5] using a compactness argument. Notice that to run the compactness argument and to have the notion of converging to a \(1\)-periodic orbit, one does not really need the Hamiltonian to be nondegenerate or the pair \((\widehat{H},\widehat{J})\) to be admissible.
**Corollary 5.12**.: _The lengths of all bars in \(\mathcal{B}(\phi)\) are no less than \(\delta\)._
Proof.: Suppose on the contrary \(\mathcal{B}(\phi)\) has a finite bar whose length is positive and smaller than \(\delta\). Let \((\widehat{H}_{k},\widehat{J}_{k})\) be a sequence of regular bulk-avoiding pairs such that \((\widehat{H}_{k},\widehat{J}_{k})\) converges to \((\widehat{H},\widehat{J})\). Consider the reduced barcode associated to \(\phi_{H_{k}}\). By the continuous dependence of barcodes on the Hamiltonian, for \(k\) sufficiently large, there exists a finite bar in \(\mathcal{B}(\phi_{H_{k}})\) whose length is between \(\frac{\delta}{2}\) and \(\delta-\epsilon\) for some small \(\epsilon\). By the definition of barcodes by Usher-Zhang, there exists a rigid solution \(\mathfrak{u}_{k}\) to (4.6) with data \((\widehat{H}_{k},\widehat{J}_{k})\) whose energy is between \(\frac{\delta}{2}\) and \(\delta-\epsilon\). Via the compactness argument, there is a subsequence, still indexed by \(k\), such that \(\mathfrak{u}_{k}\) converges to a possibly broken trajectory with data \((\widehat{H},\widehat{J})\) whose total energy is between \(\frac{\delta}{2}\) and \(\delta-\epsilon\). This contradicts Lemma 5.11.
Proof of Theorem 5.9.: Choose a sequence of regular bulk-avoiding admissible pairs \((\widehat{H}_{k},\widehat{J}_{k})\) converging to \((\widehat{H},\widehat{J})\). Consider the complex \(\text{VCF}^{\mathfrak{b}}_{\bullet}(\widehat{H}_{k},\widehat{J}_{k};\Lambda^{\Gamma}_{\mathbb{K}})\). One can write
\[\partial=\partial_{\text{short}}+\partial_{\text{long}}\]
where \(\partial_{\text{short}}\) counts rigid trajectories whose energy is smaller than \(\delta\) and \(\partial_{\text{long}}\) counts rigid trajectories whose energy is bigger than \(\delta\). Then \(\partial_{\text{short}}^{2}=0\) and its homology coincides with the direct sum of all the local vortex Floer homologies of \(\phi\). Moreover, one can decompose the reduced barcode of \(\phi_{H_{k}}\) as
\[\mathcal{B}(\phi_{H_{k}})=\mathcal{B}_{\text{short}}(\phi_{H_{k}})\sqcup \mathcal{B}_{\text{long}}(\phi_{H_{k}})\sqcup\mathcal{B}_{\infty}(\phi_{H_{k}})\]
where the first component consists of finite bars of length at most \(\delta\), the second component consists of the remaining finite bars, and \(\mathcal{B}_{\infty}(\phi_{H_{k}})\) consists of the infinite bars. As \(\partial_{\text{short}}^{2}=0\), one can also define a barcode \(\mathcal{B}_{\text{local}}(\phi_{H_{k}})\) by modifying the definition of Usher-Zhang, whose finite part coincides with \(\mathcal{B}_{\text{short}}(\phi_{H_{k}})\). Then by definition,
\[N(\phi_{H_{k}}) =\#\text{End}(\mathcal{B}_{\text{short}}(\phi_{H_{k}}))+\sum_{x \in\text{Fix}(\phi)}\dim\text{VHF}^{\text{loc}}(\phi,x)\] \[=\#\text{End}(\mathcal{B}_{\text{short}}(\phi_{H_{k}}))+\#\text{ End}(\mathcal{B}_{\text{long}}(\phi_{H_{k}}))+\dim\text{VHF}_{\bullet}(V).\]
Since, in the limit, all short bars disappear while the long bars survive with respect to the bottleneck distance, the theorem follows.
## 6. Boundary depth
In this section we prove Theorem C: under the semisimplicity assumption, the boundary depth of the vortex Floer complex of any Hamiltonian diffeomorphism is uniformly bounded from above.
### Vortex Floer persistence modules
Recall from Section 3.4.3 that any Floer-Novikov complex over a Novikov field \(\Lambda^{\Gamma}_{\mathbb{K}}\) induces a persistence module over \(\mathbb{K}\). Given a regular bulk-avoiding admissible pair \((\widehat{H},\widehat{J})\) and a bulk deformation
\[\mathfrak{b}=\sum_{j=1}^{N}\log c_{j}V_{j}\text{ where }c_{j}\in\mathbb{Z}[ \mathfrak{i}],\]
the persistence module induced from the complex \(\text{\it VCF}^{\mathfrak{b}}_{\bullet}(\widehat{H},\widehat{J};\Lambda^{ \Gamma}_{\mathbb{F}_{p}})\) is denoted by
\[\boldsymbol{V}_{(p)}(\widehat{H},\widehat{J}).\]
Recall that each filtered Floer-Novikov complex has a finite boundary depth which coincides with the boundary depth of the associated persistence module. We denote the boundary depth of \(\boldsymbol{V}_{(p)}(\widehat{H},\widehat{J})\) by
\[\beta_{(p)}(\widehat{H},\widehat{J})\in[0,+\infty).\]
It is also equal to the length of the longest finite bar in the associated barcode (cf. Proposition 3.29).
**Proposition 6.1**.: _Given any two regular bulk-avoiding admissible pairs \((\widehat{H}_{1},\widehat{J}_{1})\) and \((\widehat{H}_{2},\widehat{J}_{2}),\) for any prime \(p\), one has_
\[|\beta_{(p)}(\widehat{H}_{1},\widehat{J}_{1})-\beta_{(p)}(\widehat{H}_{2}, \widehat{J}_{2})|\leq 2d_{\rm Hofer}(H_{1},H_{2}). \tag{6.1}\]
_In particular, the boundary depth only depends on the descent Hamiltonian downstairs._
Proof.: This is a consequence of the stability of the persistence module and the boundary depth. Indeed, Proposition 4.16 implies that the quasi-equivalence distance between \(\text{\it VCF}^{\mathfrak{b}}_{\bullet}(\widehat{H}_{1},\widehat{J}_{1}; \Lambda_{\mathbb{F}_{p}})\) and \(\text{\it VCF}^{\mathfrak{b}}_{\bullet}(\widehat{H}_{2},\widehat{J}_{2}; \Lambda_{\mathbb{F}_{p}})\) is at most equal to the Hofer distance \(d_{\rm Hofer}(H_{1},H_{2})\). Using Theorem 3.30, it implies that the interleaving distance between the two associated persistence modules is no greater than the same bound. By Proposition 3.24, one can conclude (6.1).
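In summary (our condensation of the proof above), the chain of estimates is

\[\big|\beta_{(p)}(\widehat{H}_{1},\widehat{J}_{1})-\beta_{(p)}(\widehat{H}_{2},\widehat{J}_{2})\big|\leq 2\,d_{\mathrm{int}}\big(\boldsymbol{V}_{(p)}(\widehat{H}_{1},\widehat{J}_{1}),\boldsymbol{V}_{(p)}(\widehat{H}_{2},\widehat{J}_{2})\big)\leq 2\,d_{Q}\big(\text{VCF}^{\mathfrak{b}}_{\bullet}(\widehat{H}_{1},\widehat{J}_{1}),\text{VCF}^{\mathfrak{b}}_{\bullet}(\widehat{H}_{2},\widehat{J}_{2})\big)\leq 2\,d_{\mathrm{Hofer}}(H_{1},H_{2}),\]

where the first inequality is Proposition 3.24, the second is Theorem 3.30, and the third is Proposition 4.16.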
Using typical arguments, one can also show that the boundary depth only depends on the induced (nondegenerate) Hamiltonian isotopy \(\tilde{\phi}_{H}\) on the toric manifold \(X\). Then Proposition 3.24 implies that \(\beta_{(p)}\) descends to a Hofer continuous function
\[\beta_{(p)}:\widetilde{\operatorname{Ham}}(X)\to[0,+\infty).\]
Below is the main theorem of this section.
**Theorem 6.2**.: _Suppose there exist \(p_{0}>0\) and \(C_{0}>0\) such that for all primes \(p\geq p_{0}\), the algebra \(\text{\it VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\mathbb{F}_{p}})\) is a semisimple \(\Lambda_{\mathbb{F}_{p}}\)-algebra with idempotent generators \(e_{1,(p)},\dots,e_{m,(p)}\) satisfying_
\[\ell_{p}(e_{l,(p)})\leq C_{0},\qquad l=1,\dots,m.\]
_Then there exists \(C>0\) such that for all primes \(p\geq p_{0}\) and all \(\tilde{\phi}\in\widetilde{\operatorname{Ham}}(X)\), one has_
\[\beta_{(p)}(\tilde{\phi})\leq C.\]
### Action by quantum multiplication
Recall how we defined the pair-of-pants product on the vortex Floer homology (see [20]). On the pair-of-pants \(\Sigma^{\mathrm{pop}}\), equip the two inputs with bulk-avoiding admissible pairs \((\widehat{H}_{1},\widehat{J}_{1})\) and \((\widehat{H}_{2},\widehat{J}_{2})\), and equip the output with another bulk-avoiding admissible pair \((\widehat{H}_{3},\widehat{J}_{3})\). Extend these data to a domain-dependent Hamiltonian perturbation and a domain-dependent almost complex structure on \(\Sigma^{\mathrm{pop}}\). By counting solutions to the Hamiltonian-perturbed vortex equation on \(\Sigma^{\mathrm{pop}}\) (with appropriate weights coming from the bulk deformation \(\mathfrak{b}\)), one can define a chain map
\[\mathit{VCF}^{\mathfrak{b}}_{\bullet}(\widehat{H}_{1},\widehat{J}_{1};\Lambda_ {\mathbb{K}})\otimes\mathit{VCF}^{\mathfrak{b}}_{\bullet}(\widehat{H}_{2}, \widehat{J}_{2};\Lambda_{\mathbb{K}})\to\mathit{VCF}^{\mathfrak{b}}_{\bullet}( \widehat{H}_{3},\widehat{J}_{3};\Lambda_{\mathbb{K}}).\]
We fix a class \(\alpha\in\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\mathbb{K}})\). For each \(\delta>0\), let \(\widehat{H}_{\delta}\) be a bulk-avoiding admissible Hamiltonian on \(V\) with \(\|\widehat{H}_{\delta}\|_{C^{2}}\leq\delta\). We temporarily omit the dependence on the almost complex structure and the coefficient field from the notations. For notational simplicity, we also omit the bulk \(\mathfrak{b}\) in the formulas at the moment. Consider the chain-level map
\[\mathit{VCF}_{\bullet}(\widehat{H}_{\delta})\otimes\mathit{VCF}_{\bullet}( \widehat{H})\to\mathit{VCF}_{\bullet}(\widehat{H}).\]
By using the energy inequality, one can show that there exists a constant \(C>0\), such that for all \(s,\tau\in\mathbb{R}\), the above multiplication induces a bilinear map
\[\mathit{VHF}^{\leq\tau}_{\bullet}(\widehat{H}_{\delta})\otimes\mathit{VHF}^{ \leq s}_{\bullet}(\widehat{H})\to\mathit{VHF}^{\leq s+\tau+C\delta}_{\bullet} (\widehat{H}). \tag{6.2}\]
Denote
\[\mathcal{A}^{\mathfrak{b}}(\alpha):=c^{\mathfrak{b}}(\alpha,0)=\lim_{\delta \to 0}c^{\mathfrak{b}}(\alpha,H_{\delta}).\]
Then for all \(\epsilon>0\), one can choose \(\delta\) sufficiently small so that, by setting \(\tau=\mathcal{A}^{\mathfrak{b}}(\alpha)+\delta\) and inserting a representative of \(\alpha\) in \(\mathit{VHF}^{\leq\tau}_{\bullet}(\widehat{H}_{\delta})\) into (6.2), one obtains a well-defined map
\[m_{\epsilon}(\alpha):\mathit{VHF}^{\leq s}_{\bullet}(\widehat{H})\to\mathit{ VHF}^{\leq s+\mathcal{A}^{\mathfrak{b}}(\alpha)+\epsilon}_{\bullet}(\widehat{H}).\]
Using the standard argument one can show that this map only depends on the class \(\alpha\). Applying arbitrary positive shifts, the above operation defines a family of maps which are recorded in the following statement.
**Proposition 6.3**.: _For all \(\epsilon>0\), the maps \(m_{\epsilon}(\alpha)\) define a morphism of persistence modules_
\[m_{\epsilon}(\alpha):\boldsymbol{V}(\tilde{\phi})\to\boldsymbol{V}(\tilde{ \phi})[\mathcal{A}(\alpha)+\epsilon]\ \forall\epsilon>0\]
_satisfying for all \(\epsilon<\epsilon^{\prime}\), one has_
\[m_{\epsilon^{\prime}}(\alpha)=\mathrm{shift}_{\epsilon^{\prime}-\epsilon} \circ m_{\epsilon}(\alpha).\]
**Definition 6.4**.: Given \(\alpha\in\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\mathbb{K}}) \setminus\{0\}\) and \(\epsilon>0\), the persistence module \(\boldsymbol{W}_{\alpha}(\tilde{\phi})_{\epsilon}\) is defined by
\[W_{\alpha}(\tilde{\phi})^{s}_{\epsilon}=\mathrm{Im}\left(m_{\epsilon}(\alpha):\mathit{VHF}^{\leq s-\mathcal{A}^{\mathfrak{b}}(\alpha)}_{\bullet}(\widehat{H})\to\mathit{VHF}^{\leq s+\epsilon}_{\bullet}(\widehat{H})\right)\subset V(\tilde{\phi})^{s+\epsilon}.\]
_Remark 6.5_.: Our notion of persistence modules (Definition 3.20) is very different from the traditionally used ones (see for example [21], where similar operators were first defined for Floer persistence modules in the monotone case); notably, we allow each piece \(V^{s}\) of a persistence module \(\boldsymbol{V}\) to be infinite-dimensional. Hence it is not straightforward, though not necessarily difficult, to prove that as \(\epsilon\to 0\) the above persistence modules "converge" to a limiting object similar to the one used in [19]. Instead, we simply carry the \(\epsilon\) everywhere, as we do here.
### Proof of Theorem 6.2
We prove Theorem 6.2 following the strategy of [11]. This theorem is the consequence of the following three lemmas (Lemma 6.6, Lemma 6.7, and Lemma 6.8).
We first introduce and simplify the notation. As we work with an individual prime, we drop the dependence on the prime \(p\) in most notations. Let \(e_{1},\ldots,e_{m}\) be the idempotent generators of \(\mathit{VHF}_{\bullet}^{\mathfrak{b}}(V;\Lambda_{\overline{\mathbb{F}}_{p}})\). For each nondegenerate \(\tilde{\phi}\in\widetilde{\operatorname{Ham}}(X)\), consider the direct sum persistence module
\[\mathbf{W}(\tilde{\phi})_{\epsilon}=\bigoplus_{l=1}^{m}\mathbf{W}_{e_{l}}(\tilde{\phi} )_{\epsilon}.\]
**Lemma 6.6**.: _The interleaving distance between \(\mathbf{V}(\tilde{\phi})\) and \(\mathbf{W}(\tilde{\phi})_{\epsilon}\) is at most \(C_{0}+\epsilon\)._
For all \(\tilde{\phi}\in\widetilde{\operatorname{Ham}}(X)\), define
\[\gamma(\tilde{\phi}):=\max_{1\leq l\leq m}\gamma_{e_{l}}(\tilde{\phi}):=\max_ {1\leq l\leq m}\left(c^{\mathfrak{b}}(e_{l},\tilde{\phi})+c^{\mathfrak{b}}(e_ {l},\tilde{\phi}^{-1})\right).\]
Temporarily let \(\mathrm{pr}:\widetilde{\operatorname{Ham}}(X)\to\operatorname{Ham}(X)\) be the canonical projection. Define for \(\phi\in\operatorname{Ham}(X)\)
\[\gamma(\phi):=\inf_{\mathrm{pr}(\tilde{\phi})=\phi}\gamma(\tilde{\phi}).\]
The following is an analogue of [11, Proposition 5.4] and [11, Proposition 13].
**Lemma 6.7**.: _The boundary depth of the persistence module \(\mathbf{W}_{e_{l}}(\tilde{\phi})_{\epsilon}\) is finite. Moreover, given nondegenerate \(\tilde{\phi},\tilde{\psi}\in\widetilde{\operatorname{Ham}}(X)\), for each \(l=1,\ldots,m\), one has_
\[\left|\beta(\mathbf{W}_{e_{l}}(\tilde{\phi})_{\epsilon})-\beta(\mathbf{W}_{e_{l}}( \tilde{\psi})_{\epsilon})\right|\leq\gamma_{e_{l}}(\tilde{\phi}\tilde{\psi}^{ -1}). \tag{6.3}\]
The following is analogous to [11, Proposition 5.4] and [11, Proposition 13].
**Lemma 6.8**.: _For all \(\tilde{\phi}\in\widetilde{\operatorname{Ham}}(X)\), one has_
\[c^{\mathfrak{b}}(e_{l},\tilde{\phi})+c^{\mathfrak{b}}(e_{l},\tilde{\phi}^{-1} )\leq 4C_{0}.\]
Proof of Theorem 6.2.: As the boundary depth depends continuously on the Hamiltonian isotopy \(\tilde{\phi}\), one only needs to prove the theorem for nondegenerate ones. First, by Lemma 6.6, the interleaving distance between \(\mathbf{V}(\tilde{\phi})\) and \(\mathbf{W}(\tilde{\phi})_{\epsilon}\) is bounded by \(C_{0}+\epsilon\). Hence by Proposition 3.24, it suffices to bound the boundary depth of \(\mathbf{W}(\tilde{\phi})_{\epsilon}\). As \(\mathbf{W}(\tilde{\phi})_{\epsilon}\) is the direct sum of the \(\mathbf{W}_{e_{l}}(\tilde{\phi})_{\epsilon}\), it suffices to bound the boundary depth of \(\mathbf{W}_{e_{l}}(\tilde{\phi})_{\epsilon}\) for all idempotent generators \(e_{l}\). Then applying Lemma 6.7 and Lemma 6.8, one obtains
\[\beta(\mathbf{W}_{e_{l}}(\tilde{\phi})_{\epsilon})\leq\gamma_{e_{l}}(\tilde{\phi} \tilde{\psi}^{-1})+\beta(\mathbf{W}_{e_{l}}(\tilde{\psi})_{\epsilon})\leq 4C_{0}+ \beta(\mathbf{W}_{e_{l}}(\tilde{\psi})_{\epsilon})\]
where \(\tilde{\psi}\in\widetilde{\operatorname{Ham}}(X)\) is an arbitrary fixed nondegenerate Hamiltonian isotopy on \(X\). The right hand side is finite and independent of \(\tilde{\phi}\).
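Unwinding the constants (a rough bookkeeping on our part, under the assumption, standard for barcodes, that the boundary depth of a direct sum is the maximum of those of its summands), the proof yields the explicit bound

\[\beta_{(p)}(\tilde{\phi})\leq 2(C_{0}+\epsilon)+\max_{1\leq l\leq m}\beta(\boldsymbol{W}_{e_{l}}(\tilde{\phi})_{\epsilon})\leq 6C_{0}+2\epsilon+\max_{1\leq l\leq m}\beta(\boldsymbol{W}_{e_{l}}(\tilde{\psi})_{\epsilon}),\]

so one may take \(C\) to be the right-hand side for any fixed choice of \(\tilde{\psi}\) and \(\epsilon\).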
### Proofs of the technical lemmas
In this subsection we drop all dependence on the bulk deformation from notations.
Proof of Lemma 6.6.: We construct maps between persistence modules \(\mathbf{f}_{\epsilon}:\mathbf{V}(\tilde{\phi})\to\mathbf{W}(\tilde{\phi})_{\epsilon}[C_{0}]\) and \(\mathbf{g}_{\epsilon}:\mathbf{W}(\tilde{\phi})_{\epsilon}\to\mathbf{V}(\tilde{\phi})[C_{0}]\) as follows. For \(s\in\mathbb{R}\), define
\[f_{\epsilon}^{s}:V(\tilde{\phi})^{s}\to\bigoplus_{l=1}^{m}W_{e_{l}}(\tilde{ \phi})_{\epsilon}^{s+C_{0}}\]
to be the composition of
\[V(\tilde{\phi})^{s} \to\bigoplus_{l=1}^{m}W_{e_{l}}(\tilde{\phi})_{\epsilon}^{s+ \mathcal{A}(e_{l})}, \alpha \mapsto(e_{1}*\alpha,\ldots,e_{m}*\alpha)\]
and the natural map
\[\bigoplus_{l=1}^{m}W_{e_{l}}(\tilde{\phi})_{\epsilon}^{s+\mathcal{A}(e_{l})} \to\bigoplus_{l=1}^{m}W_{e_{l}}(\tilde{\phi})_{\epsilon}^{s+C_{0}}.\]
Define
\[g_{\epsilon}^{s}:\bigoplus_{l=1}^{m}W_{e_{l}}(\tilde{\phi})_{\epsilon}^{s}\to V(\tilde{\phi})^{s+C_{0}}, (\alpha_{1},\ldots,\alpha_{m})\mapsto\iota^{s+\epsilon,s+C_{0}}(\alpha_{1}+\cdots+\alpha_{m}).\]
It is straightforward to check, using the fact that \(e_{1}+\cdots+e_{m}=\mathbf{1}_{\mathfrak{b}}^{\mathrm{GLSM}}\) and that the \(e_{l}\) are idempotent generators, that \(\mathbf{f}_{\epsilon}\), \(\mathbf{g}_{\epsilon}\) provide a \(C_{0}\)-interleaving between \(\mathbf{V}(\tilde{\phi})\) and \(\mathbf{W}(\tilde{\phi})_{\epsilon}\).
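The key computation behind this check is the following (our summary, suppressing the \(\epsilon\)-shifts): for \(\alpha\in V(\tilde{\phi})^{s}\),

\[(g_{\epsilon}\circ f_{\epsilon})(\alpha)=\iota\big(e_{1}*\alpha+\cdots+e_{m}*\alpha\big)=\iota\big((e_{1}+\cdots+e_{m})*\alpha\big)=\iota\big(\mathbf{1}^{\mathrm{GLSM}}_{\mathfrak{b}}*\alpha\big)=\iota^{s,s+2C_{0}}(\alpha),\]

so \(\boldsymbol{g}_{\epsilon}\circ\boldsymbol{f}_{\epsilon}\) agrees with the double-shift map of \(\boldsymbol{V}(\tilde{\phi})\); the composition \(\boldsymbol{f}_{\epsilon}\circ\boldsymbol{g}_{\epsilon}\) is treated similarly using \(e_{l}*e_{j}=\delta_{lj}e_{l}\).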
Proof of Lemma 6.7.: The detailed proof would be almost identical to the corresponding part of the proof of [10, Proposition 12]. Hence we only briefly sketch it. First we show the finiteness of the boundary depth. The boundary depth of \(\mathbf{V}(\tilde{\phi})\) is finite because it coincides with the boundary depth of the associated Floer-Novikov complex (see Proposition 3.26). Hence by Lemma 6.6 and Proposition 3.24, \(\mathbf{W}(\tilde{\phi})_{\epsilon}\) has finite boundary depth. Therefore each summand \(\mathbf{W}_{e_{l}}(\tilde{\phi})_{\epsilon}\) has finite boundary depth.
Now we prove the inequality (6.3). Let \(F,G\) be Hamiltonians downstairs with time-\(1\) maps \(\tilde{\phi}\) and \(\tilde{\psi}\) respectively. Choose bulk-avoiding admissible lifts \(\widehat{F}\), \(\widehat{G}\) upstairs and let \((\widehat{F},\widehat{J}_{F})\), \((\widehat{G},\widehat{J}_{G})\) be regular pairs. Let \(\ell_{F}\) resp. \(\ell_{G}\) be the non-Archimedean valuation on the complex \(\mathit{VCF}_{\bullet}(\widehat{F},\widehat{J}_{F})\) resp. \(\mathit{VCF}_{\bullet}(\widehat{G},\widehat{J}_{G})\). Let \(\Delta_{\widehat{F},\widehat{G}}=\widehat{G}\#\overline{\widehat{F}}\) be the difference Hamiltonian upstairs, with descended difference Hamiltonian \(\Delta_{F,G}\) downstairs. Let \(\widehat{J}_{F,G}\) be an admissible almost complex structure so that the pair \((\Delta_{\widehat{F},\widehat{G}},\widehat{J}_{F,G})\) is regular. One can obtain a pair \((\Delta_{\widehat{G},\widehat{F}},\widehat{J}_{G,F})\) with the roles of \(\widehat{F}\) and \(\widehat{G}\) reversed.
Now fix \(\epsilon>0\). Choose a cycle \(c_{\widehat{F},\widehat{G},\epsilon}\in\mathit{VCF}_{\bullet}(\Delta_{ \widehat{F},\widehat{G}},\widehat{J}_{F,G})\) representing \(e_{l}\) such that
\[\ell(c_{\widehat{F},\widehat{G},\epsilon})\leq c(e_{l},\Delta_{\widehat{F}, \widehat{G}})+\epsilon.\]
We also choose a cycle \(c_{\widehat{G},\widehat{F},\epsilon}\in\mathit{VCF}_{\bullet}(\Delta_{ \widehat{G},\widehat{F}},\widehat{J}_{G,F})\) representing \(e_{l}\) with
\[\ell(c_{\widehat{G},\widehat{F},\epsilon})\leq c(e_{l},\Delta_{\widehat{G}, \widehat{F}})+\epsilon.\]
Now after choosing perturbation data on the pair-of-pants, one can define a chain map
\[C_{\widehat{F},\widehat{G},\epsilon}:\mathit{VCF}_{\bullet}(\widehat{F}, \widehat{J}_{F})\to\mathit{VCF}_{\bullet}(\widehat{G},\widehat{J}_{G}),\ x \mapsto c_{\widehat{F},\widehat{G},\epsilon}*x.\]
satisfying
\[\ell_{G}(C_{\widehat{F},\widehat{G},\epsilon}(x))\leq c(e_{l},\Delta_{\widehat {F},\widehat{G}})+\ell_{F}(x)+2\epsilon.\]
Similarly, by using the cycle \(c_{\widehat{G},\widehat{F},\epsilon}\) one can also define a chain map
\[C_{\widehat{G},\widehat{F},\epsilon}:\mathit{VCF}_{\bullet}(\widehat{G}, \widehat{J}_{G})\to\mathit{VCF}_{\bullet}(\widehat{F},\widehat{J}_{F})\]
satisfying
\[\ell_{F}(C_{\widehat{G},\widehat{F},\epsilon}(y))\leq c(e_{l},\Delta_{\widehat{G},\widehat{F}})+\ell_{G}(y)+2\epsilon.\]
The lemma will follow from the following claim.
**Claim.**\(C_{\widehat{F},\widehat{G},\epsilon}\) and \(C_{\widehat{G},\widehat{F},\epsilon}\) induce a \(\frac{1}{2}\gamma_{e_{l}}(\tilde{\phi}\tilde{\psi}^{-1})+4\epsilon\)-interleaving between \(\mathbf{W}_{e_{l}}(\widehat{F})_{\epsilon}\) and \(\mathbf{W}_{e_{l}}(\widehat{G})_{\epsilon}\).
The detailed proof would also be almost identical to that of [13], except for notation. We omit the details.
_Remark 6.9_.: As one can infer from the above proof, the inequality "\(\beta\leq\gamma\)" is a consequence of studying filtered continuations maps in terms of taking the pair-of-pants product with the filtered continuation elements, which in particular does not depend on the semi-simplicity assumption.
Proof of Lemma 6.8.: Using Proposition 4.21, Lemma 4.14 and the triangle inequality for spectral invariants, one has
\[-c(e_{l},\tilde{\phi}^{-1}) =\ \inf\Big{\{}c(\alpha,\tilde{\phi})\ |\ \langle e_{l},\alpha\rangle\neq 0 \Big{\}}\] \[\geq\ -\mathcal{A}(e_{l})+\inf\Big{\{}c(e_{l}*\alpha,\tilde{\phi}) \ |\ \langle e_{l}*\alpha,\mathbf{1}^{\mathrm{GLSM}}\rangle\neq 0\Big{\}}\] \[\geq\ -\mathcal{A}(e_{l})+\inf\Big{\{}c(e_{l},\tilde{\phi})- \mathcal{A}((e_{l}*\alpha)^{-1})\ |\ \langle e_{l}*\alpha,\mathbf{1}^{\mathrm{GLSM}}\rangle\neq 0 \Big{\}}\] \[\geq\ -\mathcal{A}(e_{l})+c(e_{l},\tilde{\phi})+\inf\Big{\{}- \mathcal{A}(e_{l}*\alpha)-\mathcal{A}((e_{l}*\alpha)^{-1})\ |\ e_{l}*\alpha\neq 0 \Big{\}}\] \[\quad+\inf\Big{\{}\mathcal{A}(e_{l}*\alpha)\ |\ \langle e_{l}* \alpha,\mathbf{1}^{\mathrm{GLSM}}\rangle\neq 0\Big{\}}\]
Here the quantum product and the Poincare pairing are both the bulk-deformed versions. Notice that as \(e_{l}\) is an idempotent generator, \(e_{l}*\alpha=\lambda(\alpha)e_{l}\) and \((e_{l}*\alpha)^{-1}=\lambda(\alpha)^{-1}e_{l}\). Hence
\[\mathcal{A}(e_{l}*\alpha)+\mathcal{A}((e_{l}*\alpha)^{-1})=2\mathcal{A}(e_{l} )-\mathfrak{v}(\lambda(\alpha))-\mathfrak{v}(\lambda(\alpha)^{-1})=2\mathcal{ A}(e_{l})\]
which is uniformly bounded. Moreover, by Proposition 4.21
\[\inf\Big{\{}\mathcal{A}(e_{l}*\alpha)\ |\ \langle e_{l}*\alpha,\mathbf{1}^{ \mathrm{GLSM}}\rangle\neq 0\Big{\}}\geq-\mathcal{A}(\mathbf{1}^{\mathrm{ GLSM}}).\]
Therefore
\[c(e_{l},\tilde{\phi})+c(e_{l},\tilde{\phi}^{-1})\leq 3\mathcal{A}(e_{l})+ \mathcal{A}(\mathbf{1}^{\mathrm{GLSM}}).\]
Lemma 6.8 follows by using the assumption \(\mathcal{A}(e_{l})\leq C_{0}\) and noticing
\[\mathcal{A}(\mathbf{1}^{\mathrm{GLSM}})=\mathcal{A}(e_{1}+\cdots+e_{m})\leq \max_{1\leq l\leq m}\mathcal{A}(e_{l})\leq C_{0}.\qed\]
_Remark 6.10_.: The above argument crucially relies on the semi-simplicity assumption, which allows us to take advantage of the feature that any nonzero element in a field summand of the quantum homology is invertible. Note that such a phenomenon is ultimately due to the abundance of rational curves in toric manifolds.
## 7. \(\mathbb{Z}/p\)-equivariant vortex Floer theory
Following [13, 14], we develop \(\mathbb{Z}/p\)-equivariant Hamiltonian Floer theory in the vortex setting. Using equivariant pair-of-pants operations, we show that the following analogue of [13, Theorem D] about the total bar length holds in our setting.
**Theorem 7.1**.: _Let \(\phi\) be a Hamiltonian diffeomorphism on the toric symplectic manifold \((X,\omega)\) with lift \(\tilde{\phi}\in\widetilde{\operatorname{Ham}}(X)\). Then for any odd prime \(p\), if \(\mathrm{Fix}(\phi)\) and \(\mathrm{Fix}(\phi^{p})\) are finite, then_
\[\tau_{(p)}^{\mathfrak{v}}(\tilde{\phi}^{p})\geq p\cdot\tau_{(p)}^{\mathfrak{v}}(\tilde{\phi}). \tag{7.1}\]
Here we work over \(\Lambda_{\overline{\mathbb{F}}_{p}}\), which is omitted from the notations above. Given the arguments from [13, Section 6], the only missing ingredient for establishing Theorem 7.1 is the package of \(\mathbb{Z}/p\) Borel equivariant vortex Floer theory with bulk deformation. As demonstrated in other parts of the paper, one salient feature of vortex Floer theory is the absence of sphere bubbles due to
the contractibility of the symplectic vector space, which allows us to achieve transversality in many settings by only perturbing the almost complex structure. Specializing to the Borel equivariant theory, except for the necessity to deal with the symplectic vortex equations and the appearance of Novikov coefficients, our theory is quite similar to the exact setting of the original references [12, 13], at least for bulk-avoiding Hamiltonians, which suffice for our purpose via a limiting argument. Therefore, unless there is anything special in our situation, we will be brief and refer the reader to the original references for full proofs.
In this section the bulk deformation \(\mathfrak{b}\) is fixed. All curve counts are weighted by the bulk term, and we often drop it from the notation.
### The Borel construction
We take the following model of \(E\mathbb{Z}/p\): the ambient space is
\[S^{\infty}:=\{(z_{0},z_{1},\dots)\ |\ z_{k}\in\mathbb{C}\ \text{for}\ k\in \mathbb{Z}_{\geq 0},\sum|z_{k}|^{2}=1,\ \text{only finitely many}\ z_{k}\text{'s are nonzero}\},\]
and the group \(\mathbb{Z}/p\) freely acts on \(S^{\infty}\) by multiplying each coordinate by \(p\)-th roots of unity. The quotient space of \(S^{\infty}\) under this \(\mathbb{Z}/p\)-action is a model for the classifying space \(B\mathbb{Z}/p\). The group cohomology of \(\mathbb{Z}/p\) over \(\mathbb{F}_{p}\) is recovered as the (graded-commutative) cohomology ring
\[H^{*}(B\mathbb{Z}/p;\mathbb{F}_{p})=\mathbb{F}_{p}[\![u]\!]\langle\theta \rangle,\deg(u)=2\ \text{and}\ \deg(\theta)=1.\]
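Explicitly (a standard fact recalled here for convenience, not specific to our setting): \(\theta^{2}=0\) since \(p\) is odd, and in each degree the cohomology is one-dimensional,

\[H^{2k}(B\mathbb{Z}/p;\mathbb{F}_{p})=\mathbb{F}_{p}\langle u^{k}\rangle,\qquad H^{2k+1}(B\mathbb{Z}/p;\mathbb{F}_{p})=\mathbb{F}_{p}\langle u^{k}\theta\rangle,\qquad k\in\mathbb{Z}_{\geq 0}.\]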
For \(\epsilon>0\) sufficiently small, \(E\mathbb{Z}/p\) admits a \(\mathbb{Z}/p\)-invariant Morse function
\[\tilde{F}(z)=\sum k|z_{k}|^{2}+\epsilon\sum\operatorname{re}(z_{k}^{p})\]
obtained by perturbing the standard Morse-Bott function \(\sum k|z_{k}|^{2}\) on \(S^{\infty}\) along the critical submanifolds. The function \(\tilde{F}(z)\) has the following properties:
1. defining the map \[\tilde{\tau}:S^{\infty} \to S^{\infty}\] (7.2) \[(z_{0},z_{1},\dots) \mapsto(0,z_{0},z_{1},\dots),\] then we have \(\tilde{F}\circ\tilde{\tau}=\tilde{F}+1\);
2. for any \(l\in\mathbb{Z}_{\geq 0}\), the critical points of \(\tilde{F}\) obtained from perturbing the critical submanifold \(\{|z_{l}|=1\}\) of \(\sum k|z_{k}|^{2}\) can be indexed by \[Z^{0}_{2l},\dots,Z^{p-1}_{2l},\ \text{and}\,Z^{0}_{2l+1},\dots,Z^{p-1}_{2l+1},\] where each \(Z^{i}_{2l}\) has Morse index \(2l\) and each \(Z^{i}_{2l+1}\) has Morse index \(2l+1\);
3. the sets \(\{Z^{0}_{2l},\dots,Z^{p-1}_{2l}\}\) and \(\{Z^{0}_{2l+1},\dots,Z^{p-1}_{2l+1}\}\) each form a single \(\mathbb{Z}/p\)-orbit of the \(\mathbb{Z}/p\)-action on \(S^{\infty}\);
4. there exists a \(\mathbb{Z}/p\)-equivariant Riemannian metric \(\tilde{g}\) on \(S^{\infty}\) such that \((\tilde{F},\tilde{g})\) is Morse-Smale, and the differential on the corresponding Morse cochain complex is \[Z^{m}_{2l} \mapsto Z^{m}_{2l+1}-Z^{m+1}_{2l+1},\] \[Z^{m}_{2l+1} \mapsto Z^{0}_{2l+2}+\dots+Z^{p-1}_{2l+2},\] where the index \(m\in\mathbb{Z}/p\) is read cyclically (a direct check of these formulas is given after this list).
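As a quick consistency check (a direct computation from the formulas above, not an additional claim): property (1) follows from the constraint \(\sum_{k}|z_{k}|^{2}=1\),

\[\tilde{F}(\tilde{\tau}(z))=\sum_{k\geq 0}(k+1)|z_{k}|^{2}+\epsilon\sum_{k\geq 0}\operatorname{re}(z_{k}^{p})=\tilde{F}(z)+\sum_{k\geq 0}|z_{k}|^{2}=\tilde{F}(z)+1,\]

and the differential in (4) squares to zero, since

\[Z^{m}_{2l}\mapsto\sum_{j\in\mathbb{Z}/p}Z^{j}_{2l+2}-\sum_{j\in\mathbb{Z}/p}Z^{j}_{2l+2}=0,\qquad Z^{m}_{2l+1}\mapsto\sum_{j\in\mathbb{Z}/p}\big(Z^{j}_{2l+3}-Z^{j+1}_{2l+3}\big)=0,\]

the second sum telescoping cyclically in \(j\).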
### The Tate construction
Next, we review the Tate construction for cyclic groups of prime order. Let \(R\) be a unital commutative ring which is an \(\mathbb{F}_{p}\)-algebra (later \(R\) will become \(\overline{\mathbb{F}}_{p}\)). Suppose \((\hat{C}_{\bullet},d_{\hat{C}})\) is a \(\mathbb{Z}_{2}\)-graded chain complex defined over the Novikov ring \(\Lambda_{0,R}\). Note that \(\Lambda_{0,R}\) is a module over \(\Lambda_{0,\mathbb{F}_{p}}\). Introduce the graded field
\[\mathcal{K}=\mathbb{F}_{p}[u^{-1},u]\!],\ \deg(u)=2.\]
Then the \(\mathbb{Z}/p\)-equivariant Tate complex
\[C_{\text{Tate}}(\mathbb{Z}/p,\hat{C}_{\bullet}^{\otimes p})\]
is a module over \(\Lambda_{0,\mathcal{K}}\langle\theta\rangle\) where \(\deg(\theta)=1,\theta^{2}=0\), explicitly given by
\[\hat{C}_{\bullet}^{\otimes p}\otimes_{\Lambda_{0,F_{p}}}\Lambda_{0,\mathcal{K }}\langle\theta\rangle.\]
The differential \(d_{\text{Tate}}\) is \(\Lambda_{0,R}\otimes_{\Lambda_{0,F_{p}}}\Lambda_{0,\mathcal{K}}\)-linear, such that for \(x_{0}\otimes\cdots\otimes x_{p-1}\in\hat{C}_{\bullet}^{\otimes p}\), we have
\[d_{\text{Tate}}(x_{0}\otimes\cdots\otimes x_{p-1}) =d_{\hat{C}}^{\otimes p}(x_{0}\otimes\cdots\otimes x_{p-1})+ \theta(id-\zeta)(x_{0}\otimes\cdots\otimes x_{p-1}),\] \[d_{\text{Tate}}(\theta(x_{0}\otimes\cdots\otimes x_{p-1})) =-\theta d_{\hat{C}}^{\otimes p}(x_{0}\otimes\cdots\otimes x_{p-1}) +u(id+\zeta+\cdots+\zeta^{p-1})(x_{0}\otimes\cdots\otimes x_{p-1}),\]
in which the \(\zeta\) is the automorphism on \(\hat{C}_{\bullet}^{\otimes p}\) defined by
\[x_{0}\otimes\cdots\otimes x_{p-1}\mapsto(-1)^{|x_{p-1}|(|x_{0}|+\cdots+|x_{p- 2}|)}x_{p-1}\otimes x_{0}\otimes\cdots\otimes x_{p-2}.\]
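One can verify directly (an unwinding of the above formulas under the stated sign conventions) that \(d_{\text{Tate}}^{2}=0\): on \(\hat{C}_{\bullet}^{\otimes p}\),

\[d_{\text{Tate}}^{2}=\big(d_{\hat{C}}^{\otimes p}\big)^{2}+\theta\big((id-\zeta)d_{\hat{C}}^{\otimes p}-d_{\hat{C}}^{\otimes p}(id-\zeta)\big)+u\,(id+\zeta+\cdots+\zeta^{p-1})(id-\zeta)=0,\]

because \(\zeta\) commutes with \(d_{\hat{C}}^{\otimes p}\) (with the Koszul signs above) and \((id+\zeta+\cdots+\zeta^{p-1})(id-\zeta)=id-\zeta^{p}=0\).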
In other words, the Tate complex \((C_{\text{Tate}}(\mathbb{Z}/p,\hat{C}_{\bullet}^{\otimes p}),d_{\text{Tate}})\) is obtained from the \(\mathbb{Z}/p\) group cohomology of the chain complex \((\hat{C}_{\bullet}^{\otimes p},d_{\hat{C}}^{\otimes p})\) by inverting the equivariant parameter \(u\). Denote the homology of the Tate complex by
\[H_{\text{Tate}}(\mathbb{Z}/p,\hat{C}_{\bullet}^{\otimes p}).\]
The following algebraic statement will be used in establishing the localization result proved later.
**Lemma 7.2**.: _[_5_, Lemma 21]_ _Denote the homology of \((\hat{C}_{\bullet},d_{\hat{C}})\) by \(\hat{H}_{\bullet}\). The \(p\)-th power map_
\[\hat{C}_{\bullet} \to\hat{C}_{\bullet}^{\otimes p} \tag{7.3}\] \[x \mapsto x\otimes\cdots\otimes x\]
_induces an isomorphism of \(\Lambda_{0,R}\otimes_{\Lambda_{0,F_{p}}}\Lambda_{0,\mathcal{K}}\)-modules_
\[r_{p}^{*}(\hat{H}_{\bullet}\otimes_{\Lambda_{0,F_{p}}}\Lambda_{0,\mathcal{K}} )\to H_{\text{Tate}}(\mathbb{Z}/p,\hat{C}_{\bullet}^{\otimes p})\]
_where \(r_{p}\) is the operator on \(\Lambda_{0,R}\otimes_{\Lambda_{0,F_{p}}}\Lambda_{0,\mathcal{K}}\) defined by mapping the Novikov variable \(T\) to \(T^{1/p}\)._
This is referred to as the quasi-Frobenius isomorphism in [5, Section 7].
### \(\mathbb{Z}/p\)-equivariant vortex Floer theory
Given a \(1\)-periodic Hamiltonian \(H_{t}\) on \(X\), its \(p\)-th iteration is the family \(H_{t}^{(p)}:=H_{pt}\). If \(\phi:X\to X\) is the time-\(1\) map of \(H\), then the time-\(1\) map of \(H^{(p)}\) is the iteration \(\phi^{p}\). Following [10, 21], we define the \(\mathbb{Z}/p\)-equivariant vortex Hamiltonian Floer homology for \(H^{(p)}\) by using the family Floer homology coming from the Borel construction. For all the Floer-theoretic constructions involving moduli spaces, we always assume that the Hamiltonians involved in the discussion are nondegenerate.
Recall that the toric divisors of \(X\) are given by \(D_{1},\cdots,D_{N}\), which are obtained as the symplectic quotient of the coordinate hyperplanes \(V_{1},\cdots,V_{N}\) in the symplectic vector space \(V\). As in the definition of bulk-deformed Floer homology, we assume that the Hamiltonian \(H\) is bulk-avoiding; in particular, for any odd prime \(p\), \(1\)-periodic orbits of \(H\) and \(H^{(p)}\) are disjoint from \(V_{1}\cup\cdots\cup V_{N}\). We also assume that both \(H\) and \(H^{(p)}\) are nondegenerate. Let \(\widehat{H}\) be an admissible lift of \(H\) and \(\widehat{H}^{(p)}\) an admissible lift of \(H^{(p)}\) (see Remark 4.4). Let \(\widehat{J}^{(p)}=\{\widehat{J}^{(p)}_{t}\}_{t\in S^{1}}\) be a \(1\)-periodic family of compatible almost complex structures on \(V\) such that the pair \((\widehat{H}^{(p)},\widehat{J}^{(p)})\) is admissible and the Floer chain complex \(\text{VCF}_{\bullet}^{\mathfrak{b}}(\widehat{H}^{(p)},\widehat{J}^{(p)}; \Lambda_{0,R})\) has a well-defined bulk-deformed differential \(\partial_{\mathfrak{b}}^{(p)}\), where \(\mathfrak{b}=\sum_{i=1}^{N}\log c_{i}\ V_{i}\) is a chosen bulk in which \(c_{i}\in\mathbb{Z}[\mathfrak{i}]\). Note that we work over \(\Lambda_{0,R}\) instead
of \(\Lambda_{R}\), which does not introduce any further difficulty due to the fact that \(\partial_{\mathfrak{b}}^{(p)}\) preserves the energy filtration on \(\mathit{VCF}_{\bullet}^{\mathfrak{b}}(\widehat{H}^{(p)},\widehat{J}^{(p)}; \Lambda_{0,R})\).
To define equivariant differentials, we include more parameters from the Borel construction. We choose an \(S^{\infty}=E\mathbb{Z}/p\) family of time-dependent compatible _admissible_ almost complex structures
\[\widehat{J}^{(p)}_{\infty}=\{\widehat{J}^{(p)}_{t,z}\}_{t\in S^{1},z\in S^{ \infty}}\]
satisfying the following requirements:
1. Near each critical point \(Z^{0}_{i},i\in\mathbb{Z}_{\geq 0}\) of the Morse function \(\tilde{F}(z)\) on \(S^{\infty}\), we have \(\widehat{J}^{(p)}_{t,z}=\widehat{J}^{(p)}_{t}\);
2. Regard \(\mathbb{Z}/p\subset S^{1}\) via \(m\mapsto m/p\). For any \(m\in\mathbb{Z}/p\) and \(z\in S^{\infty}\), there holds the equivariance relation \[\widehat{J}^{(p)}_{t-\frac{m}{p},z}=\widehat{J}^{(p)}_{t,m\cdot z};\]
3. \(\widehat{J}^{(p)}_{t,z}\) is invariant under the translation (7.2). Namely \[\widehat{J}^{(p)}_{t,\tilde{\tau}(z)}=\widehat{J}^{(p)}_{t,z}.\]
After making such a choice, we can write down the following version of parametrized vortex Floer equation. Let \(\mathfrak{r}_{\pm}=(x_{\pm},\eta_{\pm})\in\mathrm{crit}\mathcal{A}_{H^{(p)}}\) be a pair of equivariant \(1\)-periodic orbits of \(H^{(p)}\) (which do not depend on the lift \(\widehat{H}^{(p)}\)). Given \(i\in\mathbb{Z}_{\geq 0},m\in\mathbb{Z}/p\) and \(\alpha\in\{0,1\}\), the moduli space
\[\mathcal{M}^{i,m}_{\alpha}(\mathfrak{r}_{-},\mathfrak{r}_{+})\]
consists of gauge equivalence classes of pairs of smooth maps (the gauge transformations act on the \((u,\phi,\psi)\)-component)
\[(u,\phi,\psi):\mathbb{R}_{s}\times S^{1}_{t}\to V\times\mathfrak{k}\times \mathfrak{k},\hskip 28.452756ptw:\mathbb{R}_{s}\to S^{\infty}\]
which satisfy the equations and asymptotic conditions
\[\left\{\begin{array}{ll}\partial_{s}u+\mathcal{X}_{\phi}(u)+\widehat{J}^{(p )}_{w(s),t}(\partial_{t}u+\mathcal{X}_{\psi}(u)-X_{\widehat{H}^{(p)}_{t}}(u))= 0,&\partial_{s}\psi-\partial_{t}\phi+\mu(u)=0,\\ \partial_{s}w(s)-\nabla\tilde{F}(w)=0,&\\ \lim_{s\to-\infty}(u(s,\cdot),\phi(s,\cdot),\psi(s,\cdot),w(s))=(x_{-},0,\eta _{-},Z^{0}_{\alpha}),&\\ \lim_{s\to+\infty}(u(s,\cdot),\phi(s,\cdot),\psi(s,\cdot),w(s))=(x_{+},0,\eta _{+},Z^{m}_{i}),&\end{array}\right. \tag{7.4}\]
modulo the \(\mathbb{R}\)-translation action given by
\[(u(s,\cdot),\phi(s,\cdot),\psi(s,\cdot),w(s))\mapsto(u(s+r,\cdot),\phi(s+r, \cdot),\psi(s+r,\cdot),w(s+r)),\hskip 28.452756ptr\in\mathbb{R}.\]
Because of the absence of sphere bubbles, as the capped orbits impose an upper bound on energy, the moduli space \(\mathcal{M}^{i,m}_{\alpha}(\mathfrak{r}_{-},\mathfrak{r}_{+})\) admits a Uhlenbeck-Gromov-Floer compactification \(\overline{\mathcal{M}}^{i,m}_{\alpha}(\mathfrak{r}_{-},\mathfrak{r}_{+})\) obtained by adding equivalence classes of solutions of the above coupled equations defined over broken configurations. On the other hand, for a generic choice of \(\{\widehat{J}^{(p)}_{t,z}\}_{t\in S^{1},z\in S^{\infty}}\), the moduli space \(\mathcal{M}^{i,m}_{\alpha}(\mathfrak{r}_{-},\mathfrak{r}_{+})\) is transversely cut out, and the dimension of the moduli space satisfies
\[\dim\mathcal{M}^{i,m}_{\alpha}(\mathfrak{r}_{-},\mathfrak{r}_{+})=\mathrm{CZ }(\mathfrak{r}_{-})-\mathrm{CZ}(\mathfrak{r}_{+})+i-\alpha-1.\]
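Setting this dimension to zero isolates the rigid configurations (after the \(\mathbb{R}\)-translation has been quotiented out); the resulting index constraint,

\[\mathrm{CZ}(\mathfrak{r}_{-})-\mathrm{CZ}(\mathfrak{r}_{+})+i-\alpha=1,\]

is the one appearing in the definition of the equivariant differential below.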
For a more detailed discussion of these facts, the reader may consult [14, Section 4], [14, Section 6], whose arguments apply to our case after using the setup from [13, Section 6].
After achieving transversality, for each triple \(i\in\mathbb{Z}_{\geq 0},m\in\mathbb{Z}/p\) and \(\alpha\in\{0,1\}\), we can define a \(\Lambda_{0,R}\)-linear map \(\partial_{\alpha,\mathfrak{b}}^{i,m}\) on \(\mathit{VCF}_{\bullet}(\widehat{H}^{(p)},\widehat{J}^{(p)};\Lambda_{0,R})\) of the form
\[\partial_{\alpha,\mathfrak{b}}^{i,m}(\mathfrak{r})=\sum_{\begin{subarray}{c}\mathfrak{g}\\ \mathrm{CZ}(\mathfrak{r})-\mathrm{CZ}(\mathfrak{g})+i-\alpha=1\end{subarray}}\left(\sum_{[(\mathfrak{u},w)]\in\mathcal{M}^{i,m}_{\alpha}(\mathfrak{r},\mathfrak{g})}\epsilon([(\mathfrak{u},w)])\exp\left(\sum_{j=1}^{N}\log c_{j}\ [\mathfrak{u}]\cap V_{j}\right)\right)\mathfrak{g},\]
where \(\epsilon([(\mathfrak{u},w)])\in\{\pm 1\}\) is the sign of the rigid solution \([\mathfrak{u}]\), which is well-defined due to the existence of coherent orientations, and \([\mathfrak{u}]\cap V_{i}\) is defined as before, coming from the topological intersection number. We further introduce the notation
\[\partial^{i}_{\alpha,\mathfrak{b}}=\partial^{i,0}_{\alpha,\mathfrak{b}}+\dots+ \partial^{i,p-1}_{\alpha,\mathfrak{b}}.\]
**Definition 7.3**.: The \(\mathbb{Z}/p\)-equivariant \(\mathfrak{b}\)-deformed vortex Floer chain complex
\[\text{\it VCF}_{\bullet}^{\mathbb{Z}/p}(\widehat{H}^{(p)},\widehat{J}^{(p)}_{ \infty};\Lambda_{0,R})\]
is the \(\mathbb{Z}_{2}\)-graded \(\Lambda_{0,R}\)-module given by
\[\text{\it VCF}_{\bullet}(\widehat{H}^{(p)},\widehat{J}^{(p)};\Lambda_{0,R}) \llbracket u\rrbracket\langle\theta\rangle,\deg(u)=2,\deg(\theta)=1\]
with \(\Lambda_{0,R}\llbracket u\rrbracket\)-linear differential
\[\partial^{(p)}_{eq,\mathfrak{b}}(\mathfrak{r}\otimes 1) =\sum_{i\geq 0}\partial^{2i}_{0,\mathfrak{b}}(\mathfrak{r}) \otimes u^{i}+\sum_{i\geq 0}\partial^{2i+1}_{0,\mathfrak{b}}(\mathfrak{r}) \otimes u^{i}\theta,\] \[\partial^{(p)}_{eq,\mathfrak{b}}(\mathfrak{r}\otimes\theta) =\sum_{i\geq 0}\partial^{2i+1}_{1,\mathfrak{b}}(\mathfrak{r}) \otimes u^{i}\theta+\sum_{i\geq 1}\partial^{2i}_{1,\mathfrak{b}}(\mathfrak{r}) \otimes u^{i}.\]
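For orientation (a formal expansion of the definition, not an additional claim): in \((\partial^{(p)}_{eq,\mathfrak{b}})^{2}(\mathfrak{r}\otimes 1)\), the coefficients of \(u^{0}\) and \(u^{0}\theta\) are

\[\big(\partial^{0}_{0,\mathfrak{b}}\big)^{2}(\mathfrak{r})\qquad\text{and}\qquad\big(\partial^{1}_{0,\mathfrak{b}}\,\partial^{0}_{0,\mathfrak{b}}+\partial^{1}_{1,\mathfrak{b}}\,\partial^{1}_{0,\mathfrak{b}}\big)(\mathfrak{r}),\]

so the identity \((\partial^{(p)}_{eq,\mathfrak{b}})^{2}=0\) established below recovers \((\partial^{(p)}_{\mathfrak{b}})^{2}=0\) in lowest order, and shows that \(\partial^{1}_{0,\mathfrak{b}}\) anti-commutes with the Floer differential and hence descends to homology.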
The statement that \((\partial^{(p)}_{eq,\mathfrak{b}})^{2}=0\) follows from the signed count of boundaries of the compactified \(1\)-dimensional moduli spaces \(\overline{\mathcal{M}}^{i,m}_{\alpha}(\mathfrak{r}_{-},\mathfrak{r}_{+})\). The differential is well-defined over \(\Lambda_{0,R}\) because we only perturb the almost complex structure to achieve transversality. By continuation map considerations, the resulting homology group
\[\text{\it VHF}_{\bullet}^{\mathbb{Z}/p}(\widehat{H}^{(p)},\widehat{J}^{(p)}_{ \infty};\Lambda_{0,R})\]
is independent of the choice of \(\widehat{J}^{(p)}_{\infty}\), and it is a module over \(\Lambda_{0,R}\llbracket u\rrbracket\langle\theta\rangle\). By inverting \(u\), we can define
\[\text{\it VCF}_{\text{\rm Tate}}(\widehat{H}^{(p)},\widehat{J}^{(p)}_{\infty };\Lambda_{0,R})=\text{\it VCF}^{\mathbb{Z}/p}(\widehat{H}^{(p)},\widehat{J}^ {(p)}_{\infty};\Lambda_{0,R})[u^{-1},u]\langle\theta\rangle\]
for which the differential is the \(\Lambda_{0,R}[u^{-1},u]\)-linear extension of \(\partial^{(p)}_{eq,\mathfrak{b}}\). The homology group is written as
\[\text{\it VHF}_{\text{\rm Tate}}(\widehat{H}^{(p)};\Lambda_{0,R}),\]
which is a module over \(\Lambda_{0,R}\otimes_{\Lambda_{0,F_{p}}}\Lambda_{0,\mathcal{K}}\langle\theta\rangle\).
Here is some explanation of the definition of the equivariant differential. By definition, the leading order term \(\partial^{0}_{0,\mathfrak{b}}\) agrees with the differential \(\partial^{(p)}_{\mathfrak{b}}\) on the complex \(\text{\it VCF}_{\bullet}(\widehat{H}^{(p)},\widehat{J}^{(p)};\Lambda_{0,R})\), so does \(\partial^{1}_{1,\mathfrak{b}}\). The space of equivariant loops \(L^{K}(V)\) admits an \(S^{1}\)-action by shifting the domain parameter, and the natural inclusion \(\mathbb{Z}/p\subset S^{1}\) defines a \(\mathbb{Z}/p\)-action on \(L^{K}(V)\) such that the action functional \(\mathcal{A}_{H^{(p)}}\) is invariant under such an action. More concretely, the reparametrization
\[\mathfrak{r}(t)=(x(t),\eta(t))\mapsto(x(t+\frac{1}{p}),\eta(t+\frac{1}{p}))\]
generates a \(\mathbb{Z}/p\)-action on the Floer homology
\[R_{1/p}:\text{\it VHF}_{\bullet}^{\mathfrak{b}}(\widehat{H}^{(p)};\Lambda_{0,R })\to\text{\it VHF}_{\bullet}^{\mathfrak{b}}(\widehat{H}^{(p)};\Lambda_{0,R})\]
which is realized by the composition
\[\text{\it VCF}_{\bullet}(\widehat{H}^{(p)},\widehat{J}^{(p)};\Lambda_{0,R}) \xrightarrow[\text{\it pullback}]{\sim}\text{\it VCF}_{\bullet}(\widehat{H}^{ (p)},\widehat{J}^{(p)}_{-1/p};\Lambda_{0,R})\xrightarrow[\text{\it continuation}]{ \sim}\text{\it VCF}_{\bullet}(\widehat{H}^{(p)},\widehat{J}^{(p)};\Lambda_{0,R})\]
after passing to homology. Here \(\widehat{J}^{(p)}_{\cdot-\frac{1}{p}}\) is the \(S^{1}\)-family of almost complex structures whose value at time \(t\) is \(\widehat{J}^{(p)}_{t-\frac{1}{p}}\). The action \(R_{1/p}\) generates a \(\mathbb{Z}/p\)-action on homology; we denote
\[R_{m/p}:=(R_{1/p})^{m}:\mathit{VHF}^{\mathfrak{b}}_{\bullet}(\widehat{H}^{(p)}; \Lambda_{0,R})\rightarrow\mathit{VHF}^{\mathfrak{b}}_{\bullet}(\widehat{H}^{( p)};\Lambda_{0,R}).\]
Then the map \(\partial^{1}_{0,\mathfrak{b}}\) descends to
\[id-R_{1/p}:\mathit{VHF}^{\mathfrak{b}}_{\bullet}(\widehat{H}^{(p)};\Lambda_{0,R})\rightarrow\mathit{VHF}^{\mathfrak{b}}_{\bullet}(\widehat{H}^{(p)}; \Lambda_{0,R})\]
on homology, while the map \(\partial^{2}_{1,\mathfrak{b}}\) descends to \(id+R_{1/p}+\dots+R_{(p-1)/p}\). The higher order terms encode the chain homotopies realizing relations of the form \((R_{1/p})^{p}=id\) on homology, and higher homotopy relations.
Finally, we observe that the degree filtration on the chain complex \(\mathit{VCF}^{\mathbb{Z}/p}_{\bullet}(\widehat{H}^{(p)},\widehat{J}^{(p)}_{\infty};\Lambda_{0,R})\) induced from the variables \(u\) and \(\theta\) is preserved by the equivariant differential \(\partial^{(p)}_{eq,\mathfrak{b}}\), and such a filtration is complete and exhaustive. Therefore, we have a spectral sequence converging to \(\mathit{VHF}^{\mathbb{Z}/p}(\widehat{H}^{(p)};\Lambda_{0,R})\), whose first page can be identified with \(\mathit{VHF}^{\mathfrak{b}}_{\bullet}(\widehat{H}^{(p)};\Lambda_{0,R})[\![u]\!]\langle\theta\rangle\). The same holds for the Tate version, which inverts the variable \(u\).
### Equivariant \(p\)-legged pants operations
In this subsection, we define equivariant "\(p\)-legged" pants operations on vortex Hamiltonian Floer theory, which generalize the constructions from [10, 11] to our situation. We use the homological convention, so the roles of the positive and negative cylindrical ends are the opposite of those from _loc. cit._. We will continue the setup from the previous subsection, and keep using the notations \(H\), \(\widehat{H}\), \(H^{(p)}\), \(\widehat{H}^{(p)}\), and \(\widehat{J}^{(p)}_{t,z}\). Furthermore, we choose a \(1\)-parameter family of compatible almost complex structures \(\widehat{J}\) on \(V\) such that \((\widehat{H},\widehat{J})\) is regular and the Floer chain complex \(\mathit{VCF}^{\mathfrak{b}}_{\bullet}(\widehat{H},\widehat{J};\Lambda_{0,R})\) is well-defined.
The equivariant pants operation is defined over a particularly designed domain. Let \(\pi:S_{\mathcal{P}}\rightarrow\mathbb{R}\times S^{1}\) be the \(p\)-fold branched cover with unique branch point \((0,0)\in\mathbb{R}\times S^{1}\) whose ramification point has maximal ramification order. Then \(S_{\mathcal{P}}\) has \(p+1\) punctures, regarded as \(p\) negative ends and one positive end. Suppose \(S_{\mathcal{P}}\) is equipped with cylindrical ends
\[\epsilon_{i}^{-}:(-\infty,-1]\times S^{1}\to S_{\mathcal{P}},\quad \epsilon_{i}^{+}:[1,\infty)\times S_{p}^{1}\to S_{\mathcal{P}},\quad i \in\mathbb{Z}/p,\]
subject to the conditions
\[\pi(\epsilon_{i}^{-}(s,t)) =(s,t),\quad m\cdot(\epsilon_{i}^{-}(s,t))=\epsilon_{i+m}^{-}(s,t)\] \[\pi(\epsilon_{i}^{+}(s,t)) =(s,t),\quad m\cdot(\epsilon_{i}^{+}(s,t))=\epsilon_{i+m}^{+}(s,t )=\epsilon_{i}^{+}(s,t+m),\quad\text{ for }m\in\mathbb{Z}/p,\]
where \(S_{p}^{1}:=\mathbb{R}/p\mathbb{Z}\) is the \(p\)-fold cover of \(S^{1}=\mathbb{R}/\mathbb{Z}\). Note that the \(\epsilon_{i}^{+}\) are obtained from one another by shifting by some \(m\in\mathbb{Z}/p\).
The domain-dependent almost complex structure needs to have particular symmetry. We consider almost complex structures \(\widehat{J}^{+}_{\infty}\) on \(V\) parametrized by \(z\in S^{\infty}\), \(t\in S^{1}\), and \(s\geq-1\), such that:
1. for \(s\geq 2\) and \(z\in S^{\infty}\), we have \(\widehat{J}^{+}_{s,t,z}=\widehat{J}^{(p)}_{t,z}\);
2. for any \(m\in\mathbb{Z}/p\) and \(z\in S^{\infty}\), there holds the equivariance relation \[\widehat{J}^{+}_{s,t-\frac{m}{p},z}=\widehat{J}^{+}_{s,t,m\cdot z};\]
3. \(\widehat{J}^{+}_{s,t,z}\) is invariant under the translation: \[\widehat{J}^{+}_{s,t,\tilde{\tau}(z)}=\widehat{J}^{+}_{s,t,z}.\]
Given such a choice, we further look at almost complex structures \(\widehat{J}_{\infty}^{-,i}\) parametrized by \(s\leq 1\), \(t\in S^{1}\), \(z\in S^{\infty}\), and indexed by \(i\in\mathbb{Z}/p\) (the label of negative ends), satisfying:
1. for \(s\leq-2\) and any \(z\in S^{\infty}\), we have \(\widehat{J}_{s,t,z}^{-,i}=\widehat{J}_{t}\) for any \(i\in\mathbb{Z}/p\);
2. for any \(i\in\mathbb{Z}/p\) and \(z\in S^{\infty}\), the equality \(\widehat{J}_{s,t,z}^{-,i}=\widehat{J}_{s,t,z}^{+}\) holds for \(-1\leq s\leq 1\);
3. for any \(m,i\in\mathbb{Z}/p\) and \(z\in S^{\infty}\), there holds the equivariance relation \[\widehat{J}_{s,t-\frac{m}{p},z}^{-,i}=\widehat{J}_{s,t,z}^{-,i+m};\]
4. \(\widehat{J}_{s,t,z}^{-,i}\) is invariant under the translation: \[\widehat{J}_{s,t,\tilde{r}(z)}^{-,i}=\widehat{J}_{s,t,z}^{-,i}.\]
If \(w:\mathbb{R}\to S^{\infty}\) is a parametrized gradient flow line of \(\tilde{F}\) (i.e., a solution of \(w^{\prime}(s)-\nabla\tilde{F}(w)=0\), as in (7.4)), the above data specify a family of almost complex structures \(\{\widehat{J}_{v,w}^{\mathcal{P}}\}_{v\in S_{\mathcal{P}}}\) given by:
1. \(\widehat{J}_{v,w}^{\mathcal{P}}=\pi^{*}\widehat{J}_{s,t,w(s)}^{-,i}=\pi^{*}\widehat{J}_{s,t,w(s)}^{+}\) for \(v\in\pi^{-1}([-1,1]\times S^{1})\) and \(\pi(v)=(s,t)\);
2. over the negative ends, \(\widehat{J}_{v,w}^{\mathcal{P}}=\pi^{*}\widehat{J}_{s,t,w(s)}^{-,i}\) if \(v=\epsilon_{i}^{-}(s,t)\) for all \(i=0,1,\dots,p-1\);
3. over the positive end, \(\widehat{J}_{v,w}^{\mathcal{P}}=\widehat{J}_{s,t,m\cdot w(s)}^{+}\) for all \(m\in\mathbb{Z}/p\) and \(v=\epsilon_{m}^{+}(s,t)\).
We need to further introduce a Hamiltonian perturbation term
\[\widehat{\mathcal{H}}^{\mathcal{P}}\in\Omega^{1}(S_{\mathcal{P}},C^{\infty}( V)^{K})\]
satisfying the following conditions.
1. For any \(i\in\mathbb{Z}/p\), we have \(\widehat{\mathcal{H}}^{\mathcal{P}}(\epsilon_{i}^{-}(s,t))=\widehat{H}_{t} \otimes dt\);
2. On the positive end, for any \(i\in\mathbb{Z}/p\), there holds \(\widehat{\mathcal{H}}^{\mathcal{P}}(\epsilon_{i}^{+}(s,t))=\widehat{H}_{t+i} ^{(p)}\otimes dt\);
3. The \(\mathbb{Z}/p\)-equivariance condition \(\widehat{\mathcal{H}}^{\mathcal{P}}(m\cdot v)=\widehat{\mathcal{H}}^{ \mathcal{P}}(v)\) holds;
4. Let \(\mathcal{H}^{\mathcal{P}}\in\Omega^{1}(S_{\mathcal{P}},C^{\infty}(X))\) be the induced Hamiltonian perturbation term on \(X\). Then the curvature of the Hamiltonian connection \(\mathcal{H}^{\mathcal{P}}\) on \(S_{\mathcal{P}}\) is \(0\).
Consider moduli spaces of perturbed vortex equation over the surface \(S_{\mathcal{P}}\). Let \(P\to S_{\mathcal{P}}\) be the trivial \(K\)-bundle. Given \(\mathfrak{r}_{+}=(x_{+},\eta_{+})\in\text{crit}\mathcal{A}_{H^{(p)}}\) and \(\mathfrak{r}_{0}=(x_{0},\eta_{0}),\dots,\mathfrak{r}_{p-1}=(x_{p-1},\eta_{p-1} )\in\text{crit}\mathcal{A}_{H}\), for any \(i\in\mathbb{Z}_{\geq 0},m\in\mathbb{Z}/p\) and \(\alpha\in\{0,1\}\) we can introduce the moduli space
\[\mathcal{M}_{\mathcal{P},\alpha}^{i,m}(\mathfrak{r}_{0},\dots,\mathfrak{r}_{ p-1};\mathfrak{r}_{+})\]
which parametrizes gauge equivalence classes of pairs
\[(u,A)\in C^{\infty}(S_{\mathcal{P}},V)\times\mathcal{A}(P),\qquad\quad w: \mathbb{R}_{s}\to S^{\infty}\]
which satisfy the equations and asymptotic conditions
\[\left\{\begin{array}{l}\overline{\partial}_{A,\widehat{\mathcal{H}}^{\mathcal{P}},\widehat{J}_{v,w}^{\mathcal{P}}}u=0,\qquad\quad*F_{A}+\mu(u)=0,\\ w^{\prime}(s)-\nabla\tilde{F}(w)=0,\\ \lim_{s\to-\infty}(u(\epsilon_{j}^{-}(s,\cdot)),\phi(\epsilon_{j}^{-}(s,\cdot)),\psi(\epsilon_{j}^{-}(s,\cdot)),w(s))=(x_{j},0,\eta_{j},Z_{\alpha}^{0}),\quad\forall j\in\mathbb{Z}/p,\\ \lim_{s\to+\infty}(u(\epsilon_{0}^{+}(s,\cdot)),\phi(\epsilon_{0}^{+}(s,\cdot)),\psi(\epsilon_{0}^{+}(s,\cdot)),w(s))=(x_{+},0,\eta_{+},Z_{i}^{m}),\end{array}\right.\]

where on each cylindrical end we write \(A=\phi\,ds+\psi\,dt\).
As expected, the moduli space \(\mathcal{M}_{\mathcal{P},\alpha}^{i,m}(\mathfrak{r}_{0},\dots,\mathfrak{r}_{p-1};\mathfrak{r}_{+})\) admits an Uhlenbeck-Gromov-Floer compactification \(\overline{\mathcal{M}}_{\mathcal{P},\alpha}^{i,m}(\mathfrak{r}_{0},\dots,\mathfrak{r}_{p-1};\mathfrak{r}_{+})\), whose detailed description can be found in [14, Section (4c)]. For a generic choice of almost complex structures and Hamiltonian connections, the moduli space \(\mathcal{M}_{\mathcal{P},\alpha}^{i,m}(\mathfrak{r}_{0},\dots,\mathfrak{r}_{p-1};\mathfrak{r}_{+})\) is cut out transversely, and its dimension is given by
\[\text{CZ}(\mathfrak{r}_{0})+\dots+\text{CZ}(\mathfrak{r}_{p-1})-\text{CZ}( \mathfrak{r}_{+})+i-\alpha.\]
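Setting this dimension to zero isolates the rigid solutions counted by the pants operations; the relevant index constraint reads

\[\mathrm{CZ}(\mathfrak{r}_{0})+\cdots+\mathrm{CZ}(\mathfrak{r}_{p-1})-\mathrm{CZ}(\mathfrak{r}_{+})+i-\alpha=0.\]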
We define the pants operations using the above moduli spaces. For each \(i\in\mathbb{Z}_{\geq 0}\), \(m\in\mathbb{Z}/p\), and \(\alpha\in\{0,1\}\), define
\[\mathcal{P}^{i,m}_{\alpha,\mathfrak{b}}:\mathit{VCF}_{\bullet}(\widehat{H},\widehat{J};\Lambda_{0,R})^{\otimes p}\to\mathit{VCF}_{\bullet}(\widehat{H}^{(p)},\widehat{J}^{(p)};\Lambda_{0,R})\] \[\mathfrak{x}_{0}\otimes\cdots\otimes\mathfrak{x}_{p-1}\mapsto\sum_{\begin{subarray}{c}\mathfrak{r}_{+}\\ \mathrm{CZ}(\mathfrak{x}_{0})+\cdots+\mathrm{CZ}(\mathfrak{x}_{p-1})-\mathrm{CZ}(\mathfrak{r}_{+})+i-\alpha=0\end{subarray}}\left(\sum_{[(\mathfrak{u},w)]\in\mathcal{M}^{i,m}_{\mathcal{P},\alpha}(\mathfrak{x}_{0},\dots,\mathfrak{x}_{p-1};\mathfrak{r}_{+})}\epsilon([(\mathfrak{u},w)])\exp\left(\sum_{j=1}^{N}\log c_{j}\ [\mathfrak{u}]\cap V_{j}\right)\right)\mathfrak{r}_{+}.\]

Assembling the operations \(\mathcal{P}^{i,m}_{\alpha,\mathfrak{b}}\) over \(i\in\mathbb{Z}_{\geq 0}\), \(m\in\mathbb{Z}/p\), and \(\alpha\in\{0,1\}\) in the same manner as the equivariant differential, and passing to homology after inverting \(u\), yields the equivariant pants product

\[\mathcal{P}:H_{\mathrm{Tate}}(\mathbb{Z}/p,\mathit{VCF}_{\bullet}(\widehat{H},\widehat{J};\Lambda_{0,R})^{\otimes p})\to\mathit{VHF}_{\mathrm{Tate}}(\widehat{H}^{(p)};\Lambda_{0,R}), \tag{7.5}\]

together with a coproduct \(\mathcal{C}\) in the opposite direction, defined by exchanging the roles of the positive and negative ends. The key assertion, the equivariant localization isomorphism, is that \(\mathcal{P}\) in (7.5) is an isomorphism; we sketch the argument. First, for each \(1\)-periodic orbit \(x\) of \(H\) with isolated iteration \(x^{(p)}\), one has the local Floer complex \(\mathit{VCF}^{\mathrm{loc}}_{\bullet}(H,x;\Lambda_{0,R})\) and the local \(\mathbb{Z}/p\)-equivariant vortex Floer homology \(\mathit{VHF}^{\mathrm{loc},\mathbb{Z}/p}(H^{(p)},x^{(p)};\Lambda_{0,R})\),
which is defined by looking at contributions to \(\partial^{(p)}_{eq,\mathfrak{b}}\) from solutions to (7.4) which are contained in a \(C^{2}\)-small neighborhood of the equivariant lift of \(x^{(p)}\). By inverting \(u\), the Tate version is denoted by \(\mathit{VHF}^{\mathrm{loc}}_{\mathrm{Tate}}(H^{(p)},x^{(p)};\Lambda_{0,R})\). One can similarly define the local version of the \(\mathbb{Z}/p\)-equivariant product and coproduct operation localized near \(x\)
\[\mathcal{P}^{\mathrm{loc}}_{x}:H_{\mathrm{Tate}}(\mathbb{Z}/p,\mathit{VCF}^{\mathrm{loc}}_{\bullet}(H,x;\Lambda_{0,R})^{\otimes p})\to\mathit{VHF}^{\mathrm{loc}}_{\mathrm{Tate}}(H^{(p)},x^{(p)};\Lambda_{0,R}),\] \[\mathcal{C}^{\mathrm{loc}}_{x}:\mathit{VHF}^{\mathrm{loc}}_{\mathrm{Tate}}(H^{(p)},x^{(p)};\Lambda_{0,R})\to H_{\mathrm{Tate}}(\mathbb{Z}/p,\mathit{VCF}^{\mathrm{loc}}_{\bullet}(H,x;\Lambda_{0,R})^{\otimes p}).\]
Note that just as in the non-equivariant setting, equivariant local Floer theories can be defined for isolated but not necessarily nondegenerate iterations.
Second, the main result of [14, Section 10] shows that if \(x\) and \(x^{(p)}\) are nondegenerate, the composition satisfies
\[\mathcal{C}^{\mathrm{loc}}_{x}\circ\mathcal{P}^{\mathrm{loc}}_{x}=(-1)^{n}u^{n (p-1)}\cdot id,\]
which is an isomorphism as \(u\) is invertible in the ground ring of the Tate version, thus \(\mathcal{P}^{\mathrm{loc}}_{x}\) is an isomorphism by rank considerations. The proof in _loc. cit_. goes through an auxiliary operation \(\mathcal{Z}^{\mathrm{loc}}_{x}\) satisfying \(\mathcal{C}^{\mathrm{loc}}_{x}\circ\mathcal{P}^{\mathrm{loc}}_{x}=\mathcal{Z}^ {\mathrm{loc}}_{x}\), which can be defined in our setting following [14, Definition 10.1]. On the other hand, the calculation \(\mathcal{Z}^{\mathrm{loc}}_{x}=(-1)^{n}u^{n(p-1)}\) is based on reducing to the case of local Morse theory by a deformation argument, which is also legitimate in the vortex setting. Then by virtue of the proof of Proposition 5.2, when the Hamiltonian \(H\) is a \(C^{2}\)-small Morse function, we can match the upstairs and downstairs moduli spaces, so that the calculation also works in our setting. Note that by Proposition 5.7, the presence of bulk \(\mathfrak{b}\) does not affect the argument.
Finally, we can write
\[\mathcal{P}=\sum_{x}\mathcal{P}^{\mathrm{loc}}_{x}+O(T^{\delta}),\qquad\quad \delta>0,\]
in which \(x\) ranges over all \(1\)-periodic orbits of \(H\) and \(O(T^{\delta})\) denotes an operation with positive valuation. Because the local operations \(\mathcal{P}^{\mathrm{loc}}_{x}\) are isomorphisms, and the contributions of the simple (i.e., non-iterated) \(1\)-periodic orbits of \(H^{(p)}\) to the Tate construction are trivial, we see that \(\mathcal{P}\) is an isomorphism over \(\Lambda_{0,R}[u^{-1},u]=\Lambda_{0,R}\otimes_{\Lambda_{0,F_{p}}}\Lambda_{0, \mathcal{K}}\). This finishes the sketch of the proof of the equivariant localization isomorphism.
### Growth of total bar length
After demonstrating the existence of the equivariant Hamiltonian Floer package in the vortex setting, we are in the right position to prove the inequality of total bar length.
Proof of Theorem 7.1.: With equivariant Hamiltonian Floer theory and local Floer homology in our hands, the arguments from [14, Section 7] can easily be adapted to the current situation without much modification. Consequently, we will only provide a sketch of the proof, and refer the reader to _loc. cit._ for complete arguments.
Firstly, we recall the following alternative characterization of the total bar length. Given a field \(\mathbb{K}\), if we define the vortex Hamiltonian Floer homology over the Novikov ring
\[\Lambda_{0,\mathbb{K}}:=\Big{\{}\sum_{i=1}^{\infty}a_{i}T^{g_{i}}\ |\ g_{i}\in\mathbb{R}_{\geq 0},\ a_{i}\in\mathbb{K},\ \lim_{i\to\infty}g_{i}=+\infty\Big{\}}\]
instead of its field of fractions \(\Lambda_{\mathbb{K}}\), then finite bars are reflected as nontrivial _torsion components_, which is the language used in [11]. If we denote the direct sum of the torsion components of the Floer homology \(\mathit{VHF}^{\mathfrak{b}}_{\bullet}(\tilde{\phi};\Lambda_{0,\mathbb{K}})\) by
\[\Lambda_{0,\mathbb{K}}/T^{g_{1}}\Lambda_{0,\mathbb{K}}\oplus\cdots\oplus \Lambda_{0,\mathbb{K}}/T^{g_{s}}\Lambda_{0,\mathbb{K}},\qquad\quad\text{ with }\ g_{1}\geq\cdots\geq g_{s}\geq 0, \tag{7.6}\]
then we can write the total bar length of \(\tilde{\phi}\) over \(\Lambda_{\mathbb{K}}\) as

\[\tau^{\mathfrak{b}}_{(p)}(\tilde{\phi},\Lambda_{\mathbb{K}})=g_{1}+\cdots+g_{s},\]
and the boundary depth is given by \(g_{1}\), cf. [10, Section 4.4.4]. Note that these torsion exponents correspond to the _verbose bar-length spectrum_ in the sense of [11], which means that \(g_{i}\) can be \(0\), due to the fact that the Floer differential in our discussion may not strictly decrease the energy.
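For example (with made-up torsion exponents, purely for illustration): if

\[\mathit{VHF}^{\mathfrak{b}}_{\bullet}(\tilde{\phi};\Lambda_{0,\mathbb{K}})\cong\Lambda_{0,\mathbb{K}}^{\oplus r}\oplus\Lambda_{0,\mathbb{K}}/T^{2}\Lambda_{0,\mathbb{K}}\oplus\Lambda_{0,\mathbb{K}}/T^{1/2}\Lambda_{0,\mathbb{K}},\]

then \(\tau^{\mathfrak{b}}_{(p)}(\tilde{\phi},\Lambda_{\mathbb{K}})=2+\tfrac{1}{2}=\tfrac{5}{2}\), the boundary depth is \(g_{1}=2\), and the free summand \(\Lambda_{0,\mathbb{K}}^{\oplus r}\) contributes only infinite bars.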
The claim is easier to prove when \(\tilde{\phi}^{p}\) is nondegenerate and bulk-avoiding. We choose a generating Hamiltonian \(H\) for the Hamiltonian isotopy \(\tilde{\phi}\). The comparison between \(\tau^{\mathfrak{b}}_{(p)}(\tilde{\phi}^{p})\) and \(\tau^{\mathfrak{b}}_{(p)}(\tilde{\phi})\) is established in the following three steps.
1. Using the quasi-Frobenius isomorphism from Lemma 7.2, one can show that the total bar length of \[(C_{\mathrm{Tate}}(\mathbb{Z}/p,\mathit{VCF}_{\bullet}(\widehat{H},\widehat{J};\Lambda_{0,R})^{\otimes p}),\partial_{\mathrm{Tate}}),\] i.e., the sum of torsion exponents of the homology group \(\mathit{H}_{\mathrm{Tate}}(\mathbb{Z}/p,\mathit{VCF}_{\bullet}(\widehat{H},\widehat{J};\Lambda_{0,R})^{\otimes p})\), is equal to \(p\) times the quantity \(\tau^{\mathfrak{b}}_{(p)}(\tilde{\phi})\).
2. By appealing to the isomorphism (7.5) and an application of the homological perturbation lemma, it is shown in [10, Section 7.3.1] that the total bar length, i.e., the sum of torsion exponents, of \(\mathit{H}_{\mathrm{Tate}}(\mathbb{Z}/p,\mathit{VCF}_{\bullet}(\widehat{H},\widehat{J};\Lambda_{0,R})^{\otimes p})\) agrees with that of \(\mathit{VHF}_{\mathrm{Tate}}(\widehat{H}^{(p)};\Lambda_{0,R})\).
3. Using [10, Proposition 17, Lemma 18], one can prove that the total bar length of \(\mathit{VHF}_{\mathrm{Tate}}(\widehat{H}^{(p)};\Lambda_{0,R})\) is bounded from above by \(\tau^{\mathfrak{b}}_{(p)}(\tilde{\phi}^{p})\); this step is reminiscent of the Borel spectral sequence in the context of filtered Floer theory.
Finally, to establish Theorem 7.1 for \(\tilde{\phi}\) and \(\tilde{\phi}^{p}\) which are not necessarily bulk-avoiding and may admit isolated degenerate fixed points, an approximation argument and multiple applications of the homological perturbation lemma as in [10, Section 7.4] suffice.
_Remark 7.6_.: To prove Theorem 7.1 for degenerate Hamiltonian diffeomorphisms with isolated fixed points assuming the corresponding result for nondegenerate ones, one can alternatively use the following more elementary argument. Suppose \(H\) is not necessarily bulk-avoiding and may have isolated but degenerate fixed points and periodic points of period \(p\). Let \(H_{i}\) be a sequence of nondegenerate and bulk-avoiding Hamiltonians on \(X\) which converges to \(H\) in the \(C^{2}\)-norm. We can choose the perturbations \(H_{i}\) to be supported in an arbitrarily small neighborhood of the \(1\)-periodic orbits of \(H\) and \(H^{(p)}\), over which the perturbation is modeled on \(\epsilon_{i}f\) where \(f\) is a Morse function. Then the above implies that
\[\tau^{\mathfrak{b}}_{(p)}(H_{i}^{(p)})\geq p\,\tau^{\mathfrak{b}}_{(p)}(H_{i}).\]
Notice that the reduced barcode of \(H_{i}\) resp. \(H_{i}^{(p)}\) is a Cauchy sequence with respect to the bottleneck distance with a uniformly bounded number of bars. Our choice of perturbation also guarantees a uniform upper bound for the short bars. Moreover, we know that the barcode of the limit \(H\) resp. \(H^{(p)}\) is finite. Hence the total bar length of \(H_{i}\) resp. \(H_{i}^{(p)}\) converges to that of \(H\) resp. \(H^{(p)}\), which implies the desired result.
## 8. Open string theory I. Quasimap Floer theory
In this section we recall the construction of quasimap Lagrangian Floer theory developed by Woodward [14]. The basic idea agrees with the philosophy of gauged linear sigma model
[22]: one replaces the count of holomorphic curves in the toric manifold \(X\) by an equivariant count of holomorphic curves upstairs. There are two significant consequences: first, one can achieve transversality of moduli spaces at a very low cost; second, the counts of curves are all integer-valued.
We use the Morse-Bott model for Lagrangian Floer theory to construct open-string theories and closed-open maps. We extend the use of domain-dependent perturbations for bulk-deformed vortex Floer cohomology to the open-string situation. We first need to fix certain notions and notations to describe the combinatorial data of various moduli spaces.
### Trees and treed disks
We first set up the convention for the notion of trees used in this paper.
_Convention 8.1_ (Convention for trees).: A tree, usually denoted by \(\Gamma\), consists of a nonempty set of vertices \(V_{\Gamma}\) and a nonempty set of edges \(E_{\Gamma}\). The set of vertices is decomposed into the set of finite vertices and the set of vertices at infinity, and the decomposition is denoted by
\[V_{\Gamma}=V_{\Gamma}^{\text{finite}}\sqcup V_{\Gamma}^{\infty}.\]
We always assume
1. \(V_{\Gamma}^{\infty}\) contains a distinguished vertex \(v_{\text{root}}\) called the _root_.
2. The valence (degree) of any \(v\in V_{\Gamma}^{\infty}\) is either one or two.
The set \(V_{\Gamma}\) is partially ordered in the following way: we write \(v_{\alpha}\succ v_{\beta}\) if \(v_{\alpha}\) and \(v_{\beta}\) are adjacent and \(v_{\beta}\) is closer to the root. In this way vertices at infinity are either _incoming_ (called _inputs_) or _outgoing_ (called _outputs_); in particular the root \(v_{\text{root}}\) is outgoing.
Edges are decomposed into four groups: the set of finite edges \(E_{\Gamma}^{\text{finite}}\) consisting of edges connecting two finite vertices, the set of incoming semi-infinite edges \(E_{\Gamma}^{\text{in}}\) consisting of edges connecting \(v_{\alpha}\in V_{\Gamma}^{\infty}\) with \(v_{\beta}\in V_{\Gamma}^{\text{finite}}\) with \(v_{\alpha}\succ v_{\beta}\), the set of outgoing semi-infinite edges \(E_{\Gamma}^{\text{out}}\) consisting of edges connecting \(v_{\alpha}\in V_{\Gamma}^{\text{finite}}\) and \(v_{\beta}\in V_{\Gamma}^{\infty}\) with \(v_{\alpha}\succ v_{\beta}\), and the set of infinite edges \(E_{\Gamma}^{\infty}\) connecting two vertices at infinity. We also call incoming resp. outgoing semi-infinite edges inputs resp. outputs.
A tree \(\Gamma\) is called _unbroken_ if all vertices \(v\in V_{\Gamma}^{\infty}\) have valence \(1\). A vertex \(v\in V_{\Gamma}^{\infty}\) of valence \(2\) is called a _breaking_ of the tree \(\Gamma\). Breakings separate \(\Gamma\) into _unbroken components_.
A _ribbon tree_ is a tree \(\Gamma\) together with an isotopy class of embeddings \(\Gamma\hookrightarrow\mathbb{R}^{2}\). Equivalently, it means for each vertex \(v\in V_{\Gamma}\) the adjacent edges are cyclically ordered. As \(\Gamma\) is rooted, it follows that all incoming edges are strictly ordered.
A ribbon tree is _stable_ if the valence of each finite vertex is at least three.
#### 8.1.1. Metric ribbon trees
A _metric_ on a ribbon tree \(\Gamma\) is a function
\[\boldsymbol{l}:E_{\Gamma}^{\text{finite}}\to[0,+\infty).\]
The underlying decomposition
\[E_{\Gamma}^{\text{finite}}=E_{\Gamma}^{\text{finite},0}\sqcup E_{\Gamma}^{ \text{finite},+}=\boldsymbol{l}^{-1}(\{0\})\sqcup\boldsymbol{l}^{-1}((0,+ \infty))\]
is called a _metric type_, denoted by \([\boldsymbol{l}]\). We often call the pair \((\Gamma,[\boldsymbol{l}])\) a **domain type**. A **metric ribbon tree** of type \((\Gamma,[\boldsymbol{l}])\) is a pair \((\Gamma,\boldsymbol{l})\) such that \(\boldsymbol{l}\) has the metric type \([\boldsymbol{l}]\).
As in [22, Section 3.3], one needs to work with unstable trees. We hence replace the usual stability condition by another _minimality_ condition. We say that a metric ribbon tree \((\Gamma,\boldsymbol{l})\) (resp. domain type \((\Gamma,[\boldsymbol{l}])\)) is **minimal** if it has no finite edges of length zero or infinite edges. Hence for
each domain type \(\Gamma\), there is a canonical minimal one \(\Gamma^{\min}\) obtained from \(\Gamma\) by shrinking edges violating the minimality condition.
We define perturbations over the universal trees. Consider a minimal domain type \(\Gamma=(\Gamma,[\mathbf{l}])\) (which is not necessarily stable). Then there is a moduli space of metric trees of type \(\Gamma\), denoted by \(\mathcal{MT}_{\Gamma}\), which is homeomorphic to \((0,+\infty)^{\#E_{\Gamma}^{\mathrm{finite},+}}\), whose elements parametrize the lengths of finite edges with positive lengths. There is also a _universal tree_
\[\mathcal{UT}_{\Gamma}\to\mathcal{MT}_{\Gamma}\]
whose fiber over a point \(p\in\mathcal{MT}_{\Gamma}\) is homeomorphic to a metric tree representing \(p\) (the infinities of semi-infinite or infinite edges are regarded as points in the metric tree).
The above moduli spaces have natural compactifications. In fact, we can define a partial order among all minimal domain types. We say that a minimal domain type \(\Gamma\)**degenerates** to another minimal domain type \(\Pi\), denoted by \(\Pi\preceq\Gamma\), if \(\Pi\) is obtained from \(\Gamma\) by composing the following types of operations
1. Shrinking the length of a finite edge in \(\Gamma\) to zero and collapsing this edge.
2. Breaking a finite edge of positive length to a pair of semi-infinite edges joined at a new vertex at infinity.
Notice that if \(\Pi\preceq\Gamma\), then there is a canonical surjective map \(\rho:V_{\Gamma}^{\mathrm{finite}}\to V_{\Pi}^{\mathrm{finite}}\). Then \(\mathcal{MT}_{\Gamma}\) has the natural compactification
\[\overline{\mathcal{MT}}_{\Gamma}:=\bigsqcup_{\Pi\preceq\Gamma}\mathcal{MT}_{ \Pi}.\]
The universal tree is also extended to the compactification, which is denoted by
\[\overline{\mathcal{UT}}_{\Gamma}\to\overline{\mathcal{MT}}_{\Gamma}.\]
There is a special closed subset \(\overline{\mathcal{UT}}_{\Gamma}^{\mathrm{node}}\subset\overline{\mathcal{UT}}_{\Gamma}\) corresponding to the vertices and the infinities of edges. Notice that the complement of \(\overline{\mathcal{UT}}_{\Gamma}^{\mathrm{node}}\) inside the interior \(\mathcal{UT}_{\Gamma}\) is a smooth manifold.
#### 8.1.2. Treed disks
**Definition 8.2**.: Given a domain type \(\Gamma=(\Gamma,[\mathbf{l}])\), a _treed disk_ of type \(\Gamma\), denoted by \(C=S\cup T\), is the configuration given by the union of disk components \(S_{\alpha}\cong\mathbb{D}\) for all vertices \(v_{\alpha}\in V_{\Gamma}^{\mathrm{finite}}\), a metric \(\mathbf{l}\) on \(\Gamma\) of type \([\mathbf{l}]\), and an interval \(I_{e}\) of length \(\mathbf{l}(e)\) for each finite edge \(e\in E_{\Gamma}^{\mathrm{finite}}\). The notion of isomorphisms between treed disks is standard and omitted.
### Quasimap Floer theory for Lagrangians
We recall the quasimap Floer theory developed by Woodward [10]. Let \(\mathbf{u}\in\mathrm{Int}P\subset\mathbb{R}^{n}\) be an interior point of the moment polytope \(P\) of the toric manifold \(X\). Recall that the number of faces \(N\) of \(P\) coincides with the dimension of \(V\). Let \(L=L(\mathbf{u})\subset X\) be the torus fiber over \(\mathbf{u}\). Let \(\widehat{L}=\widehat{L}(\mathbf{u})\subset\mu^{-1}(0)\subset V\) be the lift of \(L(\mathbf{u})\), which is a \(K\)-invariant Lagrangian torus in \(V\). Explicitly, we have
\[\widehat{L}=\prod_{i=1}^{N}\left\{z_{i}\in\mathbb{C}\ |\ |z_{i}|^{2}=\tau_{i}\right\}\]
where \(\tau_{i}\) are determined by \(\mathbf{u}\) and the constant term in the moment map \(\mu\).
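As a simple illustration (the standard \(\mathbb{C}P^{1}\) picture; the normalization of \(\mu\) is convention-dependent): for \(X=\mathbb{C}P^{1}\) we have \(N=2\), \(V=\mathbb{C}^{2}\), and \(K=U(1)\) acting diagonally, so the lift of a circle fiber \(L(\mathbf{u})\) is the product torus

\[\widehat{L}=\{|z_{1}|^{2}=\tau_{1}\}\times\{|z_{2}|^{2}=\tau_{2}\}\subset\mathbb{C}^{2},\]

where \(\tau_{1}+\tau_{2}\) is fixed by the level set \(\mu^{-1}(0)\) and the individual \(\tau_{i}\) are determined by the position of \(\mathbf{u}\) in the interval \(P\).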
A **holomorphic quasidisk** is an ordinary holomorphic map \(u:(\mathbb{D},\partial\mathbb{D})\to(V,\widehat{L})\) (with respect to the standard complex structure \(\widehat{J}_{V}\)). Two holomorphic quasidisks \(u\) and \(u^{\prime}\) are \(K\)-equivalent if there exists \(g\in K\) such that \(gu=u^{\prime}\). Each \(K\)-equivalence class of holomorphic quasidisks represents a disk class
\[\beta\in H_{2}(V,\widehat{L})/K\cong H_{2}(V,\widehat{L}).\]
Each such class has a well-defined energy
\[\omega(\beta)=\omega_{V}(\beta)\in\mathbb{R}\]
and a well-defined Maslov index
\[i(\beta)\in 2\mathbb{Z}.\]
Given \(k\) and \(\beta\in H_{2}(V,\widehat{L})\), let \(\mathcal{M}^{\mathrm{disk}}_{k+1}(\beta)\) be the moduli space of \(K\)-equivalence classes of holomorphic quasidisks of class \(\beta\) with \(k+1\) boundary marked points, and let \(\overline{\mathcal{M}}^{\mathrm{disk}}_{k+1}(\beta)\) be its compactification. Notice that as \(V\) is aspherical, configurations in \(\overline{\mathcal{M}}^{\mathrm{disk}}_{k+1}(\beta)\) have only disk bubbles but no sphere bubbles. The evaluation of a \(K\)-equivalence class of quasidisks at the last boundary marked point is well-defined as a point in the quotient Lagrangian \(L\subset X\). Hence there is a continuous map
\[\mathrm{ev}:\overline{\mathcal{M}}^{\mathrm{disk}}_{k+1}(\beta)\to L.\]
**Theorem 8.3** (Blaschke product).: _Let \(u:\mathbb{D}^{2}\to V\) be a holomorphic quasidisk. Then there exist \(\theta_{1},\dots,\theta_{N}\in[0,2\pi)\) and \((a_{i,k})_{k=1,\dots,d_{i}}\subset\mathbb{D}^{2}\subset\mathbb{C}\) for \(i=1,\dots,N\) such that_
\[u(z)=\left(\sqrt{\tau_{1}}e^{i\theta_{1}}\prod_{k=1}^{d_{1}}\frac{z-a_{1,k}}{1 -\overline{a_{1,k}}z},\dots,\sqrt{\tau_{N}}e^{i\theta_{N}}\prod_{k=1}^{d_{N}} \frac{z-a_{N,k}}{1-\overline{a_{N,k}}z}\right). \tag{8.1}\]
_Moreover, the Maslov index of \(u\) is \(2(d_{1}+\dots+d_{N})\)._
In particular, there are \(N\) "basic" Maslov two disk classes \(\beta_{1},\dots,\beta_{N}\in H_{2}(V,\widehat{L})\) where each \(\beta_{i}\) is represented by a quasidisk given as above with \(d_{j}=\delta_{ij}\). These Maslov two classes form a basis of \(H_{2}(V,\widehat{L})\).
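Concretely (choosing \(a_{i,1}=0\) and all phases \(\theta_{j}=0\) in (8.1), a convenient representative rather than a canonical one), the basic class \(\beta_{i}\) is represented by

\[u_{i}(z)=\left(\sqrt{\tau_{1}},\,\dots,\,\sqrt{\tau_{i}}\,z,\,\dots,\,\sqrt{\tau_{N}}\right),\]

for which \(d_{j}=\delta_{ij}\), so the Maslov index equals \(2\).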
**Theorem 8.4**.: _The moduli space \(\mathcal{M}^{\mathrm{disk}}_{k+1}(\beta)\) is regular of dimension \(n+i(\beta)+k-2\) and the evaluation map_
\[\mathrm{ev}:\mathcal{M}^{\mathrm{disk}}_{k+1}(\beta)\to L\]
_is a smooth submersion._
Proof.: See [1, Section 6].
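As a dimension sanity check (immediate from the formula in Theorem 8.4): for a basic class \(\beta_{i}\) with \(i(\beta_{i})=2\) and \(k=0\),

\[\dim\mathcal{M}^{\mathrm{disk}}_{1}(\beta_{i})=n+i(\beta_{i})+0-2=n=\dim L,\]

so \(\mathrm{ev}:\mathcal{M}^{\mathrm{disk}}_{1}(\beta_{i})\to L\) is a submersion between manifolds of equal dimension.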
A consequence is that each stratum of the compactification \(\overline{\mathcal{M}}^{\mathrm{disk}}_{k+1}(\beta)\) is regular. To be more precise, let \(\Gamma\) denote a ribbon tree representing the combinatorial type of a nodal disk (with \(k\) inputs and \(1\) output) with each vertex labelled by a disk class whose sum is equal to \(\beta\). Then there is a stratum \(\mathcal{M}^{\mathrm{disk}}_{\Gamma}\subset\overline{\mathcal{M}}^{\mathrm{ disk}}_{k+1}(\beta)\).
**Corollary 8.5**.: _Each stratum \(\mathcal{M}^{\mathrm{disk}}_{\Gamma}\subset\overline{\mathcal{M}}^{\mathrm{disk}}_{k+1}(\beta)\) is regular and the evaluation map \(\mathrm{ev}:\mathcal{M}^{\mathrm{disk}}_{\Gamma}\to L\) is a submersion._
Proof.: See [11, Corollary 6.2].
#### 8.2.1. Treed holomorphic quasimaps
The idea of treed holomorphic disks goes back to Cornea-Lalonde [1, 1]. We recall the adaptation by Woodward [11] in order to define the quasimap \(A_{\infty}\) algebras. Throughout our discussion, we fix a smooth perfect Morse function \(f_{L}:L\to\mathbb{R}\) defined over the Lagrangian torus \(L\subset X\), which has exactly \(2^{n}\) critical points.
Given a treed disk \(C=S\cup T\) of type \(\Gamma\), suppose we have a domain-dependent perturbation \(f\) of the Morse function \(f_{L}:L\to\mathbb{R}\) parametrized by points \(t\) on the tree part \(T\). A **treed holomorphic quasimap** on \(C\) is then a collection of objects
\[\left((u_{v})_{v\in V^{\mathrm{finite}}_{\Gamma}},(x_{e})_{e\in E_{\Gamma}}\right)\]
where for each finite vertex \(v\in V_{\Gamma}^{\mathrm{finite}}\), we assign a smooth map \(u_{v}:S_{v}\to V\) satisfying
\[\overline{\partial}u_{v}=0,\ u_{v}(\partial S_{v})\subset\widehat{L},\]
\(x_{e}:I_{e}\to L\) is a smooth map satisfying
\[x_{e}^{\prime}(t)+\nabla f(x_{e}(t))=0;\]
moreover, the matching condition requires 1) for each node joining a boundary point \(z\) of some surface component \(S_{v}\) and a finite end of an edge \(e\), the value of \(x_{e}\) at that end lies in the same \(K\)-orbit as the value \(u_{v}(z)\); 2) for each infinite vertex \(v\in V_{\Gamma}^{\infty}\) joining two (semi-)infinite edges \(e_{1}\) and \(e_{2}\), the limits of \(x_{e_{1}}\) and \(x_{e_{2}}\) at the corresponding infinities agree. Here to ensure the convergence of the maps \(x_{e}\), we require that the perturbation \(f\) is supported away from the infinities.
Two treed holomorphic quasimaps are regarded as **equivalent** if after identifying domains, the maps on corresponding surfaces parts are \(K\)-equivalent (recall \(K\) is the gauge group).
To define the \(A_{\infty}\) structure (or other structures) one would like to regularize the moduli spaces of equivalence classes of treed holomorphic quasimaps and their boundaries. One first needs to use coherent systems of perturbations to describe such moduli spaces.
#### 8.2.2. Perturbations for the \(A_{\infty}\) algebra
To achieve transversality relevant for defining the \(A_{\infty}\) algebra, we only need to perturb the Morse function on edges. Hence for a given _minimal_ metric type \(\Gamma\), a domain-dependent perturbation can be viewed as a map
\[P_{\Gamma}:\overline{\mathcal{U}\mathcal{T}}_{\Gamma}\to C^{\infty}(L).\]
We require any such perturbation to vanish near infinities, i.e., vanish near the closed subset
\[\overline{\mathcal{U}\mathcal{T}}_{\Gamma}^{\infty}\subset\overline{ \mathcal{U}\mathcal{T}}_{\Gamma}\]
corresponding to positions of vertices at infinity. Notice that if \(\Gamma\) is not necessarily stable, a perturbation \(P_{\Gamma^{\mathrm{min}}}\) for the minimal form is enough to determine the treed holomorphic map on any treed disk \(C\) of type \(\Gamma\). Indeed, on the infinite edges of \(C\) (if any) the negative gradient flow equation is taken for the unperturbed Morse function \(f_{L}\).
In order to establish the \(A_{\infty}\) relation, we also need to require that, if \(\Gamma\) degenerates to \(\Pi\), then the restriction of \(P_{\Gamma}\) to the stratum \(\overline{\mathcal{UT}}_{\Pi}\subset\overline{\mathcal{UT}}_{\Gamma}\) must agree with the perturbation \(P_{\Pi}\) which has been chosen for the minimal domain type \(\Pi\). Hence we need to construct a _coherent_ system of perturbations indexed by all minimal domain types \(\Gamma\). To use the Sard-Smale theorem to prove that generic perturbations are regular, we also need to specify the neighborhood of \(\overline{\mathcal{UT}}_{\Gamma}^{\infty}\) where we require the perturbation to vanish; such choices of neighborhoods also need to be coherent.
Another complexity in this procedure is that we need to work with unstable domains (as in [14], see also [1]), unlike the cases of [10, 11] where domains are always stable. Here we give a different way of writing Woodward's perturbation scheme for unstable trees (see [14, Section 3]). Given a minimal domain type \(\Gamma\), an **indexing function** is a map \(\vec{n}:V_{\Gamma}^{\mathrm{finite}}\to\mathbb{Z}_{\geq 0}\), whose values are denoted by \(n_{v}\), satisfying that \(n_{v}\geq 1\) when \(v\) is an unstable vertex. One should regard the values of \(\vec{n}\) as one half of the Maslov indices of disk components. We consider perturbations which depend also on such indexing functions.
**Definition 8.6**.: A **coherent family of domain-dependent perturbations** is a collection of continuous maps
\[P_{\Gamma,\vec{n}}^{\mathrm{qd}}:\overline{\mathcal{U}\mathcal{T}}_{\Gamma} \to C^{\infty}(L)\]
indexed by all minimal domain types \(\Gamma\) and all indexing functions \(\vec{n}:V_{\Gamma}^{\mathrm{finite}}\to\mathbb{Z}_{\geq 0}\) satisfying the following conditions.
1. For \(\Gamma\) the tree with a single vertex, no input, and one output, the Morse function on the outgoing edge is the unperturbed function \(f_{L}\).
2. When \(\Gamma\) degenerates to \(\Pi\), there is a canonical surjective map \(\rho:V_{\Pi}^{\mathrm{finite}}\to V_{\Gamma}^{\mathrm{finite}}\). Hence any indexing function \(\vec{n}_{\Pi}:V_{\Pi}\to\mathbb{Z}_{\geq 0}\) induces an indexing function \(\vec{n}_{\Gamma}:V_{\Gamma}\to\mathbb{Z}_{\geq 0}\). We require that \[P_{\Gamma,\vec{n}_{\Gamma}}^{\mathrm{qd}}\,|_{\overline{\mathcal{U}\mathcal{T}}_{\Pi}}=P_{\Pi,\vec{n}_{\Pi}}^{\mathrm{qd}}.\]
3. When \(\Gamma\) is broken with unbroken components \(\Gamma_{1},\ldots,\Gamma_{s}\), the indexing function \(\vec{n}\) on \(\Gamma\) is defined by assembling the indexing functions \(\vec{n}_{1},\ldots,\vec{n}_{s}\) on \(\Gamma_{1},\ldots,\Gamma_{s}\). Then \(P_{\Gamma,\vec{n}}^{\mathrm{qd}}\) is required to be naturally induced from the \(P_{\Gamma_{i},\vec{n}_{i}}^{\mathrm{qd}}\).
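Purely as an aid to parsing Definition 8.6 (our sketch; all names are hypothetical and no analytic content is modeled), the coherence conditions amount to compatibility constraints on a table of perturbations indexed by pairs \((\Gamma,\vec{n})\):

```python
# Sketch of the bookkeeping behind Definition 8.6 (illustrative only).
# A "perturbation" is an opaque value; restriction to a boundary stratum
# is supplied by the user as a function.
from typing import Callable, Dict, Hashable, Tuple

Key = Tuple[Hashable, Tuple[int, ...]]    # (minimal domain type, indexing function)

class CoherentFamily:
    def __init__(self, restrict: Callable[[object, Hashable], object]):
        self.table: Dict[Key, object] = {}
        self.restrict = restrict          # models P |_{closure of the Pi-stratum}

    def set(self, gamma: Hashable, n: Tuple[int, ...], P: object) -> None:
        self.table[(gamma, n)] = P

    def coherent_at(self, gamma: Hashable, n_gamma: Tuple[int, ...],
                    pi: Hashable, n_pi: Tuple[int, ...]) -> bool:
        # Condition (2): restriction to the stratum of Pi agrees with P_{Pi, n_pi}.
        return self.restrict(self.table[(gamma, n_gamma)], pi) == self.table[(pi, n_pi)]
```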
#### 8.2.3. Compactification and transversality
Let \(\Gamma\) be a possibly unstable, non-minimal domain type. A **map type** over \(\Gamma\), denoted by \(\mathbf{\Gamma}\), assigns to each finite vertex \(v\in V_{\Gamma}^{\mathrm{finite}}\) a disk class \(\beta_{v}\) (with nonnegative Maslov index) and to each vertex at infinity \(v\in V_{\Gamma}^{\infty}\) a critical point \(x_{v}\in\mathrm{crit}f_{L}\). A map type \(\mathbf{\Gamma}\) induces an indexing function \(\vec{n}\) on the minimal form \(\Gamma^{\mathrm{min}}\) by setting \(n_{v}\) to be half of the Maslov index of \(\beta_{v}\), adding the contributions together if several vertices are connected by finite edges of length zero. Then one uses the perturbation \(P_{\Gamma^{\mathrm{min}},\vec{n}}^{\mathrm{qd}}\) to define a moduli space \(\mathcal{M}_{\mathbf{\Gamma}}\) of treed holomorphic disks. The topology of \(\mathcal{M}_{\mathbf{\Gamma}}\) is defined in the usual way.
Given a perturbation, the moduli space \(\mathcal{M}_{\mathbf{\Gamma}}\) is the zero locus of a Fredholm section on a certain Banach manifold. We say that the moduli space \(\mathcal{M}_{\mathbf{\Gamma}}\) is regular if the Fredholm section is transverse (this notion is independent of the corresponding Sobolev completions of the space of smooth maps). We say that a coherent system of perturbations is **regular** if all moduli spaces \(\mathcal{M}_{\mathbf{\Gamma}}\) are regular.
Now we consider possible degenerations of treed holomorphic disks. In general, a sequence of treed holomorphic disks of a fixed map type \(\mathbf{\Gamma}\) can converge to a limit by breaking an edge, shrinking an edge to zero, or bubbling off holomorphic disks. Notice that because \(V\) is a vector space and we do not have interior markings, there cannot be any sphere bubbles in the limit. The notion of convergence is standard and its definition is omitted here. As the perturbation system is coherent, any limiting object (of a possibly different map type \(\mathbf{\Pi}\)) is also a treed holomorphic disk defined using a corresponding perturbation \(P_{\Pi^{\mathrm{min}},\vec{n}}^{\mathrm{qd}}\), hence an element in \(\mathcal{M}_{\mathbf{\Pi}}\). We denote
\[\overline{\mathcal{M}}_{\mathbf{\Gamma}}:=\bigsqcup_{\mathbf{\Pi}\preceq \mathbf{\Gamma}}\mathcal{M}_{\mathbf{\Pi}}\]
where by abuse of notation, \(\preceq\) is the natural partial order among map types induced from the notion of convergence.
**Proposition 8.7**.: _There exists a coherent system of perturbation data such that every moduli space \(\mathcal{M}_{\mathbf{\Gamma}}\) is regular._
Proof.: The proof is an inductive construction with respect to the partial order \(\Pi\preceq\Gamma\) among minimal domain types and the indexing functions \(\vec{n}\). First one can check, using the Blaschke formula (Theorem 8.3), that the specification of item (1) in Definition 8.6 makes the relevant configurations transverse. Then, once regular perturbations on all boundary strata of \(\mathcal{UT}_{\Gamma}\) have been fixed, one can use the Sard-Smale theorem to find regular extensions to the interior. See [21, Corollary 6.2] for details.
Now we consider the compactification of moduli spaces. A map type \(\mathbf{\Gamma}\) is called **essential** if it is unbroken and has no boundary edges of length zero. Given a collection \(\boldsymbol{x}=(x_{1},\ldots,x_{k};x_{\infty})\) of critical points of the Morse function \(f_{L}\), for \(i=0,1\), let
\[\mathcal{M}^{qd}(x_{1},\ldots,x_{k};x_{\infty})_{i}:=\bigcup_{\Gamma}\mathcal{ M}_{\mathbf{\Gamma}}\]
where the union is taken over all essential map types of index \(i\) whose vertices at infinities are labelled by \(\mathbf{x}\).
**Lemma 8.8**.: _If \(i=0\), the moduli space \(\mathcal{M}^{qd}(x_{1},\dots,x_{k};x_{\infty})_{0}\) is discrete and has finitely many points below any given energy bound. If \(i=1\), the compactified moduli space \(\overline{\mathcal{M}}^{qd}(x_{1},\dots,x_{k};x_{\infty})_{1}\) is a 1-dimensional (topological) manifold with boundary, which is compact below any given energy bound._
Proof.: For the zero-dimensional moduli space, the claimed finiteness follows from compactness and transversality. For the one-dimensional moduli space, the fact that it is a 1-dimensional manifold with boundary follows from transversality and compactness, together with the standard gluing construction.
Moreover, the moduli spaces are all oriented. The orientation depends on choices of orientations of unstable manifolds of critical points of \(f_{L}\) and the orientations of moduli spaces of quasidisks; the latter depends on the orientation of the Lagrangian torus and the spin structure, which we fix from the beginning. Notice that these choices can be made independent of the position \(\mathbf{u}\in\mathrm{Int}P\) in the interior of the moment polytope.
#### 8.2.4. Quasimap Fukaya \(A_{\infty}\) algebra
We would like to define a (family of) cohomologically unital \(A_{\infty}\) algebra(s) over \(\Lambda_{\overline{\mathbb{Q}}}\) associated to the Lagrangian torus fibers of the moment map. Given a Lagrangian torus \(L=L(\mathbf{u})\subset X\), a **local system** on \(L\) is a homomorphism
\[\mathbf{y}:H_{1}(L;\mathbb{Z})\to\exp(\Lambda_{0,\overline{\mathbb{Q}}}).\]
Introduce the notation \(\mathbf{L}=(L,\mathbf{y})\). We denote the corresponding bulk-deformed \(A_{\infty}\) algebra of \(\mathbf{L}\) by \(\mathcal{F}_{\mathfrak{b}}(\mathbf{L})\), which is defined as follows. First, the underlying \(\mathbb{Z}_{2}\)-graded vector space is
\[QCF_{\mathfrak{b}}^{\bullet}(\mathbf{L};\Lambda_{\overline{\mathbb{Q}}}):=\mathrm{Span}_{\Lambda_{\overline{\mathbb{Q}}}}\mathrm{crit}f_{L}\cong(\Lambda_{\overline{\mathbb{Q}}})^{2^{n}}\]
where the degree of a critical point \(x\in\mathrm{crit}f_{L}\) is \(|x|=n-\mathrm{index}(x)\ \mathrm{mod}\ 2\). Given critical points \(x_{1},\dots,x_{k}\), define
\[m_{k}(x_{k},\dots,x_{1})=\sum_{x_{\infty}}(-1)^{\heartsuit}\left(\sum_{[u]\in \mathcal{M}^{qd}(x_{1},\dots,x_{k};x_{\infty})_{0}}\mathfrak{b}([u])T^{E([u]) }\mathbf{y}^{\partial[u]}\epsilon([u])\right)x_{\infty}. \tag{8.2}\]
We explain the terms below; a toy assembly of such a count is sketched after the list.
1. The sign \(\heartsuit\) is defined as \[\heartsuit:=\sum_{i=1}^{k}i|x_{i}|\in\mathbb{Z}_{2}.\] (8.3)
2. For each disk \(u\) with boundary on \(\widehat{L}\), as \(\widehat{L}\) does not intersect the bulk, the topological intersection numbers \(u\cap V_{j}\) are well-defined, and we set \[\mathfrak{b}([u]):=\prod_{j=1}^{N}c_{j}^{u\cap V_{j}},\] which only depends on the \(K\)-equivalence class \([u]\). Notice that if \(c_{j}\in\mathbb{Z}[\mathbf{i}]\), then so is \(\mathfrak{b}([u])\).
3. \(E([u])\in\mathbb{R}\) is the energy of \([u]\).
4. \(\mathbf{y}^{\partial[u]}\in\exp(\Lambda_{0,\overline{\mathbb{C}}})\) is the value of the local system \(\mathbf{y}\) on the loop \(\partial[u]\subset L\).
5. \(\epsilon([u])\in\{\pm 1\}\) is determined by the orientation of the zero-dimensional moduli space.
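To make the bookkeeping in (8.2) concrete, here is a toy assembly of such a weighted count in Python/SymPy. The disk data below are hypothetical placeholders (they mimic the two Maslov index two disks on a fiber of \(\mathbb{P}^{1}\), cf. Remark 8.25); the point is only how the weights \(\mathfrak{b}([u])\), \(T^{E([u])}\), \(\mathbf{y}^{\partial[u]}\) and \(\epsilon([u])\) combine.

```python
# Toy assembly of a weighted count as in (8.2); all disk data are hypothetical.
import sympy as sp

T, y, u = sp.symbols('T y u', positive=True)   # Novikov variable, holonomy, area parameter
c1, c2 = sp.symbols('c1 c2')                   # bulk weights exp(log c_j)
disks = [                                      # ([u]∩V_1, [u]∩V_2, energy, ∂[u], sign)
    (1, 0, u,     +1, +1),
    (0, 1, 1 - u, -1, +1),
]
m0 = sum(c1**a * c2**b * T**E * y**d * eps for (a, b, E, d, eps) in disks)
print(sp.simplify(m0))                         # c1*T**u*y + c2*T**(1 - u)/y
```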
Similar to previous cases involving bulk deformations, the expression (8.2) is a legitimate element of \(\mathit{QCF}^{\bullet}_{\mathfrak{b}}(\boldsymbol{L};\Lambda_{\overline{\mathbb{Q} }})\). Extending linearly, one obtains a linear map
\[m_{k}:\mathit{QCF}^{\bullet}_{\mathfrak{b}}(\boldsymbol{L};\Lambda_{\overline{ \mathbb{Q}}})^{\otimes k}\to\mathit{QCF}^{\bullet}_{\mathfrak{b}}(\boldsymbol {L};\Lambda_{\overline{\mathbb{Q}}}).\]
Notice that when \(k=0\), this is a linear map
\[m_{0}:\Lambda_{\overline{\mathbb{Q}}}\to\mathit{QCF}^{\bullet}_{\mathfrak{b}} (\boldsymbol{L};\Lambda_{\overline{\mathbb{Q}}}).\]
**Theorem 8.9** ([10]).: _The collection of linear maps \(m_{0},m_{1},\ldots\) defines a curved \(A_{\infty}\) algebra structure on \(\mathit{QCF}^{\bullet}_{\mathfrak{b}}(\boldsymbol{L};\Lambda_{\overline{\mathbb{Q}}})\), denoted by \(\mathcal{F}_{\mathfrak{b}}(\boldsymbol{L})\). Moreover, if \(x_{\max}\) is the unique maximal point of \(f_{L}\), then \(\mathbf{e}=x_{\max}\) is a cohomological unit of \(\mathcal{F}_{\mathfrak{b}}(\boldsymbol{L})\), namely \(m_{1}(\mathbf{e})=0\) and_
\[(-1)^{|x|}m_{2}(\mathbf{e},x)=m_{2}(x,\mathbf{e})=x,\ \forall x\in\mathit{QCF}^{\bullet}_{\mathfrak{b}}(\boldsymbol{L};\Lambda_{\overline{\mathbb{Q}}}).\]
Proof.: See [10, Theorem 3.6] for the case without bulk deformation. One can verify that the case with bulk deformation can be proved in the same way.
_Remark 8.10_.: The \(A_{\infty}\) algebra can be defined over \(\mathbb{Z}\) as long as the bulk deformation has integer coefficients, though we do not need such a fact in our discussion.
#### 8.2.5. Potential function and nontrivial Floer cohomology
Although the quasimap Fukaya algebra is only cohomologically unital, one can still define the potential function.
**Proposition 8.11**.: _For the quasimap \(A_{\infty}\) algebra \(\mathit{QCF}(\boldsymbol{L};\Lambda_{\overline{\mathbb{Q}}})\), \(m_{0}(1)\) is a multiple of \(\mathbf{e}\)._
Proof.: See [10, Proposition 3.7] for the case with \(\mathfrak{b}=0\). When we use a nontrivial (small) bulk deformation, as we only change the weights in counting but do not modify the perturbation method, the same proof goes through.
**Definition 8.12**.: Define \(W_{\mathfrak{b}}(\mathbf{u}):H_{1}(L(\mathbf{u});\exp(\Lambda_{0,\overline{\mathbb{Q}}}))\to\Lambda\) by
\[m_{0}(1)=W_{\mathfrak{b}}(\mathbf{u})(\mathbf{y})\mathbf{e}\]
and call it the **potential function** of the brane \(\boldsymbol{L}=(L(\mathbf{u}),\mathbf{y})\). By abuse of terminology, we also call \(W_{\mathfrak{b}}\) the bulk-deformed potential function of the Lagrangian \(L(\mathbf{u})\) or of the toric manifold.
Let \((\mathbb{C}^{*})^{n}\cong X^{*}\subset X\) be the complement of toric divisors. Choose a trivialization
\[\tau_{X}:\mathrm{Int}P\times T^{n}\to X^{*}\]
which is unique up to isotopy and which induces a well-defined trivialization
\[\bigsqcup_{\mathbf{u}\in\mathrm{Int}P}H_{1}(L(\mathbf{u});\exp(\Lambda_{0,\overline{\mathbb{Q}}}))=\mathrm{Int}P\times(\exp(\Lambda_{0,\overline{\mathbb{Q}}}))^{n}.\]
The bulk-deformed **quasimap disk potential** of the toric manifold \(X\) is defined by
\[W_{\mathfrak{b}}:\mathrm{Int}P\times(\exp\Lambda_{0})^{n} \to\Lambda\] \[(\mathbf{u},\mathbf{y}) \mapsto W_{\mathfrak{b}}(\mathbf{u})(\mathbf{y}).\]
Now we can define the quasimap Floer cohomology. By the \(A_{\infty}\) relation, for any \(x\in\mathit{QCF}(\boldsymbol{L};\Lambda_{\overline{\mathbb{Q}}})\),
\[m_{1}(m_{1}(x))+(-1)^{\|x\|}m_{2}(m_{0}(1),x)+m_{2}(x,m_{0}(1))=0.\]
By Theorem 8.9, the last two terms cancel. Hence \(m_{1}^{2}=0\). Hence one can define the \(\mathfrak{b}\)-deformed **quasimap Floer cohomology** of the brane \(\boldsymbol{L}\) to be
\[\mathit{QHF}^{\bullet}_{\mathfrak{b}}(\boldsymbol{L};\Lambda_{\overline{ \mathbb{Q}}}):=\mathrm{ker}m_{1}/\mathrm{im}m_{1}.\]
Following [10][1], to find nontrivial Floer cohomology, one needs to establish a version of the divisor equation. Recall that \(L\cong(S^{1})^{n}\) with \(H_{1}(L;\mathbb{Z})\cong\mathbb{Z}^{n}\). The perfect Morse function \(f_{L}\) has exactly \(n\) critical points of Morse index \(1\), whose homology classes are identified with the \(n\) standard generators of \(H_{1}(L;\mathbb{Z})\). If \(x_{1},\dots,x_{n}\) are these generators, then any local system \(\mathbf{y}\) is determined by the values
\[y_{1}=\mathbf{y}(x_{1}),\dots,y_{n}=\mathbf{y}(x_{n}).\]
**Theorem 8.13**.: _If \(x\) is a generator of \(H_{1}(L;\mathbb{Z})\), then_
\[m_{1}(x)=\partial_{x}W_{\mathfrak{b}}(\mathbf{u})(y_{1},\dots,y_{n})\,\mathbf{e}.\]
Proof.: In the absence of bulk deformation, this is established in [11, Section 3.6]; the argument carries over to our case.
Lagrangian branes with nontrivial Floer cohomology can be identified with critical points of the potential function.
**Theorem 8.14**.: _(cf. [11, Theorem 6.6]) If \(\mathbf{y}=(y_{1},\dots,y_{n})\) is a critical point of \(W_{\mathfrak{b}}(\mathbf{u})\), then the Floer cohomology of \(\boldsymbol{L}(\mathbf{u})=(L(\mathbf{u}),\mathbf{y})\) is isomorphic to \(H^{\bullet}(L(\mathbf{u});\Lambda_{\overline{\mathbb{Q}}})\)._
Proof.: The case with \(\mathfrak{b}=0\) is given by [11, Theorem 6.6]. When we have a nonzero small bulk deformation, it is still a consequence of the divisor equation (Theorem 8.13).
### Critical points of the Givental-Hori-Vafa potential
In this subsection we study various properties of the deformed Givental-Hori-Vafa potential which arises from disk counting in gauged linear sigma model.
We first recall the expression of the Givental-Hori-Vafa potential in terms of the data of the moment polytope and explain its relation with the quasimap disk potential. Let \(\Delta\subset\mathbb{R}^{n}\) be the moment polytope of \(X\), described by
\[\Delta=\Big{\{}u\in\mathbb{R}^{n}\ |\ l_{j}(u)=\langle u,v_{j}\rangle- \lambda_{j}\geq 0,\ j=1,\dots,N\Big{\}}.\]
Here \(v_{j}=(v_{j,1},\dots,v_{j,n})\in\mathbb{Z}^{n}\), \(j=1,\dots,N\) are the inward normal vectors of each codimension \(1\) face of \(\Delta\) coming from the toric data and \(\lambda_{j}\in\mathbb{R}\). The **Givental-Hori-Vafa potential** of \(X\) (or rather its moment polytope) is the element
\[W_{0}=\sum_{j=1}^{N}T^{-\lambda_{j}}y^{v_{j}}:=\sum_{j=1}^{N}T^{- \lambda_{j}}y_{1}^{v_{j,1}}\cdots y_{n}^{v_{j,n}}\in\Lambda[y_{1},\dots,y_{n},y_{1}^{-1},\dots,y_{n}^{-1}].\]
More generally, given any small bulk deformation \(\mathfrak{b}=\sum_{j=1}^{N}\log c_{j}V_{j}\), the deformed Givental-Hori-Vafa potential is defined to be
\[W_{\mathfrak{b}}=\sum_{j=1}^{N}c_{j}T^{-\lambda_{j}}y^{v_{j}}.\]
Without loss of generality, we assume that the origin \(0\in\mathbb{R}^{n}\) is contained in the interior of \(\Delta\). Hence all \(\lambda_{j}\) are positive.
**Definition 8.15**.: A point \(\boldsymbol{\eta}=(\eta_{1},\dots,\eta_{n})\in(\Lambda\setminus\{0\})^{n}\) is called a **critical point** of \(W_{\mathfrak{b}}\) if
\[\left(y_{1}\frac{\partial W_{\mathfrak{b}}}{\partial y_{1}}\right)(\eta_{1}, \dots,\eta_{n})=\cdots=\left(y_{n}\frac{\partial W_{\mathfrak{b}}}{\partial y _{n}}\right)(\eta_{1},\dots,\eta_{n})=0.\]
A critical point \(\boldsymbol{\eta}\) is called **nondegenerate** if
\[\det\left(\eta_{i}\eta_{j}\frac{\partial^{2}W_{\mathfrak{b}}}{\partial y_{i} \partial y_{j}}(\boldsymbol{\eta})\right)\neq 0.\]
\(W_{\mathfrak{b}}\) is called a **Morse function** if all the critical points are nondegenerate.
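To illustrate Definition 8.15 in the simplest Fano case (a standard example which we include for the reader's convenience), take \(X=\mathbb{P}^{2}\) with moment polytope \(\Delta=\{u_{1}\geq 0,\ u_{2}\geq 0,\ 1-u_{1}-u_{2}\geq 0\}\) and \(\mathfrak{b}=0\), so that
\[W_{0}=y_{1}+y_{2}+Ty_{1}^{-1}y_{2}^{-1}.\]
The critical point equations \(y_{1}\partial_{y_{1}}W_{0}=y_{2}\partial_{y_{2}}W_{0}=0\) read \(y_{1}=y_{2}=Ty_{1}^{-1}y_{2}^{-1}\), whence \(y_{1}=y_{2}=\zeta T^{1/3}\) with \(\zeta^{3}=1\). All three critical points are nondegenerate with distinct critical values \(3\zeta T^{1/3}\), and their valuation vectors \((1/3,1/3)\) lie in \(\mathrm{Int}\Delta\); the count \(3=\dim H_{\bullet}(\mathbb{P}^{2})\) anticipates Proposition 8.18 below.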
Observe that the Givental-Hori-Vafa potential is very similar to the quasidisk potential; the latter has a dependence on \(u\in\mathrm{Int}\Delta\). Indeed, the disk potential of the Lagrangian \(L(u)\) with a local system \(\boldsymbol{y}\in(\exp(\Lambda_{0}))^{n}\) is
\[W_{\mathfrak{b}}(T^{u_{1}}y_{1},\ldots,T^{u_{n}}y_{n}).\]
This is proved by [11, Corollary 6.4] in the absence of bulk deformations, and the bulk-deformed version follows from the same argument by the Blaschke formula.
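For instance, in the toy case \(X=\mathbb{P}^{1}\) with moment polytope \([0,1]\) (an illustration we add here), the Givental-Hori-Vafa potential is \(W_{0}(y)=y+Ty^{-1}\), while the disk potential of the fiber \(L(u)\) is \(W_{0}(T^{u}y)=T^{u}y+T^{1-u}y^{-1}\). The latter is critical in the local-system variable when \(y^{2}=T^{1-2u}\), which has solutions of valuation zero (namely \(y=\pm 1\)) exactly when \(u=1/2\); correspondingly, the critical points \(y=\pm T^{1/2}\) of \(W_{0}\) have valuation \(1/2\in\mathrm{Int}[0,1]\).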
Hence a critical point of \(W_{\mathfrak{b}}\) corresponds to a Floer-nontrivial Lagrangian if the valuations of the coordinates of the critical point lie in the interior of the moment polytope. On the other hand, in view of mirror symmetry, the Jacobian ring of the Givental-Hori-Vafa potential, or formally the ring of functions on the critical locus, is closely related to the quantum cohomology under mirror symmetry. However, their ranks agree only in the Fano case. In general, certain critical points fall outside the moment polytope and do not correspond to cohomology classes of the toric manifold.
_Example 8.16_.: Consider the \(n\)-th Hirzebruch surface \(F_{n}\) (\(n\geq 1\)) whose moment polytope is
\[\Delta=\left\{u=(u_{1},u_{2})\in\mathbb{R}^{2}\ \left|\begin{array}{c}l_{1}(u)=u_{ 1}\geq 0,\\ l_{2}(u)=u_{2}\geq 0,\\ l_{3}(u)=1-\alpha-u_{2}\geq 0,\\ l_{4}(u)=n-u_{1}-nu_{2}\geq 0.\end{array}\right.\right\}\]
Here \(\alpha\in(0,1)\) is a parameter. The (undeformed) Givental-Hori-Vafa potential is
\[W_{0}(y_{1},y_{2})=y_{1}+y_{2}+T^{1-\alpha}y_{2}^{-1}+T^{n}y_{1}^{-1}y_{2}^{-n}.\]
The equations for critical points are
\[y_{1}=T^{n}y_{1}^{-1}y_{2}^{-n}, y_{2}=T^{1-\alpha}y_{2}^{-1}+nT^{n}y_{1}^{-1}y_{2}^{-n}.\]
Assume \(n\) is even to simplify notations. Solving \(y_{1}\) one obtains
\[y_{1}=\pm T^{\frac{n}{2}}y_{2}^{-\frac{n}{2}}\]
and hence
\[y_{2}=T^{1-\alpha}y_{2}^{-1}\pm nT^{\frac{n}{2}}y_{2}^{-\frac{n}{2}}\Longrightarrow y_{2}^{\frac{n}{2}-1}(y_{2}^{2}-T^{1-\alpha})=\pm nT^{\frac{n}{2}}. \tag{8.4}\]
Each of the two equations has \(\frac{n}{2}+1\) roots, providing \(n+2\) critical points in total, which for \(n\geq 4\) exceeds the rank of the homology (which is \(4\)).
Notice that there are two solutions to (8.4) of the form
\[y_{2}=\pm T^{\frac{1-\alpha}{2}}+\text{higher order terms}.\]
They give \(4\) critical points whose "tropical" positions are inside the moment polytope \(\Delta\). There are also \(n-2\) roots of (8.4) whose valuations are
\[\frac{\frac{n}{2}-(1-\alpha)}{\frac{n}{2}-1}>1-\alpha.\]
They correspond to critical points which are outside the moment polytope. This ends the example.
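The root pattern in Example 8.16 is easy to check numerically. The following script (an illustration we add; not part of the argument) specializes \(T\) to a small positive number for \(F_{4}\) with \(\alpha=1/2\) and prints approximate valuations \(\log|y_{2}|/\log T\) of the roots of (8.4); for each sign one finds two values near \((1-\alpha)/2=0.25\) and one value drifting toward \(3/2\) as \(T\to 0\), and the total of \(n+2=6\) roots is in line with Kouchnirenko's bound \(2!\cdot\mathrm{Area}\) of the Newton polygon used in the proof of Theorem 8.19 below.

```python
# Numerical sanity check of Example 8.16 for F_4 (n = 4, alpha = 1/2).
# Equation (8.4): y^(n/2+1) - T^(1-alpha) * y^(n/2-1) = sign * n * T^(n/2).
import numpy as np

n, alpha, T = 4, 0.5, 1e-12
deg = n // 2 + 1                          # degree of (8.4) in y_2
for sign in (+1, -1):
    coeffs = [0.0] * (deg + 1)            # coefficients, highest degree first
    coeffs[0] = 1.0                       # y_2^(n/2+1)
    coeffs[2] = -T ** (1 - alpha)         # -T^(1-alpha) * y_2^(n/2-1)
    coeffs[deg] = -sign * n * T ** (n / 2)
    vals = np.log(np.abs(np.roots(coeffs))) / np.log(T)
    # two values near 0.25, one near 1.45 (tending to 3/2 as T -> 0)
    print(sign, np.round(np.sort(vals), 3))
```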
**Definition 8.17**.: We say that a critical point \(\eta=(\eta_{1},\dots,\eta_{n})\) of \(W_{\mathfrak{b}}\) is **inside the moment polytope**\(\Delta\) if
\[\vec{\mathfrak{v}}_{T}(\boldsymbol{\eta})=(\mathfrak{v}_{T}(\eta_{1}),\dots, \mathfrak{v}_{T}(\eta_{n}))\in\mathrm{Int}\Delta\subset\mathbb{R}^{n}.\]
Denote by
\[\mathrm{Crit}_{X}W_{\mathfrak{b}}\subset\mathrm{Crit}W_{\mathfrak{b}}\]
the set of critical points of \(W_{\mathfrak{b}}\) that are inside the moment polytope of \(X\).
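In practice one can read off the valuation vector \(\vec{\mathfrak{v}}_{T}\) numerically; the helper below (hypothetical, for illustration only) estimates it by evaluating a critical point, given as a function of \(T\), at a small value of \(T\).

```python
# Hypothetical helper estimating the valuation vector of Definition 8.17.
import numpy as np

def valuation_vector(eta_of_t, t=1e-9):
    # log|eta_i(t)| / log t approximates the T-adic valuation of eta_i
    return np.log(np.abs(np.asarray(eta_of_t(t)))) / np.log(t)

# For the P^2 critical point (T^{1/3}, T^{1/3}) discussed after Definition 8.15:
print(valuation_vector(lambda t: (t ** (1 / 3), t ** (1 / 3))))  # ~ [0.333 0.333]
```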
**Proposition 8.18**.: _Let \(\mathfrak{b}\) be an arbitrary small bulk deformation. When \(W_{\mathfrak{b}}\) is a Morse function, one has_
\[\#\mathrm{Crit}_{X}W_{\mathfrak{b}}=\mathrm{dim}H_{\bullet}(X).\]
Proof.: We use a result of Fukaya et al. [10, Theorem 2.8.1 (2)]. First, Fukaya et al. defined their bulk-deformed Lagrangian Floer disk potential \(\mathfrak{P}\mathfrak{O}_{\mathfrak{b}}\) by counting (stable) holomorphic disks inside the toric manifold (using \(T^{n}\)-equivariant Kuranishi structures). For our bulk-deformed Givental-Hori-Vafa potential function \(W_{\mathfrak{b}}\), their theorem shows that there exist a bulk deformation \(\mathfrak{b}^{\prime}\) and a "change of coordinates" \(y\mapsto y^{\prime}\) such that
\[W_{\mathfrak{b}}(y^{\prime})=\mathfrak{P}\mathfrak{O}_{\mathfrak{b}^{\prime}} (y).\]
Notice that the change of coordinate does not change the Morse property and the tropical positions of the critical points. Hence one has
\[\#\mathrm{Crit}_{X}(W_{\mathfrak{b}})=\#\mathrm{Crit}_{X}(\mathfrak{P} \mathfrak{O}_{\mathfrak{b}^{\prime}}).\]
On the other hand, by [10, Theorem 1.1.3], this number of critical points coincides with the rank of homology.
Lastly we prove the following fact.
**Theorem 8.19**.: _There exists a small bulk deformation \(\mathfrak{b}=\sum_{j=1}^{N}\log c_{j}V_{j}\) with \(c_{j}\in\mathbb{Z}[\mathfrak{i}]\) such that \(W_{\mathfrak{b}}\) is a Morse function and all critical values are distinct._
Proof.: We first show that the statement is true for generic \(\mathfrak{b}\) with complex coefficients. First, we relate the Givental-Hori-Vafa potential to a complex Laurent polynomial by evaluation at \(T=t\) for some complex number \(t\). To address the convergence issue, introduce
\[\Lambda^{\mathrm{conv}}_{0,\overline{\mathbb{Q}}}:=\left\{\sum_{i=1}^{\infty }a_{i}T^{\lambda_{i}}\in\Lambda_{0,\overline{\mathbb{Q}}}\ |\ \sum_{i=1}^{\infty}|a_{i}||t|^{\lambda_{i}}\text{ converges for }|t|\leq\epsilon\text{ for some }\epsilon>0\right\}.\]
Let \(\Lambda^{\mathrm{conv}}_{\overline{\mathbb{Q}}}\) be its field of fractions. By [10, Proposition 8.5], \(\Lambda^{\mathrm{conv}}_{\overline{\mathbb{Q}}}\) is algebraically closed. On the other hand, critical points of \(W_{\mathfrak{b}}\) are solutions to algebraic equations with coefficients in \(\Lambda^{\mathrm{conv}}_{\overline{\mathbb{Q}}}\), as the convergence holds due to the fact that \(W_{\mathfrak{b}}\) has only finitely many terms. Hence critical points are in \((\Lambda^{\mathrm{conv}}_{\overline{\mathbb{Q}}})^{n}\).
On the other hand, if we regard \(T\) as a complex number, then by Kouchnirenko's theorem [11], there is a proper analytic subset \(S\subset\mathbb{C}^{N}\) (which in particular has positive codimension) such that when
\[c(t)=(c_{1}t^{-\lambda_{1}},\dots,c_{N}t^{-\lambda_{N}})\notin S\]
the function \(W_{\mathfrak{b}}^{t}:=\sum_{j=1}^{N}c_{j}t^{-\lambda_{j}}y^{v_{j}}\) has finitely many critical points and the number of them is bounded by \(n!\) times the volume of the Newton polytope of this Laurent polynomial (which only depends on the moment polytope). As proved by Iritani [12, Proposition 3.10], we can also
guarantee that all critical points are nondegenerate. Now take a generic point \((c_{1},\dots,c_{N})\)8 so that \(c(1)\notin S\). We claim that such a point satisfies our requirement.
Footnote 8: Within this proof, being generic means being in the complement of a proper complex analytic subset.
Indeed, the map
\[c:\mathbb{C}\setminus(-\infty,0]\to\mathbb{C}^{N}\]
is an analytic map. Hence the complement of \(c^{-1}(S)\) contains points arbitrarily close to \(0\). We first show that the number of critical points of \(W_{\mathfrak{b}}\) is no greater than Kouchnirenko's bound, temporarily denoted by \(N_{\Delta}\). Indeed, if there are \(N_{\Delta}+1\) critical points, then as the coordinates of them are in \(\Lambda^{\operatorname{conv}}_{\overline{\mathbb{Q}}}\), we can evaluate them at \(T=t\) with \(|t|\) sufficiently small and \(c(t)\notin S\), obtaining more critical points of \(W_{\mathfrak{b}}^{t}\) than possible. Similarly, as we can evaluate critical points at \(|t|\) small, all critical points have to be nondegenerate.
Lastly, we prove that for generic \(\mathfrak{b}\) all critical values of \(W_{\mathfrak{b}}\) are distinct. First notice that the monomials \(W_{j}:=T^{-\lambda_{j}}y^{v_{j}}\), \(j=1,\dots,N\), separate points, i.e., given \(y^{\prime},y^{\prime\prime}\in(\mathbb{C}^{*})^{n}\) with \(y^{\prime}\neq y^{\prime\prime}\), one has \(W_{j}(y^{\prime})\neq W_{j}(y^{\prime\prime})\) for some \(W_{j}\). This is because some subset of \(n\) monomials among \(W_{1},\dots,W_{N}\) forms a coordinate system on the torus with coordinates \(y_{1},\dots,y_{n}\). Now consider the universal critical locus
\[\widetilde{\operatorname{Crit}}W:=\big{\{}(c_{1},\dots,c_{N},y_{1},\dots,y_{ n})\ |\ dW_{\mathfrak{b}}(y_{1},\dots,y_{n})=0\big{\}}.\]
Over the nondegenerate locus it is a smooth \(N\)-dimensional complex manifold and \(c_{1},\dots,c_{N}\) are local parameters. Given a nondegenerate point \((c_{1},\dots,c_{N})\), let \(y^{(1)},y^{(2)}\) be two different critical points and suppose \(W_{j}(y^{(1)})\neq W_{j}(y^{(2)})\). Deform \(c\) along \((c_{1},\dots,c_{j}+s,\dots,c_{N})\) and let the two critical points deform as \(y^{(1)}(s)\), \(y^{(2)}(s)\). Since \(dW_{s}\) vanishes at the critical points \(y^{(i)}(s)\), only the explicit \(s\)-dependence contributes, so
\[\frac{d}{ds}\left(W_{s}(y^{(1)}(s))-W_{s}(y^{(2)}(s))\right)=W_{j}(y^{(1)})-W _{j}(y^{(2)})\neq 0.\]
This means that the locus of \(c\) where two critical values coincide is cut out transversely.
Now we have shown that for generic complex \(\mathfrak{b}\), \(W_{\mathfrak{b}}\) satisfies the requirement. As the set of such complex \(\mathfrak{b}\) is open and dense, one can actually find \(\mathfrak{b}\) such that \(c_{j}\in\mathbb{Q}[\sqrt{-1}]\). Then by rescaling one can find the desired bulk deformation.
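For a concrete toy instance (our illustration, not needed for the proof): for \(X=\mathbb{P}^{1}\) the trivial bulk deformation is already convenient, as the following SymPy check of the Morse property and of the distinctness of critical values confirms.

```python
# Check that b = 0 is "convenient" for P^1: W(y) = y + T/y has two
# nondegenerate critical points with distinct critical values.
import sympy as sp

T = sp.symbols('T', positive=True)
y = sp.symbols('y')
W = y + T / y
crit = sp.solve(y * sp.diff(W, y), y)                 # [-sqrt(T), sqrt(T)]
vals = [sp.simplify(W.subs(y, c)) for c in crit]      # [-2*sqrt(T), 2*sqrt(T)]
# logarithmic Hessian (y d/dy)^2 W, nonzero at both critical points
hess = [sp.simplify((y * sp.diff(y * sp.diff(W, y), y)).subs(y, c)) for c in crit]
print(crit, vals, hess)
```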
**Definition 8.20**.: A bulk-deformation \(\mathfrak{b}=\sum_{j=1}^{N}\log c_{j}V_{j}\) with \(c_{j}\in\mathbb{Z}[\mathfrak{i}]\) is called **convenient** if \(W_{\mathfrak{b}}\) is a Morse function and all critical values are distinct.
### Homotopy units
The \(A_{\infty}\) algebra constructed using our perturbation scheme only has cohomological units. In order to establish strict unitality one needs the system of perturbations to satisfy an additional property with respect to the operation of forgetting boundary inputs and stabilizing. This is difficult to achieve (in contrast to the case of [10]). Here we use a typical method of constructing a homotopy unit which has appeared in [10][11][12][13][14][15][16][17][18][19], etc.
**Definition 8.21**.: [12, Section 4.3] Let \((A,\mathbf{e})\) be a cohomologically unital \(A_{\infty}\) algebra over \(\Lambda_{\mathbb{K}}\). A **homotopy unit structure** on \((A,\mathbf{e})\) is an \(A_{\infty}\) structure on the \(\Lambda_{\mathbb{K}}\)-module
\[A^{+}=A\oplus\Lambda_{\mathbb{K}}\mathbf{f}[1]\oplus\Lambda_{\mathbb{K}} \mathbf{e}^{+}\]
such that the \(A_{\infty}\) composition maps on \(A^{+}\) restrict to the \(A_{\infty}\) composition maps on \(A\), \(m_{1}(\mathbf{f})=\mathbf{e}^{+}-\mathbf{e}\), and such that \(\mathbf{e}^{+}\) is a strict unit, i.e.
\[(-1)^{|a|}m_{2}(\mathbf{e}^{+},a)=m_{2}(a,\mathbf{e}^{+})=a, m_{k}(\cdots,\mathbf{e}^{+},\cdots)=0\ \forall k\neq 2.\]
To construct a homotopy unit, one needs to include a collection of extra moduli spaces. Consider **weighted ribbon trees**\(\Gamma\) whose vertices at infinity \(v\in V_{\Gamma}^{\infty}\) are either **unweighted** or **weighted**. We require that when \(v\) is an output or a breaking, it must be unweighted. Each weighted boundary input carries an additional parameter \(\rho\in[0,1]\). Therefore a moduli space of weighted metric ribbon trees has additional parameters from weighted inputs. We require that the perturbations \(P_{\Gamma,\vec{n}}^{\text{disk}}\) on any (minimal) tree \(\Gamma\) also depend on these parameters. Moreover, we require that
1. When all inputs are unweighted, the perturbation on this tree coincides with the perturbation we have chosen to define the cohomologically unital Fukaya algebra \(\mathcal{F}_{\mathbf{b}}(\boldsymbol{L})\).
2. For each weighted input, when the parameter \(\rho=0\), the perturbation on this tree agrees with the perturbation for the tree \(\Gamma^{\prime}\) obtained by changing this weighted input to an unweighted input.
3. For each weighted input \(v\in V_{\Gamma}^{\infty}\), when the parameter \(\rho=1\), the perturbation \(P_{\Gamma,\vec{n}}^{\text{disk}}\) on this tree agrees with the perturbation obtained by pulling back a perturbation \(P_{\Gamma^{\prime},\vec{n}^{\prime}}^{\text{disk}}\) via the forgetful map. Here \(\Gamma^{\prime}\) is defined as follows. Suppose \(v\) is attached to a finite vertex \(v^{\prime}\). If \(n_{v^{\prime}}>0\), or if \(v^{\prime}\) remains stable after forgetting \(v\), then \(\Gamma^{\prime}\) is obtained from \(\Gamma\) by removing \(v\); if \(n_{v^{\prime}}=0\) and \(v^{\prime}\) becomes unstable after removing \(v\), then \(\Gamma^{\prime}\) is obtained from \(\Gamma\) by removing \(v\) and contracting \(v^{\prime}\) to the next adjacent finite vertex. See Figure 2 for an illustration of this operation.
Now we need to define the additional composition maps \(m_{k}^{+}\) on \(A^{+}\) when the inputs involve the new generators \(\mathbf{f}\) and \(\mathbf{e}^{+}\), and prove the \(A_{\infty}\) relation for this enlarged set of compositions. We first define
\[m_{k}^{+}(\cdots,\mathbf{e}^{+},\cdots)\]
according to the requirement of a strict unit. Then we need to define \(m_{k}^{+}\) when the variables are either the original generators of \(A\) or the element \(\mathbf{f}\). To define this, we require that the incoming edges corresponding to weighted inputs converge to the unique maximal point of the Morse function \(f_{L}:L\to\mathbb{R}\), and count \(0\)-dimensional moduli spaces. A consequence of the fact that all quasidisks have positive Maslov index is that
\[m_{k}^{+}(\mathbf{f},\cdots,\mathbf{f})=0\ \forall k\geq 2.\]
Figure 2. Forgetting a weighted input.
We need to verify the \(A_{\infty}\) relation for all \(m_{k}^{+}\). Recall that the \(A_{\infty}\) relation reads
\[\sum_{j=0}^{k}\sum_{i=0}^{k-j}(-1)^{\|x_{1}\|+\cdots+\|x_{i}\|}m_{k-j+1}^{+}(x_{k},\ldots,x_{i+j+1},m_{j}^{+}(x_{i+j},\ldots,x_{i+1}),x_{i},\ldots,x_{1})=0.\]
We only need to verify this for the case when all variables are generators of \(A^{+}\). When all of them are old generators of \(A\), this is the same as the original \(A_{\infty}\) relation for \(m_{k}\); when some variable is \(\mathbf{e}^{+}\), this can be verified from the requirement that \(\mathbf{e}^{+}\) satisfies the equations for a strict unit. Now assume that all variables are either old generators or \(\mathbf{f}\). Consider \(1\)-dimensional moduli spaces with this fixed sequence of inputs and consider their boundary strata. In addition to the strata corresponding to boundary edge breakings, there are additional boundary strata corresponding to the parameter \(\rho\) on a weighted input turning to \(0\) or \(1\). These strata correspond to the terms \(m_{k}^{+}(\cdots,m_{1}^{+}(\mathbf{f}),\cdots)\) in the \(A_{\infty}\) relation. Hence the \(A_{\infty}\) relation for \(m_{k}^{+}\) is verified. We summarize the above discussion as follows.
**Proposition 8.22**.: _There exists a homotopy unit structure on the cohomologically unital \(A_{\infty}\) algebra \(\mathcal{F}_{\mathfrak{b}}(\boldsymbol{L})\). Denote the corresponding strictly unital \(A_{\infty}\) algebra by \(\mathcal{F}_{\mathfrak{b}}^{+}(\boldsymbol{L})\). Moreover, if we denote the element whose coboundary relates \(\mathbf{e}\) and \(\mathbf{e}^{+}\) by \(\mathbf{f}_{\boldsymbol{L}}\), then one has_
\[m_{k}^{+}\Big{(}\underbrace{\mathbf{f}_{\boldsymbol{L}},\ldots,\mathbf{f}_{ \boldsymbol{L}}}_{k}\Big{)}=0,\ \forall k\geq 2.\]
#### 8.4.1. Canonical weakly bounding cochain
Recall that a weakly bounding cochain is an odd element \(b\in\mathcal{F}_{\mathsf{b}}^{+}(\boldsymbol{L})\) solving the weak Maurer-Cartan equation
\[\sum_{k\geq 0}m_{k}^{+}(b,\cdots,b)\in\Lambda\mathbf{e}^{+}.\]
In general, to ensure convergence, we require that \(b\) has positive Novikov valuation. In our case, we only use a special weakly bounding cochain.
**Definition 8.23**.: The **canonical** weakly bounding cochain of the strictly unital \(A_{\infty}\) algebra \(\mathcal{F}_{\mathsf{b}}^{+}(\boldsymbol{L})\) is
\[b_{\boldsymbol{L}}=W_{\mathsf{b}}\mathbf{f}_{\boldsymbol{L}}.\]
We check that, by the fact that \(m_{k}^{+}(\mathbf{f}_{\boldsymbol{L}},\cdots,\mathbf{f}_{\boldsymbol{L}})=0\) for \(k\geq 2\) and \(m_{1}^{+}(\mathbf{f}_{\boldsymbol{L}})=\mathbf{e}_{\boldsymbol{L}}^{+}- \mathbf{e}_{\boldsymbol{L}}\), one has
\[\sum_{k\geq 0}m_{k}^{+}(b_{\boldsymbol{L}},\cdots,b_{\boldsymbol{L}})=m_{0}^{+ }(1)+m_{1}^{+}(W_{\mathsf{b}}\mathbf{f}_{\boldsymbol{L}})=W_{\mathsf{b}} \mathbf{e}_{\boldsymbol{L}}+W_{\mathsf{b}}(\mathbf{e}_{\boldsymbol{L}}^{+}- \mathbf{e}_{\boldsymbol{L}})=W_{\mathsf{b}}\mathbf{e}_{\boldsymbol{L}}^{+}.\]
Hence indeed \(b_{\boldsymbol{L}}\) is a weakly bounding cochain.
Now we can define the flat \(A_{\infty}\) algebra \(\mathcal{F}_{\mathsf{b}}^{\flat}(\boldsymbol{L})\) with compositions being (for \(k\geq 1\))
\[m_{k}^{\flat}(x_{k},\ldots,x_{1})=\sum_{l_{0},\ldots,l_{k}\geq 0}m_{k+l_{0}+ \cdots+l_{k}}^{+}\Big{(}\underbrace{b_{\boldsymbol{L}},\ldots,b_{\boldsymbol{ L}}}_{l_{k}},x_{k},\cdots,x_{1},\underbrace{b_{\boldsymbol{L}},\ldots,b_{ \boldsymbol{L}}}_{l_{0}}\Big{)}.\]
In particular, \(m_{1}^{\flat}\circ m_{1}^{\flat}=0\): indeed \(m_{0}^{\flat}(1)=W_{\mathfrak{b}}\mathbf{e}^{+}\), and the two curvature terms cancel by strict unitality, exactly as in the computation below Theorem 8.9. The cohomology of \(\mathcal{F}_{\mathfrak{b}}^{\flat}(\boldsymbol{L})\) agrees with the quasimap Floer cohomology \(QHF_{\mathfrak{b}}^{\bullet}(\boldsymbol{L};\Lambda_{\overline{\mathbb{Q}}})\).
#### 8.4.2. Multiplicative structure
We need to identify the multiplicative structures on the quasimap Floer cohomology. The second composition \(m_{2}^{\flat}\) on \(\mathcal{F}_{\mathsf{b}}^{\flat}(\mathbf{L})\) induces a multiplication on \(\mathit{QHF}_{\mathsf{b}}^{\bullet}(\mathbf{L};\Lambda_{\overline{\mathbb{Q}}})\).
**Proposition 8.24**.: _When \(\mathbf{y}\) is a critical point of \(W_{\mathsf{b}}(\mathbf{u})\) and the Hessian of \(W_{\mathsf{b}}(\mathbf{u})\) is nondegenerate at \(\mathbf{y}\), i.e._
\[\det\left(\frac{\partial^{2}W_{\mathsf{b}}(\mathbf{u})}{\partial x_{i} \partial x_{j}}(\mathbf{y})\right)\neq 0,\]
_the quasimap Floer cohomology algebra \(\mathit{QHF}_{\mathsf{b}}^{\bullet}(\mathbf{L};\Lambda_{\overline{\mathbb{Q}}})\) is isomorphic to a Clifford algebra over \(\Lambda_{\overline{\mathbb{Q}}}\) associated to a nondegenerate quadratic form on an \(n\)-dimensional space._
Note that the above nondegeneracy condition coincides with the one from Definition 8.15 because we are considering Laurent polynomials. The computation of the ring structure is carried out in a similar situation in [20]; here we only sketch it. The key step of the computation is to establish another divisor equation
\[m_{2}^{\flat}(x_{i},x_{j})+m_{2}^{\flat}(x_{j},x_{i})=\frac{\partial^{2}W_{ \mathsf{b}}}{\partial x_{i}\partial x_{j}}\mathbf{e}. \tag{8.5}\]
on cohomology. When the corresponding critical point of \(W_{\mathfrak{b}}\) is nondegenerate, it follows that the Floer cohomology is isomorphic to a Clifford algebra induced from the Hessian at the critical point.
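To illustrate in the simplest case (an example we add, for \(n=1\)): writing \(h=\frac{\partial^{2}W_{\mathfrak{b}}}{\partial x_{1}^{2}}(\mathbf{y})\), equation (8.5) with \(i=j=1\) gives \(m_{2}^{\flat}(x_{1},x_{1})=\frac{1}{2}h\,\mathbf{e}\) on cohomology, so that
\[\mathit{QHF}_{\mathfrak{b}}^{\bullet}(\boldsymbol{L};\Lambda_{\overline{\mathbb{Q}}})\cong\Lambda_{\overline{\mathbb{Q}}}\langle x_{1}\rangle\big{/}\big{(}x_{1}^{2}-\tfrac{1}{2}h\big{)},\]
the rank-two Clifford algebra of the quadratic form \(\tfrac{1}{2}h\), which is nondegenerate precisely when \(h\neq 0\).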
_Remark 8.25_.: We explain why the divisor equation (8.5) fails on the chain level if one uses the naive way of perturbation. Consider \(X=\mathbb{P}^{1}\). Fix a torus action. The (undeformed) potential function is
\[W=T^{u}y+T^{1-u}\frac{1}{y}.\]
The two terms come from the contributions of two disks, one through the north pole and the other through the south pole. If the divisor equation (8.5) held on the chain level, there would be two configurations with both inputs labelled by the index \(1\) critical point; however, once the perturbation is chosen, only one such configuration exists in the moduli space. This is because the perturbation is not symmetric with respect to flipping the two incoming semi-infinite edges.
Proof of Proposition 8.24.: Once the divisor equation (8.5) is established, the calculation of the ring structure follows immediately. Hence we only explain how to achieve the divisor equation following the same idea as [20]. Notice that the \(A_{\infty}\) structure is independent of the perturbation up to homotopy equivalence. Hence the ring structure on the Floer cohomology is independent of the perturbation. Now we broaden the class of perturbations by considering multi-valued ones in order to achieve some symmetry, and use such perturbations to establish Equation (8.5) on the chain level. A multi-valued perturbation is just a (finite) multi-set of perturbations on each tree. We consider a coherent family of multi-valued perturbations which still satisfy Definition 8.6. We say that a multi-valued perturbation is symmetric, if, when restricted to the tree \(\Gamma_{0}\) with two inputs, one output, and one finite vertex, the perturbation \(P_{\Gamma_{0},\vec{n}}\) (where \(\vec{n}\) on the only finite vertex is \(1\), corresponding to Maslov index two disks) is invariant under the \(\mathbb{Z}_{2}\)-action on the universal tree \(\overline{\mathcal{U}\mathcal{T}}_{\Gamma_{0}}\) induced by switching the two incoming semi-infinite edges.
One can follow the same inductive argument to construct a symmetric coherent system of multi-valued perturbations and achieve transversality. Now when defining the counts, we need to count for each member of the multi-valued perturbation and then take an average. This still defines an \(A_{\infty}\) algebra and it is homotopy equivalent to any one defined using single-valued perturbations, provided that we work over the rationals. Moreover, for any two critical points \(x_{i},x_{j}\) of Morse index \(n-1\), the divisor equation (8.5) holds. For details, see [20, Lemma 5.12].
#### 8.4.3. Hochschild cohomology
Now consider the Hochschild cohomology of the \(A_{\infty}\) algebra \(\mathcal{F}^{\flat}_{\mathfrak{b}}(\boldsymbol{L})\).
**Proposition 8.26**.: _When \(\boldsymbol{L}\) corresponds to a nondegenerate critical point of \(W_{\mathfrak{b}}\), one has_
\[\text{HH}^{\bullet}(\mathcal{F}^{\flat}_{\mathfrak{b}}(\boldsymbol{L}))\cong \Lambda_{\overline{\mathbb{Q}}}\]
_where the Hochschild cohomology is generated by the identity \(\boldsymbol{1}_{\mathcal{F}^{\flat}_{\mathfrak{b}}(\boldsymbol{L})}\)._
Proof.: We know that the cohomology of \(\mathcal{F}^{\flat}_{\mathfrak{b}}(\boldsymbol{L})\) is isomorphic to a Clifford algebra over \(\Lambda_{\overline{\mathbb{Q}}}\). This proposition follows from Proposition 3.38.
_Remark 8.27_.: When the bulk-deformation \(\mathfrak{b}\) is convenient, we can formally define the quasimap Fukaya category as the disjoint union of the \(A_{\infty}\) algebras \(\mathcal{F}^{\flat}_{\mathfrak{b}}(\boldsymbol{L})\) for \(\boldsymbol{L}\) corresponding to all critical points of \(W_{\mathfrak{b}}\) inside the moment polytope. However, what we need is only the direct sum of these Hochschild cohomology groups.
## 9. Open string theory II. Closed-open maps
In this section, we prove Theorem B. It is the consequence of the following theorem.
**Theorem 9.1**.: _Let \(\mathfrak{b}\) be a convenient bulk deformation._
1. _There is an isomorphism of_ \(\Lambda_{\overline{\mathbb{Q}}}\)_-algebras_ \[\operatorname{CO}_{\mathfrak{b}}:\text{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\overline{\mathbb{Q}}})\to\bigoplus_{\boldsymbol{L}\in\operatorname{Crit}_{X}W_{\mathfrak{b}}}\text{HH}^{\bullet}(\mathcal{F}^{\flat}_{\mathfrak{b}}(\boldsymbol{L}))\cong(\Lambda_{\overline{\mathbb{Q}}})^{\operatorname{Crit}_{X}W_{\mathfrak{b}}}.\]
2. _The operator on_ \(\text{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\mathbb{Z}[\mathbf{i}]})\) _defined by the pair-of-pants product with the (bulk-deformed) first Chern class (see Definition_ 9.14_) has distinct eigenvalues in_ \(\Lambda_{\overline{\mathbb{Q}}}\)_._
_Remark 9.2_.: A closed-open map on the level of Floer cohomology, also in the setting of vortex Floer theory, was constructed in [14]. The method of using quilted objects to prove the multiplicative property was learned from Woodward, see [14].
### Moduli spaces for the closed-open map
#### 9.1.1. Based trees and closed-open domain types
Recall our conventions about trees and ribbon trees given in the last section. To model curves with spherical components or Floer cylinders, we consider a broader class of trees called _based trees_. A **based tree** is a pair \((\Gamma,\underline{\Gamma})\) where \(\underline{\Gamma}\) is a subtree with a ribbon structure containing the root \(v_{\text{root}}\) and the adjacent semi-infinite edge. In a based tree, vertices in \(V_{\underline{\Gamma}}\) are called _boundary vertices_, and the other vertices are called _interior vertices_. Similarly, an edge is either an interior edge or a boundary edge. A **metric based tree** is a based tree \(\Gamma\) together with a metric on its base \(\underline{\Gamma}\).
Now we specify the domains responsible for the definition of the closed-open map on the chain level. Consider based trees with a distinguished interior vertex at infinity \(v_{\text{Ham}}^{\infty}\in V_{\Gamma}^{\infty}\setminus V_{\underline{\Gamma}}\). For each such tree \(\Gamma\), let \(v_{\text{Ham}}\in V_{\underline{\Gamma}}^{\text{finite}}\) be the distinguished vertex in the base \(\underline{\Gamma}\) which is closest to \(v_{\text{Ham}}^{\infty}\). We also assume that such trees always have exactly one boundary output \(v_{\text{out}}\). We call such a tree \(\Gamma\) (with a metric type on the base \(\underline{\Gamma}\)) a **closed-open domain type**. We say \(\Gamma\) is minimal if its base is minimal (see Subsection 8.1), i.e., the base has no finite edges of length zero or infinite edges. For a minimal \(\Gamma\), the base \(\underline{\Gamma}\) has a moduli space \(\mathcal{MT}^{\text{CO}}_{\underline{\Gamma}}\) and a universal tree \(\mathcal{UT}^{\text{CO}}_{\underline{\Gamma}}\). One also has a compactification (see Subsection 8.1) denoted by \(\overline{\mathcal{MT}}^{\text{CO}}_{\underline{\Gamma}}\).
Given a closed-open domain type \(\Gamma\), a closed-open domain of type \(\Gamma\) is a treed disk \(C=S\cup T\) which has a distinguished "parametrized component" \(C_{\mathrm{Ham}}\), corresponding to the vertex \(v_{\mathrm{Ham}}\), which has nonempty boundary. See Figure 3 for an illustration of a closed-open domain.
We define a type of "mixed equation" on closed-open domains. Fix an admissible bulk-avoiding pair \((\widehat{H},\widehat{J})\) for which the bulk-deformed vortex Floer chain complex \(\mathit{VCF}^{\mathfrak{b}}_{\bullet}(\widehat{H},\widehat{J};\Lambda_{\overline{\mathbb{Q}}})\) is defined. Let \(C=S\cup T\) be a closed-open domain with distinguished component \(C_{\mathrm{Ham}}\). Because there is at least one boundary output, \(C_{\mathrm{Ham}}\) together with the interior puncture and boundary nodes is stable. Hence one can always identify \(C_{\mathrm{Ham}}\cong\mathbb{D}\setminus\{0\}\cong(-\infty,0]\times S^{1}\) and equip it with the cylindrical metric. Using a cut-off function supported in \((-\infty,-1]\), one can homotope the pair \((\widehat{H},\widehat{J})\) to the pair \((0,J_{V})\), where \(J_{V}\) is the standard complex structure on the vector space \(V\cong\mathbb{C}^{N}\), giving rise to a domain-dependent pair \((\widehat{H}_{z},\widehat{J}_{z})\) for \(z\in C_{\mathrm{Ham}}\). Given the above data, we consider tuples
\[\left((u_{v})_{v\in V_{\Gamma}},(x_{e})_{e\in E_{\Gamma}}\right)\]
where
1. For each vertex \(v\) belonging to the path connecting \(v_{\mathrm{Ham}}^{\infty}\) and \(v_{\mathrm{Ham}}\) (the latter not included), \(u_{v}=[u_{v},\xi_{v},\eta_{v}]\) is a gauge equivalence class of solutions to the vortex equation \[\partial_{s}u_{v}+\mathcal{X}_{\xi_{v}}+\widehat{J}_{t}(\partial_{t}u_{v}+\mathcal{X}_{\eta_{v}}-X_{\widehat{H}_{t}}(u_{v}))=0,\qquad\qquad\partial_{s}\eta_{v}-\partial_{t}\xi_{v}+\mu(u_{v})=0.\]
2. For \(v=v_{\mathrm{Ham}}\), \(u_{v}=[u_{v},\xi_{v},\eta_{v}]\) is a gauge equivalence class of solutions to \[\partial_{s}u_{v}+\mathcal{X}_{\xi_{v}}+\widehat{J}_{z}(\partial_{t}u_{v}+ \mathcal{X}_{\eta_{v}}-X_{\widehat{H}_{z}}(u_{v}))=0,\qquad\qquad\partial_{s} \eta_{v}-\partial_{t}\xi_{v}+\mu(u_{v})=0.\] (9.1) Moreover, \(u_{v}\) satisfies the Lagrangian boundary condition \[u_{v}(\partial C_{\mathrm{Ham}})\subset\widehat{L}.\] (9.2)
3. For all other \(v\), \(u_{v}\) is a \(K\)-orbit of quasidisk with boundary in \(\widehat{L}\).
4. For each edge \(e\in E_{\Gamma}\), \(x_{e}\) is a (perturbed) negative gradient line/ray/segment of the Morse function \(f_{L}:L\to\mathbb{R}\).
5. These objects must have finite energy and must satisfy the obvious matching condition at interior and boundary nodes.
The finite energy condition forces the component \(u_{v}\) whose domain \(C_{v}\) has the distinguished input \(v_{\mathrm{Ham}}^{\infty}\) to converge to an equivariant \(1\)-periodic orbit of \(\widehat{H}\). Given a closed-open domain type \(\Gamma\), a **closed-open map type** over \(\Gamma\), denoted by \(\mathbf{\Gamma}\), consists of topological types of objects for each
Figure 3. A closed-open domain. The component with a cylindrical end is the component \(C_{\mathrm{Ham}}\).
component. A closed-open map type is called **essential** if there is no interior node, all finite boundary edges have positive length, and there is no breaking.
#### 9.1.2. Transversality
Given a closed-open domain type \(\Gamma\), a domain-dependent perturbation consists of a domain-dependent smooth function \(f_{\Gamma}\) depending on positions on the universal tree \(\overline{\mathcal{U}\mathcal{T}}_{\Gamma}\) and a domain-dependent almost complex structure \(\widehat{J}^{\mathrm{CO}}\) depending only on positions on the component \(C_{\mathrm{Ham}}\cong(-\infty,0]\times S^{1}\). In other words, we keep using the standard complex structure over disk components without interior marked point. As before, the perturbation function \(f_{\Gamma}\) also depends on a function \(\vec{n}:V_{\Gamma}^{\mathrm{finite}}\setminus\{v_{\mathrm{Ham}}\}\to\mathbb{Z }_{\geq 0}\). To achieve transversality, one can first fix \(\widehat{J}^{\mathrm{CO}}\) which is equal to the given \(\widehat{J}_{t}\) near \(-\infty\).
Next we need to extend the perturbations we have chosen to define the (bulk-deformed) quasimap \(A_{\infty}\) algebra of \(L\). Notice that for any closed-open domain type \(\Gamma\), the base \(\underline{\Gamma}\) has a distinguished finite vertex \(v_{\mathrm{Ham}}\). The tree \(\Gamma\) may degenerate to another tree \(\Pi\) which has an unbroken component \(\Pi^{\prime}\) not containing the distinguished vertex. For such unbroken components \(\Pi^{\prime}\), the domain-dependent perturbation has already been chosen to define the \(A_{\infty}\) structure. Hence we look for a system of domain-dependent perturbations
\[P_{\Gamma,\vec{n}}^{\mathrm{CO}}:\overline{\mathcal{U}\mathcal{T}}_{ \underline{\Gamma}}\to C^{\infty}(L)\]
which satisfy conditions similar to those of Definition 8.6. We omit the complete definition here. Moreover, we require that, once \(\Gamma\) has an unbroken component \(\Gamma^{\prime}\) which does not contain \(v_{\mathrm{Ham}}\), the perturbation on this component agrees with the existing one chosen before.
Now we consider the relevant moduli spaces. Given a closed-open map type \(\mathbf{\Gamma}\), let \(\vec{n}:V_{\Gamma}^{\mathrm{finite}}\setminus\{v_{\mathrm{Ham}}\}\to\mathbb{Z}_{\geq 0}\) be the function whose value on \(v\) is half of the Maslov index of the disk class \(\beta_{v}\) contained in the data \(\mathbf{\Gamma}\). The moduli space \(\mathcal{M}_{\mathbf{\Gamma}}^{\mathrm{CO}}\) is the space of solutions to the mixed equation described above, for the complex structure \(\widehat{J}_{z}\) in (9.1) and the negative gradient flow equation of the Morse function \(f_{L}\) perturbed by \(P_{\Gamma,\vec{n}}^{\mathrm{CO}}\). Then as before, one can find a coherent family of perturbations making all such moduli spaces regular. We omit the details.
Furthermore, one can incorporate the perturbations used for defining the homotopy units. For this we allow the inputs of a closed-open domain type to be weighted or unweighted and require similar properties of perturbations on domains with weighted inputs as in Subsection 8.4 (the almost complex structure \(\widehat{J}^{\mathrm{CO}}\) is independent of the weighting parameters \(\rho\)).
### The closed-open map
Having regularized all relevant moduli spaces, we define the relevant counts for the closed-open maps. A closed-open map type \(\mathbf{\Gamma}\) is called **essential** if it is stable, has no breakings, no sphere bubbles, and no boundary edges of length zero (cf. Subsection 9.1.1). Given a \((k+1)\)-tuple of generators \(\mathbf{x}=(x_{1},\ldots,x_{k};x_{\infty})\)9, an equivariant \(1\)-periodic orbit \(\mathfrak{x}\) of the bulk-avoiding Hamiltonian \(\widehat{H}\), and a disk class \(\beta\), denote by
Footnote 9: Notice that among \(x_{1},\ldots,x_{k}\) some of them could be the weighted element \(\mathbf{f}\).
\[\mathcal{M}_{\beta}^{\mathrm{CO}}(\mathfrak{x},\mathbf{x})_{i},\ i=0,1\]
the union of the moduli spaces \(\mathcal{M}_{\mathbf{\Gamma}}^{\mathrm{CO}}\) over all essential closed-open map types \(\mathbf{\Gamma}\) of virtual dimension \(i\) whose boundary inputs and output are labelled by \(\mathbf{x}\), whose (only) interior input \(v_{\mathrm{Ham}}^{\infty}\) is labelled by \(\mathfrak{x}\), and whose total disk class is \(\beta\). Given \(E\geq 0\), let
\[\mathcal{M}_{\beta}^{\mathrm{CO}}(\mathfrak{x},\mathbf{x})_{i}^{\leq E}\subset \mathcal{M}_{\beta}^{\mathrm{CO}}(\mathfrak{x},\mathbf{x})_{i}\]
be the subset of configurations whose (analytic) energy is at most \(E\).
It is standard to prove the following theorem.
**Theorem 9.3**.:
1. \(\mathcal{M}^{\mathrm{CO}}_{\beta}(\mathfrak{x},\mathbf{x})_{i}\) _is an oriented topological manifold of dimension_ \(i\)_._
2. _For all_ \(E\geq 0\)_,_ \(\mathcal{M}^{\mathrm{CO}}_{\beta}(\mathfrak{x},\mathbf{x})_{0}^{\leq E}\) _is a finite set._
3. _For all_ \(E\geq 0\)_,_ \(\mathcal{M}^{\mathrm{CO}}_{\beta}(\mathfrak{x},\mathbf{x})_{1}^{\leq E}\) _is compact up to at most one of the following: 1) an interior breaking, 2) a boundary breaking, 3) bubbling off a holomorphic disk, or 4) the length of a finite boundary edge shrinking to zero._
4. _By the standard gluing construction and identifying fake boundary strata, one can compactify the_ \(1\)_-dimensional moduli space to_ \(\overline{\mathcal{M}}^{\mathrm{CO}}_{\beta}(\mathfrak{x},\mathbf{x})_{1}\) _which is an oriented topological 1-manifold with boundary whose cut-off at any energy level_ \(E\) _is compact._
Now given a local system \(\mathbf{y}\), denote the brane with this local system by \(\boldsymbol{L}=(L,\mathbf{y})\). We define a count
\[n^{\mathrm{CO}}_{\boldsymbol{L},\mathfrak{b}}(\beta,\mathfrak{x},\mathbf{x})=\sum_{[u]\in\mathcal{M}^{\mathrm{CO}}_{\beta}(\mathfrak{x},\mathbf{x})_{0}}\exp\left(\sum_{j=1}^{N}\log c_{j}\;[u]\cap V_{j}\right)T^{E(\beta)}\mathbf{y}^{\partial\beta}\epsilon([u])\in\Lambda_{\overline{\mathbb{Q}}}\]
where \(\mathfrak{b}=\sum_{j=1}^{N}\log c_{j}V_{j}\); note that \(\exp\big{(}\sum_{j=1}^{N}\log c_{j}\,[u]\cap V_{j}\big{)}=\prod_{j=1}^{N}c_{j}^{[u]\cap V_{j}}=\mathfrak{b}([u])\) is the same weight as in (8.2). By Gromov compactness one has the following result.
**Lemma 9.4**.: _The sum \(\sum_{\beta}n^{\mathrm{CO}}_{\boldsymbol{L},\mathfrak{b}}(\beta,\mathfrak{x},\mathbf{x})\) converges in \(\Lambda_{\overline{\mathbb{Q}}}\)._
Then define a sequence of linear maps
\[\widetilde{\mathrm{CO}}^{n}_{\boldsymbol{L},\mathfrak{b}}:\text{VCF}^{ \mathfrak{b}}_{\bullet}(V;\Lambda_{\overline{\mathbb{Q}}})\to\mathrm{Hom}_{ \Lambda_{\overline{\mathbb{Q}}}}\left(\mathcal{F}^{+}_{\mathfrak{b}}( \boldsymbol{L})^{\otimes n},\mathcal{F}^{+}_{\mathfrak{b}}(\boldsymbol{L}) \right),n=0,1,\ldots\]
by
\[\widetilde{\mathrm{CO}}^{n}_{\boldsymbol{L},\mathfrak{b}}(\mathfrak{x})(x_{n},\ldots,x_{1})=\sum_{x_{\infty}}\sum_{\beta}n^{\mathrm{CO}}_{\boldsymbol{L},\mathfrak{b}}(\beta,\mathfrak{x},\mathbf{x})x_{\infty}\]
and linear extension.
We use the canonical weakly bounding cochain \(b_{\boldsymbol{L}}\) to turn it into a chain map. Define
\[\mathrm{CO}^{n}_{\boldsymbol{L},\mathfrak{b}}:\text{VCF}^{\mathfrak{b}}_{ \bullet}(V;\Lambda_{\overline{\mathbb{Q}}})\to\mathrm{Hom}_{\Lambda_{ \overline{\mathbb{Q}}}}\left(\mathcal{F}^{+}_{\mathfrak{b}}(\boldsymbol{L})^ {\otimes n},\mathcal{F}^{+}_{\mathfrak{b}}(\boldsymbol{L})\right),n=0,1,\ldots\]
by
\[\mathrm{CO}^{n}_{\boldsymbol{L},\mathfrak{b}}(\mathfrak{x})(x_{n},\ldots,x_{1})=\sum_{l_{n},\ldots,l_{0}\geq 0}\widetilde{\mathrm{CO}}^{n+l_{0}+\cdots+l_{n}}_{\boldsymbol{L},\mathfrak{b}}(\mathfrak{x})\left(\underbrace{b_{\boldsymbol{L}},\ldots,b_{\boldsymbol{L}}}_{l_{n}},x_{n},\cdots,x_{1},\underbrace{b_{\boldsymbol{L}},\ldots,b_{\boldsymbol{L}}}_{l_{0}}\right).\]
The whole sequence \(\{\mathrm{CO}^{n}_{\boldsymbol{L},\mathfrak{b}}\}_{n=0,\ldots}\) is then a linear map
\[\mathrm{CO}_{\boldsymbol{L},\mathfrak{b}}:\text{VCF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\overline{\mathbb{Q}}})\to CC^{\bullet}(\mathcal{F}^{\flat}_{\mathfrak{b}}(\boldsymbol{L})).\]
**Proposition 9.5**.: \(\mathrm{CO}_{\boldsymbol{L},\mathfrak{b}}\) _is a chain map._
Proof.: We analyze the boundary of the 1-dimensional moduli spaces \(\mathcal{M}^{\mathrm{CO}}_{\beta}(\mathfrak{x},\mathbf{x})_{1}\). Given any map type \(\mathbf{\Gamma}\) contributing to this moduli space, the true boundary of \(\mathcal{M}^{\mathrm{CO}}_{\mathbf{\Gamma}}\) consists of configurations where either there is exactly one interior breaking (at an equivariant 1-periodic orbit) or exactly one boundary breaking (see Figure 4).
The configurations with interior breakings contribute to the composition \(\mathrm{CO}_{\boldsymbol{L},\mathfrak{b}}\circ\delta_{\text{VCF}}\) (the upper left in Figure 4). On the other hand, there are three types of configurations with boundary breakings, described as follows.
1. The first (corresponding to the upper right in Figure 4) is where the breaking separates off a treed disk with no interior puncture and no boundary insertions except for an arbitrary number of the weakly bounding cochain \(b_{\boldsymbol{L}}\). As we have \[\sum_{k\geq 0}m_{k}^{+}(b_{\boldsymbol{L}},\ldots,b_{\boldsymbol{L}})=W_{\mathfrak{b}}\mathbf{e}^{+},\] such configurations contribute a multiple of the count of a closed-open moduli space with a boundary insertion \(\mathbf{e}^{+}\), which vanishes by the forgetful property of the perturbation.
2. The second (corresponding to the lower left in Figure 4) is where the interior puncture and the output are separated by the breaking. This kind of broken configuration contributes to the Gerstenhaber product \(m^{\flat}\circ\operatorname{CO}_{\boldsymbol{L},\mathfrak{b}}(-)\) (up to a sign).
3. The third (corresponding to the lower right in Figure 4) is where the interior puncture and the output are not separated by the breaking. This kind of broken configuration contributes to the Gerstenhaber product \(\operatorname{CO}_{\boldsymbol{L},\mathfrak{b}}(-)\circ m^{\flat}\) (up to a sign).
Therefore, up to sign verifications which we skip here, \(\operatorname{CO}_{\boldsymbol{L},\mathfrak{b}}\) is a chain map.
Figure 4. True boundaries of a 1-dimensional moduli space. The pictures represent the case when the weakly bounding cochain is zero and the insertions are all variables of the Hochschild cochains. One can draw the picture for general cases by arbitrarily inserting weakly bounding cochains on the boundary.
A standard TQFT-type argument shows that, up to chain homotopy, the closed-open map is well-defined, i.e., independent of the pair \((\widehat{H},\widehat{J})\) defining the vortex Floer chain complex and independent of the choices of all relevant perturbations.
There is another map on the cohomology level which we also need. Namely, if we do not use any boundary inputs, by counting treed vortices over closed-open domains one can obtain a linear map
\[\operatorname{CO}^{0}_{\boldsymbol{L},\mathfrak{b}}:\mathit{VHF}^{\, \mathfrak{b}}_{\bullet}(V;\Lambda_{\overline{\mathbb{Q}}})\to\mathit{QHF}^{ \bullet}_{\mathfrak{b}}(\boldsymbol{L};\Lambda_{\overline{\mathbb{Q}}}). \tag{9.3}\]
It was first defined in [21] in a slightly different way. Here we can easily generalize it to the bulk-deformed case. Moreover, this map sends the identity \(\boldsymbol{1}^{\mathrm{GLSM}}_{\mathfrak{b}}\) to the identity in the Lagrangian Floer cohomology.
Summing over all Floer-nontrivial Lagrangian branes, we define the **closed-open map**
\[\operatorname{CO}_{\mathfrak{b}}:=\bigoplus_{\boldsymbol{L}\in\operatorname{Crit}_{X}W_{\mathfrak{b}}}\operatorname{CO}_{\boldsymbol{L},\mathfrak{b}}:\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\overline{\mathbb{Q}}})\to\bigoplus_{\boldsymbol{L}\in\operatorname{Crit}_{X}W_{\mathfrak{b}}}\mathit{HH}^{\bullet}(\mathcal{F}^{\flat}_{\mathfrak{b}}(\boldsymbol{L})).\]
### The closed-open map is multiplicative
Now we establish the following important property of the closed-open map.
**Theorem 9.6**.: _The map \(\operatorname{CO}_{\mathfrak{b}}\) is multiplicative and maps the unit to the unit._
We use an analogue of "quilted" moduli spaces to prove the multiplicativity, in the same way as in [21, Section 3.6].
**Definition 9.7** (Balanced marked disks and balanced treed disks).:
1. A stable marked disk \(S\cong\mathbb{D}\) with two interior markings \(z^{\prime},z^{\prime\prime}\in\mathrm{Int}S\) and \(k+1\) boundary markings \(\underline{z}=(z_{0},\dots,z_{k})\) is called **balanced** if \(z^{\prime},z^{\prime\prime},z_{0}\) lie on a circle in \(\mathbb{D}\) tangent to \(\partial\mathbb{D}\) at \(z_{0}\).
2. A treed disk with two interior leaves \(z^{\prime},z^{\prime\prime}\), \(k\) boundary inputs and one boundary output is called **balanced** if one of the following conditions is satisfied. 1. \(z^{\prime},z^{\prime\prime}\) are contained in the same spherical component. 2. \(z^{\prime},z^{\prime\prime}\) are contained in the same disk component \(S_{v}\), and, letting \(z^{\prime}_{0}\in\partial S_{v}\) be the boundary node connecting \(S_{v}\) to the output, \((S_{v},z^{\prime},z^{\prime\prime},z^{\prime}_{0})\) is a balanced marked disk. 3. \(z^{\prime},z^{\prime\prime}\) are contained in two different disk components, \(S_{v^{\prime}}\) and \(S_{v^{\prime\prime}}\) respectively, and, letting \(e_{1},\dots,e_{l}\) be the unique path connecting \(v^{\prime}\) and \(v^{\prime\prime}\) in the tree, \[\sum_{i=1}^{l}\pm\boldsymbol{l}(e_{i})=0\] where the sign is positive resp. negative if the edge \(e_{i}\) is oriented toward resp. against the output. We call the unique path \(e_{1},\dots,e_{l}\) the **bridge**.
Consider any stable domain type \(\Gamma\) with two interior inputs, \(k\) boundary inputs and one boundary output. Consider the moduli space \(\mathcal{M}^{\mathrm{balanced}}_{\Gamma}\) of balanced treed disks of type \(\Gamma\). The list of codimension one boundary strata is different from the unbalanced case, as the balanced condition cuts down the dimension by \(1\). See Figure 5.
Notice that a real boundary \(\mathcal{M}^{\mathrm{balanced}}_{\Pi}\subset\partial\mathcal{M}^{\mathrm{balanced}}_{\Gamma}\) could be the product of several other moduli spaces whose types may have either one interior input or zero interior inputs. We have chosen surface metrics with cylindrical ends for stable closed-open domains (with one interior input); hence we can extend the choices to a family of surface metrics with cylindrical ends for the moduli space of stable closed-open domains with two interior inputs. We omit the details.
Now we can consider the following mixed equation for domains with two interior cylindrical ends. Choose two bulk-avoiding admissible pairs \((\hat{H}^{\prime}_{t},\hat{J}^{\prime}_{t})\) and \((\hat{H}^{\prime\prime}_{t},\hat{J}^{\prime\prime}_{t})\). Turn on the Hamiltonian perturbation on cylindrical ends. Consider the mixed equation similar to that for the closed-open map. We can extend the existing perturbation to this new type of moduli spaces to achieve transversality.
Proof of Theorem 9.6.: Choose two Floer cycles \(\mathfrak{r}_{1}\) and \(\mathfrak{r}_{2}\). We only need to show that
\[\operatorname{CO}_{\mathfrak{b}}(\mathfrak{r}_{1}*_{\mathfrak{b}}\mathfrak{r }_{2})-\operatorname{CO}_{\mathfrak{b}}(\mathfrak{r}_{1})\star\operatorname{ CO}_{\mathfrak{b}}(\mathfrak{r}_{2})\in\operatorname{Im}\delta_{CC}. \tag{9.4}\]
As one can choose perfect Morse functions on toric manifolds, we can assume that \(\mathfrak{r}_{1}\) and \(\mathfrak{r}_{2}\) are two single equivariant 1-periodic orbits.
Consider 1-dimensional moduli spaces of treed disks with two cylindrical ends labelled by \(\mathfrak{r}_{1}\) and \(\mathfrak{r}_{2}\) and arbitrary boundary output \(x_{\infty}\) and inputs
\[\underbrace{b_{\boldsymbol{L}},\ldots,b_{\boldsymbol{L}}}_{j_{k}},x_{k}, \cdots,x_{1},\underbrace{b_{\boldsymbol{L}},\ldots,b_{\boldsymbol{L}}}_{j_{0} }.\]
We call \(x_{k},\ldots,x_{1}\)**regular inputs**. Consider the true boundaries of such moduli spaces. _A priori_ there are five types of them, listed below. We count their contributions (weighted by the bulk deformation), whose sum should be zero.
1. Breaking of Floer cylinders at one interior input. As \(\mathfrak{r}_{1}\) and \(\mathfrak{r}_{2}\) are cycles, the contribution of this type of boundary points is zero.
2. Two cylindrical ends merge together to form a pair of pants. The contribution of this type of boundary is \[\operatorname{CO}_{\mathfrak{b}}(\mathfrak{r}_{1}*_{\mathfrak{b}}\mathfrak{ r}_{2}).\]
3. One boundary edge not belonging to the bridge breaks and the piece broken off is not a disk without regular input. The contribution of this type of boundary is a Hochschild coboundary.
4. One boundary edge not belonging to the bridge breaks and the piece broken off is a disk without regular input. The broken off piece sums to a multiple of the strict unit \(e_{\boldsymbol{L}}^{+}\). By the property of the perturbation data, the contribution of this type of boundary is zero.
5. A pair of boundary edges belonging to the bridge break. The contribution of this type of boundary is the Yoneda product \[\operatorname{CO}_{\mathfrak{b}}(\mathfrak{r}_{1})\star\operatorname{CO}_{ \mathfrak{b}}(\mathfrak{r}_{2}).\]
Figure 5. The moduli space of balanced treed disks with two interior inputs and one boundary output. This moduli space is parametrized by one variable \(\rho\in[-1,1]\).
Therefore, one obtains (9.4).
Now we prove the unitality. By the choice of the small bulk deformation, the Hochschild cohomology of the quasimap Fukaya category is semisimple and splits as the direct sum of \(1\)-dimensional pieces. Moreover, each piece is the Hochschild cohomology of the \(A_{\infty}\) algebra \(\mathcal{F}_{\mathfrak{b}}^{+}(\boldsymbol{L})\), which is linearly spanned by the identity. Hence we only need to prove that the linear map (9.3) sends the identity \(\mathbf{1}_{\mathfrak{b}}^{\mathrm{GLSM}}\in\mathit{VCF}_{\bullet}^{\mathfrak{ b}}(V;\Lambda_{\overline{\mathbb{Q}}})\) to the identity element of \(\mathit{QHF}_{\mathfrak{b}}(\boldsymbol{L})\). This verification can be found in [23, Theorem 6.11] (this verification does not need to consider the homotopy unit and weakly bounding cochains).
### The Kodaira-Spencer map
To prove the first item of Theorem 9.1, it remains to show that the closed-open map is a linear isomorphism. Proposition 8.18 shows that the domain and the codomain of \(\mathrm{CO}_{\mathfrak{b}}\) have the same rank
\[\dim_{\Lambda_{\overline{\mathbb{Q}}}}\mathit{VHF}_{\bullet}^{\mathfrak{b}}(V ;\Lambda_{\overline{\mathbb{Q}}})=\dim H^{\bullet}(X)=\#\mathrm{Crit}_{X}W_{ \mathfrak{b}}.\]
Hence we only need to show that \(\mathrm{CO}_{\mathfrak{b}}\) is either injective or surjective.
Following [11], we define another closed-open type map which we call the _Kodaira-Spencer map_ at \(\mathfrak{b}\), denoted by
\[\mathfrak{e}_{\mathfrak{b}}:\Lambda_{\overline{\mathbb{Q}}}[\mathbf{z}_{1}, \ldots,\mathbf{z}_{N}]\to(\Lambda_{\overline{\mathbb{Q}}})^{\mathrm{Crit}_{X }W_{\mathfrak{b}}}.\]
It is formally the derivative of the bulk-deformed potential function, taken at the bulk \(\mathfrak{b}\) and evaluated at the critical points of the potential. We only need to use the standard complex structure to define this map.
#### 9.4.1. Moduli spaces of quasidisks with tangency conditions
We now work toward the definition of the Kodaira-Spencer map. Fix a Lagrangian \(L=L(\mathbf{u})\) for the moment. Let \(I=(\alpha_{1},\ldots,\alpha_{N})\) be a multiindex of nonnegative integers, which defines a monomial
\[\mathbf{z}^{I}=\mathbf{z}_{1}^{\alpha_{1}}\cdots\mathbf{z}_{N}^{\alpha_{N}}.\]
Consider a holomorphic disk \(u:(\mathbb{D},\partial\mathbb{D})\to(V,\widehat{L})\), which can be classified by Theorem 8.3. We write \(u=(u_{1},\ldots,u_{N})\) in coordinates. We say that \(u\) satisfies the \(I\)-tangency condition at \(z\in\mathrm{Int}\mathbb{D}\) if \(u_{i}\) vanishes to order \(\alpha_{i}\) at \(z\) for all \(i=1,\ldots,N\). In particular, when \(\alpha_{i}=0\), there is no restriction on \(u_{i}\). Given a multiindex \(I\) and a disk class \(\beta\), denote the moduli space of quasidisks with boundary in \(\widehat{L}\) (with one output) satisfying the \(I\)-tangency condition at the origin by
\[\mathcal{M}_{I,1}^{qd}(\beta).\]
Its virtual dimension is
\[\dim^{\mathrm{vir}}\mathcal{M}_{I,1}^{qd}(\beta)=n+m(\beta)-2|I|-2.\]
_Remark 9.8_.: We can put the above moduli space into an infinite-dimensional Banach space where we can specify the tangency conditions for arbitrary maps with sufficiently high regularity, for example, using the setup of Cieliebak-Mohnke [14, Section 6]. Hence we can examine whether the moduli space of quasidisks subject to tangency conditions is regular or not.
**Proposition 9.9**.: _Suppose \(\beta=\sum_{j=1}^{N}d_{j}\beta_{j}\) with \(d_{j}\in\mathbb{Z}\). Then \(\mathcal{M}_{I,1}^{qd}(\beta)\neq\emptyset\) only if \(d_{j}\geq\alpha_{j}\) for all \(j\). Moreover, the moduli space \(\mathcal{M}_{I,1}^{qd}(\beta)\) is smooth and the evaluation map at the boundary marking is a submersion._
Proof.: By Theorem 8.3, the \(j\)-th coordinate of the map \(u\) of the form (8.1) needs to vanish at least to the order \(\alpha_{j}\) at the origin. Hence \(d_{j}\geq\alpha_{j}\).
To prove the regularity of the moduli space \(\mathcal{M}^{qd}_{I,1}(\beta)\), one only needs to prove the regularity of the corresponding moduli space of holomorphic disks in \(V\) with boundary in \(\widehat{L}\) (before quotienting by the \(K\)-action), as the \(K\)-action is free. Since the complex structure on \(V\cong\mathbb{C}^{N}\) is the standard one, and the tangency condition is imposed on each coordinate independently, one only needs to prove the Fredholm regularity for the \(N=1\) case. In this case, we consider holomorphic disks in \(\mathbb{C}\) with boundary contained in the unit circle, which also vanish to a given order \(k\) at the origin. Choose \(p>2\) and \(m\) sufficiently large, so that one has the Sobolev embedding \(W^{m,p}\hookrightarrow C^{k}\) in dimension two.
Now fix the disk class \(\beta\). Consider the Banach space \(W(\beta)\) of maps from \((\mathbb{D},\partial\mathbb{D})\) to \((\mathbb{C},S^{1})\) of regularity \(W^{m+1,p}\). Let \(W_{0}(\beta)\subset W(\beta)\) be the subspace of maps which vanish at \(0\) to the order \(k+1\). Let \(E(\beta)\to W(\beta)\) be the Banach space bundle, whose fiber over \(u\) is the space of \((0,1)\)-forms of regularity \(W^{m,p}\), and let \(E_{0}(\beta)\subset E(\beta)\) be the subbundle of those forms which vanish at \(0\) to the order \(k\). Suppose \(u_{0}:\mathbb{D}\to\mathbb{C}\) is a holomorphic disk in \(W_{0}(\beta)\). Then there is a commutative diagram (see [13, Section 6])
where \(F\) resp. \(F_{0}\) is the standard Cauchy-Riemann operator, restricted to the corresponding Banach spaces. One needs to prove that \(F_{0}\) is surjective. Notice that by Cho-Oh's theorem [13, Theorem 6.1], \(F\) is surjective. Hence for each \(\eta_{0}\in E_{0}(\beta)|_{u_{0}}\), there exists \(\xi\in T_{u_{0}}W(\beta)\) such that \(F(\xi)=\eta_{0}\). One only needs to modify \(\xi\) to some \(\xi_{0}\in T_{u_{0}}W_{0}(\beta)\) with \(F(\xi)=F(\xi_{0})\). Indeed, as \(u_{0}\) vanishes up to order \(k+1\) at the origin, the disk class \(\beta\), which is just the degree of the map \(u_{0}\), is at least \(k+1\). Then by the Blaschke formula (8.1), one can easily deform \(u_{0}\) by \((k+1)\)-jet data. Such deformations are in the kernel of \(F\). Hence we can obtain the desired \(\xi_{0}\). This proves the Fredholm regularity of the moduli spaces.
The fact that the evaluation map at the output is a submersion onto \(L\) follows easily from the Blaschke formula.
#### 9.4.2. The derivative of the potential
Now we can define the Kodaira-Spencer map. For each critical point \(\boldsymbol{L}\in\operatorname{Crit}_{X}W_{\mathfrak{b}}\) (lying inside the moment polytope), we will define a linear map
\[\widetilde{\mathfrak{ks}}_{\boldsymbol{L},\mathfrak{b}}:\Lambda_{\overline{\mathbb{Q}}}[\mathbf{z}_{1},\ldots,\mathbf{z}_{N}]\to\mathit{QCF}^{+}_{\bullet}(\boldsymbol{L};\Lambda_{\overline{\mathbb{Q}}})\]
using the counts of certain zero-dimensional moduli spaces. It will turn out that the value of this map is always a multiple of the unique maximum \(\mathbf{e}_{\boldsymbol{L}}=x_{\max}\in\operatorname{Crit}f_{L}\), hence descends to a map
\[\widetilde{\mathfrak{ks}}_{\boldsymbol{L},\mathfrak{b}}:\Lambda_{\overline{\mathbb{Q}}}[\mathbf{z}_{1},\ldots,\mathbf{z}_{N}]\to\mathit{QHF}^{\mathfrak{b}}_{\bullet}(\boldsymbol{L};\Lambda_{\overline{\mathbb{Q}}}).\]
We define the coefficients to be \(\mathfrak{ks}_{\boldsymbol{L},\mathfrak{b}}\), i.e.,

\[\widetilde{\mathfrak{ks}}_{\boldsymbol{L},\mathfrak{b}}(\mathbf{z}^{I})=\mathfrak{ks}_{\boldsymbol{L},\mathfrak{b}}(\mathbf{z}^{I})[\mathbf{e}_{\boldsymbol{L}}].\]
We first fix a multiindex \(I=(\alpha_{1},\ldots,\alpha_{N})\). Denote
\[\beta^{I}=\alpha_{1}\beta_{1}+\cdots+\alpha_{N}\beta_{N}\in H_{2}(V,\widehat{L }).\]
For each disk class \(\beta\in H_{2}(V,\widehat{L})\) and each critical point \(x\in\operatorname{Crit}f_{L(\mathbf{u})}\) of the Morse function \(f_{L(\mathbf{u})}:L(\mathbf{u})\to\mathbb{R}\), consider the moduli space
\[\mathcal{M}^{qd}_{I,1}(\beta;x)\]
where we require that the output converges to the critical point \(x\). Proposition 9.9 implies that this moduli space is regular. Moreover,
\[\mathcal{M}^{qd}_{I,1}(\beta;x)\neq\emptyset\text{ and }\text{dim}\mathcal{M}^{ qd}_{I,1}(\beta;x)=0\Longrightarrow\beta=\beta^{I}\text{ and }x=x_{\max}.\]
Moreover, in this case, the moduli space has exactly one point because of the Blaschke formula. We count the unique element weighted by the bulk deformation and the local system.
_Remark 9.10_.: _A priori_ we should consider treed holomorphic disks with one boundary output and one interior marking with a certain tangency condition. Similarly to the proof that \(m_{0}\) is a multiple of \(\boldsymbol{e_{L}}\), one can prove that for zero-dimensional moduli spaces, only those treed disks with one disk component contribute.
The count of the above moduli spaces (with a single point) defines the Kodaira-Spencer map. More explicitly, define
\[\widetilde{\mathfrak{ks}}_{\boldsymbol{L},\mathfrak{b}}:\Lambda_{\overline{\mathbb{Q}}}[\mathbf{z}_{1},\ldots,\mathbf{z}_{N}]\to\text{QHF}^{\bullet}_{\mathfrak{b}}(\boldsymbol{L};\Lambda_{\overline{\mathbb{Q}}})\]
by
\[\widetilde{\mathfrak{ks}}_{\boldsymbol{L},\mathfrak{b}}(\mathbf{z}^{I})=\mathfrak{b}^{I}T^{E(\beta^{I})}\mathbf{y}^{\partial\beta^{I}}[\mathbf{e}_{\boldsymbol{L}}]=\mathfrak{ks}_{\boldsymbol{L},\mathfrak{b}}(\mathbf{z}^{I})[\mathbf{e}_{\boldsymbol{L}}].\]
Here for \(\mathfrak{b}=\sum_{j=1}^{N}\log c_{j}V_{j}\), the notation \(\mathfrak{b}^{I}\) denotes the quantity
\[c_{1}^{\alpha_{1}}\cdots c_{N}^{\alpha_{N}},\]
which is the exponential of the intersection number between the above unique quasidisk in \(\mathcal{M}^{qd}_{I,1}(\beta^{I};x_{\max})\) and the bulk \(\mathfrak{b}\).
The Kodaira-Spencer map takes a very simple form. Recall that we have written
\[W_{\mathfrak{b}}=W_{\mathfrak{b},1}+\cdots+W_{\mathfrak{b},N}=c_{1}W_{1}+ \cdots+c_{N}W_{N}.\]
**Proposition 9.11**.: _For each multiindex \(I\), one has_
\[\mathfrak{ks}_{\boldsymbol{L},\mathfrak{b}}(\mathbf{z}^{I})=W_{\mathfrak{b}}^{I}:=W_{\mathfrak{b},1}^{\alpha_{1}}\cdots W_{\mathfrak{b},N}^{\alpha_{N}}. \tag{9.5}\]
Proof.: The calculation is carried out in a straightforward way. The area of a disk in class \(\beta^{I}\) is
\[E(\beta^{I})=\alpha_{1}l_{1}(\mathbf{u})+\cdots+\alpha_{N}l_{N}(\mathbf{u}).\]
The contribution of the local system is
\[\mathbf{y}^{\partial\beta^{I}}=\prod_{j=1}^{N}(y_{1}^{v_{j,1}}\cdots y_{n}^{ v_{j,n}})^{\alpha_{j}}.\]
Hence the formula (9.5) follows.
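For instance, in the simplest case where \(I\) has a single nonzero entry \(\alpha_{j}=1\), so that \(\beta^{I}=\beta_{j}\), the definitions above give directly

\[\mathfrak{ks}_{\boldsymbol{L},\mathfrak{b}}(\mathbf{z}_{j})=c_{j}\,T^{l_{j}(\mathbf{u})}\,y_{1}^{v_{j,1}}\cdots y_{n}^{v_{j,n}}=W_{\mathfrak{b},j},\]

and the general case follows by taking products, since both \(E(\beta^{I})\) and \(\partial\beta^{I}\) are additive in \(I\).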
Define the Kodaira-Spencer map by
\[\mathfrak{ks}_{\mathfrak{b}}:=\bigoplus_{\boldsymbol{L}\in\text{Crit}_{X}W_{\mathfrak{b}}}\mathfrak{ks}_{\boldsymbol{L},\mathfrak{b}}.\]
**Theorem 9.12**.: _The Kodaira-Spencer map \(\mathfrak{ks}_{\mathfrak{b}}\) is surjective._
Proof.: By [11, Lemma 3.12], the monomials \(W_{1},\ldots,W_{N}\) generate (over the ring \(\Lambda_{0,\overline{\mathbb{Q}}}\)) the ring
\[\Lambda_{0}^{P}\langle\langle y_{1}^{\pm},\ldots,y_{n}^{\pm}\rangle\rangle\]
which is a ring of formal Laurent series satisfying a particular valuation condition determined by the moment polytope \(P\). Let \(\boldsymbol{\eta}_{1},\ldots,\boldsymbol{\eta}_{s}\) be the critical points of \(W_{\mathfrak{b}}\) inside the moment polytope. Using the notion of convergent Novikov field \(\Lambda_{\overline{\mathbb{Q}}}^{\text{conv}}\), we see that for \(T=t\) being a sufficiently small
nonzero complex number, \(\boldsymbol{\eta}_{1}^{t},\ldots,\boldsymbol{\eta}_{s}^{t}\) are distinct points in \((\mathbb{C}^{*})^{n}\). Then there exist \(s\) complex Laurent polynomials
\[F_{1},\ldots,F_{s}\in\mathbb{C}[y_{1},\ldots,y_{n},y_{1}^{-1},\ldots,y_{n}^{-1}]\]
such that the matrix \(\left[F_{a}(\boldsymbol{\eta}_{b}^{t})\right]_{1\leq a,b\leq s}\) is invertible. Regarding \(F_{1},\ldots,F_{s}\) as Laurent polynomials with Novikov coefficients, we see that the determinant of the matrix
\[\det\left[F_{a}(\boldsymbol{\eta}_{b})\right]_{1\leq a,b\leq s}\neq 0\in \Lambda_{\overline{\mathbb{Q}}}.\]
The above is still true if we replace \(F_{a}\) by \(T^{A}F_{a}\) for any \(A\in\mathbb{R}\). On the other hand, for \(A\) sufficiently large, \(T^{A}F_{a}\in\Lambda_{0}^{P}\langle\langle y_{1}^{\pm},\ldots,y_{n}^{\pm}\rangle\rangle\). This implies that the restriction of \(\mathfrak{ks}_{\mathfrak{b}}\) to the finite-dimensional subspace spanned by the \(T^{A}F_{a}\) is surjective, due to the generation property of the monomials \(W_{1},\ldots,W_{N}\). Hence \(\mathfrak{ks}_{\mathfrak{b}}\) is also surjective.
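As a toy illustration of the linear-algebra step above (for \(n=1\) only; this is just one possible choice of the \(F_{a}\)): if \(\boldsymbol{\eta}_{1}^{t},\ldots,\boldsymbol{\eta}_{s}^{t}\in\mathbb{C}^{*}\) are distinct, one may take \(F_{a}=y_{1}^{a-1}\), for which

\[\det\left[F_{a}(\boldsymbol{\eta}_{b}^{t})\right]_{1\leq a,b\leq s}=\det\left[(\boldsymbol{\eta}_{b}^{t})^{a-1}\right]=\prod_{1\leq b^{\prime}<b\leq s}\left(\boldsymbol{\eta}_{b}^{t}-\boldsymbol{\eta}_{b^{\prime}}^{t}\right)\neq 0\]

is a Vandermonde determinant.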
### A quantum Kirwan map
The set of small bulk deformations is contained in the larger space of equivariant cohomology classes upstairs. Classically, there is the Kirwan map
\[\kappa^{\text{classical}}:H_{K}^{\bullet}(V)\to H^{\bullet}(X).\]
In principle, by incorporating vortices one can define a quantization of the Kirwan map. This has been pursued by Ziltener [214] in the symplectic setting and worked out by Woodward [105] in the algebraic setting. Here we define a variant of the quantum Kirwan map, denoted by
\[\kappa_{\mathfrak{b}}:\Lambda_{\mathbb{Z}[\mathbb{H}]}[\mathbf{z}_{1},\ldots, \mathbf{z}_{N}]\to\mathit{VHF}_{\bullet}^{\mathfrak{b}}(V;\Lambda_{\mathbb{Z} [\mathbb{H}]}) \tag{9.6}\]
such that the image of the unit \(1\) is the identity \(\mathbf{1}_{\mathfrak{b}}^{\text{GLSM}}\).
We define the above map by imposing tangency conditions at the origin of the cigar. Fix a regular bulk-avoiding admissible pair \((\widehat{H}_{\infty},\widehat{J}_{\infty})\) which defines a bulk-deformed vortex Floer complex \(\mathit{VCF}_{\bullet}^{\mathfrak{b}}(\widehat{H}_{\infty},\widehat{J}_{\infty};\Lambda_{\mathbb{Z}[\mathbb{H}]})\). Consider a domain-dependent almost complex structure \(\widehat{J}\) (resp. Hamiltonian perturbation \(\widehat{H}\)) parametrized by points on the cigar \(\Sigma^{\text{cigar}}\cong\mathbb{C}\) which is equal to the standard almost complex structure \(\widehat{J}_{V}\) (resp. vanishes) in a specified neighborhood of \(0\in\Sigma^{\text{cigar}}\) and which agrees with \(\widehat{J}_{\infty}\) (resp. \(\widehat{H}_{\infty}\)) near infinity. Consider the vortex equation with the data \((\widehat{H},\widehat{J})\) on the cigar. Any finite energy solution should converge to a critical point of \(\mathcal{A}_{H_{\infty}}\). Moreover, as the almost complex structure is standard near \(0\), one can impose the tangency condition corresponding to \(I\) at the origin. Such a tangency condition is gauge invariant. Then for each critical point \(\mathfrak{r}\in\text{Crit}\mathcal{A}_{H_{\infty}}\), there is a moduli space
\[\mathcal{M}_{I}^{\text{cigar}}(\mathfrak{r})\subset\mathcal{M}^{\text{cigar}} (\mathfrak{r}).\]
By using domain-dependent perturbations, one can achieve transversality for such a moduli space. Then one has
\[\text{dim}\mathcal{M}_{I}^{\text{cigar}}(\mathfrak{r})=\text{dim}\mathcal{M} ^{\text{cigar}}(\mathfrak{r})-2|I|.\]
On the other hand, as the Hamiltonian is bulk-avoiding, each solution has well-defined topological intersection numbers with \(V_{j}\). Then define
\[\kappa_{\mathfrak{b}}(\mathbf{z}^{I})=\sum_{\stackrel{{\mathfrak{r}}}{{\dim\mathcal{M}_{I}^{\text{cigar}}(\mathfrak{r})=0}}}\left(\sum_{[\mathfrak{u}]\in\mathcal{M}_{I}^{\text{cigar}}(\mathfrak{r})}\left(\prod_{j=1}^{N}c_{j}^{[\mathfrak{u}]\cap V_{j}}\right)\epsilon([\mathfrak{u}])\right)\mathfrak{r}.\]
**Theorem 9.13** (Properties of the bulk-deformed quantum Kirwan map).:
1. _The element_ \(\kappa_{\mathfrak{b}}(\mathbf{z}^{I})\) _is a legitimate element of_ \(\text{VCF}_{\bullet}^{\mathfrak{b}}(\widehat{H}_{\infty},\widehat{J}_{\infty} ;\Lambda_{\mathbb{Z}[\mathbb{H}]})\) _and is_ \(\partial^{\mathfrak{b}}\)_-closed. Moreover, its homology class is independent of the choice of perturbation and its corresponding element in_ \(\text{VHF}_{\bullet}^{\mathfrak{b}}(V;\Lambda_{\mathbb{Z}[\mathbb{H}]})\) _is well-defined._
2. \(\kappa_{\mathfrak{b}}(1)=\mathbf{1}_{\mathfrak{b}}^{\mathrm{GLSM}}\).
Proof.: The first conclusion follows from the standard argument and the second one follows from the definition of \(\mathbf{1}_{\mathfrak{b}}^{\mathrm{GLSM}}\).
We define another element in the vortex Floer homology which can be viewed as the first Chern class in the bulk-deformed Hamiltonian Floer homology, or the image of the first Chern class under the bulk-deformed PSS map. Recall that the first Chern class of a toric manifold is naturally represented by the union of the toric divisors. Upstairs, they are the union of all coordinate hyperplanes.
**Definition 9.14**.: The \(\mathfrak{b}\)**-deformed first Chern class** is the element
\[\kappa_{\mathfrak{b}}(\mathbf{z}_{1}+\cdots+\mathbf{z}_{N})\in\mathit{VHF}_{\bullet}^{\mathfrak{b}}(V;\Lambda_{\mathbb{Z}[\mathbb{H}]}).\]
Denote the operator on \(\mathit{VHF}_{\bullet}^{\mathfrak{b}}(V;\Lambda_{\mathbb{Z}[\mathbb{H}]})\) defined by the pair-of-pants product with the \(\mathfrak{b}\)-deformed first Chern class by
\[\mathbb{E}_{\mathfrak{b}}:\mathit{VHF}_{\bullet}^{\mathfrak{b}}(V;\Lambda_{\mathbb{Z}[\mathbb{H}]})\rightarrow\mathit{VHF}_{\bullet}^{\mathfrak{b}}(V;\Lambda_{\mathbb{Z}[\mathbb{H}]}). \tag{9.7}\]
### The commutative diagram
We prove the following proposition.
**Proposition 9.15**.: _When the bulk deformation \(\mathfrak{b}\) is convenient, the following diagram commutes._
(9.8)
_Here the right vertical arrow is the natural identification induced by the individual isomorphisms \(\mathit{HH}^{\bullet}(\mathcal{F}_{\mathfrak{b}}^{\flat}(\mathbf{L}))\cong\Lambda_ {\overline{\mathbb{Q}}}\)._
Proof.: We turn on Hamiltonian perturbations on disks to construct a homotopy between the Kodaira-Spencer map and the closed-open map composed with the quantum Kirwan map. Fix a critical point of \(W_{\mathfrak{b}}\) lying in the interior of the moment polytope with the corresponding Lagrangian brane \(\boldsymbol{L}=(L(\mathbf{u}),\mathbf{y})\). We claim that the following diagram commutes.
(9.9)
Once this is established, it follows that the image of \(\mathrm{CO}^{0}_{\boldsymbol{L},\mathfrak{b}}\circ\kappa_{\mathfrak{b}}\) is contained in the line spanned by the identity element of \(\mathit{QHF}_{\mathfrak{b}}^{\bullet}(\boldsymbol{L};\Lambda_{\overline{\mathbb{Q}}})\). Hence on the chain level, one has
\[\mathrm{CO}^{0}_{\mathbf{L},\mathfrak{b}}(\kappa_{\mathfrak{b}}(\mathbf{z}^{I}))- \mathfrak{fs}_{\mathbf{L},\mathfrak{b}}(\mathbf{z}^{I})\mathbf{e}_{\mathbf{L}}^{+}\in \mathrm{Im}(m_{1}^{\flat}).\]
As the Hochschild cohomology of \(\mathbf{L}\) is spanned by the identity element, it follows that the diagram (9.8) also commutes.
Now we prove that (9.9) commutes. Consider closed-open domains with one interior marking. Define a 1-parameter family of equations parametrized by \(\nu\in[0,1]\) such that when \(\nu=0\), the
equation is the quasidisk equation with tangency condition at the marking. When \(\nu\) is positive, we stretch a neighborhood of the interior marking and turn on a Hamiltonian perturbation by a bulk-avoiding pair \((\widehat{H},\widehat{J})\). We always require the tangency condition at the interior marking. As for boundary insertions, we only allow the boundary inputs to be labelled by the canonical weakly bounding cochain \(b_{\boldsymbol{L}}\), while the boundary output can be labelled by any critical point of \(f_{L}\). We can consider such moduli spaces with the tangency condition corresponding to multiindex \(I\), total disk class \(\beta\), and the output labelled by \(x\in\text{Crit}f_{L}\).
One can use similar arguments as before to regularize relevant moduli spaces using perturbations which naturally extend existing perturbations defining the \(A_{\infty}\) structure, the closed-open map, and the quantum Kirwan map. Then by counting elements in zero-dimensional moduli spaces, one can define a linear map
\[R_{\boldsymbol{L}}:\Lambda_{\overline{\mathbb{Q}}}[\mathbf{z}_{1},\ldots, \mathbf{z}_{N}]\to\text{QCF}^{\bullet}(\boldsymbol{L};\Lambda_{\overline{ \mathbb{Q}}})\subset\text{QCF}^{\bullet}(\boldsymbol{L};\Lambda_{\overline{ \mathbb{Q}}})^{+}.\]
Now we consider boundaries of \(1\)-dimensional moduli spaces. There are the following types of boundary strata.
1. The boundary at \(\nu=0\). This side of the boundary consists of points in zero-dimensional moduli spaces used to define the Kodaira-Spencer map. The contribution of these boundary points is equal to \(\widetilde{\mathfrak{ks}}_{\boldsymbol{L},\mathfrak{b}}\).
2. The boundary at \(\nu=1\). This side of the boundary consists of configurations having exactly one interior breaking at certain equivariant \(1\)-periodic orbit of the Hamiltonian \(\widehat{H}\). As the perturbation extends the perturbations chosen for the closed-open map and the quantum Kirwan map, the contribution of these boundary points is equal to \[\text{CO}^{0}_{\boldsymbol{L},\mathfrak{b}}\circ\kappa_{\mathfrak{b}}.\]
3. Boundary points at \(\nu\in(0,1)\). These configurations have exactly one boundary breaking. There are two possibilities. First, the interior puncture and the boundary output are in the same unbroken component. In this case, the other unbroken component is a treed quasidisk whose only boundary insertions are the canonical weakly bounding cochain \(b_{\boldsymbol{L}}\). As the perturbation satisfies the forgetful property when one input is unweighted (the strict unit \(\mathbf{e}^{+}\)), the contribution of this kind of boundary points is zero. Second, the interior puncture and the boundary output are in two different unbroken components. The contribution of such configurations is \[m_{1}^{\flat}(R_{\boldsymbol{L}}(\mathbf{z}^{I}))\] which is exact.
Therefore, it follows that on the chain level, one has
\[\widetilde{\mathfrak{ks}}_{\boldsymbol{L},\mathfrak{b}}(\mathbf{z}^{I})-\text{CO}^{0}_{\boldsymbol{L},\mathfrak{b}}(\kappa_{\mathfrak{b}}(\mathbf{z}^{I}))\in\text{Im}(m_{1}^{\flat}).\]
Hence on the cohomology level the diagram (9.9) commutes.
Because the Kodaira-Spencer map is surjective, this finishes the proof of item (1) of Theorem 9.1.
### Quantum multiplication by the first Chern class
Now we prove item (2) of Theorem 9.1. We prove the following theorem.
**Theorem 9.16**.: _When \(\mathfrak{b}\) is a convenient small bulk deformation, the operator \(\mathbb{E}_{\mathfrak{b}}\) on \(\text{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\overline{\mathbb{Q}}})\) has an eigenspace decomposition_
\[\text{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\overline{\mathbb{Q}}})=\bigoplus_{\boldsymbol{L}\in\text{Crit}_{X}W_{\mathfrak{b}}}\text{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda_{\overline{\mathbb{Q}}})_{W_{\mathfrak{b}}(\boldsymbol{L})}.\]
Proof.: By item (1) of Theorem 9.1, \(\mathit{VHF}_{\bullet}^{\mathsf{b}}(V;\Lambda_{\overline{\mathbb{Q}}})\) is semisimple. Hence \(\mathbb{E}_{\mathsf{b}}\) is diagonalizable, and let the eigenvalues be \(\lambda_{1},\ldots,\lambda_{m}\). Now take an eigenvalue \(\lambda=\lambda_{i}\) and a critical point \(\mathbf{L}=(L(\mathbf{u}),\mathbf{y})\in\mathrm{Crit}_{X}W_{\mathsf{b}}\). We consider the restriction of \(\mathrm{CO}_{\mathbf{L},\mathsf{b}}^{0}:\mathit{VHF}_{\bullet}^{\mathsf{b}}(V; \Lambda_{\overline{\mathbb{Q}}})\to\mathit{QHF}_{\mathsf{b}}^{\bullet}(\mathbf{L}; \Lambda_{\overline{\mathbb{Q}}})\) to the \(\lambda\)-eigenspace. We prove that this map is nonzero only when \(\lambda\) coincides with the critical value. Consider closed-open domains with two interior markings, one boundary output, and arbitrarily many boundary inputs (to be labelled by the canonical weakly bounding cochain of \(\mathbf{L}\)). We distinguish the two interior markings. The first one is \(v_{\mathrm{Ham}}\), which will be labelled by an equivariant \(1\)-periodic orbit. The second one is denoted by \(v_{\mathrm{Chern}}\), which will be labelled by components of the equivariant toric divisor. Given any such closed-open domain \(C=S\cup T\) where \(S\) is the surface part and \(T\) is the tree part, the marking corresponding to \(v_{\mathrm{Ham}}\) becomes a puncture while the marking corresponding to \(v_{\mathrm{Chern}}\) is denoted by \(z_{\mathrm{Chern}}\in\mathrm{Int}S\).
We would like to include one more constraint on the position of \(v_{\mathrm{Chern}}\). In the same way as defining the closed-open map, there is a distinguished component \(C_{\mathrm{Ham}}\) of such domains \(C=S\cup T\). Because the domain \(C\) has a distinguished output, we can identify \(C_{\mathrm{Ham}}\) with \(\mathbb{D}\setminus\{0\}\) canonically such that the boundary node on \(C_{\mathrm{Ham}}\) leading towards the output is the point \(1\in S^{1}\cong\partial\mathbb{D}\). Define the **offset angle** of \(z_{\mathrm{Chern}}\) as follows.
1. If \(z_{\mathrm{Chern}}\) is in a cylindrical component, it does not have an offset angle.
2. If \(z_{\mathrm{Chern}}\) is on \(C_{\mathrm{Ham}}\cong\mathbb{D}\setminus\{0\}\), then the offset angle is the angular coordinate of \(z_{\mathrm{Chern}}\).
3. If \(z_{\mathrm{Chern}}\) is not on \(C_{\mathrm{Ham}}\) or any cylindrical component, then there is a unique boundary node on \(C_{\mathrm{Ham}}\) connecting \(C_{\mathrm{Ham}}\) to \(z_{\mathrm{Chern}}\). The offset angle is the angular coordinate of this boundary node.
We fix \(\theta\in S^{1}\setminus\{1\}\) and only consider closed-open domains described as above such that the offset angle of \(z_{\mathrm{Chern}}\) is equal to \(\theta\) or does not have an offset angle. Consider the same equation defining the closed-open maps on such domains with possibly different perturbation data, where on the cylindrical end one has the Hamiltonian perturbation by a regular bulk-avoiding pair \((\widehat{H},\widehat{J})\), and along the boundary one imposes the Lagrangian boundary condition from \(\mathbf{L}\).
We analyze the true boundaries of \(1\)-dimensional such moduli spaces. We assume that the cylindrical end is labelled by a cycle \(a\) in \(\mathit{VCF}_{\bullet}^{\mathsf{b}}(V;\Lambda_{\overline{\mathbb{Q}}})\). The true boundary components correspond to configurations which have exactly one breaking, either an interior one or a boundary one. See Figure 6.
1. The breaking is interior and the special marking \(z_{\mathrm{Chern}}\) is not on a cylindrical component. The sum of this kind of contributions is zero as the interior input is a cycle. Note that as we are counting treed holomorphic disks, the line segment connecting the component on which \(z_{\mathrm{Chern}}\) lies and \(C_{\mathrm{Ham}}\) is not meant to be a breaking.
2. The breaking is boundary at the offset angle \(1\in S^{1}\) (which is different from \(\theta\)) and hence separates \(C_{\mathrm{Ham}}\) and the output. The sum of this kind of configurations is a coboundary in \(\mathit{QCF}_{\mathsf{b}}(\boldsymbol{L})\), which is zero in cohomology.
3. The breaking is boundary at an offset angle different from \(1\in S^{1}\) and \(\theta\) hence does not separate \(C_{\mathrm{Ham}}\) and the output. The disk bubble contributes to a multiple of the strict unit \(\mathbf{e}_{\mathbf{L}}^{+}\). Hence by the forgetful property of the perturbation data, the contribution of such configurations is zero.
4. The breaking is boundary at the specified offset angle \(\theta\) which separates the special marking \(z_{\mathrm{Chern}}\) and the component \(C_{\mathrm{Ham}}\). The disk bubble always has Maslov index \(2\), hence the interior constraint imposed at \(z_{\mathrm{Chern}}\) gives a factor \(1\). Hence the disk bubble contributes to \(W_{\mathfrak{b}}(\boldsymbol{L})e_{\boldsymbol{L}}^{+}\). Moreover, as the offset angle is fixed, such configurations are rigid, and
the counting is equal to \[W_{\mathfrak{b}}(\boldsymbol{L})\cdot\mathrm{CO}^{0}_{\boldsymbol{L},\mathfrak{b}} (a).\]
5. The breaking is interior and the special marking \(z_{\mathrm{Chern}}\) is on the cylindrical component that breaks off. This kind of configuration contributes to \[\mathrm{CO}^{0}_{\boldsymbol{L},\mathfrak{b}}(\mathbb{E}_{\mathfrak{b}}(a))= \lambda\cdot\mathrm{CO}^{0}_{\boldsymbol{L},\mathfrak{b}}(a),\] due to the appearance of the pair-of-pants product in the upper component.
The analysis above shows that in cohomology, one has
\[\lambda\cdot\mathrm{CO}^{0}_{\boldsymbol{L},\mathfrak{b}}(a)=W_{\mathfrak{b}} (\boldsymbol{L})\cdot\mathrm{CO}^{0}_{\boldsymbol{L},\mathfrak{b}}(a).\]
Hence if \(\lambda\neq W_{\mathfrak{b}}(\boldsymbol{L})\), the map \(\mathrm{CO}^{0}_{\boldsymbol{L},\mathfrak{b}}\) vanishes on this eigenspace.
On the other hand, the linear map
\[\bigoplus_{\boldsymbol{L}\in\mathrm{Crit}_{X}W_{\mathfrak{b}}}\mathrm{CO}^{0} _{\boldsymbol{L},\mathfrak{b}}:\mathit{VHF}^{\mathfrak{b}}_{\bullet}(V;\Lambda _{\overline{\mathbb{Q}}})\to\bigoplus_{\boldsymbol{L}\in\mathrm{Crit}_{X}W_{ \mathfrak{b}}}\mathit{QHF}^{\bullet}_{\mathfrak{b}}(\boldsymbol{L};\Lambda_{ \overline{\mathbb{Q}}})\]
is injective, because when restricted to the component generated by the identity elements of \(\mathit{QHF}^{\bullet}_{\mathfrak{b}}(\boldsymbol{L};\Lambda_{\overline{\mathbb{Q}}})\), it descends to the isomorphism \(\mathrm{CO}_{\mathfrak{b}}\) onto the direct sum of the Hochschild cohomologies. Therefore, one has
\[\mathrm{Spec}(\mathbb{E}_{\mathfrak{b}})\subset W_{\mathfrak{b}}(\mathrm{ Crit}_{X}W_{\mathfrak{b}}). \tag{9.10}\]
Figure 6. Boundary of 1-dimensional moduli spaces with one special interior marking.
On the other hand, for each critical point \(\boldsymbol{L}\in\operatorname{Crit}_{X}(W_{\mathfrak{b}})\), the closed-open map \(\operatorname{CO}^{0}_{\boldsymbol{L},\mathfrak{b}}\) is unital hence nonzero. This implies that \(W_{\mathfrak{b}}(\boldsymbol{L})\in\Lambda\) is also an eigenvalue of \(\mathbb{E}_{\mathfrak{b}}\). Hence (9.10) is an equality. As all critical values are distinct when \(\mathfrak{b}\) is convenient, it follows that all eigenspaces of \(\mathbb{E}_{\mathfrak{b}}\) are \(1\)-dimensional.
|
2302.14321 | Dynamic Transition From Regular to Mach Reflection Over a Moving Wedge | The transition between the Regular Reflection (RR) and Mach Reflection (MR)
phenomenon impacts the design of the supersonic and hypersonic air-breathing
vehicles. The aim of this paper is to numerically investigate the dynamic
transition from RR to MR of unsteady supersonic flow over a two-dimensional
wedge, whose trailing edge moves along the $x$-direction upstream with a
velocity, $V(t)$ at a free-stream Mach number of $M_{\infty}=3$. The simulation
is conducted using the unsteady compressible inviscid flow solver, which is
implemented in OpenFOAM\textsuperscript{\textregistered}, the open-source CFD
tool. Further, the wedge motion is applied by moving the mesh boundary,
performing the Arbitrary Lagrangian-Eulerian (ALE) technique. In addition, the
sonic and detachment criteria are used to define the dynamic transition from RR
to MR during the increase of the wedge angle. Different reduced frequencies,
$\kappa$, in the range of $[0.1-2]$ for the moving wedge are applied to study
the lag in the dynamic transition from the steady-state condition. The results
show that the critical value of $\kappa=0.4$ distinguishes between the rapid
and gradual lag in the transition from RR to MR. In addition, the transition
from RR to MR occurs above the Dual Solution Domain (DSD), since the shock is
curved downstream during the rapid motion of the wedge. | Lubna Margha, Ahmed A. Hamada, Doyle D. Knight, Ahmed Eltaweel | 2023-02-28T05:24:34Z | http://arxiv.org/abs/2302.14321v1 | # Dynamic Transition From Regular to Mach Reflection Over a Moving Wedge
###### Abstract
The transition between the Regular Reflection (RR) and Mach Reflection (MR) phenomenon impacts the design of the supersonic and hypersonic air-breathing vehicles. The aim of this paper is to numerically investigate the dynamic transition from RR to MR of unsteady supersonic flow over a two-dimensional wedge, whose trailing edge moves along the x-direction upstream with a velocity, V(t) at a free-stream Mach number of \(M_{\infty}=3\). The simulation is conducted using the unsteady compressible inviscid flow solver, which is implemented in OpenFOAM(r), the open-source CFD tool. Further, the wedge motion is applied by moving the mesh boundary, performing the Arbitrary Lagrangian-Eulerian (ALE) technique. In addition, the sonic and detachment criteria are used to define the dynamic transition from RR to MR during the increase of the wedge angle. Different reduced frequencies, \(\kappa\), in the range of \([0.1-2]\) for the moving wedge are applied to study the lag in the dynamic transition from the steady-state condition. The results show that the critical value of \(\kappa=0.4\) distinguishes between the rapid and gradual lag in the transition from RR to MR. In addition, the transition from RR to MR occurs above the Dual Solution Domain (DSD), since the shock is curved downstream during the rapid motion of the wedge.
Regular reflection; Mach reflection; Moving wedge; Dynamic shock waves; Supersonic flow; Dual solution domain.
## 1 Introduction
Predicting the shock reflections and the shock wave interactions are very crucial in the design and operation phases of many engineering applications, such as shock-wave focusing, protection against blasts and detonations, and supersonic and hypersonic vehicles. For instance, a series of shock wave interactions are generated in the scramjet inlet, using tilted wedges, to decelerate the flow and achieve efficacious combustion. A proper comprehension of dynamic shock wave interactions over simple moving geometries, such as a wedge, will give insights into the limitations of the moving supersonic intake for an efficient operation.
When a supersonic flow impinges a wedge, an incident shock is generated. Then, it reflects on the mid-plane of symmetry, creating a second shock. Figure 1 shows that the reflection of the incident shock on the mid-plane will follow one of two configurations, Regular Reflection (RR) or Mach Reflection (MR), which depends on the free-stream Mach number, \(M_{\infty}\), and the incident shock angle, \(\beta\), [1, 2]. The structure of RR is formed of two shock waves, the incident shock wave (I), and the reflected shock wave (R), as shown in Figure 1a. They gather on the reflecting surface at a point, called the reflection point (RP). The reflected shock deforms slightly from a straight line due to the interference with the Expansion fans (E), generated at the trailing edge of the wedge. On the other hand, the structure of the MR includes three shock waves, the incident shock wave (I),
the reflected shock wave (R), and the Mach stem (MS), which all meet at the triple point (TP) and a slip line (S) appears, as shown in Figure 0(b). The slip line is formed due to the difference in the flow parameters behind the reflected shock and the Mach stem. The subsonic region behind the Mach stem bounded by the slip lines and the sonic throat (ST), is called the subsonic pocket (SP). Again, expansion fans (E), interfere with the reflected shock and bend it. The weak waves, that propagate behind the reflected shock from expansion waves, reach the slip line causing the generation of Kelvin-Helmholtz vortices (KHV). The transition from RR to MR over a wedge occurs when the wedge angle is large enough, that the reflected shock is no longer able to turn the flow parallel to the mid-plane. Thus, a normal shock is created and MR happens.
John von Neumann [3] was the first to introduce two different criteria for the transition between RR and MR for symmetrical reflection, known as the detachment criterion and the von Neumann (or mechanical equilibrium) criterion. The detachment criterion is used to determine the transition from RR to MR and denotes the maximum shock deflection angle \(\beta_{D}\) at which a RR configuration is theoretically possible. At a higher angle, the reflection point is forced to detach from the mid-plane, forming the Mach stem. The von Neumann criterion denotes the theoretical limit of the shock deflection angle \(\beta_{N}\) for the MR configuration. Subsequently, the length scale criterion was proposed by Hornung et al. [4]. This criterion assumes that MR occurs at a certain length of the Mach stem. Moreover, the sonic criterion is also used to define the transition from RR to MR, when the flow beyond the reflected shock becomes sonic, which assures the beginning of the Mach stem. Thus, the detachment criterion and the sonic criterion are very close. Furthermore, both RR and MR are possible solutions within a range of wave angles (\(\beta_{N}<\beta<\beta_{D}\)) at relatively high Mach numbers (\(M_{\infty}>2.2\) for a perfect gas with the specific heat ratio \(\gamma=1.4\)), which is called the Dual Solution Domain (DSD) [5]. The supposition of hysteresis within the DSD in steady high-speed flows was first assumed by Hornung et al. [4]. Then, they tried to confirm this hypothesis experimentally [5], but they observed no hysteresis because of the disturbances in the wind tunnel. After that, Chpoun et al. [6] thoroughly studied the hysteresis phenomenon experimentally. In addition, Ivanov et al. used different numerical methods, such as Direct Simulation Monte Carlo [7] and the Euler approach [8], and also used the kinetic and continuum approaches [9], to carefully investigate the hysteresis.
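As a numerical illustration of the detachment criterion in the steady case (a minimal sketch, independent of the solver used in this paper; all function names are ours), one can locate the static transition angle at \(M_{\infty}=3\) by marching the wedge angle and checking whether the reflected shock can still turn the flow, which is at Mach number \(M_{1}\) behind the incident shock, back parallel to the mid-plane:

```python
import math

GAMMA = 1.4  # ratio of specific heats for a perfect gas

def deflection(M, beta):
    """Theta-beta-M relation: flow deflection angle through an oblique shock."""
    return math.atan(2.0 / math.tan(beta)
                     * (M * M * math.sin(beta) ** 2 - 1.0)
                     / (M * M * (GAMMA + math.cos(2.0 * beta)) + 2.0))

def beta_star(M, n=2000):
    """Wave angle that maximizes the deflection (detachment point)."""
    mu = math.asin(1.0 / M)  # Mach angle
    return max((mu + (0.5 * math.pi - mu) * i / n for i in range(1, n)),
               key=lambda b: deflection(M, b))

def theta_max(M):
    """Maximum deflection angle for an attached oblique shock."""
    return deflection(M, beta_star(M))

def beta_weak(M, theta):
    """Weak-branch wave angle for a given deflection angle, by bisection."""
    lo, hi = math.asin(1.0 / M) + 1e-12, beta_star(M)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if deflection(M, mid) < theta else (lo, mid)
    return 0.5 * (lo + hi)

def mach_behind(M, beta, theta):
    """Mach number behind an oblique shock (wave angle beta, deflection theta)."""
    mn1 = M * math.sin(beta)
    mn2 = math.sqrt((1.0 + 0.5 * (GAMMA - 1.0) * mn1 ** 2)
                    / (GAMMA * mn1 ** 2 - 0.5 * (GAMMA - 1.0)))
    return mn2 / math.sin(beta - theta)

M_INF, theta_w = 3.0, 15.0
while theta_w < 30.0:  # sweep the wedge angle in degrees
    th = math.radians(theta_w)
    b = beta_weak(M_INF, th)
    M1 = mach_behind(M_INF, b, th)
    # RR requires the reflected shock to turn the flow (at Mach M1) back by th;
    # the detachment criterion fails once th exceeds theta_max(M1)
    if th > theta_max(M1):
        print(f"static RR->MR (detachment) near theta = {theta_w:.2f} deg, "
              f"beta = {math.degrees(b):.2f} deg")
        break
    theta_w += 0.05
```

Running this sketch returns values close to the static transition quoted in this paper (a wedge angle near \(21.5^{\circ}\) and a wave angle near \(\beta_{t_{SC}}=39.5^{\circ}\)).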
The transition between RR and MR within the DSD can be achieved using pulsed energy deposition [10, 11] or the movement of the wedge [12]. Felthun and Skews [13] simulated the dynamic transition between RR and MR over a wedge rotating around its leading edge at \(M_{\infty}=3\) and various rates. They found that the nature of the transition depends on the rate of the wedge rotation. At a very low trailing-edge Mach number (\(M_{t}=0.001\)), the transition occurs near the theoretical limit. On the other hand, when the wedge angle changes at higher rates, the transition angles are beyond the DSD. Further, the lag between the dynamic transition angle, \(\beta_{t}\), from RR to MR and the transition wave angle in the static case (\(\beta_{t_{SC}}=39.5^{\circ}\)) increases rapidly at relatively low \(M_{t}\) to an asymptotic value around \(42^{\circ}\) at \(M_{t}>0.05\). Later on, Naidoo and Skews [14] continued the work using a rapidly rotating wedge and studied the transition between RR and MR in the weak- and strong-reflection ranges. Their experimental and numerical results showed that the transition from RR to MR happens beyond the steady-state detachment criterion, and the transition from MR to RR occurs below the von Neumann theoretical limit. Furthermore, Ivanov et al. [8] conducted numerical and experimental investigations over a wedge rotating around its trailing edge in a supersonic flow with \(M_{\infty}=4\). The numerical simulations were used to study the hysteresis. In addition, their experiments showed that the transition between RR and MR occurs around the von Neumann angle.
Numerical simulations were conducted in the present work to investigate the dynamic transition from RR to MR using a two-dimensional wedge at a free-stream Mach number \(M_{\infty}=3\). The trailing edge of the wedge moves horizontally upstream (the wedge height is kept constant) with a constant reduced frequency, \(\kappa\). Further, different values of \(\kappa\) were tested to study the lag in the transition, and the development of the Mach stem height.
## 2 Physical model
### Model Description
The flow configuration of two symmetrical wedges in a supersonic flow with a free-stream Mach number, \(M_{\infty}=3\), is shown in Figure 2. The height of the computational inflow boundary is \(2H\), the height of each wedge is \(h\), the initial wedge chord is \(w(0)=1\), the stream-wise length of the wedge is \(L(t)\), the length of the wedge plus flat plate is \(L_{t}\), and the time-dependent wedge angle is \(\theta(t)\). The trailing edge of the wedge moves in the \(x\)-direction with velocity, \(V(t)\), from its initial location. Thus, the wedge angle, \(\theta(t)\), and wedge length, \(L(t)\), change during the motion, while \(H\), \(h\) and \(L_{t}\) remain constant. The flow is assumed two-dimensional and inviscid, and the gas is calorically and thermally perfect. When the supersonic flow is incident on a wedge, the wedge angle generates an oblique shock wave that is reflected on the plane of symmetry. According to the angle of the wedge, there are two scenarios for the reflected shock: either a RR configuration or a MR configuration is formed. The dynamic transition from RR to MR during the increase of \(\theta(t)\) from \(19^{\circ}\) to \(34^{\circ}\) is studied for different constant values of \(\kappa\) in the range \([0.1,2]\) with a step of 0.1. Due to the lag in the shock system, there are two wave angles that can be measured: the wave angle at the reflection/triple point, \(\beta_{p}\), and the tangent wave angle at the wedge's apex, \(\beta_{tang}\). Table 1 shows the values of these parameters.

Figure 1: The shock structure of the RR and MR configurations, respectively.
### Governing Equations
Two-dimensional unsteady compressible Euler equations are used to model the supersonic flow over a wedge and are expressed in the conservative form as:
\[\frac{\partial Q}{\partial t}+\frac{\partial F}{\partial x}+\frac{\partial G }{\partial y}=0, \tag{1}\]
where
\[Q=\begin{bmatrix}\rho\\ \rho u\\ \rho v\\ \rho e\end{bmatrix},\quad F=\begin{bmatrix}\rho u\\ \rho u^{2}+p\\ \rho uv\\ u(\rho e+p)\end{bmatrix},\quad G=\begin{bmatrix}\rho v\\ \rho uv\\ \rho v^{2}+p\\ v(\rho e+p)\end{bmatrix} \tag{2}\]
The static pressure is obtained from
\[p=(\gamma-1)\left(\rho e-\rho\frac{u^{2}+v^{2}}{2}\right) \tag{3}\]
where \(u\) and \(v\) are the velocity components in the Cartesian coordinates \(x\) and \(y\), respectively, \(\rho\), \(p\) and \(e\) are the density, the pressure, and the internal energy of the flow field, respectively, and \(\gamma\) is the specific heat ratio of air, which equals 1.4.
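For concreteness, the recovery of the primitive variables from the conservative vector \(Q\) via Eq. (3) can be written out as follows (a minimal sketch of ours, not the solver's actual implementation):

```python
import math

GAMMA = 1.4  # specific heat ratio of air

def primitives(Q):
    """Recover (rho, u, v, p) from Q = (rho, rho*u, rho*v, rho*e); p from Eq. (3)."""
    rho, mx, my, rho_e = Q
    u, v = mx / rho, my / rho
    p = (GAMMA - 1.0) * (rho_e - 0.5 * rho * (u * u + v * v))
    return rho, u, v, p

# Example: free-stream air at sea-level conditions, M = 3
rho, p = 1.225, 101325.0
a = math.sqrt(GAMMA * p / rho)                     # speed of sound
u, v = 3.0 * a, 0.0
rho_e = p / (GAMMA - 1.0) + 0.5 * rho * (u * u + v * v)
print(primitives((rho, rho * u, rho * v, rho_e)))  # recovers (1.225, 3a, 0, 101325)
```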
### Equations of Motion
The trailing edge of the wedge moves horizontally with velocity \(V(t)\) from its initial location with a constant wedge angular velocity, \(\omega=d\theta/dt\) (\(sec^{-1}\)). Because the wedge height, \(h\), is kept constant, the increase in the wedge angle, \(\theta(t)\), changes only the wedge stream-wise length, \(L(t)\), and they are expressed as:
\[L(t)=h\ cot\left(\theta(t)\right) \tag{4a}\] \[\theta(t)=tan^{-1}\left(\frac{h}{L(0)}\right)+\omega t \tag{4b}\]
where \(L(0)\) is the initial wedge stream-wise length at the starting time, \(t=0\), of the motion.
Further, the dimensional velocity of the trailing edge of the wedge is expressed as:
\[V_{t}(t)=\omega\ h\sqrt{1+cot^{2}\left(\theta(t)\right)} \tag{5}\]
In addition, the wedge angular velocity, \(\omega\), and the time of the motion, \(t\), are normalized using the free-stream velocity and the initial wedge stream-wise length at time \(t=0\), \(L(0)\), to introduce the non-dimensional frequency, \(\kappa\), and the non-dimensional time, \(\tau\), which are defined as:
\[\kappa=\frac{\omega\ L(0)}{U_{\infty}} \tag{6a}\] \[\tau=\frac{t\ U_{\infty}}{L(0)} \tag{6b}\]
Therefore, the trailing edge Mach number can be written as:
\[M_{t}(\tau)=\kappa\ M_{\infty}\ tan\left(\theta(0)\right)\sqrt{1+cot^{2}\left( \theta(\tau)\right)} \tag{7}\]
where \(\theta(\tau)\), the wedge angle as a function of non-dimensional time, is defined as:
\[\theta(\tau)=\theta(0)+\kappa\tau \tag{8}\]
where \(\theta(0)\) is the initial wedge angle at the starting non-dimensional time, \(\tau=0\), of the motion.
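The wedge kinematics of Eqs. (6)-(8) can be scripted directly. The sketch below (ours, for illustration only) evaluates the wedge angle and the trailing-edge Mach number over the motion for a given reduced frequency:

```python
import math

M_INF = 3.0
THETA0 = math.radians(19.0)   # initial wedge angle
THETAF = math.radians(34.0)   # final wedge angle

def theta(tau, kappa):
    """Wedge angle as a function of non-dimensional time, Eq. (8)."""
    return THETA0 + kappa * tau

def trailing_edge_mach(tau, kappa):
    """Trailing-edge Mach number, Eq. (7); sqrt(1 + cot^2) = 1/sin."""
    th = theta(tau, kappa)
    return kappa * M_INF * math.tan(THETA0) * math.sqrt(1.0 + 1.0 / math.tan(th) ** 2)

kappa = 0.5
tau_final = (THETAF - THETA0) / kappa   # time to sweep 19 -> 34 degrees
for tau in (0.0, 0.25 * tau_final, 0.5 * tau_final, tau_final):
    print(f"tau = {tau:5.3f}  theta = {math.degrees(theta(tau, kappa)):5.2f} deg  "
          f"M_t = {trailing_edge_mach(tau, kappa):5.3f}")
```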
## 3 Computational Model
### Numerical Implementation
_rhoCentralDyMFoam_ is a transient, density-based compressible flow solver with support for mesh motion, implemented in OpenFOAM(r)-v2006.
\begin{table}
\begin{tabular}{c c}
Initial wedge’s chord, \(w(0)\) & \(1m\) \\ \hline
Initial wedge angle, \(\theta(0)\) & \(19^{\circ}\) \\
Final wedge angle, \(\theta(t_{f})\) & \(34^{\circ}\) \\
Wedge height to half domain height, \(\frac{h}{H}\) & \(0.3617\) \\
Initial wedge length to half domain height, \(\frac{L(0)}{H}\) & \(1.0506\) \\
Total wedge length to half domain height, \(\frac{L_{t}}{H}\) & \(2\) \\
Free-stream Mach number, \(M_{\infty}\) & \(3\) \\
Reduced frequency, \(\kappa\) & \(0.1:0.1:2\) \\
\end{tabular}
\end{table}
Table 1: SYSTEM PROPERTIES AND PARAMETERS.
Figure 2: THE FLOW CONFIGURATION OF THE HORIZONTAL MOTION OF TWO SYMMETRICAL WEDGES IN A SUPERSONIC FLOW.
The solver's technique combines the semi-discrete and upwind-central non-staggered schemes of Kurganov and Tadmor [15, 16]. These schemes are implemented in the solver in order to avoid the use of Riemann solvers or characteristic decomposition [17]. In addition, _rhoCentralDyMFoam_ depends on an operator-splitting method to solve both the momentum and energy equations. Further, explicit predictor equations are implemented for the convection of the conserved variables, whereas implicit corrector equations are used for the diffusion of the primitive variables [17]. The van Leer limiter, used in _rhoCentralDyMFoam_, efficiently balances the performance of the solution through shock capture, oscillation-free fields, and computational cost [18].
### Computational Domain
Since the problem is symmetric in the geometry and the flow behavior, only half of the domain is computed. The computational domain is body-fitted for the wedge with an initial wedge angle of \(19^{\circ}\), as shown in Figure 3. The streamwise length and the transverse length of the domain are kept constant during the simulations at \(2.2w(0)\) and \(0.9w(0)\), respectively, where \(w(0)\) is the initial wedge chord length at \(t=0\). The origin is placed at the leading edge point of the wedge, and the total length of the wedge plus flat plate, \(L_{t}\), is \(1.8w(0)\). The domain is bounded by four boundaries: inlet, outlet, bottom, and top. The flow enters the domain with a supersonic Mach number, \(M_{\infty}=3\), and free-stream pressure and temperature at sea level. At the outlet boundary, a zero-gradient boundary condition is applied to all of the flow variables, in order to set the outlet field to the internal field value. The bottom boundary (the wedge and the flat plate in front of the wedge) has a slip velocity boundary condition with a zero-gradient boundary condition for the pressure and temperature. The symmetry plane boundary conditions are applied at the top surface.
### Grid Generation
In this study, the delay in the dynamic transition from the static-case value is mainly caused by two factors: the mesh resolution and the rate of motion of the wedge trailing edge. The moving rate will be discussed further in the results section. Hence, a mesh-independence test is critical for accurate results. A 2D ordered grid of quadrilateral elements was used to discretize the computational domain. The domain was divided into 9 blocks, as shown in Figure 3. Orthogonal grids were generated in the blocks near the wedge surface and the mid-plane of symmetry to reduce the computational noise in the results. Moreover, the middle blocks' edges were curved to improve the grid orthogonality and maintain the same aspect ratio in that region.
In order to accurately capture the transition point from RR to MR and the Mach stem height, a mesh-independence study was performed. Different mesh sizes were tested at a free-stream Mach number of 3 and a reduced frequency of \(\kappa=0.726\). The study started with mesh 1, of \(328\times 90\) elements. Then, the mesh size was doubled to generate mesh 2, mesh 4, mesh 8, and mesh 16. The time step was allowed to change during each simulation, but it was controlled by an upper limit on the Courant-Friedrichs-Lewy (CFL) number, which was set to 0.2. Figure 4 indicates the variation of the Mach number along the mid-plane of symmetry (\(y=0.9\)) at the sonic criterion (\(M_{transition\ point}\cong 1\)). It shows the convergence of the results with the increase in the mesh size.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Mesh \# & Mesh Size & \(\beta_{t_{p}}\) (\({}^{\circ}\)) & \(\frac{\text{MS}}{L(0)}\times 10^{-2}\) & \multicolumn{2}{c}{\(|Error|\,\%\)} \\ & & & & \(\beta_{t_{p}}\) & \(\frac{\text{MS}}{L(0)}\) \\ \hline
1 & 328 \(\times\) 90 & 41.39 & 5.29 & 1.22 & 46.4 \\
2 & 656 \(\times\) 180 & 41.15 & 7.34 & 0.62 & 25.6 \\
4 & 1312 \(\times\) 360 & 41.03 & 8.75 & 0.33 & 11.3 \\
8 & 2624 \(\times\) 720 & 40.96 & 9.53 & 0.17 & 3.4 \\
16 & 5248 \(\times\) 1440 & 40.89 & 9.87 & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: Independent Mesh Test: ABSOLUTE PERCENTAGE ERROR OF THE TRANSITION WAVE ANGLE AT THE REFLECTION POINT, \(\beta_{T_{p}}\), AND THE MACH STEM HEIGHT, MS, AT WEDGE ANGLE, \(\theta\)=\(27^{\circ}\).
Figure 4: Independent Mesh Study: THE MACH NUMBER VARIATION ALONG THE TOP SYMMETRY PLANE AT \(Y=0.9\), AND AT THE SONIC CRITERION OF TRANSITION, USING DIFFERENT GRID SIZES.
Figure 3: Schematic of the Computational Domain, THE BOUNDARY, AND INITIAL CONDITIONS, FOR MESH 1
Moreover, the deviation between mesh 8 and mesh 16 in the Mach distribution can be neglected. Further, Table 2 compares the different mesh sizes, showing the absolute percentage error of the transition wave angle at the reflection point, \(\beta_{t_{p}}\), and the Mach stem height, MS, at a wedge angle of \(\theta=27^{\circ}\). It is clear that for the coarsest mesh (\(328\times 90\)), the error percentage in the developing Mach stem height, MS, at a wedge angle of \(\theta=27^{\circ}\) is around 50%, which is a huge error that would be misleading in determining the dynamic transition point. Moreover, this error percentage declines rapidly as the mesh size is doubled. Therefore, both mesh 8 and mesh 16 provide more precise results. In order to reduce the computational time, mesh 8 is selected for all the simulations. The minimum element size in mesh 8 is \(0.75mm\times 0.78mm\), and the time step to ensure the CFL number of 0.2 is \(93.5ns\).
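As a rough check of grid convergence (our own post-processing of the tabulated values, with refinement ratio \(r=2\)), the observed order of convergence and a Richardson extrapolation of the Mach stem height can be estimated from Table 2:

```python
import math

# MS/L(0) x 1e-2 at theta = 27 deg, from Table 2 (meshes 1, 2, 4, 8, 16)
ms = [5.29, 7.34, 8.75, 9.53, 9.87]
r = 2.0  # refinement ratio between successive meshes

for i in range(len(ms) - 2):
    p = math.log((ms[i + 1] - ms[i]) / (ms[i + 2] - ms[i + 1]), r)
    print(f"meshes {2**i}-{2**(i+1)}-{2**(i+2)}: observed order p = {p:.2f}")

# Richardson extrapolation from the two finest meshes, using the last p
p = math.log((ms[-2] - ms[-3]) / (ms[-1] - ms[-2]), r)
ms_exact = ms[-1] + (ms[-1] - ms[-2]) / (r ** p - 1.0)
print(f"extrapolated MS/L(0) ~ {ms_exact:.2f} x 1e-2")
```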
## 4 Verification
The dynamic-motion code inserted into the OpenFOAM solver is verified using the problem of supersonic flow over a rotating wedge. The angle of a unit-chord wedge is varied with different trailing-edge Mach numbers, \(M_{t}\), in a flow with a free-stream Mach number of 3. The observed transition angles, \(\beta_{t}\), from regular to Mach reflection are obtained for these wedge-rotation simulations. Figure 5 shows the comparison of the pressure contours for the flow over the rotating wedge with \(M_{t}=0.05\) at different wedge angles, \(\theta\). Further, Figure 6 indicates that our results agree with the work of Felthun and Skews [13] within the margin of accuracy of the measurements.
## 5 Results and Discussion
The dynamic transition from RR to MR was studied using a horizontally moving wedge at \(M_{\infty}=3\) and was modeled by starting with RR in a steady flow with a wedge angle of \(19^{\circ}\) and increasing it to \(34^{\circ}\). The sonic and detachment criteria were used to determine the transition point. Different reduced frequencies (\(0.1:0.1:2\)) were tested to indicate the lag effect in the transition angles, \(\theta_{t}\) and \(\beta_{t}\), and the Mach stem height, MS. The system properties and parameters used are listed in Table 1.
### Dynamic Transition from RR to MR
The sudden motion of the wedge starts horizontally from the steady state at \(\theta=19^{\circ}\) with different reduced frequencies, \(\kappa\). This movement affects the shock system. A lag appears in the incident shock wave angle behind the steady-state value at the same wedge angle, causing a curvature in the incident shock, which is indicated by the tangent wave angle, \(\beta_{tang}\). Further, the strength of these effects depends on the value of \(\kappa\). The dynamic transition from RR to MR is defined according to the sonic and detachment criteria. Figure 7 is a close-up view of the two transition criteria over the used mesh at \(\kappa=0.1\). The time instant of the sonic criterion is determined when there is a point with a Mach number of 1 on the top surface, as shown in Figure 7 (a). After a non-dimensional time of 3.46 for the case of \(\kappa=0.1\), a sharp bend in the reflected shock wave happens, detaching the shock from the symmetric surface and generating the triple point. This would decrease the Mach number below 1, and when the Mach number reaches 0.475 (the Mach number after a normal shock) at a point on the symmetric plane, the detachment criterion is fulfilled, as shown in Figure 7 (b). Further, in between the sonic and detachment limits, a Regular Reflection with Subsonic Downstream Flow (RRs) occurs instead of Regular Reflection (RR). In addition, Figure 14 shows the lag and curvature effects in the system with \(\kappa=0.5\), where the transition happens at a wedge angle of \(\theta_{{}_{SL}}=23.15^{\circ}\pm 0.1^{\circ}\) at the sonic limit and \(\theta_{{}_{DL}}=23.87^{\circ}\pm 0.1^{\circ}\) at the detachment limit, while the transition of the static case occurs at a wedge angle \(\theta_{{}_{SC}}=21.5^{\circ}\).
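The detachment-limit threshold of 0.475 quoted above is simply the Mach number behind a normal shock at \(M_{\infty}=3\), which can be verified directly (a one-line check of ours):

```python
import math

GAMMA, M1 = 1.4, 3.0
M2 = math.sqrt((1.0 + 0.5 * (GAMMA - 1.0) * M1 ** 2)
               / (GAMMA * M1 ** 2 - 0.5 * (GAMMA - 1.0)))
print(f"Mach number behind a normal shock at M = {M1}: {M2:.3f}")  # -> 0.475
```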
Figure 5: PRESSURE CONTOURS AT DIFFERENT WEDGE ANGLES AND \(M_{T}=0.05\). THE PRESENT WORK IS SHOWN IN THE COLORED LEFT SUBFIGURES, WHEREAS THE WORK OF FELTHUN AND SKEWS [13] IS SHOWN IN THE RIGHT SUBFIGURES.
Figure 6: Validation with the work of Felthun and Skews [13] by measuring the effect of trailing-edge Mach number on transition angles from regular to Mach reflection.
The investigation of the transition from RR to MR was conducted by studying the change of the transition non-dimensional times, \(\tau_{t}\), transition wedge angles, \(\theta_{t}\), and transition wave angles, \(\beta_{t}\), at the two criteria with various \(\kappa\). The increase in reduced frequency represents an increase in the velocity of the moving wedge. Thus, the required time to change the state will decrease with the increase of \(\kappa\). After decreasing significantly at low values of \(\kappa\), the transition non-dimensional time, \(\tau_{t}\), decreases gradually for values of \(\kappa\) higher than 0.4, reaching \(\tau_{t}=4\) from the starting wedge angle of \(\theta=19^{\circ}\) at \(\kappa=2\), as shown in Figure 8 (right). Further, the change in the non-dimensional time between the detachment and sonic limits decreases, dramatically at low values of \(\kappa\) and slightly at high values of \(\kappa\), from 3.46 in the case of \(\kappa=0.1\) to 0.95 in the case of \(\kappa=2\). In addition, the lag in the flow appears with the wedge motion. Thus, time is needed to transfer the information from the wedge apex to the end of the incident shock, during which the wedge continues its motion. That is why the transition wedge angle in the dynamic cases is larger than that of the static case. This is shown in Figure 8 (left) with a Gaussian curve fit; the uncertainty in measuring the angles is \(\pm 0.1^{\circ}\). Consequently, increasing the motion rate of the wedge, \(\kappa\), increases the gap between the dynamic and the static transition wedge angles. Hence, the reduced frequency generates a spatial and temporal lag in the system.
The difference between the tangent transition wave angle, \(\beta_{t_{\rm tang}}\), and the transition wave angle at the reflection/triple point, \(\beta_{t_{\rm RP}}\), shown in Figure 9, emphasizes the lag in the propagation of the information along the incident shock wave. Even at a very low value of \(\kappa\), such as \(\kappa=0.1\), the lag in \(\beta_{t_{\rm tang}}\) from the static case (about \(2.5^{\circ}\pm 0.1^{\circ}\)), which represents the curvature of the incident shock at the apex of the wedge, is relatively large with respect to the lag in \(\beta_{t_{\rm RP}}\) (about \(0.5^{\circ}\pm 0.1^{\circ}\)). At values of \(\kappa\) smaller than 0.4, the differences in wave angles, \(\beta_{t_{\rm tang}}\) and \(\beta_{t_{\rm RP}}\), between the sonic and detachment criteria are within a fraction of a degree. Further, the differences increase gradually and reach \(2.5^{\circ}\pm 0.1^{\circ}\) for \(\beta_{t_{\rm tang}}\) and \(3.2^{\circ}\pm 0.1^{\circ}\) for \(\beta_{t_{\rm RP}}\) at \(\kappa=2\).
Another way to study the lag in the shock wave system is through the ratio of the difference between the wave angles in the dynamic and static cases to the difference between the wedge angles in the dynamic and static cases, \((\beta_{t}-\beta_{t_{\rm SC}})/(\theta_{t}-\theta_{t_{\rm SC}})\), as shown in Figure 10. This ratio represents the ratio of the speed of information propagation along the incident shock wave to the speed of wedge motion. The ratio decreases steeply at \(\kappa\) smaller than 0.4; i.e., the speed of information propagation decreases dramatically with the increase of \(\kappa\) in the low-value range. On the other hand, the ratio decreases almost linearly with a lower negative slope at values of \(\kappa\) larger than 0.4.
Figure 8: The variation of the transition wedge angle, \(\theta_{t}\), and the transition non-dimensional time, \(\tau_{t}\), with various reduced frequencies, \(\kappa\), at the sonic limit, SL, and the detachment limit, DL. Gray straight lines are plotted to indicate the non-linear effect of the lag.
Figure 7: Density contours when the dynamic transition from RR to MR occurs with \(\kappa=0.1\) at: (a) sonic and (b) detachment criteria.
Figure 9: The variation of the transition wave angle, \(\beta_{t}\), with various reduced frequencies, \(\kappa\), at the sonic limit, SL, and the detachment limit, DL. Gray straight lines are plotted to indicate the non-linear effect of the lag.
### Mach Stem Height
In order to examine the effect of changing the movement rate of the wedge on the development of the Mach stem, MS, its height, \(\mathrm{MS}/L(0)\), was measured at a certain wedge angle, \(\theta=27^{\circ}\), for different values of \(\kappa\), as shown in Figure 15. In the static case, MR occurs at the wedge angle \(\theta_{SC}=27^{\circ}\) with \(\mathrm{MS}_{SC}/L(0)=0.308\). On the other hand, the dynamic Mach stem height decreases with the increase of \(\kappa\). At very high values of \(\kappa\), the Mach stem disappears and RR occurs, due to the excessive lag in transition, as shown in Figure 15. Moreover, Figure 11 shows that longer time scales (small values of \(\kappa\)) enable the Mach stem to develop close to the static value; for example, \(\mathrm{MS}/L(0)\) is about \(99.4\%\) of the static value at \(\kappa=0.1\). Further, the deviation of the dynamic Mach stem height from the static case, \(\mathrm{MS}_{SC}/L(0)-\mathrm{MS}/L(0)\), increases gradually with \(\kappa\), reaching \(98.4\%\) of the static value at \(\kappa=1.3\).
stream Mach number of 3 and different values of \(\kappa\). The horizontal upstream motion of the wedge (increasing the wedge angle) generates a lag in \(\beta_{t}\), placing it above the DSD. Furthermore, increasing the reduced frequency moves the transition angle farther away from the DSD. At high values of \(\kappa\), \(\beta_{t}\) becomes very close to the sonic reflected shock condition (MRs). This was indicated by a small triangular subsonic zone in the reflection domain between the slip-line and the reflected shock at \(\kappa=1.5\) and 2 and \(\theta=34^{\circ}\), as shown in Figure 13. At \(\kappa=2\), the triangular subsonic zone is very small due to the excessive lag in the shock system, which delays the development of MS. Thus, it became very close to the expansion fan that quickly accelerates the flow to supersonic speeds.
## 6 Conclusion
The current research work investigates the dynamic transition from RR to MR over a two-dimensional slip wedge at a free-stream Mach number of 3. The trailing edge of the wedge moves horizontally upstream with various reduced frequencies, \(\kappa=0.1:0.1:2\). The phenomenon was studied by analyzing the variation of the transition angles and examining the lag in the development of the Mach stem with various reduced frequencies. Further, a comparison between the dynamic and theoretical static transition was made. The major conclusions of the study are:
* The time scale needed to reach the transition declines rapidly at low values of \(\kappa\), and slowly at high values of \(\kappa\). However, the lag in the transition parameters, the wedge angle and the wave angles, changes gradually with \(\kappa\).
* The ratio of the speed of information propagation along the incident wave to the speed of wedge motion falls quickly at values of \(\kappa\) lower than 0.4 and decreases at an almost constant, lower rate at higher values.
* Although the difference between the theoretical sonic and detachment conditions is within a degree, the difference between the dynamic sonic and detachment limits grows beyond a degree for \(\kappa>0.4\).
* The growth of the Mach stem is very sensitive to the numerical resolution and the lag due to the motion of the wedge with a reduced frequency, \(\kappa\).
* At low values of \(\kappa\), the dynamic transition occurs slightly above the theoretical detachment limit (MR). In contrast, the dynamic transition wave angle tends to reach the physical limit of the Mach Reflection with Subsonic Downstream Flow (MRs) at high values of \(\kappa\).
|
2309.00026 | Spectral solutions for the Schrödinger equation with a regular
singularity | We propose a modification in the Bethe-like ansatz to reproduce the hydrogen
atom spectrum and the wave functions. Such a proposal provided a clue to
attempt the exact quantization conditions (EQC) for the quantum periods
associated with potentials V (x) which are singular at the origin. In a
suitable limit of the parameters, the potential can be mapped to |x| potential.
We validate our EQC proposition by numerically computing the Voros spectrum and
matching it with the true spectrum for |x| potential. Thus we have given a
route to obtain the spectral solution for the one dimensional Schr\"odinger
equation involving potentials with regular singularity at the origin. | Pushkar Mohile, Ayaz Ahmed, T. R. Vishnu, Pichai Ramadevi | 2023-08-31T13:05:59Z | http://arxiv.org/abs/2309.00026v2 | # Spectral solutions for the Schrodinger equation with a regular singularity
###### Abstract
We propose a modification in the Bethe-like ansatz to reproduce the hydrogen atom spectrum and the wave functions. Such a proposal provided a clue to attempt the exact quantization conditions (EQC) for the quantum periods associated with potentials \(V(x)\) which are singular at the origin. In a suitable limit of the parameters, the potential can be mapped to \(|x|\) potential. We validate our EQC proposition by numerically computing the Voros spectrum and matching it with the true spectrum for \(|x|\) potential. Thus we have given a route to obtain the spectral solution for the one dimensional Schrodinger equation involving potentials with regular singularity at the origin.
## 1 Introduction
The Bethe ansatz is one of the most powerful tools in the study of quantum integrable systems. It has profound applications in integrable spin chains. In fact, such an ansatz enables diagonalisation of Hamiltonian and obtains energy spectrum using simple algebraic arguments [1; 2].
Even though the Bethe ansatz has been studied extensively in the context of integrable systems, there are interesting features that can be applied to other models. One such striking feature is to compute the quantum spectrum from the classical limit of the integrable model [3]. Such calculations are based on the asymptotes of a set of polynomial equations, which we get from the ansatz, called the Bethe equations. A nice review with applications to integrable quantum field theories and spin systems can be found in [4]. In fact, this review article and [3] illustrate the exact energy spectrum of one-dimensional quantum harmonic oscillator (QHO) from a Bethe-like ansatz. We believe such a neat concise Bethe-like ansatz approach must be generalisable for other quantum mechanical systems.
Hence, we investigated this approach for the hydrogen atom and proposed a modification in the Bethe-like ansatz. Interestingly, we succeeded in reproducing the energy spectrum and the corresponding wave functions.
The well-behaved nature of the wave function in quantum mechanics forces the Bethe-like ansatz for the pseudo-momentum \(p_{0}(x)\) (whose asymptotic behaviour in \(\hbar\to 0\) matches the classical momentum \(p_{cl}(x)\)) to have only simple poles. Unfortunately, we obtain higher order poles for any general polynomial potential of degree greater than 2. Hence, the Bethe-like approach fails for such potentials.
The natural extension of the Bethe-Like ansatz should be the exact WKB (Wentzel-Kramers-Brillouin) method [5; 6] and the 'thermodynamic Bethe Ansatz' (TBA) equations governing the quantum WKB periods [7]. In fact, this route led to the energy spectrum for monic potentials [7] and general polynomial potentials [8]. TBA involves Borel transform and Borel re-summation techniques [6; 8] to handle diverging series as well as capture the singularities in the Borel plane. In fact, these discontinuities in the Borel transform encode information about other perturbative series associated with different classical configurations [9; 10].
For QHO and hydrogen atom, no new information is obtained using TBA approach. However, for higher order polynomial potentials, we can capture the information about the other zeros of the potential from the singularity structure of the Borel re-summed function. TBA approach is definitely powerful in computing the quantum periods for general polynomial potentials \(V(x)=\sum_{n}a_{n}x^{n}\)[8]. Further, the exact WKB method advocated by Voros-Silverstone leads to an exact quantization condition (EQC) obeyed by the quantum periods [8; 11; 12]. Thus the spectral solutions for any general polynomial potentials can be obtained. They have been numerically presented for the polynomial potential with a suitable choice of parameters \(\{a_{n}\}\)[8; 11].
For other potentials with a simple pole and a double pole at the origin, the modification in the TBA analysis has been systematically elaborated in [13]. However, due to the singularity at the origin, the EQC is still an open problem; Bohr-Sommerfeld quantization alone does not give the correct spectral solution.
The main theme of this paper is to propose a correction to the quantum period near the singular origin to modify the existing polynomial potential EQC. Such a proposal is motivated from our Bethe-like ansatz for the hydrogen atom. We validate our EQC proposition through an example whose energy spectrum is known.
We know that the wave functions for the \(|x|\) potential are the Airy functions. In fact, the zeros of the Airy function and its derivatives give the true spectrum [14; 15]. We performed a naive TBA approach for the \(|x|\) potential with two turning points and obtained the spectrum using the EQC of the QHO. Our calculated spectrum did not match the true spectrum for the low lying energy states. This exercise indicates that the conventional (TBA & EQC) approach, of finding spectral solutions for polynomial potentials, cannot be applied to the potentials with a derivative singularity at the origin.
Incidentally, the solutions for \(|x|^{2n+1}\) potentials using spectral determinant approach are discussed in [5; 16]. However, our aim was to modify the polynomial potential TBA as well as the EQC to reproduce the true spectrum for these potentials.
As a first step, we showed that the \(|x|\) potential can be viewed as the potential with a simple and a double pole [13] for a suitable limit of parameters. With this choice of parameters, we numerically computed the quantum periods using TBA equation. Then, using our EQC proposition we obtained the Voros spectrum. In fact, our numerical results for the Voros spectrum match well with the true spectrum. Our validation for \(|x|\) spectrum reinforces that the proposed EQC is applicable for the potentials with a regular singularity [13].
The plan of the paper is as follows: In section 2, we briefly review Bethe-like ansatz and present the spectrum of QHO. Then we propose a modification in the Bethe-like ansatz necessary to reproduce the hydrogen atom spectrum. In section 3, we have discussed exact WKB method and Borel resummation technique to deal with divergent perturbative series. We summarise the salient details of TBA in section 4 with some simple potentials as illustrative examples. We discuss the \(|x|\) potential and its relation to potentials with regular singularity in 5. In section 6, we focus on our proposition of EQC for potentials, singular at the origin. We summarise and present some of the open problems in the concluding section 7.
## 2 Bethe-Like Ansatz
For completeness and clarity, we will first review the salient features of Bethe-like ansatz approach for the quantum harmonic oscillator (QHO) spectrum [3; 4]. Then, we present our proposal of modified Bethe-ansatz giving hydrogen atom spectrum.
For QHO, a set of Bethe-like equations can be written for the roots of the wave function. This relies on the nonlinear transformation of time independent Schrodinger equation (TISE)
\[-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}\psi(x)+V(x)\psi(x)=E\psi(x), \tag{2.1}\]
into Riccati equation:
\[p^{2}-i\hbar p^{\prime}=2m(E-V), \tag{2.2}\]
where
\[p(x)=\frac{\hbar}{i}\frac{\psi^{\prime}(x)}{\psi(x)}. \tag{2.3}\]
Note that \(p(x)\) (2.3) has singularities at the zeros of the wave function \(\psi(x)\). Such singularities are handled by doing an analytic continuation of \(p(x)\to p(z)\) in the complex plane. The nature of the _complex function_ \(p(z)\) can be fixed from the generic behaviour of the wave function \(\psi(x)\), i.e., \(\psi(x)\) must be normalisable. Suppose we allow second or higher order poles for \(p(z)\),
\[p(z)\propto(z-a)^{-n}\text{ for }n\geq 2. \tag{2.4}\]
Then the wave function
\[\psi(z)\propto\exp\left(\frac{i}{\hbar}\int p(z)dz\right)\sim\exp\left[{\rm const}\times(z-a)^{1-n}\right],\]
has an essential singularity at \(z=a\). This implies that \(p(z)\) can have at most simple poles. Note that the roots of the bound state wave functions \(\psi(x)\) are discrete and isolated [17].
For highly excited states, we can take the classical limit \(\hbar\to 0\). Clearly, \(p(z)\) in the classical limit
\[\lim_{\hbar\to 0}\ p(z)\equiv p_{\rm cl}(z)=\pm\sqrt{2m(E-V(z))}, \tag{2.5}\]
denotes the familiar classical momentum of the particle, which has a branch cut singularity. It is puzzling where this branch cut emerges from in the classical limit. It can only be formed when the discrete poles present in \(p(z)\) 'condense' to a continuum as we approach the classical limit. Hence, we can conclude that the poles condense into the branch cut in the classical limit. Note that \(p(z)\) is applicable for the classically allowed region (\(E\geq V(x)\)) as well as the classically forbidden region (\(E<V(x)\)). Hence \(p(z)\) is referred to as the pseudo-momentum.
We will now review QHO spectrum from the Riccati equation to see the resemblance with Bethe ansatz equations.
Let us examine the classical limit \(p_{\rm cl}(z)\) (2.5) for QHO, of mass \(m\) and angular frequency \(\omega\), whose \(V(x)=V_{\rm QHO}=m\omega^{2}x^{2}/2.\) The function (2.5) has a square root type branch cut, with branch points at the two turning points
\[z=\pm\sqrt{2E/m\omega^{2}}.\]
Our aim is to determine the allowed energy eigenvalues \(E\) for QHO. In order to achieve this, we probe the asymptotic behaviour of \(p(z)\) as \(z\rightarrow+\infty\) on the real axis :
\[p\sim im\omega z+{\cal O}(\frac{1}{z})\ \ {\rm and}\ \ p^{\prime}\sim im \omega+{\cal O}(\frac{1}{z^{2}}).\]
Notice that the leading term in asymptotic \(p^{\prime}(z)\) is a constant and must be included so that Riccati equation gives
\[\lim_{z\rightarrow\infty}p(z)\equiv p_{o}(z) = \lim_{z\rightarrow\infty}\sqrt{2m\left[(E-\frac{\hbar\omega}{2})-\frac{m\omega^{2}z^{2}}{2}\right]} \tag{2.6}\] \[\sim im\omega z-i\frac{(E-\hbar\omega/2)}{\omega z}+{\cal O}(1/z^{3}),\]
where \(\lim_{\hbar\to 0}p_{o}(z)=p_{\rm cl}(z)\) (2.5). Notice that the asymptotic behaviour of \(p_{o}(z)\) is almost like the classical momentum (2.5), if we shift
\[E\to E-\frac{1}{2}\hbar\omega. \tag{2.7}\]
In fact, the branch cut of \(p_{o}(z)\) includes all the poles of \(p(z)\). It is exciting to see the natural emergence of quantum shift in the energy by \(\hbar\omega/2\) from the Riccati equation for QHO. The asymptotic behaviour of \(p(z)\equiv p_{o}(z)\), which has a branch cut, is due to the condensation of simple poles of \(p(z)\). This leads to the following Bethe-like ansatz for \(p(z)\) having \(N\)-simple poles:
\[p(z)=im\omega z+\frac{\hbar}{i}\sum_{j}^{N}\frac{1}{z-z_{j}}. \tag{2.8}\]
Here, we make the choice of sign in the leading term \((im\omega z)\) so that the wave function remains normalisable. The set \(\{z_{j}\}\) corresponds to the \(N\) roots arising from the nodes of the \(N^{th}\) excited eigenfunction. Incorporating the key observation of the Bethe-like approach, the contour integration around the branch cut in \(p_{o}(z)\) must give the residues due to the simple poles of \(p(z)\) (2.8):
\[\oint_{\gamma}\sqrt{2m\left[\left(E-\frac{\hbar\omega}{2}\right)-V(z)\right]}dz=2\pi i\sum_{j=1}^{N}\operatorname{Res}_{z_{j}}p(z)=2\pi\hbar N, \tag{2.9}\]
where \(N\) is the number of roots for the \(N^{th}\) excited state and \(\gamma\) is a contour around the branch cut. By doing this contour integral, we get
\[2\pi\left(E-\frac{\hbar\omega}{2}\right)=2\pi(N)\hbar\omega, \tag{2.10}\]
which gives us the energy spectrum of the QHO:
\[E=\left(N+\frac{1}{2}\right)\hbar\omega. \tag{2.11}\]
We have to deduce the wave functions \(\psi_{N}(x)\) corresponding to the \(N\)-th excited energy level from the Riccati equation. When we substitute the ansatz (2.8) into the Riccati equation (2.2) and equate the coefficients of each of the terms \(1/(z-z_{j})\) to zero, we obtain Bethe-like equations for the roots \(\{z_{j},\,j=1,\ldots,N\}\) of the \(N\)-th excited state wavefunction \(\psi_{N}(x)\):
\[z_{j}=\frac{\hbar}{m\omega}\sum_{i\neq j}\frac{1}{z_{j}-z_{i}},\quad\forall j=1,2,\ldots,N. \tag{2.12}\]
This system of polynomial equations is solvable, with the solutions \(z_{j}\) being the roots of Hermite polynomials when the factor \(\hbar/m\omega\) is scaled to \(1\). We have solved this system for \(N\leq 3\) using Mathematica and tabulated the results (see Table 1); the computed roots match the roots of Hermite polynomials. Once we know the roots \(\{x_{j}\}\) for any \(N\), the corresponding energy eigenfunction is constructed (in these scaled units) as
\[\psi_{N}(x)=\exp\left[\frac{i}{\hbar}\int p(x)dx\right]=A\exp\left(\frac{-x^{2}}{2}\right)\prod_{j=1}^{N}(x-x_{j}), \tag{2.13}\]
where \(A\) is determined by normalisation. The flowchart (Table 2) gives a concise summary of the Bethe-like methodology for QHO. Thus we have demonstrated the power of this Bethe-like approach to obtain the complete QHO energy spectrum and the corresponding wave functions. In particular, the branch cut in the asymptotic behaviour of \(p_{o}(z)\) is accounted for by the condensation of simple poles. Further, the well known zero point energy (\(\hbar\omega/2\)) appeared naturally. It is not clear whether this methodology works for arbitrary potential \(V\). As a first step in this direction, we have attempted the hydrogen atom in the following subsection.
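The Bethe equations (2.12) are also easy to check numerically. The following minimal sketch, in the scaled units \(\hbar/m\omega=1\), solves them with a standard root finder and compares against the Hermite roots; the initial guess is an illustrative assumption:

```python
# Solve the Bethe equations (2.12) numerically (units hbar/(m*omega) = 1)
# and compare with the roots of the Hermite polynomial H_N.
import numpy as np
from scipy.optimize import fsolve
from numpy.polynomial.hermite import hermroots

N = 3

def bethe(z):
    # residuals z_j - sum_{i != j} 1/(z_j - z_i)
    return np.array([z[j] - sum(1.0 / (z[j] - z[i])
                                for i in range(N) if i != j)
                     for j in range(N)])

bethe_roots = np.sort(fsolve(bethe, np.linspace(-1.0, 1.0, N)))
herm_roots = np.sort(hermroots([0.0] * N + [1.0]))   # roots of H_N
print(bethe_roots)    # [-1.2247, 0, 1.2247] for N = 3
print(herm_roots)     # the same, confirming the Hermite identification
```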
### Bethe-Like Ansatz for the Hydrogen Atom
For the hydrogen atom, the potential energy is \(V(r)\propto\frac{1}{r}\). Clearly, rotations in the three-dimensional space leaves the Hamiltonian of the hydrogen atom invariant. Even though it is a three-dimensional system, we can view the hydrogen atom as an effective one dimensional system in radial coordinate \(r\). By rewriting the radial part \(R_{n}^{l}(r)\) of the wavefunction \(\psi_{n,l,m}(r,\theta,\phi)=R_{n}^{l}(r)Y_{lm}(\theta,\phi)\) as \(u_{n}^{l}(r)=rR_{n}^{l}(r),\) it is easy to check that the radial part of the equation resembles the one-dimensional Schrodinger equation for \(u_{n}^{l}(r)\) with effective potential energy
\[V_{eff}(r)=-\frac{e^{2}}{4\pi\epsilon_{0}r}+\frac{\hbar^{2}l(l+1)}{2mr^{2}}, \tag{2.14}\]
where the quantum number \(l\) refers to the orbital angular momentum. Following the arguments in the previous section, the pseudo-momentum \(p(r)\) is a rational function with simple poles at \(\{r_{j}\}\) for regular functions \(u_{n}^{l}(r)\) having zeros at the points \(\{r_{j}\}\), where \(j=1,2,\ldots N.\) The residues of \(p(r)\) at these poles must be \(\hbar/i\), so that each pole contributes \(2\pi i\times\hbar/i=2\pi\hbar\) to a contour integral. We should keep in mind the following key differences between the harmonic oscillator and the hydrogen atom: (i) The passage from the three-dimensional problem to the effective one-dimensional system
\begin{table}
\begin{tabular}{|c|c|} \hline
Riccati equation & Bethe-like ansatz \\ \hline
\(p^{2}-i\hbar p^{\prime}=2m(E-\frac{1}{2}m\omega^{2}x^{2})\) & \(p=im\omega x+\frac{\hbar}{i}\sum_{j}\frac{1}{x-x_{j}}\) \\ \hline
\multicolumn{2}{|c|}{Matching asymptotes (poles condense to the branch cut)} \\ \hline
\multicolumn{2}{|c|}{\(im\omega x-i[E-\frac{1}{2}\hbar\omega]/\omega x+\mathcal{O}(1/x^{3})=im\omega x+\frac{\hbar}{i}\sum_{j}\frac{1}{x-x_{j}}\)} \\ \hline
Bethe equations & Spectrum \\ \hline
\(z_{j}=\frac{\hbar}{m\omega}\sum_{i\neq j}\frac{1}{z_{j}-z_{i}},\ \forall j=1,\ldots,N\) & \(E_{N}=(N+\frac{1}{2})\hbar\omega,\ \psi_{N}(x)\propto H_{N}(x)\) \\ \hline
\end{tabular}
\end{table}
Table 2: Flowchart for QHO spectrum from Bethe-like approach
should introduce an additional pole at \(r=0\).
(ii) The domain of definition is \(r\geq 0\).
(iii) The asymptotic form of pseudo-momentum \(p(r)\) matches exactly the asymptotic values of the classical momentum \(p_{cl}(r)\).
Hence, for highly excited states, \(p(r)\) is the classical momentum (there is no zero point energy shift). In the classical limit, we observe the poles at \(\{r_{j}\}\) condense to form a square root branch cut of \(p(r)\). At large values of \(r\), \(p\rightarrow-\sqrt{2mE}\) as \(V\to 0\), where the negative branch of the square root is chosen to prevent the wave function \(u(r)\) from blowing up at \(\infty\). We will now focus on the energy and the wave function for the \(s\)-orbitals of hydrogen atom (\(l=0\)) which will give us the insight to generalise the Bethe-like ansatz for \(l\neq 0\).
#### 2.1.1 Spectrum for \(l=0\)
Let us propose an ansatz for \(p(r)\) for the \(s\)-orbitals whose orbital angular momentum \(l=0\) to obtain the energy spectrum and the corresponding wave function \(u(r)\). Incorporating the asymptotic form of \(p(r)\) and its poles at \(\{r_{j}\}\), we propose the following Bethe-like ansatz.
**Proposition 1:**
\[p=-\sqrt{2mE}+\frac{\hbar}{i}\frac{1}{r}+\frac{\hbar}{i}\sum_{j=1}^{N}\frac{1}{r-r_{j}}. \tag{2.15}\]
with \(N+1\) poles including the pole at \(r=0\). Recall this additional pole was not there in the harmonic oscillator. The large \(r\) limit can be expressed as
\[\lim_{r\rightarrow\infty}p(r)=\lim_{r\rightarrow\infty}-\sqrt{2mE\left(1-\frac{b}{rE}\right)}\sim-\sqrt{2mE}+\frac{b\sqrt{m}}{r\sqrt{2E}}+\mathcal{O}\left(\frac{1}{r^{2}}\right), \tag{2.16}\]
where \(b=e^{2}/4\pi\epsilon_{0}\). Since the poles must condense to this branch cut, on doing a contour integration around the set of zeroes \(\{r_{j}\}\), the residues must equate on both sides giving us
\[\frac{b\sqrt{m}}{\sqrt{2E}}=\frac{\hbar}{i}(N+1). \tag{2.17}\]
Here, \(N\) is the number of zeros of the \(s\) orbital wavefunction \(u_{n}^{l=0}(r)\), and one more pole from \(r=0\) for \(R_{n}^{0}(r)\). On rearranging and substituting \(b=e^{2}/4\pi\epsilon_{0}\) we get
\[E_{n}=-\frac{e^{4}m}{32\hbar^{2}\pi^{2}\epsilon_{0}^{2}n^{2}}=-\frac{b}{2a_{0}}\frac{1}{n^{2}}\ \mbox{where}\ n=N+1, \tag{2.18}\]
where \(a_{0}=\hbar^{2}/(mb)\) is the Bohr radius. This matches exactly with the hydrogen atom energy spectrum. Further, (2.15) allows us to fix the roots of the wave function \(u(r)\) by requiring that the coefficients of each of the \(1/(r-r_{i})\) terms add up to zero in the Riccati equation:
\[\sqrt{2mE_{n}}=\frac{\hbar}{i}\frac{1}{r_{i}}+\frac{\hbar}{i}\sum_{\{k\neq i\}=1}^{N}\frac{1}{r_{i}-r_{k}}\ \ \forall i\in 1,2,..N. \tag{2.19}\]
This gives us a set of Bethe-like equations to solve and determine the roots \(\{r_{i}\}\). For \(N=1\), we get \(r_{1}=2a_{0}\). We have tried to work out the roots for \(N=n-1\leq 3\) using Mathematica and presented the results in Table 3 for \(a_{0}=1\). Once we have explicitly found the roots, we can then integrate the ansatz for \(p\) to obtain the wave function \(R_{n}^{l=0}(r)\). We see explicitly that for \(N\) zeros of the wave function \(u(r)\) we get
\[R_{n}^{0}(r)=A\exp(-\sqrt{2m|E_{n}|}\ r/\hbar)L_{n}^{0}(r). \tag{2.20}\]
where \(L_{n}^{0}(r)\) are the Laguerre polynomials. Here the negative branch of the square root is chosen to ensure \(R_{n}(r)\to 0\) as \(r\to\infty\) and \(A\) is the normalisation constant. \(L_{n}^{0}(r)=\prod_{i=1}^{N}(r-r_{i})\) is a polynomial with roots at \(r_{i}\) found from Bethe-like equations. For \(N=0\) and \(N=1\), we get the wave functions
\[R_{1}^{0}(r)=A_{1}\exp(-\frac{r}{a_{0}}),\ \ R_{2}^{0}(r)=A_{2}\exp(-\frac{r}{2a_{0}})(r-2a_{0}). \tag{2.21}\]
Using Mathematica, we deduce the \(N=2\) wave function for \(a_{0}=1\) as
\[R_{3}^{0}(r)=A_{3}\exp(-\frac{r}{3})(r-3/2(3-\sqrt{3}))(r-3/2(3+\sqrt{3})). \tag{2.22}\]
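These tabulated \(n=3\) roots can be cross-checked directly: in units \(a_{0}=\hbar=1\), (2.18) gives \(\sqrt{2m|E_{n}|}/\hbar=1/n\), so, up to the branch choice of the square root in (2.19), each root must satisfy \(1/r_{i}+\sum_{k\neq i}1/(r_{i}-r_{k})=1/n\). A minimal numerical sketch:

```python
# Check the n = 3, l = 0 roots of Table 3 against (2.19) in magnitude
# (units a_0 = hbar = 1, where sqrt(2m|E_n|)/hbar = 1/n from (2.18)).
import numpy as np

n = 3
r = np.array([1.5 * (3 - np.sqrt(3)), 1.5 * (3 + np.sqrt(3))])

for i in range(len(r)):
    lhs = 1.0 / r[i] + sum(1.0 / (r[i] - r[k])
                           for k in range(len(r)) if k != i)
    print(lhs)                    # 0.3333... = 1/n for both roots

print(1.0 / r[0] + 1.0 / r[1])    # 2/3, the constraint listed in Table 3
```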
We will generalise the ansatz for \(p(r)\) for arbitrary \(l\) in the following subsection.
#### 2.1.2 Spectrum for \(l\neq 0\)
**Proposition 2:** Generalising _Proposition 1_ in (2.15), for any \(l\), the Bethe-like ansatz for \(p(r)\) is
\[p=-\sqrt{2mE}+\frac{\hbar}{i}\left[\frac{l+1}{r}+\sum_{j}\frac{1}{r-r_{j}}\right]. \tag{2.23}\]
Such an ansatz will take care of the additional multiplicity of the root at \(r=0\) for the wave function \(R_{n}^{l}(r)\). Further, the \(l(l+1)/r^{2}\) term in the effective potential will be accounted for by the modified ansatz.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
\(n=N+1\) & Equations & Solutions \\ \hline
1 & No equations & No roots \\ \hline
2 & \(r_{1}=2\) & \(r_{1}=2\) \\ \hline
3 & \(r_{1}^{2}-r_{1}r_{2}-6r_{1}+3r_{2}=0\) & \\
 & \(r_{2}^{2}-r_{1}r_{2}-6r_{2}+3r_{1}=0\) & \(r_{1}=3/2(3-\sqrt{3})\) \\
 & \(1/r_{1}+1/r_{2}=2/3\) & \(r_{2}=3/2(3+\sqrt{3})\) \\ \hline
4 & \(r_{1}(r_{1}-r_{2})(r_{1}-r_{3})=4(3r_{1}^{2}-2r_{1}r_{2}-2r_{1}r_{3}+r_{2}r_{3})\) & \(r_{1}=1.871\) \\
 & \(r_{2}(r_{2}-r_{1})(r_{2}-r_{3})=4(3r_{2}^{2}-2r_{2}r_{1}-2r_{2}r_{3}+r_{1}r_{3})\) & \(r_{2}=6.618\) \\
 & \(r_{3}(r_{3}-r_{1})(r_{3}-r_{2})=4(3r_{3}^{2}-2r_{3}r_{1}-2r_{3}r_{2}+r_{1}r_{2})\) & \(r_{3}=15.517\) \\
 & \(1/r_{1}+1/r_{2}+1/r_{3}=3/4\) & \\ \hline
\end{tabular}
\end{table}
Table 3: Bethe equations for hydrogen atom roots for \(l=0\) and \(a_{0}=1\)
The calculation of the energy spectrum \(E_{n}\) for the modified ansatz is almost the same:
\[E_{n}=-\frac{e^{4}m}{32\hbar^{2}\pi^{2}\epsilon_{0}^{2}n^{2}}=-\frac{b}{2a_{0}}\frac{1}{n^{2}}\ \mbox{where}\ n=N+l+1. \tag{2.24}\]
Note that \(n\) counts the total number of roots of the wave function \(R_{n}^{l}(r)\) and \(l+1\) counts the degeneracy of the root at the origin \(r=0\). Interestingly, since \(N\geq 0\), we observe the bound on \(l\) to be:
\[l\leq n-1. \tag{2.25}\]
Substituting the modified ansatz in the Riccati equation and equating the coefficients of \(1/(r-r_{j})\) to zero, we get the following set of Bethe equations for the roots:
\[\sum_{j}\frac{1}{r_{j}} = \frac{1}{a_{0}}\left(\frac{1}{l+1}-\frac{1}{N+l+1}\right), \tag{2.26}\] \[\frac{\hbar}{i}\sqrt{2mE_{n}} = \hbar^{2}\sum_{i\neq j}\frac{1}{r_{j}-r_{i}}+\hbar^{2}\frac{(l+1)}{r_{j}},\ j=1,2,\ldots,N. \tag{2.27}\]
Solving these equations for every \(N\) will give the solutions for the roots leading us to write the associated Laguerre polynomials:
\[R_{n}^{l}(r)\propto L_{n}^{l}(r). \tag{2.28}\]
From our **proposal** of reproducing the hydrogen atom spectrum, it is tempting to speculate whether the spectrum for an arbitrary potential \(V(x)=\sum_{k}a_{k}x^{\pm k}\) can be elegantly obtained. Unfortunately, the well behaved nature of the wave function, requiring \(p(z)\) to have only simple poles, is inconsistent with the asymptotic expansion of the classical momentum \(p_{cl}(x)\):
\[\lim_{x\rightarrow\infty}\sqrt{2m(E-x^{n})} \sim i\sqrt{2mx^{n}}\left(1-\frac{E}{2x^{n}}+\mathcal{O}\left(\frac{1}{x^{2n}}\right)\right),\quad\mbox{for}\quad n>0, \tag{2.29}\] \[\lim_{x\rightarrow\infty}\sqrt{2m(E-x^{n})} \sim \sqrt{2mE}\left(1-\frac{x^{n}}{2E}+\mathcal{O}(x^{2n})\right),\quad\mbox{for}\quad n<0. \tag{2.30}\]
It appears that the Bethe-like ansatz requires the wave function to factorise into two parts:
\[\psi(x)=f(x)g(x)\ ;\ p(x)=\frac{\hbar}{i}(f^{\prime}/f+g^{\prime}/g). \tag{2.31}\]
Here \(f(x)\) governs the asymptotic behaviour of \(\psi(x)\) in the limit \(|x|\rightarrow\infty\) and \(g(x)\) is the polynomial that encodes the roots of \(\psi(x)\). In the Bethe-like ansatz, we assumed that the asymptotic behaviour of the wave function is governed only by the leading order asymptotic behaviour, which looks like \(\exp[-x^{2}/2]\) for the QHO and \(\exp[-r/na_{0}]\) for the hydrogen atom. This is the most trivial possible choice of the asymptotic behaviour of the function. Such a simple asymptote is not available for other potentials: there may be more contributions to the asymptote from non-perturbative corrections. Hence, we will have to go beyond the Bethe-like ansatz to tackle the spectral solution for higher degree polynomial potentials.
## 3 WKB Method
In the conventional WKB (Wentzel-Kramers-Brillouin) approximation, we are familiar with the Bohr-quantization condition
\[\Pi_{\gamma,0}=\oint_{\gamma}p_{cl}(x)dx=2\pi\hbar N, \tag{3.1}\]
which matches well for large \(N\) excited states for any potential \(V(x)\). \(\Pi_{\gamma,0}\) is sometimes referred to as the classical WKB period, evaluated for a contour \(\gamma\) around the two turning points \(x_{\pm}\) where \(V(x_{\pm})=E\). For QHO, \(x_{\pm}=\pm\sqrt{2E/m\omega^{2}}\) and the curve \(\gamma\) is indicated in Figure 1.
In order to find perturbative corrections to the spectrum, we expand the pseudo-momentum \(p(x)\) as power series in \(\hbar\)
\[p(x)=\sum_{n=0}^{\infty}p_{n}(x)\hbar^{n}, \tag{3.2}\]
with \(p_{o}=p_{cl}=\pm\sqrt{2m(E-V(x))}\). Plugging it in Riccati equation, we require \(p_{n}\)'s to obey
\[p_{n}=\frac{1}{2p_{o}}\left(ip_{n-1}^{\prime}-\sum_{i=1}^{n-1}p_{i}p_{n-i}\right). \tag{3.3}\]
Incorporating these corrections in powers of \(\hbar\) in
\[\Pi_{\gamma}(\hbar)=\oint_{\gamma}p(x)dx=\oint_{\gamma}\sum_{n}p_{n}(x)\hbar^{n}dx=\sum_{n=0}^{\infty}\Pi_{\gamma,n}\hbar^{n}, \tag{3.4}\]
and then using the conventional WKB quantization will definitely improve the estimate of the energy spectrum. These \(\Pi_{\gamma}(\hbar)\) are known as _quantum periods_, whose classical limit gives \(\Pi_{\gamma,0}\). For the simplest case of QHO, we get corrections from the first order term \(\Pi_{\gamma,1}\), leading to the exact spectrum. Using Mathematica, we verified that the \(\Pi_{\gamma,n}\)'s for \(n=2,3,4\) are indeed zero. Hence the series (3.2) converges for QHO. However, it is not clear whether
Figure 1: WKB loops for potential \(V=3x^{2}\)
the series (3.2) converges for other potentials. In fact, the numerical analysis for monic potentials \(V(x)=x^{2M}\)[18] showed
\[\Pi_{\gamma,n}=E^{\frac{1}{2M}+\frac{1}{2}-n(1+\frac{2}{2M})}\frac{2\sqrt{\pi}\Gamma(1+\frac{1-2n}{2M})P_{n}(2M)(-1)^{n}}{\Gamma(\frac{3-2n}{2}+\frac{1-2n}{2M})(2n+2)!2^{n}}. \tag{3.5}\]
Here, \(P_{n}\) is a polynomial in \(n\). For \(M=1\), corresponding to QHO, we see that \(\Pi_{\gamma,n>1}=0\), as \(\Gamma\left(\frac{3-2n}{2}+\frac{1-2n}{2}\right)\) is infinite for \(n\geq 2\), confirming that only the first order correction is non-zero. However, for the pure quartic oscillator, the periods \(\Pi_{\gamma,n}\) grow without bound for large \(n\). The leading order term in \(P_{n}(2M)\) is given by \((2n+1)(n+1)(n-1)!B_{2n}(2M)^{2n-1}\), where \(B_{2n}\) is the \(2n^{th}\) Bernoulli number, so \(P_{n}(2M)\) grows like \((2n)!\). This indicates that the quantum period \(\Pi_{\gamma}\) can be a divergent series for some potentials.
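The recursion (3.3) is straightforward to implement symbolically. The sketch below, a minimal sympy check assuming \(m=\omega=1\), builds \(p_{0},\ldots,p_{4}\) for the QHO and confirms that the truncated series satisfies the Riccati equation order by order in \(\hbar\):

```python
# Symbolic check of the WKB recursion (3.3) for the QHO (m = omega = 1).
import sympy as sp

x, E, hb = sp.symbols('x E hbar', positive=True)
V = x**2 / 2
p = [sp.sqrt(2 * (E - V))]                 # p_0, the classical momentum
for n in range(1, 5):
    s = sum(p[i] * p[n - i] for i in range(1, n))
    p.append(sp.simplify((sp.I * sp.diff(p[n - 1], x) - s) / (2 * p[0])))

ptot = sum(p[n] * hb**n for n in range(5))
riccati = sp.expand(ptot**2 - sp.I * hb * sp.diff(ptot, x) - 2 * (E - V))
for k in range(5):                         # orders hbar^0 ... hbar^4 vanish
    print(k, sp.simplify(riccati.coeff(hb, k)))
```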
In any quantum system, the space of classical configurations is given by the extrema of the potential \(V(x)\) (also called saddle points). QHO has one extremum whereas the cubic potential \(V(x)=3x^{2}-x^{3}\) illustrated in Figure 2 has two extrema. Technically, we have to investigate perturbative series around each of these classical configurations. They will give different quantum periods \(\Pi_{\gamma_{1}},\Pi_{\gamma_{2}},\ldots\) For instance, there are two periods in Figure 2 corresponding to the curves \(\gamma_{1}\)(classically allowed region) and \(\gamma_{2}\)(classically forbidden region). Hence, the divergent series \(\Pi_{\gamma}(3.5)\) implicitly signals the presence of other perturbative series in the quantum system. This is the theme of resurgent quantum mechanics [9]. In order to capture such information, we need the tools of 'Exact WKB methods' advocated by Voros for higher degree polynomials [6].
We will now briefly present the salient features of the 'Exact WKB method'.
### Exact WKB Method
Suppose we split \(p(x)=P(x)+Q(x)\) in (3.2) into even and odd powers:
\[P(x)=\sum_{n=0}^{\infty}p_{2n}\hbar^{2n}\;;\;Q(x)=\sum_{n=0}^{\infty}p_{2n+1} \hbar^{2n+1}. \tag{3.6}\]
Figure 2: WKB loops for potential \(V=3x^{2}-x^{3}\)
Solving the Riccati equation, we can see odd terms are not independent:
\[Q(x)=\frac{i\hbar}{2}\frac{d\log(P)}{dx}. \tag{3.7}\]
Hence the general wave function in the classically allowed region can be written as
\[\psi(x)=\frac{1}{\sqrt{P(x)}}\left(Ae^{\frac{i}{\hbar}\int P(x)dx}+Be^{-\frac{i}{\hbar}\int P(x)dx}\right) \tag{3.8}\]
where \(A,B\) are normalisation constants. We need to change \(P(x)\) to \(\tilde{P}(x)=iP(x)\) when we move to the classically forbidden region.
In the exact-WKB approach, the divergent power series will be converted to a series with finite radius of convergence by a Borel transform. For the perturbative series discussed for monic potentials, Borel transform is as follows:
\[\Pi_{\gamma}(\hbar)\rightarrow\hat{\Pi}_{\gamma}(\xi)=\sum_{n}\hat{\Pi}_{\gamma,2n}\xi^{2n}=\sum_{n}\frac{\Pi_{\gamma,2n}}{(2n)!}\xi^{2n}, \tag{3.9}\]
which is analytic near origin in the complex plane \(\xi\). Then, the \(\hat{\Pi}_{\gamma}(\xi)\) is promoted to a function through a procedure called 'Borel resummation' :
\[B_{\phi}[\Pi_{\gamma}](\hbar)=\frac{1}{\hbar}\int_{0}^{e^{i\phi}\infty}e^{- \xi/\hbar}\hat{\Pi}_{\gamma}(\xi)d\xi\,\ \ \hbar\in\mathbb{R}_{>0}. \tag{3.10}\]
The above integral denotes a Laplace transform of \(\hat{\Pi}_{\gamma}(\xi)\) along a direction, defined by angle \(\phi\), in the complex plane \(\xi\). If the integral converges for small \(\hbar\), then the corresponding quantum period \(\Pi_{\gamma}(\hbar)\) is said to be Borel summable. Suppose \(\hat{\Pi}_{\gamma}(\xi)\) has singularities on the complex plane \(\xi\), then the Borel summability cannot be performed on the rays containing such singularities. For instance, a simple pole at \(\xi_{0}\) whose \(\arg\xi_{0}=\chi\) will imply a discontinuity in the Borel resummation. That is,
\[\lim_{\delta\to 0}\,{\cal B}_{\chi+\delta}[\Pi_{\gamma}](\hbar)\neq\ \lim_{\delta\to 0}\,{\cal B}_{\chi-\delta}[\Pi_{\gamma}]( \hbar).\]
Hence, we define median Borel resummation \({\cal B}_{\chi}^{med}[\Pi_{\gamma}](\hbar)\), the lateral Borel resummation \({\cal B}_{\chi\pm}[\Pi_{\gamma}](\hbar)\) and the Stokes discontinuity \({\rm disc}_{\chi}[\Pi_{\gamma}](\hbar)\) to characterise and overcome such obstructions to Borel summability:
\[{\cal B}_{\chi}^{med}[\Pi_{\gamma}](\hbar) = \frac{1}{2}\lim_{\delta\to 0}\left({\cal B}_{\chi+\delta}[\Pi_{\gamma}](\hbar)+{\cal B}_{\chi-\delta}[\Pi_{\gamma}](\hbar)\right) \tag{3.11}\] \[{\cal B}_{\chi_{\pm}}[\Pi_{\gamma}](\hbar) = \lim_{\delta\to 0}\,{\cal B}_{\chi\pm\delta}[\Pi_{\gamma}](\hbar)\] \[{\rm disc}_{\chi}[\Pi_{\gamma}](\hbar) = \lim_{\delta\to 0}\,\left({\cal B}_{\chi+\delta}[\Pi_{\gamma}](\hbar)-{\cal B}_{\chi-\delta}[\Pi_{\gamma}](\hbar)\right).\]
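As a toy illustration of (3.9)-(3.10) (with all powers rather than only the even ones), the factorially divergent Euler series \(\sum_{n}(-1)^{n}n!\,g^{n}\) has the Borel transform \(1/(1+g\xi)\), whose Laplace integral along \(\phi=0\) is finite. The example below is standalone and not tied to any particular quantum period:

```python
# Euler's divergent series vs its Borel resummation (toy example).
import math
import numpy as np
from scipy.integrate import quad

g = 0.2
partial = np.cumsum([(-1)**n * math.factorial(n) * g**n for n in range(12)])
borel, _ = quad(lambda t: np.exp(-t) / (1 + g * t), 0, np.inf)
print(partial[-4:])   # partial sums oscillate and start to diverge
print(borel)          # ~0.85211, the finite Borel-resummed value
```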
The knowledge of all the Stokes discontinuities as well as the classical limit \(\Pi_{\gamma,0}\) of the quantum periods are required to reconstruct the quantum periods (as solutions to the Riemann-Hilbert problem). The _Delabaere-Pham formula_[9, 10] encodes the structure of discontinuities of any quantum period in terms of the other quantum periods:
\[{\cal B}_{\chi-}({\cal V}_{\gamma_{i}})={\cal B}_{\chi+}({\cal V}_{\gamma_{i} })\prod_{j\neq i}(1+{\cal V}_{\gamma_{j}}^{-1})^{-(\gamma_{i},\gamma_{j})}. \tag{3.12}\]
Here,
\[{\cal V}_{\gamma_{i}}=\exp{(\frac{i\Pi_{\gamma_{i}}}{\hbar})}\]
is called the _Voros symbol_ and \((\gamma_{i},\gamma_{j})\) is the intersection number between the curves \(\gamma_{i},\gamma_{j}\). Once we have the solutions for all the Voros symbols, the _exact WKB connection formula_ (also known as the Voros-Silverstone connection formulae) leads to an _exact quantization condition_ (EQC) as a single functional relation between the \({\cal V}_{\gamma_{i}}\)'s:
\[f({\cal V}_{\gamma_{1}},{\cal V}_{\gamma_{2}},\dots)=0. \tag{3.13}\]
For example, using the Voros-Silverstone connection formulae for the cubic potential \(V(x)=3x^{2}-x^{3}\), the following EQC relating the two quantum periods (as drawn in Figure 2) can be deduced [8; 11]:
\[2\cos{\left(\frac{1}{2\hbar}{\cal B}_{\chi\pm}(\Pi_{\gamma_{1}})\right)}+\exp {\left(-\frac{i}{\hbar}\Pi_{\gamma_{2}}\right)}=0 \tag{3.14}\]
Recall that the \(\Pi_{\gamma_{2}}\) is associated with the classically forbidden region. Such a relation gives the values of energy \(E_{n}\). Sometimes it is convenient to fix the value of energy and compute the values of \(\hbar_{n}(E)\) for which the EQC(3.13) holds. These values of \(\hbar_{n}(E)\) are called _Voros spectrum_.
As mentioned earlier, the solution to the Riemann-Hilbert problem (quantum periods) can be obtained from a set of '_Thermodynamic Bethe Ansatz_'(TBA) equations. We will briefly discuss TBA method in the following section.
## 4 TBA system
For monic potentials, including the quartic oscillator, \(V(x)=x^{2M}\), there are \(2M\) turning points located at \(\{\omega^{i}E^{\frac{1}{2M}}\}\) in the complex plane, where \(\omega\) is a \(2M\)-th root of unity and \(E\) is the energy. Only two of the turning points are on the real axis. Similarly, for a general polynomial potential of degree \(d\), \(V(x)=\sum_{n=1}^{d}a_{n}x^{n}\), there will be \(d\) turning points. Depending on the choice of the \(a_{n}\) (known as moduli), some of the turning points could be real or complex. We can make a suitable choice of the moduli so that all the turning points \(x_{1}<x_{2}<\dots<x_{d}\) are on the real axis. This choice is sometimes referred to as the '_minimal chamber_' in the literature. Such a minimal chamber allows \(d-1\) cycles \(\{\gamma_{i}\}\). In fact, the cubic potential \(V(x)=3x^{2}-x^{3}\) shown in Figure 2 allows two cycles \(\gamma_{1},\gamma_{2}\) in the minimal chamber.
In such a minimal chamber, the quantum periods \(\Pi_{\gamma_{2i}}\), corresponding to classically forbidden regions, are Borel summable along the positive real axis of \(\hbar\), whereas the \(\Pi_{\gamma_{2i-1}}\), corresponding to classically allowed regions, are not Borel summable. Hence the discontinuity formula (3.12) along the real line (\(\chi=0\)) is
\[{\rm disc}_{0}\Pi_{2i-1}=-i\hbar\log(1+{\cal V}_{2i-2}^{-1})-i\hbar\log(1+{ \cal V}_{2i}^{-1}). \tag{4.1}\]
Similarly, there is a discontinuity at \(\chi=\pi/2\) for the quantum periods \(\Pi_{2i}\) whereas \(\Pi_{\gamma_{2i-1}}\) are Borel summable. These two situations are neatly incorporated by defining \(\epsilon_{a}\) functions
as:
\[-i\epsilon_{2i-1}(\theta+i\pi/2\pm i\delta) =\frac{1}{\hbar}\mathcal{B}_{0\pm}(\Pi_{\gamma_{2i-1}})(\hbar) \tag{4.2}\] \[-i\epsilon_{2i}(\theta) =\frac{1}{\hbar}\mathcal{B}(\Pi_{\gamma_{2i}}), \tag{4.3}\]
where \(e^{\theta}=1/\hbar\)[8]. Clearly, these \(\epsilon_{a}\) functions have a discontinuity at \(\chi=\pi/2\) for both even and odd \(a\). Hence, the Delabaere-Pham discontinuities(3.12) can be compactly written as:
\[\text{disc}_{\pi/2}\epsilon_{a}(\theta)=L_{a-1}(\theta)+L_{a+1}(\theta)\, \text{where }L_{a}=\log(1+e^{-\epsilon_{a}(\theta)}). \tag{4.4}\]
Further, the asymptotic series of the functions \(\epsilon_{a}(\theta)\) will be
\[\epsilon_{a}(\theta)=m_{a}e^{\theta}+\mathcal{O}(e^{-\theta}), \tag{4.5}\]
where \(m_{a}\)'s, referred to as masses in two-dimensional integrable theories, are the classical periods:
\[m_{a}=\Pi_{\gamma_{a},0}=\oint_{\gamma_{a}}P(x)dx=2\int_{x_{a}}^{x_{a+1}}P(x) dx\ \text{where }\gamma_{a}=[x_{a},x_{a+1}]. \tag{4.6}\]
Remember to replace \(P(x)\to iP(x)\) whenever the cycle \(\gamma_{a}\) with even \(a=2i\) lies in a classically forbidden region, so that the \(m_{a}\)'s are real and positive.
The solution to the Riemann Hilbert problem for the functions \(\epsilon_{a}(\theta)\) obeying (4.4) and (4.6) can be obtained using the following system of TBA integral equations in the minimal chamber:
\[\epsilon_{a}(\theta)=m_{a}e^{\theta}-\int_{\mathbb{R}}\frac{L_{a-1}(\theta^{\prime})}{\cosh(\theta-\theta^{\prime})}\frac{d\theta^{\prime}}{2\pi}-\int_{\mathbb{R}}\frac{L_{a+1}(\theta^{\prime})}{\cosh(\theta-\theta^{\prime})}\frac{d\theta^{\prime}}{2\pi},\quad a=1,2,\ldots,d-1. \tag{4.7}\]
As \(P(x)\) is a series in even powers of \(\hbar\), we have to take both \(\hbar\) positive as well as negative. This in turn adds another similar discontinuity equation, and combining all of these discontinuities transforms the usual propagator into the \(\sinh\) propagator. Finally, the rotation by \(\pi/2\) gives us the \(\cosh(\theta-\theta^{\prime})\) in the integral equation\({}^{1}\). For other choices of moduli \(\{a_{n}\}\) in the potential \(V(x)=\sum_{n}a_{n}x^{n}\), some turning points can be on the complex plane. This leads to additional periods in the complex plane. We do not review calculations involving complex turning points here. This is discussed in great detail in [11].
Footnote 1: We thank Katsushi Ito for clarifying this point.
## 5 TBA for \(|x|\)
We have seen in the previous section how the TBA system can be used to compute quantum periods for the polynomial potentials which are smooth [7; 8] and deduce the Voros spectrum from EQC. Interestingly, the TBA equations have been extended to the potentials with regular singularities like \(1/x^{2}\)[13]. However, the EQC for such singular potentials has not been attempted. Conversely for the \(|x|\) potential, we know the true spectrum but not the TBA equations incorporating the derivative singularity at the origin.
Note that the potentials of the form \(|x|^{n}\) with \(n\) odd positive integer were considered in [5] using exact WKB method and spectral determinants but without TBA equations. A TBA equation was also derived in [19] albeit from very different considerations of \(\mathcal{N}=2\) supersymmetric field theories. In fact, this was argued to be the spectral determinant of the \(|x|\) potential which was expanded upon in [16]. These developments do suggest that there could be a TBA approach for the \(|x|\) potential. In the following subsection, we discuss the naive TBA approach for the \(|x|\) potential and show that the energy spectrum do not match the true spectrum for low lying states.
### Naive TBA approach for \(|x|\)
Let us blindly apply the TBA tools, applicable for polynomial potentials, to deduce the WKB periods for the \(|x|\) potential. The Schrodinger equation is given by
\[\hat{H}\psi=-\frac{\partial^{2}}{\partial x^{2}}\psi+|x|\psi=E\psi. \tag{5.1}\]
In this case, there are two turning points, corresponding to \(x=+E\) and \(x=-E\). As there is only one nontrivial cycle \(\gamma\) between them as seen in Figure 3, the solution to the TBA equation for this period is the mass \(m\). As there is only one cycle, we believe that the EQC for \(|x|\) will be similar to the QHO case:
\[\frac{\Pi_{\gamma}}{\hbar}=\frac{m}{\hbar}=2\pi\left(n+\frac{1}{2}\right), \ \ n=0,1,2\ldots \tag{5.2}\]
where the \(0^{th}\) order mass \(m\) is given by
\[m=\oint_{\gamma}\sqrt{E-|x|}\ dx=2\int_{-E}^{E}\sqrt{E-|x|}\ dx=\frac{8}{3}E^{\frac{3}{2}}. \tag{5.3}\]
On solving the Bohr-Sommerfeld quantisation condition (5.2) we obtain the spectrum for \(\hbar=1\), \(2m=1\):
\[E_{n}=\left(\frac{3\pi}{4}\left(n+\frac{1}{2}\right)\right)^{(2/3)}. \tag{5.4}\]
Table 4 shows that the true spectrum [15] matches well with the conventional WKB results for \(n\geq 5\). Such a mismatch at low \(n\) implies that the naive analogy of EQC between \(|x|\) and QHO is not correct.
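Both columns of Table 4 are easy to regenerate: the true levels are (minus) the zeros of \(\mathrm{Ai}'\) (even states) and \(\mathrm{Ai}\) (odd states), while the naive values follow from (5.4). A short sketch:

```python
# Regenerate Table 4: Airy-zero spectrum vs the naive result (5.4),
# in the units hbar = 1, 2m = 1 used in the text.
import numpy as np
from scipy.special import ai_zeros

a, ap, _, _ = ai_zeros(5)                    # zeros of Ai and of Ai'
true = np.sort(np.concatenate([-ap, -a]))    # interleaved even/odd levels
for n in range(10):
    naive = (3 * np.pi / 4 * (n + 0.5))**(2.0 / 3.0)
    print(n, f"{true[n]:.5f}", f"{naive:.5f}")
```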
This exercise clearly indicates that we must introduce some modifications to incorporate the derivative singularity at \(x=0\). Taking clue from our modified Bethe-like ansatz, we will show that the generalisation of the TBA equations to \(|x|\) is indeed possible by reinterpreting the problem on a half real line similar to [5]. We will now discuss the subtle features in the following subsection.
### Effective radial problem for \(|x|\)
The most natural way to consider the \(|x|\) problem is to treat \(x\) as radial coordinate \(r\). The Schrodinger equation in radial coordinate is
\[\hat{H}\psi=\left(\frac{-\hbar^{2}}{2m}\frac{\partial^{2}}{\partial r^{2}}+r \right)\psi=E\psi. \tag{5.5}\]
Recall that the radial part of the differential equation describing hydrogen atom contains a centrifugal term \(l(l+1)/r^{2}\) (\(l\) denotes orbital angular momentum). We believe that the derivative singularity can be made to appear as a centrifugal term. This will lead to a correction to the potential in the \(r\) coordinate.
By performing the change of variable \(x\to r\) the derivative discontinuity has seemingly vanished. However, the singularity is still present in the topology of the problem. That is, \(\psi\) is now a function of \(r\in[0,\infty)\). Hence we need to specify a boundary value for \(\psi\) at \(r=0\) instead of the exponential decay as \(x\rightarrow-\infty\).
Note that the parity symmetry of \(|x|\) potential requires that the wave function must be either symmetric or anti-symmetric: \(\psi(x)=\pm\psi(-x)\). In the \(r\)-coordinate system, such a parity symmetry imposes either Dirichlet (\(\psi(r=0)=0\)) or Neumann boundary conditions (\(\psi^{\prime}(r)|_{r=0}=0\)). Consider the wave function near \(r=0\) to be of the following form:
\[\psi(r)\sim r^{\ell}f(r), \tag{5.6}\]
where \(f(r)\) is some function nonzero at the origin whose derivative \(f^{\prime}(r)|_{r=0}\) vanishes. The
\begin{table}
\begin{tabular}{c|c|c} \(n\) & True Spectrum & Naive TBA Spectrum \\ \hline
0 & 1.01879 & 1.1154602372253557 \\
1 & 2.33811 & 2.320250794710102 \\
2 & 3.2482 & 3.2616255199180713 \\
3 & 4.08795 & 4.081810015382323 \\
4 & 4.8201 & 4.826316143499807 \\
5 & 5.52056 & 5.517163872783549 \\
6 & 6.16311 & 6.167128465231806 \\
7 & 6.78311 & 6.784454480834836 \\
8 & 7.3721 & 7.374853108941933 \\
9 & 7.94413 & 7.942486663292496 \\ \end{tabular}
\end{table}
Table 4: Spectrum for \(|x|\) potential \(n=0\) to \(9\).
Figure 3: WKB loops for potential \(V=3|x|\)
allowed boundary value conditions
\[\psi(0)=0,\ \psi^{\prime}(0)\neq 0\ \ ;\ \ \psi(0)\neq 0,\ \psi^{\prime}(0)=0,\]
forces \(\ell\) to be either \(1\) or \(0\), respectively. Therefore, for the \(|x|\) potential in the radial coordinate, we can add the centrifugal term
\[\frac{\hbar^{2}\ell(\ell-1)}{r^{2}}, \tag{5.7}\]
to do the TBA calculations. In other words, the singularity at \(x=0\) is traded for an effective centrifugal term. This corrected potential is a subset of a more general potential with single and double pole [13] for a suitable choice of the parameters.
We will now present the salient features of the TBA system for potential with single and double pole [13]. This sets the stage to numerically compute the quantum periods for the linear potential \(|x|\) by taking suitable limits for the parameters.
### TBA equation for a potential with Single and Double Pole
We will briefly review the Schrodinger type equation with polynomial potentials with simple pole and a centrifugal term [13]:
\[\Big{(}-\hbar^{2}\frac{d^{2}}{dx^{2}}+x^{s+1}+\sum_{a=1}^{s+2}u_{a}x^{s+1-a}+ \hbar^{2}\frac{l(l+1)}{x^{2}}\Big{)}\psi(x)=0. \tag{5.8}\]
Here \(x\geq 0,s\geq 0\), \(l\) is any real number and the \(u_{a}\)'s are parameters. Note that \(\ell=l+1\) in (5.7). For the following choice of parameters:
\[s=0\ ;\ u_{1}=-E, \tag{5.9}\]
the (5.8) reduces to
\[\Big{(}-\hbar^{2}\frac{d^{2}}{dx^{2}}+\frac{x^{2}-Ex+u_{2}}{x}+\hbar^{2}\frac{ l(l+1)}{x^{2}}\Big{)}\psi(x)=0. \tag{5.10}\]
This resembles the equation for the linear potential (5.5), in the limit \(u_{2}\to 0,l\to 0\) or \(\,-1\). We will present the TBA equation for the potential (5.10) as discussed in [13].
Taking \(x\in\mathbb{C}\) (complex domain), the equation (5.10) remains invariant under Symanzik rotation:
\[(x,E,u_{2})\rightarrow(\omega x,\omega E,\omega^{2}u_{2})\,\ \text{where}\ \omega=\exp\frac{2\pi i}{s+3}|_{s=0}. \tag{5.11}\]
From the semi classical behaviour in the limit \(\hbar\to 0\), the turning points \(e_{1},e_{2}\) for
\[E=V(x)=\frac{x^{2}+u_{2}}{x},\]
can be chosen to be in the positive real axis: \(0<e_{1}<e_{2}\) for \(u_{2}\geq 0\). Note that \(e_{1}\to 0\) as \(u_{2}\to 0\). For this potential, there are two cycles as shown in Figure 4: \(\gamma_{1}\) encircling \(e_{1}\) and \(e_{2}\) (classically allowed region) and \(\hat{\gamma}\) encircling the pole at the origin \(0\) and \(e_{1}\) (classically forbidden). The \(l\) dependent centrifugal term (double pole) is responsible for
a non-trivial monodromy of the wave function around the origin. This is very similar to our modified Bethe-like ansatz in section 2.1 for the hydrogen atom potential. In fact, the non-trivial monodromy and the Symanzik rotation symmetry lead to a modification of the TBA equations\({}^{2}\) for the \(\gamma_{1}\) and \(\hat{\gamma}\) cycles:
Footnote 2: Interested readers can see [13] for details
\[\epsilon_{1}(\theta) = m_{1}e^{\theta}-\int_{\mathbb{R}}\frac{\log(1+\omega^{3/2}e^{2 \pi il}e^{-\hat{\epsilon}(\theta^{\prime})})+\log(1+\omega^{3/2}e^{-2\pi il}e^ {-\hat{\epsilon}(\theta^{\prime})})}{\cosh(\theta-\theta^{\prime})}\frac{d \theta^{\prime}}{2\pi}\] \[\hat{\epsilon}(\theta) = \hat{m}e^{\theta}-\int_{\mathbb{R}}\frac{\log(1+e^{-\epsilon_{1} (\theta^{\prime})})}{\cosh(\theta-\theta^{\prime})}\frac{d\theta^{\prime}}{2 \pi}. \tag{5.12}\]
Here \(m_{1},\hat{m}\) are the \(0^{th}\) order WKB periods chosen with orientation so that they are both real and positive:
\[m_{1}=\frac{1}{i}\oint_{\gamma_{1}}\sqrt{\frac{x^{2}-Ex+u_{2}}{x}}dx\ \ ;\ \ \hat{m}=\oint_{\hat{\gamma}}\sqrt{\frac{x^{2}-Ex+u_{2}}{x}}dx. \tag{5.13}\]
As discussed in section 4, the classically allowed period \(\Pi_{\gamma_{1}}\) is not Borel summable along the positive real axis of \(\hbar\). So, the period is resummed by taking the average of the two lateral Borel resummations (3.11), calculated just after and before crossing the discontinuity along the positive real \(\hbar\) axis. For the above TBA equations, the median resummation is [13]
\[\frac{1}{\hbar}\mathcal{B}_{med}(\Pi_{\gamma_{1}})(\hbar)=m_{1}e^{\theta}+P\int_{\mathbb{R}}\frac{\log\left[(1+\omega^{3/2}e^{2\pi il}e^{-\hat{\epsilon}(\theta^{\prime})})(1+\omega^{3/2}e^{-2\pi il}e^{-\hat{\epsilon}(\theta^{\prime})})\right]}{\sinh(\theta-\theta^{\prime})}\frac{d\theta^{\prime}}{2\pi} \tag{5.14}\]
where P is the principal value of the integral which can be computed using the formula:
\[P\int_{\mathbb{R}}\frac{f(\theta^{\prime})}{\sinh(\theta-\theta^{\prime})}d \theta^{\prime}=\lim_{\delta\to 0}\int_{\mathbb{R}}\frac{\sinh(\theta- \theta^{\prime})\cos(\delta)}{\sinh^{2}(\theta-\theta^{\prime})\cos^{2}( \delta)+\cosh^{2}(\theta-\theta^{\prime})\sin^{2}(\delta)}f(\theta^{\prime})d \theta^{\prime} \tag{5.15}\]
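Formula (5.15) lends itself to a direct numerical implementation; for \(f=1\) the integrand is odd about \(\theta'=\theta\), so the principal value must vanish, which gives a convenient sanity check. The grid and \(\delta\) below are illustrative choices:

```python
# Sanity check of (5.15): for f = 1 the kernel is odd, so the PV vanishes.
import numpy as np

def pv_sinh(f, theta, delta=1e-3):
    # regularised kernel of (5.15) on a uniform grid (illustrative choices)
    grid = np.linspace(-30.0, 30.0, 200001)
    u = theta - grid
    num = np.sinh(u) * np.cos(delta)
    den = np.sinh(u)**2 * np.cos(delta)**2 + np.cosh(u)**2 * np.sin(delta)**2
    return np.trapz(num / den * f(grid), grid)

print(pv_sinh(lambda t: np.ones_like(t), theta=0.3))   # ~0 up to grid error
print(pv_sinh(lambda t: 1.0 / np.cosh(t), theta=0.3))  # a finite PV value
```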
Note that the TBA equations are the same for any integer \(l\).
Figure 4: WKB loops for potential \(V=x+u_{2}/x\)
At this stage it is natural to return to the \(|x|\) problem by taking the limit \(l\to 0,u_{2}\to 0\). However, this limit runs into several singularities which need regularisation. Some of these details are discussed in Appendix A. Instead we circumvent this problem by keeping \(u_{2},l\) small but finite in our computation.
#### 5.3.1 Numerical Computation of quantum periods for \(|x|\)
To solve the TBA system, we used the Gaussian Interpolation technique presented in Appendix B of [11] with \(2^{12}\) points randomly distributed around \(\theta=0\) instead of the Fourier transform method used in [13].
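For concreteness, a bare-bones fixed-point iteration of (5.12) on a uniform grid with a trapezoidal kernel looks as follows. This is only a sketch: the parameter values are placeholders, convergence is delicate precisely in the small \(u_{2},l\) regime discussed above, and the production runs used the Gaussian interpolation mentioned above:

```python
# Bare-bones iteration of the TBA system (5.12); placeholder parameters.
import numpy as np

theta = np.linspace(-15.0, 15.0, 2001)
dth = theta[1] - theta[0]
K = 1.0 / np.cosh(theta[:, None] - theta[None, :])    # cosh kernel matrix
m1, mhat, l = 1.0, 0.5, 1e-5                          # placeholder masses
w32 = np.exp(1.5j * 2 * np.pi / 3)                    # omega^{3/2} for s = 0

eps1 = m1 * np.exp(theta)                             # start from asymptotics
epsh = mhat * np.exp(theta)
for _ in range(200):
    Lhat = np.log((1 + w32 * np.exp(+2j * np.pi * l - epsh))
                  * (1 + w32 * np.exp(-2j * np.pi * l - epsh))).real
    L1 = np.log1p(np.exp(-eps1))
    eps1 = m1 * np.exp(theta) - (K @ Lhat) * dth / (2 * np.pi)
    epsh = mhat * np.exp(theta) - (K @ L1) * dth / (2 * np.pi)
# eps1 and epsh now approximate the quantum periods via (4.2)-(4.3)
```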
First, we validated our numerical code on quantum periods by confirming that the Voros spectrum for \(u_{1}=-3,u_{2}=1,l=-2/5\) matched with the Table 2 in [13] obtained using Bohr-Sommerfeld quantization:
\[\frac{1}{\hbar}\mathcal{B}_{med}(\Pi_{\gamma_{1}})(\hbar)\sim 2\pi(k+1/2)\ \ \ \ k\in\mathbb{Z}_{\geq 0}. \tag{5.16}\]
Using our validated numerical code, we obtained the quantum periods for \(E=1\), \(u_{2}=10^{-8}\), \(l=10^{-5}\). These are plotted in Figure 5. A Mathematica file containing this computation is linked on the arXiv page as an ancillary file.
Our next step is to obtain Voros spectrum using these quantum periods. This requires deriving exact quantization conditions (EQC) for potentials which are singular at the origin.
Bohr-Sommerfeld quantization (5.16) does not correctly determine \(\theta_{n}\) for the low lying ground and excited states of the potentials (5.10). We will now address the EQC for potentials with singular behaviour near the origin.
## 6 Exact quantization condition
The key construction to arrive at the TBA equations for the \(|x|\) potential was a shift to radial coordinates \(r=|x|\) with an additional centrifugal term \(\hbar^{2}l(l+1)/r^{2}\) (vanishing for Dirichlet or Neumann boundary conditions at \(r=0\)). Even though we obtained the quantum periods \(\Pi_{\gamma_{1}},\Pi_{\hat{\gamma}}\) by numerically solving the TBA system of equations, we have no idea how to write the Voros-Silverstone connection formula to deduce the exact quantization condition (EQC).
To deal with the pole at the origin we look into our modified Bethe-like ansatz proposal for the hydrogen atom in section 2. There, the pseudo-momentum (2.23) with an additional orbital angular momentum dependent simple pole at the origin reflects the zero of the wave function at the origin. This suggests that in the EQC we may have to introduce an additional correction to the quantum periods enclosing the origin. _This is not needed for polynomial potentials, which have no singular behaviour at the origin._
From our modified Bethe-like ansatz (2.15), as well as the exact solution for the symmetric potential \(V(x)=-\frac{1}{|x|}\)[20; 21], the quantization condition for \(l=0\) is
\[\oint p\ dx=2\pi\hbar(N+1),\quad N=0,1,\ldots \tag{6.1}\]
We put forth the following proposal:
**Proposition 3:** The correction in the EQC to the quantum period due to singular behaviour at the origin for \(l=0\) is
\[\Pi_{0}=2\pi\hbar. \tag{6.2}\]
Technically, this should also be the correction for \(l=-1\), as the potential is unchanged.
In order to work out the EQC, we would like to go back to \(x\in(-\infty,\infty)\), where the wave function decays as \(x\to\pm\infty\). Clearly, the radial TBA system is symmetrically mirrored about the origin. For the \(|x|\) case in the \(x\) domain, there are 2 more loops, as shown in Figure 6, which we will denote by \(\gamma_{1_{-}},\hat{\gamma}_{-}\). From the symmetry \(V(x)=V(-x)\), we expect
\[\Pi_{\gamma_{1}}=\Pi_{\gamma_{1_{-}}},\quad\Pi_{\hat{\gamma}}=\Pi_{\hat{\gamma }_{-}}.\]
However, the TBA system should continue to be exactly the same for \(\Pi_{\gamma_{1}},\Pi_{\hat{\gamma}}\) and their negative analogues. In fact, this equivalence is mainly due to the fact that the _origin is not a turning point_ and hence should not contribute any additional Borel resummation discontinuities in the periods containing it. We see from the period structure that the TBA system for \(|x|\) is analogous to the TBA system for the symmetric double well potential, except for the singular behaviour at the origin. Hence we propose the following for the classically forbidden cycle \(\hat{\Gamma}\) between the two turning points \(\pm e_{1}\):
**Proposition 4:**
The quantum period for the classically forbidden cycle \(\hat{\Gamma}\) between the turning points \(\pm e_{1}\) will be
\[\Pi_{\hat{\Gamma}}=\Pi_{\hat{\gamma}}+\Pi_{\hat{\gamma}_{-}}+i\Pi_{0}, \tag{6.3}\]
with the same \(\Pi_{0}\) (6.2) correction.
As the potential (5.10) near the origin resembles the symmetric hydrogen atom, we are justified in adding the same \(\Pi_{0}\) (6.2).
Thus there are 3 nontrivial periods \(\Pi_{\gamma_{1}},\Pi_{\gamma_{1_{-}}},\Pi_{\hat{\Gamma}}\) for the potential in Figure 6. The period structure resembles the symmetric quartic oscillator in the minimal chamber. So the same Zinn-Justin EQC derived in [11] will be applicable:
\[\cos\left(\mathcal{B}_{med}(\Pi_{\gamma_{1}})/\hbar\right)\pm\frac{1}{\sqrt{1 +\exp[-\frac{i}{\hbar}\Pi_{\hat{\Gamma}}]}}=0. \tag{6.4}\]
Substituting for \(\Pi_{\hat{\Gamma}}\) (6.3) with \(\Pi_{0}\) (6.2) we get:
\[\cos\left(\mathcal{B}_{med}(\Pi_{\gamma_{1}})/\hbar\right)\pm\frac{1}{\sqrt{ 1+\exp[-\frac{i}{\hbar}2\Pi_{\hat{\gamma}}+2\pi]}}=0. \tag{6.5}\]
Now we need to fix the \(\pm\) sign in the EQC.
Recall that for the polynomial potentials, which are regular at the origin, \(\mathcal{B}_{med}(\Pi_{\gamma_{1}})\) obeys the QHO quantization condition when \(\exp[-\frac{i}{\hbar}\Pi_{\hat{\Gamma}}]\to 0\). This fixes the sign as \(+\) in the EQC (6.5) for the quartic polynomial potential.
However, in our case with the singularity at the origin, \(\mathcal{B}_{med}(\Pi_{\gamma_{1}})\) obeys the quantization condition (6.1) when \(\exp[-\frac{i}{\hbar}\Pi_{\hat{\Gamma}}]\to 0\). This requires the minus sign in (6.5). Hence the EQC to determine the Voros spectrum for the potential (5.10) is
\[\cos\left(\mathcal{B}_{med}(\Pi_{\gamma_{1}})/\hbar\right)-\frac{1}{\sqrt{1+ \exp[-\frac{i}{\hbar}2\Pi_{\hat{\gamma}}+2\pi]}}=0. \tag{6.6}\]
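Given \(\mathcal{B}_{med}(\Pi_{\gamma_{1}})/\hbar\) sampled on a \(\theta\) grid (e.g., from the TBA solution), the spectrum follows by bracketing sign changes of the left-hand side of (6.6). A sketch, with the interpolation order and bracketing strategy as our own choices, and with the small \(\Pi_{\hat{\gamma}}\) contribution neglected as discussed below:

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

def voros_spectrum(theta_grid, period_vals, n_levels=8):
    """Roots theta_n of the EQC (6.6); period_vals holds the median-resummed
    B_med(Pi_gamma1)/hbar on theta_grid, and Pi_gammahat is neglected."""
    P = interp1d(theta_grid, period_vals, kind="cubic")
    rhs = 1.0 / np.sqrt(1.0 + np.exp(2 * np.pi))  # forbidden-cycle factor
    eqc = lambda th: np.cos(P(th)) - rhs
    roots = []
    for a, b in zip(theta_grid[:-1], theta_grid[1:]):  # bracket sign changes
        if eqc(a) * eqc(b) < 0:
            roots.append(brentq(eqc, a, b))
            if len(roots) == n_levels:
                break
    return roots
```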
Solving the above EQC for \(E=1,u_{2}=10^{-8},l=10^{-5}\) gives the Voros spectrum. For this choice of parameters, \(\Pi_{\hat{\gamma}}\sim\mathcal{O}(10^{-4})\) (as seen in Figure 5c); hence we have neglected it in determining the Voros spectrum. Our numerical computations are tabulated in Table 5, and they match very well with the true spectrum of \(|x|\)[14; 15]. This validation suggests that our proposed EQC is applicable to general potentials \(V(r)=r+u_{2}/r\). Although we focused on the limit \(l\to 0\) to reproduce the spectrum of the \(|x|\) potential, we can determine the Voros spectrum for general potentials \(V(r)=r+u_{2}/r+\hbar^{2}l(l+1)/r^{2}\) with a non-zero centrifugal term for \(l>0\). It appears from our modified Bethe-like ansatz (2.15) that the correction to _proposition 3_ (6.2) for \(l>0\) is
Footnote 3: Mathematica code of this computation can be found on the arXiv page as an ancillary file.
\[\Pi_{0}^{(l)}=2\pi(l+1)\hbar.\]
Hence, using our numerical code, we can obtain the Voros spectrum by including the above correction to \(\Pi_{\hat{\Gamma}}\) (6.3) in the proposed EQC (6.6). This elaborate exercise shows that the EQC can be constructed for the general potential \(V(r)\) (5.8). In fact, we can choose the parameters in (5.8) so that the zeros and the turning points lie on the positive real line (minimal chamber). Similar to what we did for \(|x|+1/|x|\), we go back to \(x\in(-\infty,\infty)\) to draw a symmetric potential with \(2s+3\) cycles. For all these potentials, we need to modify the EQC of the smooth polynomial potential of degree \(2s+4\). Near the origin, only the simple pole and the centrifugal term will contribute. Hence, \(\Pi_{\hat{\Gamma}}\) near the origin must include the \(\Pi_{0}^{(l)}\) correction in the EQC. For highly excited states (\(\theta\to\infty\)), the EQC should reduce to the Bohr quantization condition (6.1) applicable to singular potentials.
Even though the methodology is straightforward, the computation of quantum periods and Voros spectrum for higher degree polynomials gets tedious.
## 7 Conclusion
In this article, we first reviewed the Bethe-like approach for the quantum harmonic oscillator (QHO) and then proposed a modification for the hydrogen atom pseudo-momentum (2.23).
| \(n\) | Computed \(\theta_{n}\) | True \(\theta_{n}\) |
| --- | --- | --- |
| 0 | 0.02852 | 0.02792 |
| 1 | 1.26107 | 1.27401 |
| 2 | 1.76443 | 1.76715 |
| 3 | 2.11220 | 2.11207 |
| 4 | 2.35925 | 2.35919 |
| 5 | 2.56402 | 2.56272 |
| 6 | 2.72669 | 2.72787 |
| 7 | 2.87390 | 2.87165 |

Table 5: The numerically computed Voros spectrum for the \(|x|\) potential with \(l=10^{-5},u_{2}=10^{-8}\) compared to the true spectrum
This neatly reproduced the energy spectrum and the wave functions. However, the Bethe-like ansatz fails for higher degree polynomial potentials.
We briefly presented the 'Thermodynamic Bethe ansatz' (TBA) method, along with exact quantization conditions (EQC), leading to the spectral solutions for smooth polynomial potentials. Even though the generalisation of the TBA equations to potentials with simple and double poles [13] is known, the EQC had not been derived.
We showed that the \(|x|\) potential can be approximated by a potential with a regular singularity by taking a suitable limit of the parameters. Taking the symmetric form of the potential (5.8) in the coordinate \(x\in(-\infty,\infty)\), there will be \(2s+3\) cycles. In this article, we focused on the potential for \(s=0\).
Taking hints from our proposed Bethe-like ansatz for the hydrogen atom, we put forth _proposition 4_ in section 6, stating an additional correction to the quantum period \(\Pi_{\hat{\Gamma}}\) (6.3). Further, we modified the existing quartic polynomial potential EQC, imposing the Bohr-Sommerfeld quantization condition applicable to potentials singular at the origin. Our proposed EQC (6.6), for the potentials with single and double poles, indeed matched very well with the true spectrum for the \(|x|\) potential with appropriate choices of the parameters. Thus we have validated our EQC proposition for the potential (5.8) when \(s=0\).
Even though we elaborated for the \(s=0\) potentials, our arguments should be generalisable for the potentials with \(s>0\) as well. This requires computation of the quantum periods [13] and the modification of the smooth polynomial potential (of degree \(2s+4\)) EQC [11]. The numerical computations do get cumbersome and we will take it up in future. Such an exercise could help us to validate the \(|x|^{3}\) and other odd power Voros spectrum obtained using spectral determinant approach [5].
## Note added
After uploading this paper to arXiv, a recent paper [22] was kindly brought to our notice by one of its authors. There, an EQC for a potential with a regular singularity is derived, although from a different, Wronskian-based approach.
## Acknowledgements
We would like to thank Katsushi Ito for discussions on TBA propagator. We are grateful to Marcos Marino for sharing his notes which turned out very valuable. PR would like to acknowledge the ICTP's Associate programme where significant progress on this work happened during her visit as senior associate. AA would like to acknowledge IIT Bombay for supporting travel to _Integrability in Gauge and String Theory 2022_ conference where parts of this work were presented. PM is supported by a scholarship from the Inlaks Shivdasani Foundation.
## Appendix A Regularisation of TBA for \(|x|\)
In section 5, we saw the TBA equations (5.12) for a linear potential with single and double poles:
\[V(r)=r-E+\frac{u_{2}}{r}+\frac{\hbar l(l+1)}{r^{2}},\] (A.1)
which takes the form:
\[\epsilon_{1}(\theta)=m_{1}e^{\theta}-\frac{1}{2\pi}\int_{\mathbb{R }}\frac{\log(1+e^{-2\hat{\epsilon}(\theta^{\prime})}-2\cos(2\pi l)e^{-\hat{ \epsilon}(\theta^{\prime})})}{\cosh(\theta-\theta^{\prime})}d\theta^{\prime}\] \[\hat{\epsilon}(\theta)=\hat{m}e^{\theta}-\frac{1}{2\pi}\int_{ \mathbb{R}}\frac{\log(1+e^{-\epsilon_{1}(\theta^{\prime})})}{\cosh(\theta- \theta^{\prime})}d\theta^{\prime}.\] (A.2)
In order to reproduce the Hamiltonian for the pure \(|x|\) potential, we would like to take \(u_{2}=0,l=0\) or \(-1\). However, this leads to highly singular behaviour, and hence we need to carefully take the limit \(u_{2}\to 0\) and \(l\to 0\) or \(l\rightarrow-1\), where the single and double pole seemingly vanish. In this limit,
\[m_{1}\rightarrow\frac{4}{3}E^{3/2},\ \hat{m}\to 0.\]
With \(E=1\), (A.2) reduces to
\[\epsilon_{1}(\theta)=\frac{4}{3}e^{\theta}-\frac{1}{2\pi}\int_{ \mathbb{R}}\frac{\log(1+e^{-2\hat{\epsilon}(\theta^{\prime})}-2e^{-\hat{ \epsilon}(\theta^{\prime})})}{\cosh(\theta-\theta^{\prime})}d\theta^{\prime}\] \[\hat{\epsilon}(\theta)=-\frac{1}{2\pi}\int_{\mathbb{R}}\frac{\log (1+e^{-\epsilon_{1}(\theta^{\prime})})}{\cosh(\theta-\theta^{\prime})}d \theta^{\prime}.\]
However, this TBA system is highly singular: as \(\theta\rightarrow\infty,\hat{\epsilon}\to 0\) and \(\log[(1-e^{-\hat{\epsilon}(\theta^{\prime})})^{2}]\rightarrow-\infty\). Thus additional regularisation is needed before this TBA system can be used. Following [16; 19], the regularisation of the singular term \(\epsilon_{1}\) must be done by subtracting a factor of \(\log(2\pi l)\). We see this as follows:
\[\epsilon_{1}(\theta) \sim-\log(2\pi le^{-A(\theta)}+\mathcal{O}(l^{2}))\] \[\hat{\epsilon}(\theta) \sim-\log(1+2\pi lB(\theta)+\mathcal{O}(l^{2})).\] (A.3)
Then expanding the TBA equations in \(l\), we have
\[A(\theta)-\log(2\pi l) =\frac{4}{3}e^{\theta}-\frac{1}{2\pi}\int_{\mathbb{R}}\frac{d \theta^{\prime}}{\cosh(\theta-\theta^{\prime})}\log(1+B^{2}(\theta^{\prime}))- \log(2\pi l)\] \[B(\theta) =\frac{1}{2\pi}\int_{\mathbb{R}}\frac{d\theta^{\prime}}{\cosh( \theta-\theta^{\prime})}e^{-A(\theta^{\prime})}.\] (A.4)
Thus we see that the divergent term in \(\epsilon_{1}(\theta)\) ends up cancelling on both sides, leaving us with a system of equations that is no longer divergent. Although this regularisation initially appears valid only for the even states with \(l=0\), it carries through in exactly the same way and gives the same equations if we instead expand around \(l=-1\). This is because the only explicit \(l\) dependence in the TBA equations is through the periodic \(\cos(2\pi l)\) term.
Further, this is the same TBA system shown in [19] (up to an overall constant shift of \(\theta\)), which was shown to be solved by the Airy functions:
\[e^{-A(\theta)}=-2\pi\frac{d}{dz}Ai^{2}(z)\] (A.5) \[B(\theta)=-2\pi\frac{d}{dz}Ai(e^{i\pi/3}z)Ai(e^{-i\pi/3}z),\] (A.6)
with \(z=e^{\frac{2}{3}\theta}\). It was argued that \(e^{-A(\theta)}\) must be the correct spectral determinant for the problem, since it vanishes at those values of \(\theta\) where \(Ai(z)=0\) or \(Ai^{\prime}(z)=0\), which correspond to the true spectrum. To get a direct derivation of this from the TBA system, let us examine what happens to the Bohr-Sommerfeld quantisation under the regularisation scheme. The shift from \(\epsilon_{1}\) (A.2) to \(\Pi_{\gamma_{1}}\) involves a rotation \(\theta\rightarrow\tilde{\theta}=\theta+i\pi/2\). This takes the form [13]
\[\frac{1}{\hbar}\mathcal{B}_{med}(\Pi_{\gamma_{1}})=\frac{4}{3}e^{\tilde{ \theta}}+\frac{1}{2\pi}P\int_{\mathbb{R}}\frac{d\theta^{\prime}}{\sinh(\tilde {\theta}-\theta^{\prime})}\log(1+e^{-2\tilde{\epsilon}(\theta^{\prime})}-2 \cos(2\pi l)e^{-\tilde{\epsilon}(\theta^{\prime})})\]
\[\frac{1}{\hbar}\mathcal{B}_{med}(\Pi_{\gamma_{1}})=2\pi(n+1/2),\quad n=0,1,2,\ldots\]
However, under the regularisation scheme (A.3), we have
\[\mathcal{B}_{med}(\Pi_{\gamma_{1}})=\mathcal{B}_{med}(A(\theta))+\log(2\pi l) =2\pi\hbar(n+1/2).\] (A.7)
This implies that the points in the spectrum \(\{\theta_{i}\}\) which solve the Bohr Sommerfeld condition satisfy
\[e^{-\mathcal{B}_{med}(A(\theta))}=\mathcal{O}(2\pi l),\] (A.8)
which must vanish when \(l\to 0\). Hence, \(e^{-\mathcal{B}_{med}(A(\theta))}\) must be the spectral determinant for the problem.
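As a cross-check, the closed-form solution (A.5)-(A.6) is easy to evaluate with SciPy's Airy functions. The sketch below leaves the overall constant shift of \(\theta\) noted above to the caller; the chain-rule phases are written out explicitly.

```python
import numpy as np
from scipy.special import airy

def airy_solution(theta):
    """Evaluate e^{-A(theta)} and B(theta) of (A.5)-(A.6), with z = e^{2 theta / 3}."""
    z = np.exp(2.0 * theta / 3.0)
    ai, aip, _, _ = airy(z)
    exp_mA = -4.0 * np.pi * ai * aip  # -2 pi d/dz [Ai(z)^2] = -4 pi Ai(z) Ai'(z)
    ph = np.exp(1j * np.pi / 3.0)
    ai_p, aip_p, _, _ = airy(ph * z)
    ai_m, aip_m, _, _ = airy(np.conj(ph) * z)
    # d/dz [Ai(ph z) Ai(ph* z)] = ph Ai'(ph z) Ai(ph* z) + ph* Ai(ph z) Ai'(ph* z)
    B = -2.0 * np.pi * (ph * aip_p * ai_m + np.conj(ph) * ai_p * aip_m)
    return exp_mA, B.real  # B is real for real z by Schwarz reflection
```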
|
2309.07749 | OmnimatteRF: Robust Omnimatte with 3D Background Modeling | Video matting has broad applications, from adding interesting effects to
casually captured movies to assisting video production professionals. Matting
with associated effects such as shadows and reflections has also attracted
increasing research activity, and methods like Omnimatte have been proposed to
separate dynamic foreground objects of interest into their own layers. However,
prior works represent video backgrounds as 2D image layers, limiting their
capacity to express more complicated scenes, thus hindering application to
real-world videos. In this paper, we propose a novel video matting method,
OmnimatteRF, that combines dynamic 2D foreground layers and a 3D background
model. The 2D layers preserve the details of the subjects, while the 3D
background robustly reconstructs scenes in real-world videos. Extensive
experiments demonstrate that our method reconstructs scenes with better quality
on various videos. | Geng Lin, Chen Gao, Jia-Bin Huang, Changil Kim, Yipeng Wang, Matthias Zwicker, Ayush Saraf | 2023-09-14T14:36:22Z | http://arxiv.org/abs/2309.07749v1 | # OmnimateRF: Robust Omnimate with 3D Background Modeling
###### Abstract
Video matting has broad applications, from adding interesting effects to casually captured movies to assisting video production professionals. Matting with associated effects such as shadows and reflections has also attracted increasing research activity, and methods like Omnimatte have been proposed to separate dynamic foreground objects of interest into their own layers. However, prior works represent video backgrounds as 2D image layers, limiting their capacity to express more complicated scenes, thus hindering application to real-world videos. In this paper, we propose a novel video matting method, OmnimatteRF, that combines dynamic 2D foreground layers and a 3D background model. The 2D layers preserve the details of the subjects, while the 3D background robustly reconstructs scenes in real-world videos. Extensive experiments demonstrate that our method reconstructs scenes with better quality on various videos.
## 1 Introduction
Video matting is the problem of separating a video into multiple layers with associated alpha mattes such that the layers are composited back to the original video. It has a wide variety of applications in video editing as it allows for substituting layers or processing them individually before compositing back, and thus has been studied well over decades. In typical applications like rotoscoping in video production and background blurring in online meetings, the goal is to obtain the masks containing only the object of interest. In many cases, however, it is often preferred to be able to create video mattes that include not only the object of interest but also its associated effects, like shadow and reflections. This could reduce the often-required, additional manual segmentation of secondary effects and help increase realism in the resulting edited video. Being able to factor out the related effects of foreground objects also helps reconstruct a clean background, which is preferred in applications like object removal. Despite these benefits, this problem is much more ill-posed and has been much less explored than the conventional matting problem.
The most promising attempt to tackle this problem is Omnimatte [21]. _Omnimattes_ are RGBA layers that capture dynamic foreground objects and their associated effects. Given a video and one or more coarse mask videos, each corresponding to a foreground object of interest, the method reconstructs an _omnimatte_ for each object, in addition to a static background that is free from all of the objects of interest _and_ their associated effects. While Omnimatte [21] works well for many videos, it is limited by its use of homography to model backgrounds, which requires that the background be planar or that the video contain only rotational motion. This assumption fails whenever camera motion causes parallax and objects occlude each other. This limitation hinders its application to many real-world videos, as shown in Fig. 1.
D\({}^{2}\)NeRF [36] attempts to address this issue using two
Figure 1: **Video with parallax effects. Limited by their 2D image representation (a), previous works such as Omnimatte fail to handle videos with parallax effects in the background. Their foreground layer (b) has to capture (dis)occlusion effects to minimize the reconstruction loss. In contrast, our method employs a 3D background (c), enabling us to obtain clean foreground layers (d).**
radiance fields, which model the dynamic and static parts of the scene. The method works entirely in 3D and can handle complicated scenes with significant camera motion. It is also self-supervised in the sense that no mask input is necessary. However, it separates all _moving_ objects from a static background, and it is not clear how to incorporate 2D guidance defined on the video, such as rough masks. Further, it cannot independently model multiple foreground objects. A simple solution of modeling each foreground object with a separate radiance field could lead to excessive training time, yet it is not clear how motions could be separated meaningfully in each radiance field.
We propose a method that has the benefit of both by combining 2D foreground layers with a 3D background model. The lightweight 2D foreground layers can represent multiple object layers, including complicated objects, motions, and effects that may be challenging to be modeled in 3D. At the same time, modeling background in 3D enables handling background of complex geometry and non-rotational camera motions, allowing for processing a broader set of videos than 2D methods. We call this method _OmnimattaRF_ and show in experiments that it works robustly on various videos without per-video parameter tuning. To quantitatively evaluate the background separation of a 3D scene, D\({}^{2}\)NeRF released a dataset of 5 videos rendered with Kubrics, which are simple indoor scenes with few pieces of furniture and some moving objects that cast solid shadows.
We also render five videos from open-source Blender movies [6] with sophisticated motions and lighting conditions for more realistic and challenging settings. Our method outperforms prior works in both datasets, and we release the videos to facilitate future research.
In summary, our contributions include the following:
1. We propose a novel method to make Omnimatte [21] more robust by better modeling the static background in 3D using radiance fields [22].
2. Utilizing the _omnimatte_ masks, we propose a simple yet effective re-training step to obtain a clean static 3D reconstruction from videos with moving subjects.
3. We release a new dataset of 5 challenging video sequences rendered from open-source blender movies [6] with ground truths to better facilitate the development and evaluation of the video matting with associated effects (aka _omnimatting_[21]) problem.
## 2 Related Work
Video Matting.There is a long line of work exploring video matting due to its importance in video editing. Green screening and rotoscoping are critical first steps in any visual effects pipeline. The matting problem aims to extract the foreground subjects into their own RGBA layers and separate them from the background RGB layer, which is a highly under-constrained problem. Many approaches have utilized motion and depth cues in addition to integrating user interactions [7, 3, 32, 16, 9]. Background Video Matting [18] specifically addresses real-time video matting of people and preserving strand-level hair details.
Matting with Associated Effects.Video matting is often insufficient, as foreground subjects might have associated effects like shadows or reflections that need to be extracted into the foreground RGBA layers. This problem has not been explored as extensively and, in practice, is often dealt with manually using advanced interactive rotoscoping tools [15]. Omnimatte [21] was the first to propose a generic framework capable of learning any associated effect. Previous works often specifically addressed associated effects like shadows [34, 33]. The ability to obtain matte layers with associated effects has many exciting applications, such as re-timing motions of different people [20], consistent background editing [13, 14], background subtraction, green screening, and many other video effects [21]. Recently, FactorMatte [12] has been proposed to improve the quality with data augmentation and conditional priors. These works have in common that they take predefined masks that hint at the foreground objects and decompose each video into several layers, with one object and its associated effects in each layer. In addition, there is a background layer: a 2D static image or a deformable atlas shared by all the frames. The background is warped and cropped via a homography to render each frame. While the foreground layers have shown great potential in capturing dynamics, their single-image background limits the application of these methods to videos with planar environments without parallax effects caused by camera motion.
Radiance Fields.Radiance fields (RF) emerged as 3D representations capable of capturing geometric details and photorealistic appearances [22]. Radiance fields model the 3D scene as a continuous function that maps the position and the viewing direction of any point in world space to its color and opacity. Novel views can be synthesized via volume rendering along rays cast. This continuous function is learned by optimizing with a reconstruction loss on the rendered images. This view-dependent volumetric representation can model various challenging scenes that previous surface-based methods struggled to handle: e.g., shiny surfaces like metals or fuzzy surfaces like hair or fur. Since then, it has been extended along multiple axes: better appearance modeling (e.g., reflection and refraction [31, 5, 2, 1]), faster optimization [8, 27, 23], and modeling dynamic scenes [38, 17, 10, 19]. Since the MLP-based implicit RF representations are slow to train, we use voxel-based explicit radiance field representations [8, 27].
Specifically, we use the factorized voxel grid representation from [8].
Self-Supervised Video Dynamics Factoring.Another related line of work is video dynamics factoring without needing a predefined mask. One recent work is deformable sprites [39], which relies only on motion cues. Similar to other video matting works, it has 2D foreground and background layers and the same limitations as Omnimatte. For modeling in 3D, D\({}^{2}\)NeRF [36] proposes to decouple the scene with two radiance fields, one for the dynamic content and the other for the static content. D\({}^{2}\)NeRF [36] handles a special case of matting with only one foreground object, and, compared to the other methods, it is not limited to planar backgrounds. However, the self-supervised method relies on heuristics that require per-video hyper-parameter tuning and does not robustly generalize to new videos. The quality of the foreground reconstruction can also be limited for objects that have large nonrigid motions.
We therefore propose a method for video matting with associated effects that has the advantages of supervised 2D mattes, which support multiple individual objects in great detail, as well as 3D background decoupling, which works with non-planar videos.
## 3 Method
The concept of _omnimattes_ was proposed by Lu et al. [21], extending RGBA video mattes to capture associated effects of the objects of interest like shadows and reflections. To avoid any confusion, in the following text, we refer to their work as capital Omnimatte, and the resulting RGBA layers as italic _omnimatte_. In the matting setup, the user prepares a video of \(T\) frames \(\{I_{t}\}_{t=1}^{T}\), and \(N\)_ordered_ mask layers \(\{M_{t}^{i}\}_{i=1}^{N}\), each containing a coarse mask video of an object of interest. The video's camera parameters are also precomputed as \(\{P_{t}\}\).
The goal is to predict RGBA foreground layers \(C_{t}^{i}\) and \(\alpha_{t}^{i}\) that contain the objects together with their associated effects, and a background layer \(B_{t}\) which is clean and free from the effects cast by the foreground objects. An input frame \(I_{t}\) should be reconstructed by alpha compositing the foreground layers above the background.
In Omnimatte, the background is represented by a static 2D image and a homography transform \(P_{t}\). To compose a frame, part of the static background is extracted according to the estimated homography \(P_{t}\). The key idea of our work is to represent the static background in 3D using a radiance field, while keeping the foreground in 2D to better capture the dynamics of objects. We employ an explicit factorized voxel-based radiance field [8] to model the background. In this case, \(P_{t}\) represents a camera pose, and a background frame is rendered with volume rendering. Note that the foreground layers are still 2D videos. We refer to this combination as the OmnimatteRF model.
### The OmnimatteRF Model
An outline of our model is depicted in Figure 2. The model has two independent branches: foreground and background. For any given frame, the foreground branch predicts an RGBA image (_omnimatte_) for each object, and the background branch renders a single RGB image.
**Preprocessing**. Following similar works, we use an off-the-shelf model RAFT [29] to predict optical flow between neighboring frames. The flow is used as an auxiliary input and ground truth for supervision, denoted by \(\{F_{t}\}\). We also use an off-the-shelf depth estimator MiDaS [26] to predict monocular depth maps \(\{D_{t}\}\) for each frame and use them as ground truth for the monocular depth loss.
**Background**. The background branch consists of a static neural radiance field, \(f_{\text{bg}}\), encoding the 3D representation of the scene. To render a pixel in a frame \(I_{t}\), a ray is traced according to the estimated camera pose \(P_{t}\), and the final RGB color is produced via volumetric rendering. The result of rendering the entire frame is \((B_{t},\hat{D}_{t})=f_{\text{bg}}(P_{t})\), where \(B_{t}\) is an RGB image and \(\hat{D}_{t}\) is a depth map.
**Foreground**. The foreground branch is a UNet-style convolutional neural network, \(f_{\text{fg}}\), similar to that of Omnimatte. The input of the network is a concatenation of three maps:
1. The coarse mask \(M_{t}^{i}\). The mask is provided by the user, outlining the object of interest. Mask values are ones if the pixels are inside the object.
2. The optical flow \(F_{t}\). It provides the network with motion hints. Note that the network also predicts an optical flow as an auxiliary task (detailed in Sec. 3.2.2).
3. The feature map \(E_{t}\). Each pixel \((x,y)\) in the feature map is the positional encoding of the 3-tuple \((x,y,t)\).
Multiple foreground layers are processed individually. For the \(i\)-th layer, the network predicts the _omnimatte_ layer \((C_{t}^{i},\alpha_{t}^{i})\) and the flow \(\hat{F}_{t}^{i}\).
**Detail Transfer**. For a tradeoff between image quality and training time, the foreground network typically produces a color layer with missing details when the alpha layers have captured sufficient associated effects. To boost the output quality, Omnimatte transfers details from input frames. We include the same process in our pipeline. Note that this is a post-processing step to produce final results, and does not apply to model optimization.
### Optimizing the Model
We optimize an OmnimatteRF model for every video since both branches of our model are video-specific. To supervise learning, we employ an image reconstruction loss and several regularization losses.
#### 3.2.1 Reconstruction Loss
We compute the reconstruction loss with the composed image \(\hat{I}_{t}\) by alpha composition of foreground and background layers:
\[\hat{I}_{t}=\sum_{i=1}^{N}\left(\prod_{j=1}^{i-1}(1-\alpha_{t}^{j})\right)\alpha_{t}^{i }C_{t}^{i}+\prod_{i=1}^{N}(1-\alpha_{t}^{i})B_{t} \tag{1}\]
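For concreteness, Eq. 1 is ordinary front-to-back alpha compositing. A minimal PyTorch sketch follows; the tensor shapes and the convention that layer \(i=1\) is front-most are our assumptions, not a description of the released implementation.

```python
import torch

def composite(alphas, colors, background):
    """Eq. 1: alphas is a list of [H, W, 1] mattes ordered front-most first,
    colors the matching [H, W, C] layers, background an [H, W, C] frame."""
    out = torch.zeros_like(background)
    trans = torch.ones_like(alphas[0])  # running product of (1 - alpha)
    for alpha, color in zip(alphas, colors):
        out = out + trans * alpha * color
        trans = trans * (1.0 - alpha)
    return out + trans * background
```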
And the reconstruction loss is the mean-squared-error between the predicted and input frame,
\[\mathcal{L}_{\text{recons}}=||\hat{I}_{t}-I_{t}||^{2} \tag{2}\]
The reconstruction loss supervises both branches of our pipeline simultaneously. Limited by the computational cost of volumetric rendering, the background layer is rendered only at sparse random locations at each step, where \(\mathcal{L}_{\text{recons}}\) is computed for the composed pixel values.
#### 3.2.2 Foreground Losses
We follow Omnimatte and include the alpha regularization loss \(\mathcal{L}_{\alpha\text{-reg}}\), alpha warp loss \(\mathcal{L}_{\alpha\text{-warp}}\), and flow reconstruction loss \(\mathcal{L}_{\text{flow}}\). We also bootstrap the initial alpha prediction to match the input mask with the mask loss \(\mathcal{L}_{\text{mask}}\), which is gradually decayed and disabled once its value drops below the threshold.
While most regularization terms in Omnimatte can be applied directly to our pipeline, the flow reconstruction loss is an exception. The formulation of the loss remains identical: given the per-layer flow prediction \(\hat{F}_{t}^{i}\) and a background layer flows \(F_{t}^{\text{bg}}\), the complete flow \(\hat{F}_{t}\) is composed via alpha composition (Eq. 1). Then, the loss is defined as:
\[\mathcal{L}_{\text{flow}}=||(\hat{F}_{t}-F_{t})\otimes M_{t}^{\text{fg}}||^{2} \tag{3}\]
Here, \(M_{t}^{\text{fg}}\) is the union of all foreground masks (\(\{M_{t}^{i}\}\)) for the frame \(I_{t}\), and the loss is only evaluated at the location of _input_ coarse masks. The authors of Omnimatte have shown the effectiveness of this loss in their case, and we also demonstrate its importance in an ablation study.
However, it remains unclear how \(F_{t}^{\text{bg}}\) can be obtained. In Omnimatte, the background flow can be derived from image homography, which serves both as an input to the network and a background for composition. On the other hand, since our 3D background has only known camera poses but not depths, we cannot obtain background flows directly. Instead, we use the ground truth flow \(F_{t}\) as network input to provide motion cues and a masked version of \(F_{t}\) as background flow for composition. The masked flow is \(F_{t}^{\text{m}}=F_{t}\otimes(1-M_{t}^{\text{fg}})\), which is the ground truth optical flow with the regions marked in the coarse masks set to zeros. \(\otimes\) denotes elementwise multiplication. We find it crucial to use \(F_{t}^{\text{m}}\) rather than \(F_{t}\) for composition, as the latter case encourages the network to produce empty layers with \(\alpha_{t}^{i}\) equal to zero everywhere.
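The same compositing routine can be reused for the flow, with the masked ground-truth flow as the background term. A sketch, where the tensor names are illustrative:

```python
import torch

# gt_flow: [H, W, 2]; coarse_masks: list of [H, W, 1] object masks;
# pred_alphas / pred_flows: per-layer predictions from the foreground network.
fg_mask = torch.clamp(torch.stack(coarse_masks).sum(dim=0), 0.0, 1.0)
flow_bg = gt_flow * (1.0 - fg_mask)               # F_t^m = F_t (1 - M_t^fg)
flow_hat = composite(pred_alphas, pred_flows, flow_bg)
loss_flow = ((flow_hat - gt_flow) * fg_mask).pow(2).mean()  # Eq. 3
```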
Figure 2: **Method overview.** We propose a video matting method, named OmnimatteRF, which combines 2D foreground layers with a 3D background layer. The foreground branch (\(f_{\text{fg}}\), in green box) predicts an RGBA layer (\(C_{t}^{i},\alpha_{t}^{i}\)) for each object, and an auxiliary flow output (\(\hat{F}_{t}^{i}\)). The background branch (\(f_{\text{bg}}\), in yellow box) produces a background layer with depths (\(B_{t},\hat{D}_{t}\)). **Optimization.** During training, predicted colors (\(\hat{I}_{t}\)) and flow (\(\hat{F}_{t}\)) are alpha-composited, whose inputs have red and green borders respectively. The rightmost column illustrates the data terms in the loss function, and we omit the regularization terms in this illustration.
#### 3.2.3 Background Losses
Apart from the reconstruction loss, the background network is supervised by the total variation regularization loss, \(\mathcal{L}_{\text{bg-reg}}\), as in TensoRF [8]. In addition, monocular depth supervision is used to improve scene reconstruction when the camera motions consist of rotation only:
\[\mathcal{L}_{\text{depth}}=\text{metric}(D_{t},\hat{D}_{t}), \tag{4}\]
where \(\hat{D}_{t}\) is the estimated depth from volume rendering [22], and the metric function is the scale-invariant loss from MiDaS [26]. Also, we empirically find that \(\mathcal{L}_{\text{depth}}\) can introduce floaters, and employ the distortion loss \(\mathcal{L}_{\text{distort}}\) proposed in Mip-NeRF 360 [4] to reduce artifacts in the background.
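The scale-invariant metric can be implemented with a closed-form least-squares alignment before the residual penalty. The sketch below is a simplified variant in the spirit of the MiDaS loss; the robust trimming and disparity-space details of [26] are omitted.

```python
import torch

def scale_invariant_depth_loss(pred, target):
    """Align pred to target with an optimal scale and shift (least squares),
    then penalise the remaining squared residual."""
    p, t = pred.flatten(), target.flatten()
    A = torch.stack([p, torch.ones_like(p)], dim=1)       # [N, 2] design matrix
    sol = torch.linalg.lstsq(A, t.unsqueeze(1)).solution  # optimal (scale, shift)
    aligned = (A @ sol).squeeze(1)
    return torch.mean((aligned - t) ** 2)
```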
#### 3.2.4 Summary
The combined loss for joint optimization is:
\[\begin{split}\mathcal{L}=&\mathcal{L}_{\text{recons} }+\underbrace{\mathcal{L}_{\alpha\text{-reg}}+\mathcal{L}_{\alpha\text{- warp}}+\mathcal{L}_{\text{flow}}+\mathcal{L}_{\text{mask}}}_{\text{ Foreground}}+\\ &\underbrace{\mathcal{L}_{\text{bg-reg}}+\mathcal{L}_{\text{ depth}}+\mathcal{L}_{\text{distort}}}_{\text{Background}}\end{split} \tag{5}\]
At every optimization step, \(\mathcal{L}_{\text{recons}}\) and background losses are evaluated at sparse random locations. Foreground losses are computed for the full image.
### Clean Background via Masked Retraining
When the pipeline is trained jointly as described above, it is sometimes observed that the background radiance field models some of the foreground contents like shadows (see Fig. 3(c)). Compared to 2D images, 3D radiance fields are so much more capable that they can exploit distorted geometry constructs, such as holes and floaters, to capture some temporal effects, although the models are given no time information. For example, as the camera moves over time, there may be a correlation between whether a surface is covered by shadow and the direction the surface is viewed from.
We illustrate this problem in Fig. 3 and explain the cause at an intuitive level. The foreground branch is bootstrapped to produce alpha values that match the coarse mask inputs, which include only the object without the associated effects. In other words, \(\alpha_{t}\) values are close to one at the object, but zero in the shadows (for simplicity, we consider one foreground layer in which the object casts a shadow, like in Fig. 3). At a pixel \((x,y)\) covered by shadow, Eq. 1 simply collapses to \(\hat{I}_{t}(x,y)\approx B_{t}(x,y)\). The reconstruction loss will therefore encourage \(B_{t}(x,y)\) to match the color of the shadow for a ray shot toward this location.
As training proceeds, \(f_{\text{fg}}\) will then gradually increase the predicted alpha values at the shadowed regions. If the shadow is hard and \(\alpha\) gets close to one, Eq. 1 evaluates to \(\hat{I}_{t}(x,y)\approx C_{t}^{i}(x,y)\), and the reconstruction loss gives little to no constraint to the background color at the pixel. As a result, \(f_{\text{bg}}\) is unable to learn to remove the shadow color that it produces for the ray towards frame \(I_{t}\) at \((x,y)\).
There are also cases where the shadow is soft and \(\alpha\) is in between. In these cases, the problem remains ambiguous.
Therefore, we propose to obtain clean background reconstruction via an optional optimization step. In joint training, the foreground _omnimatte_ layers can capture most associated effects, including the parts with leaked content in the background layer. The alpha layers \(\alpha_{t}\) can then be used to train a radiance field model from scratch, with no samples from the foreground region where alpha values are high. We show in the ablation study (see Fig. 7) that this step produces cleaner background reconstruction for in-the-wild videos. As only the background is optimized, the process is fast and takes less than an hour to complete.
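In implementation terms, the retraining step only changes how rays are drawn. A sketch of the masked ray selection, with the alpha threshold and batch size as illustrative choices:

```python
import torch

def sample_background_rays(alpha_union, batch_size=4096, thresh=0.5):
    """Pick random (frame, y, x) pixels where the learned mattes are low, so
    the retrained radiance field never sees foreground-covered samples;
    alpha_union is a [T, H, W] tensor of composited foreground alphas."""
    candidates = (alpha_union < thresh).nonzero()      # [K, 3] index tuples
    pick = torch.randperm(candidates.shape[0])[:batch_size]
    return candidates[pick]
```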
## 4 Evaluation
We compare our method quantitatively and qualitatively with Omnimatte and D\({}^{2}\)NeRF [21, 36], which are state-of-the-art methods in 2D video matting and 3D video segmentation, respectively. In addition, we compare with Layered Neural Atlas (LNA) [13], which uses a deformable 2D background in contrast to Omnimatte's static image.
Figure 3: **Background Layer Training Signals. We illustrate how the training signal to the background layer changes over time. It explains why the background captures some of the associated effects (in this example, shadows). We use the pixel circled in red as an example. (a) At the beginning of training, the foreground alpha value (in light green) does not include the shadow. Therefore, \(\alpha\) is small and at this pixel, \(\hat{I}_{t}(x,y)\approx B_{t}(x,y)\). The reconstruction loss \(\mathcal{L}_{\text{recons}}\) encourages the background network \(f_{\text{bg}}\) to produce dark prediction at this location from this viewing angle. (b) As training progresses, \(\alpha\) gets larger in the shadow region, and \(\hat{I}_{t}(x,y)\approx C_{t}^{i}(x,y)\). This means that \(f_{\text{bg}}\) receives little to no supervision signals from this pixel. If it has modeled the shadow in some ways (in this case, a hole), it has little incentive to remove it, leaving the artifact in (c).**
### The Movies Dataset
Quantitative evaluation of background segmentation requires a dataset with both input videos and ground-truth background imagery. Prior works primarily use datasets like CDW-2014 [35], which are limited to mostly static backgrounds and are not applicable to our settings. Recently, the Kubrics dataset was proposed in D\({}^{2}\)NeRF, which enables the evaluation of 3D background synthesis. However, these videos have relatively simple scenes and lighting. To facilitate the evaluation of video matting and background segmentation in challenging scenarios, we select six clips from three Blender movies in Blender Studio [6]. Compared to Kubrics, they feature more complicated scenes and lighting conditions, large nonrigid motion of the characters, and higher resolution. To ensure usability, we manually edit the camera trajectories so that there are sufficient camera motions and the actors have reasonable sizes. We render the clips with and without the actors to obtain input and ground truth for background reconstruction evaluation purposes. The camera poses are also exported.
### Experiment Setup
We evaluate the performance of our proposed method on four datasets.
1. Movies: our novel challenging dataset.
2. Kubrics: the dataset generated and used in D\({}^{2}\)NeRF, which consists of five scenes of moving objects from 3D Warehouse [30] rendered with Kubric [11].
3. DAVIS [24, 25]: short clips with moving foreground subjects, like humans, cars, and animals. This dataset is widely used to evaluate 2D-background matting methods [21, 13, 39].
4. Wild: in-the-wild sequences collected from the internet that are closer to casually captured videos, with natural and noisier camera motions, including translations and rotations, as well as objects at different distances from the camera. Naturally, these videos have backgrounds that are challenging for pure 2D methods.
Kubrics and Movies are synthetic datasets with clean background layer renderings available. Note that novel view synthesis is not the focus of our method, so we evaluate the background with input views. Both datasets have known camera poses and object masks which are used for training and evaluation.
DAVIS and Wild are real-world videos without clean background. Therefore, we only perform a qualitative evaluation to demonstrate the robustness of our method. For videos in Wild we recover camera poses with COLMAP. For videos that COLMAP cannot process reliably, including DAVIS videos, we use poses from RoDynRF [19].
To obtain coarse object masks, we attempt to extract them with pre-trained object segmentation models from Detectron 2 [37]. In case it does not work, we use the Roto Brush tool in Adobe After Effects. Detailed procedures are described in the supplementary material. It takes about 10 minutes of manual effort to produce a 200-frame mask.
For all videos, we also estimate homographies with LoFTR [28] and OpenCV to enable Omnimatte processing.
As mentioned in D\({}^{2}\)NeRF [36], the method is sensitive to hyperparameters. The authors released five sets of configurations for different videos. We experiment with every video using all provided configurations and report the best-performing ones.
### Implementation Details
Our network is built upon the publicly available official implementation of Omnimatte [21], and TensoRF [8]. The videos in Kubrics have resolution \(512\times 512\), and all methods run at the resolution \(256\times 256\). For videos in other datasets with a higher resolution of \(1920\times 1080\), we downsample them by a factor of 4.
We optimize the networks for up to 15,000 steps. The learning rate of \(f_{\text{fg}}\) is set to \(0.001\) and is exponentially decayed after 10,000 steps. For \(f_{\text{bg}}\) we use the learning rate scheduling scheme of TensoRF. Training takes up to 6 hours on a single RTX3090 GPU. Detailed network architecture, hyper-parameters and timing data are presented in the supplementary. Our code and datasets will also be made publicly available.
### Quantitative Evaluation
We quantitatively evaluate the background reconstruction quality of our method on two synthetic datasets. We
Figure 4: **Background Reconstruction.** We show examples of results presented in quantitative evaluations. For videos with parallax effects, 3D methods like D\({}^{2}\)NeRF and ours reconstruct less distorted background than Omnimatte and LNA.
report PSNR, SSIM, and LPIPS for all videos in Table 1, and show some visualizations in Fig. 4. For D\({}^{2}\)NeRF, we tried every provided pre-set configuration for every video in Movies, and it only gave good results for the Dog, Rooster, and Dodge videos. Omnimatte and LNA, with their 2D background layers, struggle on both datasets. Our method can handle these videos well.
### Qualitative Evaluation
We present a qualitative comparison of the methods in Fig. 5. Due to space limitations, we present at least one video from every dataset but show a frame from every selected video in the figure. The original videos are available in the supplementary, and we highly recommend watching them. D\({}^{2}\)NeRF works well for the fine-tuned videos but not for new inputs without further hyper-parameter tuning. Omnimatte's background has significant distortion around objects, and its foreground layer has to compensate for this limitation by capturing all residuals. Our method is versatile enough to perform well for a variety of videos with our 3D background model.
### Ablation Studies
#### 4.6.1 Loss Terms
We present background reconstruction results without \(\mathcal{L}_{\text{depth}}\) in Fig. 6. For video sequences with rotational camera poses, the model struggles to extract 3D information from the input videos because of a lack of 3D cues. This loss is critical to extending our method to a broader range of videos. The effects of \(\mathcal{L}_{\text{flow}}\) are also demonstrated in Fig. 6. The auxiliary task improves foreground quality and reduces unrelated content.
#### 4.6.2 Clean Background Retraining
We employ an additional step for real-world sequences to optimize a clean background from scratch. In Fig. 7, we compare the background layer from the initial joint optimization and the final result. This is a simple yet robust way to obtain a better background.
### Limitations
We list some limitations that future works can explore.
1. If a background region is covered by shadows nearly all of the time, the background model cannot recover its color correctly. An example from a Movies video is shown in Fig. 8. In theory, an _omnimatte_ layer has an alpha channel and can capture only the additive shadow, which would allow the background to retain its original color. However, this problem is largely under-constrained in the current setting, making it ambiguous and leading the background to unsatisfactory solutions.
2. The foreground layer captures irrelevant content. In real-world videos, unrelated motions often exist in the background, like swaying trees and moving cars. These effects cannot be modeled by the static radiance field and will be captured by the foreground layer regardless of their association with the object. Possible directions include i) using a dummy 2D layer to catch such content or ii) a deformable 3D background model with additional regularization to address the ambiguity as both background and foreground can model motion.
3. Foreground objects may have missing parts in the _omnimatte_ layers if they are occluded. Since our foreground network predicts pixel values for alpha composition, it does not always hallucinate the occluded parts.
4. The video resolution is limited. This is primarily due to the U-Net architecture of the foreground model inherited from Omnimatte. Higher resolutions can potentially be supported with the use of other lightweight image encoders.
5. The foreground layer may capture different content when the weights are randomly initialized differently. We include visual results in the supplementary materials.
## 5 Conclusion
We propose a method to obtain _omnimattes_, RGBA layers that include objects and their associated effects by com
Each cell lists LPIPS\(\downarrow\) / SSIM / PSNR.

**Kubrics**

| Method | Car | Cars | Bag | Chair | Pillow |
| --- | --- | --- | --- | --- | --- |
| D\({}^{2}\)NeRF | 0.135 / 0.854 / 34.10 | 0.105 / 0.859 / 34.77 | 0.131 / 0.880 / 33.98 | 0.090 / 0.916 / 33.29 | 0.105 / 0.926 / 38.80 |
| Omnimatte | 0.162 / 0.819 / 31.14 | 0.157 / 0.834 / 31.20 | 0.271 / 0.796 / 23.64 | 0.175 / 0.865 / 26.91 | 0.270 / 0.841 / 21.17 |
| LNA | - | - | 0.138 / 0.835 / 27.08 | 0.105 / 0.881 / 21.21 | 0.080 / 0.923 / 31.66 |
| Ours | **0.033 / 0.958 / 39.09** | **0.032 / 0.961 / 39.78** | **0.029 / 0.972 / 39.58** | **0.023 / 0.977 / 42.46** | **0.022 / 0.982 / 43.62** |

**Movies**

| Method | Donkey | Dog | Chicken | Rooster | Dodge |
| --- | --- | --- | --- | --- | --- |
| D\({}^{2}\)NeRF | - | 0.370 / 0.694 / 22.73 | - | 0.340 / 0.708 / 25.13 | 0.408 / 0.729 / 20.95 |
| Omnimatte | 0.315 / 0.653 / 19.11 | 0.279 / 0.706 / 21.74 | 0.312 / 0.704 / 20.95 | 0.220 / 0.741 / 23.14 | 0.067 / 0.879 / 23.88 |
| LNA | 0.104 / 0.849 / 18.79 | 0.154 / 0.828 / 26.08 | 0.190 / 0.818 / 19.22 | 0.131 / 0.804 / 26.46 | 0.068 / 0.937 / 24.94 |
| Ours | **0.005 / 0.990 / 38.24** | **0.030 / 0.976 / 31.44** | **0.021 / 0.978 / 32.86** | **0.024 / 0.969 / 27.65** | **0.006 / 0.991 / 39.11** |

Table 1: **Quantitative evaluations.** We present the background reconstruction comparison of our method and baselines on the Kubrics and Movies datasets. Best results are in **bold** (second place, underlined in the original typesetting, is not marked here). Results marked - are the ones where the method failed to give good separations (visuals in supplementary).
bining 2D foreground layers and a 3D background model. Extensive experiments demonstrate that our approach is applicable to a wide variety of videos, expanding beyond the capabilities of previous methods.
Figure 5: **Qualitative comparison.** We compare results of our and baseline methods on videos from each dataset. Readers are strongly encouraged to view the video files of more sequences available in the supplementary. The first two videos are synthetic, from Kubrics and Movies, followed by three Wild videos. Omnimatte fails to handle objects in 3D and produces distorted backgrounds. D\({}^{2}\)NeRF works for videos with appropriate hyper-parameters, but does not generalize to new videos easily. Our method handles videos in many different settings. Due to space constraints we defer LNA results to the supplementary.
Figure 6: **Loss Term Ablations.** The background without \(\mathcal{L}_{\text{depth}}\) and the foreground without \(\mathcal{L}_{\text{flow}}\) can be degraded for real-world videos.
Figure 7: **Clean Background Retraining.** Background layers jointly trained can capture the shadows as a hole on the ground (a-c). After the joint training, the foreground _omnimatte_ provides a better mask that can be used to train a clean background (d-f).
2309.13714 | Observation of wave propagation over 1,000 km into Antarctica winter
pack ice | A drifting wave-ice buoy, which was configured by mounting the OpenMetBuoy on
an ad hoc floating platform that we named Medusa, was deployed at the
L\"utzow-Holm Bay (LHB) marginal ice zone in Antarctica on 4 Feb 2022 during
the 63rd Japanese Antarctica research expedition. The wave-ice buoy,
Medusa-766, survived the Antarctica winter as the measurement duration reached
333 days. During the winter months, it was located deep in the ice cover with
the shortest distance to the ice-free Southern Ocean over 1,000 km; at this
time, there was evidence of ocean wave signals at the buoy position. Using the
directional wave spectra obtained from the ECMWF's reanalysis, we show that the
Medusa-766 observed waves were likely generated by an extratropical cyclone in
the Southern Ocean. Wave-induced ice breakup potential for such an event could
extend 100s km into the ice field. When Medusa-766 was in LHB in the summer
months, it did not detect sizeable wave energy despite the low sea ice
concentration extent even during on-ice waves events. Characterising the
considerable differences in the wave attenuation at LHB is needed to elucidate
the relative contribution of ocean waves to the unstable LHB fast ice. The
success of Medusa-766 demonstrates the robustness of the general design,
hardware, firmware, and the high sensitivity of the sensor used. The result is
promising for future LHB wave-ice interaction research. | Takehiko Nose, Tomotaka Katsuno, Takuji Waseda, Shuki Ushio, Jean Rabault, Tsubasa Kodaira, Joey Voermans | 2023-09-24T18:14:13Z | http://arxiv.org/abs/2309.13714v1 | # Observation of wave propagation over 1,000 km into Antarctica winter pack ice
###### Abstract
A drifting wave-ice buoy, which was configured by mounting the OpenMetBuoy on an ad hoc floating platform that we named Medusa, was deployed at the Lutzow-Holm Bay (LHB) marginal ice zone in Antarctica on 4 Feb 2022 during the 63rd Japanese Antarctica research expedition. The wave-ice buoy, Medusa-766, survived the Antarctica winter as the measurement duration reached 333 days. During the winter months, it was located deep in the ice cover with the shortest distance to the ice-free Southern Ocean over 1,000 km; at this time, there was evidence of ocean wave signals at the buoy position. Using the directional wave spectra obtained from the ECMWF's reanalysis, we show that the Medusa-766 observed waves were likely generated by an extratropical cyclone in the Southern Ocean. Wave-induced ice breakup potential for such an event could extend 100s of km into the ice field. When Medusa-766 was in LHB in the summer months, it did not detect sizeable wave energy despite the low sea ice concentration extent, even during on-ice wave events. Characterising the considerable differences in the wave attenuation at LHB is needed to elucidate the relative contribution of ocean waves to the unstable LHB fast ice. The success of Medusa-766 demonstrates the robustness of the general design, hardware, firmware, and the high sensitivity of the sensor used. The result is promising for future LHB wave-ice interaction research.
**Abbreviations:** AMSR2 - Advanced Microwave Scanning Radiometer 2; ADS - The Arctic Data archive System; ERA5 - ECMWF Reanalysis 5; GNSS - Global Navigation Satellite System; IMU - Inertial measurement unit; JARE - Japanese Antarctica research expedition; LHB - Lutzow-Holm Bay; MSLP - mean sea level pressure; MIZ - Marginal ice zone; MCU - Micro controller unit; OMB - OpenMetBuoy; SIC - Sea ice concentration; SAR - Synthetic aperture radar; PSD - power spectral density
drifting wave-ice buoy, Lutzow-Holm Bay, waves in Antarctica sea ice, Japanese Antarctica research expedition, OpenMetBuoy
## 1 Introduction
The inaugural JARE was conducted in 1956; each year, a Japanese icebreaker traverses the summer ice cover to deliver necessary supplies to the Syowa Station. The current icebreaker _Shirase_ has been sailing since 2009. A variety of monitoring and research observations are made, covering the atmosphere, ocean, and space. There is anecdotal evidence that waves traverse the sea ice to reach the Syowa Station. One of the most striking events was the fast ice breakup that occurred on 18 Mar 1980, which led to the loss of two aeroplanes moored on ice near the Syowa Station. This event is documented in Higashi et al. (1982), in which they concluded that the catastrophic ice breakup event near the Ongul Island was caused by the swell-induced flexural failure of the fast ice. An extratropical cyclone with a minimum pressure of 940 hPa generated waves that propagated towards the Syowa Station. Higashi et al. (1982) estimated the incoming wave height at the ice edge to be around 4 m, which was attenuated by the sea ice cover to 0.4 m over a distance of 10 km. They suggested that such wave conditions can disintegrate a 1 m thick ice plate into pieces of less than 100 m. The ice in this area had been used as a runway for aeroplanes since 1957, when the Syowa Station was established, and the 1980 event was the first time fast ice broke up on a regional scale.
Higashi et al. (1982) noted that even before the 1980 breakup event, fast ice extensively drifted out of LHB several times after severe storms. The uniqueness of the JAREs is the availability of decades of observations, such as in situ Syowa Station measurements and navigation logs along almost the same shipping route and season. Exploiting this, Ushio (2003); Ushio et al. (2004); Ushio (2006) investigated the frequency of fast ice breakup in LHB and its subsequent loss from the bay. They combined the in situ and navigation data with satellite imagery and focused on the period between 1980 and 2004. Figure 2 in Ushio (2006) shows that fast ice breakups occurred every year from 1980 to 1988. A stable period followed from 1988 to 1996, but after that, prolonged (e.g., 2-3 month) breakups occurred every year until the end of the analysis period in 2004. Satellite imagery shows that fast ice in LHB has been unstable, with repeated breakups followed by a stable condition with a wide area freezing up every several years. Here, the "breakup" is defined as the offshore drift of fractured ice floes northward of 68\({}^{\circ}\) 50' S (Ushio 2006). Ushio (2003, 2006) explored factors that contribute to the fast ice breakup and concluded that low snow cover over fast ice and the pack ice conditions northward of the LHB fast ice correlated with the LHB ice breakups. Regarding the latter point, Ushio (2006) hypothesised that a pack ice zone serves as a "barrier" to the LHB fast ice. He suggests that a compact pack ice zone is a more effective barrier and showed that LHB fast ice breakups coincide with the formation of a large polynya in the pack ice zone during the winter months. To further consolidate this hypothesis, we need to quantify how much wave energy remains unattenuated and propagates to the LHB fast ice.
With a view to studying the relative contribution of ocean waves to the LHB fast ice breakup, we conducted a wave measurement experiment during JARE63 and deployed four drifting wave-ice buoys and two SOFAR Spotter buoys in the MIZ offshore of LHB. Two of the four wave-ice buoys (a wave-ice buoy is a term used when a wave sensor is placed on an ice floe, and that ice floe serves as the sensor's floating platform) were equipped with the OMB (Rabault et al. 2022) as the wave measuring sensor, of which one, named Medusa-766, measured and transmitted GNSS and wave data for around 333 days. Because Medusa-766 survived the harsh Antarctica winter, it recorded valuable observations deep in the winter pack ice: the data provide remarkable
evidence that ocean wave energy propagates over 1,000 km into the ice cover. We use the well-known wave attenuation law to examine the likely source of the wave signal. We, then, discuss the distance scale of wave-induced ice breakup potential along the propagation distance. Wave-induced ice breakup is not well understood because of the difficulty in conducting observations that capture the exact instance of ice breakups. Despite the challenges, Voermans et al. (2020) proposed a wave-induced ice breakup index known as \(I_{br}\) based on wave conditions and sea ice properties. We discuss the derivation and application of this wave-induced breakup index. Lastly, the wave data measured in LHB is discussed in the context of wave attenuation characteristics.
## 2 Wave-ice buoy, data, and analysis methods
### The Medusa-766 wave-ice buoy
A reliable and durable wave-ice buoy is desired for successful deployments in the harsh Antarctica sea ice cover. Additionally, long-lasting battery life is another desired trait, as the opportunity to deploy sensors in Antarctica is limited. To this end, the OMB (Rabault et al., 2022) was considered a suitable choice for the JARE wave-ice observation for the following reasons. First, the OMB uses a low power MCU and an industrial grade IMU that is thermally calibrated to \(-40\,^{\circ}\)C (STMicroelectronics, 2020). While microprocessor based wave-ice buoys have been used to collect wave-ice data, e.g., Kohout et al. (2015); Rabault et al. (2020), low power MCUs, which have sufficient clock speed and RAM to perform the necessary onboard calculations, have advanced the wave-ice buoy technology. The OMB's reliability has been proven in the Arctic region (Rabault et al., 2023; Nose et al., 2023), while the JARE63 observation was the first OMB deployment on the Antarctica sea ice.
We outfitted the OMB with six Tadiran Lithium D-cell TL-5930 batteries, each with 19 Amp-hr capacity. The number was six not because we planned a specific measurement duration, but because we packed as many batteries as we could into the sensor enclosure (the Takachi waterproof enclosure SPCP131815T). From previous campaigns, we expected a battery life of one year. A detailed description of the estimated power consumption during the observation period is given in Appendix A; a back-of-the-envelope version is sketched below. The sensor enclosure was mounted on a substructure, or a floating platform, that was designed for the JARE63 deployment. In many ice floe observations, a sensor and its enclosure are directly placed on the ice floe. However, in the severe ice conditions of the Antarctica winter, sensor enclosures can be destroyed by ridging and rafting of the ice. The ad hoc floating platform that we named Medusa was designed to provide sensor protection, and the OMB was mounted on it. Medusa was 10 kg in weight, and its dimensions were 540 mm in diameter and 280 mm in height. Medusa's structure consists of a life-saving buoy sandwiched between two stainless steel plates, on which the sensor enclosure is mounted. Figure 1 presents Medusa-766 images: the left image shows how the sensor enclosure was attached to Medusa, and the right image shows Medusa-766 being deployed on an ice floe.
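As a rough cross-check of the power budget (the measured consumption is given in Appendix A; the mean draw below is a hypothetical figure implied by the pack capacity and the achieved lifetime, not a measured one):

```python
capacity_Ah = 6 * 19.0                 # six TL-5930 cells at 19 Amp-hr each
days = 333                             # achieved Medusa-766 lifetime

# Mean current draw that would exhaust the pack in exactly 333 days:
mean_draw_mA = capacity_Ah / (days * 24.0) * 1000.0
print(f"implied mean draw: {mean_draw_mA:.1f} mA")   # ~14 mA
```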
Spectral noise in accelerometer-based wave buoys affects wave measurements, e.g., overestimation of the spectral density in the frequency range 0.05-0.15 Hz when wind speed exceeds 10 ms\({}^{-1}\) or wave heights exceed 4 m (Collins and Jensen, 2022). With the emergence of IMU-based wave buoys housed in relatively small platforms (diameter \(\approx\) 0.5 m or smaller), we seem to be facing a similar but different problem regarding the noise floor. When the wave-ice buoy motion is stable, e.g., on an ice floe with a diameter much larger than the encountered wave wavelengths, the spectral noise floor is consistent with the sensor specification white noise; however, when the buoy floats in the open ocean, e.g., our Arctic Ocean wave observations (Waseda et al., 2017; Nose et al., 2018, 2023), the spectral noise floor is significantly elevated and an ideal filter is needed to retrieve the frequency integrated wave statistics. As will be shown later, the results discussed in this study are unaffected by the spectral noise floor.
The default GNSS and wave measurement intervals were 0.5 and 1 hour, respectively. The 1 hourly wave intervals were chosen as we anticipated frequently detectable wave energy during the low ice extent months between February and April. After April, we changed the measurement intervals to 1 and 3 hours (GNSS and wave), exploiting the 2-way Iridium messaging capability (Rabault, 2022) that was implemented for the JARE63 campaign. In the harsh field environment, however, the OMB MCU periodically reboots when the firmware logic somehow goes amiss; the measurement intervals then return to their defaults. After a reboot, we need to send another message to change the intervals back to the desired durations.
### Atmospheric pressure and wind, sea ice, and wave data
The MSLP and wind data were obtained from ERA5 and used to describe the synoptic conditions in the Southern Ocean. The SIC was used to infer the surface conditions around Medusa-766 and to estimate the distance from the ice edge to its position when there was a wave signal. The SICs are derived from the AMSR2 data and obtained from ADS (Hori et al., 2012) (ADS-AMSR2 herein).
Regional wave fields were produced from the significant wave height, peak wave period, and mean wave direction from ERA5, which were used to describe the wave field evolution under extratropical cyclones and the wave conditions near the ice edge. The directional wave spectra \(S(f,\theta)\) were also obtained from ERA5 and used to interpret the directional distribution of the incoming waves that propagate into the ice field. \(f\) is the frequency (Hz) and \(\theta\) is the wave direction in the meteorological convention (i.e., the direction waves propagate from). The \(S(f,\theta)\) integrated over the directions produces the frequency wave spectrum (or PSD), \(S(f)=\int S(f,\theta)d\theta\), which was used to estimate the wave attenuation rate due to sea ice. Instead of integrating over all directions, limiting the integration range produces the discretised directional energy that propagates towards the buoy position; a minimal sketch of this step is given below.
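The following Python sketch illustrates the integration step with a synthetic placeholder for \(S(f,\theta)\); the grid resolution and the swell peak parameters are illustrative assumptions, and real ERA5 arrays would be substituted in practice.

```python
import numpy as np

# Frequencies and wave-from directions roughly matching the ERA5 grid used here.
f = np.linspace(0.034, 0.548, 30)                    # Hz
theta = np.deg2rad(np.arange(0.0, 360.0, 15.0))      # rad

# Placeholder directional spectrum S(f, theta): a swell peak arriving from
# the northwest (theta ~ 315 deg); purely illustrative shape and amplitude.
S_fth = (np.exp(-((f[:, None] - 0.07) / 0.02) ** 2)
         * np.exp(-((theta[None, :] - np.deg2rad(315)) / 0.5) ** 2))

# S(f) = int S(f, theta) dtheta over all directions ...
S_all = np.trapz(S_fth, theta, axis=1)

# ... or over a limited sector only (here the northwest quadrant, 270-360 deg),
# giving the discretised energy propagating towards the buoy position.
nw = (theta >= np.deg2rad(270)) & (theta < np.deg2rad(360))
S_nw = np.trapz(S_fth[:, nw], theta[nw], axis=1)
```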
### Wave statistics and wave-ice attenuation
Wave spectra (PSD) of the vertical surface displacements are estimated from the IMU readings and transmitted via Iridium messages (Rabault et al., 2022). The integrated wave statistics are calculated from the \(S(f)\). The significant wave height is \(H_{m0}=4\sqrt{m_{0}}\) where \(m_{0}=\int_{f0}^{f1}S(f)df\). The wave periods used were the peak period \(T_{p}\), which is the inverse frequency of the peak of \(S(f)\), and the \(-1\) moment period (known as the energy mean wave period in ERA5), \(T_{0m1}=\frac{\int_{f0}^{f1}f^{-1}S(f)df}{m_{0}}\). The frequency range \([f0,f1]\) was [0.044,0.503] Hz for Medusa-766 and [0.034,0.548] Hz for ERA5.
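These integrated statistics translate directly into code; a minimal sketch, assuming `numpy` arrays `f` and `S` as in the previous block:

```python
import numpy as np

def wave_stats(f, S):
    """Integrated wave statistics of a frequency PSD S(f) on frequency grid f."""
    m0 = np.trapz(S, f)                    # zeroth spectral moment
    hm0 = 4.0 * np.sqrt(m0)                # significant wave height, Hm0
    tp = 1.0 / f[np.argmax(S)]             # peak period Tp
    t0m1 = np.trapz(S / f, f) / m0         # -1 moment (energy mean) period T0m1
    return hm0, tp, t0m1
```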
To analyse how much the wave energy was attenuated relative to the incoming waves, we used the well-known form of wave attenuation by ice,
\[S_{ice}(f)=S_{in}(f)e^{-\alpha x}, \tag{1}\]
where \(S_{ice}\) and \(S_{in}\) are the attenuated and incoming PSDs, and \(x\) is the distance between the two points. The wave attenuation is frequency dependent via
\[\alpha=af^{n}, \tag{2}\]
which was first observed by Wadhams (1975). The constant \(a\) and the exponent \(n\) are understood to vary depending on ice types (Meylan et al., 2018; Voermans et al., 2021).
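In code, Equations 1 and 2 amount to a one-line transformation of the incoming PSD; the fitting exercise in Section 3 builds on this sketch:

```python
import numpy as np

def attenuate(f, S_in, x, a, n):
    """Attenuate an incoming PSD S_in over distance x (m) with alpha = a * f**n,
    implementing Equations 1 and 2."""
    return S_in * np.exp(-a * f ** n * x)
```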
## 3 Results: ocean wave signal over 1,000 km into ice cover
Medusa-766 was deployed at 68.54\({}^{\circ}\) S, 38.29\({}^{\circ}\) E on 4 Feb 2022. It was placed on an ice floe roughly 20 m wide by a crane from _Shirase_ (see Figure 1). The last contact from Medusa-766 was on 3 Jan 2023. The total measurement duration was approximately 333 days, and the rates of data acquisition and transmission success for both the GNSS positions and wave spectra were ~95 % (details of the Iridium transmission results are provided in Appendix A). It recorded over 10,000 GNSS positions and 4,000 wave spectra, and it drifted a total estimated distance of over 5,000 km. The Medusa-766 trajectory is plotted in Figure 2 with the ADS-AMSR2 SIC fields on 1 Feb and 1 Aug 2022. The figure shows the vast scale of seasonal sea ice cover variability in Antarctica during 2022. Here, we present the wave signal measured in the winter Antarctica pack ice when Medusa-766 was located over 1,000 km from the ice edge.
By 10 Jul 2022, the ice cover between the Weddell Sea and LHB (60\({}^{\circ}\) W and 40\({}^{\circ}\) E) had advanced to around 60\({}^{\circ}\) S. This meant that Medusa-766 was deep in the pack ice, with the closest ice edge over 1,000 km away. Because Medusa-766 has a low spectral noise floor of 60 \(\mu g/\sqrt{\mathrm{Hz}}\) (STMicroelectronics, 2020), which has been achieved in our observations, it is in theory capable of measuring centimetre order wave heights. During July and August, there were several events in which we could identify an ocean wave signal in the wave data based on visual inspection of the spectra, in which the measured signal appears sufficiently higher than the noise level. We counted that Medusa-766 detected an ocean wave signal deep in the pack ice four times in July and twice in August. The wave height and period time series measured by Medusa-766 during July 2022 are presented in Figure 3, which shows that the \(H_{m0}\) values reached up to 10 cm. We focus on the event on 20-21 July. Leading up to this event, energetic waves were generated in the ice-free Southern Ocean off the Weddell Sea as an extratropical cyclone developed. This system appears to have developed near 47\({}^{\circ}\) S, 40\({}^{\circ}\) W at 00:00 on 18 Jul 2022 and then migrated south to 55\({}^{\circ}\) S by 19 July with a minimum MSLP of 960 hPa. The system weakened to around 975 hPa as it migrated east to 30\({}^{\circ}\) W, and this was inferred as the probable synoptic condition that generated the waves reaching Medusa-766. Figure 4 depicts the described synoptic conditions on 19 and 20 Jul 2022. The wave field on 20 Jul 2022 is shown in Figure 5a, which indicates that the primary wave vectors at the ice edge are directed towards the pack ice zone. The spectral evolution during the event is shown as a waterfall plot in Figure 5b.
To investigate whether Medusa-766 measured signals were the Southern Ocean waves that propagated into the ice field, we fitted the attenuation of an incoming wave spectrum to that of the observed Medusa-766 spectrum on 06:00 21 Jul 2022, which was the peak of the event (see Figure 5b). The ice edge incoming wave spectrum was obtained from ERA5 at 59\({}^{\circ}\) S, 22\({}^{\circ}\) W in the form of directional spectra (see the black marker in Figure 5a), and the distance between the incoming wave spectrum position and the Medusa-766 (68.2\({}^{\circ}\) S, 7.2\({}^{\circ}\) W) was approximately 1,250 km. At the
incoming wave location, the \(H_{m0}\) peaked at around 4 m with a \(T_{p}\) of 14.9 s at UTC 09:00 on 20 Jul 2022. We note that, at the linear theory group speed, this wave system takes around 23 hours to travel 1,250 km. The directional spectrum revealed that there were two energy systems: one in the northwest and the other in the northeast sectors. The local wind vectors were directed from the northeast, so we can assume the northeast waves were wind seas while the northwest waves were swell, likely generated by the extratropical cyclone. The PSDs obtained by integrating the directional spectrum over all directions and discretised to the northwest sector are shown in the top panel of Figure 6. The primary wave system was indeed the swell energy propagating from the northwest sector towards the Medusa-766 position, and the directionally discretised northwest sector PSD was used as \(S_{in}\), with \(H_{m0}\) and \(T_{p}\) of 3.25 m and 14.86 s.
The goal of this exercise is to show that the signal observed by Medusa-766 can be explained by attenuating the incoming wave spectrum using the well-known wave damping law (Equations 1 and 2). It is debatable whether applying this attenuation form to describe the significantly attenuated signal over such a long distance is valid; indeed, we show here that many combinations of \(a\) and \(n\) may reproduce the observed wave statistics reasonably well. We first refer to table 1 of Thomson et al. (2021) for typical values of the constant \(a\) in \(\alpha=af^{n}\), which ranged over (0.005,0.260). We selected four \(a\) values, 0.005, 0.010, 0.020, and 0.025, and tuned the exponent \(n\) to attenuate the \(S_{in}\) so that the observed Medusa-766 \(H_{m0}\) was matched. The \(n\) exponent values, tuned to two decimal places, were 2.41, 2.65, 2.89, and 2.97, respectively; \(n\) is understood to range between 2 and 4 in the existing field studies (Meylan et al., 2018; Thomson et al., 2021; Waseda et al., 2022) with an attenuation distance scale of \(O(100)\) km. The Medusa-766 PSD and the attenuated PSDs \(S_{in}0-3\) are shown in the bottom panel of Figure 6. A summary of the wave heights and periods, and the attenuation coefficients, is given in Table 1. The same exercise could be repeated to match the observed Medusa-766 \(T_{0m1}\); however, because many combinations of \(a\) and \(n\) exist, we are unable to attribute a physical meaning to the coefficients, e.g., inferring the mechanism by which the wave energy is attenuated. Measuring unique attenuation coefficients is a motivation for future observations. Nevertheless, the similarity between the attenuated PSDs and the Medusa-766 observed PSD supports that the captured signal was ocean waves, regardless of the physical meaning of the attenuation coefficients.
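A sketch of this tuning step, reusing `wave_stats` and `attenuate` from Section 2; the distance (~1,250 km) and the observed \(H_{m0}\) (~7.5 cm) are taken from the text, `S_in` stands for the northwest-sector incoming PSD (e.g., `S_nw` from the earlier sketch), and the root-search bracket is an assumption:

```python
from scipy.optimize import brentq

def tune_n(f, S_in, a, x, hm0_obs, lo=1.0, hi=5.0):
    """Find the exponent n so that the attenuated Hm0 matches the observed value."""
    mismatch = lambda n: wave_stats(f, attenuate(f, S_in, x, a, n))[0] - hm0_obs
    return brentq(mismatch, lo, hi)

# One tuned exponent per selected coefficient a, as in Table 1:
# n_fit = [tune_n(f, S_in, a, 1.25e6, 0.075) for a in (0.005, 0.01, 0.02, 0.025)]
```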
We corroborate the 20-21 Jul swell event results by comparing the Medusa-766 spectrum with the noise floor achieved in the field (by Medusa-766 itself as well as the Arctic Ocean observation (Nose et al., 2023)) and the catalogue specification in Figure 7. The daily averaged spectra for 14 Jul 2022 for Medusa-766 and 24 Oct 2021 for the Nose et al. (2023) observation were considered to be the field-achieved noise floors. The catalogue specification IMU noise is \(N_{0}=60\,\frac{\mu g}{\sqrt{\mathrm{Hz}}}\), and the displacement noise floor was estimated as \((N_{0}\times 10^{-6}\,g)^{2}(2\pi f)^{-4}\), where \(g\) is the gravitational acceleration converting \(\mu g\) to \(\mathrm{m\,s^{-2}}\). The observed swell energy is 2 orders of magnitude above the noise floors estimated from the field and the IMU specification. Figure 7 is a convincing demonstration of the sensor sensitivity, and shows that the measured swell signal was unaffected by the sensor noise. For completeness, we list the respective \(H_{m0}\) values: 0.4 cm for the catalogue specification, 0.5 cm for the Arctic Ocean noise floor, 0.8 cm for the Medusa-766 noise floor, and 7.5 cm for the 20-21 Jul event.
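The noise-floor figure can be reproduced numerically; a short sketch assuming the Medusa-766 frequency range and the conversion of \(\mu g\) to \(\mathrm{m\,s^{-2}}\) via \(g=9.81\) (this conversion is our reading of the formula above, needed to recover the quoted 0.4 cm):

```python
import numpy as np

g = 9.81                                   # m s^-2, converts micro-g to m s^-2
N0 = 60.0 * 1e-6 * g                       # IMU noise density in m s^-2/sqrt(Hz)
f = np.linspace(0.044, 0.503, 400)         # Medusa-766 frequency range, Hz

S_noise = N0 ** 2 * (2 * np.pi * f) ** -4  # displacement PSD of the noise floor
hm0_noise = 4.0 * np.sqrt(np.trapz(S_noise, f))
print(f"{100 * hm0_noise:.2f} cm")         # ~0.4 cm, the catalogue-spec figure
```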
## 4 Discussion
### Distance scale of wave-induced sea ice breakup
With a view to gaining insights into the wave-induced ice breakup potential, the ice breakup index known as \(I_{br}\) was calculated along the propagation distance based on the attenuation coefficients \(a=0.01\) and \(n=2.65\) in Equations 1 and 2. Following Voermans et al. (2020), a monochromatic wave breaks the ice when the largest stress imposed by the wave on an elastic ice sheet exceeds the flexural strength \(\sigma_{flx}\) of the ice sheet: \(\left(\frac{2\pi^{2}Ah_{i}}{\lambda_{mono}^{2}}\right)Y>\sigma_{flx}\) (e.g., Dumont, Kohout, and Bertino (2011)). This yields an ice breakup index for monochromatic waves,
\[I_{br}^{(mono)}=\left(\frac{2\pi^{2}Ah_{i}}{\lambda_{mono}^{2}}\right)\left( \frac{Y}{\sigma_{flx}}\right), \tag{3}\]
in which the ice breaks up when \(I_{br}^{(mono)}>1\). Here, \(\sigma_{flx}\) is the ice flexural strength, \(Y\) is the Young's modulus, \(A\) is the monochromatic wave amplitude, \(\lambda_{mono}\) is the wavelength, and \(h_{i}\) is the ice thickness. Boutin et al. (2018); Voermans et al. (2020) further extended this breakup index to incorporate ice breakups in a random wave field; the monochromatic wave amplitude and wavelength were replaced by the significant wave height \(H_{m0}\) and peak wavelength \(\lambda_{p}\). A coefficient was introduced to consider the ultimate limit state within a given time period as follows:
\[\frac{A}{\lambda_{mono}^{2}}\approx\frac{c_{1}H_{m0}/2}{\lambda_{p}^{2}}, \tag{4}\]
where \(c_{1}\frac{H_{m0}}{2}/\lambda_{p}\) may be considered as an approximation of the steepest expected wave in a given sea state. The breakup index for a random wave field is, then, derived as
\[I_{br}^{(rand)}=\left(\frac{2\pi^{2}c_{1}\frac{H_{m0}}{2}h_{i}}{\lambda_{p}^{2 }}\right)\frac{Y}{\sigma_{flx}}. \tag{5}\]
Voermans et al. (2020) omitted the constants (\(c_{1}\times 2\pi^{2}\)) from the breakup index and found observational evidence for a threshold value of 0.014, adopting \(c_{1}=3.6\) (proposed by Boutin et al. (2018) for a stationary wave field of 500 waves), which is equivalent to \(I_{br}^{(rand)}=1\). The value of the coefficient \(c_{1}\), however, remains debatable due to uncertainties associated with the flexural strength \(\sigma_{flx}\) and the Young's modulus \(Y\) (Timco and Weeks, 2010; Karulina et al., 2019). Moreover, the underlying assumption of brittle fracture without plastic deformation remains to be tested. For now, we ignore the plastic deformation regime following Voermans et al. (2020), and the \(I_{br}^{(rand)}\) index was used to quantify the spatial scale of wave-induced ice breakup potential from observations and models.
The swell part of a spectrum has a coarse frequency resolution, which can cause discontinuities in \(\lambda_{p}\) along the wave propagation distance (and in turn in \(I_{br}^{(rand)}\)); as such, we used the wavelength of the \(T_{0m1}\), denoted as \(\lambda_{-1}\), instead of \(\lambda_{p}\) in Equation 5. The \(I_{br}^{(rand)}\) threshold is unchanged because the \(c_{1}\) coefficient in Equation 4 becomes 3.42 adopting \(\lambda_{-1}\approx 0.95\lambda_{p}\) (an applicable approximation for swell spectra, as shown in the work of Ahn (2021)). The \(-1\) moment mean wavelength \(\lambda_{-1}\) is estimated from the
linear dispersion relation \(\frac{g}{2\pi}T_{0m1}^{2}\) following Voermans et al. (2020). Since the ice properties were not measured, we take \(\sigma_{flx}\) and \(Y\) from Voermans et al. (2020) as well: \(\sigma_{flx}\in[0.1,0.7]\) MPa and \(Y\in[1,6]\) GPa, with most probable values of \(\sigma_{flx}=0.4\) MPa and \(Y=3\) GPa.
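Combining the pieces above, the \(I_{br}^{(rand)}\) profile of Figure 8 can be sketched as follows (with the constants \(c_{1}\times 2\pi^{2}\) omitted, so that breakup is indicated above 0.014); the helper functions and `S_in` are those assumed in the earlier sketches, and the ice properties are the most probable values quoted above:

```python
import numpy as np

a, n = 0.01, 2.65                          # tuned attenuation coefficients
h_i, sigma_flx, Y = 1.0, 0.4e6, 3.0e9      # ice thickness (m) and properties (Pa)

def breakup_index(f, S):
    """I_br^(rand) with constants omitted; breakup is indicated above 0.014."""
    hm0, _, t0m1 = wave_stats(f, S)
    lam_m1 = 9.81 * t0m1 ** 2 / (2 * np.pi)   # wavelength of T0m1, deep water
    return (hm0 / 2.0) * h_i / lam_m1 ** 2 * (Y / sigma_flx)

x = np.linspace(0.0, 1.0e6, 200)              # distance into the ice cover, m
I_br = np.array([breakup_index(f, attenuate(f, S_in, xi, a, n)) for xi in x])
```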
Assuming an ice thickness of 1 m and the probable ice properties (\(\sigma_{flx}=0.4\) MPa and \(Y=3\) GPa), the wave-induced ice breakup threshold of 0.014 was exceeded up to around 400 km into the ice cover (see the bottom panel of Figure 8). This spatial scale is difficult to constrain because considerable uncertainty arises from the lack of in situ sea ice mechanical properties \(\sigma_{flx}\) and \(Y\). As a demonstration, Figure 8 presents the attenuated wave estimates as the waves travel into the sea ice and the wave-induced ice breakup index \(I_{br}^{(rand)}\) with a conservative uncertainty: the upper bound using \(\sigma_{flx}=0.7\) MPa and \(Y=1\) GPa remains above the wave-induced ice breakup threshold up to 1,000 km into the ice cover.
The sea ice mechanical properties used in the \(I_{br}^{(rand)}\) parameter are a considerable error source. In light of the analysis here, it is clear that more observations will improve our understanding of the ice breakup physics. To this end, a new approach to measuring ice properties and the precise timing of ice breakups using geophones is emerging (Moreau, Weiss, and Marsan 2020; Voermans et al. 2023); these observations are promising for future ice breakup studies.
### Waves in ice measurements in Lützow-Holm Bay: the JARE perspective
From the winter Antarctica ice cover wave observation, we showed that the ocean wave signal over 1,000 km into the pack ice likely originated from an extratropical cyclone in the Southern Ocean. This evidence seems robust; however, the analysis that followed needs to be tempered by the obvious fact that the interpretation was made from single buoy measurements. For example, it is unlikely that the attenuation coefficients remain constant over the long propagation distance, because we inherently assume the ice type remains the same too. Perhaps this is the reason that many combinations of the attenuation coefficients can achieve a tolerable fit to the observed \(H_{m0}\). The wave-induced ice breakup potential for 1 m ice thickness could extend to 400 km from the ice edge, but if the ice breaks up, the wave attenuation characteristics are modified. The attenuation rate could be changing in time and space. Our lack of knowledge about the typical floe size along the propagation distance is another unknown. Lastly, many studies show that the swell dispersion relation is practically unchanged between open water and a typical ice field (e.g., figure 1 of Boutin et al. (2018)). However, there is a conspicuous observation of \(\lambda\) shortening in Liu and Mollo-Christensen (1988), who observed that the \(\lambda\) of 18 s waves was around 250 m in the Antarctica pack ice, whereas the linear dispersion relation yields a \(\lambda\) of over 600 m. Notwithstanding these limitations, the Medusa-766 observation shows that waves likely propagate a strikingly long distance from the ice-free Southern Ocean into the ice field.
With a view to considering the long distance propagation of swell and the sea ice breakup potential in the context of LHB waves, we now discuss the wave data measured while Medusa-766 was located in LHB. LHB is exposed to the Southern Ocean, which has vast fetches and an abundance of extratropical cyclonic activity. As such, we expected frequent wave propagation to the buoy from Southern Ocean waves, at least between February and April when the SIC extent is low. However, this was not the case. When Medusa-766 was located in LHB, the measured \(H_{m0}\) exceeded 0.5 m only three times
in February, and not at all in March and April. Then, from about 14 Apr, Medusa-766 began to drift westward and out of LHB; i.e., the ice floe on which Medusa-766 was deployed somehow drifted out of the bay. We examine two synoptic events, on 1 and 11 Apr 2022, before Medusa-766 drifted out of LHB. These events generated waves towards Medusa-766 with \(H_{m0}\) greater than 5 m and \(T_{p}\) around 10-12 s at the LHB ice edge. The wave height and period time series at Medusa-766 between 1 and 17 Apr are shown in Figure 9.
Snapshots of the wave fields of the two synoptic events are presented in Figure 10. The black markers, Pt1 and Pt2, indicate the estimated positions of the incoming waves. The first event was a synoptic scale low pressure system located hundreds of kilometres offshore of LHB, which generated northeast waves propagating towards Medusa-766 (see the top panel of Figure 10). At this time, Medusa-766 drifted towards the ice field, then remained more or less stationary for several days. The peak of the wave event at Pt1 near the ice edge occurred at UTC 21:00 on 31 Mar with \(H_{m0}\) and \(T_{p}\) of 5.3 m and 11.2 s. The distance between Pt1 and Medusa-766 was only around 90 km, but the measured \(H_{m0}\) at Medusa-766 was less than 0.1 m. The second event occurred via the combined effect of a low pressure system west of the Medusa-766 position and a high pressure system offshore of LHB; this generated waves travelling predominantly from the west northwest, as shown in the bottom panel of Figure 10. The peak incoming wave energy as inferred from ERA5 was 5.1 m \(H_{m0}\) and 11.2 s \(T_{p}\). Pt2 was located around 200 km away from Medusa-766 and was more protected from the incoming waves by the ice field compared to the first event, at least as depicted by the ADS-AMSR2 SIC fields. Despite this, the \(H_{m0}\) measured by Medusa-766 during this event was around 0.25 m, which was larger than in the first event, but still significantly attenuated. While the incoming wave conditions were not so dissimilar to the winter wave propagation described in the results and the Higashi et al. (1982) event, these waves seemingly did not cause a regional scale fast ice breakup.
We note that Medusa-766 drifted during the 11 Apr event. Three days later, on 14 Apr, Medusa-766 began to flow out of LHB and drifted westward. Whether ocean waves played any part in triggering the outflow of ice floes at these times is unknown, largely because the analysis and discussion in this study are based on single buoy measurements. Building on the success of Medusa-766, 23 Medusa-OMBs were deployed in LHB (15 on fast ice and 8 on drift ice floes) during the JARE64 campaign.
## 5 Conclusions
Medusa-766 demonstrated its durability and robustness by surviving 333 days in the extreme Antarctica environment. Further, its ability to detect centimetre order swell signals demonstrated the high sensitivity of the sensor used. These led to a striking illustration of the long distance propagation of swell into sea ice that, to the authors' knowledge, has not been previously observed, and showed the potential of IMU-based wave measurement in sea ice. The Medusa-766 observation provided insights into future JARE wave-ice interaction observation strategies as we aim to elucidate the relative contribution of ocean waves to the unstable LHB fast ice.
## Acknowledgement
We are grateful to the crew and expedition members onboard the icebreaker _Shirase_ for their cooperative support in conducting on-deck operations during our JARE63 wave-ice buoy deployment.
## Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
## Funding
This work was a part of the Arctic Challenge for Sustainability II (ArCS II) Project (Program Grant Number JPMXD1420318865).
A part of this study was also conducted under JSPS KAKENHI Grant Numbers JP 19H00801, 19H05512, 21K14357, and 22H00241.
## Table

Table 1: Summary of the wave heights and periods, and the attenuation coefficients \(a\) and \(n\), for the Medusa-766 PSD and the attenuated PSDs \(S_{in}0\)–3 (values quoted in Section 3).
Figure 2: Medusa-766 trajectory between 4 Feb 2022 and 3 Jan 2023 overlaid on the ADS-AMSR2 SIC on 1 Feb (top) and 1 Aug (bottom) 2022. The brown line is the trajectory, and the brown marker shows its location on the respective dates. SICs are shown in colours. The 1 Feb panel also has Sentinel-1 SAR images overlaid.
Figure 3: Medusa-766 wave height and period time series for July 2022. Medusa-766 was located deep in the Antarctica winter pack ice where the shortest distance to the ice-free Southern Ocean was over 1,000 km.
Figure 4: Synoptic conditions and the Medusa-766 positions for 19 Jul and 20 Jul 2022 (top and bottom, respectively) leading up to the wave event at the Medusa-766 position on 21 Jul. The brown line is the Medusa-766 trajectory, and the brown marker shows its location on the respective dates. The black marker is the location where the ERA5 directional spectra were obtained. ERA5 MSLP is shown in yellow contours, and the red vectors are the ERA5 10 m winds. The 0.15 and 0.80 ADS-AMSR2 SIC contours are also shown as black dashed and solid lines, respectively.
Figure 5: The wave field and Medusa-766 measured spectra on 20–21 Jul 2022.
Figure 6: The ERA5 incoming wave spectrum at the ice edge is shown in the top panel. The black solid line is the PSD obtained by integration over all directions, whereas the blue line with markers integrates over the northwest sector only. The ERA5 incoming wave spectrum \(S_{in}\) was attenuated using various combinations of \(a\) and \(n\) that matched the Medusa-766 \(H_{m0}\), shown in the bottom panel. A summary of the wave heights and periods from these PSDs is provided in Table 1.
Figure 7: The OMB specification spectral noise floor and the field-achieved spectral noise floors from Medusa-766 and the Arctic Ocean observation (Nose et al. 2023) are plotted with the Medusa-766 swell signal spectrum on 06:00 21 Jul 2022. The field-achieved noise floors were obtained by averaging the spectra of 14 Jul 2022 and 24 Oct 2021 for Medusa-766 and the Arctic Ocean observation, respectively.
Figure 8: The attenuated wave height using Equation 1 (top) is plotted with the wave-induced ice breakup index, \(I_{br}^{(rand)}\), using Equation 5 (bottom) along the wave propagation distance between 400 and 1,000 km from the ice edge. The uncertainty was calculated using \(\sigma_{flx}=0.1\) MPa & \(Y=6\) GPa for the lower bound and \(\sigma_{flx}=0.7\) MPa & \(Y=1\) GPa for the upper bound.
Figure 9: Medusa-766 wave height and period time series for 1–17 Apr 2022 while it was located in LHB. After 17 Apr, Medusa-766 drifted out of LHB.
Figure 10: ERA5 wave field snapshots of two synoptic events that generated waves towards Medusa-766 near LHB are shown here. 1 Apr (top) was the northeast wave event, when the Medusa-766 measured \(H_{m0}\) was less than 0.1 m. 11 Apr (bottom) was the west northwest event, when the Medusa-766 measured \(H_{m0}\) was around 0.25 m. The brown line is the Medusa-766 trajectory, and the brown marker shows its location on the respective dates. The black markers show where the ERA5 directional spectra were obtained. The \(H_{m0}\) is shown in colours, and the wave vectors are the mean wave direction scaled by \(T_{p}\). The 0.15 and 0.80 ADS-AMSR2 SIC contours are also shown as grey dashed and solid lines, respectively.
2309.03507 | Quantum retrodiction in Gaussian systems and applications in optomechanics | What knowledge can be obtained from the record of a continuous measurement about the quantum state the measured system was in at the beginning of the measurement? The task of quantum state retrodiction, the inverse of the more common state prediction, is rigorously and elegantly addressed in quantum measurement theory through retrodictive Positive Operator Valued Measures. This article provides an introduction to this general framework, presents its practical formulation for retrodicting Gaussian quantum states using continuous-time homodyne measurements, and applies it to optomechanical systems. We identify and characterise achievable retrodictive POVMs in common optomechanical operating modes with resonant or off-resonant driving fields and specific choices of local oscillator frequencies in homodyne detection. In particular, we demonstrate the possibility of a near-ideal measurement of the quadrature of the mechanical oscillator, giving direct access to the position or momentum distribution of the oscillator at a given time. This forms the basis for complete quantum state tomography, albeit in a destructive manner. | Jonas Lammers, Klemens Hammerer | 2023-09-07T06:36:11Z | http://arxiv.org/abs/2309.03507v1 | # Quantum retrodiction in Gaussian systems and applications in optomechanics
###### Abstract
What knowledge can be obtained from the record of a continuous measurement about the quantum state the measured system was in at the beginning of the measurement? The task of quantum state retrodiction, the inverse of the more common state prediction, is rigorously and elegantly addressed in quantum measurement theory through retrodictive Positive Operator Valued Measures. This article provides an introduction to this general framework, presents its practical formulation for retrodicting Gaussian quantum states using continuous-time homodyne measurements, and applies it to optomechanical systems. We identify and characterise achievable retrodictive POVMs in common optomechanical operating modes with resonant or off-resonant driving fields and specific choices of local oscillator frequencies in homodyne detection. In particular, we demonstrate the possibility of a near-ideal measurement of the quadrature of the mechanical oscillator, giving direct access to the position or momentum distribution of the oscillator at a given time. This forms the basis for complete quantum state tomography, albeit in a destructive manner.
## I Introduction
Continuous measurements [1; 2; 3] are a powerful tool for the preparation and control of quantum states in open systems and as such are of great importance for studies of fundamental physics and applications in quantum technology. Based on a continuous measurement record, it is possible to track the quantum trajectory of a system in its Hilbert space in real time, as demonstrated in circuit QED systems [4; 5], atomic ensembles [6; 7], and in optomechanics [8] with micromechanical oscillators [9; 10; 11; 12; 13] and levitated nanoparticles [14; 15; 16]. Determining the conditional quantum state formally requires solving the stochastic Schrödinger or master equation [1; 2; 3], which is a daunting task in general. For the important case of linear quantum systems, which includes most applications in optomechanics and atomic ensembles, the integration of the Schrödinger equation simplifies greatly and turns out to be equivalent to classical Kalman filtering [17]. For this reason, these well-established and powerful tools of classical estimation and control theory are increasingly finding application in quantum science [18] and are becoming a well-accepted technique for preparing quantum states.
Like any measurement in quantum mechanics, continuous measurements not only determine the state of the system post measurement, but also provide information about its initial state prior to the measurement. The dual use of continuous measurements for predictive preparation and retrospective analysis of quantum states, cf. Fig. 1, as well as their combination in what is referred to as quantum state smoothing, has received considerable attention in the theoretical literature; for a review see [19]. Retrospective state analysis and smoothing have been investigated experimentally in cavity- and circuit QED [20; 21; 22; 23; 24], atomic ensembles [25; 26], and optomechanics [11; 12; 27]. However, compared to quantum state preparation by filtering, the applications of these concepts for state readout seem to be less known, although they represent powerful tools for quantum state verification and tomography.
Here we aim to give a self-contained and accessible introduction to the theory of quantum state retrodiction based on continuous measurements and its formulation for linear quantum systems. The main equations of this theory have been derived before in the context of quantum state smoothing in [28; 29; 30]. We focus our presentation here on the aspect of state retrodiction and aim to provide operational recipes for this. The general formalism is applied to optomechanical systems, for which we identify and characterize the retrodictive measurements achievable there in terms of their Positive Operator Valued Measures (POVMs). In particular, we consider the common regimes of driving the optomechanical cavity on
Figure 1: Schematic of a continuously monitored quantum system: The output field of a quantum system is combined with a strong local oscillator (LO) to perform homodyne detection from time \(t_{0}\) to \(t_{1}\), producing some measurement record \(\mathcal{Y}\). Starting from a known initial state \(\rho(t_{0})\), this record can be used, by integrating a stochastic master equation, Eq. (3), to predict the system state \(\rho_{\mathcal{Y}}(t_{1})\) conditioned on the record \(\mathcal{Y}\). Alternatively, the measurement record can be used to retrodict an effect operator \(\hat{E}_{\mathcal{Y}}(t_{0})\), cf. Eq. (36), that characterizes an effective POVM measurement on the initial state \(\rho(t_{0})\).
resonance or on its red or blue mechanical sidebands and discuss the role of the local oscillator frequency in homodyne detection. In each case we determine the realized POVM and compare to what is achieved in state filtering in the same configuration. As a main finding, we show that red-detuned driving in the resolved-sideband limit allows for an almost perfect quadrature measurement, which is back-action free but completely destructive. Our treatment accounts for imperfections due to thermal noise and detection inefficiencies, and studies requirements on the quantum cooperativity for performing efficient state readout. In particular, we determine the concrete filter functions that are necessary for the post-processing of the photocurrent in order to realize certain POVMs.
The article is organized as follows: In Sec. II we recapitulate the description of conditional state preparation through continuous measurement based on stochastic master equations and the equivalent Kalman filter, emphasizing the operational interpretation of the central formulas. In close analogy we introduce in Sec. III the formalism of retrodictive POVMs and its application to linear quantum systems, where the POVM consists of Gaussian effect operators conveniently characterized by their first and second moments. In Sec. IV we illustrate the application of this formalism to the simple case of a decaying cavity. Finally, in Sec. V we provide a rather detailed modelling of an optomechanical system and derive the retrodictive POVMs in various parameter regimes.
## II Conditional state preparation through continuous measurements
### Conditional Master Equation
To set the scene and introduce some notation, we start with an overview of the concept of conditional (stochastic) master equations, referring to [1; 2] for detailed derivations. These describe the evolution of continuously monitored quantum systems, and are used to prepare _conditional_ (or _filtered_) quantum states.
We consider an open quantum system governed by Hamiltonian \(\hat{H}\) and coupled to a Markovian bath via jump operator \(\hat{L}\). This gives rise to a quantum master equation [1; 2] for the system's density operator \(\rho(t)\),
\[\mathrm{d}\rho(t)=-i[\hat{H},\rho(t)]\mathrm{d}t+\mathcal{D}[\hat{L}]\rho(t) \mathrm{d}t, \tag{1}\]
with the usual Lindblad superoperator \(\mathcal{D}[\hat{L}]\rho=\hat{L}\rho\hat{L}^{\dagger}-(\hat{L}^{\dagger}\hat{ L}\rho+\rho\hat{L}^{\dagger}\hat{L})/2\). The generalization to multiple jump operators is straightforward. We will designate all operators (except density operators) by caret superscripts. The increment \(\mathrm{d}\rho(t):=\rho(t+\mathrm{d}t)-\rho(t)\) propagates the state by an infinitesimal amount forward in time. Integrating this equation of motion yields a trace-preserving completely-positive map \(\mathcal{N}_{t_{0},t}\) which takes an initial state \(\rho(t_{0})\) to \(\rho(t)=\mathcal{N}_{t_{0},t}[\rho(t_{0})]\) [31].
Further information about the state can be gained by monitoring the bath to which the system is coupled [1; 2; 3]. In that case conditioning the state on the knowledge gained from these indirect measurements is known as _filtering_[32]. We only consider the case of homodyne (and later heterodyne) measurements, as we are ultimately interested in _linear_ dynamics. Other measurement schemes, such as photon counting, would take the conditional dynamics out of this regime. A continuous homodyne detection of the outgoing mode, as sketched in Fig. 1, yields a stochastic photocurrent \(I(t)\). This can be normalized, \(Y(t):=I(t)/\alpha\), with some \(\alpha\in\mathbb{R}\) so that for vacuum input its increment \(\delta Y(t)=Y(t+\delta t)-Y(t)\) has the variance of white noise, \(\overline{\delta Y(t)^{2}}=\overline{\delta I(t)^{2}}/\alpha^{2}\equiv\delta t\) where the bar denotes an ensemble average [33]. The measured signal can be decomposed into a deterministic and stochastic part as
\[\textbf{(I)}\ \mathrm{d}Y(t)=\langle\hat{C}+\hat{C}^{\dagger}\rangle_{\rho(t)} \mathrm{d}t+\mathrm{d}W(t). \tag{2}\]
Here, \(\hat{C}=\sqrt{\eta}\mathrm{e}^{\mathrm{i}\theta}\hat{L}\) denotes the measurement operator, which includes imperfect detection efficiency \(\eta\in[0,1]\) and the local oscillator phase \(\theta\). Angled brackets denote an expectation value, \(\langle\hat{C}\rangle_{\rho}:=\mathrm{Tr}\{\hat{C}\rho\}\), and \(\mathrm{d}W\) is a stochastic Wiener increment satisfying the Ito relation \((\mathrm{d}W)^{2}=\mathrm{d}t\). Equation (2) is a stochastic Ito equation [34; 35] denoted by the **(I)** in front. Depending on the measurement results, the system satisfies the conditional master equation
\[\textbf{(I)}\ \mathrm{d}\rho(t)=-i[\hat{H},\rho(t)]\mathrm{d}t+ \mathcal{D}[\hat{L}]\rho(t)\mathrm{d}t \tag{3}\] \[\qquad\qquad\qquad+\mathcal{H}[\hat{C}]\rho(t)\mathrm{d}W(t),\]
with superoperator \(\mathcal{H}[\hat{C}]\rho:=(\hat{C}-\langle\hat{C}\rangle_{\rho})\rho+\rho( \hat{C}^{\dagger}-\langle\hat{C}^{\dagger}\rangle_{\rho})\). Assume the system has evolved from \(t_{0}\) to \(t_{1}\) and produced some measurement record \(\mathcal{Y}=\{Y(s),t_{0}\leq s<t_{1}\}\), as depicted in Fig. 1. By integrating the master equation from \(t_{0}\) to \(t_{1}\) we obtain a conditional (or _filtered_) state \(\rho_{\mathcal{Y}}(t_{0})=\mathcal{N}_{t_{0},t_{1}|\mathcal{Y}}[\rho(t_{0})]\) dependent on the initial state \(\rho(t_{0})\) and conditioned on the record \(\mathcal{Y}\).
The conditional master equation Eq. (3) can be generalized to \(N_{L}\) Markovian baths and \(N_{C}\) monitored channels,
\[\textbf{(I)}\ \mathrm{d}\rho(t)=-i[\hat{H},\rho(t)]\mathrm{d}t+\sum_{j=1 }^{N_{L}}\mathcal{D}[\hat{L}_{j}]\rho(t)\mathrm{d}t \tag{4}\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\sum_{k=1}^{N_{C}}\mathcal{H }[\hat{C}_{k}]\rho(t)\mathrm{d}W_{k}(t).\]
The measurement operators \(\hat{C}_{k}\) do not necessarily correspond one-to-one to the jump operators \(\hat{L}_{j}\) as before, and we will see an example in Sec. V where effectively \(N_{C}>N_{L}\). However, since any information recorded by the observer must have previously leaked from the system it holds that \(\sum_{j}\hat{L}_{j}^{\dagger}\hat{L}_{j}-\sum_{k}\hat{C}_{k}^{\dagger}\hat{C}_ {k}\geq 0\). The \(\mathrm{d}W_{j}\) are mutually independent Wiener increments satisfying the Ito relation
\[\mathrm{d}W_{j}(t)\mathrm{d}W_{k}(t)=\delta_{jk}\mathrm{d}t, \tag{5}\]
and each \(\mathrm{d}W_{j}\) is related to a corresponding homodyne measurement increment \(\mathrm{d}Y_{j}\) as
\[\textbf{(I)}\ \mathrm{d}Y_{j}(t)=\langle\hat{C}_{j}+\hat{C}_{j}^{\dagger} \rangle_{\rho(t)}\mathrm{d}t+\mathrm{d}W_{j}(t). \tag{6}\]
For details and derivations of this general formalism for describing quantum dynamics conditioned on continuous homodyne detection, we refer once more to [1, 2].
### Linear Dynamics
#### ii.2.1 Linear systems
We now apply these concepts to linear systems with Gaussian states governed by the general master equation Eq. (4). We consider a bosonic quantum system with \(M\) modes and \(2M\) associated canonical operators \(\hat{r}_{j}\) which we collect into a vector \(\hat{\mathbf{r}}=(\hat{r}_{j})_{j=1,\dots,2M}\). The \(\hat{r}_{j}\) satisfy canonical commutation relations
\[i\sigma_{jk}:=[\hat{r}_{j},\hat{r}_{k}], \tag{7}\]
giving rise to a skew-symmetric matrix \(\sigma\in\mathbb{R}^{2M\times 2M}\). For example, the usual choice for an oscillator with \(M\) modes would be \(\hat{\mathbf{r}}=[\hat{\mathbf{x}}^{\mathrm{T}},\hat{\mathbf{p}}^{\mathrm{T}} ]^{\mathrm{T}}=[\hat{x}_{1},\dots,\hat{x}_{M},\hat{p}_{1},\dots,\hat{p}_{M}]^{ \mathrm{T}}\), which entails
\[\sigma=\begin{bmatrix}\mathbb{0}_{M}&\mathbb{1}_{M}\\ -\mathbb{1}_{M}&\mathbb{0}_{M}\end{bmatrix}. \tag{8}\]
In a linear system the Hamiltonian is at most quadratic in the canonical operators while the jump and measurement operators are at most linear. \(\hat{H}\) can be expressed as
\[\hat{H}=\frac{1}{2}\hat{\mathbf{r}}^{\mathrm{T}}H\hat{\mathbf{r}}, \tag{9}\]
with a symmetric matrix \(H\in\mathbb{R}^{2M\times 2M}\). Without loss of generality, we assume that \(\hat{H}\) does not contain terms linear in \(\hat{\mathbf{r}}\)[36]. We write the \(N_{C}\) linear measurement operators as
\[\hat{\mathbf{C}}=(A+iB)\hat{\mathbf{r}}, \tag{10}\]
with \(A,B\in\mathbb{R}^{N_{C}\times 2M}\), and \(N_{L}\) jump operators as
\[\hat{\mathbf{L}} =\Lambda\hat{\mathbf{r}}, \tag{11}\] \[\Lambda^{\dagger}\Lambda =:\Delta+i\Omega, \tag{12}\]
with complex \(\Lambda\in\mathbb{C}^{N_{L}\times 2M}\) and \(\Delta,\Omega\in\mathbb{R}^{2M\times 2M}\) symmetric and skew-symmetric respectively.
#### ii.2.2 Gaussian states
A Gaussian state \(\rho\)[37, 38, 39, 40, 41] is, by definition, any state with a Gaussian phase-space distribution. Gaussian states are _fully determined_ by their first- and second-order cumulants [42], namely a vector of means
\[\mathbf{r}_{\rho}:=\langle\hat{\mathbf{r}}\rangle_{\rho}:=\mathrm{Tr}\{\hat{ \mathbf{r}}\rho\}\in\mathbb{R}^{2M} \tag{13}\]
and a symmetric covariance matrix
\[V_{jk}^{\rho}:=\langle\{\hat{r}_{j}-r_{j}^{\rho},\hat{r}_{k}-r_{k}^{\rho}\} \rangle_{\rho}\in\mathbb{R}^{2M\times 2M}. \tag{14}\]
All higher-order cumulants are identically zero, so knowing \(\mathbf{r}_{\rho}\) and \(V_{\rho}\) determines the full Wigner function of \(\rho\) and thus also \(\rho\) itself. Note that the normalization of \(V_{\rho}\) chosen in Eq. (14) means that diagonal elements correspond to twice the variance, e. g., \(V_{jj}^{\rho}=2(\langle\hat{r}_{j}^{2}\rangle-\langle\hat{r}_{j}\rangle^{2})\).
The assumption of a Gaussian initial state \(\rho(t_{0})\) is both convenient and reasonable. Since Gaussian operators have the tremendously useful property to remain Gaussian under linear dynamics they are easy to work with. Additionally, considering only Gaussian states is justified since Gaussian measurements [43, 44] and Gaussian baths [45] tend to "Gaussify" the state of the system. Mathematically this means that if we start with an arbitrary initial state \(\rho(t_{0})\), higher-order cumulants of order \(\geq 3\) are damped by the dynamics. Depending on how slowly this damping happens, if our linear system is initially prepared in a non-Gaussian state these higher orders may need to be taken into account, which we do in App. C.5. But for now we focus on the case of Gaussian initial states only.
It is known [29, 46] that a master equation for \(\rho\) can be directly translated into differential equations for the means and covariance matrix, as detailed in App. C. For a Gaussian state one finds
\[\textbf{(I)}\ \mathrm{d}\mathbf{r}_{\rho}(t)=Q\mathbf{r}_{\rho}(t)\mathrm{d}t+ \big{(}V_{\rho}(t)A^{\mathrm{T}}-\sigma B^{\mathrm{T}}\big{)}\mathrm{d} \mathbf{W}(t), \tag{15}\]
with the drift matrix
\[Q:=\sigma(H+\Omega), \tag{16}\]
comprising unitary and dissipative terms. If we reintroduce the actually measured homodyne signal
\[\textbf{(I)}\ \mathrm{d}\mathbf{Y}(t)=2A\mathbf{r}_{\rho}(t)\mathrm{d}t+ \mathrm{d}\mathbf{W}(t), \tag{17}\]
we can write
\[\text{\bf(I)}\ \mathrm{d}\mathbf{r}_{\rho}(t)=M_{\rho}(t)\mathbf{r}_{\rho}(t) \mathrm{d}t+\big{(}V_{\rho}(t)A^{\mathrm{T}}-\sigma B^{\mathrm{T}}\big{)}\mathrm{ d}\mathbf{Y}(t), \tag{18}\]
with the _conditional drift matrix_
\[M_{\rho}(t):=Q+2\sigma B^{\mathrm{T}}A-2V_{\rho}(t)A^{\mathrm{T}}A. \tag{19}\]
In Eq. (18) the measurement current \(\mathrm{d}\mathbf{Y}(t)\) enters the evolution of the conditional means only through multiplication with the measurement matrices \(A\) and \(B\). Hence, reducing the detection efficiency, which corresponds to \(A,\,B\to 0\), causes the stochastic increment to disappear, as it should. Note that the covariance matrix \(V_{\rho}(t)\) enters Eq. (18) twice, once through the drift matrix \(M_{\rho}(t)\) and once directly coupled to \(\mathrm{d}\mathbf{Y}(t)\). The latter term has the effect that a large variance, which corresponds to large uncertainty about the state, boosts the effect each bit of gathered information has on the evolution of the conditional means.
The covariance matrix satisfies the deterministic equation
\[\frac{\mathrm{d}V_{\rho}(t)}{\mathrm{d}t} =M_{\rho}(t)V_{\rho}(t)+V_{\rho}(t)M_{\rho}^{\mathrm{T}}(t) \tag{20}\] \[\quad+D+2V_{\rho}(t)A^{\mathrm{T}}AV_{\rho}(t).\]
with _diffusion matrix_
\[D:=2\sigma\big{(}\Delta-B^{\mathrm{T}}B\big{)}\sigma^{\mathrm{T}}. \tag{21}\]
The evolution of \(V_{\rho}(t)\) is independent of the means \(\mathbf{r}_{\rho}(t)\) or any other cumulants, which is a peculiarity of Gaussian dynamics. However, while it is independent of the measurement record and not a stochastic equation, it does depend on the measurement device through matrices \(A,\,B\). This is reasonable since the information gained from observations of the system conditions the state, reducing its uncertainty.
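As a minimal numerical sketch, Eq. (20) can be integrated forward until the covariance settles. The example below assumes a single thermally damped mode (a cavity of frequency \(\omega\) and linewidth \(\kappa\) coupled to a bath of occupation \(\bar{n}\), foreshadowing Sec. IV) with homodyne detection of the \(x\) quadrature at efficiency \(\eta\); all parameter values are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

omega, kappa, nbar, eta = 1.0, 0.5, 2.0, 0.8             # illustrative values
sigma = np.array([[0.0, 1.0], [-1.0, 0.0]])              # Eq. (8) for M = 1

# Thermal bath, jump operators L1 ~ a and L2 ~ a^dagger, via Eq. (12):
Delta = (kappa / 2) * (2 * nbar + 1) * np.eye(2)
Omega = (kappa / 2) * sigma
H = omega * np.eye(2)                                    # Eq. (9)

# Homodyne detection of the x quadrature of the L1 output, Eq. (10):
A = np.sqrt(eta * kappa * (nbar + 1) / 2) * np.array([[1.0, 0.0]])
B = np.sqrt(eta * kappa * (nbar + 1) / 2) * np.array([[0.0, 1.0]])

Q = sigma @ (H + Omega)                                  # drift matrix, Eq. (16)
D = 2 * sigma @ (Delta - B.T @ B) @ sigma.T              # diffusion, Eq. (21)

def dVdt(t, v):
    V = v.reshape(2, 2)
    M = Q + 2 * sigma @ B.T @ A - 2 * V @ A.T @ A        # Eq. (19)
    return (M @ V + V @ M.T + D + 2 * V @ A.T @ A @ V).ravel()   # Eq. (20)

V0 = (2 * nbar + 1) * np.eye(2)                          # start unconditioned
sol = solve_ivp(dVdt, [0.0, 100.0], V0.ravel(), rtol=1e-10)
V_inf = sol.y[:, -1].reshape(2, 2)                       # approximates V_rho^infty
```

For \(\bar{n}=0\) the conditional covariance relaxes to the vacuum value \(V=\mathbb{1}\), which provides a quick consistency check of the implementation.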
In the following we assume stable dynamics, which makes the covariance matrix collapse to some steady state matrix \(V_{\rho}(t)\to V_{\rho}^{\infty}\) asymptotically for \(t\to\infty\) from any initial \(V_{\rho}(t_{0})\). We find \(V_{\rho}^{\infty}\) by solving the Riccati equation \(\dot{V}_{\rho}=0\) which implies
\[M_{\rho}^{\infty}V_{\rho}^{\infty}+V_{\rho}^{\infty}(M_{\rho}^{\infty})^{ \mathrm{T}}=-D-2V_{\rho}^{\infty}A^{\mathrm{T}}AV_{\rho}^{\infty}, \tag{22}\]
where \(M_{\rho}^{\infty}\) is just \(M_{\rho}(t)\) with \(V_{\rho}(t)\mapsto V_{\rho}^{\infty}\). The right-hand side is negative definite and the covariance matrix is positive definite for proper quantum states, so \(M_{\rho}^{\infty}\) only has eigenvalues with negative real part. Now if the experiment has been running sufficiently long we can simply plug \(V_{\rho}^{\infty}\) and \(M_{\rho}^{\infty}\) into Eq. (18) to find the evolution of the means,
\[\begin{split}\textbf{(I)}\ \mathbf{r}_{\rho}(t)&=\mathrm{e}^{(t-t_{0})M_{\rho}^{\infty}}\mathbf{r}_{\rho}(t_{0})\\ &\quad+\int_{t_{0}}^{t}\mathrm{e}^{(t-\tau)M_{\rho}^{\infty}}\big{(}V_{\rho}^{\infty}A^{\mathrm{T}}-\sigma B^{\mathrm{T}}\big{)}\mathrm{d}\mathbf{Y}(\tau).\end{split} \tag{23}\]
Because \(M_{\rho}^{\infty}\) is stable (all eigenvalues have non-positive real parts) we see that the initial condition \(\mathbf{r}_{\rho}(t_{0})\) is damped exponentially, as is the integrand in the second line.
Here we see that the means (and thus the whole state) do not depend on the entire continuous measurement record \(\mathcal{Y}\) as such, but only on the Ito-integral in the second line of Eq. (23), which is a simple vector of \(2M\) real numbers for a system composed of \(M\) subsystems. Thus, the integral kernel \(\mathrm{e}^{M_{\rho}^{\infty}(t-\tau)}\big{(}V_{\rho}^{\infty}A^{\mathrm{T}}- \sigma B^{\mathrm{T}}\big{)}\) actually picks out a set of \(2M\) temporal modes of the monitored fields. Each of these (not necessarily orthogonal) modes of the light fields provides an estimate for one of the \(2M\) phase space variables of the system. We will elaborate on this aspect further in Sec. II.3.2. For the particular case of a freely decaying monitored cavity this fact was pointed out already by Wiseman [47], and in Sec. IV we will treat this cavity as an illustrative example of the formalism developed here, reproducing the results of [47].
### Interpretation and Discussion
#### ii.3.1 Conditional quantum states
We now want to remind the reader of how the conditional Gaussian quantum state has to be interpreted, and what its preparation via continuous measurements means from an operational perspective.
The means \(\mathbf{r}_{\rho}(t)\) and covariance matrix \(V_{\rho}(t)\) determined from Eq. (18) and Eq. (20) fully determine the density matrix for the conditional state. It is instructive to note that the Gaussian density matrix is always of the form [48; 49]
\[\rho(t)\propto\hat{D}(\mathbf{r}_{\rho}(t))\exp\Bigl{[}-\mathbf{\hat{r}}^{ \mathrm{T}}\Gamma_{\rho}(t)\mathbf{\hat{r}}\Bigr{]}\hat{D}^{\dagger}(\mathbf{ r}_{\rho}(t)). \tag{24}\]
Here \(\hat{D}(\mathbf{q})=\exp(-i\mathbf{q}\sigma\mathbf{\hat{r}})\) is a displacement operator in phase space, and the matrix \(\Gamma_{\rho}(t)\) is a simple functional of the covariance matrix [50]. The shape of the Gaussian wave packet in phase space is determined by the middle term on the right hand side, which evolves deterministically and is independent of the measurement results. The wave packet's position in phase space is set by the displacement operators, and does depend on the photocurrent via Eq. (23).
Prediction of a conditional quantum state based on a continuous measurement during time interval \([t_{0},t]\) starting from a known Gaussian initial state therefore simply means to calculate the means according to Eq. (23). Knowing those numbers the prediction is that a hypothetical projective measurement of canonical operators at time \(t\) will give results with these same averages, and second moments according to the covariance matrix \(V_{\rho}(t)\) which depends only on the initial condition. Statistics of any other measurement can be determined from Eq. (24). For stable dynamics dependencies on initial conditions
will disappear in the long run, and the covariance matrix will become time-independent. The wave packet will then have a fixed shape and undergo stochastic motion in phase space with positions known from the photocurrent.
The quality of the conditional preparation can be judged from the purity \(\mathcal{P}(\rho)=\mathrm{Tr}\{\rho^{2}\}\leq 1\) of the conditional state. It tells us how close it is to a pure state and thus quantifies the amount of classical uncertainty in \(\rho\). For a Gaussian state with \(M\) modes it is given by [51]
\[\mathcal{P}(\rho)=1/\sqrt{\det(V_{\rho})}. \tag{25}\]
Unobserved dissipation tends to reduce the purity while monitoring the dynamics and conditioning the state increases the purity. Ideally, perfect detection allows to prepare pure states, which are the only states with \(\mathcal{P}(\rho)=1\). The bound \(\mathcal{P}\leq 1\) implies \(\det(V_{\rho})\geq 1\) which is also imposed by Heisenberg's uncertainty relation. We briefly recall prototypical pure Gaussian states of a single mode. Coherent states \(|\alpha\rangle\) have equal variances \(V_{xx}=V_{pp}=1\) and vanishing covariance. The vacuum \(|0\rangle\) is a special coherent state with vanishing means. Squeezed states [46] have the variance in one quadrature reduced below _shot noise_, that is below 1 (the variance of vacuum). The conjugate quadrature is then necessarily anti-squeezed to satisfy Heisenberg's uncertainty relation. An important class of non-pure Gaussian states are thermal states. These have vanishing covariance and equal variance \(V_{xx}=V_{pp}=2\bar{n}+1\), where \(\bar{n}\geq 0\) is the mean number of excitations. Importantly, \(\mathcal{P}=1/(2\bar{n}+1)\) decreases as \(\bar{n}\) grows.
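Continuing the single-mode sketch above, Eq. (25) gives the purity directly from the covariance matrix; for a thermal state the formula reduces to \(\mathcal{P}=1/(2\bar{n}+1)\) as stated (`V_inf` and `nbar` are those of the earlier sketch):

```python
import numpy as np

P_cond = 1.0 / np.sqrt(np.linalg.det(V_inf))     # purity of the conditional state

V_th = (2 * nbar + 1) * np.eye(2)                # unconditioned thermal state
P_th = 1.0 / np.sqrt(np.linalg.det(V_th))        # equals 1 / (2*nbar + 1)
```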
#### ii.2.2 Mode functions
We mentioned at the end of Sec. II.2.2 that the kernels in the forward and backward integrals of the means in Eqs. (23) and Eq. (46) each pick out sets of temporal modes. Recall that the means \(\mathbf{r}_{\rho}(t)=(r_{j}(t))_{j=1,\ldots,2M}\) in Eq. (23) depend on the measurement currents \(\mathbf{Y}(\tau)=(Y_{k}(\tau))_{k=1,\ldots,N_{C}}\) only through integration with respect to the functions
\[f_{jk}^{\rho}(t,\tau):=\left[\mathrm{e}^{(t-\tau)M_{\rho}^{\infty}}\big{(}V_{ \rho}^{\infty}A^{\mathrm{T}}-\sigma B^{\mathrm{T}}\big{)}\right]_{jk}. \tag{26}\]
Each (unnormalized) temporal mode function \(f_{jk}^{\rho}(t,\tau)\) is integrated with a corresponding signal \(Y_{k}(\tau)\),
\[X_{j}(t):=\int_{t_{0}}^{t}f_{jk}^{\rho}(t,\tau)\mathrm{d}Y_{k}(\tau), \tag{27}\]
to enter the evolution of \(r_{j}(t)\). A measurement current \(Y_{k}(\tau)\) results from a quadrature measurement of some outgoing light field, \(Y_{k}(\tau)\propto\langle\hat{x}_{k}^{out}(\tau)\rangle\). Thus integration with \(f_{jk}^{\rho}\) effectively corresponds to the measurement of a quadrature operator of a certain temporal mode \(\hat{X}_{j}(t)\propto\int^{t}f_{jk}^{\rho}(t,\tau)\hat{x}_{k}(\tau)\mathrm{d}\tau\) with result \(X(t)\). Of course the mode functions \(f_{jk}^{\rho}\), and thus the new optical modes, will generally not be orthogonal.
## III State verification using retrodictive POVMs
### Retrodictive POVMs
In the previous section we have seen how to use continuous monitoring for the preparation of conditional states (_filtering_). We are now going to show how to interpret the measurement record instead as an instantaneous _Positive-Operator Valued Measure (POVM)_[1; 2; 31] measurement. To fully appreciate this result let us first remind the reader of a few facts about POVMs and general measurements in quantum mechanics.
#### iii.1.1 Positive-operator valued measures
A general measurement of a given quantum state \(\rho\) is always composed of (i) possible measurement outcomes \(x\in\mathcal{X}\), (ii) probabilities for those outcomes to occur \(P(x|\rho)\), and (iii) the effect that obtaining some outcome \(x\) has on the system, i. e., the post-measurement state \(\rho_{x}\propto\hat{M}_{x}\rho\hat{M}_{x}^{\dagger}\) where \(\hat{M}_{x}\) incorporates the measurement back action on the state. The probability for a particular \(x\) to be measured is given by
\[P(x|\rho)=\mathrm{Tr}\{\hat{M}_{x}\rho\hat{M}_{x}^{\dagger}\}=\mathrm{Tr}\{ \hat{M}_{x}^{\dagger}\hat{M}_{x}\rho\}=\mathrm{Tr}\{\hat{E}_{x}\rho\} \tag{28}\]
with the positive _effect operator_\(\hat{E}_{x}:=\hat{M}_{x}^{\dagger}\hat{M}_{x}\). Because \(\sum_{x}P(x|\rho)=1\) must hold for any \(\rho\), the operators \(\hat{E}_{x}\) must resolve the identity \(\sum_{x}\hat{E}_{x}=\hat{1}\). Without reference to the \(\hat{M}_{x}\) any collection of positive self-adjoint operators \(\{\hat{E}_{x},x\in\mathcal{X}\}\) which resolve the identity is called a _Positive-Operator Valued Measure (POVM)_.
#### iii.1.2 Continuous monitoring as POVM measurement
To see how to reinterpret the measurement record let us again consider the simple system governed by the master equation (3), and an evolution from \(t_{0}\) to \(t_{1}\) that produced some record \(\mathcal{Y}=\{Y(s),t_{0}\leq s<t_{1}\}\). Note that Eq. (3) is nonlinear in \(\rho\) in order to yield a trace-preserving map \(\mathcal{N}_{t_{0},t_{1}|\mathcal{Y}}\). If instead we consider the linear equation [47]
\[\begin{split}\textbf{(I)}\ \mathrm{d}\tilde{\rho}(t)=-i[\hat{H}, \tilde{\rho}(t)]\mathrm{d}t+\mathcal{D}[\hat{L}]\tilde{\rho}(t)\mathrm{d}t\\ +\big{(}\hat{C}\tilde{\rho}(t)+\tilde{\rho}(t)\hat{C}^{\dagger} \big{)}\mathrm{d}Y(t),\end{split} \tag{29}\]
we find it generates equivalent but non-trace-preserving dynamics,
\[\tilde{\rho}_{\mathcal{Y}}(t_{1})=\tilde{\mathcal{N}}_{t_{0},t_{1}|\mathcal{Y}} [\rho(t_{0})], \tag{30}\]
denoted by a tilde. The trace of the conditional state now carries additional information, namely the probability for \(\mathcal{Y}\) to have occurred given an initial \(\rho(t_{0})\),
\[P(\mathcal{Y}|\rho(t_{0}))=\mathrm{Tr}\{\tilde{\rho}_{\mathcal{Y}}(t_{1})\}. \tag{31}\]
If we plug Eq. (30) into this expression and include an identity operator \(\hat{1}\), we can write
\[P(\mathcal{Y}|\rho(t_{0})) =\mathrm{Tr}\{\hat{1}\tilde{\mathcal{N}}_{t_{0},t_{1}|\mathcal{Y}} [\rho(t_{0})]\} \tag{32}\] \[=\mathrm{Tr}\{\tilde{\mathcal{N}}_{t_{0},t_{1}|\mathcal{Y}}^{ \dagger}[\hat{1}]\rho(t_{0})\}, \tag{33}\]
where \(\tilde{\mathcal{N}}^{\dagger}_{t_{0},t_{1}|\mathcal{Y}}\) is the Hilbert-Schmidt adjoint channel of \(\tilde{\mathcal{N}}_{t_{0},t_{1}|\mathcal{Y}}\), acting here on \(\hat{1}\). We now define
\[\hat{E}_{\mathcal{Y}}(t_{0}):=\tilde{\mathcal{N}}_{t_{0},t_{1}|\mathcal{Y}}^{ \dagger}[\hat{1}], \tag{34}\]
which will play a crucial role throughout this article. With this definition Eq. (33) can be rewritten as
\[P(\mathcal{Y}|\rho(t_{0}))=\mathrm{Tr}\{\hat{E}_{\mathcal{Y}}(t_{0})\rho(t_{0 })\}. \tag{35}\]
Comparing Eq. (35) to Eq. (28) shows that \(\{\hat{E}_{\mathcal{Y}}(t_{0}),\mathcal{Y}\in\mathfrak{Y}\}\) indeed constitutes a POVM on the initial state \(\rho(t_{0})\). Here, the "outcomes" \(x\equiv\mathcal{Y}\in\mathfrak{Y}\) comprise all possible records one could observe, and \(\sum_{\mathcal{Y}\in\mathfrak{Y}}P(\mathcal{Y}|\rho(t_{0}))=1\) because the sum corresponds to averaging over (i. e., ignoring) the observations, which yields the unconditional trace-preserving evolution (1). As in filtering, we will show later that the effect operators actually depend only on certain weighted integrals of the measurement record \(\mathcal{Y}\), and not on the whole continuous record as such.
### Backward effect equation
Just as the conditional quantum state, the effect operators \(\hat{E}_{\mathcal{Y}}(t)\) themselves can be considered as dynamical quantities obeying a certain (stochastic) equation of motion. In open but unobserved systems Barnett, Pegg, and Jeffers [52, 53, 54] derived a deterministic differential equation describing the propagation _backwards in time_ of effect operators to yield effective POVMs at past times. Tsang [55, 56, 57] and Mølmer et al. [58, 29] incorporated continuous observations into Bayesian updates of past measurement results, which (arguably, see [59]) extends classical _smoothing_ to the quantum domain, and results in a stochastic differential equation for \(\hat{E}_{\mathcal{Y}}(t)\).
For a given system dynamics the effect operators are backpropagated by a channel adjoint to that of the state. More specifically, for continuously monitored systems governed by the conditional master equation (29), the adjoint _conditional effect equation_, which takes some effect operator \(\hat{E}(t)\) from the future to the past, reads [55, 56, 58]
\[\begin{split}\textbf{(BI)}\ -\mathrm{d}\hat{E}(t)&:=\hat{E} (t-\mathrm{d}t)-\hat{E}(t)\\ &=i[\hat{H},\hat{E}(t)]\mathrm{d}t+\mathcal{D}^{\dagger}[\hat{L} ]\hat{E}(t)\mathrm{d}t\\ &\qquad+\big{(}\hat{C}^{\dagger}\hat{E}(t)+\hat{E}(t)\hat{C} \big{)}\mathrm{d}Y(t),\end{split} \tag{36}\]
with adjoint Lindblad superoperator \(\mathcal{D}^{\dagger}[\hat{L}]\hat{E}:=\hat{L}^{\dagger}\hat{E}\hat{L}-\frac{ 1}{2}\hat{L}^{\dagger}\hat{L}\hat{E}-\frac{1}{2}\hat{E}\hat{L}^{\dagger}\hat{L}\). The **(BI)** indicates that the equation has to be treated as a backward Ito equation. In App. A we give a detailed derivation of this equation, and comment further on its interpretation as a differential equation for propagation backwards in time. Note that we defined the increment \(\mathrm{d}\hat{E}\) with an explicit minus sign. This differs from the convention of Mølmer et al. [58, 29], but follows the convention of Tsang [55, 56].
Comparing the effect equation (36) to the forward master equation (29) we observe the following differences. The sign of the Hamiltonian changes, which we expect from the usual time-reversal in closed systems. The Lindblad superoperator \(\mathcal{D}\) is replaced by its adjoint \(\mathcal{D}^{\dagger}\) which is no longer trace-preserving but vanishes when applied to the identity. The measurement operator \(\hat{C}\) is replaced by its adjoint.
Solving Eq. (36) for \(\hat{E}(t)\) for \(t\leq t_{1}\) requires a certain final condition \(\hat{E}(t_{1})\). We have motivated the definition (34) of the effect operator by means of the final condition \(\hat{E}(t_{1})=\hat{1}\). This can be interpreted as describing a situation where at time \(t_{1}\) a certain POVM measurement \(\{\hat{E}_{x},x\in\mathcal{X}\}\) is performed on the system but the outcome \(x\) is not registered. If the outcome \(x\) is registered, and we want to describe a dynamics post-selected on it, one needs to replace the identity in Eq. (33) by \(\hat{E}_{x}(t_{1})\) to obtain an effective \(\hat{E}_{x,\mathcal{Y}}(t_{0})\). This general observation-assisted backpropagation is what we refer to as _retrodiction_. It is remarkable that a non-trivial POVM can also be retrodicted starting from the trivial effect operator \(\hat{E}(t_{1})=\hat{1}\), using nothing but knowledge of the system's dynamics and continuous observations. In fact, in many relevant cases the final condition on \(\hat{E}\) will be damped out in the long run, just as initial conditions for the forward-propagated density matrix become irrelevant for stable dynamics. This point will be addressed more rigorously further below.
The unnormalized effect equation generalizing Eq. (36) to multiple observed and unobserved channels reads
\[\begin{split}\textbf{(BI)}\ -\mathrm{d}\hat{E}(t)&=i[\hat{H},\hat{E} (t)]\mathrm{d}t+\sum_{j=1}^{N_{L}}\mathcal{D}^{\dagger}[\hat{L}_{j}]\hat{E}(t) \mathrm{d}t\\ &\qquad+\sum_{k=1}^{N_{C}}\bigl{(}\hat{C}_{k}^{\dagger}\hat{E}(t) +\hat{E}(t)\hat{C}_{k}\bigr{)}\mathrm{d}Y_{k}(t).\end{split} \tag{37}\]
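A minimal numerical sketch of Eq. (37) for a two-level toy system is given below; the operators \(\hat{H}\), \(\hat{L}\), \(\hat{C}\) and the record are made-up placeholders, and the backward Ito equation is stepped with a plain Euler scheme, so this is illustrative rather than a careful integrator.

```python
import numpy as np

rng = np.random.default_rng(1)

def dag(X):
    return X.conj().T

def D_adj(L, E):
    """Adjoint Lindblad superoperator D^dagger[L]E; it vanishes on the identity."""
    return dag(L) @ E @ L - 0.5 * (dag(L) @ L @ E + E @ dag(L) @ L)

# Made-up two-level system: Hamiltonian, unobserved decay, monitored channel.
H = np.diag([0.5, -0.5]).astype(complex)
L = np.array([[0, 1], [0, 0]], dtype=complex)    # lowering operator
C = 0.4 * L                                      # measurement operator

dt, n = 1e-4, 20000
dY = rng.normal(scale=np.sqrt(dt), size=n)       # placeholder noise-only record

E = np.eye(2, dtype=complex)                     # trivial final condition E(t1) = 1
for k in reversed(range(n)):
    # Eq. (36): E(t - dt) = E(t) + i[H, E]dt + D^dag[L]E dt + (C^dag E + E C)dY
    E = E + 1j * (H @ E - E @ H) * dt + D_adj(L, E) * dt \
          + (dag(C) @ E + E @ C) * dY[k]
print("retrodicted effect operator E(t0):\n", np.round(E, 4))
```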
Since we only consider conditional dynamics from now on, we will drop the subscript \(\mathcal{Y}\) and remember that both \(\rho\) and \(\hat{E}\) depend on respective parts of the monitoring record.
### Linear dynamics and Gaussian POVMs
As in Sec. II.2 we focus our approach on linear systems. Like density operators we can represent effect operators in terms of phase space distributions, which lets us translate the effect equation into differential equations for the cumulants. One only needs to be more careful with the definition of statistical quantities, as \(\hat{E}\) does not have unit trace (or may not be trace class at all). For example
the means and covariance matrix are given by
\[r_{j}^{E}:=\langle\hat{r}_{j}\rangle_{E}:=\frac{\mathrm{Tr}\{\hat{r }_{j}\hat{E}\}}{\mathrm{Tr}\{\hat{E}\}}, \tag{38}\] \[V_{jk}^{E}=\langle\{\hat{r}_{j}-r_{j}^{E},\hat{r}_{k}-r_{k}^{E}\} \rangle_{E}, \tag{39}\]
where the expectation value \(\langle\cdot\rangle_{E}\) is explicitly normalized, and defined as long as \(\mathrm{Tr}\{\hat{E}\}\) exists.
Gaussian effect operators and their time dynamics have been treated recently by Zhang and Mølmer [29], Huang and Sarovar [28], and Warszawski et al. [30]. Since we aim to keep our treatment self-contained we reproduce a number of results (in particular on the Gaussian equations of motion of the effect operator) presented there. Our derivation and presentation is meant to complement these previous ones by further details and background. In particular, it was not apparent to us whether the restriction to Gaussian effect operators is as well justified as it is for quantum states (cf. the discussion in Sec. II.2.2). In App. C.7 we consider the evolution of general effect operators, and show that it is very similar to that of general quantum states. Hence one can apply a notion of backward stability analogous to that of quantum states.
To obtain the evolution of the means and covariance matrix associated with \(\hat{E}\) from the corresponding equations for \(\mathbf{r}_{\rho}\) and \(V_{\rho}\) let us rewrite the effect equation (37) as
\[\mathbf{(BI)}\ -\mathrm{d}\hat{E}(t)=-i[-\hat{H},\hat{E}(t)]\mathrm{d}t+ \sum_{j=1}^{N_{L}}\mathcal{D}[\hat{L}_{j}]\hat{E}(t)\mathrm{d}t\\ +\sum_{j=1}^{N_{L}}(\hat{L}_{j}^{\dagger}\hat{E}(t)\hat{L}_{j}- \hat{L}_{j}\hat{E}(t)\hat{L}_{j}^{\dagger})\mathrm{d}t\\ +\sum_{k=1}^{N_{C}}\bigl{(}\hat{C}_{k}^{\dagger}\hat{E}(t)+\hat{ E}(t)\hat{C}_{k}\bigr{)}\mathrm{d}Y_{k}(t), \tag{40}\]
where the second line compensates for the replacement of \(\mathcal{D}^{\dagger}\) by \(\mathcal{D}\). We see that this equation is structurally very similar to the unnormalized master equation (4), so Eqs. (18) and (20) for \(\mathrm{d}\mathbf{r}_{\rho}\) and \(\dot{V}_{\rho}\) serve as a good starting point with the following changes: (i) Time-reversal requires us to treat them as backward Ito equations, cf. App. B. (ii) The sign flip of \(\hat{H}\) causes \(H\mapsto-H\) and replacing the measurement operators \(\hat{C}_{k}\) by their adjoint entails \(B\mapsto-B\). (iii) Working out the change stemming from the sandwich terms in the second line we find in App. C.7 that it contributes terms \(-2\sigma\Omega\mathbf{r}_{E}\) and \(-\sigma\Omega V_{E}-(\sigma\Omega V_{E})^{\mathrm{T}}\) to the evolution of the means and covariance matrix, respectively. Together with \(H\mapsto-H\) this simply changes the sign of the unconditional drift matrix \(Q\mapsto-Q\). Hence the backward Ito equation for the means reads
\[\mathbf{(BI)}\ -\mathrm{d}\mathbf{r}_{E}(t):=\mathbf{r}_{E}(t- \mathrm{d}t)-\mathbf{r}_{E}(t)\\ =M_{E}(t)\mathbf{r}_{E}(t)\mathrm{d}t+\bigl{(}2V_{E}(t)A^{ \mathrm{T}}+\sigma B^{\mathrm{T}}\bigr{)}\mathrm{d}\mathbf{Y}(t), \tag{41}\]
with the conditional backward drift matrix
\[M_{E}(t):=-Q-2\sigma B^{\mathrm{T}}A-2V_{E}(t)A^{\mathrm{T}}A. \tag{42}\]
The deterministic backward Riccati equation for the covariance matrix is similar to Eq. (20),
\[-\frac{\mathrm{d}V_{E}(t)}{\mathrm{d}t} :=V_{E}(t-\mathrm{d}t)-V_{E}(t) \tag{43}\] \[=M_{E}(t)V_{E}(t)+V_{E}(t)M_{E}^{\mathrm{T}}(t)\] \[\quad+D-2V_{E}(t)A^{\mathrm{T}}AV_{E}(t),\]
and clearly shows the importance of continuous observations for retrodiction. Without observations (i. e., when \(A=B=0\)) the drift matrices would be equal up to a sign, \(M_{\rho}(t)=-M_{E}(t)=Q\). At the same time the quadratic Riccati equations for the respective covariance matrices would turn into linear Lyapunov equations. Assuming stable forward dynamics with a positive steady state solution \(V_{\rho}^{\infty}>0\) of Eq. (22),
\[QV_{\rho}^{\infty}+V_{\rho}^{\infty}Q^{\mathrm{T}}=-D, \tag{44}\]
would preclude stable backward dynamics: there cannot simultaneously be a positive asymptotic covariance matrix \(V_{E}^{\infty}>0\) for \(t\to-\infty\) that satisfies
\[-QV_{E}^{\infty}-V_{E}^{\infty}Q^{\mathrm{T}}=-D. \tag{45}\]
Only the presence of a sufficiently large quadratic \(A^{\mathrm{T}}A\)-term in Eq. (43), corresponding to sufficiently efficient observations, allows us to find an asymptotic solution \(V_{E}^{\infty}>0\). Analogous to Eq. (22) this implies an asymptotic drift matrix \(M_{E}^{\infty}\) whose eigenvalues have negative real parts.
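The following sketch relaxes the backward Riccati equation (43) to its fixed point by stepping in backward time; the matrices \(Q\), \(D\), \(A\), \(B\) are illustrative placeholders (a single stable mode with a sufficiently strong \(A^{\mathrm{T}}A\) term), not parameters from the text.

```python
import numpy as np

# Placeholder matrices for a single mode (all 2x2): unconditional drift Q,
# diffusion D, measurement matrices A, B, symplectic form sigma.
Q = np.array([[-0.25, 1.0], [-1.0, -0.25]])
D = np.eye(2)
A = 0.5 * np.eye(2)
B = np.zeros((2, 2))
sigma = np.array([[0.0, 1.0], [-1.0, 0.0]])

def riccati_rhs(V):
    """-dV/dt of Eq. (43), with M_E(V) = -Q - 2 sigma B^T A - 2 V A^T A (Eq. 42)."""
    M = -Q - 2 * sigma @ B.T @ A - 2 * V @ A.T @ A
    return M @ V + V @ M.T + D - 2 * V @ A.T @ A @ V

# Propagate backwards in time, V(t - h) = V(t) + h * riccati_rhs(V), until the
# fixed point V_E^inf is reached.
V = 10 * np.eye(2)   # some (large) final-condition covariance
h = 1e-3
for _ in range(20000):
    V = V + h * riccati_rhs(V)
print("asymptotic V_E:\n", np.round(V, 4))
print("residual:", np.linalg.norm(riccati_rhs(V)))
```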
Assuming stable backward dynamics that make any Gaussian effect operator with \(V_{E}(t_{1})\) collapse to \(V_{E}^{\infty}\) as \(t\to-\infty\), we can plug the asymptotic solution \(V_{E}^{\infty}\) into the equation for the means. Similar to the forward solution in Eq. (23) we find
\[\mathbf{(BI)}\ \mathbf{r}_{E}(t)=\mathrm{e}^{(t_{1}-t)M_{E}^{\infty}} \mathbf{r}_{E}(t_{1})\\ +\int_{t}^{t_{1}}\mathrm{e}^{(\tau-t)M_{E}^{\infty}}\bigl{(}V_{E}^ {\infty}A^{\mathrm{T}}+\sigma B^{\mathrm{T}}\bigr{)}\mathrm{d}\mathbf{Y}(\tau), \tag{46}\]
where the integral is a backward Ito integral as explained in Appendix B. The negative eigenvalues of \(M_{E}^{\infty}\) again cause exponential damping of the final condition \(\mathbf{r}_{E}(t_{1})\) and of the integrand, which picks out a different set of modes compared to the forward integral in Eq. (23), cf. Sec. II.3.2.
### Interpretation of retrodictive POVMs
In analogy to Eq. (24), the POVM realized at time \(t\) in retrodiction based on continuous homodyne detection during some time interval \([t,t_{1}]\) can be written as
\[\Bigl{\{}\hat{D}(\mathbf{r}_{E}(t))\hat{E}_{0}(t)\hat{D}^{\dagger}(\mathbf{r}_{E }(t))\Bigr{\}}. \tag{47}\]
Here \(\hat{E}_{0}(t)=\exp\bigl{[}-\mathbf{\hat{r}}^{\mathrm{T}}\Gamma_{E}(t)\mathbf{ \hat{r}}\bigr{]}\) is independent of the means, since \(\Gamma_{E}(t)\) is determined by the covariance matrix \(V_{E}(t)\) as explained below Eq. (24). Means \(\mathbf{r}_{E}(t)\) and covariance matrix \(V_{E}(t)\) are determined by (41) and (43). The POVM elements all correspond to displaced versions of the operator \(\hat{E}_{0}(t)\). The shape of \(\hat{E}_{0}(t)\) determines the resolution in phase space achieved by the POVM in retrodiction. It is again useful to consider the purity of a Gaussian effect operator \(\hat{E}\) with covariance matrix \(V_{E}\) which is computed as in Eq. (25). Unit purity means the given POVM actually corresponds to projections onto pure states, constituting a quantum-limited measurement. Pure POVM elements with equal variances then indicate a projection onto coherent states, which corresponds to a heterodyne measurement of both quadratures [1], that is \(\{|\alpha\rangle\langle\alpha|/\pi\}\). Asymmetric variances, on the other hand, indicate squeezed projectors that correspond, in the ideal limit of infinite squeezing, to a measurement of only one quadrature, \(\{|x\rangle\langle x|\}\), where \(|x\rangle\) denotes a quadrature eigenstate.
Reduced purity means additional uncertainty and thus lower resolution of the measurement. We will see in the examples in Sections IV and V that the purity of retrodicted effect operators decreases quickly when the detection efficiency is low or there is coupling to unobserved baths. Quite generally, for systems subject to continuous-time measurements with a measurement rate \(\Gamma\) (including losses) competing with other decoherence processes happening at rate \(\gamma\), one will find that the dynamics of both the conditional density matrix and the effect operator crucially depend on a quantum cooperativity parameter \(C_{q}=\Gamma/\gamma\). The regime \(C_{q}>1\) signifies the possibility to produce quantum-limited POVMs in retrodiction just as it allows pure conditional quantum states in prediction. In Sec. V we will prove this statement in great detail for continuous measurements on optomechanical systems.
While it is possible to perform quantum-limited POVMs corresponding to projections onto pure states, this cannot be used as a means for _preparation_ of pure quantum states. The "collapsed" posterior state is physically not realized since retrodictive POVMs are destructive: once all information necessary for realizing the POVM has been gathered, the system's state has already evolved into something different, whose best description is just the conditional quantum state. It does not make sense to consider the posterior state after the measurement, just as it is useless to ask for the state of a photon after photo-detection.
Repeated measurement of a POVM (47) on identically prepared systems in state \(\rho_{0}\) will map out the probability distribution
\[P(\mathbf{r}|\rho_{0})=\mathrm{Tr}\{D(\mathbf{r})\hat{E}_{0}D^{\dagger}( \mathbf{r})\rho_{0}\}.\]
This is the information on the state \(\rho_{0}\) which is directly accessible via retrodictive POVMs. Other relevant aspects regarding the quantum state may be inferred from such information, possibly collected for different POVMs by changing the dynamics - and with it the equations of motion for \(\hat{E}(t)\) - of the system.
One may, for example, be interested in reconstructing the density matrix \(\rho_{0}\) itself, which corresponds to the problem of quantum state tomography. Recapitulating the methods available to perform this task is beyond the scope of this article, and we refer to the literature in this field [60; 61]. We just state two particularly simple cases: The heterodyne POVM directly provides the Husimi \(Q\)-function of the quantum state, \(Q(\alpha)=\langle\alpha|\rho_{0}|\alpha\rangle\). If \(\rho_{0}\) was known to be Gaussian this POVM will directly give the correct means and (co)variances with one unit of added quantum noise in each quadrature. A POVM corresponding to an infinitely squeezed state will give access directly to the marginal distribution in the respective quadrature \(\langle x|\rho_{0}|x\rangle\).
It is worth emphasizing that the Gaussian POVMs realized by linear dynamics considered here may well be applied to non-Gaussian states. No assumption on the initial _state_\(\rho_{0}\) went into the derivation of the equations of motion (41) and (43) for the Gaussian operator \(\hat{E}(t)\). Provided the measurement delivers sufficient resolution in phase space the tools of retrodictive Gaussian POVMs can therefore well be used for verification of non-Gaussian states which have been created initially by some different means. (Of course those initial states cannot emerge as conditional states from Gaussian dynamics and homodyne detections alone.) Along these lines, preparation and verification of Fock states in macroscopic mechanical oscillators have been discussed by Khalili et al. [62], see also [63].
## IV Basic examples
In this section we will consider two basic but illustrative examples of the formalism developed so far which will provide a firm basis for the more serious application to optomechanical systems in Sec. V.
### Monitoring a decaying cavity
Let us start with the simple example of a decaying cavity undergoing homodyne detection, depicted in Fig. 2. This example was used by Wiseman [47] to illustrate the interpretation of quantum trajectories in measurement theory as retrodictive POVM elements. Using operator algebra he showed that with an ideal detector and infinite observation time one can perform a projective measurement of the initial state of the cavity onto a quadrature eigenstate. Using the formalism developed in the previous sections we will treat the same setup here for homodyne detection with efficiency \(\eta\). For ideal detection, \(\eta\to 1\), we recover the result of Wiseman.
#### iv.1.1 Conditional state evolution
We consider an ideal freely damped cavity with \(\hat{H}=0\) and decay rate \(\Gamma\). The output is mixed with a strong local oscillator with adjustable relative phase \(\phi\) to perform homodyne detection with efficiency \(\eta\in[0,1]\). For later reference we first study the corresponding stochastic master equation for the conditional state of the intra-cavity field [47]. In a frame rotating at the cavity frequency this is
\[\mathbf{(I)}\ \mathrm{d}\rho(t)=\Gamma\mathcal{D}[\hat{a}]\rho(t) \mathrm{d}t+\sqrt{\eta\Gamma}\mathcal{H}[\mathrm{e}^{-i\phi}\hat{a}]\rho(t) \mathrm{d}W(t), \tag{48}\]
where \(\hat{a}^{\dagger},\hat{a}\) are the cavity creation and annihilation operators (CAOs). The canonical quadrature operators are \(\hat{x}=(\hat{a}+\hat{a}^{\dagger})/\sqrt{2}\) and \(\hat{p}=-i(\hat{a}-\hat{a}^{\dagger})/\sqrt{2}\) which we collect into a vector \(\hat{\mathbf{r}}=\begin{bmatrix}\hat{x}&\hat{p}\end{bmatrix}^{\mathrm{T}}\). Then the Wiener increment \(\mathrm{d}W(t)\) is related to the detector output \(\mathrm{d}Y(t)\) as
\[\mathrm{d}Y(t)=\sqrt{2\eta\Gamma}\langle\hat{x}_{\phi}\rangle_{\rho(t)} \mathrm{d}t+\mathrm{d}W(t), \tag{49}\]
with \(\hat{x}_{\phi}:=\cos(\phi)\hat{x}+\sin(\phi)\hat{p}\). Due to the symmetry of the problem we choose \(\phi=0\) without loss of generality, observing only the \(\hat{x}\)-quadrature of the cavity.
Spelling out Eqs. (15) and (20) for \(\mathrm{d}\mathbf{r}_{\rho}\) and \(\dot{V}_{\rho}\) we find
\[\mathbf{(I)}\ \mathrm{d}x_{\rho}(t) =-\frac{\Gamma}{2}x_{\rho}\mathrm{d}t+\sqrt{\frac{\eta\Gamma}{2} }(V^{\rho}_{xx}(t)-1)\mathrm{d}W(t), \tag{50a}\] \[\mathbf{(I)}\ \mathrm{d}p_{\rho}(t) =-\frac{\Gamma}{2}p_{\rho}\mathrm{d}t+\sqrt{\frac{\eta\Gamma}{2} }V^{\rho}_{xp}(t)\mathrm{d}W(t), \tag{50b}\]
and
\[\dot{V}^{\rho}_{xx} =-(1-2\eta)\Gamma V^{\rho}_{xx}+(1-\eta)\Gamma-\eta\Gamma(V^{ \rho}_{xx})^{2}, \tag{51a}\] \[\dot{V}^{\rho}_{xp} =-(1-\eta)\Gamma V^{\rho}_{xp}-\eta\Gamma V^{\rho}_{xx}V^{\rho}_{xp},\] (51b) \[\dot{V}^{\rho}_{pp} =-\Gamma V^{\rho}_{pp}+\Gamma-\eta\Gamma(V^{\rho}_{xp})^{2}. \tag{51c}\]
The steady-state covariance matrix \(V^{\infty}_{\rho}\) satisfying the Riccati equation \(\dot{V}_{\rho}=0\) is given by the variances and covariance
\[V^{\rho}_{xx}=V^{\rho}_{pp}=1,\qquad V^{\rho}_{xp}=0. \tag{52}\]
Computing the purity \(\mathcal{P}(\rho)=1/\sqrt{\det(V^{\infty}_{\rho})}=1\) shows that the prepared steady state is pure, which together with equal variances implies it is a coherent state. However, plugging \(V^{\infty}_{\rho}\) into Eqs. (50) for the means makes \(\mathrm{d}W\) drop out, so the asymptotic forward evolution does not depend on the monitored output. In the long term both mean values decay exponentially, affirming the expected result that for long times a decaying cavity will simply collapse to the vacuum state, \(\rho_{\infty}=|0\rangle\langle 0|\).
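A minimal Euler-Maruyama sketch of the filter Eqs. (50) and (51) is given below; we drive the filter directly with simulated innovations \(\mathrm{d}W\), which is statistically equivalent to feeding a genuine record via Eq. (49), and the initial means and variances are arbitrary placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(2)

Gamma, eta = 1.0, 0.8
dt, T = 1e-4, 10.0
n = int(T / dt)

x, p = 2.0, -1.0                 # initial means
Vxx, Vxp, Vpp = 3.0, 0.0, 3.0    # initial thermal variances (nbar = 1)

for _ in range(n):
    dW = rng.normal(scale=np.sqrt(dt))
    # Conditional means, Eqs. (50a)-(50b)
    x += -0.5 * Gamma * x * dt + np.sqrt(eta * Gamma / 2) * (Vxx - 1) * dW
    p += -0.5 * Gamma * p * dt + np.sqrt(eta * Gamma / 2) * Vxp * dW
    # Deterministic Riccati equations, Eqs. (51a)-(51c)
    dVxx = (-(1 - 2 * eta) * Gamma * Vxx + (1 - eta) * Gamma
            - eta * Gamma * Vxx ** 2)
    dVxp = -(1 - eta) * Gamma * Vxp - eta * Gamma * Vxx * Vxp
    dVpp = -Gamma * Vpp + Gamma - eta * Gamma * Vxp ** 2
    Vxx, Vxp, Vpp = Vxx + dVxx * dt, Vxp + dVxp * dt, Vpp + dVpp * dt

print(f"means     -> x = {x:.3e}, p = {p:.3e}")                  # decay to 0
print(f"variances -> Vxx = {Vxx:.3f}, Vxp = {Vxp:.3f}, Vpp = {Vpp:.3f}")  # -> 1, 0, 1
```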
This insight is important. It shows that the covariance matrix and purity alone do not let us judge the effectiveness of a given preparation (or retrodiction) scheme. If the unconditional dynamics produce some mixed steady state we can increase our knowledge by monitoring the output. At long times the conditional dynamics will produce a state with fixed covariance matrix and measurement-dependent means that move around phase space, such that the averaged conditional dynamics agree with the unconditional dynamics. However, if the unconditional dynamics already yield a quantum-limited state (such as the vacuum in the present example) then there is nothing to be gained from observing the output. These statements apply to both quantum states and effect operators.
While the observations cannot aid the (long-term) state preparation we will now see how they let us infer information about the _initial_ state of the cavity.
#### iv.1.2 Retrodiction of POVM elements
The equation adjoint to Eq. (48) for the backwards-propagating POVM element \(\hat{E}\) reads
\[\mathbf{(BI)}\ -\mathrm{d}\hat{E}(t) =\Gamma\mathcal{D}^{\dagger}[\hat{a}]\hat{E}(t)\mathrm{d}t \tag{53}\] \[\quad+\sqrt{\eta\Gamma}(\hat{a}^{\dagger}\hat{E}(t)+\hat{E}(t) \hat{a})\mathrm{d}Y(t).\]
In [47] Wiseman essentially constructed an operator solution of this equation for a unit-efficiency measurement and showed that the corresponding POVM corresponds to a projection on quadrature eigenstates. Restricting ourselves to Gaussian POVMs (cf. App. C.7) we instead directly write down the (normalized) equations of motion
Figure 2: Schematic of a freely decaying cavity monitored from time \(t_{0}\) to \(t_{1}\). Light leaving the cavity at rate \(\Gamma\) is superposed on a balanced beam-splitter with a strong local oscillator (LO). Two photodetectors monitor the output ports and their photocurrents are subtracted to yield a time-continuous homodyne measurement signal \(Y(t)\). Imperfect detection is modeled as photon loss induced by a second beam splitter which only transmits a fraction \(\eta\) of the signal light.
of means and covariance matrix, Eqs. (41) and (43),
\[\textbf{(BI) }-\mathrm{d}x_{E}(t) =\frac{\Gamma}{2}x_{E}\mathrm{d}t+\sqrt{\frac{\eta\Gamma}{2}}(V_{xx }^{E}(t)+1)\mathrm{d}W(t), \tag{54a}\] \[\textbf{(BI) }-\mathrm{d}p_{E}(t) =\frac{\Gamma}{2}p_{E}\mathrm{d}t+\sqrt{\frac{\eta\Gamma}{2}}V_{xp }^{E}(t)\mathrm{d}W(t), \tag{54b}\]
and
\[-\dot{V}_{xx}^{E} =(1-2\eta)\Gamma V_{xx}^{E}+(1-\eta)\Gamma-\eta\Gamma(V_{xx}^{E}) ^{2}, \tag{55a}\] \[-\dot{V}_{xp}^{E} =(1-\eta)\Gamma V_{xp}^{E}-\eta\Gamma V_{xx}^{E}V_{xp}^{E},\] (55b) \[-\dot{V}_{pp}^{E} =\Gamma V_{pp}^{E}+\Gamma-\eta\Gamma(V_{xp}^{E})^{2}. \tag{55c}\]
We solve \(\dot{V}_{xx}^{E}=0\) to find the asymptotic solution
\[V_{xx}^{E}=\frac{1-\eta}{\eta}, \tag{56}\]
which entails constant covariance, \(\dot{V}_{xp}^{E}\equiv 0\), independent of its current value. Note that the asymptotic \(\hat{x}\)-variance vanishes, \(V_{xx}^{E}\to 0\), as \(\eta\to 1\), which shows that the corresponding effect operator measures \(\hat{x}\) with arbitrary precision. The effect operator will be squeezed in \(\hat{x}\) (i. e., \(V_{xx}^{E}<1\)) for any \(\eta>\frac{1}{2}\). On the other hand, \(V_{xx}^{E}\rightarrow\infty\) as \(\eta\to 0\), emphasizing that retrodiction crucially depends on observations. When attempting to solve \(\dot{V}_{pp}^{E}=0\) we find that there is no finite asymptotic solution for \(V_{xp}^{E}\) and \(V_{pp}^{E}\) which simultaneously satisfies \(V_{pp}^{E}\geq 0\) and \(\mathrm{det}[V_{E}]\geq 0\), the necessary requirements for a proper covariance matrix. Thus \(V_{pp}^{E}(t)\) grows beyond all bounds as time runs backwards, which is in line with the fact that our setup only gathers information about \(\hat{x}\). Thus, retrodiction allows one to effectively perform a projective measurement of a quadrature operator on the initial state of the cavity. By changing the homodyne angle analogous results can be obtained for any quadrature \(\hat{x}_{\phi}\). This agrees with the finding of [47] derived using completely different methods exploiting operator algebra. One can check by direct computation (paying attention to detail [64]) that the effect operator constructed in [47] indeed satisfies the equation of motion (53).
We can now also derive the filter functions or temporal modes which have to be extracted from the photocurrent. Plugging the asymptotic variance \(V_{xx}^{E}\) into the equation for \(x_{E}\) we find
\[\textbf{(BI) }-\mathrm{d}x_{E}(t) =\frac{\Gamma}{2}x_{E}\mathrm{d}t+\sqrt{\frac{\Gamma}{2\eta}} \mathrm{d}W(t) \tag{57}\] \[=-\frac{\Gamma}{2}x_{E}\mathrm{d}t+\sqrt{\frac{\Gamma}{2\eta}} \mathrm{d}Y(t). \tag{58}\]
The solution to this equation is given by
\[\textbf{(BI) }x_{E}(t) =\mathrm{e}^{-\Gamma(t_{1}-t)/2}x_{E}(t_{1}) \tag{59}\] \[\quad+\sqrt{\frac{\Gamma}{2\eta}}\int_{t}^{t_{1}}\mathrm{e}^{- \Gamma(\tau-t)/2}\mathrm{d}Y(\tau),\]
for \(t\leq t_{1}\) so the final value \(x_{E}(t_{1})\) is exponentially damped, and far into the past the mean \(\hat{x}\)-quadrature of the retrodicted effect operator will depend only on the integrated measurement current. The temporal mode to be extracted from the continuous measurement is an exponentially decaying function in time with width \(\Gamma/2\) set by the cavity decay rate.
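Concretely, a backward Ito sum implementing Eq. (59) would look as follows; the record here is placeholder white noise, so the printed number only illustrates the mechanics of the mode extraction, not a physical retrodiction.

```python
import numpy as np

rng = np.random.default_rng(3)

Gamma, eta = 1.0, 0.9
dt, T = 1e-4, 8.0
n = int(T / dt)

# Placeholder record: pure noise here; in an experiment dY comes from the
# homodyne detector via Eq. (49).
dY = rng.normal(scale=np.sqrt(dt), size=n)

xE_final = 0.0   # arbitrary final condition; it is exponentially damped anyway
tau = dt * np.arange(n)                # backward-time offsets tau - t
weights = np.exp(-0.5 * Gamma * tau)   # exponential temporal mode of width Gamma/2
xE = np.exp(-0.5 * Gamma * T) * xE_final \
     + np.sqrt(Gamma / (2 * eta)) * np.sum(weights * dY)   # Eq. (59)
print("retrodicted mean x_E(t0) =", xE)
```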
### Beam-splitter vs. squeezing interaction
We will now examine why we can prepare only a coherent state (the vacuum) but are able to measure squeezed states. This is due to the beam-splitter (BS) coupling between the cavity and the field outside,
\[\hat{H}_{\mathrm{int}}^{\mathrm{BS}}=\Gamma(\hat{a}\hat{c}_{\mathrm{out}}^{ \dagger}+\hat{a}^{\dagger}\hat{c}_{\mathrm{out}}), \tag{60}\]
where \(\hat{c}_{\mathrm{out}}^{\dagger},\hat{c}_{\mathrm{out}}\) are the CAOs corresponding to the outgoing mode being measured. This interaction causes a state swap between the intracavity and outside fields. To illustrate this further let us replace the beam-splitter coupling by a two-mode squeezing (TMS) interaction,
\[\hat{H}_{\mathrm{int}}^{\mathrm{TMS}}=\Gamma(\hat{a}^{\dagger}\hat{c}_{ \mathrm{out}}^{\dagger}+\hat{a}\hat{c}_{\mathrm{out}}). \tag{61}\]
For our simple cavity this is obviously unrealistic but we will encounter the TMS interaction again in optomechanical systems, so it is worthwhile understanding the effect this has on the dynamics. \(\hat{H}_{\mathrm{int}}^{\mathrm{TMS}}\) creates entangled pairs of photons so detecting the outgoing light will reveal information about the current state of the cavity but not about what it was before the interaction. The corresponding master equation reads
\[\textbf{(I) }\mathrm{d}\rho(t)=\Gamma\mathcal{D}[\hat{a}^{\dagger}]\rho(t) \mathrm{d}t+\sqrt{\eta\Gamma}\mathcal{H}[\hat{a}^{\dagger}]\rho(t)\mathrm{d}W(t). \tag{62}\]
This yields equations of motion for the means and (co)variances of the conditional state,
\[\textbf{(I) }\mathrm{d}x_{\rho}(t) =\frac{\Gamma}{2}x_{\rho}\mathrm{d}t+\sqrt{\frac{\eta\Gamma}{2}}( V_{xx}^{\rho}(t)+1)\mathrm{d}W(t), \tag{63}\] \[\textbf{(I) }\mathrm{d}p_{\rho}(t) =\frac{\Gamma}{2}p_{\rho}\mathrm{d}t+\sqrt{\frac{\eta\Gamma}{2}}V_ {xp}^{\rho}(t)\mathrm{d}W(t), \tag{64}\]
and
\[\dot{V}_{xx}^{\rho} =(1-2\eta)\Gamma V_{xx}^{\rho}+(1-\eta)\Gamma-\eta\Gamma(V_{xx}^{ \rho})^{2}, \tag{65}\] \[\dot{V}_{xp}^{\rho} =(1-\eta)\Gamma V_{xp}^{\rho}-\eta\Gamma V_{xx}^{\rho}V_{xp}^{\rho},\] (66) \[\dot{V}_{pp}^{\rho} =\Gamma V_{pp}^{\rho}+\Gamma-\eta\Gamma(V_{xp}^{\rho})^{2}, \tag{67}\]
which are exactly the same as the backward Eqs. (54) and (55) for the BS interaction. So while \(V_{pp}^{\rho}(t)\) will grow beyond all bounds asymptotically, we can condition the \(\hat{x}\)-quadrature to arbitrary precision limited only by our detection efficiency \(\eta\), meaning we can prepare arbitrarily squeezed states. Similarly, the situation is also
reversed for the backward effect equation, yielding equations for means and covariance matrix given by Eqs. (50) and (51). This means the effect operators will become independent of the photocurrent in the long-time limit and project only on the vacuum state. We summarize the effect of each coupling on the performance of both pre- and retrodiction in Table 1.
## V Conditional state preparation and verification in optomechanics
The illustrative examples studied in the previous sections provide the background for the main application of the formalism to time-continuous measurements on optomechanical systems [65; 66]. The system of interest is a single mode of a mechanical oscillator, such as the membrane depicted in Fig. 3, which couples to the light field inside a resonantly driven cavity. The light escaping the cavity is then mixed with a local oscillator to perform heterodyne detection. We will be interested in the weak coupling limit of optomechanics, where the cavity can be adiabatically eliminated, and the time-continuous measurement effectively concerns the mechanical system only. It is important to note that this weak coupling limit does not exclude the regime of strong quantum cooperativity, where the measurement back action noise process effectively becomes stronger than all other noise processes acting on the oscillator. Quantum cooperativities on the order of 100 have been demonstrated in recent optomechanical systems [11]. It is clear that the tools of quantum state pre- and retrodiction become especially powerful in such a regime.
The adiabatic limit of the conditional optomechanical master equation has been treated in great detail in [67]. We summarize here the main aspects, and then apply it to discuss pre- and retrodiction.
### Optomechanical setup
We consider a mechanical mode with frequency \(\Omega_{\mathrm{m}}\) coupled to a cavity with resonance frequency \(\omega_{\mathrm{c}}\), driven by a strong coherent field with frequency \(\omega_{0}\). We move to a rotating frame with respect to the drive \(\omega_{0}\), and assume the generated intracavity amplitude is large so we can linearize the radiation pressure interaction. Following standard treatment [66] this yields
\[\hat{H}_{\mathrm{lin}} =\hat{H}_{0}+g\big{(}\hat{a}+\hat{a}^{\dagger}\big{)}\big{(} \hat{a}_{\mathrm{c}}+\hat{a}_{\mathrm{c}}^{\dagger}\big{)}, \tag{68a}\] \[\hat{H}_{0} =\Omega_{\mathrm{m}}\hat{a}^{\dagger}\hat{a}-\Delta_{\mathrm{c}} \hat{a}_{\mathrm{c}}^{\dagger}\hat{a}_{\mathrm{c}}, \tag{68b}\]
where \(\hat{H}_{0}\) comprises the local Hamiltonians of cavity and mechanics with \(\Delta_{\mathrm{c}}=\omega_{0}-\omega_{\mathrm{c}}\), and \(g\propto g_{0}\) is the cavity-enhanced optomechanical coupling strength. \(\hat{a}\) and \(\hat{a}_{\mathrm{c}}\) are the annihilation operators of the mechanical and cavity mode, respectively.
The cavity field leaks out at a full width at half maximum (FWHM) decay rate \(\kappa\). The (unconditional) master equation of the joint state \(\rho_{\mathrm{mc}}\) of mechanical and cavity mode reads
\[\begin{split}\dot{\rho}_{\mathrm{mc}}(t)&=-i[\hat{H} _{\mathrm{lin}},\rho_{\mathrm{mc}}(t)]+\kappa\mathcal{D}[\hat{a}_{\mathrm{c}} ]\rho_{\mathrm{mc}}(t)\\ &\quad+\mathcal{L}_{\mathrm{th}}\rho_{\mathrm{mc}}(t),\end{split} \tag{69}\]
where we have also included a thermal bath,
\[\begin{split}\mathcal{L}_{\mathrm{th}}\rho_{\mathrm{mc}}(t)& =\gamma(\bar{n}+1)\mathcal{D}[\hat{a}]\rho_{\mathrm{mc}}(t)\\ &\quad+\gamma\bar{n}\mathcal{D}[\hat{a}^{\dagger}]\rho_{\mathrm{ mc}}(t),\end{split} \tag{70}\]
with mean phonon number \(\bar{n}\) which couples to the mechanical oscillator at rate \(\gamma\) (FWHM of the mechanical mode).
We monitor the field that leaks from the cavity using homodyne or heterodyne detection. As usual the outgoing field is combined on a balanced beam splitter with a strong local oscillator, and the difference of the measured intensities in the two output beams is the measurement current, depicted in the bottom left of Fig. 3. As compared to the conditional master equation Eq. (48)
| | Prediction \(\rho\) | Retrodiction \(\hat{E}\) |
| --- | --- | --- |
| beam-splitter interaction | Coherent | Squeezed |
| two-mode squeezing int. | Squeezed | Coherent |

Table 1: Summary of the predicted quantum state and the retrodictive POVM realized for a single mode coupled via a beam splitter or a two-mode squeezing interaction to the monitoring field.
Figure 3: Schematic of a micromechanical membrane coupled to a driven cavity with coupling strength \(g\). Before entering the cavity the linearly polarized driving field is transmitted through a polarizing beam-splitter (PBS) and quarter-wave plate (QWP). After interaction with the cavity and membrane the outgoing light again passes the QWP, such that it becomes orthogonally polarized to the incoming light. It is reflected off the PBS and superposed on a second beam splitter with a strong local oscillator (LO) to perform homodyne or heterodyne detection with detection efficiency \(\eta\). The membrane is additionally coupled to a thermal bath with rate \(\gamma\) and mean phonon number \(\bar{n}\).
studied in Sec. IV on the decaying cavity we consider here a slightly more general setup where the local oscillator frequency \(\omega_{\text{lo}}\) may be detuned from the driving frequency \(\omega_{\text{0}}\), captured by \(\Delta_{\text{lo}}=\omega_{\text{lo}}-\omega_{\text{0}}\). This realizes a measurement of the outgoing field quadrature operator \(\hat{a}_{\text{out}}(t)\text{e}^{-i\Delta_{\text{lo}}t+i\phi_{\text{lo}}}+\hat{ a}_{\text{out}}^{\dagger}(t)\text{e}^{i\Delta_{\text{lo}}t-i\phi_{\text{lo}}}\), where \(\phi_{\text{lo}}\) is the tunable phase of the local oscillator. This yields a conditional master equation for cavity and mechanics,
\[\begin{split}\textbf{(I)}\;\text{d}\rho_{\text{mc}}(t)&=-i[\hat{ H}_{\text{lin}},\rho_{\text{mc}}(t)]\text{d}t+\kappa\mathcal{D}[\hat{a}_{\text{c}}] \rho_{\text{mc}}(t)\text{d}t\\ &\quad\quad+\mathcal{L}_{\text{th}}\rho_{\text{mc}}(t)\text{d}t \\ &\quad\quad+\sqrt{\eta\kappa}\mathcal{H}\big{[}\hat{a}_{\text{c}} \text{e}^{i(\Delta_{\text{c}}+\Delta_{\text{lo}})t-i\phi_{\text{lo}}}\big{]}\rho_{\text{mc}}(t) \text{d}W(t),\end{split} \tag{71}\]
where \(\eta\in[0,1]\) is the detection efficiency.
We would like an effective master equation for the mechanics alone. To this end one can start from the combined master equation Eq. (71) and move to an interaction picture with respect to \(\hat{H}_{0}\). Assuming the cavity field decays fast on the time-scale set by the optomechanical interaction, \(g/\kappa\ll 1\), one can adiabatically eliminate the cavity dynamics from the description. For details of this procedure see [67, 8]. But before we state the result let us take a closer look at the optomechanical interaction.
### Optomechanical interaction
The linearized radiation-pressure interaction is given by the last term in Eq. (68). The interaction decomposes into two terms: (i) a beam-splitter (BS) coupling \(g(\hat{a}\hat{a}_{\text{c}}^{\dagger}+\hat{a}^{\dagger}\hat{a}_{\text{c}})\) and (ii) a two-mode squeezing (TMS) part \(g(\hat{a}\hat{a}_{\text{c}}+\hat{a}^{\dagger}\hat{a}_{\text{c}}^{\dagger})\). These give rise to Stokes and anti-Stokes scattering processes depicted in Fig. 4. If we work in an interaction picture with respect to \(\hat{H}_{0}\) these terms oscillate at frequencies \(\Omega_{\text{m}}\pm\Delta_{\text{c}}\). For a _red-detuned_ drive, \(\Delta_{\text{c}}=-\Omega_{\text{m}}\), the BS interaction becomes resonant and is thus enhanced while the TMS interaction oscillates quickly at \(2\Omega_{\text{m}}\) and is suppressed. For a _blue-detuned_ drive, \(\Delta_{\text{c}}=\Omega_{\text{m}}\), the situation is reversed so the TMS interaction is enhanced and the BS interaction suppressed. For a resonant drive, \(\Delta_{\text{c}}=0\), both processes contribute equally.
As we have seen in the initial example in Section IV, the entangling TMS interaction enhances our ability to prepare a conditional mechanical state. Because the outgoing light is entangled with the mechanics, performing a quantum-limited squeezed detection will also project the oscillator onto a squeezed state. On the other hand, the BS interaction generates light with the mechanical state swapped onto it. Observing it lets us determine what the state was before the interaction but will not enable the preparation of squeezed states. For retrodiction the situation is reversed. Extracting information about the system in the past from BS light produces squeezed effect operators (sharp measurements) on the past state, while entangled TMS light lets us retrodict coherent effect operators at best. Thus TMS (blue drive) enhances our ability to prepare while the BS interaction (red drive) enhances our ability to retrodict.
### Master equation of the mechanics
In [67, 8] the master equation Eq. (71) is turned into an effective evolution equation for the mechanical state \(\rho_{\text{m}}\equiv\rho\) through adiabatic elimination of the cavity mode. Since the result is not a proper Lindblad master equation one needs to perform a rotating wave approximation for which we integrate the dynamics over a short time,
\[\textbf{(I)}\;\delta\rho(t):=\int_{t}^{t+\delta t}\text{d}\rho(\tau). \tag{72}\]
We are interested here in the case of mechanical oscillators with high quality factors \(Q=\Omega_{\text{m}}/\gamma\) where \(\Omega_{\text{m}}\) is much larger than other system frequencies set by the optomechanical interaction and decoherence, i. e., \(\Omega_{\text{m}}\gg g^{2}/\kappa,\;\bar{n}\gamma\). In fact, we assume \(\Omega_{\text{m}}\) is so much larger that we can choose \(\delta t\) such that \(\Omega_{\text{m}}\gg 1/\delta t\gg g^{2}/\kappa,\;\bar{n}\gamma\), which allows us to pull \(\rho(t)\) out of all deterministic time integrals since it is approximately constant on this time-scale. Note that this requires \(Q\gg\bar{n}\) to be fulfilled with a safe margin. We emphasize that sideband resolution (\(\Omega_{\text{m}}\gg\kappa\)) is not required for the following. We can then perform the rotating wave approximation by dropping all off-resonant terms oscillating at \(\pm 2\Omega_{\text{m}}\). Choosing the right
Figure 4: Top: schematic conversion processes occurring in the optomechanical setup depicted in Fig. 3, that scatter cavity photons at frequency \(\omega_{\text{c}}\) into the outgoing sidebands while creating or annihilating a mechanical phonon at frequency \(\Omega_{\text{m}}\). Bottom: the spectrum of the outgoing light (not to scale). As discussed in Sec. V.2 the linearized optomechanical interaction facilitates two processes: Two-mode squeezing (TMS) converts a cavity photon into a phonon and an outgoing photon in the lower (red) sideband at \(\omega_{\text{c}}-\Omega_{\text{m}}\). The beam splitter (BS) interaction combines a cavity photon and a phonon to produce an outgoing photon in the upper (blue) sideband at \(\omega_{\text{c}}+\Omega_{\text{m}}\).
phase \(\phi_{\mathrm{lo}}\) and quadrature frame, we find
\[\begin{split}\textbf{(I)}\ \delta\rho(t)&=\Gamma_{-}\mathcal{D}[\hat{a}]\rho(t)\delta t+\Gamma_{+}\mathcal{D}[\hat{a}^{\dagger}]\rho(t)\delta t\\ &\quad+\sqrt{\eta}\int_{t}^{t+\delta t}\mathcal{H}\big{[}\hat{C}(\tau;\Delta_{\mathrm{lo}})\big{]}\rho(\tau)\mathrm{d}W(\tau)\\ &\quad+\mathcal{L}_{\mathrm{th}}\rho(t)\delta t,\end{split} \tag{73a}\]

with the time-dependent measurement operator

\[\begin{split}\hat{C}(\tau;\Delta_{\mathrm{lo}})&:=\sqrt{\Gamma_{-}}\hat{a}\mathrm{e}^{-i(\Omega_{\mathrm{eff}}-\Delta_{\mathrm{lo}})\tau}\\ &\quad+\sqrt{\Gamma_{+}}\hat{a}^{\dagger}\mathrm{e}^{i(\Omega_{\mathrm{eff}}+\Delta_{\mathrm{lo}})\tau}.\end{split} \tag{73b}\]

The effective mechanical frequency

\[\Omega_{\mathrm{eff}}:=\Omega_{\mathrm{m}}-\sqrt{2}g^{2}(\beta_{+}+\beta_{-}), \tag{74a}\]
\[\beta_{\pm}:=\frac{\Delta_{\mathrm{c}}\pm\Omega_{\mathrm{m}}}{(\kappa/2)^{2}+(\Delta_{\mathrm{c}}\pm\Omega_{\mathrm{m}})^{2}} \tag{74b}\]
results from a shift of \(\Omega_{\mathrm{m}}\) due to the optical spring effect, and the rates
\[\Gamma_{\pm}:=\frac{g^{2}\kappa}{(\kappa/2)^{2}+(-\Delta_{\mathrm{c}}\pm \Omega_{\mathrm{m}})^{2}} \tag{75}\]
are the usual Stokes and anti-Stokes rates known from sideband cooling. From these we can define two effective cooperativities
\[C_{\pm}:=\Gamma_{\pm}/\gamma=C_{\mathrm{cl}}\frac{\kappa^{2}}{\kappa^{2}+4(- \Delta_{\mathrm{c}}\pm\Omega_{\mathrm{m}})^{2}}, \tag{76}\]
in terms of the classical cooperativity
\[C_{\mathrm{cl}}=\frac{4g^{2}}{\kappa\gamma}. \tag{77}\]
Each \(C_{\pm}\) compares the rate of the respective (anti-)Stokes process to the incoherent coupling rate of the thermal bath. In the regime \(\kappa\gg\Omega_{\mathrm{m}}\) of a broad cavity [68] and assuming \(\kappa\gg\Delta_{\mathrm{c}}\) the cooperativities reduce to the classical cooperativity, \(C_{\pm}\approx C_{\mathrm{cl}}\). As an example for the orders of magnitude involved here consider a recent experiment [11], which realized \(C\approx C_{\mathrm{cl}}\sim 10^{7}\) and for \(\bar{n}\sim 10^{5}\) a corresponding _quantum cooperativity_[66]
\[C_{q}:=\frac{C}{\bar{n}+1}\sim 10^{2}. \tag{78}\]
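For orientation, the snippet below evaluates Eqs. (75)-(78) for a made-up parameter set (all rates in units of \(\Omega_{\mathrm{m}}\)); the numbers are illustrative and not those of the cited experiment.

```python
import numpy as np

# Made-up parameters, all rates in units of Omega_m.
Omega_m = 1.0
kappa = 10.0 * Omega_m       # broad (unresolved-sideband) cavity
g = 0.05 * Omega_m           # linearized coupling
gamma = 1e-6 * Omega_m       # mechanical linewidth
nbar = 1e2                   # bath occupation
Delta_c = 0.0                # resonant drive

Gamma_p, Gamma_m = (g**2 * kappa / ((kappa / 2)**2 + (-Delta_c + s * Omega_m)**2)
                    for s in (+1, -1))           # Eq. (75)
C_cl = 4 * g**2 / (kappa * gamma)                # Eq. (77)
C_q = (Gamma_p / gamma) / (nbar + 1)             # Eqs. (76) and (78)

print(f"Gamma_+ = {Gamma_p:.3e}, Gamma_- = {Gamma_m:.3e}")
print(f"C_cl = {C_cl:.3e}, C_q = {C_q:.2f}")
```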
To obtain a proper master equation from Eq. (73) we still need to perform the integral over the measurement term, which depends on the choice of \(\Delta_{\mathrm{lo}}\). But Eq. (73) already illustrates the point we made in Sec. V.2: detuning the driving field affects the optomechanical interaction. Driving on resonance, \(\Delta_{\mathrm{c}}=0\), TMS and BS interaction occur with equal strength which is reflected by \(\Gamma_{+}=\Gamma_{-}\). A blue drive, \(\Delta_{\mathrm{c}}=\Omega_{\mathrm{m}}\), enhances TMS and causes \(\Gamma_{+}>\Gamma_{-}\), while a red drive, \(\Delta_{\mathrm{c}}=-\Omega_{\mathrm{m}}\), enhances the BS interaction and causes \(\Gamma_{-}>\Gamma_{+}\). Additionally, we can tune the local oscillator either to resonantly detect at the driving frequency, \(\Delta_{\mathrm{lo}}=0\), or to either the blue or the red sideband, \(\Delta_{\mathrm{lo}}=\pm\Omega_{\mathrm{eff}}\). We will explore these different dynamics step by step, starting with a resonant drive and resonant detection in the following section, then considering detection of the sidebands in Sec. V.5, and finally treating an off-resonant drive with sideband detection in Sec. V.6.
### Drive and detect on resonance
We start by exploring a cavity driven on resonance, \(\Delta_{\mathrm{c}}=0\), so we find equal rates \(\Gamma_{+}=\Gamma_{-}=:\Gamma\), equal cooperativities \(C:=C_{+}=C_{-}\) with
\[C=C_{\mathrm{cl}}\frac{\kappa^{2}}{\kappa^{2}+4\Omega_{\mathrm{m}}^{2}}, \tag{79}\]
and \(\Omega_{\mathrm{eff}}=\Omega_{\mathrm{m}}\). The first detection scheme we consider is homodyne detection on resonance, \(\Delta_{\mathrm{lo}}=0\). Plugging this into Eq. (73) yields the measurement operator
\[\hat{C}(\tau;\Delta_{\mathrm{lo}}=0) :=\sqrt{\Gamma}\big{(}\hat{a}\mathrm{e}^{-i\Omega_{\mathrm{m}} \tau}+\hat{a}^{\dagger}\mathrm{e}^{i\Omega_{\mathrm{m}}\tau}\big{)} \tag{80}\] \[=\sqrt{2\Gamma}\big{(}\hat{x}\cos(\Omega_{\mathrm{m}}\tau)+\hat{p }\sin(\Omega_{\mathrm{m}}\tau)\big{)}, \tag{81}\]
with \(\hat{x}=(\hat{a}+\hat{a}^{\dagger})/\sqrt{2}\) and \(\hat{p}=-i(\hat{a}-\hat{a}^{\dagger})/\sqrt{2}\). Using again that we can pull \(\rho(t)\) out of the integrals we find the master equation
\[\textbf{(I)}\ \delta\rho(t) =\mathcal{L}_{\mathrm{th}}\rho(t)\delta t+\Gamma\mathcal{D}[\hat{a}]\rho(t )\delta t+\Gamma\mathcal{D}[\hat{a}^{\dagger}]\rho(t)\delta t\] \[\quad+\sqrt{\eta\Gamma}\mathcal{H}[\hat{x}]\rho(t)\delta W_{ \mathrm{c}}(t) \tag{82}\] \[\quad+\sqrt{\eta\Gamma}\mathcal{H}[\hat{p}]\rho(t)\delta W_{ \mathrm{s}}(t)\]
with the coarse-grained Wiener increments
\[\textbf{(I)}\ \delta W_{\mathrm{c}}(t) :=\sqrt{2}\int_{t}^{t+\delta t}\cos(\Omega_{\mathrm{m}}\tau) \mathrm{d}W(\tau), \tag{83a}\] \[\textbf{(I)}\ \delta W_{\mathrm{s}}(t) :=\sqrt{2}\int_{t}^{t+\delta t}\sin(\Omega_{\mathrm{m}}\tau) \mathrm{d}W(\tau). \tag{83b}\]
It turns out that these are approximately normalized, \(\delta W_{\mathrm{c}}^{2}(t)=\delta t(1+\mathcal{O}(\Omega_{\mathrm{m}}\delta t )^{-1})\) and \(\delta W_{\mathrm{s}}^{2}(t)=\delta t(1+\mathcal{O}(\Omega_{\mathrm{m}}\delta t )^{-1})\), and independent \(\delta W_{\mathrm{c}}(t)\delta W_{\mathrm{s}}(t)=\delta t\mathcal{O}(\Omega_{ \mathrm{m}}\delta t)^{-1}\). Thus we can replace \(\delta t\to\mathrm{d}t\), \(\delta\rho\to\mathrm{d}\rho\) and \(\delta W_{\mathrm{c}/\mathrm{s}}\to\mathrm{d}W_{\mathrm{c}/\mathrm{s}}\) to obtain the effective system dynamics
\[\textbf{(I)}\ \mathrm{d}\rho(t) =\mathcal{L}_{\mathrm{th}}\rho(t)\mathrm{d}t+\Gamma\mathcal{D}[\hat{x}]\rho( t)\mathrm{d}t+\Gamma\mathcal{D}[\hat{p}]\rho(t)\mathrm{d}t \tag{84}\] \[\quad+\sqrt{\eta\Gamma}\mathcal{H}[\hat{x}]\rho(t)\mathrm{d}W_{\mathrm{c} }(t)+\sqrt{\eta\Gamma}\mathcal{H}[\hat{p}]\rho(t)\mathrm{d}W_{\mathrm{s}}(t)\]
with independent Wiener increments \(\mathrm{d}W_{\mathrm{c}}(t)\) and \(\mathrm{d}W_{\mathrm{s}}(t)\).
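The claimed normalization and independence of the coarse-grained increments (83) can be checked numerically; the sketch below uses arbitrary placeholder values for \(\Omega_{\mathrm{m}}\), the fine step \(\mathrm{d}t\) and the window \(\delta t\), chosen so that \(\Omega_{\mathrm{m}}\delta t\gg 1\).

```python
import numpy as np

rng = np.random.default_rng(4)

Omega_m = 2 * np.pi * 1e3     # fast mechanical frequency
dt = 1e-5                     # fine step resolving the oscillation
steps_per_window = 1000       # delta_t = 1e-2, so Omega_m * delta_t ~ 63 >> 1
windows = 500

dWc, dWs = [], []
t = 0.0
for _ in range(windows):
    tau = t + dt * np.arange(steps_per_window)
    dW = rng.normal(scale=np.sqrt(dt), size=steps_per_window)
    dWc.append(np.sqrt(2) * np.sum(np.cos(Omega_m * tau) * dW))  # Eq. (83a)
    dWs.append(np.sqrt(2) * np.sum(np.sin(Omega_m * tau) * dW))  # Eq. (83b)
    t += steps_per_window * dt

dWc, dWs = np.array(dWc), np.array(dWs)
delta_t = steps_per_window * dt
print("var(dWc)/delta_t =", np.var(dWc) / delta_t)         # ~ 1
print("var(dWs)/delta_t =", np.var(dWs) / delta_t)         # ~ 1
print("corr(dWc, dWs)   =", np.corrcoef(dWc, dWs)[0, 1])   # ~ 0
```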
#### v.4.1 Conditional state evolution
Using the notation of Sec. II.2 we find \(H=\mathbb{0}_{2}\), \(\Delta=(\Gamma+\frac{1}{2}\gamma(2\bar{n}+1))\mathbb{1}_{2}\) and \(\Omega=\frac{1}{2}\gamma\sigma\), as well as the
measurement matrices \(A=\sqrt{\eta\Gamma}\mathbb{1}_{2}\), and \(B=\mathbb{0}_{2}\). As before we solve \(\dot{V}_{\rho}=0\) to obtain the steady state covariance \(V_{xp}^{\rho}=0\) and equal variances
\[V_{\rho}^{\infty} :=V_{xx}^{\rho}=V_{pp}^{\rho}\] \[=\frac{1}{4\eta C}\Big{(}\sqrt{1+8\eta C(2C+2\bar{n}+1)}-1\Big{)} \tag{85}\]
in terms of the cooperativity Eq. (79). The purity is simply the inverse of the variance, \(\mathcal{P}(\rho)=1/V_{\rho}^{\infty}\), so it suffices to consider \(V_{\rho}^{\infty}\). Note that as \(\eta\to 0\) the variance approaches its thermal state value \(V_{\rho}^{\infty}\to 2\bar{n}+1+2C\). From the covariance matrix we can compute the conditioned drift matrix from Eq. (19) which turns out to be diagonal,
\[M_{\rho}^{\infty}=\lambda_{\rho}\mathbb{1}_{2},\;\;\;\lambda_{\rho}=-\frac{\gamma}{2 }\sqrt{1+8\eta C(2C+2\bar{n}+1)}. \tag{86}\]
The degenerate eigenvalue \(\lambda_{\rho}\) is always real, and negative as long as \(\gamma\) or \(\eta\Gamma\) are non-zero and thus guarantees stable dynamics. We obtain the mode functions with which the cosine and sine components of the measurement current are Ito-integrated in Eq. (23) by evaluating the kernel
\[\begin{bmatrix}f_{xc}(t)&f_{xs}(t)\\ f_{pc}(t)&f_{ps}(t)\end{bmatrix}=\mathrm{e}^{M_{\rho}^{\infty}t}\big{(}V_{\rho }^{\infty}A^{\mathrm{T}}-\sigma B^{\mathrm{T}}\big{)}. \tag{87}\]
We find that \(f_{xs}(t)=f_{pc}(t)=0\) and
\[f_{\rho}(t):=f_{xc}(t)=f_{ps}(t)=\sqrt{\eta\Gamma}V_{\rho}^{\infty}\mathrm{e} ^{\lambda_{\rho}t}, \tag{88}\]
which shows that the cosine and sine components of the photocurrent each only enter the corresponding (\(\hat{x}\) or \(\hat{p}\)) quadrature.
In the following we assume that \(\eta C\gg 1\) and \(\bar{n}\gg 1\) so \(\bar{n}+1\approx\bar{n}\). In terms of the quantum cooperativity [66]
\[C_{q}=\frac{C}{\bar{n}+1}\approx\frac{C}{\bar{n}} \tag{89}\]
we find the variance
\[V_{\rho}^{\infty}\approx\sqrt{\frac{C_{q}+1}{\eta C_{q}}}=\frac{1}{\sqrt{\eta }}\sqrt{1+\frac{1}{C_{q}}}, \tag{90}\]
plotted in Fig. 5 (a), and the mode function damping rate is given by
\[\lambda_{\rho}\approx-2\eta\Gamma\sqrt{\frac{C_{q}+1}{\eta C_{q}}}. \tag{91}\]
The equal variances in Eq. (85) and vanishing covariance indicate that we prepare a thermal steady state, which approaches a pure coherent state as \(\eta\to 1\) and \(C_{q}\to\infty\), as we see from the limiting expression Eq. (90) and also from the purity plot in Fig. 5 (b). The exponent \(\lambda_{\rho}\) in Eqs. (86) and (91) determines how fast the mode functions Eq. (88) decay, and thereby the "memory time" of the conditional state. In the regime where \(C_{q}\gg 1\;\Leftrightarrow\;\Gamma\gg\gamma(\bar{n}+1)\) we find \(\lambda_{\rho}\approx-2\sqrt{\eta}\Gamma\) so the mode function is only determined by the measurement rate. If \(\Gamma\) is much larger than typical evolution time-scales it becomes sharply peaked at \(t\), so the conditional state essentially follows the measurement current in real time. However, \(\Gamma\) must stay well below \(\Omega_{\mathrm{m}}\) or it violates the assumptions underlying our coarse-graining. In the opposite regime of \(C_{q}\ll 1\;\Leftrightarrow\;\Gamma\ll\gamma(\bar{n}+1)\) the exponent is given by \(\lambda_{\rho}\approx-2\sqrt{\eta\Gamma\gamma(\bar{n}+1)}\). As \(\Gamma\to 0\) the mode function becomes essentially flat but also goes to zero itself. In this limit the detection will yield mostly noise and only little signal, so the evolution becomes effectively unconditional.
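As a quick consistency check, one can compare the exact steady-state variance, Eq. (85), with its approximation Eq. (90) over a range of parameters; the sample values of \(\eta\), \(C_{q}\) and \(\bar{n}\) below are arbitrary.

```python
import numpy as np

def V_exact(eta, C, nbar):
    """Exact steady-state conditional variance, Eq. (85)."""
    return (np.sqrt(1 + 8 * eta * C * (2 * C + 2 * nbar + 1)) - 1) / (4 * eta * C)

def V_approx(eta, Cq):
    """Large-C, large-nbar approximation, Eq. (90)."""
    return np.sqrt((Cq + 1) / (eta * Cq))

nbar = 1e5   # placeholder bath occupation
for eta in (0.3, 0.8):
    for Cq in (0.1, 1.0, 100.0):
        C = Cq * (nbar + 1)
        print(f"eta={eta}, Cq={Cq:6.1f}: "
              f"exact={V_exact(eta, C, nbar):.4f}, approx={V_approx(eta, Cq):.4f}")
```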
Figure 5: (a) Log-linear plot of the steady state variance \(V_{\rho}^{\infty}\) (same for \(\hat{x}\) and \(\hat{p}\)) from Eq. (85) obtained for homodyne detection on resonance, plotted against detection efficiency \(\eta\) and quantum cooperativity \(C_{q}=C/(\bar{n}+1)\). We chose a bath occupation of \(\bar{n}\sim 10^{5}\)[11], which entails \(C\sim 10^{3}\dots 10^{7}\) in the plotted regime. The exact variance is virtually indistinguishable from its approximate value Eq. (90) because the difference goes as \(\sim 1/C\). The plot is also indistinguishable from the exact and approximate variances \(V_{E}^{\infty}\) of the effect operator in Eqs. (92) and (93). (b) The purity of the covariance matrix corresponding to the variance in (a).
#### v.4.2 Retrodiction of POVM elements
We obtain the asymptotic effect operator by solving the Riccati equation resulting from Eq. (43). Again \(V_{xp}^{E}=0\) and
\[V_{E}^{\infty} :=V_{xx}^{E}=V_{pp}^{E}\] \[=\frac{1}{4\eta C}\Big{(}\sqrt{1+8\eta C(2C+2\bar{n}+1)}+1\Big{)} \tag{92}\] \[\approx\sqrt{\frac{C_{q}+1}{\eta C_{q}}}, \tag{93}\]
so we find effect operators with equal variance, which corresponds to a POVM realizing a heterodyne measurement.
The asymptotic variance of the retrodicted effect operator is strictly greater than the asymptotic variance of the conditional state, \(V_{E}^{\infty}-V_{\rho}^{\infty}=1/(2\eta C)\). The difference vanishes as \(C\to\infty\) so the limits Eq. (90) and Eq. (93) are the same, and the plot in Fig. 5 (a) also holds for \(V_{E}^{\infty}\). As expected, the exact \(V_{E}^{\infty}\) in Eq. (92) diverges without observations: \(V_{E}^{\infty}\sim 1/(2\eta C)\) as \(\eta\to 0\). Otherwise the forward and backward dynamics are very similar: we find the same drift matrix as in Eq. (86) with a degenerate negative eigenvalue \(\lambda_{E}=\lambda_{\rho}\), and the mode function takes the same form as before,
\[f_{E}(t)=\sqrt{\eta\Gamma}V_{E}^{\infty}\mathrm{e}^{\lambda_{E}t}, \tag{94}\]
with the strictly greater variance \(V_{E}^{\infty}\) placing more weight on the backward optical mode compared to the evolution of the conditional state. Assuming \(C,\bar{n}\gg 1\), forward and backward mode functions become identical.
For both preparation and retrodiction we see that we can never measure or prepare states with sub-shot noise resolution. In fact, in the ideal limit of perfect detection, \(\eta\to 1\), and large cooperativity, \(C_{q}\to\infty\), both \(V_{\rho}^{\infty}\) and \(V_{E}^{\infty}\) approach 1 so we can at best measure and prepare coherent states. This symmetry is not surprising since detecting on resonance means both TMS and BS interaction contribute equally to the observed light. The situation is different when the local oscillator is resonant with either of the sidebands.
### Drive on resonance, detect sidebands
We now detune the local oscillator with respect to the driving laser, \(\Delta_{\mathrm{lo}}=\pm\Omega_{\mathrm{m}}\), to resolve the information contained in the sidebands located at \(\omega_{\mathrm{c}}\pm\Omega_{\mathrm{m}}\). Recalling the general Eq. (73), we see that detecting the red sideband, \(\Delta_{\mathrm{lo}}=-\Omega_{\mathrm{m}}\), makes \(\hat{a}^{\dagger}\) resonant while \(\hat{a}\) oscillates at \(-2\Omega_{\mathrm{m}}\), and yields the measurement operator
\[\hat{C}(\tau;\Delta_{\mathrm{lo}}=-\Omega_{\mathrm{m}})=\sqrt{\Gamma}\big{(} \hat{a}\mathrm{e}^{-2i\Omega_{\mathrm{m}}\tau}+\hat{a}^{\dagger}\big{)}. \tag{95}\]
Resonant detection of the blue sideband with \(\Delta_{\mathrm{lo}}=\Omega_{\mathrm{m}}\) analogously makes \(\hat{a}\) resonant and results in
\[\hat{C}(\tau;\Delta_{\mathrm{lo}}=\Omega_{\mathrm{m}})=\sqrt{\Gamma}\big{(} \hat{a}+\hat{a}^{\dagger}\mathrm{e}^{2i\Omega_{\mathrm{m}}\tau}\big{)}. \tag{96}\]
Thus, after coarse-graining we expect to better see an effect of the TMS interaction on the red sideband, and of the BS interaction on the blue sideband. To evaluate the integrals in Eq. (73) we introduce
**(I)**: \[\delta W_{0}(t) :=\int_{t}^{t+\delta t}\mathrm{d}W(\tau),\] (97a) **(I)**: \[\delta W_{\mathrm{c},2}(t) :=\sqrt{2}\int_{t}^{t+\delta t}\cos(2\Omega_{\mathrm{m}}\tau) \mathrm{d}W(\tau),\] (97b) **(I)**: \[\delta W_{\mathrm{s},2}(t) :=\sqrt{2}\int_{t}^{t+\delta t}\sin(2\Omega_{\mathrm{m}}\tau) \mathrm{d}W(\tau),\] (97c)
analogous to Eqs. (83) which separate the photocurrent oscillating at twice the mechanical frequency from its DC component (at the given sideband frequency). As before these are approximately normalized and independent of one another (up to \(\mathcal{O}(\Omega_{\mathrm{m}}\delta t)^{-1}\)) so we treat them as independent Wiener increments. Making the replacements \(\delta t\to\mathrm{d}t\) and \(\delta W_{\alpha}\to\mathrm{d}W_{\alpha}\) we obtain two coarse-grained master equations depending on the choice of \(\Delta_{\mathrm{lo}}=\pm\Omega_{\mathrm{m}}\).
#### v.5.1 Detecting the red sideband
We first consider the local oscillator tuned to the red sideband, \(\Delta_{\mathrm{lo}}=-\Omega_{\mathrm{m}}\). This yields the coarse-grained master equation
\[\begin{split}\textbf{(I)}\ \mathrm{d}\rho(t)&=\mathcal{L}_{ \mathrm{th}}\rho(t)\mathrm{d}t+\Gamma\mathcal{D}[\hat{x}]\rho(t)\mathrm{d}t+\Gamma \mathcal{D}[\hat{p}]\rho(t)\mathrm{d}t\\ &\quad+\sqrt{\eta\Gamma}\mathcal{H}[\hat{a}]\rho(t)\mathrm{d}W_{ \mathrm{c},2}(t)\\ &\quad-\sqrt{\eta\Gamma}\mathcal{H}[i\hat{a}]\rho(t)\mathrm{d}W_{ \mathrm{s},2}(t)\\ &\quad+\sqrt{\eta\Gamma}\mathcal{H}[\hat{a}^{\dagger}]\rho(t) \mathrm{d}W_{0}(t).\end{split} \tag{98}\]
Analogously to the case of resonant detection we can use the Gaussian formalism to compute the conditional steady state variances,
\[V_{xx}^{\rho} =\frac{1}{3\eta C}\Big{(}\sqrt{1+4\eta C((3-2\eta)C+3\bar{n}+2)} -1\Big{)}-\frac{1}{3}, \tag{99}\] \[V_{pp}^{\rho} =\frac{1}{\eta C}\Big{(}\sqrt{1+4\eta C(C+\bar{n})}-1\Big{)}+1, \tag{100}\]
which for \(C,\bar{n}\gg 1\) become approximately
\[V_{xx}^{\rho} \approx\frac{2}{3}\sqrt{\frac{(3-2\eta)C_{q}+3}{\eta C_{q}}}-\frac {1}{3}, \tag{101}\] \[V_{pp}^{\rho} \approx 2\sqrt{\frac{C_{q}+1}{\eta C_{q}}}+1. \tag{102}\]
To find the corresponding Gaussian effect operators realizable through retrodiction we could translate the full master equation above to an effect equation and then apply the Gaussian formalism as before. Instead we take the shortcut of directly reading off the Riccati equation
Eq. (43) from the corresponding Riccati equation of the conditional state. Solving it yields the asymptotic variances
\[V^{E}_{xx} =\frac{1}{3\eta C}\Big{(}\sqrt{1+4\eta C((3-2\eta)C+3\bar{n}+2)}+1 \Big{)}+\frac{1}{3} \tag{103}\] \[V^{E}_{pp} =\frac{1}{\eta C}\Big{(}\sqrt{1+4\eta C(C+\bar{n})}+1\Big{)}-1, \tag{104}\]
which for \(C,\bar{n}\gg 1\) approach
\[V^{E}_{xx} \approx\frac{2}{3}\sqrt{\frac{(3-2\eta)C_{q}+3}{\eta C_{q}}}+ \frac{1}{3}, \tag{105}\] \[V^{E}_{pp} \approx 2\sqrt{\frac{C_{q}+1}{\eta C_{q}}}-1. \tag{106}\]
Considering the ideal limit \(\eta\to 1\) and \(C_{q}\to\infty\) we find
\[V^{E}_{xx} \to 1, V^{E}_{pp} \to 1 \tag{107}\]
for the effect operator, so at best we retrodict POVMs that project onto coherent states. On the other hand, we find
\[V^{\rho}_{xx} \to\frac{1}{3}, V^{\rho}_{pp} \to 3 \tag{108}\]
for the conditional steady state, showing that we can in principle prepare squeezed states. Necessary conditions for going below shot noise in the preparation are \(C_{q}>1\) and \(\eta>1/2\) since
\[V^{\rho}_{xx}<1\quad\Leftrightarrow\quad\eta>\frac{C+\bar{n}}{2C}\approx\frac{1}{2}\bigg{(}1+\frac{1}{C_{q}}\bigg{)}, \tag{109}\]
which is confirmed by the plot of \(V^{\rho}_{xx}\) in Fig. 6 (a). However, even with one quadrature below shot noise the prepared state will never be entirely pure as seen in Fig. 6 (b).
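The closed-form variances above are easy to check numerically. The sketch below (Python; the values of \(C\), \(\bar{n}\), and \(\eta\) are illustrative, not taken from an experiment) evaluates the exact expressions Eqs. (99)–(100), compares them with the large-\(C\) approximations Eqs. (101)–(102), and verifies the squeezing threshold Eq. (109).

```python
import numpy as np

def v_xx_rho(C, nbar, eta):
    """Exact conditional x-variance, Eq. (99)."""
    return (np.sqrt(1 + 4*eta*C*((3 - 2*eta)*C + 3*nbar + 2)) - 1) / (3*eta*C) - 1/3

def v_pp_rho(C, nbar, eta):
    """Exact conditional p-variance, Eq. (100)."""
    return (np.sqrt(1 + 4*eta*C*(C + nbar)) - 1) / (eta*C) + 1

C, nbar, eta = 1e6, 1e3, 0.9          # illustrative values with C, nbar >> 1
Cq = C / (nbar + 1)

# exact vs large-C approximations, Eqs. (101) and (102)
print(v_xx_rho(C, nbar, eta), 2/3 * np.sqrt(((3 - 2*eta)*Cq + 3) / (eta*Cq)) - 1/3)
print(v_pp_rho(C, nbar, eta), 2 * np.sqrt((Cq + 1) / (eta*Cq)) + 1)

# squeezing threshold, Eq. (109): V_xx < 1 iff eta > (C + nbar)/(2C)
eta_c = (C + nbar) / (2 * C)
print(v_xx_rho(C, nbar, eta_c + 1e-3) < 1)   # True, just above threshold
print(v_xx_rho(C, nbar, eta_c - 1e-3) < 1)   # False, just below threshold
```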
#### v.5.2 Detecting the blue sideband
Tuning the local oscillator to the blue sideband, \(\Delta_{\rm lo}=+\Omega_{\rm m}\), we find the master equation
**(I)**: \[\mathrm{d}\rho(t) =\mathcal{L}_{\rm th}\rho(t)\mathrm{d}t+\Gamma\mathcal{D}[\hat{x}]\rho(t) \mathrm{d}t+\Gamma\mathcal{D}[\hat{p}]\rho(t)\mathrm{d}t\] (110) \[\quad+\sqrt{\eta\Gamma}\mathcal{H}[\hat{a}^{\dagger}]\rho(t) \mathrm{d}W_{\rm c,2}(t)\] \[\quad+\sqrt{\eta\Gamma}\mathcal{H}[\hat{a}^{\dagger}]\rho(t) \mathrm{d}W_{\rm s,2}(t)\] \[\quad+\sqrt{\eta\Gamma}\mathcal{H}[\hat{a}]\rho(t)\mathrm{d}W_{ \rm 0}(t),\]
which asymptotically results in a conditional state with variances
\[V^{\rho}_{xx} =\frac{1}{3\eta C}\Big{(}\sqrt{1+4\eta C((3-2\eta)C+3\bar{n}+1)} -1\Big{)}+\frac{1}{3} \tag{111}\] \[V^{\rho}_{pp} =\frac{1}{\eta C}\Big{(}\sqrt{1+4\eta C(C+\bar{n}+1)}-1\Big{)}-1, \tag{112}\]
which for \(C,\bar{n}\gg 1\) become approximately
\[V^{\rho}_{xx} \approx\frac{1}{3}\sqrt{\frac{(3-2\eta)C_{q}+3}{\eta C_{q}}}+ \frac{1}{6}, \tag{113}\] \[V^{\rho}_{pp} \approx\sqrt{\frac{C_{q}+1}{\eta C_{q}}}-\frac{1}{2}. \tag{114}\]
Figure 6: (a) Linear-logarithmic plot of the approximate steady state variances \(V^{\rho}_{xx}\) and \(V^{E}_{xx}\) from Eqs. (101) and (118), plotted against detection efficiency \(\eta\) and quantum cooperativity \(C_{q}\) in the limit of large cooperativity \(C\gg 1\). The dashed line denotes the shot noise-limited variance of the vacuum state at \(V_{xx}=1\). (b) The purity of the covariance matrices corresponding to the variances plotted in (a).
We see here that in the limit of \(\eta\to 1\) and \(C_{q}\to\infty\) the variances approach
\[V^{\rho}_{xx}\to 1, V^{\rho}_{pp}\to 1, \tag{115}\]
so we can at best prepare coherent states.
To find the corresponding effect operators we again translate the forward Riccati equation directly to a corresponding backward equation. This yields the asymptotic variances
\[V^{E}_{xx} =\frac{1}{3\eta C}\Big{(}\sqrt{1+4\eta C((3-2\eta)C+3\bar{n}+1)}+ 1\Big{)}-\frac{1}{3} \tag{116}\] \[V^{E}_{pp} =\frac{1}{\eta C}\Big{(}\sqrt{1+4\eta C(C+\bar{n}+1)}+1\Big{)}+1, \tag{117}\]
which for \(C,\bar{n}\gg 1\) become approximately
\[V^{E}_{xx} \approx\frac{2}{3}\sqrt{\frac{(3-2\eta)C_{q}+3}{\eta C_{q}}}- \frac{1}{3}, \tag{118}\] \[V^{E}_{pp} \approx 2\sqrt{\frac{C_{q}+1}{\eta C_{q}}}+1. \tag{119}\]
Considering the ideal limit \(\eta\to 1\) and \(C_{q}\to\infty\) we see that the asymptotic effect operators can in principle project onto squeezed states,
\[V^{E}_{xx} \to\frac{1}{3}, V^{E}_{pp}\to 3, \tag{120}\]
provided \(C_{q}>1\) and \(\eta>1/2\) since
\[V^{E}_{xx}<1 \Leftrightarrow \eta>\frac{C+\bar{n}+1}{2C}=\frac{1}{2}\bigg{(}1+\frac{1}{C_{q}} \bigg{)}. \tag{121}\]
Since the limiting \(\hat{x}\)-variances Eqs. (101) and (118), as well as the corresponding \(\hat{p}\)-variances, agree, the plots in Fig. 6 also hold for the effect operators retrodicted on the blue sideband.
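Analogously to the conditional-state check above, one can verify the retrodiction threshold Eq. (121) directly from the exact effect-operator variance Eq. (116). A minimal Python check with illustrative parameter values:

```python
import numpy as np

def v_xx_E(C, nbar, eta):
    """Asymptotic effect-operator x-variance for blue-sideband detection, Eq. (116)."""
    return (np.sqrt(1 + 4*eta*C*((3 - 2*eta)*C + 3*nbar + 1)) + 1) / (3*eta*C) - 1/3

C, nbar = 1e7, 1e3                    # illustrative values with C, nbar >> 1
for eta in (0.4, 0.6, 0.99):
    below_shot_noise = v_xx_E(C, nbar, eta) < 1
    threshold = eta > (C + nbar + 1) / (2 * C)   # Eq. (121)
    print(eta, round(v_xx_E(C, nbar, eta), 3), below_shot_noise, threshold)
# for eta -> 1 and large C_q the variance approaches 1/3, cf. Eq. (120)
```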
These results are summarized in Table 2: For large quantum cooperativity and resonant drive, homodyne detection of the blue (red) sideband generates coherent (squeezed) conditional states and squeezed (coherent) retrodictive POVMs. This conforms with the expectation that blue (red) sideband photons have been generated via a beam splitter (two-mode squeezing) interaction, as discussed in Sec. V.2. Thus, these two cases perform qualitatively similarly to the basic examples studied in Sec. IV and Table 1. There is, of course, a significant quantitative difference, as, e.g., the squeezed POVM realized by a resonant drive exhibits a noise reduction of only \(66\%\). A perfect quadrature measurement, such as found in Sec. IV, would require infinite squeezing. In order to achieve this, the driving field has to be detuned from cavity resonance, as will be discussed next.
### Off-resonant drive
Also very relevant in experiments is the case of an off-resonant drive, \(\Delta_{\mathrm{c}}\neq 0\), for example to perform sideband cooling or to prepare squeezed mechanical states in pulsed schemes [8]. Detuning also enables richer retrodictive dynamics since it allows one to selectively enhance and suppress the Stokes and anti-Stokes rates \(\Gamma_{\pm}\), and thus the BS and TMS components of the optomechanical interaction.
To analyze the effects of non-zero detuning we need to return to the original coarse-grained master equation Eq. (73). Evaluating the integral over the measurement term for homodyne detection of the carrier or sideband frequencies proceeds analogously to the previous sections. We only need to remember that the sidebands are now located at \(\omega_{\mathrm{c}}\pm\Omega_{\mathrm{eff}}\) with the effective frequency \(\Omega_{\mathrm{eff}}\) from Eq. (74). The Stokes and anti-Stokes rates \(\Gamma_{\pm}\) from Eq. (75) are no longer equal to a single rate
\[\Gamma=\frac{g^{2}\kappa}{(\kappa/2)^{2}+\Omega_{\mathrm{m}}^{2}}, \tag{122}\]
but can be written as
\[\Gamma_{\pm} =\Gamma f_{\pm}, \tag{123a}\] \[f_{\pm} :=\frac{1+4(\Omega_{\mathrm{m}}/\kappa)^{2}}{1+4(-\Delta_{\mathrm{ c}}\pm\Omega_{\mathrm{m}})^{2}/\kappa^{2}}. \tag{123b}\]
For a blue-detuned drive, \(\Delta_{\mathrm{c}}>0\), such that \(\Gamma_{+}\geq\Gamma_{-}+\gamma\), the mechanical dynamics are unstable. Since we are interested in stationary states obtained through continuous driving and observation, we will thus consider only a red-detuned drive, \(\Delta_{\mathrm{c}}<0\), in the following. We see that with \(\Delta_{\mathrm{c}}=-\Omega_{\mathrm{m}}\) we can enhance \(\Gamma_{-}\) by a factor \(f_{-}=1+4(\Omega_{\mathrm{m}}/\kappa)^{2}>1\) while suppressing \(\Gamma_{+}\) by \(f_{+}=(1+4(\Omega_{\mathrm{m}}/\kappa)^{2})/(1+16(\Omega_{\mathrm{m}}/\kappa)^{2})<1\). In the broad cavity regime (\(\Omega_{\mathrm{m}}/\kappa\ll 1\)) this imbalance becomes negligible, so we do not expect any benefit from a detuned drive, but whenever \(\Omega_{\mathrm{m}}/\kappa>1\) the enhancement of \(\Gamma_{-}\) greatly improves our ability to retrodict POVMs with sub-shot noise resolution, as we will now show.
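The enhancement and suppression factors are simple to tabulate. The following Python sketch implements Eq. (123b) and evaluates \(f_{\pm}\) for a red-detuned drive at \(\Delta_{\mathrm{c}}=-\Omega_{\mathrm{m}}\) in the broad-cavity and sideband-resolved regimes; the numerical values of \(\Omega_{\mathrm{m}}\) and \(\kappa\) are illustrative.

```python
import numpy as np

def f_pm(delta_c, omega_m, kappa):
    """Enhancement/suppression factors f_+ and f_-, Eq. (123b)."""
    num = 1 + 4 * (omega_m / kappa) ** 2
    f_plus = num / (1 + 4 * (-delta_c + omega_m) ** 2 / kappa ** 2)
    f_minus = num / (1 + 4 * (-delta_c - omega_m) ** 2 / kappa ** 2)
    return f_plus, f_minus

omega_m = 1.0
for kappa in (10.0, 0.2):                    # broad cavity vs sideband-resolved
    fp, fm = f_pm(-omega_m, omega_m, kappa)  # red-detuned drive, Delta_c = -Omega_m
    print(f"Omega_m/kappa = {omega_m / kappa:5.2f}: f_+ = {fp:.3f}, f_- = {fm:.3f}")
```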
Analogously to the previous sections we solve the Riccati equations for the asymptotic covariance matrices of filtered Gaussian states and retrodicted POVM elements. We will only consider \(\hat{x}\)-variances since the results can
\begin{table}
\begin{tabular}{|c||c|c|} \hline Detected Sideband & Prediction \(\rho\) & Retrodiction \(\hat{E}\) \\ \hline \hline Blue \(\Delta_{\mathrm{lo}}=\Omega_{\mathrm{m}}\) & Coherent & Squeezed \\ \hline Red \(\Delta_{\mathrm{lo}}=-\Omega_{\mathrm{m}}\) & Squeezed & Coherent \\ \hline \end{tabular}
\end{table}
Table 2: Conditional states and retrodictive POVMs generated by resonant drive and homodyne detection of the blue or red sideband.
be applied to any other quadrature by changing the local oscillator phase. Also, since we are interested in fundamental limits we consider only detection of the sidebands that are optimal for preparation and retrodiction, respectively, in the sense that they minimize the stationary variance: the red sideband, \(\Delta_{\text{lo}}=-\Omega_{\text{eff}}\), for preparation and the blue sideband, \(\Delta_{\text{lo}}=\Omega_{\text{eff}}\), for retrodiction. The solutions are conveniently expressed in terms of the classical and quantum cooperativities
\[C_{\pm} :=\frac{\Gamma_{\pm}}{\gamma}=Cf_{\pm}, \tag{124}\] \[C_{q}^{\pm} :=\frac{C_{\pm}}{\bar{n}+1}=C_{q}f_{\pm}, \tag{125}\]
where the "bare" cooperativities \(C\) and \(C_{q}\) are the same as for a resonant drive considered in the previous sections. The solution for a conditional Gaussian steady state prepared by observing the red sideband, \(\Delta_{\text{lo}}=-\Omega_{\text{eff}}\), then reads
\[V_{xx}^{\rho} =\frac{1}{\eta(C_{-}+2C_{+})}\bigg{(}-1-(1-\eta)C_{-} \tag{126a}\] \[\qquad\qquad\qquad\qquad\qquad+(1-2\eta)C_{+}+\sqrt{r}\bigg{)},\] \[r :=(C_{-}-C_{+}+1)^{2}+4\eta(3-2\eta)C_{-}C_{+}\] \[\qquad\quad+8\eta C_{+}(\bar{n}+1)+4\bar{n}\eta C_{-}. \tag{126b}\]
Here we see that in the broad cavity regime, \(\Omega_{\text{m}}/\kappa\ll 1\), where \(C_{-}\approx C_{+}\approx C\), the variance is just given by what one finds by driving on resonance. Thus the minimal variance obtained for \(\eta=1\) and \(C_{q}\to\infty\) will be given by \(V_{xx}^{\rho}\to 1/3<1\), and thus corresponds to a squeezed state. On the other hand, when \(\Omega_{\text{m}}/\kappa>1\) we find that \(C_{q}^{-}\gg C_{q}^{+}\), and \(V_{xx}^{\rho}\to 1\) so we can at best prepare coherent states. The effect of different cavity linewidths is also depicted in Fig. 7, where we see that a red-detuned drive does not help preparation as expected.
We can compare these results to the asymptotic variance of a Gaussian effect operator retrodicted by observing the blue sideband, \(\Delta_{\text{lo}}=\Omega_{\text{eff}}\), which reads
\[V_{xx}^{E} =\frac{1}{\eta(2C_{-}+C_{+})}\bigg{(}1-(1-\eta)C_{+} \tag{127a}\] \[\qquad\qquad\qquad\qquad\qquad+(1-2\eta)C_{-}+\sqrt{s}\bigg{)},\] \[s :=(C_{-}-C_{+}+1)^{2}+4\eta(3-2\eta)C_{-}C_{+}\] (127b) \[\qquad\quad+4\eta C_{+}(\bar{n}+1)+8\bar{n}\eta C_{-}\]
Here we find that to retrodict POVMs with sub-shot noise resolution, \(V_{xx}^{E}<1\), the detection efficiency must satisfy
\[\eta>\frac{1}{2}\bigg{(}1+\frac{1}{C_{q}^{-}}\bigg{)}, \tag{128}\]
and thus necessarily \(\eta>1/2\), but also \(C_{q}^{-}>1\). This is interesting because it means that with detuning \(\Delta_{c}=-\Omega_{\text{m}}\) we no longer require a large "bare" cooperativity \(C_{q}>1\) to measure with sub-shot noise resolution, but only a large product \(C_{q}(1+4(\Omega_{\text{m}}/\kappa)^{2})>1\), which can be rewritten as
\[\bigg{(}\frac{\Omega_{\text{m}}}{\kappa}\bigg{)}^{2}>\frac{1-C_{q}}{4C_{q}}. \tag{129}\]
Thus, in the sideband-resolved regime a detuned drive allows one to retrodict POVMs that beat the shot noise limit even for sub-unit quantum cooperativities. In fact, whenever \(\Omega_{\text{m}}/\kappa\gg 1\) such that \(C_{q}^{-}\gg 1\) and \(C_{q}^{-}\gg C_{q}^{+}\), the minimal variance will approach
\[V_{xx}^{E}\to\frac{1-\eta}{\eta} \tag{130}\]
as can also be seen in Fig. 8, where we plot the achievable variances for conservative values of \(C_{q}=1/2\) and \(\eta=0.77\). These results show that with an off-resonant (red-detuned) drive, and using only continuous measurements, it is possible to measure with sub-shot noise variance limited only by the detection efficiency \(\eta\).
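To reproduce the qualitative behavior of Fig. 8 one can evaluate Eq. (127) directly. In the sketch below (Python) we fix \(\eta=0.77\) and \(C_{q}=1/2\) as in the figure; the bath occupation \(\bar{n}\) is an illustrative assumption, since for \(\bar{n}\gg 1\) the result depends essentially only on \(\eta\) and the quantum cooperativities.

```python
import numpy as np

def v_xx_effect(C_m, C_p, nbar, eta):
    """Asymptotic retrodicted x-variance for blue-sideband detection, Eqs. (127)."""
    s = ((C_m - C_p + 1) ** 2 + 4 * eta * (3 - 2 * eta) * C_m * C_p
         + 4 * eta * C_p * (nbar + 1) + 8 * nbar * eta * C_m)
    return (1 - (1 - eta) * C_p + (1 - 2 * eta) * C_m + np.sqrt(s)) / (eta * (2 * C_m + C_p))

eta, Cq, nbar = 0.77, 0.5, 100.0     # eta and C_q as in Fig. 8, nbar illustrative
C = Cq * (nbar + 1)
for x in (0.1, 1.0, 10.0, 100.0):    # sideband resolution Omega_m / kappa
    f_minus = 1 + 4 * x**2                        # Delta_c = -Omega_m
    f_plus = (1 + 4 * x**2) / (1 + 16 * x**2)
    print(f"Omega_m/kappa = {x:6.1f}: V_xx^E = "
          f"{v_xx_effect(C * f_minus, C * f_plus, nbar, eta):.3f}")
print("limit (1 - eta)/eta =", round((1 - eta) / eta, 3))
```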
In summary, it is possible to perform quadrature measurements of the mechanical state with sub-shot noise variance through continuous monitoring of the cavity output. By using a red-detuned cavity drive and sufficiently efficient homodyne detection of the blue sideband of the output one achieves a squeezed retrodictive POVM realizing a quadrature measurement for the past mechanical state. In the resolved sideband limit, the quality of the quadrature measurement is essentially limited by the detection efficiency only and does not require a quantum cooperativity larger than one.
## VI Conclusion & Outlook
We have given here a self-contained introduction to the theory of retrodictive POVMs, demonstrating the potential to retrieve information about the initial quantum state of a system based on the outcomes of a continuous measurement process. The general formalism has been illustrated in detail for linear quantum systems and applied to realistic models of optomechanical systems.
The application of our theoretical framework to optomechanics has revealed promising avenues for achieving retrodictive state analysis. By characterizing achievable retrodictive POVMs in various optomechanical operating modes, such as resonant and off-resonant driving fields, we have illustrated the potential for precise retrodictive measurements of mechanical oscillators. Notably, our findings unveil the possibility of nearly ideal quadrature measurements, offering direct access to the position or momentum distribution of mechanical oscillators at specific time instances. This advancement opens doors to novel possibilities in quantum state tomography, also of Non-Gaussian states, albeit with the caveat of being inherently destructive.
We hope that this presentation will facilitate and advance the use of retrodictive POVMs also in other linear quantum systems beyond optomechanics. Extending the formalism to more complex and nonlinear systems presents an intriguing challenge. As quantum technology continues to advance, the insights gained from this work will contribute to the expanding toolkit of quantum state analysis and manipulation.
We thank Klaus Molmer, Albert Schliesser, Stefan Danilishin, Sebastian Hofer, David Reeb, Reinhard Werner und Lars Dammeier for discussions on this topic. We acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through Project-ID 274200144 - SFB 1227 (projects A06) and Project-ID 390837967 - EXC 2123.
## Appendix A Derivation of the effect equation
### General retrodiction
We now illustrate the claim made in Sec. III.2 that effect operators are themselves dynamical objects [52; 53; 54; 55; 56; 57; 58] whose time evolution is determined by that of the quantum system. Assume the system evolves according to some completely positive map \(\tilde{\mathcal{N}}\)[31] from \(t_{0}\) to \(t\) (e. g., the solution to a master equation) such that
\[\tilde{\rho}(t)=\tilde{\mathcal{N}}_{t_{0},t}[\rho(t_{0})]. \tag{10}\]
This may include closed unitary as well as open dissipative evolution. Performing continuous measurements leads to a conditional map,
\[\tilde{\rho}_{\mathcal{I}}(t)=\tilde{\mathcal{N}}_{t_{0},t,\mathcal{I}}[\rho (t_{0})], \tag{11}\]
Figure 8: (a) The asymptotic variance \(V^{E}_{xx}\) of retrodicted effect operators from Eqs. (127) plotted against detuning of the drive \(\Delta_{\mathrm{c}}\) in units of the mechanical frequency \(\Omega_{\mathrm{m}}\). Instead of the ideal limit \(\eta=1\) and \(C_{q}\rightarrow\infty\) used to create Fig. 7 (b) we consider a realistic efficiency \(\eta=0.77\) and sub-unit cooperativity \(C_{q}=1/2\). As the cavity enters the sideband-resolved regime where \(\Omega_{\mathrm{m}}/\kappa>1\) the variance approaches \(V^{E}_{xx}\rightarrow(1-\eta)/\eta\) (dash-dotted line). (b) This limiting value of the variance is plotted in the bottom figure against \(\eta\).
which depends on the particular measurement record \(\mathcal{I}:=\{Y(s),t_{0}\leq s<t\}\) obtained during a single run of the experiment. Now we let the state evolve further until some time \(t_{1}>t\) so we find analogously
\[\tilde{\rho}_{\mathcal{I},\mathcal{J}}(t_{1}) =\tilde{\mathcal{N}}_{t,t_{1},\mathcal{J}}[\tilde{\rho}_{\mathcal{ I}}(t)] \tag{111}\] \[\equiv\tilde{\rho}_{\mathcal{Y}}(t_{1}), \tag{112}\]
with records \(\mathcal{J}:=\{Y(s),t\leq s<t_{1}\}\) and \(\mathcal{Y}:=\mathcal{I}\cup\mathcal{J}=\{Y(s),t_{0}\leq s<t_{1}\}\). Performing a _positive-operator valued measure (POVM)_ measurement \(\{\hat{E}_{x}|x\in\mathcal{X}\}\) with effect operators \(\hat{E}_{x}\) at time \(t_{1}\) we expect outcome \(x\) to occur with probability
\[P(x|\tilde{\rho}_{\mathcal{Y}}(t_{1}))=\mathrm{Tr}\{\hat{E}_{x}\tilde{\rho}_{ \mathcal{Y}}(t_{1})\}. \tag{113}\]
Plugging (111) into this expression, and writing \(\hat{E}_{x}\equiv\hat{E}_{x}(t_{1})\), yields
\[\mathrm{Tr}\{\hat{E}_{x}(t_{1})\tilde{\rho}_{\mathcal{Y}}(t_{1})\} =\mathrm{Tr}\{\hat{E}_{x}(t_{1})\tilde{\rho}_{\mathcal{I}, \mathcal{J}}(t_{1})\} \tag{114}\] \[=\mathrm{Tr}\{\hat{E}_{x}(t_{1})\tilde{\mathcal{N}}_{t,t_{1}, \mathcal{J}}[\tilde{\rho}_{\mathcal{I}}(t)]\}\] (115) \[=\mathrm{Tr}\{\hat{\mathcal{N}}_{t,t_{1},\mathcal{J}}^{\dagger}[ \tilde{E}_{x}(t_{1})]\tilde{\rho}_{\mathcal{I}}(t)\}\] (116) \[=\mathrm{Tr}\{\hat{E}_{x,\mathcal{J}}(t)\tilde{\rho}_{\mathcal{I }}(t)\}, \tag{117}\]
where \(\tilde{\mathcal{N}}^{\dagger}\) denotes the Hilbert-Schmidt adjoint of \(\tilde{\mathcal{N}}\), and \(\hat{E}_{x,\mathcal{J}}(t):=\tilde{\mathcal{N}}_{t,t_{1},\mathcal{J}}^{\dagger }[\hat{E}_{x}(t_{1})]\). This shows that a POVM \(\{\hat{E}_{x}(t_{1})|x\in\mathcal{X}\}\) at \(t_{1}\) is equivalent to a different POVM \(\{\hat{E}_{x,\mathcal{J}}(t)|x\in\mathcal{X},\mathcal{J}\in\mathfrak{J}\}\) at previous time \(t\), where \(\mathfrak{J}\) denotes the set of all possible observation records \(\mathcal{J}\) from \(t\) to \(t_{1}\). The dynamics of individual effect operators are given by
\[\hat{E}_{x,\mathcal{J}}(t)=\tilde{\mathcal{N}}_{t,t_{1},\mathcal{J}}^{\dagger }[\hat{E}_{x}(t_{1})]. \tag{118}\]
This POVM backpropagation is what we call retrodiction. One could now use the first part \(\mathcal{I}\) of the whole record \(\mathcal{Y}\) to obtain \(\tilde{\rho}_{\mathcal{I}}(t)\) through filtering, and then use the second part \(\mathcal{J}\) from \(t\) to \(t_{1}\) to effect a POVM measurement of \(\hat{E}_{x,\mathcal{J}}\) on \(\tilde{\rho}_{\mathcal{I}}(t)\), performing state preparation and verification with the same setup, but using disjoint sets of data. Of particular relevance is thus the case where no additional measurement is performed at the final time \(t_{1}\) so we start with the trivial POVM \(\hat{E}(t_{1})=\hat{1}\). In that case the retrodicted effect operators at \(t\) will depend entirely on the continuous observations.
For simplicity we drop the subscripts \(\mathcal{I}\) and \(\mathcal{J}\) and remember that \(\tilde{\rho}\) and \(\hat{E}\) depend on respective parts of the measurement record.
### Conditional master equation
We now consider the special case of a system governed by conditional master equation (3),
\[\begin{split}\textbf{(I)}\ \mathrm{d}\tilde{\rho}(t)&=-i[\hat{H},\tilde{\rho}(t)]\mathrm{d}t+\mathcal{D}[\hat{L}]\tilde{\rho}(t)\mathrm{d}t\\ &\qquad+\big{(}\hat{C}\tilde{\rho}(t)+\tilde{\rho}(t)\hat{C}^{\dagger}\big{)}\mathrm{d}Y(t).\end{split} \tag{119}\]
To derive the effect equation adjoint to this master equation consider again Eq. (117) for the probability to measure some particular value \(x\) given a conditional state \(\tilde{\rho}(t_{1})\),
\[\mathrm{Tr}\{\hat{E}_{x}\tilde{\rho}(t_{1})\}=\mathrm{Tr}\{\hat{E}_{x}(t) \tilde{\rho}(t)\}. \tag{120}\]
Obviously the left-hand side does not depend on the arbitrary parameter \(t\), so when we take a variation with respect to \(t\)[58] we find
\[0 =\mathrm{d}_{t}\mathrm{Tr}\{\hat{E}_{x}(t)\tilde{\rho}(t)\} \tag{121}\] \[=\mathrm{Tr}\{\hat{E}_{x}(t+\mathrm{d}t)\tilde{\rho}(t+\mathrm{d }t)-\hat{E}_{x}(t)\tilde{\rho}(t)\}. \tag{122}\]
We know that \(\tilde{\rho}(t+\mathrm{d}t)=\tilde{\rho}(t)+\mathrm{d}\tilde{\rho}(t)\) with \(\mathrm{d}\tilde{\rho}(t)\) given by Eq. (119), and we similarly assume we can write \(\hat{E}_{x}(t)=\hat{E}_{x}(t+\mathrm{d}t)-\mathrm{d}\hat{E}_{x}(t+\mathrm{d}t)\). We can determine \(\mathrm{d}\hat{E}_{x}(t+\mathrm{d}t)\) by inserting these relations into the equation above,
\[0=\mathrm{Tr}\{\hat{E}_{x}(t+\mathrm{d}t)\mathrm{d}\tilde{\rho}(t)+\mathrm{d} \hat{E}_{x}(t+\mathrm{d}t)\tilde{\rho}(t)\}. \tag{123}\]
Looking at the first term in conjunction with (119), and suppressing the time-dependence of \(\hat{E}_{x}\) for the moment, we see the trace decomposes into three parts,
\[\mathrm{Tr}\{\hat{E}_{x}\mathrm{d}\tilde{\rho}(t)\}=(A)\mathrm{d}t+(B) \mathrm{d}t+(C)\mathrm{d}Y(t), \tag{124}\]
corresponding to the Hamiltonian, jump, and measurement terms, respectively. For example,
\[(A) =-i\mathrm{Tr}\{\hat{E}_{x}[\hat{H},\tilde{\rho}(t)]\} \tag{125}\] \[=-i\mathrm{Tr}\{\hat{E}_{x}\big{(}\hat{H}\tilde{\rho}(t)-\tilde{ \rho}(t)\hat{H}\big{)}\}\] (126) \[=-i\mathrm{Tr}\{\hat{E}_{x}\hat{H}\tilde{\rho}(t)-\hat{H}\hat{E}_ {x}\tilde{\rho}(t)\}\] (127) \[=i\mathrm{Tr}\{[\hat{H},\hat{E}_{x}]\tilde{\rho}(t)\}, \tag{128}\]
where from the second to third line we made use of the cyclic property of the trace. Similarly we find for the jump operator with \(\mathcal{D}[\hat{L}]\rho=\hat{L}\rho\hat{L}^{\dagger}-(\hat{L}^{\dagger}\hat{L} \rho+\rho\hat{L}^{\dagger}\hat{L})/2\) that
\[(B) =\mathrm{Tr}\{\hat{E}_{x}\left(\mathcal{D}[\hat{L}]\tilde{\rho}( t)\right)\} \tag{129}\] \[=\mathrm{Tr}\left\{\left(\mathcal{D}^{\dagger}[\hat{L}]\hat{E}_{x }\right)\tilde{\rho}(t)\right\}, \tag{130}\]
with \(\mathcal{D}^{\dagger}[\hat{L}]\hat{E}=\hat{L}^{\dagger}\hat{E}\hat{L}-(\hat{L}^{ \dagger}\hat{L}\hat{E}+\hat{E}\hat{L}^{\dagger}\hat{L})/2\), and for the measurement term
\[(C) =\mathrm{Tr}\{\hat{E}_{x}\big{(}\hat{C}\tilde{\rho}(t)+\tilde{\rho}(t)\hat{C}^{\dagger}\big{)}\} \tag{131}\] \[=\mathrm{Tr}\{\big{(}\hat{C}^{\dagger}\hat{E}_{x}+\hat{E}_{x}\hat{C}\big{)}\tilde{\rho}(t)\}. \tag{132}\]
Putting all three contributions together we find
\[\begin{split} 0=\mathrm{Tr}\Big{\{}&\Big{(}\mathrm{d}\hat{E}_{x}+ i[\hat{H},\hat{E}_{x}]\mathrm{d}t+\mathcal{D}^{\dagger}[\hat{L}]\hat{E}_{x}\mathrm{d}t\\ &\qquad+\big{(}\hat{C}^{\dagger}\hat{E}_{x}+\hat{E}_{x}\hat{C} \big{)}\mathrm{d}Y(t)\Big{)}\tilde{\rho}(t)\Big{\}}.\end{split} \tag{133}\]
Note that we did not specify \(\tilde{\rho}\) so this equation has to hold for arbitrary density operators. Recalling that \(\hat{E}_{x}\equiv\hat{E}_{x}(t+\mathrm{d}t)\) we shift the time argument and conclude that
\[-\mathrm{d}\hat{E}_{x}(t) =i[\hat{H},\hat{E}_{x}(t)]\mathrm{d}t+\mathcal{D}^{\dagger}[\hat{L }]\hat{E}_{x}(t)\mathrm{d}t \tag{101}\] \[\qquad+\big{(}\hat{C}^{\dagger}\hat{E}_{x}(t)+\hat{E}_{x}(t)\hat {C}\big{)}\mathrm{d}Y(t-\mathrm{d}t).\]
Our derivation swept a few things under the rug, as the unusual argument of \(\mathrm{d}Y(t-\mathrm{d}t)\) suggests. In particular, we started with a stochastic Ito equation denoted by the **(I)** in Eq. (100), but we did not say how to interpret the effect equation we just obtained. It turns out that it is a stochastic _backward Ito equation_, which is explained in the next section. This means that when expressing the integral as a Riemann sum, the stochastic increment needs to be evaluated at the _upper_ limit of each subinterval.
## Appendix B Forward and backward Ito integration
Deterministic integrals of integrable functions defined as limits of Riemann-Stieltjes sums do not depend on whether their integrands are evaluated at the lower or upper end of their subintervals. The upper and lower Riemann sums converge in the limit of vanishing subinterval length. For stochastic integrals this is not the case as the integrand may fluctuate rapidly [34; 35; 69]. Thus there are different ways to integrate a random process depending on where one evaluates the integrand.
The type mostly used in this article is the Ito stochastic integral with the integrand evaluated at the lower end of each subinterval. Another well-known type is the Stratonovich integral with evaluation performed at the mid-point [34; 35]. A third, lesser known type is the backward Ito integral with the integrand evaluated at the upper limit of each subinterval [70].
We denote Ito integrals and differentials by a prepended **(I)** and backward Ito integrals by **(BI)**. Everything to the right of **(I)** (or **(BI)**) is an Ito (or backward Ito) integral. It is possible to mix different integral types [70]. But since we never do this it should always be clear which type of integral is being used.
To clarify the distinction between **(I)** and **(BI)** let us recall the definition of Ito integrals [1; 34; 35]. Consider an interval \([t_{0},t_{1}]\) and partitions \(P_{n}=\{\tau_{j}:j=0,\ldots,n\}\) such that
\[t_{0}=\tau_{0}<\tau_{1}<\cdots<\tau_{n}=t_{1}. \tag{102}\]
We consider only sequences of \(P_{n}\) such that
\[\mathrm{mesh}(P_{n}):=\max_{j=1,\ldots,n}(\tau_{j}-\tau_{j-1})\to 0 \tag{103}\]
as \(n\to\infty\). Provided it exists and is independent of the partition sequence, the Ito integral of some function (or stochastic process) \(f(t)\) with respect to a Wiener process \(W_{t}(\equiv W(t))\) is defined as the mean-square limit
\[\textbf{(I)}\int_{t_{0}}^{t_{1}}f(\tau)\mathrm{d}W_{\tau}:=\lim_{n\to\infty} \sum_{j=1}^{n}f(\tau_{j-1})(W_{\tau_{j}}-W_{\tau_{j-1}}). \tag{104}\]
The corresponding backward Ito integral is defined as
\[\textbf{(BI)}\int_{t_{0}}^{t_{1}}f(\tau)\mathrm{d}W_{\tau}:=\lim_{n\to\infty} \sum_{j=1}^{n}f(\tau_{j})(W_{\tau_{j}}-W_{\tau_{j-1}}). \tag{105}\]
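A minimal Monte Carlo illustration of the difference between the two definitions (Python): for \(f(\tau)=W_{\tau}\) the two Riemann sums converge to \((W_{t_{1}}^{2}-t_{1})/2\) and \((W_{t_{1}}^{2}+t_{1})/2\), respectively, and their difference is the accumulated quadratic variation \(t_{1}\), consistent with the \(gg^{\prime}\) drift correction below.

```python
import numpy as np

rng = np.random.default_rng(1)
t1, n = 1.0, 100_000
h = t1 / n
dW = rng.normal(0.0, np.sqrt(h), n)
W = np.concatenate(([0.0], np.cumsum(dW)))   # W at the partition points

ito = np.sum(W[:-1] * dW)    # integrand at the lower endpoints
bito = np.sum(W[1:] * dW)    # integrand at the upper endpoints

print(ito, (W[-1]**2 - t1) / 2)    # Ito integral of W dW
print(bito, (W[-1]**2 + t1) / 2)   # backward Ito integral of W dW
print(bito - ito, t1)              # difference = quadratic variation -> t1
```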
Given a stochastic process \(X(t)\) with Ito equation
\[\textbf{(I)}\ \mathrm{d}X_{t}=f(X_{t},t)\mathrm{d}t+g(X_{t},t)\mathrm{d}W_{t}, \tag{106}\]
the corresponding backward Ito equation reads
\[\textbf{(BI)}\ \mathrm{d}X_{t}=(f(X_{t},t)-g(X_{t},t)g^{\prime}(X_{t},t))\, \mathrm{d}t+g(X_{t},t)\mathrm{d}W_{t} \tag{107}\]
with \(g^{\prime}(X_{t},t)=(\partial g(x,t)/\partial x)|_{x=X_{t}}\).
We need backward Ito integrals because they naturally govern the evolution of the effect operator. Let us illustrate this by considering a simple Ito stochastic process driven only by white noise,
\[\textbf{(I)}\ \mathrm{d}X(t)=M_{t}X(t)\mathrm{d}W_{t}, \tag{108}\]
with Wiener increment \(\mathrm{d}W_{t}\) and some linear map \(M_{t}\) acting on \(X(t)\). Integrating both sides we obtain an equivalent integro-differential equation,
\[X(t)=X(t_{0})+\textbf{(I)}\int_{t_{0}}^{t}M_{\tau}X(\tau)\mathrm{d}W_{\tau}. \tag{109}\]
If we continue replacing \(X(\tau)\) on the right-hand side by this expression we find
\[X(t)=\sum_{n=0}^{\infty}\mathcal{M}_{t,t_{0}}^{(n)}X(t_{0}) \tag{110}\]
with the operators \(\mathcal{M}_{t,t_{0}}^{(n)}\) defined recursively via
\[\mathcal{M}_{t,t_{0}}^{(0)} =\mathbb{1}, \tag{111}\] \[\mathcal{M}_{t,t_{0}}^{(n)} =\textbf{(I)}\int_{t_{0}}^{t}M_{\tau}\mathcal{M}_{\tau,t_{0}}^{(n -1)}\mathrm{d}W_{\tau}. \tag{112}\]
We retrieve the Ito equation of \(X(t)\) through variation with respect to \(t\) since
\[\mathrm{d}_{t}\mathcal{M}_{t,t_{0}}^{(0)} =0, \tag{113}\] \[\mathrm{d}_{t}\mathcal{M}_{t,t_{0}}^{(n)} =\textbf{(I)}\ \mathrm{d}_{t}\int_{t_{0}}^{t}M_{\tau_{1}}\mathrm{d}W_{\tau_{1}} \int_{t_{0}}^{\tau_{1}}M_{\tau_{2}}\mathrm{d}W_{\tau_{2}}\ldots\] (114) \[\qquad\qquad\cdots\int_{t_{0}}^{\tau_{n-1}}M_{\tau_{n}}\mathrm{d}W _{\tau_{n}}\] \[=\textbf{(I)}\ M_{t}\mathrm{d}W_{t}\int_{t_{0}}^{t}M_{\tau_{2}} \mathrm{d}W_{\tau_{2}}\ldots\] \[\qquad\qquad\cdots\int_{t_{0}}^{\tau_{n-1}}M_{\tau_{n}}\mathrm{d}W _{\tau_{n}}\] \[=\textbf{(I)}\ M_{t}\mathcal{M}_{t,t_{0}}^{(n-1)}\mathrm{d}W_{t}, \tag{115}\]
so as expected we find
\[\mathrm{d}_{t}X(t) =\sum_{n=0}^{\infty}\mathrm{d}_{t}\mathcal{M}_{t,t_{0}}^{(n)}X(t_{0}) \tag{101}\] \[=\textbf{(I)}\ M_{t}\sum_{n=1}^{\infty}\mathcal{M}_{t,t_{0}}^{(n-1 )}X(t_{0})\mathrm{d}W_{t}\] (102) \[=\textbf{(I)}\ M_{t}X(t)\mathrm{d}W_{t}. \tag{103}\]
Now consider a second process \(E(t)\) which starts at \(t_{1}>t\) and evolves as the adjoint of \(X(t)\), such that
\[\langle E(t_{1})|X(t_{1})\rangle =\langle E(t_{1})|\sum_{n=0}^{\infty}\mathcal{M}_{t_{1},t_{0}}^{( n)}X(t_{0})\rangle \tag{104}\] \[=\langle\sum_{n=0}^{\infty}(\mathcal{M}_{t_{1},t_{0}}^{(n)})^{ \dagger}E(t_{1})|X(t_{0})\rangle\] (105) \[=\langle E(t_{0})|X(t_{0})\rangle \tag{106}\]
independent of \(t_{0}\), so
\[E(t)=\sum_{n=0}^{\infty}(\mathcal{M}_{t_{1},t}^{(n)})^{\dagger}E(t_{1}). \tag{107}\]
To obtain a differential equation analogous to (102) for \(E(t)\) we need to take a derivative with respect to \(t\). It is not immediately clear how to do this from
\[(\mathcal{M}_{t_{1},t}^{(n)})^{\dagger}E(t_{1}) =\textbf{(I)}\int_{t}^{t_{1}}\mathrm{d}W_{\tau_{1}}\int_{t}^{\tau _{1}}\mathrm{d}W_{\tau_{2}}\ldots\] \[\qquad\cdots\int_{t}^{\tau_{n-1}}\mathrm{d}W_{\tau_{n}}M_{\tau_{n }}^{\dagger}\ldots M_{\tau_{2}}^{\dagger}M_{\tau_{1}}^{\dagger}E(t_{1}), \tag{108}\]
since \(t\) appears in every integral.
If the integrals were regular deterministic integrals we could simply re-order the integration boundaries. For example, for an integrable deterministic function \(f(t_{1},t_{2})\) one finds
\[\int_{t}^{t_{1}}\mathrm{d}\tau_{1}\int_{t}^{\tau_{1}}\mathrm{d}\tau_{2}f(\tau_{1},\tau_{2})=\int_{t}^{t_{1}}\mathrm{d}\tau_{2}\int_{\tau_{2}}^{t_{1}}\mathrm{d}\tau_{1}f(\tau_{1},\tau_{2}). \tag{109}\]
An equivalent result for stochastic integrals was proven by Kuznetsov [70, Ch. 7]. He showed that one can swap the order of integration provided one simultaneously changes from regular Ito to backward Ito integrals, so proceeding inductively we find
\[(\mathcal{M}_{t_{1},t}^{(n)})^{\dagger}E(t_{1})=\] \[=\textbf{(I)}\int_{t}^{t_{1}}\mathrm{d}W_{\tau_{1}}\int_{t}^{ \tau_{1}}\mathrm{d}W_{\tau_{2}}\ldots\] \[\qquad\qquad\qquad\cdots\int_{t}^{\tau_{n-1}}\mathrm{d}W_{\tau_{ n}}M_{\tau_{n}}^{\dagger}\ldots M_{\tau_{2}}^{\dagger}M_{\tau_{1}}^{\dagger}E(t_{1})\] \[=\textbf{(BI)}\int_{t}^{t_{1}}\mathrm{d}W_{\tau_{n}}\ldots\] \[\qquad\qquad\qquad\cdots\int_{\tau_{3}}^{t_{1}}\mathrm{d}W_{\tau _{2}}\int_{\tau_{2}}^{t_{1}}\mathrm{d}W_{\tau_{1}}M_{\tau_{n}}^{\dagger}\ldots M _{\tau_{2}}^{\dagger}M_{\tau_{1}}^{\dagger}E(t_{1})\] \[=\textbf{(BI)}\int_{t}^{t_{1}}M_{\tau_{n}}^{\dagger}\mathrm{d}W_{ \tau_{n}}\cdots\int_{\tau_{3}}^{t_{1}}M_{\tau_{2}}^{\dagger}\mathrm{d}W_{\tau _{2}}\int_{\tau_{2}}^{t_{1}}M_{\tau_{1}}^{\dagger}\mathrm{d}W_{\tau_{1}}E(t_{1})\] \[=\textbf{(BI)}\int_{t}^{t_{1}}M_{\tau}^{\dagger}(\mathcal{M}_{t_{ 1},\tau}^{(n-1)})^{\dagger}\mathrm{d}W_{\tau}E(t_{1}),\]
where all nested integrals in the last three lines are backward Ito integrals. Taking a variation with respect to the lower limit of an integral yields a negative sign, so we find
\[-\mathrm{d}_{t}(\mathcal{M}_{t_{1},t}^{(n)})^{\dagger}=\textbf{(BI)}\ M_{t}^{ \dagger}(\mathcal{M}_{t_{1},t}^{(n-1)})^{\dagger}\mathrm{d}W_{t}, \tag{110}\]
and consequently
\[-\mathrm{d}_{t}E(t)=\textbf{(BI)}\ M_{t}^{\dagger}E(t)\mathrm{d}W_{t}. \tag{111}\]
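The bookkeeping behind this calculation can be checked with plain linear algebra. The following Python sketch propagates a vector forward through a product of random stochastic factors \((\mathbb{1}+M\mathrm{d}W_{j})\) and an "effect" vector backward through the adjoint factors in reverse order, and confirms that the pairing \(\langle E(t),X(t)\rangle\) is preserved. This is a toy finite-dimensional stand-in, not the infinite-dimensional operator setting of the text.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, h = 3, 400, 1e-3
M = 0.5 * (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))  # fixed map M_t = M
dW = rng.normal(0.0, np.sqrt(h), n)

X = rng.normal(size=d) + 1j * rng.normal(size=d)   # "state" at t0
E = rng.normal(size=d) + 1j * rng.normal(size=d)   # "effect" at t1
eye = np.eye(d)

Xf = X.copy()
for j in range(n):                     # forward: apply (1 + M dW_j) in time order
    Xf = (eye + M * dW[j]) @ Xf

Eb = E.copy()
for j in reversed(range(n)):           # backward: apply (1 + M^dag dW_j) in reverse
    Eb = (eye + M.conj().T * dW[j]) @ Eb

print(np.vdot(E, Xf))    # <E(t1), X(t1)>
print(np.vdot(Eb, X))    # <E(t0), X(t0)>, identical up to rounding
```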
## Appendix C Cumulant equations of motion
We will now derive the cumulant equations of motion for the quantum state of an arbitrary linear system as described in Secs. II and III. We write \(N^{\text{th}}\)-order cumulants, i. e., of products of \(N\) canonical operators, as
\[\kappa_{m_{1},\ldots,m_{N}}^{(N)}:=\langle\hat{r}_{m_{1}}\ldots\hat{r}_{m_{N}} \rangle_{\rho}^{\text{c}} \tag{112}\]
where the superscript c stands for cumulant, and the indices \(1\leq m_{k}\leq 2M\) indicate that operator \(\hat{r}_{m_{k}}\) is at position \(k\) in the cumulant. We consider only symmetrically ordered cumulants, so the operator position does not matter, and the \(\kappa^{(N)}\) are symmetric under permutations of indices. These general cumulants relate to the means and covariance matrix from the main text as
\[r_{j} =\langle\hat{r}_{j}\rangle_{\rho}=\kappa_{j}^{(1)}, \tag{113}\] \[V_{jk} =\langle\{\hat{r}_{j}-r_{j},\hat{r}_{k}-r_{k}\}\rangle_{\rho}=2 \kappa_{jk}^{(2)}. \tag{114}\]
If one is only interested in Gaussian states all cumulants of order \(N\geq 3\) vanish identically. But for any non-Gaussian state _all_ cumulants have to be taken into account [42].
Cumulants are obtained from the quantum characteristic function \(\chi(\boldsymbol{\xi})\), which is defined as the expectation value of the symmetric Weyl operator \(\mathcal{W}(\boldsymbol{\xi})=\exp(-i\boldsymbol{\xi}^{\text{T}}\sigma\hat{ \textbf{r}})\)[71, 38, 72],
\[\chi(\boldsymbol{\xi}):=\langle\mathcal{W}(\boldsymbol{\xi})\rangle_{\rho}= \operatorname{Tr}\{\mathcal{W}(\boldsymbol{\xi})\rho\}, \tag{115}\]
with phase space variables \(\mathbf{\xi}\in\mathbb{R}^{2M}\). The matrix \(\sigma\) encodes the canonical commutation relations (7). Let us introduce the twisted derivative
\[\tilde{\partial}_{j}:=\sum_{k=1}^{2M}\sigma_{jk}\frac{\partial}{ \partial\xi_{k}},\qquad\qquad\tilde{\mathbf{\nabla}}:=\sigma\mathbf{\nabla}, \tag{109}\]
then \(\chi\) serves as cumulant-generating function via
\[\kappa^{(N)}_{m_{1},\dots,m_{N}} =(-i\tilde{\partial}_{m_{1}})\dots(-i\tilde{\partial}_{m_{N}}) \ln[\chi(\mathbf{\xi})]|_{\mathbf{\xi}=0} \tag{110a}\] \[=(-i\tilde{\partial}_{m_{1}})\dots(-i\tilde{\partial}_{m_{N}})G( \mathbf{\xi})|_{\mathbf{\xi}=0}, \tag{110b}\]
with \(G:=\ln[\chi]\) or \(\chi=\exp(G)\). The usual approach [46, Appendix 12] is to translate the master (or effect) equation into partial differential equations for the cumulants via (110).
### From operator to partial differential equation
To find the correspondence between quantum and partial differential operators, consider the action of \(\tilde{\mathbf{\nabla}}\) on the Weyl operator. We single out one \(\xi_{i}\),
\[-i\mathbf{\xi}^{\mathrm{T}}\sigma\mathbf{\hat{r}}=-i\xi_{i}\sum_{k} \sigma_{ik}\hat{r}_{k}-i\sum_{\begin{subarray}{c}j,k\\ j\neq i\end{subarray}}\xi_{j}\sigma_{jk}\hat{r}_{k}=:\hat{A}_{i}+\hat{B}_{\neq i} \tag{111}\]
so we can write
\[\mathcal{W}(\mathbf{\xi}) :=\exp\Bigl{(}\hat{A}_{i}+\hat{B}_{\neq i}\Bigr{)} \tag{112}\] \[=\exp\Bigl{(}\hat{A}_{i}\Bigr{)}\exp\Bigl{(}\hat{B}_{\neq i} \Bigr{)}\exp\biggl{(}-\frac{1}{2}[\hat{A}_{i},\hat{B}_{\neq i}]\biggr{)}\] (113) \[=\exp\Bigl{(}\hat{B}_{\neq i}\Bigr{)}\exp\Bigl{(}\hat{A}_{i} \Bigr{)}\exp\biggl{(}+\frac{1}{2}[\hat{A}_{i},\hat{B}_{\neq i}]\biggr{)}. \tag{114}\]
We find
\[[\hat{A}_{i},\hat{B}_{\neq i}]=-i\xi_{i}\sum_{\begin{subarray}{c}k\\ k\neq i\end{subarray}}\sigma_{ik}\xi_{k}=-i\xi_{i}(\sigma\mathbf{\xi})_{i}, \tag{115}\]
where we used that \(\sigma\) is skew-symmetric and thus zero on the diagonal. Using this result to apply \(\mathbf{\tilde{\nabla}}\) to \(\mathcal{W}\) yields
\[-i\sigma\mathbf{\nabla}\mathcal{W}(\mathbf{\xi})=\biggl{(}\mathbf{\hat{r}}-\frac{1}{2}\mathbf{\xi}\biggr{)}\mathcal{W}(\mathbf{\xi})=\mathcal{W}(\mathbf{\xi})\biggl{(}\mathbf{\hat{r}}+\frac{1}{2}\mathbf{\xi}\biggr{)}. \tag{116}\]
This gives us the important relations
\[\mathbf{\hat{r}}\mathcal{W}(\mathbf{\xi}) =\biggl{(}-i\sigma\mathbf{\nabla}+\frac{1}{2}\mathbf{\xi}\biggr{)} \mathcal{W}(\mathbf{\xi}), \tag{117a}\] \[\mathcal{W}(\mathbf{\xi})\mathbf{\hat{r}} =\biggl{(}-i\sigma\mathbf{\nabla}-\frac{1}{2}\mathbf{\xi}\biggr{)} \mathcal{W}(\mathbf{\xi}), \tag{117b}\]
from which we read off the replacement rules
\[\rho\mathbf{\hat{r}} \to\biggl{(}-i\mathbf{\tilde{\nabla}}+\frac{1}{2}\mathbf{\xi}\biggr{)} \chi(\mathbf{\xi}), \tag{118a}\] \[\mathbf{\hat{r}}\rho \to\biggl{(}-i\mathbf{\tilde{\nabla}}-\frac{1}{2}\mathbf{\xi}\biggr{)} \chi(\mathbf{\xi}). \tag{118b}\]
The following combinations frequently appear in master equations
\[[\mathbf{\hat{r}},\rho] \to-\mathbf{\xi}\chi(\mathbf{\xi}), [\mathbf{\hat{r}}^{\mathrm{T}},\rho] \to-\mathbf{\xi}^{\mathrm{T}}\chi(\mathbf{\xi}), \tag{119}\] \[\{\mathbf{\hat{r}},\rho\} \to-2i\mathbf{\tilde{\nabla}}\chi(\mathbf{\xi}), \{\mathbf{\hat{r}}^{\mathrm{T}},\rho\} \to-2i\mathbf{\tilde{\nabla}}^{\mathrm{T}}\chi(\mathbf{\xi}). \tag{120}\]
### Hamiltonian
For a Hamiltonian with quadratic and linear terms,
\[\hat{H}=\frac{1}{2}\mathbf{\hat{r}}^{\mathrm{T}}H\mathbf{\hat{r}}+\mathbf{h}^{ \mathrm{T}}\mathbf{\hat{r}} \tag{121}\]
with symmetric \(H\in\mathbb{R}^{2M\times 2M}\) and \(\mathbf{h}\in\mathbb{R}^{2M}\) we find
\[-i[\hat{H},\rho]=-i\biggl{(}\frac{1}{2}\mathbf{\hat{r}}^{\mathrm{ T}}H[\mathbf{\hat{r}},\rho]+\frac{1}{2}[\mathbf{\hat{r}}^{\mathrm{T}},\rho]H \mathbf{\hat{r}}+\mathbf{h}^{\mathrm{T}}[\mathbf{\hat{r}},\rho]\biggr{)}. \tag{122}\]
The operator nature of \(\mathbf{\hat{r}}\) and \(\rho\) prohibits us from commuting them, but otherwise we can treat \(H\) as a matrix, \(\mathbf{\hat{r}}\) as a vector, and \(\rho\) as a scalar, which greatly simplifies the notation.
Using the replacement rules from the previous section we obtain
\[[\dot{\chi}]_{H} =-\frac{i}{2}\biggl{(}-i\mathbf{\tilde{\nabla}}-\frac{1}{2}\mathbf{\xi} \biggr{)}^{\mathrm{T}}H(-\mathbf{\xi})\] \[\quad-\frac{i}{2}(-\mathbf{\xi})^{\mathrm{T}}H\biggl{(}-i\mathbf{\tilde{ \nabla}}+\frac{1}{2}\mathbf{\xi}\biggr{)}-i\mathbf{h}^{\mathrm{T}}(-\mathbf{\xi}) \chi(\mathbf{\xi}) \tag{123}\] \[=\frac{1}{2}\Bigl{(}\mathbf{\tilde{\nabla}}^{\mathrm{T}}H\mathbf{\xi}+ \mathbf{\xi}^{\mathrm{T}}H\mathbf{\tilde{\nabla}}+2i\mathbf{h}^{\mathrm{T}}\mathbf{\xi} \Bigr{)}\chi(\mathbf{\xi}). \tag{124}\]
Note that the first term requires use of the product rule since \(\mathbf{\tilde{\nabla}}\) acts on both \(\mathbf{\xi}\) and \(\chi\), so to make this explicit we write
\[\Bigl{(}\mathbf{\tilde{\nabla}}^{\mathrm{T}}H\mathbf{\xi}\Bigr{)}\chi(\mathbf{\xi})=\chi( \mathbf{\xi})\Bigl{(}\mathbf{\tilde{\nabla}}^{\mathrm{T}}H\mathbf{\xi}\Bigr{)}+\mathbf{\xi}^{ \mathrm{T}}H(\mathbf{\tilde{\nabla}}\chi(\mathbf{\xi})). \tag{125}\]
Here we see
\[\mathbf{\tilde{\nabla}}^{\mathrm{T}}H\mathbf{\xi} =\sum_{j,k}\partial_{j}(\sigma^{\mathrm{T}}H)_{jk}\xi_{k}=\sum_{j}( \sigma^{\mathrm{T}}H)_{jj} \tag{126}\] \[=\mathrm{Tr}\bigl{[}\sigma^{\mathrm{T}}H\bigr{]}=0, \tag{127}\]
so the final expression reads
\[-i[\hat{H},\rho]\to\Bigl{(}\mathbf{\xi}^{\mathrm{T}}H\mathbf{\tilde{\nabla}}+i\mathbf{h} ^{\mathrm{T}}\mathbf{\xi}\Bigr{)}\chi(\mathbf{\xi}). \tag{128}\]
To obtain the equations of motion of the cumulants, one has to take the derivative of Eq. (100), so
\[[\mathrm{d}\kappa^{(N)}_{m_{1},\ldots,m_{N}}]_{H}=(-i\tilde{\partial}_{m_{1}}) \ldots(-i\tilde{\partial}_{m_{N}})\mathrm{d}\ln[\chi(\mathbf{\xi})]|_{\mathbf{\xi}=0} \tag{102}\]
\[\begin{split}&=(-i\tilde{\partial}_{m_{1}})\ldots(-i\tilde{ \partial}_{m_{N}})\frac{1}{\chi}\mathrm{d}\chi|_{\mathbf{\xi}=0}\end{split} \tag{103}\]
\[\begin{split}&=(-i\tilde{\partial}_{m_{1}})\ldots(-i\tilde{ \partial}_{m_{N}})\times\\ &\qquad\qquad\times\big{[}\mathrm{e}^{-G(\mathbf{\xi})}\big{(}\mathbf{ \xi}^{T}H\mathbf{\tilde{\nabla}}+i\mathrm{h}^{\mathrm{T}}\mathbf{\xi}\big{)}\mathrm{e} ^{G(\mathbf{\xi})}\big{]}|_{\mathbf{\xi}=0}\mathrm{d}t\\ &=(-i\tilde{\partial}_{m_{1}})\ldots(-i\tilde{\partial}_{m_{N}}) \times\\ &\qquad\qquad\times\big{[}\mathbf{\xi}^{\mathrm{T}}H(\mathbf{\tilde{ \nabla}}G(\mathbf{\xi}))+i\mathrm{h}^{\mathrm{T}}\mathbf{\xi}\big{]}|_{\mathbf{\xi}=0} \mathrm{d}t.\end{split} \tag{104}\]
The twisted derivatives act on \(\mathbf{\xi}\) as \(\tilde{\partial}_{j}\xi_{k}=\sum_{l}\sigma_{jl}\partial_{l}\xi_{k}=\sigma_{jk}\). We thus find, using the Einstein sum convention,
\[\begin{split}[\mathrm{d}\kappa^{(N)}_{m_{1},\ldots,m_{N}}]_{H}& =\sum_{\tau\in\mathcal{S}^{\mathrm{cycl}}(N)}\left(\sigma H\right) _{m_{\tau(1)},k}\kappa^{(N)}_{k,m_{\tau(2)},\ldots,m_{\tau(N)}}\mathrm{d}t\\ &\qquad+(\sigma\mathbf{h})_{m_{1}}\delta_{N,1}\mathrm{d}t\end{split} \tag{105}\]
where \(\tau\) runs through all cyclic permutations of \(1,\ldots,N\).
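For \(N=1\) the cyclic sum reduces to \(\mathrm{d}\kappa^{(1)}=\sigma(H\kappa^{(1)}+\mathbf{h})\mathrm{d}t\), i.e., the means follow the classical Hamiltonian flow. A quick numerical check for a single mode (Python; the oscillator parameters are arbitrary):

```python
import numpy as np

# single mode, r = (x, p): Hamiltonian matrix H = diag(m w^2, 1/m) and linear term h
m_eff, w = 2.0, 3.0
H = np.diag([m_eff * w**2, 1.0 / m_eff])
h = np.array([0.5, -0.2])
sigma = np.array([[0.0, 1.0], [-1.0, 0.0]])

r = np.array([1.2, -0.7])                            # arbitrary phase-space point
flow = sigma @ (H @ r + h)                           # N = 1 cumulant drift
hamilton = np.array([r[1] / m_eff + h[1],            # dx/dt =  dH/dp
                     -m_eff * w**2 * r[0] - h[0]])   # dp/dt = -dH/dx
print(flow, hamilton)                                # identical
```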
### Lindblad operators
For linear jump operators \(\hat{\mathbf{L}}=\Lambda\hat{\mathbf{r}}\) with \(\Lambda^{\dagger}\Lambda=:\Delta+i\Omega\) we find the Lindblad operators
\[\sum_{j}\mathcal{D}[\hat{L}_{j}]\rho =\sum_{j,k,l}\Lambda^{*}_{jk}\Lambda_{jl}\bigg{(}\hat{r}_{l}\rho \hat{r}_{k}-\frac{1}{2}\{\hat{r}_{k}\hat{r}_{l},\rho\}\bigg{)} \tag{106}\] \[=\hat{\mathbf{r}}^{\mathrm{T}}\rho(\Lambda^{\dagger}\Lambda)^{ \mathrm{T}}\hat{\mathbf{r}}-\frac{1}{2}\{\hat{\mathbf{r}}^{\mathrm{T}}\Lambda^ {\dagger}\Lambda\hat{\mathbf{r}},\rho\}\] (107) \[=\frac{1}{2}\hat{\mathbf{r}}^{\mathrm{T}}\Delta[\rho,\hat{ \mathbf{r}}]+\frac{1}{2}[\hat{\mathbf{r}}^{\mathrm{T}},\rho]\Delta\hat{ \mathbf{r}}\] (108) \[\quad-\frac{i}{2}\Big{(}\hat{\mathbf{r}}^{\mathrm{T}}\Omega\{ \rho,\hat{\mathbf{r}}\}+\{\hat{\mathbf{r}}^{\mathrm{T}},\rho\}\Omega\hat{ \mathbf{r}}\Big{)}\]
which becomes
\[[\dot{\chi}]_{L} =\bigg{[}\frac{1}{2}\bigg{(}-i\mathbf{\tilde{\nabla}}-\frac{1}{2}\bm {\xi}\bigg{)}^{\mathrm{T}}\Delta\mathbf{\xi}+\frac{1}{2}(-\mathbf{\xi})^{\mathrm{T}} \Delta\bigg{(}-i\mathbf{\tilde{\nabla}}+\frac{1}{2}\mathbf{\xi}\bigg{)}-\bigg{(}-i\bm {\tilde{\nabla}}-\frac{1}{2}\mathbf{\xi}\bigg{)}^{\mathrm{T}}\Omega\mathbf{\tilde{ \nabla}}-\mathbf{\tilde{\nabla}}^{\mathrm{T}}\Omega\bigg{(}-i\mathbf{\tilde{\nabla}}+ \frac{1}{2}\mathbf{\xi}\bigg{)}\bigg{]}\chi(\mathbf{\xi}) \tag{109}\] \[=\bigg{[}-\frac{1}{2}\mathbf{\xi}^{\mathrm{T}}\Delta\mathbf{\xi}-\frac{i} {2}\mathbf{\tilde{\nabla}}^{\mathrm{T}}\Delta\mathbf{\xi}+\frac{i}{2}\mathbf{\xi}^{ \mathrm{T}}\Delta\mathbf{\tilde{\nabla}}+2i\mathbf{\tilde{\nabla}}^{\mathrm{T}}\Omega \mathbf{\tilde{\nabla}}+\frac{1}{2}\mathbf{\xi}^{\mathrm{T}}\Omega\mathbf{\tilde{\nabla}} -\frac{1}{2}\mathbf{\tilde{\nabla}}^{\mathrm{T}}\Omega\mathbf{\xi}\bigg{]}\chi(\mathbf{\xi})\] (110) \[=\bigg{[}-\frac{1}{2}\mathbf{\xi}^{\mathrm{T}}\Delta\mathbf{\xi}+\mathbf{\xi} ^{\mathrm{T}}\Omega\mathbf{\tilde{\nabla}}+\frac{1}{2}\mathrm{Tr}[\sigma\Omega] \bigg{]}\chi(\mathbf{\xi}) \tag{111}\]
where we used the product rule as in (102) to get
\[\mathbf{\tilde{\nabla}}^{\mathrm{T}}\Omega\mathbf{\xi}=-\mathbf{\xi}^{\mathrm{T}}\Omega\bm {\tilde{\nabla}}+\mathrm{Tr}[\sigma\Omega]. \tag{112}\]
We also used that \(\Delta\) is a symmetric matrix so \(-\mathbf{\tilde{\nabla}}^{\mathrm{T}}\Delta\mathbf{\xi}+\mathbf{\xi}^{\mathrm{T}}\Delta \mathbf{\tilde{\nabla}}=\mathrm{Tr}[\sigma\Delta]=0\), and \(\Omega\) is skew-symmetric so \(\mathbf{v}^{\mathrm{T}}\Omega\mathbf{v}=0\) for any vector \(\mathbf{v}\); in particular
\[\mathbf{\tilde{\nabla}}^{\mathrm{T}}\Omega\mathbf{\tilde{\nabla}}=0. \tag{113}\]
The contribution to the cumulant evolution will be given by
\[[\mathrm{d}\kappa^{(N)}_{m_{1},\ldots,m_{N}}]_{L}=(-i\tilde{\partial }_{m_{1}})\ldots(-i\tilde{\partial}_{m_{N}})\mathrm{e}^{-G(\mathbf{\xi})}\bigg{[}- \frac{1}{2}\mathbf{\xi}^{\mathrm{T}}\Delta\mathbf{\xi}+\mathbf{\xi}^{\mathrm{T}}\Omega\mathbf{ \tilde{\nabla}}+\frac{1}{2}\mathrm{Tr}[\sigma\Omega]\bigg{]}\mathrm{e}^{G(\mathbf{ \xi})}\mathrm{d}t \tag{114}\] \[=(-i\tilde{\partial}_{m_{1}})\ldots(-i\tilde{\partial}_{m_{N}}) \bigg{[}-\frac{1}{2}\mathbf{\xi}^{\mathrm{T}}\Delta\mathbf{\xi}+(\mathbf{\xi}^{\mathrm{T}} \Omega\mathbf{\tilde{\nabla}}G(\mathbf{\xi}))+\frac{1}{2}\mathrm{Tr}[\sigma\Omega]\bigg{]} |_{\mathbf{\xi}=0}\mathrm{d}t\] (115) \[=(\sigma\Delta\sigma^{\mathrm{T}})_{m_{1}m_{2}}\delta_{N,2}\mathrm{d}t +\frac{1}{2}\mathrm{Tr}[\sigma\Omega]\delta_{N,0}\mathrm{d}t+\sum_{\tau\in \mathcal{S}^{\mathrm{cycl}}(N)}(\sigma\Omega)_{m_{\tau(1)},k}\kappa^{(N)}_{k,m_{ \tau(2)},\ldots,m_{\tau(N)}}\mathrm{d}t. \tag{116}\]
### Measurement terms
We assume linear measurement operators, \(\hat{\mathbf{C}}=(A+iB)\hat{\mathbf{r}}\), that generate terms
**(I)**: \[\sum_{k}(\hat{C}_{k}\rho+\rho\hat{C}_{k}^{\dagger})\mathrm{d}Y_{k}=(\hat{\mathbf{C}}\rho+\rho\hat{\mathbf{C}}^{\dagger})^{\mathrm{T}}\mathrm{d}\mathbf{Y}=(A(\hat{\mathbf{r}}\rho+\rho\hat{\mathbf{r}})+iB(\hat{\mathbf{r}}\rho-\rho\hat{\mathbf{r}}))^{\mathrm{T}}\mathrm{d}\mathbf{Y}=(A\{\hat{\mathbf{r}},\rho\}+iB[\hat{\mathbf{r}},\rho])^{\mathrm{T}}\mathrm{d}\mathbf{Y},\] (117)
which yield a contribution
\[\textbf{(I)}\ [\mathrm{d}\chi]_{C}=\left(-2iA\mathbf{\tilde{\nabla}}-iB\mathbf{\xi} \right)^{\mathrm{T}}\mathrm{d}\mathbf{Y}\chi(\mathbf{\xi}). \tag{108}\]
Note that due to the stochastic nature of the Ito increment we need to implement the Ito table \(\mathrm{d}Y_{j}\mathrm{d}Y_{k}=\delta_{jk}\mathrm{d}t\) so we keep terms of up to second order,
\[\textbf{(I)}\ [\mathrm{d}G]_{C}=[\mathrm{d}\ln[\chi]]_{C}=\frac{1}{\chi}[ \mathrm{d}\chi]_{C}+\frac{1}{2}\bigg{(}-\frac{1}{\chi^{2}}\bigg{)}[\mathrm{d} \chi]_{C}^{2}=:[\mathrm{d}G]_{Y}+[\mathrm{d}G]_{\mathrm{It}\hat{\mathrm{o}}}. \tag{109}\]
The linear term yields
\[\textbf{(I)}\ [\mathrm{d}G]_{Y}=\mathrm{e}^{-G(\mathbf{\xi})}\Big{(}-2iA\mathbf{ \tilde{\nabla}}-iB\mathbf{\xi}\Big{)}^{\mathrm{T}}\mathrm{e}^{G(\mathbf{\xi})} \mathrm{d}\mathbf{Y}=\Big{(}-2iA\mathbf{\tilde{\nabla}}G(\mathbf{\xi})-iB\mathbf{\xi} \Big{)}^{\mathrm{T}}\mathrm{d}\mathbf{Y}, \tag{110}\]
which contributes to the cumulants
\[\textbf{(I)}\ [\mathrm{d}\kappa^{(N)}_{m_{1},\ldots,m_{N}}]_{Y} =(-i\tilde{\partial}_{m_{1}})\ldots(-i\tilde{\partial}_{m_{N}}) \times\Big{(}-2iA\mathbf{\tilde{\nabla}}G(\mathbf{\xi})-iB\mathbf{\xi}\Big{)}^{\mathrm{T} }\mathrm{d}\mathbf{Y}|_{\mathbf{\xi}=0} \tag{111}\] \[=\Big{(}2A_{jk}\kappa^{(N+1)}_{k,m_{1},\ldots,m_{N}}-B_{jk} \sigma_{m_{1}k}\delta_{N,1}\Big{)}\mathrm{d}Y_{j}\] (112) \[=\Big{(}2\kappa^{(N+1)}_{m_{1},\ldots,m_{N},k}A^{\mathrm{T}}_{kj }-(\sigma B^{\mathrm{T}})_{m_{1}j}\delta_{N,1}\Big{)}\mathrm{d}Y_{j}. \tag{113}\]
If we replace the measurement signal with a Wiener increment by extracting the mean,
\[\textbf{(I)}\ \mathrm{d}\mathbf{Y}=\langle\mathbf{\hat{C}}+\mathbf{\hat{C}}^{\dagger} \rangle\mathrm{d}t+\mathrm{d}\mathbf{W}=2A\mathbf{r}\mathrm{d}t+\mathrm{d} \mathbf{W}=2A\kappa^{(1)}\mathrm{d}t+\mathrm{d}\mathbf{W} \tag{114}\]
we find
\[\textbf{(I)}\ [\mathrm{d}\kappa^{(N)}_{m_{1},\ldots,m_{N}}]_{Y}=\Big{(}2 \kappa^{(N+1)}_{m_{1},\ldots,m_{N},k}A^{\mathrm{T}}_{kj}-(\sigma B^{\mathrm{ T}})_{m_{1}j}\delta_{N,1}\Big{)}\mathrm{d}W_{j}+2\Big{(}2\kappa^{(N+1)}_{m_{1}, \ldots,m_{N},k}(A^{\mathrm{T}}A\mathbf{r})_{k}-(\sigma B^{\mathrm{T}}A\mathbf{ r})_{m_{1}}\delta_{N,1}\Big{)}\mathrm{d}t. \tag{115}\]
The quadratic Ito correction contributes terms
\[[\mathrm{d}G]_{\mathrm{It}\hat{\mathrm{o}}} =-\frac{1}{2}\Big{(}-2iA\mathbf{\tilde{\nabla}}G(\mathbf{\xi})-iB\mathbf{\xi }\Big{)}^{\mathrm{T}}\Big{(}-2iA\mathbf{\tilde{\nabla}}G(\mathbf{\xi})-iB\mathbf{\xi} \Big{)}\mathrm{d}t \tag{116}\] \[=\frac{1}{2}\Big{(}4(\mathbf{\tilde{\nabla}}G(\mathbf{\xi}))^{\mathrm{T} }A^{\mathrm{T}}A(\mathbf{\tilde{\nabla}}G(\mathbf{\xi}))+\mathbf{\xi}^{\mathrm{T}}B^{ \mathrm{T}}B\mathbf{\xi}+4\mathbf{\xi}^{\mathrm{T}}B^{\mathrm{T}}A(\mathbf{\tilde{\nabla}} G(\mathbf{\xi}))\Big{)}\mathrm{d}t, \tag{117}\]
where the derivatives act only on the \(G\) right next to them. Evaluating the last two terms is straightforward and one finds
\[(-i\tilde{\partial}_{m_{1}})\ldots(-i\tilde{\partial}_{m_{N}})[\mathbf{\xi}^{ \mathrm{T}}B^{\mathrm{T}}B\mathbf{\xi}]|_{\mathbf{\xi}=0}=-2(\sigma B^{\mathrm{T}}B \sigma^{\mathrm{T}})_{m_{1}m_{2}}\delta_{N,2}, \tag{118}\]
and
\[(-i\tilde{\partial}_{m_{1}})\ldots(-i\tilde{\partial}_{m_{N}})[\mathbf{\xi}^{\mathrm{T}}B^{\mathrm{T}}A(\mathbf{\tilde{\nabla}}G(\mathbf{\xi}))]|_{\mathbf{\xi}=0}=\sum_{\tau\in\mathcal{S}^{\mathrm{cycl}}(N)}(\sigma B^{\mathrm{T}}A)_{m_{\tau(1)},k}\kappa^{(N)}_{k,m_{\tau(2)},\ldots,m_{\tau(N)}}. \tag{119}\]
To compute the remaining term we have to apply the product rule multiple times. This results in a sum over all possible sequences of derivatives acting either on the right or on the left \(\mathbf{\tilde{\nabla}}G\), so
\[(-i\tilde{\partial}_{m_{1}})\ldots(-i\tilde{\partial}_{m_{N}})2[(\mathbf{\tilde{ \nabla}}G(\mathbf{\xi}))^{\mathrm{T}}A^{\mathrm{T}}A(\mathbf{\tilde{\nabla}}G(\mathbf{\xi} ))]|_{\mathbf{\xi}=0}=-2\sum_{n=0}^{N}\sum_{\sigma\in S(N)}\kappa^{(n+1)}_{m_{ \sigma(1)},\ldots,m_{\sigma(n)},j}(A^{\mathrm{T}}A)_{jk}\kappa^{(N-n+1)}_{k,m_{ \sigma(n+1)},\ldots,m_{\sigma(N)}}. \tag{120}\]
where \(\sigma\in S(N)\) runs through all possible permutations of \(\{1,\ldots,N\}\), not just cyclic ones. We extract the terms with \(n=0\) and \(n=N\) from the sum, each of which contains only one permutation,
\[-2\Big{(}\kappa^{(1)}_{j}(A^{\mathrm{T}}A)_{jk}\kappa^{(N+1)}_{k,m_{1},\ldots, m_{N}}+\kappa^{(N+1)}_{m_{1},\ldots,m_{N},j}(A^{\mathrm{T}}A)_{jk}\kappa^{(1)}_{k} \Big{)}=-4\kappa^{(N+1)}_{m_{1},\ldots,m_{N},k}(A^{\mathrm{T}}A\mathbf{r})_{k}, \tag{121}\]
and we see that this cancels exactly a term from (115). Combining the remaining measurement contributions we find
\[\begin{split}\textbf{(I)}\ [\mathrm{d}\kappa^{(N)}_{m_{1},\ldots,m_{N}}]_{C}&=\Big{(}2\kappa^{(N+1)}_{m_{1},\ldots,m_{N},k}A^{\mathrm{T}}_{kj}-(\sigma B^{\mathrm{T}})_{m_{1}j}\delta_{N,1}\Big{)}\mathrm{d}W_{j}-2(\sigma B^{\mathrm{T}}A\mathbf{r})_{m_{1}}\delta_{N,1}\mathrm{d}t-(\sigma B^{\mathrm{T}}B\sigma^{\mathrm{T}})_{m_{1}m_{2}}\delta_{N,2}\mathrm{d}t\\ &\quad+2\sum_{\tau\in\mathcal{S}^{\mathrm{cycl}}(N)}(\sigma B^{\mathrm{T}}A)_{m_{\tau(1)},k}\kappa^{(N)}_{k,m_{\tau(2)},\ldots,m_{\tau(N)}}\mathrm{d}t\\ &\quad-2\sum_{n=1}^{N-1}\sum_{\sigma\in S(N)}\kappa^{(n+1)}_{m_{\sigma(1)},\ldots,m_{\sigma(n)},j}(A^{\mathrm{T}}A)_{jk}\kappa^{(N-n+1)}_{k,m_{\sigma(n+1)},\ldots,m_{\sigma(N)}}\mathrm{d}t.\end{split} \tag{122}\]
### Combined evolution
We put all terms from the previous sections together and find for cumulants of first order
\[\textbf{(I)}\ \mathrm{d}r_{m}=\mathrm{d}\kappa^{(1)}_{m}=Q_{m,k}\kappa^{(1)}_{k} \mathrm{d}t+2\kappa^{(2)}_{m,k}(A^{\mathrm{T}}\mathrm{d}\mathbf{W})_{k}+( \sigma\mathbf{h}\mathrm{d}t-\sigma B^{\mathrm{T}}\mathrm{d}\mathbf{W})_{m} \tag{100}\]
with implicit sums over the repeated index \(k\), and
\[Q:=\sigma(H+\Omega), \tilde{\Delta}:=\Delta-B^{\mathrm{T}}B. \tag{101}\]
In vector form and with \(V_{\rho}=2\kappa^{(2)}_{\rho}\) these equations read
\[\textbf{(I)}\ \mathrm{d}\mathbf{r}_{\rho}=Q\mathbf{r}_{\rho}\mathrm{d}t+ \sigma\mathbf{h}\mathrm{d}t+(2\kappa^{(2)}_{\rho}A^{\mathrm{T}}-\sigma B^{ \mathrm{T}})\mathrm{d}\mathbf{W}=M_{\rho}\mathbf{r}_{\rho}\mathrm{d}t+\sigma \mathbf{h}\mathrm{d}t+(2\kappa^{(2)}_{\rho}A^{\mathrm{T}}-\sigma B^{\mathrm{T }})\mathrm{d}\mathbf{Y}, \tag{102}\]
where we replaced the Wiener increment by the measurement current \(\mathrm{d}\mathbf{Y}\) as in Eq. (100), and introduced the drift matrix from Eq. (19),
\[M_{\rho}(t)=Q+2\sigma B^{\mathrm{T}}A-4\kappa^{(2)}_{\rho}(t)A^{\mathrm{T}}A. \tag{103}\]
At second order we find for the (co)variances
\[\begin{split}\textbf{(I)}\ \mathrm{d}\kappa^{(2)}_{m_{1},m_{2}}& =\sum_{\tau\in\mathcal{S}^{\mathrm{cycl}}(2)}(Q+2\sigma B^{\mathrm{T }}A)_{m_{\tau(1)},k}\kappa^{(2)}_{k,m_{\tau(2)}}\mathrm{d}t+(\sigma\tilde{ \Delta}\sigma^{\mathrm{T}})_{m_{1}m_{2}}\mathrm{d}t+2\kappa^{(3)}_{m_{1},m_{2},k}(A^{\mathrm{T}}\mathrm{d}\mathbf{W})_{k}\\ &\quad-2\sum_{\sigma\in\mathcal{S}(2)}\kappa^{(2)}_{m_{\sigma(1)},j}(A^{\mathrm{T}}A)_{jk}\kappa^{(2)}_{k,m_{\sigma(2)}}\mathrm{d}t\\ &=(Q+2\sigma B^{\mathrm{T}}A)_{m_{1},k}\kappa^{(2)}_{k,m_{2}} \mathrm{d}t+(Q+2\sigma B^{\mathrm{T}}A)_{m_{2},k}\kappa^{(2)}_{k,m_{1}} \mathrm{d}t+(\sigma\tilde{\Delta}\sigma^{\mathrm{T}})_{m_{1}m_{2}}\mathrm{d} t\\ &\quad+2\kappa^{(3)}_{m_{1},m_{2},k}(A^{\mathrm{T}}\mathrm{d} \mathbf{W})_{k}-2\kappa^{(2)}_{m_{1},j}(A^{\mathrm{T}}A)_{jk}\kappa^{(2)}_{k,m _{2}}\mathrm{d}t-2\kappa^{(2)}_{m_{2},j}(A^{\mathrm{T}}A)_{jk}\kappa^{(2)}_{k, m_{1}}\mathrm{d}t,\end{split} \tag{104}\]
which can be written more concisely as
\[\textbf{(I)}\ \mathrm{d}\kappa^{(2)}_{m_{1},m_{2}}=\left[M_{\rho}\kappa^{(2)}_{ \rho}+\kappa^{(2)}_{\rho}M^{\mathrm{T}}_{\rho}+\sigma\tilde{\Delta}\sigma^{ \mathrm{T}}+4\kappa^{(2)}_{\rho}A^{\mathrm{T}}A\kappa^{(2)}_{\rho}\right]_{m_{ 1},m_{2}}\mathrm{d}t+2\kappa^{(3)}_{m_{1},m_{2},k}(A^{\mathrm{T}}\mathrm{d} \mathbf{W})_{k}, \tag{105}\]
with an implicit sum over \(k\) in the last term, turning \(\kappa^{(3)}_{\rho}\) from a tensor into a matrix with the free indices \(m_{1}\), \(m_{2}\). The evolution of cumulants of order \(N\geq 3\) is given by
\[\begin{split}\textbf{(I)}\ \mathrm{d}\kappa^{(N)}_{m_{1},\ldots,m_{N}}& =\sum_{\tau\in\mathcal{S}^{\mathrm{cycl}}(N)}(Q+2\sigma B^{\mathrm{ T}}A)_{m_{\tau(1)},k}\kappa^{(N)}_{k,m_{\tau(2)},\ldots,m_{\tau(N)}} \mathrm{d}t\\ &\quad-2\sum_{n=1}^{N-1}\sum_{\sigma\in\mathcal{S}(N)}\kappa^{(n+1 )}_{m_{\sigma(1)},\ldots,m_{\sigma(n)},j}(A^{\mathrm{T}}A)_{jk}\kappa^{(N-n+1) }_{k,m_{\sigma(n+1)},\ldots,m_{\sigma(N)}}\mathrm{d}t+2\kappa^{(N+1)}_{m_{1}, \ldots,m_{N},k}(A^{\mathrm{T}}\mathrm{d}\mathbf{W})_{k}\\ &=\sum_{\tau\in\mathcal{S}^{\mathrm{cycl}}(N)}M^{\rho}_{m_{\tau(1)},k}(t)\kappa^{(N)}_{k,m_{\tau(2)},\ldots,m_{\tau(N)}}\mathrm{d}t\\ &\quad-2\sum_{n=2}^{N-2}\sum_{\sigma\in\mathcal{S}(N)}\kappa^{(n+1 )}_{m_{\sigma(1)},\ldots,m_{\sigma(n)},j}(A^{\mathrm{T}}A)_{jk}\kappa^{(N-n+1 )}_{k,m_{\sigma(n+1)},\ldots,m_{\sigma(N)}}\mathrm{d}t+2\kappa^{(N+1)}_{m_{1},\ldots,m_{N},k}(A^{\mathrm{T}}\mathrm{d}\mathbf{W})_{k},\end{split} \tag{106}\]
where the terms with \(n=1\) and \(n=N-1\) were used to introduce the drift matrix \(M_{\rho}(t)\). It is worth noting that, in general, all cumulants couple to the next higher order through the stochastic term.
### Stability analysis
A common restriction in stable linear systems is to consider only Gaussian states, which collapse to a steady state with fixed covariance matrix (second cumulants) and bounded or decaying means. This restriction is justified if any non-Gaussian state also becomes Gaussian asymptotically, which means that all higher-order cumulants decay over
time. Proving this result rigorously is beyond the scope of this article but we make some qualitative observations, which lead us to conjecture that this is indeed the case.
First, note that assuming stable dynamics implies that the second cumulants of a Gaussian state asymptotically satisfy
\[0=M_{\rho}^{\infty}\kappa_{\infty}^{(2)}+\kappa_{\infty}^{(2)}(M_{\rho}^{\infty })^{\mathrm{T}}+\sigma\tilde{\Delta}\sigma^{\mathrm{T}}+4\kappa_{\infty}^{(2) }A^{\mathrm{T}}A\kappa_{\infty}^{(2)}, \tag{100}\]
with \(M_{\rho}^{\infty}:=Q+2\sigma B^{\mathrm{T}}A-4\kappa_{\infty}^{(2)}A^{\mathrm{T}}A\) having only eigenvalues with negative real part. The collapse to this particular \(\kappa_{\infty}^{(2)}\) is a consequence of the deterministic part of the evolution (i. e., the bracket) in Eq. (101). If we consider some non-Gaussian state with \(\kappa_{\rho}^{(3)}\neq 0\), then \(\kappa_{\rho}^{(2)}(t)\) itself becomes a random process, which in addition to the deterministic drift will experience some diffusion induced by the additive white noise dependent on \(\kappa_{\rho}^{(3)}\). Looking at Eq. (100) we see that these third-order cumulants (as well as all higher orders) themselves also experience deterministic decay induced by all lower orders through matrices \(M_{\rho}(t)\) and \(A^{\mathrm{T}}A\), and white-noise diffusion through the next higher order only. This leads us to conjecture that in stable systems asymptotically all higher-order cumulants vanish on average, but experience stochastic fluctuations about zero (diffusion) induced by the Wiener increment, which leads to some residual variance. Thus also any initial state of the system will asymptotically collapse to a Gaussian on average.
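The fixed-point structure is easy to probe numerically. The sketch below (Python) integrates the deterministic part of the second-cumulant equation for a toy single mode, with illustrative matrices \(H=\mathbb{1}\), \(\Omega=0\), \(B=0\), \(\tilde{\Delta}=\mathbb{1}/2\), and an \(\hat{x}\)-measurement \(A\) (this is not the optomechanical model of the main text), and verifies that the steady state solves the algebraic Riccati equation with a Hurwitz \(M_{\rho}^{\infty}\).

```python
import numpy as np

sigma = np.array([[0.0, 1.0], [-1.0, 0.0]])
Q = sigma.copy()                          # H = identity, Omega = 0 (illustrative)
A = np.array([[0.3, 0.0]])                # continuous x-measurement
B = np.zeros((1, 2))
D = sigma @ (0.5 * np.eye(2)) @ sigma.T   # sigma Delta_tilde sigma^T

def M_rho(K):
    return Q + 2 * sigma @ B.T @ A - 4 * K @ A.T @ A

K = np.eye(2)                             # arbitrary initial second cumulants
dt = 1e-2
for _ in range(20_000):                   # integrate the deterministic Riccati flow
    M = M_rho(K)
    K = K + (M @ K + K @ M.T + D + 4 * K @ A.T @ A @ K) * dt

M_inf = M_rho(K)
residual = M_inf @ K + K @ M_inf.T + D + 4 * K @ A.T @ A @ K
print(np.max(np.abs(residual)))           # ~ 0: algebraic Riccati equation holds
print(np.linalg.eigvals(M_inf).real)      # negative real parts: stable fixed point
```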
### Backward addition
To obtain the evolution equations of the cumulants associated with general effect operators we can compare the unnormalized master equation for \(\rho\),
\[\textbf{(I)}\ \mathrm{d}\rho(t)=-i[\hat{H},\rho(t)]\mathrm{d}t+\sum_{j=1}^{N_{ L}}\mathcal{D}[\hat{L}_{j}]\rho(t)\mathrm{d}t+\sum_{k=1}^{N_{C}}\Bigl{(}\hat{C}_{ k}\rho(t)+\rho(t)\hat{C}_{k}^{\dagger}\Bigr{)}\mathrm{d}Y_{k}(t), \tag{101}\]
to the corresponding adjoint equation for \(\hat{E}\),
\[\textbf{(BI)}\ -\mathrm{d}\hat{E}(t)=i[\hat{H},\hat{E}(t)]\mathrm{d}t+\sum_{j=1 }^{N_{L}}\mathcal{D}^{\dagger}[\hat{L}_{j}]\hat{E}(t)\mathrm{d}t+\sum_{k=1}^{ N_{C}}\Bigl{(}\hat{C}_{k}^{\dagger}\hat{E}(t)+\hat{E}(t)\hat{C}_{k}\Bigr{)} \mathrm{d}Y_{k}(t) \tag{102}\] \[\qquad=-i[-\hat{H},\hat{E}(t)]\mathrm{d}t+\sum_{j=1}^{N_{L}} \mathcal{D}[\hat{L}_{j}]\hat{E}(t)\mathrm{d}t+\sum_{k=1}^{N_{C}}\Bigl{(}\hat{ C}_{k}^{\dagger}\hat{E}(t)+\hat{E}(t)\hat{C}_{k}\Bigr{)}\mathrm{d}Y_{k}(t)+ \sum_{j=1}^{N_{L}}\Bigl{(}\hat{L}_{j}^{\dagger}\hat{E}(t)\hat{L}_{j}-\hat{L}_{ j}\hat{E}(t)\hat{L}_{j}^{\dagger}\Bigr{)}\mathrm{d}t, \tag{103}\]
where the last sum compensates for replacing \(\mathcal{D}^{\dagger}\mapsto\mathcal{D}\). We can immediately read off the following changes. The sign flip of \(\hat{H}\) causes \(H\to-H\) and replacing the measurement operators \(\hat{C}_{k}\) by their adjoint entails \(B\to-B\). We only need to work out the change \([\mathrm{d}\chi_{E}]_{\mathrm{bwd}}\) stemming from the sandwich terms in the last sum. Using the relations (100), (101) and (102) we obtain
\[-[\mathrm{d}\hat{E}]_{\mathrm{bwd}}=2i\,\hat{\mathbf{r}}^{\mathrm{T}}\Omega\hat{E}\hat{\mathbf{r}}\,\mathrm{d}t \tag{104}\]
which turns into
\[-[\mathrm{d}\chi_{E}]_{\mathrm{bwd}}=2i\biggl{(}-i\tilde{\boldsymbol{\nabla}}-\frac{1}{2}\boldsymbol{\xi}\biggr{)}^{\mathrm{T}}\Omega\biggl{(}-i\tilde{\boldsymbol{\nabla}}+\frac{1}{2}\boldsymbol{\xi}\biggr{)}\chi_{E}\mathrm{d}t=-\mathrm{Tr}[\sigma\Omega]\chi_{E}\mathrm{d}t-2\boldsymbol{\xi}^{\mathrm{T}}\Omega(\tilde{\boldsymbol{\nabla}}\chi_{E})\mathrm{d}t. \tag{105}\]
So the backward cumulants with \(N\geq 1\) will have the additional term
\[-[\mathrm{d}\kappa_{m_{1},\ldots,m_{N}}^{(N)}]_{\mathrm{bwd}}=-2\sum_{\tau\in \mathcal{S}^{\mathrm{cycl}}(N)}\Omega_{m_{\tau(1)},k}\kappa_{k,m_{\tau(2)}, \ldots,m_{\tau(N)}}^{(N)}\mathrm{d}t. \tag{106}\]
Implementing all these changes we find that the difference between forward and backward equation amounts to replacing the drift matrix \(M_{\rho}(t)\) with the backward drift matrix from Eq. (42),
\[M_{E}(t):=-Q-2\sigma B^{\mathrm{T}}A-4\kappa_{E}^{(2)}(t)A^{\mathrm{T}}A, \tag{107}\]
and changing the sign of the constant term in the equations of the means. Spelling this out we find that the means satisfy
\[\textbf{(BI)}\ -\mathrm{d}\mathbf{r}_{E} =-Q\mathbf{r}_{E}\mathrm{d}t-\sigma\mathbf{h}\mathrm{d}t+(2\kappa_{ E}^{(2)}A^{\mathrm{T}}+\sigma B^{\mathrm{T}})\mathrm{d}\mathbf{W} \tag{100}\] \[=M_{E}\mathbf{r}_{E}\mathrm{d}t-\sigma\mathbf{h}\mathrm{d}t+(2 \kappa_{E}^{(2)}A^{\mathrm{T}}+\sigma B^{\mathrm{T}})\mathrm{d}\mathbf{Y}. \tag{101}\]
For the covariance matrix we find
\[\textbf{(BI)}\ -\mathrm{d}\kappa_{m_{1},m_{2}}^{(2)}=\Big{[}M_{E}\kappa_{E}^{(2)}+\kappa_{E}^{(2)}M_{E}^{\mathrm{T}}+\sigma\tilde{\Delta}\sigma^{\mathrm{T}}+4\kappa_{E}^{(2)}A^{\mathrm{T}}A\kappa_{E}^{(2)}\Big{]}_{m_{1},m_{2}}\mathrm{d}t+2\kappa_{m_{1},m_{2},k}^{(3)}(A^{\mathrm{T}}\mathrm{d}\mathbf{W})_{k}. \tag{102}\]
Higher order (\(N\geq 3\)) cumulants evolve as
\[\begin{split}\textbf{(BI)}\ -\mathrm{d}\kappa_{m_{1},\ldots,m_{N}}^{(N)}&=-\sum_{\tau\in\mathcal{S}^{\mathrm{cycl}}(N)}(Q+2\sigma B^{\mathrm{T}}A)_{m_{\tau(1)},k}\kappa_{k,m_{\tau(2)},\ldots,m_{\tau(N)}}^{(N)}\mathrm{d}t\\ &\quad-2\sum_{n=1}^{N-1}\sum_{\sigma\in S(N)}\kappa_{m_{\sigma(1)},\ldots,m_{\sigma(n)},j}^{(n+1)}(A^{\mathrm{T}}A)_{jk}\kappa_{k,m_{\sigma(n+1)},\ldots,m_{\sigma(N)}}^{(N-n+1)}\mathrm{d}t+2\kappa_{m_{1},\ldots,m_{N},k}^{(N+1)}(A^{\mathrm{T}}\mathrm{d}\mathbf{W})_{k}\end{split} \tag{103}\]
\[\begin{split}&=\sum_{\tau\in\mathcal{S}^{\mathrm{cycl}}(N)}M_{m_{\tau(1)},k}^{E}\kappa_{k,m_{\tau(2)},\ldots,m_{\tau(N)}}^{(N)}\mathrm{d}t\\ &\quad-2\sum_{n=2}^{N-2}\sum_{\sigma\in S(N)}\kappa_{m_{\sigma(1)},\ldots,m_{\sigma(n)},j}^{(n+1)}(A^{\mathrm{T}}A)_{jk}\kappa_{k,m_{\sigma(n+1)},\ldots,m_{\sigma(N)}}^{(N-n+1)}\mathrm{d}t+2\kappa_{m_{1},\ldots,m_{N},k}^{(N+1)}(A^{\mathrm{T}}\mathrm{d}\mathbf{W})_{k}.\end{split} \tag{104}\]
With regard to stability we can apply the same reasoning to the backward dynamics as to the forward dynamics, which, _mutatis mutandis_, leads to the same conjecture as above. Backward dynamics are stable whenever any Gaussian effect operator with arbitrary final covariance matrix \(V_{E}(t_{1})\) approaches an asymptotic covariance matrix \(V_{E}^{\infty}\) as \(t\to-\infty\) such that \(M_{E}^{\infty}\) has only eigenvalues with negative real parts. In that case we conjecture that also any non-Gaussian effect operator \(\hat{E}(t_{1})\) will collapse to a Gaussian one with \(V_{E}^{\infty}\) as \(t\to-\infty\), up to stochastic fluctuations of higher-order cumulants about zero.
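Continuing the hypothetical toy sketch above, the backward fixed point can be probed with the same Riccati solver: imposing stationarity in the backward covariance equation amounts to flipping the sign of \(Q+2\sigma B^{\mathrm{T}}A\).

```python
# Backward analog of the sketch above (same placeholder matrices):
# stationarity gives -Qt k - k Qt^T - k (2 A^T)(2 A) k + noise = 0.
kappa_E = solve_continuous_are(-Qt.T, 2 * A.T, noise, np.eye(1))
M_E = -Qt - 4 * kappa_E @ A.T @ A            # backward drift M_E at the fixed point
print(np.linalg.eigvals(M_E))                # backward stability check
```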
|
2309.05176 | Reversibility of whole-plane SLE for $κ> 8$ | Whole-plane SLE$_\kappa$ is a random fractal curve between two points on the
Riemann sphere. Zhan established for $\kappa \leq 4$ that whole-plane
SLE$_\kappa$ is reversible, meaning invariant in law under conformal
automorphisms swapping its endpoints. Miller and Sheffield extended this to
$\kappa \leq 8$. We prove whole-plane SLE$_\kappa$ is reversible for $\kappa >
8$, resolving the final case and answering a conjecture of Viklund and Wang.
Our argument depends on a novel mating-of-trees theorem of independent
interest, where Liouville quantum gravity on the disk is decorated by an
independent radial space-filling SLE curve. | Morris Ang, Pu Yu | 2023-09-11T00:11:10Z | http://arxiv.org/abs/2309.05176v3 | # Reversibility of whole-plane SLE for \(\kappa>8\)
###### Abstract
Whole-plane \(\mathrm{SLE}_{\kappa}\) is a random fractal curve between two points on the Riemann sphere. Zhan established for \(\kappa\leq 4\) that whole-plane \(\mathrm{SLE}_{\kappa}\) is _reversible_, meaning invariant in law under conformal automorphisms swapping its endpoints. Miller and Sheffield extended this to \(\kappa\leq 8\). We prove whole-plane \(\mathrm{SLE}_{\kappa}\) is reversible for \(\kappa>8\), resolving the final case and answering a conjecture of Viklund and Wang. Our argument depends on a novel mating-of-trees theorem of independent interest, where Liouville quantum gravity on the disk is decorated by an independent radial space-filling SLE curve.
## 1 Introduction
In the past two decades, Schramm-Loewner evolution (SLE) has emerged as a central object of study in probability theory. SLE is a random fractal curve in the plane [11, 12] describing the scaling limits of many statistical physics models at criticality [13, 14, 15, 16]. It has a parameter \(\kappa>0\): when \(\kappa\in(0,4]\) SLE is a simple curve, when \(\kappa\in(4,8)\) SLE is self-intersecting but not self-crossing, and when \(\kappa\geq 8\) SLE is space-filling. See for instance [13, 14] for expository works on SLE.
For context, we first discuss chordal SLE, a random curve in a simply connected domain \(D\subset\mathbb{C}\) from a boundary point \(x\) to another boundary point \(y\). We say a random curve from \(x\) to \(y\) is _reversible_ if it is invariant in law under conformal automorphisms of \(D\) switching \(x\) and \(y\). More precisely, fixing such a conformal automorphism \(f\), if \(\eta\) is a curve from \(x\) to \(y\) and \(\widetilde{\eta}\) is the time-reversal of \(f\circ\eta\), then reversibility means \(\eta\) and \(\widetilde{\eta}\) agree in law up to monotone reparametrization of time. [12] conjectured that chordal \(\mathrm{SLE}_{\kappa}\) is reversible for \(\kappa\in(0,8]\); at the time of that conjecture, reversibility was already known for \(\kappa\in\{2,8/3,6,8\}\) via scaling limits of lattice models. Reversibility of chordal SLE was proved by Zhan for \(\kappa\in(0,4]\)[15] and by Miller and Sheffield for \(\kappa\in(4,8)\)[16]. On the other hand, for \(\kappa>8\) chordal SLE is not reversible [12, 14].
We now turn to whole-plane \(\mathrm{SLE}_{\kappa}\), a random curve in \(\hat{\mathbb{C}}:=\mathbb{C}\cup\{\infty\}\) from \(0\) to \(\infty\). A random curve from \(0\) to \(\infty\) is _reversible_ if it is invariant in law under conformal automorphisms of \(\hat{\mathbb{C}}\) switching \(0\) and \(\infty\). Zhan proved that whole-plane \(\mathrm{SLE}_{\kappa}\) is reversible for \(\kappa\leq 4\)[15], and Miller and Sheffield proved reversibility for \(\kappa\in(4,8]\)[16]. We resolve the final case of \(\kappa>8\).
**Theorem 1.1**.: _Whole-plane \(\mathrm{SLE}_{\kappa}\) is reversible when \(\kappa>8\)._
Theorem 1.1 is surprising not only because of non-reversibility of chordal \(\mathrm{SLE}_{\kappa}\) for \(\kappa>8\) and non-reversibility of a variant called whole-plane \(\mathrm{SLE}_{\kappa}(\rho)\) for \(\kappa>8\) and \(\rho>\frac{\kappa}{2}-4\)[16, Remark 1.21], but also because it reveals a fundamental property of SLE not apparent through the lens of _imaginary geometry_. The imaginary geometry framework [16, 17, 18, 19] introduced by Miller and Sheffield studies SLE by coupling it with a Gaussian free field, and has proven an essential tool with wide-ranging applications such as [14, 15, 16]. The reversibility of chordal and whole-plane \(\mathrm{SLE}_{\kappa}\) for \(\kappa\leq 8\) can be shown by imaginary geometry [17, 18] (in fact, for \(\kappa\in(4,8)\), this is the only known approach). However, the reversibility of whole-plane \(\mathrm{SLE}_{\kappa}\) for \(\kappa>8\) seems unnatural from the perspective of imaginary geometry since the left and right boundaries of the curve interact in a complicated way [17, Remark 1.22]. The reversibility for whole-plane \(\mathrm{SLE}_{\kappa}(\rho)\) with \(\kappa>8\) and \(\rho\in(-2,\frac{\kappa}{2}-4]\backslash\{0\}\) remains an open problem.
To our knowledge, apart from the illuminating work of [14], there had been no reason to expect the reversibility of whole-plane \(\mathrm{SLE}_{\kappa}\) for \(\kappa>8\). Viklund and Wang proved the inversion invariance of the
\(\kappa\to\infty\) large deviation rate function of whole-plane \(\mathrm{SLE}_{\kappa}\), and consequently conjectured the reversibility of whole-plane \(\mathrm{SLE}_{\kappa}\) for large \(\kappa\). Theorem 1.1 confirms their conjecture.
Our arguments are substantially different from those of Zhan for \(\kappa\leq 4\), who applied commutation relations for SLE [14, 15], and Miller and Sheffield for \(\kappa\leq 8\), who used imaginary geometry. Rather, we employ the _mating-of-trees_ approach [13] where a random planar surface called _Liouville quantum gravity (LQG)_ is coupled with an independent SLE curve. All previously known mating-of-trees theorems [13, 14, 15] involved either chordal SLE or an SLE loop in \(\hat{\mathbb{C}}\) or \(\mathbb{D}\). We establish a mating-of-trees theorem for LQG on the disk coupled with _radial_ SLE, and for LQG on \(\hat{\mathbb{C}}\) coupled with _whole-plane_ SLE, resolving another conjecture of [12]. These novel mating-of-trees theorems are noteworthy in their own right; see for instance the survey [16] for some applications of the mating-of-trees framework.
The starting point of the original mating-of-trees theorem is the _quantum zipper_ coupling of reverse SLE with a certain LQG surface, from which "zooming in" on the base of the curve gives in the limit a _forward_ SLE trace on a _scale-invariant_ LQG surface [13]. All subsequent mating-of-trees theorems were derived from the original by limiting arguments. However, our radial setting is not scale-invariant, nor can it be derived from a scale-invariant picture. Our proof depends on two crucial insights. Firstly, as shown by the first author [14], the quantum zipper describes dynamics on LQG surfaces arising in _Liouville conformal field theory (LCFT)_ [11, 12]. The LCFT perspective allows us to use the quantum zipper without zooming in on a boundary point, giving us access to non-scale-invariant LQG surfaces. See [1, 15, 16, 17, 18, 19, 20, 21] for other works that explore the interplay between LCFT and SLE. Secondly, to pass from reverse SLE to forward SLE, we work with the infinite measure \(\int_{0}^{\infty}\mathrm{raSLE}_{\kappa}^{t}\,dt\) corresponding to "radial SLE run until a Lebesgue-typical capacity time". This allows us to exploit the fixed-time symmetry of forward and reverse radial SLE without fixing a capacity time, which is important since capacity time is unnatural for the quantum zipper.
To prove Theorem 1.1, we first derive a radial mating-of-trees theorem (Theorem 3.1) by building on the LCFT dynamics of [14]. Next, using a limiting argument pinching a disk into a sphere, we obtain a whole-plane mating-of-trees theorem (Theorem 4.1) identifying a two-pointed LQG sphere decorated by an independent whole-plane SLE curve with a 2D Brownian excursion. By the time-reversal symmetry of Brownian motion, the decorated quantum surface is invariant in law when the two points are interchanged and the curve is reversed. We conclude that whole-plane SLE is reversible. Our use of mating-of-trees to prove SLE reversibility is parallel to the arguments of [12] where a "mating-of-trees energy duality" is used to establish inversion invariance of the SLE large deviation functional as \(\kappa\) tends to infinity.
**Outline.** Section 2 gives preliminary background on LQG, Liouville conformal field theory, SLE, and mating-of-trees. In Section 3 we prove a radial mating-of-trees result (Theorem 3.1). In Section 4 we take a limit to obtain a whole-plane mating-of-trees (Theorem 4.1), then use it to prove Theorem 1.1. We mention related results in the literature and list some open questions in Section 5.
**Acknowledgements.** We thank Greg Lawler, Scott Sheffield, Xin Sun, Yilin Wang and Dapeng Zhan for helpful discussions. M.A. was supported by the Simons Foundation as a Junior Fellow at the Simons Society of Fellows. P.Y. was partially supported by NSF grant DMS-1712862. P.Y. thanks IAS for hosting his visit during Fall 2022.
## 2 Preliminaries
In this paper we work with non-probability measures and extend the terminology of ordinary probability to this setting. For a finite or \(\sigma\)-finite measure space \((\Omega,\mathcal{F},M)\), we say \(X\) is a random variable if \(X\) is an \(\mathcal{F}\)-measurable function with its _law_ defined via the push-forward measure \(M_{X}=X_{*}M\). In this case, we say \(X\) is _sampled_ from \(M_{X}\) and write \(M_{X}[f]\) for \(\int f(x)M_{X}(dx)\). _Weighting_ the law of \(X\) by \(f(X)\) corresponds to working with the measure \(\widetilde{M}_{X}\) with Radon-Nikodym derivative \(\frac{d\widetilde{M}_{X}}{dM_{X}}=f\). _Conditioning_ on some event \(E\in\mathcal{F}\) (with \(0<M[E]<\infty\)) refers to the probability measure \(\frac{M[E\cap\cdot]}{M[E]}\) on the measurable space \((E,\mathcal{F}_{E})\) with \(\mathcal{F}_{E}=\{A\cap E:A\in\mathcal{F}\}\), while _restricting_ to \(E\) refers to the measure \(M[E\cap\cdot]\).
### The Gaussian Free Field and Liouville quantum gravity
Let \(m_{\mathbb{D}}\) (resp. \(m_{\mathbb{H}}\)) be the uniform measure on the unit circle \(\partial\mathbb{D}\) (resp. half circle \(\mathbb{H}\cap\partial\mathbb{D}\)). For \(X\in\{\mathbb{D},\mathbb{H}\}\), define the Dirichlet inner product \(\langle f,g\rangle_{\nabla}=(2\pi)^{-1}\int_{X}\nabla f\cdot\nabla g\) on the space \(\{f\in C^{\infty}(X):\int_{X}|\nabla f|^{2}<\infty;\ \int f(z)m_{X}(dz)=0\}\), and let \(H(X)\) be the closure of this space w.r.t. the inner product \(\langle f,g\rangle_{\nabla}\). Let \((f_{n})_{n\geq 1}\) be an orthonormal basis of \(H(X)\), and \((\alpha_{n})_{n\geq 1}\) be a collection of independent standard Gaussian variables. Then the summation
\[h_{X}=\sum_{n=1}^{\infty}\alpha_{n}f_{n}\]
a.s. converges in the space of distributions on \(X\), and \(h_{X}\) is the _Gaussian free field (GFF)_ on \(X\) normalized such that \(\int h_{X}(z)m_{X}(dz)=0\). We denote its law by \(P_{X}\). See [3, Section 4.1.4] for more details.
Let \(|z|_{+}=\max\{|z|,1\}\). For \(z,w\in\bar{\mathbb{H}}\), we define
\[G_{\mathbb{H}}(z,w)=-\log|z-w|-\log|z-\bar{w}|+2\log|z|_{+}+2\log|w|_{+};\quad G _{\mathbb{H}}(z,\infty)=2\log|z|_{+}.\]
Similarly, for \(z,w\in\bar{\mathbb{D}}\), set
\[G_{\mathbb{D}}(z,w)=-\log|z-w|-\log|1-z\bar{w}|.\]
Then the GFF \(h_{X}\) is the centered Gaussian field on \(X\) with covariance structure \(\mathbb{E}[h_{X}(z)h_{X}(w)]=G_{X}(z,w)\).
Now let \(\gamma\in(0,2)\) and \(Q=\frac{2}{\gamma}+\frac{\gamma}{2}\). For a conformal map \(g:D\to\widetilde{D}\) and a generalized function \(h\) on \(D\), define the generalized function \(g\bullet_{\gamma}h\) on \(\widetilde{D}\) by setting
\[g\bullet_{\gamma}h:=h\circ g^{-1}+Q\log|(g^{-1})^{\prime}| \tag{2.1}\]
A quantum surface is a \(\sim_{\gamma}\)-equivalence class of pairs \((D,h)\) where \((D,h)\sim_{\gamma}(\widetilde{D},\widetilde{h})\) if there is a conformal map \(g\) with \(\widetilde{h}=g\bullet_{\gamma}h\). We call a representative \((D,h)\) an _embedding_ of the quantum surface. We will also consider quantum surfaces decorated by points and a curve; in this case we say \((D,h,\eta,(z_{i}))\sim_{\gamma}(\widetilde{D},\widetilde{h},\widetilde{\eta}, (\widetilde{z}_{i}))\) if there is a conformal map \(g:D\to\widetilde{D}\) such that \(g\bullet_{\gamma}h=\widetilde{h}\), \(g\circ\eta=\widetilde{\eta}\), and \(g(z_{i})=\widetilde{z}_{i}\) for all \(i\). As before we call a representative \((D,h,\eta,(z_{i}))\) an embedding of the decorated quantum surface.
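Note that (2.1) is a cocycle: for conformal maps \(f:D\to D^{\prime}\) and \(g:D^{\prime}\to D^{\prime\prime}\), the chain rule \(((g\circ f)^{-1})^{\prime}=((f^{-1})^{\prime}\circ g^{-1})\cdot(g^{-1})^{\prime}\) gives

\[(g\circ f)\bullet_{\gamma}h=h\circ f^{-1}\circ g^{-1}+Q\log\bigl{|}\bigl{(}(f^{-1})^{\prime}\circ g^{-1}\bigr{)}\cdot(g^{-1})^{\prime}\bigr{|}=g\bullet_{\gamma}(f\bullet_{\gamma}h),\]

which is what makes \(\sim_{\gamma}\) transitive, so the notion of quantum surface is well defined.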
For a \(\gamma\)-quantum surface \((D,h)\), its _quantum area measure_\(\mathcal{A}_{h}(dz)\) is defined by taking the weak limit as \(\varepsilon\to 0\) of \(\mathcal{A}_{h_{\varepsilon}}(dz):=\varepsilon^{\frac{\gamma^{2}}{2}}e^{\gamma h_{\varepsilon}(z)}dz\), where \(h_{\varepsilon}(z)\) is the circle average of \(h\) over \(\partial B(z,\varepsilon)\). When \(D=\mathbb{H}\), we can also define the _quantum boundary length measure_\(\mathcal{L}_{h}(dx):=\lim_{\varepsilon\to 0}\varepsilon^{\frac{\gamma^{2}}{4}}e^{\frac{\gamma}{2}h_{\varepsilon}(x)}dx\) where \(h_{\varepsilon}(x)\) is the average of \(h\) over the semicircle \(\{x+\varepsilon e^{i\theta}:\theta\in(0,\pi)\}\). It has been shown in [11, 12] that all these weak limits are well-defined for the GFF and its variants we are considering in this paper, and if \(f\) is a conformal automorphism of \(\mathbb{H}\) then \(f_{*}\mathcal{A}_{h}=\mathcal{A}_{f\bullet_{\gamma}h}\) and \(f_{*}\mathcal{L}_{h}=\mathcal{L}_{f\bullet_{\gamma}h}\). This latter point allows us to define \(\mathcal{A}_{h}\) and \(\mathcal{L}_{h}\) on other domains by conformally mapping to \(\mathbb{H}\).
### The Liouville field
Recall that \(P_{\mathbb{D}}\) (resp. \(P_{\mathbb{H}}\)) is the law of the free boundary GFF on \(\mathbb{D}\) (resp. \(\mathbb{H}\)) normalized to have average zero on \(\partial\mathbb{D}\) (resp. \(\partial\mathbb{D}\cap\mathbb{H}\)). In the following definitions we use the shorthand \(|z|_{+}=\max\{|z|,1\}\) for \(z\in\mathbb{C}\).
**Definition 2.1**.: _Let \((h,\mathbf{c})\) be sampled from \(P_{\mathbb{D}}\times[e^{-Qc}dc]\) and \(\phi=h+\mathbf{c}\). We call \(\phi\) the Liouville field on \(\mathbb{D}\), and we write \(\mathrm{LF}_{\mathbb{D}}\) for the law of \(\phi\)._
**Definition 2.2**.: _Let \((h,\mathbf{c})\) be sampled from \(P_{\mathbb{H}}\times[e^{-Qc}dc]\) and \(\phi=h-2Q\log|z|_{+}+\mathbf{c}\). We call \(\phi\) the Liouville field on \(\mathbb{H}\), and we write \(\mathrm{LF}_{\mathbb{H}}\) for the law of \(\phi\)._
**Definition 2.3**.: _Let \((\alpha,w)\in\mathbb{R}\times\mathbb{H}\) and \((\beta,s)\in\mathbb{R}\times\partial\mathbb{H}\). Let_
\[C_{\mathbb{H}}^{(\alpha,w),(\beta,s)}=(2\,\mathrm{Im}\,w)^{-\frac{\alpha^{2}}{ 2}}|w|_{+}^{-2\alpha(Q-\alpha)}|s|_{+}^{-\beta(Q-\frac{\beta}{2})}e^{\frac{ \alpha\beta}{2}G_{\mathbb{H}}(w,s)}.\]
_Let \((h,\mathbf{c})\) be sampled from \(C_{\mathbb{H}}^{(\alpha,w),(\beta,s)}P_{\mathbb{H}}\times[e^{(\alpha+\frac{ \beta}{2}-Q)c}dc]\), and_
\[\phi(z)=h(z)-2Q\log|z|_{+}+\alpha G_{\mathbb{H}}(z,w)+\frac{\beta}{2}G_{ \mathbb{H}}(z,s)+\mathbf{c}.\]
_We write \(\mathrm{LF}_{\mathbb{H}}^{(\alpha,w),(\beta,s)}\) for the law of \(\phi\) and call a sample from \(\mathrm{LF}_{\mathbb{H}}^{(\alpha,w),(\beta,s)}\) the Liouville field on \(\mathbb{H}\) with insertions \((\alpha,w),(\beta,s)\)._
**Definition 2.4**.: _Let \(\alpha,\alpha_{1},\beta\in\mathbb{R}\), \(w\in\mathbb{D}\) and \(s\in\partial\mathbb{D}\). Let_
\[C^{(\alpha,0),(\alpha_{1},w),(\beta,s)}_{\mathbb{D}}=(1-|w|^{2})^{-\frac{\alpha _{1}^{2}}{2}}e^{\alpha_{1}\alpha G_{\mathbb{D}}(0,w)+\frac{\alpha_{1}\beta}{2}G _{\mathbb{D}}(s,w)}.\]
_Let \((h,\mathbf{c})\) be sampled from \(C^{(\alpha,0),(\alpha_{1},w),(\beta,s)}_{\mathbb{D}}P_{\mathbb{D}}\times[e^{(\alpha+\alpha_{1}+\frac{\beta}{2}-Q)c}dc]\) and_
\[\phi(z)=h(z)+\alpha G_{\mathbb{D}}(z,0)+\alpha_{1}G_{\mathbb{D}}(z,w)+\frac{ \beta}{2}G_{\mathbb{D}}(z,s)+\mathbf{c}.\]
_We call \(\phi\) the Liouville field on \(\mathbb{D}\) with insertions \((\alpha,0),(\alpha_{1},w),(\beta,s)\) and write \(\operatorname{LF}^{(\alpha,0),(\alpha_{1},w),(\beta,s)}_{\mathbb{D}}\) for the law of \(\phi\)._
As we will see later in Lemma 2.9, the Liouville fields introduced for \(\mathbb{H}\) and \(\mathbb{D}\) agree up to conformal coordinate change.
We now state the conformal covariance in \(\mathbb{H}\). For a conformal map \(f:D\to\widetilde{D}\) and a measure \(M\) on \(H^{-1}(D)\), let \(f_{*}M\) be the pushforward of \(M\) under the LQG coordinate change map \(\phi\mapsto f\bullet_{\gamma}\phi\). For \(\alpha\in\mathbb{R}\), we set \(\Delta_{\alpha}=\frac{\alpha}{2}(Q-\frac{\alpha}{2})\).
**Lemma 2.5**.: _Let \((\alpha,w)\in\mathbb{R}\times\mathbb{H}\) and \((\beta,s)\in\mathbb{R}\times\mathbb{R}\). Suppose \(f:\mathbb{H}\to\mathbb{H}\) is a conformal map, such that \(f(s)\neq\infty\). Then_
\[\operatorname{LF}^{(\alpha,f(w)),(\beta,f(s))}_{\mathbb{H}}=|f^{\prime}(w)|^{- 2\Delta_{\alpha}}|f^{\prime}(s)|^{-\Delta_{\beta}}f_{*}\mathrm{LF}^{(\alpha,w ),(\beta,s)}_{\mathbb{H}}.\]
_In particular, when \(f(s)=s=0\), \(f(w)=i\), we have_
\[\operatorname{LF}^{(\alpha,i),(\beta,0)}_{\mathbb{H}}=(\operatorname{Im}w)^{2 \Delta_{\alpha}-\Delta_{\beta}}|w|^{2\Delta_{\beta}}f_{*}\mathrm{LF}^{(\alpha, w),(\beta,s)}_{\mathbb{H}}. \tag{2.2}\]
Proof.: This statement is proved in [18, Theorem 3.5]; see [1, Lemma 2.4] for an explanation.
Now, we define the LCFT measure \(\operatorname{LF}^{(\alpha,0),(\beta,1)}_{\mathbb{D},\ell}\) having fixed boundary length \(\ell\).
**Definition 2.6**.: _Let \(\alpha\in\mathbb{R},\beta<Q\). Let \(h\) be a sample from \(P_{\mathbb{D}}\) and set_
\[\widetilde{h}(z)=h+\alpha G_{\mathbb{D}}(z,0)+\frac{\beta}{2}G_{\mathbb{D}}(z,1).\]
_Fix \(\ell>0\), and let \(L=\mathcal{L}_{\widetilde{h}}(\partial\mathbb{D})\). Define the measure \(\operatorname{LF}^{(\alpha,0),(\beta,1)}_{\mathbb{D},\ell}\) to be the law of \(\widetilde{h}+\frac{2}{\gamma}\log\frac{\ell}{L}\) under the reweighted measure \(\frac{2}{\gamma}\frac{\ell^{\frac{2\alpha+\beta-2Q}{\gamma}-1}}{L^{\frac{2 \alpha+\beta-2Q}{\gamma}}}P_{\mathbb{D}}(dh)\)._
**Lemma 2.7**.: _In the setting of Definition 2.6, \(\{\operatorname{LF}^{(\alpha,0),(\beta,1)}_{\mathbb{D},\ell}\}_{\ell>0}\) is a disintegration of \(\operatorname{LF}^{(\alpha,0),(\beta,1)}_{\mathbb{D}}\) over its boundary length. That is, any sample \(\phi\) from \(\operatorname{LF}^{(\alpha,0),(\beta,1)}_{\mathbb{D},\ell}\) has \(\mathcal{L}_{\phi}(\partial\mathbb{D})=\ell\), and_
\[\operatorname{LF}^{(\alpha,0),(\beta,1)}_{\mathbb{D}}=\int_{0}^{\infty} \operatorname{LF}^{(\alpha,0),(\beta,1)}_{\mathbb{D},\ell}d\ell. \tag{2.3}\]
_Moreover, if \(\alpha+\frac{\beta}{2}>Q\), we have \(|\operatorname{LF}^{(\alpha,0),(\beta,1)}_{\mathbb{D},\ell}|=C\ell^{\frac{2 \alpha+\beta-2Q}{\gamma}-1}\) for some finite constant \(C\)._
Proof.: First, \(\mathcal{L}_{\phi}(\partial\mathbb{D})=\mathcal{L}_{\widetilde{h}+\frac{2}{ \gamma}\log\frac{\ell}{L}}(\partial\mathbb{D})=\frac{\ell}{L}\mathcal{L}_{ \widetilde{h}}(\partial\mathbb{D})=\ell\). Next, for any nonnegative measurable function \(F\) on \(H^{-1}(\mathbb{D})\),
\[\int_{0}^{\infty}\int F(\widetilde{h}+\frac{2}{\gamma}\log\frac{\ell}{L})\frac {2}{\gamma}\frac{\ell^{\frac{2\alpha+\beta-2Q}{\gamma}-1}}{L^{\frac{2\alpha+ \beta-2Q}{\gamma}}}P_{\mathbb{D}}(dh)d\ell=\int_{\mathbb{R}}\int F(\widetilde{h }+c)e^{(\alpha+\frac{\beta}{2}-Q)c}P_{\mathbb{D}}(dh)dc\]
using Fubini's theorem and the change of variables \(c=\frac{2}{\gamma}\log\frac{\ell}{L}\). This justifies (2.3). For the last claim,
\[\begin{split}\operatorname{LF}^{(\alpha,0),(\beta,1)}_{\mathbb{D}} [\{\mathcal{L}_{\phi}(\partial\mathbb{D})\in[a,b]\}]&=\int\int 1_{e^{\frac{2}{\gamma}c}L\in[a,b]}e^{(\alpha+\frac{\beta}{2}-Q)c}P_{\mathbb{D} }(dh)dc\\ &=\frac{2}{\gamma}\int L^{-\frac{2\alpha+\beta-2Q}{\gamma}}P_{ \mathbb{D}}(dh)\cdot\int_{a}^{b}\ell^{\frac{2\alpha+\beta-2Q}{\gamma}-1}d\ell \end{split} \tag{2.4}\]
where we used the change of variables \(\ell=e^{\frac{\gamma}{2}c}L\). Since \(\alpha+\frac{\beta}{2}>Q\), the integral \(\int L^{-\frac{2\alpha+\beta-2Q}{\gamma}}P_{\mathbb{D}}(dh)\) is finite (see e.g. [18, 19]) and the claim then follows.
As we see next, sampling a point from the LQG area measure corresponds to adding an LCFT insertion of size \(\gamma\). Recall \(\mathcal{A}_{\phi}(dz)\) denotes the quantum area measure.
**Lemma 2.8**.: _Let \(w\in\mathbb{D}\), \(\alpha,\beta\in\mathbb{R}\) and \(s\in\partial\mathbb{D}\). Then we have_
\[\mathcal{A}_{\phi}(dz)\mathrm{LF}_{\mathbb{D}}^{(\alpha,0),(\beta,s)}(d\phi)= \mathrm{LF}_{\mathbb{D}}^{(\alpha,0),(\beta,s),(\gamma,z)}(d\phi)dz.\]
Proof.: The proof is identical to that of [10, Proposition 2.5].
Finally, the Liouville fields on \(\mathbb{H}\) and \(\mathbb{D}\) agree up to coordinate change; we now verify the case that we need for this paper.
**Lemma 2.9**.: _Let \(\alpha,\beta\in\mathbb{R}\) with \(\alpha+\frac{\beta}{2}=Q\). For \(w\in\mathbb{H}\), let \(g:\mathbb{H}\to\mathbb{D}\) be a conformal map with \(g(w)=0\) and \(g(0)=1\). Then_
\[\mathrm{LF}_{\mathbb{D}}^{(\alpha,0),(\beta,1)}=2^{\frac{\alpha^{2}}{2}}( \mathrm{Im}\,w)^{2\Delta_{\alpha}-\Delta_{\beta}}|w|^{2\Delta_{\beta}}g_{*} \mathrm{LF}_{\mathbb{H}}^{(\alpha,w),(\beta,0)}. \tag{2.5}\]
Proof.: We will show the claim for \(w=i\), then the general case follows by using Lemma 2.5.
Let \(g:\mathbb{H}\to\mathbb{D}\) be the conformal map such that \(g(i)=0,g(0)=1\). Explicitly, it is given by \(g(z)=\frac{i-z}{i+z}\). By the conformal invariance of the free boundary GFF viewed as a distribution modulo additive constant, if \((h_{\mathbb{H}},\mathbf{c}_{\mathbb{H}})\sim P_{\mathbb{H}}\times dc\) and \((h_{\mathbb{D}},\mathbf{c}_{\mathbb{D}})\sim P_{\mathbb{D}}\times dc\), then \(h_{\mathbb{H}}+\mathbf{c}_{\mathbb{H}}\overset{d}{=}(h_{\mathbb{D}}+\mathbf{ c}_{\mathbb{D}})\circ g\). Next, using the formulas for \(G_{\mathbb{H}}\) and \(G_{\mathbb{D}}\) in Section 2.1, one can directly check that for some constant \(C\), we have
\[\alpha G_{\mathbb{H}}(\cdot,i)+\frac{\beta}{2}G_{\mathbb{H}}(\cdot,0)-2Q\log| \cdot|_{+}=(\alpha G_{\mathbb{D}}(\cdot,0)+\frac{\beta}{2}G_{\mathbb{D}}( \cdot,1))\circ g+Q\log|g^{\prime}|+C\quad\text{ for all }z\in\mathbb{H}.\]
Combining this with the translation invariance of Lebesgue measure, we conclude that
\[g\bullet_{\gamma}(h_{\mathbb{H}}+\mathbf{c}_{\mathbb{H}}+\alpha G_{\mathbb{H }}(\cdot,i)+\frac{\beta}{2}G_{\mathbb{H}}(\cdot,0)-2Q\log|\cdot|_{+})\overset {d}{=}h_{\mathbb{D}}+\mathbf{c}_{\mathbb{D}}+\alpha G_{\mathbb{D}}(\cdot,0)+ \frac{\beta}{2}G_{\mathbb{D}}(\cdot,1).\]
Thus (2.5) holds for \(w=i\), as needed.
### Forward and reverse SLE
In this section we briefly recall the forward and reverse radial SLE processes, and whole-plane SLE. We will not give precise definitions since they will not be used later, but curious readers can refer to [10].
Forward radial SLE\({}_{\kappa}\) in \(\mathbb{D}\) from \(1\) to \(0\) is a random non-self-crossing curve \(\eta:[0,\infty)\to\overline{\mathbb{D}}\) with \(\eta(0)=1\) and \(\lim_{t\to\infty}\eta(t)=0\). Let \(K_{t}\) be the compact subset of \(\overline{\mathbb{D}}\) such that \(\overline{\mathbb{D}}\backslash K_{t}\) is the connected component of \(\mathbb{D}\backslash\eta([0,t])\) containing \(0\), and let \(g_{t}:\mathbb{D}\backslash K_{t}\to\mathbb{D}\) be the conformal map with \(g_{t}(0)=0\) and \(g_{t}^{\prime}(0)>0\). The curve \(\eta\) is parametrized by log conformal radius, meaning that for each \(t\) we have \(g_{t}^{\prime}(0)=e^{t}\). It turns out that there is a random process \(U_{t}\overset{d}{=}e^{i\sqrt{\kappa}B_{t}}\) (where \(B_{t}\) is standard Brownian motion) such that
\[dg_{t}(z)=\Phi(U_{t},g_{t}(z))\,dt\quad\text{ for }z\in\mathbb{D}\backslash K_{t} \text{ and }\Phi(u,z):=z\frac{u+z}{u-z}. \tag{2.6}\]
In fact, (2.6) and the initial condition \(g_{0}(z)=z\) define the family of conformal maps \((g_{t})_{t\geq 0}\) and hence radial SLE\({}_{\kappa}\), see [10] for details.
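For intuition, (2.6) is straightforward to discretize in time. The following is a minimal Euler sketch (the step size and the absorption threshold are ad hoc choices, not taken from the text) which tracks \(g_{t}(z)\) on an array of sample points until they are swallowed by the hull:

```python
import numpy as np

def radial_loewner_flow(z0, kappa, T, n_steps, seed=0):
    """Euler scheme for dg_t(z) = Phi(U_t, g_t(z)) dt with
    Phi(u, z) = z (u + z) / (u - z) and driving U_t = exp(i sqrt(kappa) B_t).
    Points absorbed by the hull K_t are flagged in `alive`."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = np.array(z0, dtype=complex)
    alive = np.ones(z.shape, dtype=bool)
    B = 0.0
    for _ in range(n_steps):
        U = np.exp(1j * np.sqrt(kappa) * B)
        # flag points reaching the singularity at the driving point U_t
        alive &= np.abs(U - z) > 1e-3
        z[alive] += z[alive] * (U + z[alive]) / (U - z[alive]) * dt
        B += np.sqrt(dt) * rng.standard_normal()
    return z, alive

pts, alive = radial_loewner_flow([0.5 + 0.3j, -0.2 + 0.6j],
                                 kappa=10.0, T=1.0, n_steps=20_000)
```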
Similarly, whole-plane SLE\({}_{\kappa}\) is a random non-self-crossing curve \(\eta:(-\infty,\infty)\to\mathbb{C}\) from \(0\) to \(\infty\), such that if \(K_{t}\) is the compact set such that \(\mathbb{C}\backslash K_{t}\) is the unbounded connected component of \(\mathbb{C}\backslash\eta((-\infty,t))\), and \(g_{t}:\mathbb{C}\backslash K_{t}\to\mathbb{C}\backslash\mathbb{D}\) is the conformal map such that \(g_{t}(\infty)=\infty\) and \(g_{t}^{\prime}(\infty)>0\), then
\[dg_{t}(z)=\Phi(U_{t},g_{t}(z))\,dt\quad\text{ for }z\in\mathbb{C}\backslash K_{t}\]
where \(U_{t}\overset{d}{=}e^{i\sqrt{\kappa}B_{t}}\) and \((B_{t})_{t\in\mathbb{R}}\) is two-sided standard Brownian motion. This curve extends continuously to its starting and ending points, i.e. \(\lim_{t\to-\infty}\eta(t)=0\) and \(\lim_{t\to\infty}\eta(t)=\infty\)[10, 13].
Now we discuss _centered reverse_ radial SLE. Unlike the forward case where we have a single random curve, centered reverse radial SLE is a random process of curves \((\eta_{t})_{t\geq 0}\). Each curve \(\eta_{t}:[0,t]\to\overline{\mathbb{D}}\) is parametrized by log conformal radius and has starting point \(\eta_{t}(0)=1\), and \((\eta_{t})_{t\geq 0}\) satisfies the compatibility relation that for \(s<t\), if \(\widetilde{f}_{s,t}\) is the conformal map from \(\mathbb{D}\) to the connected component of \(\mathbb{D}\backslash\eta_{t}([0,t-s])\) containing \(0\) such that \(\widetilde{f}_{s,t}(1)=\eta_{t}(t-s)\) and \(\widetilde{f}_{s,t}(0)=0\), then \(\eta_{s}=\widetilde{f}_{s,t}^{-1}\circ\eta_{t}(\cdot+t-s)|_{[0,s]}\)
Informally, this compatibility relation means that the process \((\eta_{t})_{t\geq 0}\) grows from the base of the curve. We call \(\widetilde{f}_{0,t}\) the centered reverse Loewner map. The process \((\eta_{t})_{t\geq 0}\) satisfies the stochastic differential equation
\[d\widetilde{f}_{0,t}(z)=-i\sqrt{\kappa}\widetilde{f}_{0,t}(z)dB_{t}-\Phi(1, \widetilde{f}_{0,t}(z))\,dt\text{ for }z\in\overline{\mathbb{D}}. \tag{2.7}\]
One can show via the time-reversal symmetry of Brownian motion that for each fixed \(t\), the curve \(\eta_{t}\) has the law of forward radial SLE run for time \(t\).
For \(z_{0}\in\mathbb{H}\) and \(\rho\in\mathbb{R}\), there is also a random process \((\eta_{t})_{t\geq 0}\) called _centered reverse chordal_ SLE\({}_{\kappa}(\rho)\)_with force point at \(z_{0}\)_ (see e.g. [16, Section 4.3], [14, Section 3.3.1]). Each \(\eta_{t}:[0,t]\to\overline{\mathbb{H}}\) is parametrized by half-plane capacity, has \(\eta_{t}(0)=0\), and satisfies a compatibility relation analogous to that of the radial case. It is defined by a stochastic differential equation similar to (2.7) which we omit here. For each \(t>0\) let \(\widetilde{f}_{\mathbb{H},t}:\mathbb{H}\to\mathbb{H}\backslash\eta_{t}\) be the conformal map with \(\widetilde{f}_{\mathbb{H},t}(0)=\eta_{t}(t)\) and \(\widetilde{f}_{\mathbb{H},t}(z)=z+O(1)\) as \(z\to\infty\); we call \(\widetilde{f}_{\mathbb{H},t}\) the centered reverse Loewner map.
Finally, [16, Theorem 4.6]1 gives a change of coordinates result for reverse chordal SLE:
Footnote 1: They use a different notation for weights of force points, see Remark 2 immediately after [16, Corollary 4.8].
**Lemma 2.10**.: _Fix \(\kappa>0\). Let \((\eta_{t})_{t\geq 0}\) be a centered reverse chordal \(\mathrm{SLE}_{\kappa}(\kappa+6)\) process with force point at \(\widetilde{z}_{0}\in\mathbb{H}\). Let \(\widetilde{f}_{t}\) be its associated reverse centered Loewner map. Let \(\varphi_{0}:\mathbb{H}\to\mathbb{D}\) be the conformal map with \(\varphi_{0}(\widetilde{z}_{0})=0\) and \(\varphi_{0}(0)=1\), and \(\varphi_{t}:\mathbb{H}\to\mathbb{D}\) the conformal map such that \(\varphi_{t}(\widetilde{f}_{t}(\widetilde{z}_{0}))=0\) and \(\varphi_{t}(0)=1\). Let \(\eta_{t}^{\prime}\) be \(\varphi_{t}\circ\eta_{t}\) parametrized by log conformal radius. Then up to a time change, \((\eta_{t}^{\prime})_{t\geq 0}\) has the law of centered reverse radial \(\mathrm{SLE}_{\kappa}\) stopped at the time \(\varphi_{0}(\infty)\) hits the driving function, i.e. the first time \(s\) when \(\widetilde{f}_{0,s}(\varphi_{0}(\infty))=1\) where \(\widetilde{f}_{0,s}\) is the centered reverse Loewner map of the reverse radial \(\mathrm{SLE}_{\kappa}\)._
### Chordal mating-of-trees and special quantum surfaces
In this section we state the chordal mating-of-trees theorem of [14], and recall the definition of the _quantum cone_ from [14, 15] and the _quantum cell_ from [14].
Let \(\mathcal{C}=(\mathbb{R}\times[0,2\pi])/{\sim}\) be the horizontal cylinder obtained by gluing the upper and lower boundaries of the strip via the identification \(x\sim x+2\pi i\). We define the GFF on \(\mathcal{C}\) as in Section 2.1, with \(m_{\mathcal{C}}\) the uniform measure on \((\{0\}\times[0,2\pi])/{\sim}\), and likewise define the Hilbert space \(H(\mathcal{C})\). As explained in, e.g., [14, Section 4.1.7], we may decompose \(H(\mathcal{C})=H_{\mathrm{av}}(\mathcal{C})\oplus H_{\mathrm{lat}}(\mathcal{C})\), where \(H_{\mathrm{av}}(\mathcal{C})\) (resp. \(H_{\mathrm{lat}}(\mathcal{C})\)) is the subspace of functions which are constant (resp. have mean \(0\)) on \(\{t\}\times[0,2\pi]\) for each \(t\in\mathbb{R}\). This gives a decomposition \(h_{\mathcal{C}}=h_{\mathrm{av}}+h_{\mathrm{lat}}\) of \(h_{\mathcal{C}}\) into two independent components.
Now we introduce the \(\gamma\)-LQG surfaces called quantum cones.
**Definition 2.11** (\(\alpha\)-quantum cone).: _Fix \(\alpha<Q\). Suppose \(\psi_{\mathrm{av}}\) and \(\psi_{\mathrm{lat}}\) are independent distributions on \(\mathcal{C}\) such that:_
* _We have_ \(\psi_{\mathrm{av}}(z)=X_{\mathrm{Re}\,z}\) _for_ \(z\in\mathcal{C}\)_, where_ \[X_{t}:=\left\{\begin{array}{ccc}B_{t}-(Q-\alpha)t&\text{for}&t\geq 0\\ \widetilde{B}_{-t}+(Q-\alpha)t&\text{for}&t<0\end{array}\right.\] (2.8) _and_ \((B_{t})_{t\geq 0}\) _and_ \((\widetilde{B}_{t})_{t\geq 0}\) _are independent standard Brownian motions conditioned on_ \(\widetilde{B}_{t}-(Q-\alpha)t<0\) _for all_ \(t>0\)_2;_ Footnote 2: This conditioning can be made sense of via Bessel processes; see e.g. [14, Section 4.2].
* \(\psi_{\mathrm{lat}}\) _has the same law as_ \(h_{\mathrm{lat}}\)_._
_Set \(\psi=\psi_{\mathrm{av}}+\psi_{\mathrm{lat}}\). We call \((\mathcal{C},\psi,-\infty,+\infty)/{\sim_{\gamma}}\) an \(\alpha\)-quantum cone._
For \(\kappa>4\), there is a random curve in \(\mathbb{C}\) called _space-filling \(\mathrm{SLE}_{\kappa}\) from \(\infty\) to \(\infty\)_. It is defined via the imaginary geometry flow lines of a whole-plane GFF. Space-filling \(\mathrm{SLE}_{\kappa}\) from \(\infty\) to \(\infty\) is reversible since its construction is symmetric. Moreover, if \(\kappa\geq 8\), for each \(z\in\mathbb{C}\) the regions covered by the curve before and after hitting \(z\) are simply connected, and conditioned on the curve up until it hits \(z\), it subsequently evolves as chordal \(\mathrm{SLE}_{\kappa}\) from \(z\) to \(\infty\) in the complementary domain. This follows from the flow line construction of space-filling \(\mathrm{SLE}_{\kappa}\), see [14, Section 1.2.3] for more details.
We are ready to state the mating-of-trees theorem [14, Theorem 1.9, Theorem 1.11]. We shall focus on the \(\kappa>8\) regime.
**Theorem 2.12**.: _Let \(\kappa>8\) and \(\gamma=\frac{4}{\sqrt{\kappa}}\). Let \((\mathbb{C},\phi,0,\infty)\) be an embedding of a \(\gamma\)-quantum cone and \(\eta\) an independent space-filling \(\mathrm{SLE}_{\kappa}\) curve from \(\infty\) to \(\infty\), and we reparameterize \(\eta\) by the \(\gamma\)-LQG measure, in the sense that \(\eta(0)=0\) and \(\mathcal{A}_{\phi}(\eta([s,t]))=t-s\) for \(-\infty<s<t<\infty\). Define \(X_{t}^{-},X_{t}^{+},Y_{t}^{-},Y_{t}^{+}\) as in Figure 1 (left, middle) and let \(X_{t}:=X_{t}^{+}-X_{t}^{-}\) and \(Y_{t}:=Y_{t}^{+}-Y_{t}^{-}\). Then \((X_{t},Y_{t})_{t\in\mathbb{R}}\) is a correlated two-sided two-dimensional Brownian motion with \(X_{0}=Y_{0}=0\), with covariance_
\[\mathrm{var}(X_{t})=\mathrm{var}(Y_{t})=\mathrm{a}^{2}|t|;\quad \mathrm{cov}(X_{t},Y_{t})=-\cos(\frac{4\pi}{\kappa})\mathrm{a}^{2}|t|\quad \text{where }\mathrm{a}^{2}:=\frac{2}{\sin(\frac{4\pi}{\kappa})}. \tag{2.9}\]
_Moreover, the pair \((X,Y)\) a.s. determines the decorated quantum surface \((\mathbb{C},\phi,\eta,0,\infty)/{\sim_{\gamma}}\)._
We can interpret \(X_{t}\) (resp. \(Y_{t}\)) as the change in the quantum length of the left (resp. right) boundary of \(\eta\) relative to time \(0\). The covariance in (2.9) was computed in [10] while the constant a was obtained in [1].
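The boundary length process is also easy to sample directly from (2.9). The following minimal sketch (discretization choices are ad hoc) draws the increments of \((X_{t},Y_{t})\) via a Cholesky factor of the covariance matrix:

```python
import numpy as np

def mot_brownian(kappa, T, n, seed=0):
    """Sample (X_t, Y_t) on [0, T]: 2D Brownian motion with covariance (2.9)."""
    rng = np.random.default_rng(seed)
    a2 = 2.0 / np.sin(4 * np.pi / kappa)
    c = -np.cos(4 * np.pi / kappa)       # correlation, in (-1, 0) for kappa > 8
    cov = a2 * np.array([[1.0, c], [c, 1.0]])
    L = np.linalg.cholesky(cov)
    incr = np.sqrt(T / n) * (L @ rng.standard_normal((2, n)))
    return np.concatenate([np.zeros((2, 1)), np.cumsum(incr, axis=1)], axis=1)

XY = mot_brownian(kappa=12.0, T=1.0, n=10_000)   # rows: X and Y
```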
Let \((\phi,\eta)\) and \((X,Y)\) be as in the statement of Theorem 2.12. For each \(a>0\), let \(D_{a}=\eta([0,a])\), \(p=\eta(0)=0\) and \(q=\eta(a)\). Let \(x_{L}\) (resp. \(x_{R}\)) be the last point on the left (resp. right) boundary arc of \(\eta((-\infty,0])\) hit by \(\eta\) before time \(a\). See Figure 1 (right).
**Definition 2.13**.: _We call the \(\mathrm{SLE}_{\kappa}\)-decorated quantum surface \(\mathcal{C}_{a}:=(D_{a},h,\eta|_{[0,a]};p,q,x_{L},x_{R})/{\sim_{\gamma}}\) an area \(a\) quantum cell, and denote its law by \(P_{a}\). We call \((X_{t},Y_{t})_{[0,a]}\) its boundary length process, and \(X_{a}^{-}=-\inf_{0<t<a}X_{t}\), \(X_{a}^{+}=X_{a}+X_{a}^{-}\), \(Y_{a}^{-}=-\inf_{0<t<a}Y_{t}\), \(Y_{a}^{+}=Y_{a}+Y_{a}^{-}\) its boundary lengths._
Note that the quantum length of the arc between \(p\) and \(x_{L}\) (resp. \(x_{R}\)) is \(X_{a}^{-}\) (resp. \(Y_{a}^{-}\)), and the quantum length of the arc between \(q\) and \(x_{L}\) (resp. \(x_{R}\)) is \(X_{a}^{+}\) (resp. \(Y_{a}^{+}\)). [11] gives a different but equivalent definition of the quantum cell in terms of the so-called weight \(2-\frac{\gamma^{2}}{2}\) quantum wedge; the equivalence follows from the fact that in the setting of Theorem 2.12, the quantum surface \((\eta((0,\infty)),\phi,0,\infty)/{\sim}\) has the law of the weight \(2-\frac{\gamma^{2}}{2}\) quantum wedge [14, Theorem 1.9].
By [11, Remark 2.9], \(\mathcal{C}_{a}\) is measurable with respect to \((D_{a},h,\eta|_{[0,a]})/{\sim_{\gamma}}\) since \(\kappa>8\), and therefore we will often omit the marked points of \(\mathcal{C}_{a}\) for notational simplicity. The quantum surface \((D_{a},h,\eta|_{[0,a]})/{\sim_{\gamma}}\) is measurable with respect to \((X_{t},Y_{t})_{0\leq t\leq a}\)[1, Lemma 2.17], and we denote the map sending \((X_{t},Y_{t})_{0\leq t\leq a}\) to \((D_{a},h,\eta|_{[0,a]})/{\sim_{\gamma}}\) by \(F\). We now give two properties of \(F\).
**Lemma 2.14** (Reversibility of \(F\)).: _Fix \(a>0\), sample \(\mathcal{C}_{a}=(D,h,\eta)/{\sim_{\gamma}}\) from \(P_{a}\), and let \((X_{t},Y_{t})_{[0,a]}\) be its boundary length process, so \(F((X_{t},Y_{t})_{[0,a]})=\mathcal{C}_{a}\) a.s. Let \(\widetilde{\mathcal{C}}_{a}=(D,h,\widetilde{\eta})/{\sim_{\gamma}}\) where \(\widetilde{\eta}\) is the time-reversal of \(\eta\), and let \((\widetilde{X}_{t},\widetilde{Y}_{t})_{[0,a]}=(X_{a-t},Y_{a-t})_{[0,a]}\) be the time-reversal of \((X_{t},Y_{t})_{[0,a]}\). Then \(F((\widetilde{X}_{t},\widetilde{Y}_{t})_{[0,a]})=\widetilde{\mathcal{C}}_{a}\) a.s._
Proof.: Let \((\mathbb{C},h,0,\infty)\) be an embedding of a \(\gamma\)-quantum cone and let \(\eta\) be an independent space-filling \(\mathrm{SLE}_{\kappa}\) from \(\infty\) to \(\infty\) in \(\mathbb{C}\) parametrized by quantum area such that \(\eta(0)=0\). Let \(\mathcal{C}_{a}=(\eta([0,a]),h,\eta|_{[0,a]})/{\sim_{\gamma}}\) so the law of \(\mathcal{C}_{a}\) is \(P_{a}\), and let \((X_{t},Y_{t})_{[0,a]}\) be its boundary length process. Let \(\widetilde{\eta}\) be the time-reversal of \(\eta\), then by
the reversibility of space-filling \(\mathrm{SLE}_{\kappa}\) from \(\infty\) to \(\infty\) in \(\mathbb{C}\) we have \((\mathbb{C},h,\eta,0,\infty)/{\sim_{\gamma}}\stackrel{{ d}}{{=}}(\mathbb{C},h,\widetilde{\eta},0,\infty)/{\sim_{\gamma}}\). Let \(\widetilde{\eta}^{\prime}(\cdot)=\widetilde{\eta}(\cdot-a)\) (so \(\widetilde{\eta}^{\prime}|_{[0,a]}\) is the time-reversal of \(\eta|_{[0,a]}\)), then [14, Lemma 8.3] implies \((\mathbb{C},h,\widetilde{\eta},0,\infty)/{\sim_{\gamma}}\stackrel{{ d}}{{=}}(\mathbb{C},h,\widetilde{\eta}^{\prime},\eta(a),\infty)/{\sim_{\gamma}}\), that is, \((\mathbb{C},h,\widetilde{\eta}^{\prime},\eta(a),\infty)/{\sim_{\gamma}}\) is a quantum cone decorated by an independent SLE from \(\infty\) to \(\infty\) in \(\mathbb{C}\). We conclude that the law of \(\widetilde{\mathcal{C}}_{a}\) is also \(P_{a}\), and directly from the definition of the boundary length process, the boundary length process of \(\widetilde{\mathcal{C}}_{a}\) is \((\widetilde{X}_{t},\widetilde{Y}_{t})_{[0,a]}\), so \(F((\widetilde{X}_{t},\widetilde{Y}_{t})_{[0,a]})=\widetilde{\mathcal{C}}_{a}\) a.s.
**Lemma 2.15** (Concatenation compatibility of \(F\)).: _Let \(a_{1},a_{2}>0\), and let \((X_{t},Y_{t})_{t\in\mathbb{R}}\) be as in (2.9). Let \(\mathcal{C}_{1}=F((X_{t},Y_{t})_{[0,a_{1}]})\), let \(\mathcal{C}_{2}=F((X_{t+a_{1}}-X_{a_{1}},Y_{t+a_{1}}-Y_{a_{1}})_{[0,a_{2}]})\), and let \(\mathcal{C}=F((X_{t},Y_{t})_{[0,a_{1}+a_{2}]})\). Almost surely, \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) are the curve-decorated quantum surfaces obtained from \(\mathcal{C}\) by restricting to the domains parametrized by its curve on the time intervals \([0,a_{1}]\) and \([a_{1},a_{1}+a_{2}]\)._
Proof.: This is immediate from the definition of \(F\) and the fact that if \((\mathbb{C},\phi,0,\infty)\) is an embedding of a \(\gamma\)-quantum cone and \(\eta\) is an independent space-filling SLE\({}_{\kappa}\) from \(\infty\) to \(\infty\) parametrized by quantum area, then \((\mathbb{C},\phi,\eta,0,\infty)/{\sim_{\gamma}}\stackrel{{ d}}{{=}}( \mathbb{C},\phi,\eta(\cdot+a_{1}),\eta(a_{1}),\infty)/{\sim_{\gamma}}\)[14, Lemma 8.3].
Finally, we recall the definition of the quantum sphere of [14].
**Definition 2.16**.: _Let \(\alpha<Q\). Let \((B_{s})_{s\geq 0}\) be a standard Brownian motion conditioned on \(B_{s}-(Q-\alpha)s<0\) for all \(s>0\), and let \((\widetilde{B}_{s})_{s\geq 0}\) be an independent copy of \((B_{s})_{s\geq 0}\). Let_
\[Y_{t}=\left\{\begin{array}{ll}B_{t}-(Q-\alpha)t&\text{ if }t\geq 0\\ \widetilde{B}_{-t}+(Q-\alpha)t&\text{ if }t<0\end{array}\right.\]
_Let \(h^{1}(z)=Y_{\operatorname{Re}z}\) for \(z\in\mathcal{C}\), and let \(h^{2}\) be independent of \(h^{1}\) and have the law of the lateral component of the GFF on \(\mathcal{C}\). Let \(\tilde{h}=h^{1}+h^{2}\). Let \(\mathbf{c}\in\mathbb{R}\) be independently sampled from \(\frac{\gamma}{2}e^{2(\alpha-Q)c}\,dc\). Let \(\mathcal{M}_{2}^{\operatorname{sph}}(\alpha)\) be the infinite measure describing the law of the quantum surface \((\mathcal{C},\tilde{h}+\mathbf{c},-\infty,+\infty)/{\sim_{\gamma}}\)._
## 3 A radial mating-of-trees theorem
In this section, we prove our radial mating-of-trees result, namely Theorem 3.1. Throughout this section, let \(\gamma\in(0,\sqrt{2})\) and \(\kappa=\frac{16}{\gamma^{2}}>8\).
Sample \(\phi\sim\operatorname{LF}_{\mathbb{D}}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}\) conditioned on having quantum boundary length 1, let \(A=\mathcal{A}_{\phi}(\mathbb{D})\), and let \(\eta:[0,A]\rightarrow\overline{\mathbb{D}}\) be an independent radial \(\mathrm{SLE}_{\kappa}\) in \(\mathbb{D}\) from 1 to 0 parametrized by its \(\mathcal{A}_{\phi}\)-quantum area. There is a unique continuous process \((X_{t},Y_{t})_{[0,A]}\) starting at \((X_{0},Y_{0})=(0,0)\) which keeps track of the local changes in the left and right LQG boundary lengths of \(\mathbb{D}\backslash\eta([0,t])\) in the following sense. For any time \(s\in(0,A)\) and any point \(p\in\partial(\mathbb{D}\backslash\eta([0,s]))\) different from \(\eta(s)\), let \(\sigma>s\) be the next time \(\eta\) hits \(p\). For each \(t\in[s,\sigma)\), let \(X_{t}^{s}\) (resp. \(Y_{t}^{s}\)) be the quantum length of the clockwise (resp. counterclockwise) boundary arc of \(\mathbb{D}\backslash\eta([0,t])\) from \(\eta(t)\) to \(p\). Then \((X_{t}-X_{s},Y_{t}-Y_{s})_{[s,\sigma)}=(X_{t}^{s}-X_{s}^{s},Y_{t}^{s}-Y_{s}^{s})_{[s,\sigma)}\). See Figure 2 (left, middle). This process can be constructed on the time interval \([0,A)\) by shifting the point \(p\) countably many times, and its value at \(A\) is defined by taking a limit. Note that these LQG lengths exist and are finite by local absolute continuity with respect to the setting of Theorem 2.12.
**Theorem 3.1** (\(\kappa>8\) radial mating-of-trees).: _The process \((X_{t},Y_{t})_{0\leq t\leq A}\) has the law of 2-dimensional Brownian motion with covariance (2.9) stopped at the first time that \(1+X_{\cdot}+Y_{\cdot}=0\). Moreover, for \(0\leq s<t\), on the event that \(t<A\) and \(\eta([s,t])\) is simply connected, we have_
\[F((X_{\cdot+s}-X_{s},Y_{\cdot+s}-Y_{s})|_{[0,t-s]})=(\eta([s,t]),\phi,\eta(\cdot+s)|_{[0,t-s]})/{\sim_{\gamma}}\quad\text{almost surely}. \tag{3.1}\]
_Here, \(F\) is as in Lemma 2.14._
We note that when \(\eta([s,t])\) is not simply connected, the right-hand side of (3.1) is instead obtained from the left-hand side by conformally welding its boundary to itself. In Theorem 3.1 the curve-decorated quantum surface \((\mathbb{D},\phi,\eta,0,1)/{\sim_{\gamma}}\) can a.s. be recovered from \((X_{t},Y_{t})_{[0,A]}\) by conformally welding countably many simply connected quantum surfaces of the form \((\eta([s,t]),\phi,\eta(\cdot+s)|_{[0,t-s]})/{\sim_{\gamma}}\), each of which is measurable with respect to \((X_{t},Y_{t})_{0\leq t\leq A}\) by (3.1).
**Corollary 3.2**.: _For \(\phi\sim\mathrm{LF}_{\mathbb{D}}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}\) conditioned on having boundary length 1, the quantum area \(\mathcal{A}_{\phi}(\mathbb{D})\) has the law of the inverse gamma distribution with shape parameter \(\frac{1}{2}\) and scale parameter \(b=\frac{1}{8}\cot(\frac{\pi\gamma^{2}}{8})\), i.e., the law of \(\mathcal{A}_{\phi}(\mathbb{D})\) is_
\[1_{a>0}\sqrt{\frac{b}{\pi a^{3}}}e^{-\frac{b}{a}}\,da.\]
Proof.: The process \(X_{t}+Y_{t}\) is a Brownian motion with quadratic variation \((2\mathrm{a}\sin(\frac{\pi\gamma^{2}}{8}))^{2}\,dt=4\tan(\frac{\pi\gamma^{2}}{8})\,dt\), and \(\mathcal{A}_{\phi}(\mathbb{D})\) equals the first time it hits \(-1\). The claim then follows from the well-known law of Brownian motion first passage times.
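Spelled out, with \(\mathrm{a}^{2}=\frac{2}{\sin(\pi\gamma^{2}/4)}\) from (2.9) and \(\frac{4\pi}{\kappa}=\frac{\pi\gamma^{2}}{4}\),

\[\Bigl(2\mathrm{a}\sin\bigl(\tfrac{\pi\gamma^{2}}{8}\bigr)\Bigr)^{2}=\frac{8\sin^{2}(\pi\gamma^{2}/8)}{\sin(\pi\gamma^{2}/4)}=4\tan\bigl(\tfrac{\pi\gamma^{2}}{8}\bigr),\qquad b=\frac{1}{2\cdot 4\tan(\pi\gamma^{2}/8)}=\frac{1}{8}\cot\bigl(\tfrac{\pi\gamma^{2}}{8}\bigr),\]

since a Brownian motion with variance rate \(\sigma^{2}\) first hits \(-1\) at a time with density \((2\pi\sigma^{2}a^{3})^{-1/2}e^{-1/(2\sigma^{2}a)}\,da\), i.e., an inverse gamma law with shape \(\frac{1}{2}\) and scale \(\frac{1}{2\sigma^{2}}\).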
**Remark 3.3**.: _Corollary 3.2, together with the result [20, Theorem 1.7] and the computation of [4, Section 4.4], can be used to compute the correlation function of LCFT on the disk with a bulk insertion \(\alpha=Q-\frac{\gamma}{4}\) and a boundary insertion \(\beta=\frac{3\gamma}{2}\). This gives an alternative derivation of a special case of [4, Theorem 1.2], i.e., proves a special case of the physical proposal by [19]._
In Section 3.1 we define a _radial quantum zipper_ where, starting with a sample from \(\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(-\frac{\gamma}{2},1)}\), we grow the quantum surface by conformal welding with independent quantum cells, giving rise to a coupling of LCFT with _reverse_ radial SLE. In Section 3.2 we prove Proposition 3.7 in which we decorate \(\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}\) by _forward_ radial SLE and look at the quantum surfaces parametrized by the curve and its complement. Here, to switch between reverse and forward SLE, we use the fact that for any fixed time, the curve generated by centered reverse radial SLE has the law of forward radial SLE. In Section 3.3, since \(\Delta_{Q+\frac{\gamma}{4}}=\Delta_{Q-\frac{\gamma}{4}}\) (with \(\Delta_{\alpha}=\frac{\alpha}{2}(Q-\frac{\alpha}{2})\)), we can use Girsanov's theorem to obtain a variant of Proposition 3.7 about \(\mathrm{LF}_{\mathbb{D}}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}\) (Proposition 3.11), and hence Theorem 3.1.
### A radial quantum zipper
Let \(\kappa>8\) and \(\gamma=\frac{4}{\sqrt{\kappa}}\). In this section we will define and study a quantum zipper process \((\psi_{t},\eta_{t})_{t\geq 0}\) where the marginal law of \(\psi_{0}\) is \(\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(-\frac{\gamma}{2},1)}\) and the time-evolution corresponds to conformally welding quantum cells to the boundary of the quantum surface. This process can be viewed as arising from a mating of continuum random trees (as in [2, Section 1.4]), but we do not need to use this perspective.
Let \(\mathrm{CRT}_{\kappa}\) denote the law of (one-sided) correlated two-dimensional Brownian motion \((X_{t},Y_{t})_{t\geq 0}\) with \(X_{0}=Y_{0}=0\) and covariance given by (2.9). Sample \((\widetilde{\psi}_{0},(X_{t},Y_{t}))\sim\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(-\frac{\gamma}{2},1)}\times\mathrm{CRT}_{\kappa}\), let \(L_{t}=X_{t}+Y_{t}+\mathcal{L}_{\widetilde{\psi}_{0}}(\partial\mathbb{D})\), and let \(\widetilde{\tau}\) be the first time \(t\) that \(L_{t}=0\). For \(s\in(0,\widetilde{\tau})\) we define a random field and curve \((\widetilde{\psi}_{s},\widetilde{\eta}_{s})\) which correspond to "zipping up for quantum time \(s\)" as follows. See Figure 2 (right). Choose finitely many times \(0=s_{1}<\cdots<s_{k}=s\) such that for \(j<k\) we have \((X_{s_{j}}-\inf_{u\in[s_{j},s_{j+1}]}X_{u})+(Y_{s_{j}}-\inf_{u\in[s_{j},s_{j+1}]}Y_{u})<L_{s_{j}}\). For \(j<k\) let \(\mathcal{C}_{j}=F((X_{\cdot+s_{j}}-X_{s_{j}},Y_{\cdot+s_{j}}-Y_{s_{j}})_{[0,s_{j+1}-s_{j}]})\). We iteratively define quantum surfaces with the disk topology decorated by a bulk point, a boundary point, and a curve as follows. Let \(\mathcal{D}_{1}=(\mathbb{D},\widetilde{\psi}_{0},0,1)/{\sim_{\gamma}}\), and iteratively for \(j=1,\ldots,k-1\), we conformally weld \(\mathcal{C}_{j}\) to \(\mathcal{D}_{j}\) to obtain \(\mathcal{D}_{j+1}\). This is done by identifying the starting point of the curve of \(\mathcal{C}_{j}\) with the boundary point of \(\mathcal{D}_{j}\) and conformally welding the two boundary arcs of \(\mathcal{C}_{j}\) adjacent to this point to \(\mathcal{D}_{j}\) by quantum length. Doing this \(k-1\) times produces \(\mathcal{D}_{k}\), which we view as a quantum surface decorated by a bulk point (from \(\mathcal{D}_{1}\)), a curve (obtained by concatenating the \(k-1\) curves from \(\mathcal{C}_{1},\ldots,\mathcal{C}_{k-1}\)), and a boundary point (the endpoint of the curve on the boundary). Finally, we conformally embed \(\mathcal{D}_{k}\) in \(\mathbb{D}\), sending the bulk and boundary marked points to \(0\) and \(1\), to get \((\mathbb{D},\widetilde{\psi}_{s},\widetilde{\eta}_{s},0,1)\). This gives our definition of \(\widetilde{\psi}_{s},\widetilde{\eta}_{s}\) for all \(s<\widetilde{\tau}\); note that Lemma 2.15 implies this definition does not depend on the choice of \(s_{1},\ldots,s_{k}\).
For each \(s\), let \(t(s)\) be the log conformal radius of \(\mathbb{D}\backslash\widetilde{\eta}_{s}\) viewed from \(0\), i.e., \(t(s)=-\log|g^{\prime}(0)|\) where \(g:\mathbb{D}\to\mathbb{D}\backslash\eta_{s}\) is any conformal map fixing \(0\). This gives a monotone reparametrization of the process which we denote by \((\psi_{t},\eta_{t})_{t\geq 0}\). We parameterize each curve \(\eta_{t}:[0,t]\to\overline{\mathbb{D}}\) by log conformal radius, so \(\eta_{t}(0)=1\) and the conformal radius of \(\mathbb{D}\backslash\eta_{t}([0,t^{\prime}])\) viewed from \(0\) is \(e^{-t^{\prime}}\). Recall \(\bullet_{\gamma}\) from (2.1).
**Lemma 3.4**.: _For \(\kappa>8\) and \(\gamma=\frac{4}{\sqrt{\kappa}}\), let \(M\) be the law of the process \((\psi_{t},\eta_{t})_{t\geq 0}\) defined immediately above. Then_
1. _For any a.s. finite stopping time_ \(\tau\) _for the filtration_ \(\mathcal{F}_{t}\) _generated by_ \((\eta_{t})_{t\geq 0}\)_, the law of_ \((\psi_{\tau},\eta_{\tau})\) _is_ \(\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(-\frac{\gamma}{2},1)}\times\mathrm{rrSLE}_{\kappa}^{\tau}\)_, where_ \(\mathrm{rrSLE}_{\kappa}^{\tau}\) _denotes the law of centered reverse radial_ \(\mathrm{SLE}_{\kappa}\) _in_ \(\mathbb{D}\) _from_ \(1\) _to_ \(0\) _run until the stopping time_ \(\tau\)_._
2. _For_ \(0<t_{1}<t_{2}\)_, let_ \(\widetilde{f}_{t_{1},t_{2}}:\mathbb{D}\to\mathbb{D}\backslash\eta_{t_{2}}([0,t _{2}-t_{1}])\) _be the conformal map fixing 0 with_ \(\widetilde{f}_{t_{1},t_{2}}(1)=\eta_{t_{2}}(t_{2}-t_{1})\)_, then_ \(\psi_{t_{1}}=\widetilde{f}_{t_{1},t_{2}}^{-1}\bullet_{\gamma}\psi_{t_{2}}\)_._
Proof.: From the definition of \(M\), \((\mathbb{D},\psi_{t_{2}},0,1)/{\sim_{\gamma}}\) is obtained from conformally welding \((\mathbb{D},\psi_{t_{1}},0,1)/{\sim_{\gamma}}\) with another quantum surface, so \((\mathbb{D}\backslash\eta_{t_{2}}([0,t_{2}-t_{1}]),\psi_{t_{2}},0,\eta_{t_{2}}(t_{2}-t_{1}))/{\sim_{\gamma}}=(\mathbb{D},\psi_{t_{1}},0,1)/{\sim_{\gamma}}\). This gives \(\psi_{t_{1}}=\widetilde{f}_{t_{1},t_{2}}^{-1}\bullet_{\gamma}\psi_{t_{2}}\) so ii) holds.
For i), we first apply a change of coordinates from \((\mathbb{D},1,-1)\) to \((\mathbb{H},0,\infty)\) to change the radial process \((\psi_{t},\eta_{t})_{t\geq 0}\) into a chordal process \((\hat{\psi}_{s},\hat{\eta}_{s})_{s\geq 0}\) in \((\mathbb{H},0,\infty)\), apply a result of [1] for the chordal process in \(\mathbb{H}\), and finally convert back to the radial process in \(\mathbb{D}\).
For a sample \((\psi_{t},\eta_{t})_{t\geq 0}\sim M\), let \(\tau_{0}\) be the time \(t\) that \(\widetilde{f}_{0,t}(-1)=1\), or in other words the time the boundary point \(p_{0}=-1\) of \((\mathbb{D},\psi_{0})\) intersects the zipped-in region (colored region in Figure 2 (right)). Let \(g_{0}:\mathbb{D}\to\mathbb{H}\) be the conformal map such that \(g_{0}(0)=i\) and \(g_{0}(1)=0\). For \(t<\tau_{0}\) let \(p_{t}=\widetilde{f}_{0,t}(p_{0})\in\partial\mathbb{D}\backslash\{1\}\), and let \(g_{t}:\mathbb{D}\to\mathbb{H}\) be the conformal map such that \(g_{t}(1)=0,g_{t}(p_{t})=\infty\), and \((g_{t}\circ\widetilde{f}_{0,t}\circ g_{0}^{-1})(z)=z+O(1)\) as \(z\to\infty\). This gives us a process \((g_{t}\bullet_{\gamma}\psi_{t},g_{t}\circ\eta_{t})_{[0,\tau_{0})}\) of (field, curve) pairs in \(\mathbb{H}\); we reparametrize it to obtain a process \((\hat{\psi}_{s},\hat{\eta}_{s})_{[0,\infty)}\) such that the half-plane capacity of the trace of \(\hat{\eta}_{s}\) is \(s\), and \(\hat{\eta}_{s}:[0,s]\to\overline{\mathbb{H}}\) is parametrized by half-plane capacity. By Lemma 2.9, the law of \((\hat{\psi}_{0},(X_{t},Y_{t})_{t\geq 0})\) is \(\mathrm{LF}_{\mathbb{H}}^{(Q+\frac{\gamma}{4},i),(-\frac{\gamma}{2},0)}\times\mathrm{CRT}_{\kappa}\), and by our choice of \(g_{t}\) the conformal maps \(\widetilde{f}_{\mathbb{H},s}:\mathbb{H}\to\mathbb{H}\backslash\hat{\eta}_{s}([0,s])\) satisfying \(\widetilde{f}_{\mathbb{H},s}(0)=\hat{\eta}_{s}(s)\) and \(\widetilde{f}_{\mathbb{H},s}(z)=z+O(1)\) as \(z\to\infty\) also satisfy \(\hat{\psi}_{0}=\widetilde{f}_{\mathbb{H},s}^{-1}\bullet_{\gamma}\hat{\psi}_{s}\). Therefore [1, Theorem 1.7] gives that, for any stopping time \(\sigma\) for the filtration \(\mathcal{F}_{s}=\sigma(\hat{\eta}_{s})\), the law of \((\hat{\psi}_{\sigma},\hat{\eta}_{\sigma})\) is
\[\frac{1}{\mathcal{Z}(\widetilde{f}_{\mathbb{H},\sigma}(i))}\mathrm{LF}_{\mathbb{H}}^{(Q+\frac{\gamma}{4},\widetilde{f}_{\mathbb{H},\sigma}(i)),(-\frac{\gamma}{2},0)}\times\mathrm{rSLE}_{\kappa}^{\sigma}(\kappa+6), \tag{3.2}\]

where \(\mathrm{rSLE}_{\kappa}^{\sigma}(\kappa+6)\) denotes the law of centered reverse chordal \(\mathrm{SLE}_{\kappa}(\kappa+6)\) with force point at \(i\) run until \(\sigma\), and \(\mathcal{Z}\) is an explicit normalizing function. Undoing the change of coordinates via Lemmas 2.5, 2.9 and 2.10, the factor \(\mathcal{Z}\) cancels, and we conclude that for any stopping time \(\tau\leq\tau_{0}\) the law of \((\psi_{\tau},\eta_{\tau})\) is \(\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(-\frac{\gamma}{2},1)}\times\mathrm{rrSLE}_{\kappa}^{\tau}\).
As we will see, the above result can be iterated to get i) for all \(\tau\). By the previous paragraph, the law of \((\psi_{\tau_{0}},\eta_{\tau_{0}})\) is \(\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(-\frac{\gamma}{2},1)}\times\mathrm{rrSLE}_{\kappa}^{\tau_{0}}\), so by the Markov property of Brownian motion, the law of \(((\psi_{\tau_{0}+t},\eta_{\tau_{0}+t}|_{[0,t]})_{t\geq 0},\eta_{\tau_{0}})\) is \(M\times\mathrm{rrSLE}_{\kappa}^{\tau_{0}}\). Define \(\tau_{1}\) for \((\psi_{\tau_{0}+t},\eta_{\tau_{0}+t}|_{[0,t]})_{t\geq 0}\) in the same way that \(\tau_{0}\) was defined for \((\psi_{t},\eta_{t})_{t\geq 0}\), so \(\tau_{0},\tau_{1}\) are i.i.d. Conditioning on \(\eta_{\tau_{0}}\) and applying the result of the previous paragraph, we see that i) holds for any stopping time \(\tau\leq\tau_{0}+\tau_{1}\). Proceeding iteratively, we may define \(\tau_{k}\) for all \(k\), and i) holds for all \(\tau\leq\sum_{i\leq k}\tau_{i}\). Since the \(\tau_{k}\) are i.i.d. positive random variables we have \(\sum_{k}\tau_{k}\to\infty\) a.s., completing the proof of i).
The following lemma, which is the radial analog of [1, Proposition 5.7], essentially tells us that if we run the process \((\psi_{t},\eta_{t})_{t\geq 0}\) until a random amount of quantum area has been added, then on the event that the added region is simply connected, it parametrizes a quantum cell independent of \(\psi_{0}\).
**Lemma 3.5**.: _Let \(\kappa>8\) and \(\gamma=\frac{4}{\sqrt{\kappa}}\). Sample \(((\psi_{t},\eta_{t})_{t\geq 0},A)\) from \(M\times 1_{A>0}\,dA\). Restrict to the event that there is a time \(\tau>0\) such that \(\mathcal{A}_{\psi_{\tau}}(\eta_{\tau}([0,\tau]))=A\). Then the law of \((\psi_{\tau},\eta_{\tau},\tau)\) is \(C\cdot\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}\times\mathrm{raSLE}_{\kappa}^{t}1_{t>0}\,dt\) for some constant \(C>0\). Here \(\mathrm{raSLE}_{\kappa}^{t}\) denotes the law of radial \(\mathrm{SLE}_{\kappa}\) in \(\mathbb{D}\) from 1 to 0 parametrized by log-conformal radius stopped at time \(t\)._
Proof.: Here is a proof sketch. First, if we fix \(\delta>0\) and sample \(((\psi_{t},\eta_{t})_{t\geq 0},A,T)\sim\delta^{-1}1_{T\in[\tau,\tau+\delta]}M\times 1_{A>0}\,dA\times dT\) then the marginal law of \(((\psi_{t},\eta_{t})_{t\geq 0},A)\) is \(M\times 1_{A>0}\,dA\), so the marginal law of \((\psi_{\tau},\eta_{\tau},\tau)\) is the same as in Lemma 3.5. In this new setup, let \(z=\eta_{T}(T-\tau)\), so \(\{T\in[\tau,\tau+\delta]\}=\{z\in\eta_{T}([0,\delta\wedge T])\}\). By Lemma 3.4 the law of \((z,\psi_{T},\eta_{T},T)\) is \(1_{z\in\eta([0,\delta\wedge t])}\mathcal{A}_{\psi}(dz)\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(-\frac{\gamma}{2},1)}(d\psi)\times\mathrm{raSLE}_{\kappa}^{t}(d\eta)1_{t>0}\,dt\). Note we have obtained the term \(\mathrm{raSLE}_{\kappa}^{t}1_{t>0}\,dt\) using the symmetry between forward and reverse radial \(\mathrm{SLE}_{\kappa}\) at fixed time \(t\). Using Lemma 2.8, this law is \(\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(-\frac{\gamma}{2},1),(\gamma,z)}(d\psi)1_{z\in\eta([0,\delta\wedge t])}dz\,\mathrm{raSLE}_{\kappa}^{t}(d\eta)1_{t>0}\,dt\). As \(\delta\to 0\) we have \(T-\tau\to 0\) so \(z\to 1\), so in the limit the field has the singularity \(\gamma G(\cdot,1)-\frac{\gamma}{4}G(\cdot,1)=\frac{1}{2}(\frac{3\gamma}{2})G(\cdot,1)\) at \(1\). This explains the term \(\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}\). The main difficulty in this argument is in taking limits of infinite measures; this is done by truncating on finite events and taking limits of finite measures.
The argument outlined above is implemented in the proof of [1, Proposition 5.7], a chordal analog of our desired result; we refer the reader there for details. The only part of that proof that does not immediately carry over to our setting is a certain finiteness claim [1, Lemma 5.8], whose analog in our setting can be stated as follows. For \(\rho\) the uniform probability measure on \(\{z:|z|=\frac{1}{2}\}\) (the precise choice of \(\rho\) is unimportant), we have
\[(M\times 1_{a>0}da)[E_{N}]<\infty\text{ where }E_{N}:=\{\tau,|(\psi_{0},\rho)|,|( \psi_{\tau},\rho)|<N\}. \tag{3.3}\]
Given this, the proof of our Lemma 3.5 is identical to that of [1, Proposition 5.7]. Thus it suffices to prove (3.3).
First, we observe \((M\times 1_{a>0}da)[E_{N}]\leq(M\times 1_{a>0}da)[\widetilde{E}_{N}]=M[\mathcal{A}_{\psi_{N}}(\eta_{N}([0,N]))1_{|(\psi_{0},\rho)|<N}]\) where \(\widetilde{E}_{N}=\{\tau,|(\psi_{0},\rho)|<N\}\). Now, our choice of parametrization implies the conformal radius of \(\mathbb{D}\backslash\eta_{N}([0,N])\) viewed from \(0\) is \(e^{-N}\), so the Koebe quarter theorem implies that the ball \(B_{e^{-N}/4}(0)\) is contained in \(\mathbb{D}\backslash\eta_{N}([0,N])\). By Lemma 3.4 the \(M\)-law of \((\psi_{N},\eta_{N})\) is \(\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(-\frac{\gamma}{2},1)}\times\mathrm{raSLE}_{\kappa}^{N}\), so it suffices to show the finiteness of
\[(\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(-\frac{\gamma}{2},1)}\times\mathrm{raSLE}_{\kappa}^{N})[\mathcal{A}_{\psi}(\mathbb{D}\backslash B_{e^{-N}/4}(0))1_{|(f_{N}\bullet_{\gamma}\psi,\rho)|<N}], \tag{3.4}\]
where for the \(\mathrm{raSLE}_{\kappa}^{N}\) curve \(\eta\) the conformal map \(f_{N}:\mathbb{D}\backslash\eta([0,N])\to\mathbb{D}\) satisfies \(f_{N}(0)=0\) and \(f_{N}(\eta(N))=1\). Writing \(\mathbb{E}\) to denote expectation with respect to \((h,\eta)\sim P_{\mathbb{D}}\times\mathrm{raSLE}_{\kappa}^{N}\) and \(\widetilde{h}=h+(Q+\frac{\gamma}{4})G_{\mathbb{D}}(\cdot,0)-\frac{\gamma}{4}G_{\mathbb{D}}(\cdot,1)\), this equals
\[\mathbb{E}\big{[}\int_{\mathbb{R}}\mathcal{A}_{\widetilde{h}+c}(\mathbb{D}\backslash B_{e^{-N}/4}(0))1_{|(f_{N}\bullet_{\gamma}\widetilde{h},\rho)+c|<N}\,dc\big{]} =\mathbb{E}\big{[}\int_{-(f_{N}\bullet_{\gamma}\widetilde{h},\rho)-N}^{-(f_{N}\bullet_{\gamma}\widetilde{h},\rho)+N}e^{\gamma c}\mathcal{A}_{\widetilde{h}}(\mathbb{D}\backslash B_{e^{-N}/4}(0))\,dc\big{]}\] \[=\frac{1}{\gamma}\big{(}e^{\gamma N}-e^{-\gamma N}\big{)}\mathbb{E}\big{[}e^{-\gamma(f_{N}\bullet_{\gamma}\widetilde{h},\rho)}\mathcal{A}_{\widetilde{h}}(\mathbb{D}\backslash B_{e^{-N}/4}(0))\big{]}.\]
To see this is finite, first note that \(Z:=\mathbb{E}[e^{-\gamma(f_{N}\bullet_{\gamma}\widetilde{h},\rho)}]<\infty\) by standard conformal distortion estimates. Next, by Girsanov's theorem, the expression equals \(\frac{1}{\gamma}\big{(}e^{\gamma N}-e^{-\gamma N}\big{)}Z\mathbb{E}[\mathcal{A}_{\hat{h}}(\mathbb{D}\backslash B_{e^{-N}/4}(0))]\) where
\(\hat{h}=h+(Q+\frac{\gamma}{4})G_{\mathbb{D}}(\cdot,0)-\frac{\gamma}{4}G_{\mathbb{D}}(\cdot,1)-\gamma\int G_{\mathbb{D}}(\cdot,w)((f_{N}^{-1})_{*}\rho)(dw)\). To finish, we note that \(\hat{h}-h\) is bounded above by a constant on \(\mathbb{D}\backslash B_{e^{-N}/4}(0)\), and that \(\mathbb{E}[\mathcal{A}_{h}(\mathbb{D}\backslash B_{e^{-N}/4}(0))]<\infty\) by standard GMC moment results, see for instance [10, Proposition 3.5]. We conclude that (3.4), and hence (3.3), is finite.
Finally, between two "quantum typical" times for \((\psi_{t},\eta_{t})\sim M\), given the field and curve at the earlier time, on the event the zipped-in quantum surface is simply connected, it is a quantum cell with a boundary length restriction.
**Lemma 3.6**.: _Let \(\kappa>8\) and \(\gamma=\frac{4}{\sqrt{\kappa}}\), and fix \(a_{1},a_{2}>0\). Sample \((\psi_{t},\eta_{t})_{t\geq 0}\) from \(M\) and restrict to the event that there is a time \(\tau_{2}>0\) such that \(\mathcal{A}_{\psi_{\tau_{2}}}(\eta_{\tau_{2}}([0,\tau_{2}]))=a_{1}+a_{2}\). Let \(\tau_{1}\) be the time that \(\mathcal{A}_{\psi_{\tau_{1}}}(\eta_{\tau_{1}}([0,\tau_{1}]))=a_{1}\). Conditioned on \((\psi_{\tau_{1}},\eta_{\tau_{1}})\), the law of \((\eta_{\tau_{2}}([0,\tau_{2}-\tau_{1}]),\psi_{\tau_{2}},\eta_{\tau_{2}}|_{[0, \tau_{2}-\tau_{1}]})/{\sim_{\gamma}}\) restricted to the event \(\{\eta_{\tau_{2}}([0,\tau_{2}-\tau_{1}])\) is simply connected} is_
\[1_{X^{+}_{a_{2}}(\mathcal{C})+Y^{+}_{a_{2}}(\mathcal{C})<\mathcal{L}_{\psi_{ \tau_{1}}}(\partial\mathbb{D})}P_{a_{2}}(d\mathcal{C})\]
_where \(X^{+}_{a_{2}}\) and \(Y^{+}_{a_{2}}\) are as in Definition 2.13._
Proof.: Let \((X_{t},Y_{t})_{t\geq 0}\) be the process in the definition of \(M\); then the law of \(\widetilde{\mathcal{C}}:=F((X_{\cdot+a_{1}}-X_{a_{1}},Y_{\cdot+a_{1}}-Y_{a_{1}})_{[0,a_{2}]})\) is \(P_{a_{2}}\), and reversing the orientation of the curve of \(\widetilde{\mathcal{C}}\) gives \(\mathcal{C}:=(\eta_{\tau_{2}}([0,\tau_{2}-\tau_{1}]),\psi_{\tau_{2}},\eta_{\tau_{2}}|_{[0,\tau_{2}-\tau_{1}]})/{\sim_{\gamma}}\). By construction \(\{\eta_{\tau_{2}}([0,\tau_{2}-\tau_{1}])\text{ is simply connected}\}=\{X^{-}_{a_{2}}(\widetilde{\mathcal{C}})+Y^{-}_{a_{2}}(\widetilde{\mathcal{C}})<\mathcal{L}_{\psi_{\tau_{1}}}(\partial\mathbb{D})\}\), and since \(X^{-}_{a_{2}}(\widetilde{\mathcal{C}})=Y^{+}_{a_{2}}(\mathcal{C})\) and \(Y^{-}_{a_{2}}(\widetilde{\mathcal{C}})=X^{+}_{a_{2}}(\mathcal{C})\), this event equals \(\{X^{+}_{a_{2}}(\mathcal{C})+Y^{+}_{a_{2}}(\mathcal{C})<\mathcal{L}_{\psi_{\tau_{1}}}(\partial\mathbb{D})\}\) as needed.
### Cutting an infinite volume LCFT disk until a quantum typical time
The aim of this section is to prove Proposition 3.7 below. We write \(\mathrm{raSLE}^{t}_{\kappa}\) for the law of radial \(\mathrm{SLE}_{\kappa}\) in \(\mathbb{D}\) from \(1\) to \(0\) stopped at time \(t\), and \(\mathrm{raSLE}^{z}_{\kappa}\) for the law of radial \(\mathrm{SLE}_{\kappa}\) in \(\mathbb{D}\) from \(1\) to \(0\) stopped when it hits \(z\in\overline{\mathbb{D}}\backslash\{0\}\).
**Proposition 3.7**.: _Suppose \(\kappa>8\) and \(\gamma=\frac{4}{\sqrt{\kappa}}\). Sample \((\phi,\eta,A)\) from the measure_
\[\mathrm{LF}^{(Q+\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}_{\mathbb{D}}\times\mathrm{raSLE}_{\kappa}\times 1_{a>0}da \tag{3.5}\]
_and parametrize \(\eta\) by its \(\mathcal{A}_{\phi}\) quantum area. For \(a\geq 0\), let \(f_{a}:\mathbb{D}\backslash\eta([0,a])\to\mathbb{D}\) be the conformal map such that \(f_{a}(0)=0\) and \(f_{a}(\eta(a))=1\). Let \(\phi_{a}=f_{a}\bullet_{\gamma}\phi\), \(\widetilde{\eta}_{a}=f_{a}\circ\eta|_{[a,\infty)}\), and \(\mathcal{C}_{a}=(\eta([0,a]),\phi,\eta|_{[0,a]})/{\sim_{\gamma}}\). Then the law of \((\phi_{A},\widetilde{\eta}_{A},A)\) is given by3_
Footnote 3: There is a slight abuse of notation here: the curve \(\widetilde{\eta}_{A}\) should be viewed as parametrized by log-conformal radius rather than by quantum area for (3.6) to hold. We do this because this section is already notationally dense.
\[\mathrm{LF}^{(Q+\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}_{\mathbb{D}}\times\mathrm{raSLE}_{\kappa}\times 1_{a>0}da. \tag{3.6}\]
_Moreover, the law of \((\phi_{A},\widetilde{\eta}_{A},\mathcal{C}_{A},A)\) restricted to the event that \(\eta([0,A])\) is simply connected is given by_
\[1_{X^{+}_{a}(\mathcal{C}_{a})+Y^{+}_{a}(\mathcal{C}_{a})<\mathcal{L}_{\phi_{a}}(\partial\mathbb{D})}\mathrm{LF}^{(Q+\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}_{\mathbb{D}}(d\phi_{a})\times\mathrm{raSLE}_{\kappa}\times P_{a}(d\mathcal{C}_{a})\,1_{a>0}da. \tag{3.7}\]
_where \(X^{+}_{a},Y^{+}_{a}\) are as in Definition 2.13._
Proof of Proposition 3.7.: To streamline notation in this proof, we will often use the same notation for a random object as in the description of its law (in the indented equations), or similar notation (e.g. use \(d\psi_{t_{2}}\) in a description of the law of \(\psi_{\tau_{2}}\)). To begin with, sample \((\{(\psi_{t},\eta_{t})_{t\geq 0}\},A_{1},A_{2})\) from \(M\times 1_{A_{1},A_{2}>0}dA_{1}dA_{2}\), and let \(\tau_{1}\) (resp. \(\tau_{2}\)) be the time \(t\) when \(\mathcal{A}_{\psi_{t}}(\eta_{t}([0,t]))\) equals \(A_{1}\) (resp. \(A_{1}+A_{2}\)). We restrict to the event \(E\) that these times exist (\(\tau_{1}<\tau_{2}<\infty\)). Let \(z=\eta_{\tau_{2}}(\tau_{2}-\tau_{1})\), \(S=A_{1}+A_{2}\), and \(\eta^{12}=\eta_{\tau_{2}}|_{[0,\tau_{2}-\tau_{1}]}\). Then the law of \(((\psi_{t},\eta_{t})_{t\geq 0},A_{1},S)\) is \(1_{E}M\times 1_{A_{1}\in[0,S]}dA_{1}\,1_{S>0}dS\), so by Lemma 3.5, the law of \((A_{1},\psi_{\tau_{2}},\eta_{\tau_{2}},\tau_{2})\) is
\[C\cdot 1_{A_{1}\in[0,\mathcal{A}_{\psi_{t_{2}}}(\eta_{t_{2}}([0,t_{2}]))]}dA_{1}\,\mathrm{LF}^{(Q+\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}_{\mathbb{D}}(d\psi_{t_{2}})\,\mathrm{raSLE}^{t_{2}}_{\kappa}(d\eta_{t_{2}})\,1_{t_{2}>0}dt_{2}.\]
Since \(z\) is the point at which \(\eta_{\tau_{2}}\) has covered \(S-A_{1}\) units of quantum area, it follows that the law of \((z,\psi_{\tau_{2}},\eta_{\tau_{2}},\tau_{2})\) is
\[C\cdot 1_{z\in\eta_{t_{2}}([0,t_{2}])}\mathcal{A}_{\psi_{t_{2}}}(dz)\,\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}(d\psi_{t_{2}})\,\mathrm{raSLE}_{\kappa}^{t_{2}}(d\eta_{t_{2}})\,1_{t_{2}>0}dt_{2}.\]
Then by Lemma 3.8 below, the law of \((z,\psi_{\tau_{2}},\eta^{12},(\eta_{\tau_{1}},\tau_{1}))\) is
\[\mathcal{A}_{\psi_{t_{2}}}(dz)\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}(d\psi_{t_{2}})\,\mathrm{raSLE}_{\kappa}^{z}(d\eta^{1 2})\times[C\cdot\mathrm{raSLE}_{\kappa}^{t_{1}}(d\eta_{t_{1}})\,1_{t_{1}>0} dt_{1}] \tag{3.8}\]
where \(\mathrm{raSLE}_{\kappa}^{z}\) is as defined before Proposition 3.7.
Since \((\phi,\eta,A)\) is sampled from (3.5) and \(\eta\) is parametrized by quantum area, the law of \((\eta(A),\phi,\eta|_{[0,A]})\) is \(\mathcal{A}_{\phi}(du)\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(\frac {3\gamma}{2},1)}(d\phi)\,\mathrm{raSLE}_{\kappa}^{u}(d\eta)\). Then, by the domain Markov property of radial \(\mathrm{SLE}_{\kappa}\), if we instead sample \((\phi,\eta,A,t^{\prime})\) from
\[\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}(d\phi )\times\mathrm{raSLE}_{\kappa}(d\eta)\times 1_{a>0}\,da\times[C1_{t>0}\,dt] \tag{3.9}\]
(or "independently sample \(t^{\prime}\) from \([C1_{t>0}dt]\)") then the law of \((\eta(A),\phi,\eta|_{[0,A]},(\widetilde{\eta}_{A}|_{[0,t^{\prime}]},t^{\prime}))\) is
\[\mathcal{A}_{\phi}(du)\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(\frac {3\gamma}{2},1)}(d\phi)\,\mathrm{raSLE}_{\kappa}^{u}(d\eta)\times[C\cdot \mathrm{raSLE}_{\kappa}^{t}1_{t>0}\,dt].\]
This law agrees with (3.8) up to renaming random variables, so \((\eta(A),\phi,\eta|_{[0,A]},\widetilde{\eta}_{A}|_{[0,t^{\prime}]},t^{\prime})\overset{d}{=}(z,\psi_{\tau_{2}},\eta^{12},\eta_{\tau_{1}},\tau_{1})\). Since \(A_{2}=\mathcal{A}_{\psi_{\tau_{2}}}(\eta^{12})\) and \(\psi_{\tau_{1}}=\widetilde{f}_{\tau_{1},\tau_{2}}^{-1}\bullet_{\gamma}\psi_{\tau_{2}}\), where \(\widetilde{f}_{\tau_{1},\tau_{2}}:\mathbb{D}\to\mathbb{D}\backslash\eta^{12}\) is the conformal map fixing \(0\) and sending \(1\) to the tip of \(\eta^{12}\), it follows that \((\phi_{A},A,\widetilde{\eta}_{A}|_{[0,t^{\prime}]},t^{\prime})\overset{d}{=}(\psi_{\tau_{1}},A_{2},\eta_{\tau_{1}},\tau_{1})\).
On the other hand, by Lemma 3.5, the law of \((\psi_{\tau_{1}},A_{2},\eta_{\tau_{1}},\tau_{1})\) is
\[\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}(d\psi _{t_{1}})\,1_{A_{2}>0}dA_{2}\times\mathrm{raSLE}_{\kappa}^{t_{1}}(d\eta_{t_{ 1}})\,[C1_{t_{1}>0}dt_{1}]. \tag{3.10}\]
Note the term \([C1_{t_{1}>0}dt_{1}]\) above corresponds to \([C1_{t>0}dt]\) in (3.9), so by varying \(t^{\prime}\), for \((\phi,\eta,A)\) sampled from (3.5) the law of \((\phi_{A},\widetilde{\eta}_{A},A)\) is given by (3.6). This concludes the proof of the first claim.
For the second claim, we repeat the above except we restrict to the event \(F:=\{\eta^{12}\text{ is simply connected}\}\) throughout. Then the law of \((z,\psi_{\tau_{2}},\eta^{12},\eta_{\tau_{1}},\tau_{1})\) is \(1_{F}\) times (3.8), and by Lemmas 3.5 and 3.6, the law of \((\psi_{\tau_{1}},\eta_{\tau_{1}},(\eta_{\tau_{2}}([0,\tau_{2}-\tau_{1}]),\psi_{\tau_{2}},\eta^{12})/{\sim_{\gamma}},A_{2},\tau_{1})\) is
\[1_{X_{A_{2}}^{+}(\mathcal{C})+Y_{A_{2}}^{+}(\mathcal{C})<\mathcal{L}_{\psi_{t_{ 1}}}(\partial\mathbb{D})}\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),( \frac{3\gamma}{2},1)}(d\psi_{t_{1}})\,\mathrm{raSLE}_{\kappa}^{t_{1}}(d\eta_ {t_{1}})\,P_{A_{2}}(d\mathcal{C})\,1_{A_{2}>0}dA_{2}\,C1_{t_{1}>0}dt_{1},\]
cf. (3.10). The same argument as that of the first claim then gives the second claim.
In the above proof we needed the following lemma.
**Lemma 3.8**.: _Fix \(z\in\mathbb{D}\), and sample \((\eta,T)\) from \(1_{z\in\eta([0,t])}\mathrm{raSLE}_{\kappa}(d\eta)1_{t>0}\,dt.\) Let \(\tau_{z}\) be the time when \(\eta\) hits \(z\), \(T_{1}=T-\tau_{z}\) and \(\eta^{12}=\eta|_{[0,\tau_{z}]}\). Let \(f_{\tau_{z}}:\mathbb{D}\to\mathbb{D}\backslash\eta([0,\tau_{z}])\) be the centered Loewner map of \(\eta\) at time \(\tau_{z}\), and \(\eta^{1}=f_{\tau_{z}}^{-1}\circ\eta(\cdot+\tau_{z})|_{[0,T_{1}]}\). Then the law of \((\eta^{12},(\eta^{1},T_{1}))\) is \(\mathrm{raSLE}_{\kappa}^{z}\times[\mathrm{raSLE}_{\kappa}^{t_{1}}\,1_{t_{1}>0}dt_{1}]\), where \(\mathrm{raSLE}_{\kappa}^{z}\) is the law of radial \(\mathrm{SLE}_{\kappa}\) run until it hits \(z\)._
Proof.: By a change of variables, the law of \((\eta^{12},T_{1})\) is \(\mathrm{raSLE}_{\kappa}^{z}(d\eta^{12})\times 1_{t_{1}>0}dt_{1}\). By the domain Markov property of radial SLE, conditioned on \(\eta^{12}\) and \(T_{1}\), the law of \(\eta^{1}\) is \(\mathrm{raSLE}_{\kappa}^{T_{1}}\). This finishes the proof.
### Proof of Theorem 3.1
In this section we prove Theorem 3.1. We first use Proposition 3.7 about \(\mathrm{LF}_{\mathbb{D}}^{(Q+\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}\) to obtain an analogous result for \(\mathrm{LF}_{\mathbb{D}}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}\) (Proposition 3.11). The idea is to weight the field to change \(Q+\frac{\gamma}{4}\) into \(Q-\frac{\gamma}{4}\) via the Girsanov theorem; this is done in Lemmas 3.9 and 3.10.
Let \(B_{\varepsilon}(0):=\{z\in\mathbb{C}\,:\,|z|<\varepsilon\}\), and let \(\theta_{\varepsilon}\) denote the uniform probability measure on the circle \(\partial B_{\varepsilon}(0)\).
**Lemma 3.9**.: _Let \(\alpha_{1},\alpha_{2},\beta\in\mathbb{R}\) and \(\varepsilon\in(0,1)\). Let \(\widetilde{\mathrm{LF}}_{\mathbb{D},\varepsilon}^{(\alpha_{2},0),(\beta,1)}\) be the law of \(\psi|_{\mathbb{D}\setminus B_{\varepsilon}(0)}\), where \(\psi\sim\mathrm{LF}^{(\alpha_{2},0),(\beta,1)}_{\mathbb{D}}\). Sample \(\phi\) from the measure \(\mathrm{LF}^{(\alpha_{1},0),(\beta,1)}_{\mathbb{D}}\) and weight its law by \(\varepsilon^{\frac{1}{2}(\alpha_{2}^{2}-\alpha_{1}^{2})}e^{(\alpha_{2}-\alpha_{1})(\phi,\theta_{\varepsilon})}\). Then the law of \(\phi|_{\mathbb{D}\setminus B_{\varepsilon}(0)}\) is \(\widetilde{\mathrm{LF}}^{(\alpha_{2},0),(\beta,1)}_{\mathbb{D},\varepsilon}\)._
Proof.: Recall \(P_{\mathbb{D}}\) is the law of the free boundary GFF on \(\mathbb{D}\) normalized to have average \(0\) on \(\partial\mathbb{D}\). By Girsanov's theorem, for \(h\) sampled from \(P_{\mathbb{D}}\) weighted by \(\varepsilon^{\frac{1}{2}\alpha^{2}}e^{\alpha(h,\theta_{\varepsilon})}\), we have \(h|_{\mathbb{D}\setminus B_{\varepsilon}(0)}\stackrel{{d}}{{=}}(h^{\prime}-\alpha\log|\cdot|)|_{\mathbb{D}\setminus B_{\varepsilon}(0)}\) where \(h^{\prime}\sim P_{\mathbb{D}}\). In other words, this weighting introduces an \(\alpha\)-log singularity at \(0\). Using the above and keeping track of the terms that arise in the definition of the Liouville field, the lemma follows from a direct computation. See [1, Lemma 4.7] for details in the case where \(\alpha_{1}=\beta=\gamma\); the argument is identical in our setting.
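For the reader's convenience, here is the computation behind this shift; we assume the normalization \(G_{\mathbb{D}}(z,w)=-\log|z-w|-\log|1-z\bar{w}|\) for the covariance of \(P_{\mathbb{D}}\) (this normalization is our assumption; only its harmonicity properties are used). For \(|z|\geq\varepsilon\), the mean value property applied to the harmonic functions \(w\mapsto-\log|z-w|\) and \(w\mapsto-\log|1-z\bar{w}|\) on \(B_{\varepsilon}(0)\) gives

\[\operatorname{Cov}\big{(}h(z),(h,\theta_{\varepsilon})\big{)}=\int G_{\mathbb{D}}(z,w)\,\theta_{\varepsilon}(dw)=-\log|z|,\qquad\operatorname{Var}(h,\theta_{\varepsilon})=\iint G_{\mathbb{D}}\,d\theta_{\varepsilon}\,d\theta_{\varepsilon}=\log\varepsilon^{-1},\]

so \(\varepsilon^{\frac{1}{2}\alpha^{2}}e^{\alpha(h,\theta_{\varepsilon})}\) is a normalized exponential tilt, and by the Cameron-Martin/Girsanov theorem the weighting shifts the mean of \(h\) by \(\alpha\operatorname{Cov}(h(\cdot),(h,\theta_{\varepsilon}))=-\alpha\log|\cdot|\) on \(\mathbb{D}\setminus B_{\varepsilon}(0)\).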
**Lemma 3.10**.: _Let \(\alpha_{1},\alpha_{2},\beta\in\mathbb{R}\) and \(\varepsilon\in(0,1)\). Let \(z\in\mathbb{D}\backslash\{0\}\) and let \(K\subset\overline{\mathbb{D}}\) be a compact set such that \(\mathbb{D}\backslash K\) is simply connected, contains \(0\), and has \(z\) on its boundary. Let \(f:\mathbb{D}\backslash K\to\mathbb{D}\) be the conformal map such that \(f(0)=0\) and \(f(z)=1\). Let \(\widetilde{\mathrm{LF}}^{(\alpha_{2},0),(\gamma,z),(\beta,1)}_{\mathbb{D},K,\varepsilon}\) be the law of \(\psi|_{\mathbb{D}\setminus f^{-1}(B_{\varepsilon}(0))}\) where \(\psi\sim\mathrm{LF}^{(\alpha_{2},0),(\gamma,z),(\beta,1)}_{\mathbb{D}}\)._
* _Define the pushforward measure_ \(\hat{\theta}_{\varepsilon}=f_{*}^{-1}\theta_{\varepsilon}\)_. For_ \(\phi\sim\mathrm{LF}^{(\alpha_{1},0),(\gamma,z),(\beta,1)}_{\mathbb{D}}\) _with its law weighted by_ \((|(f^{-1})^{\prime}(0)|\varepsilon)^{\frac{1}{2}(\alpha_{2}^{2}-\alpha_{1}^{2} )}e^{(\alpha_{2}-\alpha_{1})(\phi,\hat{\theta}_{\varepsilon})}\)_, the law of_ \(\phi|_{\mathbb{D}\setminus f^{-1}(B_{\varepsilon}(0))}\) _is_ \(\widetilde{\mathrm{LF}}^{(\alpha_{2},0),(\gamma,z),(\beta,1)}_{\mathbb{D},K,\varepsilon}\)_._
* _Suppose \(\alpha_{1}+\alpha_{2}=2Q\). For \(\phi\sim\mathrm{LF}^{(\alpha_{1},0),(\gamma,z),(\beta,1)}_{\mathbb{D}}\) with its law weighted by \(\varepsilon^{\frac{1}{2}(\alpha_{2}^{2}-\alpha_{1}^{2})}e^{(\alpha_{2}-\alpha_{1})(f\bullet_{\gamma}\phi,\theta_{\varepsilon})}\), the law of \(\phi|_{\mathbb{D}\setminus f^{-1}(B_{\varepsilon}(0))}\) is \(\widetilde{\mathrm{LF}}^{(\alpha_{2},0),(\gamma,z),(\beta,1)}_{\mathbb{D},K,\varepsilon}\)._
Proof.: The first claim follows from the same argument as that of Lemma 3.9. Indeed, \(|(f^{-1})^{\prime}(0)|\varepsilon\) is the conformal radius of \(f^{-1}(B_{\varepsilon}(0))\) viewed from \(0\) and \(\hat{\theta}_{\varepsilon}\) is a probability measure on \(f^{-1}(\partial B_{\varepsilon}(0))\), and these play the role of \(\varepsilon\) and \(\theta_{\varepsilon}\) in Lemma 3.9. See [1, Lemma 4.8] for details. For the second claim, note that
\[(f\bullet_{\gamma}\phi,\theta_{\varepsilon})=(\phi\circ f^{-1}+Q\log|(f^{-1}) ^{\prime}|,\theta_{\varepsilon})=(\phi,f_{*}^{-1}\theta_{\varepsilon})+Q(\log|( f^{-1})^{\prime}|,\theta_{\varepsilon})=(\phi,\hat{\theta}_{ \varepsilon})+Q\log|(f^{-1})^{\prime}(0)|.\]
Since \(\alpha_{1}+\alpha_{2}=2Q\) implies \((\alpha_{2}-\alpha_{1})Q=\frac{1}{2}(\alpha_{2}^{2}-\alpha_{1}^{2})\), we conclude \((|(f^{-1})^{\prime}(0)|\varepsilon)^{\frac{1}{2}(\alpha_{2}^{2}-\alpha_{1}^{2})}e^{(\alpha_{2}-\alpha_{1})(\phi,\hat{\theta}_{\varepsilon})}=\varepsilon^{\frac{1}{2}(\alpha_{2}^{2}-\alpha_{1}^{2})}e^{(\alpha_{2}-\alpha_{1})(f\bullet_{\gamma}\phi,\theta_{\varepsilon})}\). This with the first claim gives the second claim.
**Proposition 3.11**.: _Let \((\phi,\eta,T)\) be a sample from \(1_{0<t<\mathcal{A}_{\phi}(\mathbb{D})}\mathrm{LF}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}_{\mathbb{D}}(d\phi)\times\mathrm{raSLE}_{\kappa}(d\eta)\times dt\) and parametrize \(\eta\) by its \(\mathcal{A}_{\phi}\) quantum area. For \(t>0\), let \(f_{t}:\mathbb{D}\backslash\eta([0,t])\to\mathbb{D}\) be the conformal map fixing 0 such that \(f_{t}(\eta(t))=1\). Let \(\phi_{t}=f_{t}\bullet_{\gamma}\phi\), \(\eta_{t}(s)=f_{t}(\eta(s+t))\) for \(0\leq s\leq\mathcal{A}_{\phi}(\mathbb{D})-t\), and \(\mathcal{C}_{t}=(\eta([0,t]),\phi,\eta|_{[0,t]})/{\sim_{\gamma}}\). Restricted to the event that \(\eta([0,T])\) is simply connected, the law of \((\phi_{T},\eta_{T},\mathcal{C}_{T},T)\) is_
\[1_{X_{t}^{+}(\mathcal{C})+Y_{t}^{+}(\mathcal{C})<\mathcal{L}_{\phi_{t}}(\partial\mathbb{D})}\mathrm{LF}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}_{\mathbb{D}}(d\phi_{t})\times\mathrm{raSLE}_{\kappa}(d\eta)\times P_{t}(d\mathcal{C})\,1_{t>0}dt, \tag{3.11}\]
_where \(X_{t}^{+}(\mathcal{C}),Y_{t}^{+}(\mathcal{C})\) are as in Definition 2.13._
Proof.: Sample \((\widetilde{\phi},z,\widetilde{\eta})\) from
\[\mathrm{LF}^{(Q+\frac{\gamma}{4},0),(\gamma,z),(\frac{3\gamma}{2},1)}_{\mathbb{D}}(d\widetilde{\phi})\,\mathrm{raSLE}_{\kappa}(d\widetilde{\eta})\,1_{z\in\mathbb{D}}dz\]
and parametrize \(\widetilde{\eta}\) by \(\mathcal{A}_{\widetilde{\phi}}\)-quantum area. Let \(A\) be the time such that \(\widetilde{\eta}(A)=z\), and let \(\widetilde{\eta}^{z}=\widetilde{\eta}|_{[0,A]}\); let \(f:\mathbb{D}\backslash\widetilde{\eta}^{z}\to\mathbb{D}\) be the conformal map such that \(f(0)=0\) and \(f(z)=1\), let \(\widetilde{\phi}_{A}=f\bullet_{\gamma}\widetilde{\phi}\), let \(\widetilde{\eta}_{A}=f\circ\widetilde{\eta}(\cdot+A)\) and let \(\widetilde{\mathcal{C}}_{A}=(\widetilde{\eta}^{z}([0,A]),\widetilde{\phi},\widetilde{\eta}^{z})/{\sim_{\gamma}}\). By Lemma 2.8 the law of \((\widetilde{\phi},\widetilde{\eta},A)\) is \(\mathrm{LF}^{(Q+\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}_{\mathbb{D}}(d\widetilde{\phi})\times\mathrm{raSLE}_{\kappa}(d\widetilde{\eta})\times 1_{a>0}da\) so Proposition 3.7 implies the law of \((\widetilde{\phi}_{A},\widetilde{\eta}_{A},\widetilde{\mathcal{C}}_{A},A)\) restricted to the event \(\{\widetilde{\eta}^{z}\text{ is simply connected}\}\) is
\[1_{\widetilde{X}_{a}^{+}(\widetilde{\mathcal{C}})+\widetilde{Y}_{a}^{+}(\widetilde{\mathcal{C}})<\mathcal{L}_{\widetilde{\phi}_{a}}(\partial\mathbb{D})}\mathrm{LF}^{(Q+\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}_{\mathbb{D}}(d\widetilde{\phi}_{a})\times\mathrm{raSLE}_{\kappa}\times P_{a}(d\widetilde{\mathcal{C}})\,1_{a>0}da.\]
Now for \(\varepsilon>0\), let \(\theta_{\varepsilon}\) be the uniform probability measure on \(\partial B_{\varepsilon}(0)\). Let \(\alpha_{1}=Q+\frac{\gamma}{4}\) and \(\alpha_{2}=Q-\frac{\gamma}{4}\). Weight the law of \((\widetilde{\phi},z,\widetilde{\eta})\) by \(\varepsilon^{\frac{1}{2}(\alpha_{2}^{2}-\alpha_{1}^{2})}e^{(\alpha_{2}-\alpha_{1})(f\bullet_{\gamma}\widetilde{\phi},\theta_{\varepsilon})}\), where \(f\) is as above. By Lemma 3.10 the law of \((\widetilde{\phi}|_{\mathbb{D}\setminus f^{-1}(B_{\varepsilon}(0))},z,\widetilde{\eta})\) under this weighting is
\[\widetilde{\mathrm{LF}}^{(Q-\frac{\gamma}{4},0),(\gamma,z),(\frac{3\gamma}{2},1)}_{\mathbb{D},\widetilde{\eta}^{z},\varepsilon}(d\widetilde{\phi})\,\mathrm{raSLE}_{\kappa}(d\widetilde{\eta})\,1_{z\in\mathbb{D}}dz.\]
On the other hand, by Lemma 3.9, the weighted law of \((\widetilde{\phi}_{A}|_{\mathbb{D}\setminus B_{\varepsilon}(0)},\widetilde{\eta}_{A},\widetilde{\mathcal{C}}_{A},A)\) restricted to the event \(\{\widetilde{\eta}^{z}\text{ is simply connected}\}\) is
\[1_{\widetilde{X}_{a}^{+}(\widetilde{\mathcal{C}})+\widetilde{Y}_{a}^{+}( \widetilde{\mathcal{C}})<\mathcal{L}_{\widetilde{\phi}_{a}}(\partial\mathbb{D })}\widetilde{\mathrm{LF}}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}_{ \mathbb{D},\varepsilon}(d\widetilde{\phi}_{a})\times\mathrm{raSLE}_{\kappa} \times P_{a}(d\widetilde{\mathcal{C}})1_{a>0}da. \tag{3.12}\]
To rephrase, if \((\widetilde{\phi},z,\widetilde{\eta})\) is sampled from
\[\mathrm{LF}^{(Q-\frac{\gamma}{4},0),(\gamma,z),(\frac{3\gamma}{2},1)}_{\mathbb{D}}(d\widetilde{\phi})\,\mathrm{raSLE}_{\kappa}(d\widetilde{\eta})\,1_{z\in\mathbb{D}}dz \tag{3.13}\]
with \(\widetilde{\eta}\) parametrized by quantum area, and \(A\) is the time when \(\widetilde{\eta}\) hits \(z\), then on the event where \(\widetilde{\eta}^{z}\) is simply connected, the law of \((\widetilde{\phi}_{A}|_{\mathbb{D}\setminus B_{\varepsilon}(0)},\widetilde{\eta}_{A},\widetilde{\mathcal{C}}_{A},A)\) is given by (3.12). Sending \(\varepsilon\to 0\), the same statement holds for \(\varepsilon=0\) when (3.12) is replaced by (3.11). On the other hand, by Lemma 2.8, the law of \((\phi,\eta,\eta(T))\) is given by (3.13) (up to renaming of variables). We conclude the proof by observing that the triples \((\phi,\eta,\eta(T))\) and \((\phi,\eta,T)\) uniquely determine each other.
Recall the disintegration by quantum boundary length \(\{\mathrm{LF}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}_{\mathbb{D},\ell}\}_{\ell>0}\) from Lemma 2.7.
**Corollary 3.12**.: _Fix \(t,\ell_{0}>0\). Let \((\phi,\eta)\) be a sample from \(1_{\mathcal{A}_{\phi}(\mathbb{D})>t}\mathrm{LF}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}_{\mathbb{D},\ell_{0}}(d\phi)\times\mathrm{raSLE}_{\kappa}(d\eta)\) and parametrize \(\eta\) by its \(\mathcal{A}_{\phi}\) quantum area. Let \(f_{t},\phi_{t},\eta_{t}\) and \(\mathcal{C}_{t}\) be determined by \((\phi,\eta)\) in the same way as in Proposition 3.11. Then on the event that \(\eta([0,t])\) is simply connected, the law of \((\phi_{t},\mathcal{C}_{t},\eta_{t})\) is_
\[1_{X_{t}^{-}(\mathcal{C})+Y_{t}^{-}(\mathcal{C})<\ell_{0}}\mathrm{LF}^{(Q- \frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}_{\mathbb{D},\ell_{0}+X_{t}( \mathcal{C})+Y_{t}(\mathcal{C})}(d\phi_{t})\,P_{t}(d\mathcal{C})\times\mathrm{ raSLE}_{\kappa}(d\eta), \tag{3.14}\]
_where \(X_{t}(\mathcal{C}),Y_{t}(\mathcal{C}),X_{t}^{-}(\mathcal{C}),Y_{t}^{-}( \mathcal{C})\) are as in Definition 2.13._
Proof.: If we do not fix the boundary length of \(\phi\), i.e., we instead assume that \((\phi,\eta)\) is sampled from \(1_{\mathcal{A}_{\phi}(\mathbb{D})>t}\mathrm{LF}^{(Q-\frac{\gamma}{4},0),(\frac {3\gamma}{2},1)}_{\mathbb{D}}(d\phi)\times\mathrm{raSLE}_{\kappa}(d\eta)\), then it follows from Proposition 3.11 by disintegrating on the value of \(T\) that the law of \((\phi_{t},\mathcal{C}_{t},\eta_{t})\) is
\[1_{X_{t}^{+}(\mathcal{C})+Y_{t}^{+}(\mathcal{C})<\mathcal{L}_{\phi_{t}}( \partial\mathbb{D})}\mathrm{LF}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1) }_{\mathbb{D}}(d\phi_{t})\,P_{t}(d\mathcal{C})\times\mathrm{raSLE}_{\kappa}(d \eta). \tag{3.15}\]
Now we disintegrate over \(\mathcal{L}_{\phi}(\partial\mathbb{D})\), and the claim follows from \(\mathcal{L}_{\phi_{t}}(\partial\mathbb{D})=\mathcal{L}_{\phi}(\partial \mathbb{D})+X_{t}(\mathcal{C})+Y_{t}(\mathcal{C})\) and \(\{X_{t}^{+}(\mathcal{C})+Y_{t}^{+}(\mathcal{C})<\mathcal{L}_{\phi_{t}}( \partial\mathbb{D})\}=\{X_{t}^{-}(\mathcal{C})+Y_{t}^{-}(\mathcal{C})<\mathcal{ L}_{\phi}(\partial\mathbb{D})\}\).
Proof of Theorem 3.1.: Recall that \(\mathrm{CRT}_{\kappa}\) is the law of correlated two-dimensional Brownian motion \((\widetilde{X}_{t},\widetilde{Y}_{t})_{t\geq 0}\) with \(\widetilde{X}_{0}=\widetilde{Y}_{0}=0\) and covariance given by (2.9). Sample \((\widetilde{X}_{t},\widetilde{Y}_{t})_{t\geq 0}\) from \(\mathrm{CRT}_{\kappa}\) and let \(\widetilde{\tau}\) be the first time \(t\) that \(1+\widetilde{X}_{t}+\widetilde{Y}_{t}=0\). Our first goal is to show that \((X_{t},Y_{t})_{[0,A]}\stackrel{{ d}}{{=}}(\widetilde{X}_{t}, \widetilde{Y}_{t})_{[0,\widetilde{\tau}]}\). To that end, we will show that \((X_{s},Y_{s})_{[0,\tau_{1}]}\stackrel{{ d}}{{=}}(\widetilde{X}_{s}, \widetilde{Y}_{s})_{[0,\widetilde{\tau}_{1}]}\) for suitable stopping times \(\tau_{1},\widetilde{\tau}_{1}\) corresponding to "wrapping around", then iterate to conclude. Afterwards, we establish (3.1) to complete the proof.
Recall that \(Z:=|\mathrm{LF}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}_{\mathbb{D},\ell}|\) does not depend on \(\ell\) (Lemma 2.7). Suppose \(\phi\) is a sample from \(Z^{-1}\mathrm{LF}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}_{\mathbb{D},1}\), and \(\eta\) is an independent radial \(\mathrm{SLE}_{\kappa}\) process from \(1\) to \(0\) parametrized by its \(\mathcal{A}_{\phi}\) quantum area. Fix \(t>0\) and let \(f_{t},\phi_{t},\eta_{t}\) and \(\mathcal{C}_{t}\) be determined by \((\phi,\eta)\) in the same way as in Proposition 3.11. By Corollary 3.12, when restricted to the event \(F_{t}\) that \(\mathcal{A}_{\phi}(\mathbb{D})>t\) and \(\eta([0,t])\) is simply connected, the joint law of \((\phi_{t},\mathcal{C}_{t},\eta_{t})\) is
\[1_{X_{t}^{-}(\mathcal{C})+Y_{t}^{-}(\mathcal{C})<1}Z^{-1}\mathrm{LF}^{(Q- \frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}_{\mathbb{D},1+X_{t}(\mathcal{C})+Y_ {t}(\mathcal{C})}(d\phi_{t})\,P_{t}(d\mathcal{C}_{t})\times\mathrm{raSLE}_{ \kappa}(d\eta_{t}),\]
so the joint law of \((\phi_{t},(X,Y)_{[0,t]},\eta_{t})\) is
\[Z^{-1}\mathrm{LF}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}_{\mathbb{D},1+X_{t}+Y_{t}}(d\phi_{t})\times 1_{\widetilde{F}_{t}}\mathrm{CRT}_{\kappa}^{t}(d(X,Y))\times\mathrm{raSLE}_{\kappa}(d\eta_{t}),\]
where \(\mathrm{CRT}_{\kappa}^{t}\) is the law of a sample from \(\mathrm{CRT}_{\kappa}\) restricted to the time interval \([0,t]\), and \(\widetilde{F}_{t}=\{\inf_{[0,t]}X+\inf_{[0,t]}Y>-1\}\). Since \(|Z^{-1}\mathrm{LF}_{\mathbb{D},1+X_{t}+Y_{t}}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}|=|\mathrm{raSLE}_{\kappa}|=1\) regardless of the value of \(1+X_{t}+Y_{t}\), the marginal law of \((X,Y)_{[0,t]}\) restricted to \(F_{t}\) is \(1_{\widetilde{F}_{t}}\mathrm{CRT}_{\kappa}^{t}\). Since \(t\) is arbitrary, we conclude that \((X,Y)_{[0,\tau_{1}]}\stackrel{{d}}{{=}}(\widetilde{X},\widetilde{Y})_{[0,\widetilde{\tau}_{1}]}\) where \(\tau_{1}=\inf\{s:\inf_{[0,s]}X+\inf_{[0,s]}Y\leq-1\}\) and \(\widetilde{\tau}_{1}=\inf\{s:\inf_{[0,s]}\widetilde{X}+\inf_{[0,s]}\widetilde{Y}\leq-1\}\).
Next, let \(\tau_{2}\) (resp. \(\widetilde{\tau}_{2}\)) be the first time \(t>\tau_{1}\) (resp. \(t>\widetilde{\tau}_{1}\)) that \(\inf_{\tau_{1}<s<t}X_{s}+\inf_{\tau_{1}<s<t}Y_{s}=-1\) (resp. \(\inf_{\widetilde{\tau}_{1}<s<t}\widetilde{X}_{s}+\inf_{\widetilde{\tau}_{1}<s<t}\widetilde{Y}_{s}=-1\)). We will show that \((X,Y)_{[0,\tau_{2}]}\stackrel{{d}}{{=}}(\widetilde{X},\widetilde{Y})_{[0,\widetilde{\tau}_{2}]}\). Fix \(t_{1}>0\) and condition on \(\{t_{1}<\tau_{1}\}\). Then the conditional law of \((\phi_{t_{1}},\eta_{t_{1}})\) given \(\mathcal{C}_{t_{1}}\) is \(Z^{-1}\mathrm{LF}_{\mathbb{D},1+X_{t_{1}}+Y_{t_{1}}}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}(d\phi)\times\mathrm{raSLE}_{\kappa}\), and the boundary length process of \((\phi_{t_{1}},\eta_{t_{1}})\) is specified by \((X_{t}-X_{t_{1}},Y_{t}-Y_{t_{1}})_{t_{1}\leq t\leq A}\). Therefore following the same reasoning, if we let \(\sigma_{2}\) (resp. \(\widetilde{\sigma}_{2}\)) be the first time \(t\) such that \(\inf_{t_{1}<s<t}(X_{s}-X_{t_{1}})+\inf_{t_{1}<s<t}(Y_{s}-Y_{t_{1}})=-1-X_{t_{1}}-Y_{t_{1}}\) (resp. \(\inf_{t_{1}<s<t}(\widetilde{X}_{s}-\widetilde{X}_{t_{1}})+\inf_{t_{1}<s<t}(\widetilde{Y}_{s}-\widetilde{Y}_{t_{1}})=-1-\widetilde{X}_{t_{1}}-\widetilde{Y}_{t_{1}}\)), then \((X_{t}-X_{t_{1}},Y_{t}-Y_{t_{1}})_{t_{1}\leq t\leq\sigma_{2}}\) is independent of \((X_{s},Y_{s})_{0\leq s\leq t_{1}}\) and agrees in law with \((\widetilde{X}_{t}-\widetilde{X}_{t_{1}},\widetilde{Y}_{t}-\widetilde{Y}_{t_{1}})_{t_{1}\leq t\leq\widetilde{\sigma}_{2}}\) conditioned on \(\{t_{1}<\widetilde{\tau}_{1}\}\). This implies that conditioned on \(\{t_{1}<\tau_{1}\}\), the law of \((X_{s},Y_{s})_{0\leq s\leq\sigma_{2}}\) agrees with that of \((\widetilde{X}_{s},\widetilde{Y}_{s})_{0\leq s\leq\widetilde{\sigma}_{2}}\) conditioned on \(\{t_{1}<\widetilde{\tau}_{1}\}\). Since \(t_{1}\) is arbitrary, we conclude \((X,Y)_{[0,\tau_{2}]}\stackrel{{d}}{{=}}(\widetilde{X},\widetilde{Y})_{[0,\widetilde{\tau}_{2}]}\).
Arguing similarly, if we iteratively define \(\tau_{n}\) (resp. \(\widetilde{\tau}_{n}\)) to be the first time \(t>\tau_{n-1}\) (resp. \(t>\widetilde{\tau}_{n-1}\)) such that \(\inf_{\tau_{n-1}<s<t}X_{s}+\inf_{\tau_{n-1}<s<t}Y_{s}=-1\) (resp. \(\inf_{\widetilde{\tau}_{n-1}<s<t}\widetilde{X}_{s}+\inf_{\widetilde{\tau}_{n-1}<s<t}\widetilde{Y}_{s}=-1\)), then \((X,Y)_{[0,\tau_{n}]}\stackrel{{d}}{{=}}(\widetilde{X},\widetilde{Y})_{[0,\widetilde{\tau}_{n}]}\) for all \(n\). Since \(\lim_{n\to\infty}\tau_{n}=A\) and \(\lim_{n\to\infty}\widetilde{\tau}_{n}=\widetilde{\tau}\) where \(\widetilde{\tau}=\inf\{t>0:1+\widetilde{X}_{t}+\widetilde{Y}_{t}=0\}\), it follows that \((X_{t},Y_{t})_{0\leq t\leq A}\stackrel{{d}}{{=}}(\widetilde{X}_{t},\widetilde{Y}_{t})_{0\leq t\leq\widetilde{\tau}}\). This proves the first claim.
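Though not needed for the proof, the wrapping-time structure above is easy to explore numerically. The sketch below is an illustration only: it assumes (2.9) specifies \(\operatorname{Var}=\mathrm{a}^{2}\,dt\) and \(\operatorname{Cov}=-\mathrm{a}^{2}\cos(\frac{\pi\gamma^{2}}{4})\,dt\) for \((\widetilde{X},\widetilde{Y})\) (the usual mating-of-trees normalization), and all parameter names are ours. It discretizes \(\mathrm{CRT}_{\kappa}\) and approximates the first wrapping time \(\widetilde{\tau}_{1}\).

```python
import numpy as np

# Illustration only: approximate tau_1 = inf{ s : inf_{[0,s]} X + inf_{[0,s]} Y <= -1 }
# for correlated 2D Brownian motion. We assume (2.9) gives Var = a^2 dt and
# Cov = -a^2 cos(pi gamma^2 / 4) dt; the values of a, dt, t_max are our choices.

def wrap_time(gamma, a=1.0, dt=1e-4, t_max=20.0, seed=0):
    rng = np.random.default_rng(seed)
    c = -np.cos(np.pi * gamma**2 / 4)              # correlation of increments
    chol = np.linalg.cholesky(a**2 * dt * np.array([[1.0, c], [c, 1.0]]))
    steps = rng.standard_normal((int(t_max / dt), 2)) @ chol.T
    X, Y = np.cumsum(steps[:, 0]), np.cumsum(steps[:, 1])
    run_inf = np.minimum.accumulate(X) + np.minimum.accumulate(Y)
    hit = int(np.argmax(run_inf <= -1.0))          # first index where it holds
    return hit * dt if run_inf[hit] <= -1.0 else np.inf

print(wrap_time(gamma=1.0))  # gamma = 1 corresponds to kappa = 16 > 8
```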
Finally, we prove (3.1), which is immediate from Corollary 3.12 when \(s=0\). We first claim that for each fixed \(s>0\), conditioned on the event \(s<\mathcal{A}_{\phi}(\mathbb{D})\) and \((X,Y)_{[0,s]}\), the law of \((\phi_{s},\eta_{s})\) is \(Z^{-1}\mathrm{LF}_{\mathbb{D},1+X_{s}+Y_{s}}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}\times\mathrm{raSLE}_{\kappa}\). To see this, fix \(n>0\). For \(1\leq k\leq 2^{n}\), let \(E_{n,k,s}\) be the event where \(\frac{ks}{2^{n}}<\mathcal{A}_{\phi}(\mathbb{D})\) and for each \(1\leq j\leq k\), \(\eta([\frac{(j-1)s}{2^{n}},\frac{js}{2^{n}}])\) is simply connected. Then conditioned on \(E_{n,1,s}\) and \((X,Y)_{[0,\frac{s}{2^{n}}]}\), by Corollary 3.12 the law of \((\phi_{\frac{s}{2^{n}}},\eta_{\frac{s}{2^{n}}})\) is \(Z^{-1}\mathrm{LF}_{\mathbb{D},1+X_{\frac{s}{2^{n}}}+Y_{\frac{s}{2^{n}}}}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}\times\mathrm{raSLE}_{\kappa}\). Applying Corollary 3.12 once more to \((\phi_{\frac{s}{2^{n}}},\eta_{\frac{s}{2^{n}}})\), we see that conditioned on \(E_{n,2,s}\) and \((X,Y)_{[0,\frac{2s}{2^{n}}]}\), the law of \((\phi_{\frac{2s}{2^{n}}},\eta_{\frac{2s}{2^{n}}})\) is \(Z^{-1}\mathrm{LF}_{\mathbb{D},1+X_{\frac{2s}{2^{n}}}+Y_{\frac{2s}{2^{n}}}}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}\times\mathrm{raSLE}_{\kappa}\). By iterating this argument \(2^{n}\) times, conditioned on \(E_{n,2^{n},s}\), the law of \((\phi_{s},\eta_{s})\) is \(Z^{-1}\mathrm{LF}_{\mathbb{D},1+X_{s}+Y_{s}}^{(Q-\frac{\gamma}{4},0),(\frac{3\gamma}{2},1)}\times\mathrm{raSLE}_{\kappa}\). On the other hand, using the continuity of the curve \(\eta\), conditioned on \(\mathcal{A}_{\phi}(\mathbb{D})>s\) the event \(E_{n,2^{n},s}\) holds with probability \(1-o_{n}(1)\) as \(n\to\infty\), which gives the claim. Now we can apply Corollary 3.12 to \((\phi_{s},\eta_{s})\) and conclude that conditioned on the event that \(t<\mathcal{A}_{\phi}(\mathbb{D})\) and \(\eta_{s}([0,t-s])\) is simply connected, the law of \((\eta_{s}([0,t-s]),\phi_{s},\eta_{s}|_{[0,t-s]})/{\sim_{\gamma}}\) is absolutely continuous with respect to \(P_{t-s}\). Therefore \(F((X_{\cdot+s}-X_{s},Y_{\cdot+s}-Y_{s})_{[0,t-s]})=(\eta_{s}([0,t-s]),\phi_{s},\eta_{s}|_{[0,t-s]})/{\sim_{\gamma}}\) a.s., which proves (3.1).
**Theorem 4.1** (Spherical mating-of-trees).: _Let \((L_{t},Z_{t})=(X_{t}+Y_{t},X_{t}-Y_{t})\). Then \(L_{t}\) has the law of a Brownian excursion with quadratic variation \((2\mathrm{a}\sin(\frac{\pi\gamma^{2}}{8}))^{2}\,dt\) conditioned to have duration at least 1, and given the process \((L_{t})\) with random duration \(\tau\), the conditional law of \((Z_{t})_{[0,\tau]}\) is that of an independent Brownian motion with quadratic variation \((2\mathrm{a}\cos(\frac{\pi\gamma^{2}}{8}))^{2}\,dt\) run for time \(\tau\). Here \(\mathrm{a}\) is as in (2.9). Moreover, for any \(0<s<t\), on the event that \(t<\tau\) and \(\eta([s,t])\) is simply connected, we have_
\[F((X_{\cdot+s}-X_{s},Y_{\cdot+s}-Y_{s})_{[0,t-s]})=(\eta([s,t]),\phi,\eta(\cdot +s)|_{[0,t-s]})/{\sim_{\gamma}} \tag{4.1}\]
_where \(F\) is the map from Lemma 2.14._
We note that \(L_{t}\) is the quantum length of \(\partial(\eta([0,t]))\) for all \(t\).
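To see where these two quadratic variations come from, suppose (2.9) specifies \(d\langle X\rangle_{t}=d\langle Y\rangle_{t}=\mathrm{a}^{2}\,dt\) and \(d\langle X,Y\rangle_{t}=-\mathrm{a}^{2}\cos(\frac{\pi\gamma^{2}}{4})\,dt\), as in the usual mating-of-trees normalization (we have not restated (2.9), so this is an assumption on its form). Then the half-angle identities give

\[d\langle L\rangle_{t}=2\mathrm{a}^{2}\big{(}1-\cos\tfrac{\pi\gamma^{2}}{4}\big{)}\,dt=\big{(}2\mathrm{a}\sin\tfrac{\pi\gamma^{2}}{8}\big{)}^{2}\,dt,\qquad d\langle Z\rangle_{t}=2\mathrm{a}^{2}\big{(}1+\cos\tfrac{\pi\gamma^{2}}{4}\big{)}\,dt=\big{(}2\mathrm{a}\cos\tfrac{\pi\gamma^{2}}{8}\big{)}^{2}\,dt,\]

and \(d\langle L,Z\rangle_{t}=d\langle X\rangle_{t}-d\langle Y\rangle_{t}=0\), consistent with the independence of \(L\) and \(Z\) in Theorem 4.1.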
To prove Theorem 4.1, we start with the radial mating of trees Theorem 3.1, and condition on having quantum area at least 1 but having small boundary length \(\ell\ll 1\) (event \(F_{\ell}\) from (4.3)). Lemma 4.2 below implies that when \(\ell\to 0\) the limiting boundary length process is that of Theorem 4.1. On the other hand, when \(\ell\to 0\) the curve-decorated quantum surface converges to the conditioned quantum sphere decorated by independent whole-plane SLE (Proposition 4.4). Combining these two facts gives Theorem 4.1.
**Lemma 4.2**.: _Let \(\ell>0\) and let \(L_{t}^{\ell}\) be Brownian motion starting at \(\ell\) and having quadratic variation \((2\mathrm{a}\sin(\frac{\pi\gamma^{2}}{8}))^{2}\,dt\), run until the time \(\tau\) that it first hits 0. Given \((L_{t}^{\ell})_{[0,\tau]}\) let \(Z_{t}^{\ell}\) be an independent Brownian motion with quadratic variation \((2\mathrm{a}\cos(\frac{\pi\gamma^{2}}{8}))^{2}\,dt\) run for time \(\tau\). As \(\ell\to 0\), the process \((L_{t}^{\ell},Z_{t}^{\ell})\) conditioned on \(\tau\geq 1\) converges in distribution to the Brownian process described in Theorem 4.1._
Proof.: This is immediate from the limiting construction of the Brownian excursion.
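The limiting construction can also be illustrated numerically. The following sketch is an illustration only (the names `ell`, `a`, `dt` are ours): it implements the rejection procedure implicit in Lemma 4.2, running a discretized \(L^{\ell}\) from \(\ell\) until it hits \(0\) and accepting only when \(\tau\geq 1\). For small \(\ell\) the accepted paths approximate the conditioned excursion of Theorem 4.1, though acceptances become rare.

```python
import numpy as np

# Illustration only: naive rejection sampler for (L^ell, Z^ell) of Lemma 4.2.
def sample_LZ(ell, gamma, a=1.0, dt=1e-3, seed=None):
    rng = np.random.default_rng(seed)
    sig_L = 2 * a * np.sin(np.pi * gamma**2 / 8)   # volatility of L
    sig_Z = 2 * a * np.cos(np.pi * gamma**2 / 8)   # volatility of Z
    while True:                                    # reject until tau >= 1
        L = [ell]
        while L[-1] > 0:                           # run L until it hits 0
            L.append(L[-1] + sig_L * np.sqrt(dt) * rng.standard_normal())
        tau = (len(L) - 1) * dt
        if tau >= 1.0:
            dZ = sig_Z * np.sqrt(dt) * rng.standard_normal(len(L) - 1)
            return np.array(L), np.concatenate([[0.0], np.cumsum(dZ)]), tau

L, Z, tau = sample_LZ(ell=0.2, gamma=1.2)  # kappa = 16/gamma^2 > 8
print(tau)  # duration of the accepted path
```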
Given Theorem 4.1, the proof of Theorem 1.1 goes as follows. Let \(\mathfrak{S}\) be the SLE-decorated quantum surface in Theorem 4.1. Since its boundary length process agrees in law with its time-reversal, we have \(\mathfrak{S}\stackrel{{ d}}{{=}}\widetilde{\mathfrak{S}}\) where \(\widetilde{\mathfrak{S}}\) is obtained from \(\mathfrak{S}\) by switching its two points and reversing its curve. This implies that the law of the curve is reversible, as desired.
In Section 4.1 we show that a certain quantum sphere can be obtained from a disk by taking a limit (Proposition 4.4). In Section 4.2 we use this to obtain Theorem 4.1 and then Theorem 1.1.
### Pinching an LCFT disk to get an LCFT sphere
The goal of this section is to prove Proposition 4.4 which states that a Liouville field on the disk conditioned to have area at least 1 and boundary length \(\ell\) converges as \(\ell\to 0\) to a sample from \(\mathcal{M}_{2}^{\mathrm{sph}}(Q-\frac{\gamma}{4})\) conditioned to have area at least 1. Although the statement of Proposition 4.4 does not involve SLE or mating-of-trees, our arguments will use these to establish that the field remains "well behaved" near the boundary despite the conditioning on low probability events.
Instead of working in the domains \(\mathbb{C}\) and \(\mathbb{D}\), we will parametrize by the horizontal cylinder \(\mathcal{C}:=(\mathbb{R}\times[0,2\pi])/{\sim}\) and half-cylinder \(\mathcal{C}_{+}:=([0,\infty)\times[0,2\pi])/{\sim}\) where the upper and lower boundaries are identified by \(x\sim x+2\pi i\). This simplifies our exposition later.
We first define the Liouville field on \(\mathcal{C}_{+}\). Let \(f:\mathcal{C}_{+}\to\mathbb{D}\) be the map such that \(f(z)=e^{-z}\); note that \(f(0)=1\) and \(f(z)\to 0\) as \(\operatorname{Re}z\to+\infty\), so the insertions at \(+\infty\) and \(0\) on \(\mathcal{C}_{+}\) below correspond to the insertions at \(0\) and \(1\) on \(\mathbb{D}\).
Figure 3: The boundary length process \((X_{t},Y_{t})_{[0,A]}\) of Theorem 4.1 is characterized by \(X_{0}=Y_{0}=0\) and the property that for each time \(s\) and choice of boundary point \(p\in\partial\eta([0,s])\) not equal to \(\eta(s)\), for any time \(t>s\) before the time \(\eta\) next hits \(p\), we have \((X_{t}^{s}-X_{s}^{s},Y_{t}^{s}-Y_{s}^{s})=(X_{t}-X_{s},Y_{t}-Y_{s})\), where \((X_{\cdot}^{s},Y_{\cdot}^{s})\) is shown in red and blue. Here \(\eta([0,s])\) is shown in dark gray, and \(\eta([s,t])\) is colored light grey.
**Definition 4.3**.: _For \(\alpha,\beta\in\mathbb{R}\) and \(\ell>0\), define \(\mathrm{LF}^{(\alpha,+\infty),(\beta,0)}_{\mathcal{C}_{+},\ell}\)\(:=f^{-1}\bullet_{\gamma}\mathrm{LF}^{(\alpha,0),(\beta,1)}_{\mathbb{D},\ell}\)._
\(\mathrm{LF}^{(\alpha,+\infty),(\beta,0)}_{\mathcal{C}_{+},\ell}\) inherits the following Markov property from \(\mathrm{LF}^{(\alpha,0),(\beta,1)}_{\mathbb{D},\ell}\). For \(\phi\sim\mathrm{LF}^{(\alpha,+\infty),(\beta,0)}_{\mathcal{C}_{+},\ell}\), conditioned on \(\phi|_{\partial\mathcal{C}_{+}}\) we have
\[\phi\stackrel{{d}}{{=}}\mathfrak{h}+h_{0}-(Q-\alpha)\operatorname{Re}(\cdot). \tag{4.2}\]
where \(\mathfrak{h}\) is the harmonic function on \(\mathcal{C}_{+}\) with boundary conditions \(\phi|_{\partial\mathcal{C}_{+}}\), and \(h_{0}\) is a Dirichlet GFF on \(\mathcal{C}_{+}\).
We define a probability measure \(\mathcal{L}\) on fields on \(\mathcal{C}\) as follows. Consider \((\hat{h},\mathbf{c})\) sampled as in Definition 2.16 with \(\alpha=Q-\frac{\gamma}{4}\) and conditioned on the event that \(\mathcal{A}_{\hat{h}+\mathbf{c}}(\mathcal{C})>1\). Let \(\sigma\in\mathbb{R}\) satisfy \(\mathcal{A}_{\hat{h}+\mathbf{c}}([\sigma,+\infty)\times[0,2\pi])=\frac{1}{2}\), let \(\phi^{\prime}=\hat{h}(\cdot+\sigma)+\mathbf{c}\), and let \(\mathcal{L}\) be the law of \(\phi^{\prime}\). Thus, \(\phi^{\prime}\sim\mathcal{L}\) corresponds to a sample from \(\mathcal{M}_{2}^{\mathrm{sph}}(\alpha)\) conditioned to have quantum area greater than \(1\), embedded such that \(\mathcal{A}_{\phi^{\prime}}(\mathcal{C}_{+})=\frac{1}{2}\).
The main result of this section is that for small \(\ell\), a field sampled from \(\mathrm{LF}^{(\alpha,+\infty),(\beta,0)}_{\mathcal{C}_{+},\ell}\) conditioned on \(F_{\ell}\) resembles a quantum sphere conditioned to have quantum area at least \(1\).
**Proposition 4.4**.: _Let \((\alpha,\beta)=(Q-\frac{\gamma}{4},\frac{3\gamma}{2})\) and \(\ell>0\). Sample \(\phi\) from \(\mathrm{LF}^{(\alpha,+\infty),(\beta,0)}_{\mathcal{C}_{+},\ell}\) conditioned on_
\[F_{\ell}:=\{\mathcal{A}_{\phi}(\mathcal{C}_{+})>1\}. \tag{4.3}\]
_Let \(\sigma>0\) satisfy \(\mathcal{A}_{\phi}(\mathcal{C}_{+}+\sigma)=\frac{1}{2}\) and let \(\widetilde{\phi}=\phi(\cdot+\sigma)\). For any \(U\subset\mathcal{C}\) bounded away from \(-\infty\), as \(\ell\to 0\) the field \(\widetilde{\phi}|_{U}\) converges in distribution to \(\phi^{\prime}|_{U}\) where \(\phi^{\prime}\sim\mathcal{L}\)._
We first state a version of Proposition 4.4 where we additionally condition on the field near \(\partial\mathcal{C}_{+}\) not behaving too wildly, in the sense that it has "scale \(\ell\)" observables near \(\partial\mathcal{C}_{+}\).
**Lemma 4.5**.: _Let \((\alpha,\beta)=(Q-\frac{\gamma}{4},\frac{3\gamma}{2})\). Fix a nonnegative smooth function \(\rho\) in \(\mathcal{C}\) supported on \([1,2]\times[0,2\pi]\), such that \(\rho\) is constant on each vertical segment4\(\{t\}\times[0,2\pi]\) and \(\int\rho=1\). Let \(K,\ell>0\). Sample a field \(\phi\) from \(\mathrm{LF}^{(\alpha,+\infty),(\beta,0)}_{\mathcal{C}_{+},\ell}\) conditioned on_
Footnote 4: This is convenient for the proof of Lemma 4.5 since \((\phi,\rho)\) only depends on the projection of \(\phi\) to \(H_{\mathrm{av}}(\mathcal{C})\).
\[E_{\ell,K}:=F_{\ell}\cap\{\mathcal{A}_{\phi-\frac{2}{\gamma}\log\ell}([0,1] \times[0,2\pi])<K\text{ and }|(\phi,\rho)-\frac{2}{\gamma}\log\ell|<K\}. \tag{4.4}\]
_Let \(\sigma>0\) satisfy \(\mathcal{A}_{\phi}(\mathcal{C}_{+}+\sigma)=\frac{1}{2}\) and let \(\widetilde{\phi}=\phi(\cdot+\sigma)\). For any \(U\subset\mathcal{C}\) bounded away from \(-\infty\), as \(\ell\to 0\) the field \(\widetilde{\phi}|_{U}\) converges in distribution to \(\phi^{\prime}|_{U}\) where \(\phi^{\prime}\sim\mathcal{L}\)._
The statement of Lemma 4.5 is parallel to that of [11, Proposition 4.1], except that we condition on an event measurable with respect to \(\phi|_{[0,2]\times[0,2\pi]}\) (the second set on the RHS of (4.4)), while they more strongly assert the asymptotic independence of \(\phi|_{[0,2]\times[0,2\pi]}\) and \(\widetilde{\phi}|_{U}\) (or rather, the corresponding fields in their setting). Using the Markov property (4.2) of \(\mathrm{LF}^{(\alpha,+\infty),(\beta,0)}_{\mathcal{C}_{+},\ell}\), the proof of Lemma 4.5 is identical to the proof of [11, Proposition 4.1], so we omit it.
Next, we show that conditioned on \(F_{\ell}\), with high probability \(E_{\ell,K}\) occurs. To that end, we will control the field near \(\partial\mathcal{C}_{+}\) when we condition on \(F_{\ell}\) by using the following lemma. Any planar domain \(A\) with the annulus topology is conformally equivalent to \(\{z:1<|z|<e^{2\pi M}\}\) for some unique \(M>0\); this \(M\) is called the _modulus_ of \(A\), and we denote it by \(\mathrm{Mod}(A)\).
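Two standard facts about moduli will be used below (our statements; standard conformal geometry facts). First, with this normalization the round annulus satisfies

\[\operatorname{Mod}\big{(}\{z:r<|z|<R\}\big{)}=\frac{1}{2\pi}\log\frac{R}{r}.\]

Second, if \(A_{1},\ldots,A_{k}\subset A\) are disjoint annuli, each separating the two boundary components of \(A\), then Grötzsch's inequality gives the superadditivity

\[\operatorname{Mod}(A)\geq\sum_{i=1}^{k}\operatorname{Mod}(A_{i}),\]

which is exactly what is used at the end of the proof of Lemma 4.6 below.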
**Lemma 4.6**.: _Let \((\alpha,\beta)=(Q-\frac{\gamma}{4},\frac{3\gamma}{2})\) and \(n\geq 1\). Consider the setting of Theorem 3.1, except we embed in \((\mathcal{C}_{+},+\infty,0)\) rather than \((\mathbb{D},0,1)\), so \(\phi\) is sampled from \(\mathrm{LF}^{(\alpha,+\infty),(\beta,0)}_{\mathcal{C}_{+},1}\) and \(\eta\) is an independent radial \(\mathrm{SLE}\) in \((\mathcal{C}_{+},+\infty,0)\). Let \((L_{t},Z_{t})=(1+X_{t}+Y_{t},X_{t}-Y_{t})\) and let \(\tau_{x}=\inf\{t:L_{t}=x\}\). Conditioned on \(\{\tau_{2^{n}}<\tau_{0}\}\), the explored region \(A=\eta([0,\tau_{2^{n}}])\) is annular with probability \(1-o_{n}(1)\), and its modulus tends to \(\infty\) in probability as \(n\to\infty\)._
Proof.: First consider \(n=1\). Condition on \(\tau_{2}<\tau_{0}\) and let \(A_{1}=\eta([0,\tau_{2}])\). Since Brownian motion stays arbitrarily close to any deterministic path with positive probability, \(A_{1}\) is annular with positive probability. Thus there exists \(m_{0}>0\) such that the event \(E_{1}=\{A_{1}\text{ annular and }\mathrm{Mod}(A_{1})>m_{0}\}\) has conditional probability \(p>0\) given \(\tau_{2}<\tau_{0}\).
Now consider general \(n\geq 1\). Condition on \(\tau_{2^{n}}<\tau_{0}\), and for \(1\leq i\leq n\) define \(A_{i}=\eta([\tau_{2^{i-1}},\tau_{2^{i}}])\) and \(E_{i}=\{A_{i}\text{ annular and }\text{Mod}(A_{i})>m_{0}\}\). By the scale invariance and strong Markov property of Brownian motion, the events \(E_{1},\ldots,E_{n}\) are conditionally independent and each occurs with probability \(p\). Let \(I\) be the random set of \(i\) such that \(E_{i}\) holds; then \(|I|\to\infty\) in probability as \(n\to\infty\), so in particular \(\mathbb{P}[A\text{ annular}]\geq\mathbb{P}[|I|\geq 1]\to 1\) as \(n\to\infty\). Finally, by the superadditivity of moduli, on the event \(\{A\text{ annular}\}\) we have
\[\text{Mod}(A)\geq\sum_{i\in I}\text{Mod}(A_{i})\geq m_{0}|I|.\]
Since \(|I|\to\infty\) in probability, \(\text{Mod}(A)\to\infty\) in probability as desired.
Lemma 4.6 states that on the rare event that the boundary length hits \(2^{n}\), with high probability the explored region \(A\) at this hitting time is an annulus with large modulus. Next, we give a uniform bound on the field for all embeddings of \((A,\phi,0)/{\sim_{\gamma}}\) in \(\mathcal{C}_{+}\) having \(\partial\mathcal{C}_{+}\) as a boundary component.
**Lemma 4.7**.: _There is an absolute constant \(m>0\) such that the following holds. Fix \(n\geq 1\) and let \(\rho\) be as defined in Lemma 4.5. In the setting of Lemma 4.6, condition on \(\{\tau_{2^{n}}<\tau_{0}\}\) and on \(\text{Mod}(A)>m\). Let \(\widetilde{A}\subset\mathcal{C}_{+}\) be any bounded annulus having \(\partial\mathcal{C}_{+}\) as a boundary component such that \(\text{Mod}(\widetilde{A})=\text{Mod}(A)\). Let \(\widetilde{\phi}\) be the field on \(\widetilde{A}\) such that \((\widetilde{A},\widetilde{\phi},0)/{\sim}_{\gamma}=(A,\phi,0)/{\sim}_{\gamma}\), then_
\[\sup|(\widetilde{\phi},\rho)|<\infty\quad\text{ almost surely}, \tag{4.5}\]
_where the supremum is taken over all choices of \(\widetilde{A}\)._
Proof.: We first fix a canonical embedding \((\widetilde{A}_{0},\widetilde{\phi}_{0},0)\) by specifying that \(\widetilde{A}_{0}\) is concentric, i.e., \(\widetilde{A}_{0}=[0,t]\times[0,2\pi]\) where \(t=2\pi\text{Mod}(A)>2\pi m\). For \(b>0\), let \(H_{b}\) be the set of nonnegative smooth functions \(f\) in \(\mathcal{C}\) supported in \([\frac{1}{2},3]\times[0,2\pi]\) with \(\int f(x)\,dx=1\) and \(\|\nabla f\|_{\infty}\leq b\). Since \(\widetilde{\phi}_{0}\) is locally absolutely continuous with respect to a GFF, as explained in the paragraph just after [14, Proposition 9.19] we have \(\sup_{f\in H_{b}}|(\widetilde{\phi}_{0},f)|<\infty\) almost surely.
For any other embedding \((\widetilde{A},\widetilde{\phi},0)\) let \(g:\widetilde{A}_{0}\to\widetilde{A}\) be the conformal map such that \(\widetilde{\phi}=g\bullet_{\gamma}\widetilde{\phi}_{0}\), then
\[(\widetilde{\phi},\rho)=(\widetilde{\phi}_{0}\circ g^{-1}+Q\log|(g^{-1})^{ \prime}|,\rho)=(\widetilde{\phi}_{0},|g^{\prime}|^{2}\rho\circ g)+Q(\log|(g^{- 1})^{\prime}|,\rho).\]
Assuming the absolute constant \(m\) is chosen sufficiently large, conformal distortion estimates (e.g. [13, Theorem 5]) give \(\sup_{[\frac{1}{2},3]\times[0,2\pi]}|g^{\prime}-1|<\frac{1}{10}\). Thus \(|(\widetilde{\phi}_{0},|g^{\prime}|^{2}\rho\circ g)|\leq\sup_{f\in H_{b}}|(\widetilde{\phi}_{0},f)|\) for some \(b\) depending only on \(\rho\), and \(|Q(\log|(g^{-1})^{\prime}|,\rho)|\leq 10Q\), giving the desired uniform bound for \(|(\widetilde{\phi},\rho)|\).
Now, we will prove that conditioned on \(F_{\ell}\), the event \(E_{\ell,K}\) is likely. Briefly, conditioning on \(F_{\ell}\), Theorem 3.1 gives a description of the quantum surface near \(\partial\mathcal{C}_{+}\) which we use to bound the field average near \(\partial\mathcal{C}_{+}\) via Lemma 4.7.
**Proposition 4.8**.: _Let \((\alpha,\beta)=(Q-\frac{\gamma}{4},\frac{3\gamma}{2})\). For each \(\delta>0\) there exists \(K_{0}>0\) such that for all \(K>K_{0}\)_
\[\liminf_{\ell\to 0}\text{\rm{LF}}^{(\alpha,+\infty),(\beta,0)}_{\mathcal{C}_{+}, \ell}[E_{\ell,K}\mid F_{\ell}]>1-\delta.\]
Proof.: Fix \(n=n(\delta)\geq 1\) sufficiently large such that in the setting of Lemma 4.6 we have \(\mathbb{P}[\text{Mod}(A)\geq m]\geq 1-\frac{\delta}{4}\), where \(m\) is the absolute constant in Lemma 4.7.
Sample \(\phi\sim\mathrm{LF}^{(\alpha,+\infty),(\beta,0)}_{\mathcal{C}_{+},\ell}\), and independently sample a radial \(\mathrm{SLE}_{\kappa}\) curve \(\eta\) in \((\mathcal{C}_{+},0)\) targeting \(+\infty\) and parametrized by \(\mathcal{A}_{\phi}\). The law of \(\phi^{0}:=\phi-\frac{2}{\gamma}\log\ell\) is \(\mathrm{LF}^{(\alpha,+\infty),(\beta,0)}_{\mathcal{C}_{+},1}\). Let \((X_{t},Y_{t})\) be the boundary length process for \((\phi^{0},\eta)\) as in Lemma 4.6, let \((L_{t},Z_{t})=(1+X_{t}+Y_{t},X_{t}-Y_{t})\), and let \(\tau_{x}\) be the time \(L_{t}\) first hits \(x\). By Lemma 4.2 we have \(\mathrm{LF}^{(\alpha,+\infty),(\beta,0)}_{\mathcal{C}_{+},\ell}[\tau_{2^{n}}<\tau_{0}\mid F_{\ell}]=1-o_{\ell}(1)\), and furthermore conditioning on \(\{\tau_{2^{n}}<\tau_{0}\}\cap F_{\ell}\) the conditional law of \((L_{t},Z_{t})_{[0,\tau_{2^{n}}]}\) is within \(o_{\ell}(1)\) in total variation distance of the corresponding process of Lemma 4.6. We conclude that conditioned on \(F_{\ell}\), the conditional law of the quantum surface \(\mathcal{A}:=(\eta([0,\ell^{2}\tau_{2^{n}}]),\phi-\frac{2}{\gamma}\log\ell,0)\) is within \(o_{\ell}(1)\) in total variation distance of the quantum surface of Lemma 4.6, and hence within \(\frac{\delta}{2}+o_{\ell}(1)\) in total variation distance of the quantum surface of Lemma 4.7. Choose \(K_{0}\) sufficiently large that in Lemma 4.7 the finite constant in (4.5) is
bounded by \(K_{0}-\log 2\) with probability at least \(1-\frac{\delta}{4}\), and the quantum area of the annular quantum surface is bounded by \(K_{0}\) with probability \(1-\frac{\delta}{4}\). Then for \(\phi\sim\mathrm{LF}^{(\alpha,+\infty),(\beta,0)}_{\mathcal{C}_{+},\ell}\) conditioned on \(F_{\ell}\), with probability at least \(1-\delta-o_{\ell}(1)\) we have \(|(\phi-\frac{2}{\gamma}\log\ell,\rho)|<K_{0}-\log 2\) and \(\mathcal{A}_{\phi-\frac{2}{\gamma}\log\ell}([0,1]\times[0,2\pi])<K_{0}\). We are done.
Proof of Proposition 4.4.: The result is immediate from Lemma 4.5 and Proposition 4.8.
### Proofs of Theorems 4.1 and 1.1
Proof of Theorem 4.1.: Let \((L^{\infty}_{t},Z^{\infty}_{t})_{[0,\tau^{\infty}]}\) have the law of the Brownian process described in Theorem 4.1.
**Step 1: Constructing a pair \((\widetilde{\phi}^{\infty},\widetilde{\eta}^{\infty})\) with boundary length process \((L^{\infty}_{t},Z^{\infty}_{t})\).** For \(x>0\) let \(\tau^{\infty}_{x}\) be the first time \(L^{\infty}_{t}\) hits \(x\) (or, if no such time exists, \(\tau^{\infty}_{x}=\infty\)). For each \(\ell\) of the form \(2^{-n}\) such that \(\tau^{\infty}_{\ell}\neq\infty\), by Theorem 3.1 a.s. there is a corresponding SLE-decorated quantum surface \(\mathcal{D}^{\infty}_{\ell}\) associated to the process \((L^{\infty}_{\tau^{\infty}_{\ell}+t},Z^{\infty}_{\tau^{\infty}_{\ell}+t})_{t\in[0,\tau^{\infty}-\tau^{\infty}_{\ell}]}\), and the \(\mathcal{D}^{\infty}_{\ell}\) are consistent in the sense that for \(\ell^{\prime}<\ell\) the decorated quantum surface \(\mathcal{D}^{\infty}_{\ell}\) arises as a sub-surface of \(\mathcal{D}^{\infty}_{\ell^{\prime}}\). Thus by the Kolmogorov extension theorem there is a curve-decorated quantum surface \((\mathcal{C},\widetilde{\phi}^{\infty},\widetilde{\eta}^{\infty},-\infty,+\infty)\) such that for all \(\ell=2^{-n}\) with \(\tau^{\infty}_{\ell}\neq\infty\), we have \(\mathcal{D}^{\infty}_{\ell}=(\widetilde{\eta}^{\infty}([\tau^{\infty}_{\ell},\tau^{\infty}]),\widetilde{\phi}^{\infty},\widetilde{\eta}^{\infty}(\cdot+\tau^{\infty}_{\ell})|_{[0,\tau^{\infty}-\tau^{\infty}_{\ell}]},\widetilde{\eta}^{\infty}(\tau^{\infty}_{\ell}),+\infty)\).
Let \(\phi^{\prime}\sim\mathcal{L}\) as in Proposition 4.4, so \((\mathcal{C},\phi^{\prime},-\infty,+\infty)/\)\(\sim_{\gamma}\) has the law of \(\mathcal{M}^{\mathrm{sph}}_{2}(\alpha)\) conditioned to have quantum area greater than \(1\). Independently let \(\eta^{\prime}\) be whole-plane \(\mathrm{SLE}_{\kappa}\) from \(-\infty\) to \(+\infty\) in \(\mathcal{C}\).
**Step 2: \((\phi^{\prime},\eta^{\prime})\) is the \(\ell\to 0\) limit of \(\mathrm{LF}^{(\alpha,+\infty),(\beta,0)}_{\mathcal{C}_{+},\ell}\) decorated by independent radial SLE.** By Proposition 4.4, for \(\phi^{\ell}\sim\mathrm{LF}^{(\alpha,+\infty),(\beta,0)}_{\mathcal{C}_{+},\ell}\) conditioned on \(F_{\ell}\), with \(\sigma_{\ell}>0\) satisfying \(\mathcal{A}_{\phi^{\ell}}(\mathcal{C}_{+}+\sigma_{\ell})=\frac{1}{2}\) and \(\widetilde{\phi}^{\ell}=\phi^{\ell}(\cdot+\sigma_{\ell})\), for any \(U\subset\mathcal{C}\) bounded away from \(-\infty\), as \(\ell\to 0\) the field \(\widetilde{\phi}^{\ell}|_{U}\) converges in distribution to \(\phi^{\prime}|_{U}\). Note that \(\sigma_{\ell}\to\infty\) in probability as \(\ell\to 0\) (e.g. by taking \(U=[-N,\infty)\times[0,2\pi]\) in the above statement).
Next, sample a radial \(\mathrm{SLE}_{\kappa}\) curve \(\eta^{\ell}\) in \((\mathcal{C}_{+},0,+\infty)\) independently of \(\phi^{\ell}\) and parametrize it by quantum area. Let \(\widetilde{\eta}^{\ell}=\eta^{\ell}-\sigma_{\ell}\), and for each neighborhood \(U\) of \(+\infty\) bounded away from \(-\infty\) define the curve \(\widetilde{\eta}^{\ell}_{U}:[0,\infty)\to\mathcal{C}\) by \(\widetilde{\eta}^{\ell}_{U}:=\widetilde{\eta}^{\ell}(\cdot+\sigma_{U})\) where \(\sigma_{U}\) is the first time \(\widetilde{\eta}^{\ell}\) hits \(\overline{U}\). Since whole-plane \(\mathrm{SLE}_{\kappa}\) is the local limit of radial \(\mathrm{SLE}_{\kappa}\) as the domain tends to the whole plane, the curve \(\widetilde{\eta}^{\ell}_{U}\) converges in law, in the topology of uniform convergence on compact sets, to \(\eta^{\prime}_{U}:=\eta^{\prime}(\cdot+\sigma^{\prime}_{U})\), where \(\sigma^{\prime}_{U}\) is the time \(\eta^{\prime}\) first hits \(\overline{U}\).
Thus, in the setup of Theorem 3.1 with boundary length \(\ell\) rather than \(1\), conditioned on having quantum area at least \(1\), as \(\ell\to 0\) the field and curve \(\widetilde{\phi}^{\ell},\widetilde{\eta}^{\ell}\) converge in law to \(\phi^{\prime},\eta^{\prime}\) above.
**Step 3: Showing \((\mathcal{C},\widetilde{\phi}^{\infty},\widetilde{\eta}^{\infty},-\infty,+\infty)/{\sim_{\gamma}}\stackrel{{d}}{{=}}(\mathcal{C},\phi^{\prime},\eta^{\prime},-\infty,+\infty)/{\sim_{\gamma}}\).** For \(\ell\) of the form \(2^{-n}\), let \((X^{\ell}_{t},Y^{\ell}_{t})\) be the boundary length process associated to \(\widetilde{\phi}^{\ell},\widetilde{\eta}^{\ell}\) defined above, and let \((L^{\ell}_{t},Z^{\ell}_{t})=(\ell+X^{\ell}_{t}+Y^{\ell}_{t},X^{\ell}_{t}-Y^{\ell}_{t})\). Since \(\tau^{\infty}_{\ell}\to 0\) in probability as \(\ell\to 0\), we can couple \((L^{\infty}_{\tau^{\infty}_{\ell}+t},Z^{\infty}_{\tau^{\infty}_{\ell}+t})\) to agree with \((L^{\ell}_{t},Z^{\ell}_{t})\) with probability \(1-o_{\ell}(1)\). On this event \((\widetilde{\eta}^{\infty}([\tau^{\infty}_{\ell},\tau^{\infty}]),\widetilde{\phi}^{\infty},\widetilde{\eta}^{\infty}(\cdot+\tau^{\infty}_{\ell})|_{[0,\tau^{\infty}-\tau^{\infty}_{\ell}]},\widetilde{\eta}^{\infty}(\tau^{\infty}_{\ell}),+\infty)/{\sim_{\gamma}}=(\mathcal{C}_{+}-\sigma_{\ell},\widetilde{\phi}^{\ell},\widetilde{\eta}^{\ell},-\sigma_{\ell},+\infty)/{\sim_{\gamma}}\); let \(f_{\ell}\) be the conformal map sending \(\widetilde{\eta}^{\infty}([\tau^{\infty}_{\ell},\tau^{\infty}])\) to \(\mathcal{C}_{+}-\sigma_{\ell}\) such that \(f_{\ell}(\widetilde{\eta}^{\infty}(\tau^{\infty}_{\ell}))=-\sigma_{\ell}\) and \(f_{\ell}(+\infty)=+\infty\). Since for any \(N\) the regions \(\mathcal{C}\backslash\widetilde{\eta}^{\infty}([\tau^{\infty}_{\ell},\tau^{\infty}])\) and \(\mathcal{C}\backslash(\mathcal{C}_{+}-\sigma_{\ell})\) are, with probability tending to \(1\) as \(\ell\to 0\), subsets of \((-\infty,-N)\times[0,2\pi]\), standard conformal distortion estimates give that for every neighborhood \(U\) of \(+\infty\) bounded away from \(-\infty\) we have \(\sup_{U}|f^{\prime}_{\ell}-1|\to 0\) in probability. This implies that there is a coupling of \((\phi^{\prime},\eta^{\prime})\) with \((\widetilde{\phi}^{\infty},\widetilde{\eta}^{\infty})\) and a random rotation \(f_{\infty}:\mathcal{C}\to\mathcal{C}\) of the cylinder (i.e. conformal map fixing \(\pm\infty\) with \(\mathrm{Re}\,f_{\infty}(z)=\mathrm{Re}\,z\) for all \(z\)) such that \(\widetilde{\phi}^{\infty}=f_{\infty}\bullet_{\gamma}\phi^{\prime}\) and \(\widetilde{\eta}^{\infty}=f_{\infty}\circ\eta^{\prime}\) a.s., completing the step.
**Conclusion.**\((\mathcal{C},\widetilde{\phi}^{\infty},\widetilde{\eta}^{\infty},-\infty,+ \infty)/\)\(\sim_{\gamma}\) has the law of the curve-decorated quantum surface of Theorem 4.1 (Step 3), and its boundary length process is as desired (Step 1). The measurability claim (4.1) is immediate from that of Theorem 3.1 and the construction of Step 1.
Proof of Theorem 1.1.: We work in the setting of Theorem 4.1, with the curve-decorated quantum surface embedded in \((\mathbb{C},0,\infty)\), so that \(\eta\) is a curve from \(0\) to \(\infty\) and \((L_{t},Z_{t})\) is its boundary length process.
Let \(\mathrm{Inv}(z)=z^{-1}\), let \(\widetilde{\phi}=\mathrm{Inv}\bullet_{\gamma}\phi\), let \(\widetilde{\eta}\) be the time-reversal of \(\mathrm{Inv}\circ\eta\) (so \(\widetilde{\eta}\) is also a curve from \(0\) to \(\infty\)), and let \((\widetilde{L}_{t},\widetilde{Z}_{t})\) be the time-reversal of \((L_{t},Z_{t})\). Let \(\mathfrak{S}=(\mathbb{C},\phi,\eta,0,\infty)\) and \(\widetilde{\mathfrak{S}}=(\mathbb{C},\widetilde{\phi},\widetilde{\eta},0,\infty)\). By definition \(F((L_{t},Z_{t}))=\mathfrak{S}\), so as a consequence of Lemma 2.14 we have \(F((\widetilde{L}_{t},\widetilde{Z}_{t}))=\widetilde{\mathfrak{S}}\) a.s. Thus, \((L_{t},Z_{t})\stackrel{{d}}{{=}}(\widetilde{L}_{t},\widetilde{Z}_{t})\) implies \(\mathfrak{S}\stackrel{{d}}{{=}}\widetilde{\mathfrak{S}}\).
Let \(r>0\) be such that \(\mathcal{A}_{\phi}(r\mathbb{D})=\mathcal{A}_{\phi}(\mathbb{C}\backslash r \mathbb{D})\), let \(\theta\) be uniformly sampled from \([0,2\pi)\) independently of \((\phi,\eta)\), define \(f(z)=r^{-1}e^{i\theta}z\), and set \(\phi_{0}=f\bullet_{\gamma}\phi\) and \(\eta_{0}=f\circ\eta\). Likewise define \(\widetilde{\phi}_{0},\widetilde{\eta}_{0}\) by applying the same embedding procedure for \(\widetilde{\phi},\widetilde{\eta}\). Since \(\mathfrak{S}\stackrel{{ d}}{{=}}\widetilde{\mathfrak{S}}\) we conclude \((\phi_{0},\eta_{0})\stackrel{{ d}}{{=}}(\widetilde{\phi}_{0}, \widetilde{\eta}_{0})\). Since \(\phi\) and \(\eta\) are independent, and whole-plane SLE is invariant in law under dilations and rotations of the plane, the law of \(\eta_{0}\) is whole-plane SLE. Likewise \(\widetilde{\eta}_{0}\) has the law of the time-reversal of whole-plane SLE after applying \(\mathrm{Inv}\). The statement \(\eta_{0}\stackrel{{ d}}{{=}}\widetilde{\eta}_{0}\) is thus the desired reversibility of whole-plane SLE for \(\kappa>8\).
## 5 Open problems
The most natural variants of \(\mathrm{SLE}_{\kappa}\) are the \(\mathrm{SLE}_{\kappa}(\underline{\rho})\) processes [13, 14, 15]. For \(\kappa\in(0,8]\), the time reversal of chordal \(\mathrm{SLE}_{\kappa}(\rho)\) has been solved when the sum of the weights is larger than \((-2)\vee(\frac{\kappa}{2}-4)\) [13, 12, 14], while [13, Theorem 1.18] gives a criterion for the reversibility of \(\mathrm{SLE}_{\kappa}(\rho^{-};\rho^{+})\) curves when \(\kappa>8\). On the whole-plane side, the most natural variant of whole-plane \(\mathrm{SLE}_{\kappa}\) is whole-plane \(\mathrm{SLE}_{\kappa}(\rho)\) for \(\rho>-2\), which agrees with whole-plane \(\mathrm{SLE}_{\kappa}\) when \(\rho=0\) (see e.g. [13, Section 2.1.3]). Miller and Sheffield showed that when \(\kappa\in(0,4]\) and \(\rho>-2\), or \(\kappa\in(4,8]\) and \(\rho\geq\frac{\kappa}{2}-4\), whole-plane \(\mathrm{SLE}_{\kappa}(\rho)\) is reversible [13, Theorem 1.20]. They also showed that when \(\kappa>8\) and \(\rho\geq\frac{\kappa}{2}-4\), whole-plane \(\mathrm{SLE}_{\kappa}(\rho)\) is not reversible [13, Remark 1.21]. They did not treat the regime where \(\kappa>4\) and \(\rho\in(-2,\frac{\kappa}{2}-4)\) because it is not as natural in the imaginary geometry framework, see [13, Remark 1.22]. On the other hand, Theorem 1.1 gives reversibility when \(\kappa>8\) and \(\rho=0\) even though it falls into this regime, so there is still hope for reversibility in this range.
**Problem 5.1**.: _When \(\kappa>4\), for which \(\rho\in(-2,\frac{\kappa}{2}-4)\) is \(\mathrm{SLE}_{\kappa}(\rho)\) reversible?_
A further generalization of whole-plane \(\mathrm{SLE}_{\kappa}(\rho)\) can be obtained by adding a constant drift term to the driving function. Zhan [12] showed that when \(\kappa\in(0,4]\), \(\rho=0\) and any constant drift is chosen, the curve is reversible. Miller and Sheffield [13, Theorem 1.20] showed that when \(\kappa\in(0,4]\) and \(\rho>-2\), or \(\kappa\in(4,8]\) and \(\rho\geq\frac{\kappa}{2}-4\), for any chosen drift the curve is reversible.
**Problem 5.2**.: _When \(\kappa>4\) and \(\rho\in(-2,\frac{\kappa}{2}-4)\), what choices of drift coefficient give a reversible curve?_
The statement of Theorem 1.1 involves only SLE, but our arguments depend on couplings with LQG.
**Problem 5.3**.: _Find a proof of Theorem 1.1 not using mating-of-trees._
It seems likely that a solution to Problem 5.3 would represent a significant step towards solving Problems 5.1 and 5.2.
The convergence of lattice statistical physics models to SLE was a primary reason to expect the reversibility of chordal \(\mathrm{SLE}_{\kappa}\) for \(\kappa\leq 8\). Conversely, Theorem 1.1 suggests the following question.
**Problem 5.4**.: _Find a lattice statistical physics model whose scaling limit is whole-plane \(\mathrm{SLE}_{\kappa}\) for some \(\kappa>8\)._
Questions of this sort are sometimes easier when the underlying lattice is random, i.e., is a random planar map. Some random planar maps decorated by statistical physics models can be encoded by a pair of trees, which in turn may be described by a random walk on the 2D lattice. If this random walk converges in the scaling limit to Brownian motion with covariance given by (2.9), then we say the corresponding decorated random planar map converges in the _peanosphere topology_ to \((\gamma=\frac{4}{\sqrt{\kappa}})\)-LQG decorated by space-filling \(\mathrm{SLE}_{\kappa}\). In the case when the \(\mathrm{SLE}_{\kappa}\) is a space-filling loop in \(\hat{\mathbb{C}}\) from \(\infty\) to \(\infty\), such convergences are known for random planar maps decorated by bipolar orientations (\(\kappa=12\)) [13], Schnyder woods (\(\kappa=16\)) [13], or a variant of spanning trees (\(\kappa>8\)) [11]; see the survey [12] for examples where \(\kappa\leq 8\). The next problem asks for such a result for whole-plane \(\mathrm{SLE}_{\kappa}\) where \(\kappa>8\).
**Problem 5.5**.: _Exhibit a random planar map decorated by a statistical physics model which can be encoded by a random walk converging in the limit to \((X_{t},Y_{t})\) defined in Theorem 3.1 or in Theorem 4.1. In other words, find a random planar map model which converges in the peanosphere sense to LQG decorated by radial or whole-plane SLE._
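To make the scaling-limit target of Problem 5.5 concrete, the following is a minimal numerical sketch of a discretized correlated Brownian motion of the kind arising in the peanosphere framework. The covariance (2.9) is not reproduced in this section; the sketch assumes the standard mating-of-trees normalization, in which the two coordinates have correlation \(-\cos(4\pi/\kappa)\) up to an overall scaling, so it should be read as an illustration rather than the exact normalization of (2.9).

```python
import numpy as np

def sample_peanosphere_walk(kappa, n_steps, dt=1e-3, seed=None):
    """Sample a discretized correlated Brownian motion (X_t, Y_t).

    The correlation -cos(4*pi/kappa) is the standard mating-of-trees
    normalization (an assumption here, since (2.9) is not restated in
    this section); the overall scaling is left arbitrary.
    """
    rng = np.random.default_rng(seed)
    corr = -np.cos(4 * np.pi / kappa)
    cov = dt * np.array([[1.0, corr], [corr, 1.0]])
    increments = rng.multivariate_normal([0.0, 0.0], cov, size=n_steps)
    return np.cumsum(increments, axis=0)

# For kappa > 8 we have 4*pi/kappa < pi/2, so the two coordinates are
# negatively correlated; kappa = 12 matches the bipolar-orientation case.
walk = sample_peanosphere_walk(kappa=12.0, n_steps=10_000, seed=0)
```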
|
2309.08452 | MBAPPE: MCTS-Built-Around Prediction for Planning Explicitly | We present MBAPPE, a novel approach to motion planning for autonomous driving
combining tree search with a partially-learned model of the environment.
Leveraging the inherent explainable exploration and optimization capabilities
of the Monte-Carlo Search Tree (MCTS), our method addresses complex
decision-making in a dynamic environment. We propose a framework that combines
MCTS with supervised learning, enabling the autonomous vehicle to effectively
navigate through diverse scenarios. Experimental results demonstrate the
effectiveness and adaptability of our approach, showcasing improved real-time
decision-making and collision avoidance. This paper contributes to the field by
providing a robust solution for motion planning in autonomous driving systems,
enhancing their explainability and reliability. | Raphael Chekroun, Thomas Gilles, Marin Toromanoff, Sascha Hornauer, Fabien Moutarde | 2023-09-15T14:57:02Z | http://arxiv.org/abs/2309.08452v1 | # MBAPPE: MCTS-Built-Around Prediction for Planning Explicitly
###### Abstract
We present MBAPPE, a novel approach to motion planning for autonomous driving combining tree search with a partially-learned model of the environment. Leveraging the inherent explainable exploration and optimization capabilities of the Monte-Carlo Search Tree (MCTS), our method addresses complex decision-making in a dynamic environment. We propose a framework that combines MCTS with supervised learning, enabling the autonomous vehicle to effectively navigate through diverse scenarios. Experimental results demonstrate the effectiveness and adaptability of our approach, showcasing improved real-time decision-making and collision avoidance. This paper contributes to the field by providing a robust solution for motion planning in autonomous driving systems, enhancing their explainability and reliability. Code is available under [https://github.com/raphychek/mbappe-nuplan](https://github.com/raphychek/mbappe-nuplan).
## I Introduction
Innovations in machine learning techniques have led to significant advancements in self-driving technology. Particularly, the use of deep learning has greatly improved the perception stage of autonomous driving. These developments have been complemented by progress in sensor technology and mapping methods. As a result, the focus is now shifting to the next challenges of autonomous driving, and motion planning emerges as a pivotal component. After identifying roads and monitoring nearby vehicles and object entities, the autonomous driving system must now decide its future path and plan its trajectory accordingly to ensure a collision-free route while respecting traffic rules.
Therefore, this study centers on the mid-to-end stage of autonomous driving, presuming that perception tasks have already been accomplished, and works toward efficient and explainable motion planning. In this realm, recent research mostly focuses on Imitation Learning (IL) [1, 2, 3] or on hybrid IL and rule-based methods [4, 5].
However, rule-based methods for autonomous driving are limited by their lack of scalability, adaptability, robustness in complex and ambiguous situations, and their inability to handle unconventional scenarios. This contrasts with machine-learning based approaches that address these limitations through data-driven learning and adaptability.
Nonetheless, while Neural Networks (NN) provide a powerful and flexible tool for learning to drive using supervised labels with IL methods [1, 6, 7], they remain limited in the long-term understanding of the consequences of their actions. Therefore, they may not comprehend the full scope of interactions with the map and other agents. Deep Reinforcement Learning (Deep RL) based methods [8, 9, 10] aim to incorporate long-term returns of such consequences in the training of these networks. However, this causal understanding remains implicit and not guaranteed, and Deep RL training is most often sample inefficient.
Our approach aims to get the best of both worlds by using an IL prior to guide an MCTS [11, 12] into explicitly exploring the consequences of actions, validating the NN trajectory if it respects driving constraints, or exploring new actions if required, see Figure 1. The main challenge in running an MCTS is that it assumes environment transitions to be deterministic and perfectly known. While this is true for the displacement of the ego vehicle given its actions, and for the update of the map, which remains the same, other agents will also move of their own accord. In order to have a realistic world model, we developed an IL model to predict all the other agents' future trajectories. This way, we get an approximation of the future transitions that enables us to roll out the consequences of our chosen actions over multiple time-steps.
In this paper, we extend the MCTS paradigm to a partially-learned environment and apply it to autonomous driving.
Fig. 1: Visualization of the exploration done by MBAPPE in one planning step. We display the bird-eye-view trajectory pieces in xy coordinates. As the road is turning right, the MCTS explores multiple steering angle and acceleration configurations to correctly take the turn. MBAPPE finally selects the path which maximizes the Q-value (in green).
Next, we validate our performance on the nuPlan [13] simulation environment and compare it to other existing baselines. Lastly, we highlight the explainability of our approach, which allows easy observation and analysis of the steps leading to any given decision via its decision tree.
## II Related work
MBAPPE seeks to leverage imitation learning (IL) to guide an MCTS model in exploring the outcomes of its actions. As such, this section is dedicated to examining rule-based and learning-based motion planning techniques, as well as strategies integrating MCTS with deep learning.
**Rule-based methods.** Rule-based methods employ explicit rules to dictate the behavior of the autonomous vehicle, making them interpretable by nature [14, 15, 16]. A notable instance is the Intelligent Driver Model (IDM) [17], designed to track leading vehicles while maintaining safe distances through computation of the optimal acceleration based on the leading vehicle's speed. Rule-based methods were extended into predictive rule-based approaches, which anticipate future environmental states to improve collision avoidance [18, 19, 20]. However, rule-based methods are inflexible and rely on a perfect and consistent representation of the environment. This characteristic makes them struggle with generalization to novel scenarios and with the inherent variability of real-world conditions.
**Imitation learning methods.** Imitation learning methods allow learning how to drive from supervised data, leading to better generalizability than rule-based methods. Some of these methods directly output driving plans or commands [21, 7], but they suffer from a lack of interpretability and general robustness. To address these issues, some other approaches focus on making the planning decisions more interpretable. For instance, Dauner et al. developed the Predictive Driver Model (PDM) [4] to combine an interpretable IDM with a simple neural network. Some methods deal with the robustness problem by generating multiple planning options with deep learning and then choosing the one with the lowest cost [22, 23, 24, 25], or by refining deep-based predictions [26, 2]. However, IL methods still suffer from distribution mismatch, where the agent fails to recover from accumulated errors, leading to states increasingly far from the expert distribution, and they lack long-term reasoning.
**Reinforcement learning methods.** Instead of copying human behavior like IL, RL models use a reward system to judge how good a strategy is. This can lead to improved decision-making, sometimes even outperforming humans [27]. Model-free reinforcement learning focuses on learning optimal actions directly from observed states and rewards, without creating an explicit model of the driving environment. Even though RL is successful for simple autonomous driving tasks [8], up to now, no published work has reported success of an exclusively RL-based method in autonomous driving for complex urban environments [28]. Furthermore, RL suffers from sample inefficiency and from a lack of convergence guarantees and interpretability. Recent works leveraged supervised learning in RL pipelines to overcome these limitations [10, 29], thus compensating for the weakness of the RL gradient during training.
**Methods integrating MCTS with deep learning.** Integrating MCTS with deep learning techniques has emerged as a compelling approach to enhance decision-making processes in various domains. Silver et al. [27] pioneered this fusion by combining MCTS with deep supervised learning to achieve groundbreaking results in the game of Go with AlphaGo. This paradigm was extended with AlphaZero [30] by relying solely on self-play and RL. MuZero [31] finally embraced implicitness and extended the generality of these approaches by employing learned models to simulate outcomes and inform strategic decision-making.
In the realm of autonomous driving, Chen et al. [32] integrated MCTS with deep learning but relied on implicit tree transitions and prior computation, possibly leading to inexplicable behaviors, which are not desirable in this domain of application. Other published methods lack generalizability and constrain their applicative fields to simplified custom environments such as highway driving, without the possibility of public benchmark comparison [33, 34], or to high-level tactical decisions [35].
## III Method
In this section, we introduce MBAPPE and its components. In particular, we present the known and learned features of the world model, and technical details of our MCTS design and exploration steps.
Fig. 2: **MBAPPE pipeline** A prediction model infers future trajectories of other agents in the scene. This information is fed to the MCTS which outputs a sequence of consecutive actions. Those are integrated to form an improved trajectory planning for the ego.
### _MBAPPE framework_
At each time-step, a neural network (based on an open-loop version of Urban Driver [21]) predicts an estimation of the ego trajectory and of the future trajectories of every other agent around the ego. This information is fed to the MCTS, which deploys an internal lightweight simulation where the ego trajectory is used as a prior to guide the first steps of exploration, and the other agents' trajectories are leveraged to build the world model. At each simulation-step, which follows a planning time axis inside the tree, the MCTS explores the possible actions and internally simulates the evolution of the environment to check how the explored actions impact its driving performance (driving out of the drivable area, collisions with static objects, collisions with other agents given their estimated trajectories, etc.).
The global pipeline is represented in Figure 2.
### _World Model_
The Monte-Carlo tree search leverages an internal simplified representation of the world where it can quickly iterate to explore possible sequences of actions and their consequences. This environment is made of two categories of features:
* Known features:
* The map information, including traffic lights,
* Static objects such as traffic cones and barriers
* Dynamic objects such as neighboring vehicles, traffic cones or pedestrians, which we will consider as other agents evolving in the simulated environment
* Learned features:
* Estimated future trajectories of other agents given by the NN prediction.
### _MCTS design and tree steps_
Our MCTS is based on a kinematic bicycle model of the vehicle. Actions are defined as a tuple \((a,\delta)\), where \(a\) is the acceleration and \(\delta\) the steering angle. Accelerations and steering angles are discretized into 13 values each, in the respective ranges of \([-3,3]\) m.s\({}^{-2}\) and \([-\pi/4,\pi/4]\) rad. Actions are integrated every 0.1 s.
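A minimal sketch of this action space and of one integration step of the kinematic bicycle model is given below. Variable names and the wheelbase value are our own illustrative choices, not taken from the released code.

```python
import numpy as np

# Action grid described above: 13 accelerations x 13 steering angles.
ACCELERATIONS = np.linspace(-3.0, 3.0, 13)                 # m.s^-2
STEERING_ANGLES = np.linspace(-np.pi / 4, np.pi / 4, 13)   # rad

def bicycle_step(state, action, dt=0.1, wheelbase=3.0):
    """One Euler step of a kinematic bicycle model.

    state = (x, y, heading, speed); action = (acceleration, steering angle).
    The wheelbase is an assumed value, not specified in the paper.
    """
    x, y, heading, speed = state
    accel, steer = action
    x += speed * np.cos(heading) * dt
    y += speed * np.sin(heading) * dt
    heading += speed * np.tan(steer) / wheelbase * dt
    speed += accel * dt
    return (x, y, heading, speed)
```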
The simulation process of our tree search is detailed in Fig. 3. The tree is initialized with a single root node representing the current context. Each tree node stores 3 values: \(Q\), the expected return; \(P\), the action prior; and \(N\), the number of visits. The nodes are built and evaluated iteratively through the following steps:
* **Selection**: We follow the PUCT [27] formula to select the next action, following a trade-off between the exploitation of Q and the exploration of unvisited nodes with low \(N\); a code sketch of this rule is given after this list. At a node state \(\mathcal{S}\), the action \(\mathcal{A}\) is chosen using the following formula: \[\mathcal{A}_{t}=\operatorname*{argmax}_{\mathcal{A}}Q(\mathcal{S},\mathcal{A} )+c_{puct}P(\mathcal{S},\mathcal{A})\frac{\sqrt{\sum_{\mathcal{B}}N(\mathcal{ S},\mathcal{B})}}{1+N(\mathcal{S},\mathcal{A})}\] (1) with \(c_{puct}\) a hyper-parameter balancing the trade-off between exploration and exploitation. We found \(c_{puct}=2\) to perform the best in our experiments.
* **Expansion**: We expand leaf nodes by all physically possible actions from the state of the leaf node, following a prior \(P\) and some continuity constraints. These constraints ensure both comfort and physical feasibility of successive actions. Prior design and continuity constraints are described in Section III-D.
Fig. 3: **MCTS steps** a) Each simulation pass in the tree follows a trade-off between exploitation of the best \(Q\) value of an action, and the exploration term \(u(P)\) that encourages exploring nodes with fewer visits \(N\) along the prior \(P\). b) The leaf node is possibly expanded following probabilities depending on the prior \(P\) and the continuity constraints. c) After the simulation, the leaf node is evaluated by explicitly computing the reward \(r\) described in Section III-C. d) \(Q\)-values are updated so that the means of the rewards \(r\) in the sub-tree below each action are tracked.

* **Evaluation**: We consider that driving rewards are rather short-term (crash or not, exit the road or not within the next 6 or 8 seconds). Therefore, they do not need to be bootstrapped by a learned value network, but rather can be evaluated at the current simulation step by checking for them directly. Our computed reward \(r_{t}\) at state \(s_{t}\) is made of these main components:
  * Progress: distance advanced since the last node, normalized by the maximum allowed speed limit (\([0,1]\)),
  * Collision: penalty for a collision with a car or pedestrian (\(-5\)) or with an object (\(-2\)),
  * Route: \(-0.5\) if the vehicle is not on the expected road,
  * Drivable area: \(-1\) if the vehicle is not on the drivable area,
  * Center of the road: \(-\sin(\theta)/2\), where \(\theta\) is the angle difference between the ego heading and the closest centerline heading, and \(-d/2\), where \(d\) is the distance between the ego position and the closest centerline.
* **Back up**: We update the Q values using the cumulative reward as in MuZero [31]: \[G^{k}=\sum_{\tau=0}^{l-1-k}\gamma^{\tau}r_{k+1+\tau}\] (2) \[Q\left(s^{k-1},a^{k}\right):=\frac{N\left(s^{k-1},a^{k}\right) \times Q\left(s^{k-1},a^{k}\right)+G^{k}}{N\left(s^{k-1},a^{k}\right)+1}\] \[N\left(s^{k-1},a^{k}\right):=N\left(s^{k-1},a^{k}\right)+1\]
We use a discount factor \(\gamma\) of 1.
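The following sketch (our own naming and data layout, not the released implementation) illustrates how the Selection rule (1) and the Back-up update (2) fit together:

```python
import math

C_PUCT = 2.0  # exploration/exploitation trade-off value reported above

def select_action(node):
    """PUCT rule of eq. (1): pick the child maximizing Q + c_puct * u(P)."""
    total_visits = sum(child.N for child in node.children.values())
    def puct(child):
        return child.Q + C_PUCT * child.P * math.sqrt(total_visits) / (1 + child.N)
    return max(node.children, key=lambda action: puct(node.children[action]))

def backup(path, rewards, gamma=1.0):
    """Back-up of eq. (2): propagate the discounted return up the path.

    `path` holds the visited nodes and `rewards` the rewards collected
    below each of them (our indexing convention); each Q is a running
    mean over the visits of that node.
    """
    for k, node in enumerate(path):
        g = sum(gamma ** t * r for t, r in enumerate(rewards[k:]))
        node.Q = (node.N * node.Q + g) / (node.N + 1)
        node.N += 1
```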
### _Prior and continuity constraints_
An efficient MCTS exploration process can be achieved by leveraging two approaches.
Firstly, the MCTS is provided with an intuition over which actions to explore, so as to prioritize the more probable ones. This issue is tackled using a prior over the distribution of actions for each node. Such a prior is usually either learned and inferred for every node [31], which is computationally expensive, or handcrafted. Secondly, to further streamline the exploration process, we narrowed down the action space, reducing the actions that need to be explored to the most critical ones. To achieve this, we integrated continuity constraints into the MCTS, ensuring the physical feasibility of the explored actions while also enhancing comfort and reducing the exploration time.
#### III-D1 The prior
We designed a prior which relies on both handcrafted rules and learned rules, all without incurring any additional computational overhead.
The prior function is made of two parts:
* The handcrafted prior \(P_{h}\) prioritizes exploration around the constant speed with null steering angle,
* The learned prior \(P_{l}\) is obtained by deriving the prediction of the ego trajectory by the NN into consecutive actions. This prior favors following the NN actions for the first \(T\) time steps of the internal simulation of the MCTS. We found \(T=1\) s to perform the best in our experiments.
Both \(P_{l}\) and \(P_{h}\) are Gaussians centered on the chosen action. The Gaussians are parametrized with a very high variance (\(\sigma^{2}=100\)) to encourage an almost uniform exploration.
The designed prior can be written:
\[P^{t}=\begin{cases}P_{h}^{t}+P_{l}^{t}&\text{if }t\leq T\\ P_{h}^{t}&\text{if }t>T\end{cases} \tag{3}\]
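A sketch of how (3) can be evaluated is given below; the reduction to a single action dimension and the function names are ours, for brevity:

```python
import numpy as np

SIGMA2 = 100.0  # variance of both Gaussians, as stated above

def gaussian_prior(actions: np.ndarray, center: float) -> np.ndarray:
    """Wide Gaussian over the discretized actions, normalized to sum to 1."""
    weights = np.exp(-(actions - center) ** 2 / (2 * SIGMA2))
    return weights / weights.sum()

def action_prior(actions, nn_action, t, T=1.0):
    """Prior of eq. (3): P_h + P_l before time T, P_h alone afterwards."""
    p_h = gaussian_prior(actions, center=0.0)  # constant speed / null steering
    prior = p_h + gaussian_prior(actions, nn_action) if t <= T else p_h
    return prior / prior.sum()
```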
#### III-D2 Continuity constraints
To ensure the output trajectory is physically feasible and to minimize the total number of actions to explore, we implemented continuity constraints in the MCTS.
These constraints are twofold:

* The Tree Constraint: At a given step \(t\) of the real-world vehicle movement, the root node of the new tree is constrained to explore neighboring accelerations and steering angles relative to the actions taken at time \(t-1\) by the previous tree. This constraint favors behavior continuity between successive time-steps and the corresponding MCTS.
* The Node Constraint: During the MCTS internal expansion phase, exploration only focuses on neighboring acceleration and steering angle values relative to the actions of the parent node. This constraint favors behavior continuity during the expansion phase of a given MCTS.
We formulate both continuity constraints as restricting the following action \((a_{t+1},\delta_{t+1})\) to be within a range of \(a_{t}\pm 0.15\) m.s\({}^{-2}\) for the acceleration and \(\delta_{t}\pm\pi/240\) rad for the steering angle with \((a_{t},\delta_{t})\) the action at the previous time-step.
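A minimal sketch of these constraints follows; reading the bounds as per-0.1 s rates that scale with the number of elapsed integration steps is our own interpretation, chosen so that neighboring grid values become reachable between expansions:

```python
import numpy as np

DELTA_A = 0.15             # m.s^-2 per 0.1 s time-step
DELTA_STEER = np.pi / 240  # rad per 0.1 s time-step

def admissible_actions(prev_action, n_steps, accelerations, steerings):
    """Actions reachable from prev_action after n_steps integration steps,
    under the continuity bounds stated above."""
    prev_a, prev_s = prev_action
    ok_a = accelerations[np.abs(accelerations - prev_a) <= DELTA_A * n_steps]
    ok_s = steerings[np.abs(steerings - prev_s) <= DELTA_STEER * n_steps]
    return [(a, s) for a in ok_a for s in ok_s]
```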
## IV Experimental results
**Dataset:** We show results on the nuPlan dataset. It encompasses 1300 hours' worth of real vehicle motion data, along with its corresponding simulator. Within the nuPlan framework, we chose to assess the performance of planners on the closed-loop non-reactive agents benchmark. We focus on this benchmark, as evaluations conducted in closed loop more effectively assess an agent's driving capabilities, without the need to compare them to a flawed 'ideal' behavior as typically seen in open-loop assessments. Additionally, we chose non-reactive agents for our study, as preliminary experiments and other performance benchmarks [4, 13] have demonstrated that outcomes are largely consistent between reactive and non-reactive agents. All simulations are run on 100 scenarios of each of the 14 scenario types (totaling 1,118 scenarios in practice, as not all 14 types have 100 available scenarios) of the nuPlan challenge, following the Val14 benchmark validation set [4].
**Score and metrics:** We use the nuPlan official score, which measures driving quality between 0 and 100 through a combination of 16 normalized driving metrics related to infraction rate, ego comfort, or progress toward the goal. We decided to put a special emphasis on the metrics of collision rate (CR), driving area non-compliance (DA) and ego progress (EP) in our experiments, as they are key elements for a safe and efficient autonomous driving system.
**Implementation details:** For ablation studies, the number of simulation steps is limited to 256 in each MCTS. In our setup (Intel Core i7-9700K CPU @ 3.60GHz), the whole pipeline inference time is \(\sim 0.15\) seconds, including input pre-processing, the prediction model, the MCTS and post-processing. The pipeline runs on CPU only. For inference speed purposes, we only expand new possible actions every 1 s. We observed no drop in performance.
### _Ablation study over the prior_
An ablation study over the choice of prior is presented in Table I. The continuity constraints are the ones described in Section III-D.
We can see from the results of Table I that the MCTS without a prior is inefficient. As exploration is unguided, the expansion phase does not create nodes leading to a good reward _a priori_. Following \(P_{l}\) for the first steps of the simulation significantly improves the exploration phase by guiding the MCTS to stay within the driving area. Indeed, thanks to the continuity constraints, a good beginning of the trajectory allows the vehicle to stay on the road and reach acceptable metrics. Interestingly, leveraging only \(P_{l}\) leads to an increase in the collision rate: if the first MCTS actions differ from the prior's, there will be a mismatch between the guidance it provides and the actual scene, which can lead to collisions.
Leveraging \(P_{h}\) allows the MCTS to prioritize exploration of the most common behavior on average (staying at around the same velocity with a null steering angle), therefore minimizing collisions and optimizing overall progress. Notably, using only this naive prior without any kind of learning already yields very good performance, highlighting the power of guided exploration in the MBAPPE method. Finally, leveraging \(P_{h}+P_{l}\) allows prioritizing this kind of behavior while starting with a better heads-up, and leads to the best results on this set of experiments.
### _Ablation study over continuity constraints_
An ablation study over the choice of continuity constraints is presented in Table II. For these experiments, the prior is \(P_{h}+P_{l}\).
It becomes apparent that, when applied separately, the continuity constraints offer only marginal improvements to our method. A possible explanation is that the handcrafted identity prior already directs the MCTS towards a form of constrained exploration similar to what is achieved through node constraints. However, utilizing node and tree constraints jointly does enhance the exploration process. Importantly, the combined effects of these constraints not only substantially increase performance but also ensure a consistent selection of actions, both within a single tree and across multiple trees that correspond to sequential planning steps.
### _Comparison with state-of-the-art methods_
We compare MBAPPE's performance with other state-of-the-art methods on the validation scenarios of the Val14 benchmark [4]. See Table III.
**Baselines:** Urban Driver [21] utilizes PointNet [36] layers to process polylines and employs an MLP following a multi-head attention block to forecast the ego trajectory. GameFormer Planner [2] exploits a Transformer to predict all agents' trajectories before refining the ego planning via non-linear optimization. PlanCNN [3] leverages a CNN on rasterized inputs to predict the ego trajectory. PDM [4] leverages an improved IDM [17] model combined with a simple MLP to generate several trajectories, which are then scored to return the optimal one. GC-PGP [5] categorizes proposed plans according to their traversal of a route-constrained lane graph, and then identifies the most probable cluster center.
For this comparison, we extended Urban Driver to predict trajectories of all other agents in the scene in addition to the ego's. We name this updated version Urban Driver Multi-Agent (Urban Driver MA). Then, we evaluated two versions of MBAPPE. One leverages Urban Driver MA as prediction and prior model (c.f. Figure 2), and the other a GameFormer model. Other components of those systems are identical.
In our experiments, we found that enhancing a prediction model with MBAPPE consistently results in improved planning. Specifically, when integrated with GameFormer, MBAPPE yields a substantial improvement in key metrics
compared to using non-linear optimization techniques as done with the GameFormer Planner.
Thus, MBAPPE not only delivers state-of-the-art performance, but is also an explainable and interpretable operator when applied to predictive models. This dual benefit both refines decision-making policies and provides added adaptability.
## V An explicit and explainable method
A key benefit of this technique is its simplicity: it requires only basic high-level directives in the form of a reward function (e.g., move ahead, avoid collisions, stick to the route, and remain on the road). Despite its vague prior, the method yields highly effective and realistic planning. This eliminates the need for specific, hard-to-generalize rules, like basing decisions on the road's curvature or the speed of the car ahead, as well as the use of hardly interpretable neural networks. As a result, our approach is highly flexible, adaptable, and explainable.
Indeed, the decisions of the MCTS are explainable, and the internal process that led to those decisions can be easily observed and analyzed. Figure 4 provides an example of a decision tree of the MCTS, in which we can observe several exploration branches and their consequences on the tree expansion. In particular, we observe on the green right branch that internal exploration leading to desirable behavior yields the highest Q-value and further exploration of that branch. When exploration leads to collisions or to the ego leaving its expected route, the Q-value is low and exploration stops, as shown in the red middle and orange left branches. Figure 4 shows that the MCTS decision-making process is transparent and explainable, thus leading to explicit and safe planning.
## VI Conclusion
This paper presents MBAPPE, a novel approach extending MCTS for planning within a partially learned environment in the context of autonomous driving. Through ablation studies, we highlighted the advantages of incorporating the designed priors and continuity constraints into the MCTS tree. Comparative analysis using a benchmark on the nuPlan simulator revealed that MBAPPE is an effective refinement operator for planning models, consistently outperforming vanilla models across all evaluation metrics. Finally, we emphasize the interpretability provided by this technique, a critical attribute for ensuring the safety and reliability of autonomous vehicles.
In terms of future work, as MBAPPE improves planning model capabilities, one could fine-tune the prior network similarly to the approach used in AlphaGo [27]. This would enable the network to better emulate the MCTS output, thereby refining its priors and initiating a cycle of self-improvement. Better results could also be achieved with a more complex learned prior inferred for each node [31, 32], as well as learning a bootstrapped value network to estimate node expected returns in addition to the current reward. However this would require more network inferences and could harm the execution time.
Fig. 4: A subset of a decision tree obtained with MCTS exploration. Nodes are colored according to their Q-value. The root node corresponds to the present state of the vehicle in the nuPlan simulator. We observe that the orange left branch exploration leads to the ego leaving the expected route, hence the low Q-value. The red middle branch exploration leads to a collision, thus explaining the low Q-value. The green right branch exploration presents the expected behavior and therefore has the highest Q-value. The explored planning can also be observed in Figure 1. |
2309.03261 | On non-supersymmetric stable marginal deformations in AdS$_3$/CFT$_2$ | We discuss a continuous family of non-supersymmetric AdS$_3\times
S^3\times{\rm T^4}$ vacua in heterotic and type II supergravities whose
complete Kaluza-Klein spectrum is computed and found to be free from
instabilities. This family is protected as well against some non-perturbative
decay channels, and as such it provides the first candidate for a
non-supersymmetric holographic conformal manifold in 2$d$. We also describe the
operators realising the deformations in the worldsheet and boundary CFT's. | Camille Eloy, Gabriel Larios | 2023-09-06T18:00:00Z | http://arxiv.org/abs/2309.03261v2 | # On non-supersymmetric stable marginal deformations in AdS\({}_{3}\)/CFT\({}_{2}\)
###### Abstract
We discuss a continuous family of non-supersymmetric AdS\({}_{3}\times S^{3}\times\mathrm{T}^{4}\) vacua in heterotic and type II supergravities whose complete Kaluza-Klein spectrum is computed and found to be free from instabilities. This family is protected as well against some non-perturbative decay channels, and as such it provides the first candidate for a non-supersymmetric holographic conformal manifold in \(2d\). We also describe the operators realising the deformations in the worldsheet and boundary CFT's.
+
Footnote †: preprint: MI-HET-811
Operators in any conformal field theory in \(d\) dimensions can be classified according to their conformal dimension as relevant (\(\Delta<d\)), marginal (\(\Delta=d\)), or irrelevant (\(\Delta>d\)). While the cases with \(\Delta\neq d\) trigger (possibly trivial) RG-flows between isolated CFTs, marginal operators play a different role: they describe the space of theories into which the original CFT can be deformed continuously without breaking conformal invariance.
In holographic CFTs, the bulk perspective over these conformal manifolds has remained an open challenge. The gravitational description of these deformations is given by families of AdS solutions that share the same cosmological constant and are labelled by free parameters. The main approach to building these solutions is provided by the TsT prescription [1], which can be applied whenever the undeformed solution preserves a number of abelian isometries (see [2; 3] for novel approaches). Moreover, whenever the undeformed solution admits a consistent truncation down to a gauged supergravity in \(d+1\) dimensions, the massless modes dual to the marginal operators usually sit at higher Kaluza-Klein levels [1; 4; 5], which prevents using the lower-dimensional theory to describe the deformations - see [6; 7; 8] for a few recent exceptions to this rule.
In this work, we present a family of AdS\({}_{3}\)/CFT\({}_{2}\) duals realising a two-dimensional conformal manifold labelled by parameters \((\omega,\zeta)\). The undeformed solution is the AdS\({}_{3}\times S^{3}\times\mathrm{T}^{4}\) configuration of type II supergravities that preserves the small \(\mathcal{N}=(4,4)\) superalgebra
\[\left[\mathrm{SU}(2)_{\mathrm{l}}\times\mathrm{SU}(2|1,1)_{\mathrm{L}}\right] \times\left[\mathrm{SU}(2)_{\mathrm{r}}\times\mathrm{SU}(2|1,1)_{\mathrm{R}} \right]\times\mathrm{U}(1)^{4}. \tag{1}\]
This ten-dimensional solution only has a non-trivial profile for the fields in the NSNS sector, and can thus also be realised in heterotic string theory. For each of these ten-dimensional solutions, there are consistent truncations down to gauged supergravity in three dimensions, and unlike in [1; 4; 5], the scalar modes dual to the operators in the conformal manifold can already be captured within an appropriate truncation [9].
Thanks to the reformulation of supergravity in the language of Exceptional Field Theory (ExFT), we construct the deformed solutions in \(D=10\) by means of generalised Scherk-Schwarz (gSS) Ansätze. For generic values of the marginal deformations, the \(10d\) spacetime is
\[\mathrm{AdS}_{3}\times M^{4}\times\mathrm{T}^{3}\,, \tag{2}\]
with the manifold \(M^{4}\) (trivially) fibered over a deformed \(S^{3}\) as \(S^{1}_{y^{7}}\hookrightarrow M^{4}\to M^{3}_{\omega,\zeta}\), with \(y^{7}\) one of the coordinates on \(\mathrm{T}^{4}\). The generic deformations only preserve
\[\left(\mathrm{U}(1)_{\mathrm{L}}\times\mathrm{U}(1)_{\mathrm{R}}\right)\times \mathrm{SU}(2)_{\mathrm{diag}}\times\mathrm{U}(1)^{4}\,, \tag{3}\]
and no supersymmetry, where \(\mathrm{U}(1)_{\mathrm{L},\mathrm{R}}\subset\mathrm{SU}(2)_{\mathrm{L},\mathrm{R}}\) and \(\mathrm{SU}(2)_{\mathrm{diag}}\) is the diagonal subgroup of \(\mathrm{SU}(2)_{\mathrm{l}}\times\mathrm{SU}(2)_{\mathrm{r}}\) in (1).
\[\mathrm{U}(1)_{\mathrm{L}}\times\left[\mathrm{SU}(2)_{\mathrm{diag}}\times \mathrm{SU}(2|1,1)_{\mathrm{R}}\right]\times\mathrm{U}(1)^{4}. \tag{4}\]
Owing to the fact that ExFT's are not only a tool to describe consistent truncations, but encode the entire dynamics of the corresponding supergravities, we are also able to obtain the complete spectrum of Kaluza-Klein modes [10] on the solutions we construct. This allows us to show that, despite generically non-supersymmetric, the two-parameter family of solutions is perturbatively stable for a finite range of the parameters.
In the remainder of this letter, we describe how these solutions appear in \(D=3\) gauged supergravity and present their uplift both to heterotic and type II supergravities through the ExFT formalism. Subsequently, we discuss how the spectroscopy techniques of [10; 11] can be applied to these cases and the stability of the non-supersymmetric solutions.
The family of solutions found in [9] sits within the \(3d\) half-maximal supergravity whose scalar manifold is
\[\frac{\mathrm{SO}(8,8)}{\mathrm{SO}(8)\times\mathrm{SO}(8)}\,. \tag{5}\]
The gauging procedure can be described by an embedding tensor \(\Theta_{\bar{K}\bar{L}|\bar{M}\bar{N}}\), with the index \(\bar{M}\) in the vector representation of \(\mathrm{SO}(8,8)\). This embedding tensor must
obey a quadratic constraint [12, 13] for the gauge algebra to close. Additionally, supersymmetry requires that the embedding tensor is restricted to take values in [14]
\[\Theta\in\mathbf{1}\oplus\mathbf{135}\oplus\mathbf{1820}\,, \tag{6}\]
and, therefore, it can be parameterised as
\[\Theta_{\bar{K}\bar{L}|\bar{M}\bar{N}}=\theta_{\bar{K}\bar{L}\bar{M}\bar{N}}+\frac{1}{2}\big{(}\eta_{\bar{M}[\bar{K}}\theta_{\bar{L}]\bar{N}}-\eta_{\bar{N}[\bar{K}}\theta_{\bar{L}]\bar{M}}\big{)}+\theta\,\eta_{\bar{N}[\bar{K}}\eta_{\bar{L}]\bar{M}}\,, \tag{7}\]
in terms of totally antisymmetric, symmetric traceless and singlet tensors, and with \(\eta_{\bar{M}\bar{N}}\) the \(\mathrm{SO}(8,8)\) invariant tensor. The embedding tensor describing our gauged supergravity is specified by the choice
\[\theta=0\,,\quad\theta_{\bar{0}\bar{0}}=-4\sqrt{2}\,,\quad\theta_{\bar{M}\bar {N}\bar{P}\,\bar{0}}=-\tfrac{1}{\sqrt{2}}X_{\bar{M}\bar{N}\bar{P}}\,, \tag{8}\]
with
\[X_{\bar{m}\bar{n}\bar{p}}=X_{\bar{m}}{}^{\bar{n}\bar{p}}=X^{\bar{m}}{}_{\bar{n}}{}^{\bar{p}}=X^{\bar{m}\bar{n}}{}_{\bar{p}}=\varepsilon_{\bar{m}\bar{n}\bar{p}}\,, \tag{9}\]
in terms of indices following the breaking
\[\mathrm{SO}(8,8) \supset \mathrm{SO}(1,1)\times\mathrm{GL}(3)\times\mathrm{GL}(3)\times \mathrm{SO}(1,1)\,, \tag{10}\] \[X^{\bar{M}} \to \{X^{\bar{0}},X_{\bar{0}},X^{\bar{m}},X_{\bar{m}},X^{\bar{i}},X_{\bar{i}},X^{\bar{7}},X_{\bar{7}}\}\,.\]
Here, the indices range as \(\bar{m}\in\llbracket 1,3\rrbracket\) and \(\bar{i}\in\llbracket 4,6\rrbracket\), and for future convenience we also introduce \(X^{\bar{a}}=\{X^{\bar{1}},\ X^{\bar{7}}\}\). A vacuum of this gauged supergravity is specified by a coset representative \(\mathcal{V}\) in (5), that extremises the scalar potential and defines the scalar matrix \(M_{\bar{K}\bar{L}}=\mathcal{V}_{\bar{K}}{}^{\bar{P}}\mathcal{V}_{\bar{L}}{}^ {\bar{P}}\). In the basis (10) and with the \(\mathfrak{so}(8,8)\) generators normalised as \((T^{\bar{M}\bar{N}})_{\bar{P}}{}^{\bar{Q}}=2\,\delta_{\bar{P}}{}^{[\bar{M}} \eta^{\bar{N}]\bar{Q}}\), the \((\omega,\zeta)\) solution can be characterised by
\[\mathcal{V}_{\bar{M}}{}^{\bar{N}}=\exp\!\Big{[}-\omega\,T^{\bar{3}}{}_{\bar{3 }}-\frac{\omega\zeta}{1-e^{-\omega}}\big{(}T^{\bar{3}\bar{7}}-T^{\bar{3}}{}_{ \bar{7}}\big{)}\Big{]}\,, \tag{11}\]
with all points sharing the AdS radius \(\ell^{2}_{\mathrm{AdS}}=-2/V_{0}\).
To describe the ten-dimensional fields in a duality covariant language, we resort to \(\mathrm{SO}(8,8)\) ExFT [15], whose bosonic fields are
\[\{g_{\mu\nu},\ \mathcal{M}_{MN},\ \mathcal{A}_{\mu}^{MN},\ \mathcal{B}_{\mu\, MN}\}\,, \tag{12}\]
with \(\mu\in\llbracket 0,2\rrbracket\) and \(M,N\in\llbracket 1,16\rrbracket\) in the fundamental of \(\mathrm{SO}(8,8)\). All these fields depend on both external coordinates \(x^{\mu}\) and internal ones \(Y^{MN}\), with the latter in the adjoint of \(\mathrm{SO}(8,8)\). The 7-dimensional internal coordinates \(y^{i}\) parametrising the three-sphere and torus are embedded in \(Y^{MN}\). To ensure that the fields depend only on \(y^{i}\), the coordinate dependence is subject to the section constraints
\[\partial_{[MN}\otimes\partial_{PQ]}=0,\quad\eta^{PQ}\partial_{MP}\otimes \partial_{NQ}=0, \tag{13}\]
which can be solved by breaking
\[\mathrm{SO}(8,8) \supset \mathrm{SO}(1,1)\times\mathrm{GL}(7)\,, \tag{14}\] \[X^{M} \longrightarrow \{X^{0},\ X_{0},X^{i},\ X_{i}\}\,,\]
and restricting the coordinate dependence to \(y^{i}=Y^{i0}\). We align ExFT indices with the ones in the three-dimensional theory by embedding \(\mathrm{GL}(3)\times\mathrm{GL}(3)\times\mathrm{SO}(1,1)\subset\mathrm{GL}(7)\) as in (10).
The explicit dictionary between the \(\mathrm{SO}(8,8)\)-ExFT generalised metric and the internal components of the NSNS fields is given by
\[\mathcal{M}^{00}=\hat{g}^{-1}e^{\hat{\Phi}/2}\,,\quad\mathcal{M}^{0i}=\tfrac{1}{6!}\,\mathcal{M}^{00}\varepsilon^{ij_{1}\ldots j_{6}}\,\bar{b}_{j_{1}\ldots j_{6}}\,, \tag{15}\] \[\mathcal{M}^{00}\mathcal{M}^{ij}-\mathcal{M}^{0i}\mathcal{M}^{0j}=\hat{g}^{-1}\hat{g}^{ij}\,,\] \[\mathcal{M}^{00}\mathcal{M}^{i}{}_{j}-\mathcal{M}^{0i}\mathcal{M}^{0}{}_{j}=\hat{g}^{-1}\hat{g}^{ik}\,b_{kj}\,,\]
where \(\hat{g}_{ij}\) is the purely internal block of the ten-dimensional metric in Einstein frame, and \(\hat{g}\) its determinant. The fields \(b\) and \(\bar{b}\) are not directly related to the higher-dimensional two-form, but retrieve its field strength as
\[\hat{H}=\mathrm{d}b+e^{\hat{\Phi}/8}\star_{10}\mathrm{d}\bar{b}\,. \tag{16}\]
Upon solving the section conditions, contact with gauged supergravity is achieved through the gSS Ansatz
\[g_{\mu\nu}(x,Y) =\rho^{-2}g_{\mu\nu}(x)\,, \tag{17}\] \[\mathcal{M}_{MN}(x,Y) =U_{M}{}^{\bar{M}}U_{N}{}^{\bar{N}}M_{\bar{M}\bar{N}}(x)\,,\] \[\mathcal{A}_{\mu}^{MN}(x,Y) =\sqrt{2}\,\rho^{-1}(U^{-1})_{\bar{M}}{}^{M}(U^{-1})_{\bar{N}}{}^{ N}A_{\mu}^{\bar{M}\bar{N}}(x)\,,\] \[\mathcal{B}_{\mu KL}(x,Y) =-\frac{\rho^{-1}}{2\sqrt{2}}U_{M\bar{N}}\partial_{KL}(U^{-1})_{ \bar{M}}{}^{M}A_{\mu}^{\bar{M}\bar{N}}(x)\,,\]
with \(\rho(Y)\) a scale factor and \(U_{M}{}^{\bar{M}}(Y)\) an element of \(\mathrm{SO}(8,8)\) controlling the twisting of the \(3d\) metric, vectors and scalars by the internal coordinates. The relevant pair \((\rho,\,U)\) which recovers (8) can be constructed out of the \(\mathrm{SO}(4,4)\)-ExFT parallelisation discussed in [9] by embedding it in the \(\{X^{0},X_{0},X^{m},X_{m}\}\) block in (10).
The solutions advertised in (2) then follow from introducing the Ansatz (17) with the \(3d\) representative (11) in (15). We choose our coordinates as
\[Y^{m,0} =\{\cos\alpha\,\cos\beta\,,\cos\alpha\,\sin\beta\,,\sin\alpha\, \cos\gamma\}\,, \tag{18}\] \[Y^{a,0} =y^{a}\,,\]
with \(y^{a}\sim y^{a}+1\) parameterising \(\mathrm{T}^{4}\), and the angles \(0\leq\alpha\leq\frac{\pi}{2}\) and \(0\leq\beta,\gamma\leq 2\pi\) describing a deformed three-sphere with metric
\[\mathrm{d}s^{2}(M_{\omega,\zeta}^{3})=\] \[\mathrm{d}\alpha^{2}+e^{\omega}\Delta^{4}\big{(}\cos^{2}\!\alpha\, \mathrm{d}\beta^{2}+(\zeta^{2}+e^{-2\omega})\sin^{2}\!\alpha\,\mathrm{d} \gamma^{2}\big{)}\] \[\qquad-e^{2\omega}\zeta^{2}\Delta^{8}\big{(}\cos^{2}\!\alpha\, \mathrm{d}\beta-\sin^{2}\!\alpha\,\mathrm{d}\gamma\big{)}^{2}\,. \tag{19}\]
In terms of these coordinates, the solution reads
\[e^{\hat{\Phi}} =\Delta^{2},\] \[\mathrm{d}\hat{s}_{\mathrm{s}}^{2} =\mathrm{d}s^{2}(\mathrm{AdS}_{3})+\mathrm{d}s^{2}(M_{\omega,\zeta} ^{3})+\delta_{\mathrm{ij}}\,\mathrm{d}y^{\mathrm{i}}\mathrm{d}y^{\mathrm{j}}\] \[\quad+\big{[}\mathrm{d}y^{7}+e^{\omega}\zeta\Delta^{4}\,\big{(} \cos^{2}\!\alpha\,\mathrm{d}\beta-\sin^{2}\!\alpha\,\mathrm{d}\gamma\big{)} \big{]}^{2},\] \[\hat{H}_{(3)} =2\,\mathrm{vol}(\mathrm{AdS}_{3})+\sin(2\alpha)\,\Delta^{8}e^{2\omega} \tag{20}\] \[\quad\times\mathrm{d}\alpha\wedge(\mathrm{d}\beta+\zeta\mathrm{d} y^{7})\wedge\big{(}(\zeta^{2}+e^{-2\omega})\mathrm{d}\gamma-\zeta\mathrm{d}y^{7} \big{)}\,,\]
with the function
\[\Delta^{2}=\frac{e^{-\omega/2}}{\sqrt{1+(\zeta^{2}+e^{-2\omega}-1)\cos^{2}\! \alpha}}\,, \tag{21}\]
and the string-frame metric in (20) given by \(\hat{g}^{\mathrm{s}}_{\hat{\mu}\hat{\nu}}=e^{\hat{\Phi}/2}\hat{g}_{\hat{\mu}\hat{\nu}}\), with \(\hat{g}_{\hat{\mu}\hat{\nu}}\) the Einstein-frame metric. For generic values of the marginal parameters, the solution preserves (3), with the abelian factors acting as shifts on \(\beta\), \(\gamma\) and the angles on the torus, and \(\mathrm{SU}(2)_{\mathrm{diag}}\) as rigid rotations preserving \(\delta_{\mathrm{ij}}\,\mathrm{d}y^{\mathrm{i}}\mathrm{d}y^{\mathrm{j}}\).
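As a quick consistency check (ours, not from the text), one can verify symbolically that the warp factor (21) and the deformed metric (19) collapse to the round unit three-sphere when \(\omega=\zeta=0\):

```python
import sympy as sp

alpha, omega, zeta = sp.symbols('alpha omega zeta', real=True)

# Warp factor (21), with Delta2 standing for Delta^2:
Delta2 = sp.exp(-omega/2) / sp.sqrt(1 + (zeta**2 + sp.exp(-2*omega) - 1)*sp.cos(alpha)**2)

# (beta, gamma)-block components of the metric (19):
g_bb = sp.exp(omega)*Delta2**2*sp.cos(alpha)**2 \
       - sp.exp(2*omega)*zeta**2*Delta2**4*sp.cos(alpha)**4
g_gg = sp.exp(omega)*Delta2**2*(zeta**2 + sp.exp(-2*omega))*sp.sin(alpha)**2 \
       - sp.exp(2*omega)*zeta**2*Delta2**4*sp.sin(alpha)**4
g_bg = sp.exp(2*omega)*zeta**2*Delta2**4*sp.cos(alpha)**2*sp.sin(alpha)**2

round_point = {omega: 0, zeta: 0}
assert sp.simplify(Delta2.subs(round_point) - 1) == 0
assert sp.simplify(g_bb.subs(round_point) - sp.cos(alpha)**2) == 0
assert sp.simplify(g_gg.subs(round_point) - sp.sin(alpha)**2) == 0
assert sp.simplify(g_bg.subs(round_point)) == 0
```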
Effectively, the \(\zeta\) modulus controls the fibration of \(M^{4}\) in (2). When setting \(\zeta^{2}=1-e^{-2\omega}\), \(M_{\omega,\zeta}^{3}\) itself becomes a Hopf fibration, \(S_{\theta}^{1}\hookrightarrow M_{\omega}^{3}\to\mathbb{CP}^{1}\), and the family of solutions in (20) simplifies to
\[\hat{\Phi} =-\frac{\omega}{2},\] \[\mathrm{d}\hat{s}_{\mathrm{s}}^{2} =\mathrm{d}s^{2}\big{(}\mathrm{AdS}_{3}\big{)}+\delta_{\mathrm{ij }}\,\mathrm{d}y^{\mathrm{i}}\mathrm{d}y^{\mathrm{j}}+\mathrm{d}s^{2}\big{(} \mathbb{CP}^{1}\big{)}+e^{-2\omega}\,\mathbf{\eta}^{2}\] \[\quad+\big{(}\mathrm{d}y^{7}+\sqrt{1-e^{-2\omega}}\,\mathbf{\eta} \big{)}^{2}\,, \tag{22}\] \[\hat{H}_{(3)} =2\,\mathrm{vol}(\mathrm{AdS}_{3})+2\,\mathbf{\eta}\wedge\mathbf{J}+2 \sqrt{1-e^{-2\omega}}\,\mathbf{J}\wedge\mathrm{d}y^{7}\,,\]
which, away from the scalar origin, preserves the \(\mathcal{N}\!=(0,4)\) superalgebra in (4). The \(\mathrm{SU(2)_{R}}\) there is realised as the isometries of the Fubini-Study metric on \(\mathbb{CP}^{1}\), and \(\mathrm{U(1)_{L}}\) as shifts of the angle \(\theta\) along the Hopf fibre. Here, we define
\[\mathbf{\eta}=\cos^{2}\!\alpha\,\mathrm{d}\beta-\sin^{2}\!\alpha\, \mathrm{d}\gamma\,, \tag{23}\]
together with \(\mathbf{J}\) and \(\mathbf{\Omega}\); these are, respectively, the contact, Kähler and holomorphic forms of the Sasaki-Einstein structure on \(S^{3}\). They satisfy
\[\mathrm{d}\mathbf{\eta}=2\mathbf{J}\,,\qquad\quad\mathrm{d}\mathbf{\Omega}=2 i\mathbf{\eta}\wedge\mathbf{\Omega}\,, \tag{24}\] \[\mathbf{J}\wedge\mathbf{\Omega}=0\,,\qquad\mathbf{\eta}\wedge\mathbf{J}=\frac{i}{ 2}\mathbf{\eta}\wedge\mathbf{\Omega}\wedge\mathbf{\bar{\Omega}}=\,\mathrm{vol}(S^{3})\,.\]
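The first relation in (24) can be checked directly. In the sketch below, the explicit coefficient form of \(\mathbf{J}\) is our own inference from (23)-(24), not a quoted expression:

```python
import sympy as sp

a, b, g = sp.symbols('alpha beta gamma', real=True)
coords = (a, b, g)

# eta = cos^2(alpha) d(beta) - sin^2(alpha) d(gamma), cf. (23)
eta = [0, sp.cos(a)**2, -sp.sin(a)**2]

# Exterior derivative of a 1-form: (d eta)_{ij} = d_i eta_j - d_j eta_i.
d_eta = sp.Matrix(3, 3, lambda i, j: sp.diff(eta[j], coords[i]) - sp.diff(eta[i], coords[j]))

# Candidate J = -(1/2) sin(2 alpha) d(alpha) ^ (d(beta) + d(gamma)),
# our inference, stored as an antisymmetric coefficient matrix.
J = sp.zeros(3, 3)
J[0, 1] = J[0, 2] = -sp.sin(2*a) / 2
J[1, 0] = J[2, 0] = sp.sin(2*a) / 2

assert sp.simplify(d_eta - 2*J) == sp.zeros(3, 3)  # d(eta) = 2 J, first eq. of (24)
```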
The solution (22) is analogous to the \(\mathcal{N}=4\) vacua found in [16] in the context of \(\mathrm{AdS}_{3}\times S^{3}\times S^{3}\times S^{1}\). Recently, similar solutions have appeared in [17; 18; 19]. However, those solutions require the presence of D-branes which sit outside the S-duality orbit of our purely F1-NS5 configuration. The presence of the aforementioned fibrations has also been an obstacle for obtaining them as the near horizon limit of brane intersection with flat branes.
From a string worldsheet perspective, the configuration (20) can be described as a deformation of the \(\mathrm{SL}(2,\mathbb{R})\times\mathrm{SU}(2)\times\mathrm{U}(1)^{4}\) WZW model [20; 21] corresponding to the undeformed solution [22]. Focusing on (22) for simplicity, the operator controlling the deformation is \(J^{z}_{\mathrm{SU}(2)}\bar{J}_{\mathrm{U}(1)_{7}}\), where \(J^{z}_{\mathrm{SU}(2)}\) is a component of the holomorphic current realising the left-moving copy of \(\mathrm{SU}(2)\) in the symmetry group, and \(\bar{J}_{\mathrm{U}(1)_{7}}\) the anti-holomorphic current corresponding to shifts in \(y^{7}\). Being the product of conserved (anti-)holomorphic commuting currents, the operator \(J^{z}_{\mathrm{SU}(2)}\bar{J}_{\mathrm{U}(1)_{7}}\) is exactly marginal [23] and breaks the superalgebra from (1) to (4). Analogously, in the conformal \(\mathrm{Sym}^{N}(\mathbb{T}^{4})\) theory conjectured to sit at the boundary of \(\mathrm{AdS}_{3}\) [24; 25], the single-particle operator realising the deformation can be identified as
\[\mathcal{O}\sim\sum_{k}^{N}(j^{z}_{\mathrm{SU}(2)}\bar{j}_{\mathrm{T}^{4}})_{k}\,, \tag{25}\]
with now \(j^{z}_{\mathrm{SU}(2)}\) a component of the left-moving R-symmetry current and \(\bar{j}_{\mathrm{T}^{4}}\) one of the right-moving currents of \(\mathrm{T}^{4}\). The sum in (25) ensures that this operator survives the orbifold projection. Similar considerations can be made for (20), with the deformations now breaking supersymmetry completely.
If one perturbs the Scherk-Schwarz parallelisation in (17) à la [10], the spectrum of modes that only excite NSNS fields can be retrieved. We thus consider the following expansion in (17) [11]:

\[g_{\mu\nu}(x) \to\bar{g}_{\mu\nu}(x)+h_{\mu\nu}{}^{\Lambda(p_{a})}(x)\mathcal{Y}^{\Lambda(p_{a})}\,,\] \[M_{\bar{M}\bar{N}}(x) \to\bar{M}_{\bar{M}\bar{N}}+j_{\bar{M}\bar{N}}{}^{\Lambda(p_{a})}(x)\mathcal{Y}^{\Lambda(p_{a})}\,, \tag{26}\] \[A^{\bar{M}\bar{N}}_{\mu}(x) \to a^{\bar{M}\bar{N}\,\Lambda(p_{a})}_{\mu}(x)\mathcal{Y}^{\Lambda(p_{a})}\,,\]
where the background is described by
\[\{g_{\mu\nu},\ M_{\bar{M}\bar{N}},\,A^{\bar{M}\bar{N}}_{\mu}\}=\{\bar{g}_{\mu\nu}, \ \bar{M}_{\bar{M}\bar{N}},\,0\}\,, \tag{27}\]
and \(\{h_{\mu\nu}{}^{\Lambda(p_{a})},\ j_{\bar{M}\bar{N}}{}^{\Lambda(p_{a})},\ a^{\bar{M}\bar{N}\,\Lambda(p_{a})}_{\mu}\}\) are the perturbations, expanded in a basis of scalar harmonics of the maximally symmetric \(S^{3}\times\mathrm{T}^{4}\) configuration. As such, they furnish the infinite-dimensional reducible representation of \(\mathrm{SO}(4)\times\mathrm{U}(1)^{4}\)

\[\mathcal{Y}^{\Lambda(p_{a})}=\mathcal{Y}^{\Lambda}\,e^{2\pi i\sum_{a}p_{a}y^{a}}\in\bigoplus_{p_{a}\in\mathbb{Z}^{4}}\ \bigoplus_{n=0}^{\infty}\Big{(}\frac{n}{2},\frac{n}{2}\Big{)}_{(p_{a})}\,. \tag{28}\]

Here, \(\Lambda\) denotes the Kaluza-Klein index on \(S^{3}\), and the corresponding harmonics can be expanded as

\[\mathcal{Y}^{\Lambda}=\big\{1,\ \mathcal{Y}^{\alpha},\ \mathcal{Y}^{\langle\alpha}\mathcal{Y}^{\beta\rangle},\ \ldots\big\}\,, \tag{29}\]

in terms of the fundamental harmonics \(\mathcal{Y}^{\alpha}\) of \(S^{3}\), with angle brackets denoting traceless symmetrisation. The twist matrices in (17) act on these harmonics as

\[\rho^{-1}(U^{-1})_{\bar{M}}{}^{M}(U^{-1})_{\bar{N}}{}^{N}\,\partial_{MN}\,\mathcal{Y}^{\Lambda(p_{a})}=-\,\mathring{\mathcal{T}}_{\bar{M}\bar{N}}{}^{(p_{a})\Lambda\Sigma}\,\mathcal{Y}^{\Sigma(p_{a})}\,, \tag{30}\]

which, following (28), can in turn be decomposed as
\[\mathring{\mathcal{T}}_{\bar{M}\bar{N}}{}^{(p_{a})\Lambda\Sigma}=\mathring{\mathcal{T}}_{\bar{M}\bar{N}}{}^{\Lambda\Sigma}+\delta^{\Lambda\Sigma}\,\mathring{\mathcal{T}}_{\bar{M}\bar{N}}{}^{(p_{a})}\,. \tag{31}\]

For our twist, the \(\mathrm{SO}(4)\) piece \(\mathring{\mathcal{T}}_{\bar{M}\bar{N}}{}^{\Lambda\Sigma}\) has non-vanishing components

\[\mathring{\mathcal{T}}_{\bar{m}\bar{0}}{}^{\alpha\beta}=\sqrt{2}\,\delta_{4}^{[\alpha}\delta_{\bar{m}}^{\beta]}\,,\qquad\mathring{\mathcal{T}}^{\bar{m}}{}_{\bar{0}}{}^{\alpha\beta}=\frac{1}{\sqrt{2}}\,\varepsilon^{\bar{m}4\alpha\beta}\,, \tag{32}\]

when acting on the level \(n=1\) harmonics. Higher-level tensors can then be constructed recursively from (32) [9]. Similarly, the \(\mathrm{U}(1)^{4}\) block is given by

\[\mathring{\mathcal{T}}_{\bar{a}\bar{0}}{}^{(p_{a})}=-\frac{1}{\sqrt{2}}\,2\pi i\,p_{a}\,. \tag{33}\]
Introducing (26) into the ExFT equations of motion and keeping only terms linear in the perturbations, one can read off mass matrices whose eigenvalues, modulo the removal of redundancies and Goldstone modes [9], are the masses of the modes in the KK spectrum that only excite NSNS fields. To additionally capture perturbations exciting vectors in the heterotic theory, we can embed SO(8, 8) into SO(8, 24) with trivial components on the SO(16) block. On the other hand, to describe the modes that excite RR fields of type II supergravities, the SO(8, 8) theory must be embedded in E\({}_{8(8)}\) as
\[\text{E}_{8(8)} \supset \text{SO(8,8)}\,,\] \[\mathbf{248} \rightarrow \mathbf{120}+\mathbf{128}_{\text{s}}\,, \tag{34}\] \[t^{\mathcal{M}} \rightarrow \{t^{[MN]},\ t^{\mathcal{A}}\}\,,\]
and analogously for barred indices. The relevant \(3d\) gauged supergravity is described by a symmetric embedding tensor \(X_{\bar{\mathcal{M}}\bar{\mathcal{N}}}\) living in the \(\mathbf{1}\oplus\mathbf{3875}\) representation of E\({}_{8(8)}\) and subject to [26; 27]
\[X_{\bar{\mathcal{R}}\bar{\mathcal{P}}}\,X_{\bar{\mathcal{S}}(\bar{\mathcal{M} }}\,f_{\bar{\mathcal{N}})}{}^{\mathcal{R}\mathcal{S}}=0\,, \tag{35}\]
with \(f_{\bar{\mathcal{N}}\bar{\mathcal{R}}}{}^{\bar{\mathcal{S}}}\) the E\({}_{8(8)}\) structure constants. Taking the latter as
\[f_{MN,PQ}{}^{RS} =-8\,\delta_{[M}{}^{[R}\eta_{N][P}\delta_{Q]}{}^{S]}\,,\] \[f_{MN,\mathcal{A}}{}^{\mathcal{B}} =\frac{1}{2}\,(\Gamma_{MN})_{\mathcal{A}}{}^{\mathcal{B}}\,, \tag{36}\] \[f_{\mathcal{A}\mathcal{B}}{}^{MN} =-\frac{1}{2}\,\Gamma_{\mathcal{A}\mathcal{B}}^{MN}\,,\]
the E\({}_{8(8)}\) quadratic constraint (35) is solved by embedding the SO(8, 8) embedding tensor (8) in \(X_{\bar{\mathcal{M}}\bar{\mathcal{N}}}\) as [13]
\[X_{\bar{K}\bar{L}|\bar{M}\bar{N}} =2\,\Theta_{\bar{K}\bar{L}|\bar{M}\bar{N}}\,, \tag{37}\] \[X_{\bar{\mathcal{A}}\bar{\mathcal{B}}} =-\theta\,\eta_{\bar{\mathcal{A}}\bar{\mathcal{B}}}+\tfrac{1}{48} \,\Gamma_{\bar{\mathcal{A}}\bar{\mathcal{B}}}^{\bar{K}\bar{L}\bar{M}\bar{N}} \theta_{\bar{K}\bar{L}\bar{M}\bar{N}}\,,\]
in terms of the chiral gamma matrices of SO(8, 8) and the charge conjugation matrix \(\eta_{\bar{\mathcal{A}}\bar{\mathcal{B}}}\). The \((\omega,\zeta)\) family of solutions is then characterised by the E\({}_{8(8)}\)/SO(16) representative
\[\mathcal{V}_{\bar{\mathcal{M}}}{}^{\bar{\mathcal{N}}}=\exp\Bigl{[}-\omega\,f^{\bar{3}}{}_{\bar{3}}-\frac{\omega\zeta}{1-e^{-\omega}}\bigl{(}f^{\bar{3}\bar{7}}-f^{\bar{3}}{}_{\bar{7}}\bigr{)}\Bigr{]}\,, \tag{38}\]

in complete analogy with (11). The Kaluza-Klein fluctuations around these backgrounds are then introduced as in (26), with the harmonics in (28). Again, thanks to the choice of harmonics, the mass operators that can be read off from the linearised equations of motion in ExFT become algebraic matrices, given that
\[\rho^{-1}(U^{-1})_{\bar{\mathcal{M}}}{}^{\mathcal{M}}\partial_{\mathcal{M}} \mathcal{Y}^{\Lambda(p_{a})}=-\,\mathcal{T}_{\bar{\mathcal{M}}}{}^{(p_{a}) \Delta\Sigma}\,\mathcal{Y}^{\Sigma(p_{a})}\,, \tag{45}\]
with only non-vanishing components \(\mathcal{T}_{\bar{\mathcal{M}}\bar{N}}=2\,\mathring{\mathcal{T}}_{\bar{ \mathcal{M}}\bar{N}}\). Further details on these \(\mathrm{E}_{8(8)}\)-covariant Kaluza-Klein mass matrices will be given elsewhere [30].
Armed with the ExFT mass matrices for the KK modes, we have computed the masses in the different \(3d\) supergravities and ExFTs for the first few levels on the \(S^{3}\) and at arbitrary level on the \(\mathrm{T}^{4}\), for bosons and fermions. These results can be encapsulated in a simple master formula in terms of the charges of the modes under the relevant symmetry (super-)algebra.
The Kaluza-Klein spectrum of type II supergravities on the round \(\mathrm{AdS}_{3}\times S^{3}\times\mathrm{T}^{4}\) organises into supermultiplets of (1). We denote by \(p_{a}\) the \(\mathrm{U}(1)^{4}\) charges, and long multiplets of \(\mathrm{SU}(2)\times\mathrm{SU}(2|1,1)\) as \([\Delta,j^{-},j^{+}]\) (see appendix A of ref. [9] for a review), with \(\Delta\) the conformal dimension of the conformal primary and \(j^{+},j^{-}\) its spins under the two \(\mathrm{SU}(2)\) factors. The type II spectrum at this point is given by (_cf._[31])
\[\mathcal{S}=\bigoplus_{\begin{subarray}{c}j^{+}\geq 0\\ p_{a}\in\mathbb{Z}^{4}\end{subarray}}\Big{(}\big{[}\Delta_{\mathrm{L}},0,j^{+} \big{]}\otimes\big{[}\Delta_{\mathrm{R}},0,j^{+}\big{]}\Big{)}_{\{p_{a}\}}\,, \tag{46}\]
where the conformal dimension of the primary of each factor is
\[\Delta_{\mathrm{L}}=\Delta_{\mathrm{R}}=-\frac{1}{2}+\frac{1}{2}\sqrt{1+f}\,, \tag{47}\]
with \(f\) depending on the quantum numbers as
\[f=4\,j^{+}\big{(}j^{+}+1\big{)}+\sum_{a}(2\pi\,p_{a})^{2}\,. \tag{48}\]
The unitary bound \(\Delta_{\mathrm{L},\mathrm{R}}=j^{+}\) is saturated for \(p_{a}=0\) and the multiplets get shortened (see [9]).
Turning on generic \((\omega,\zeta)\) deformations, the spectrum organises itself in representations of (3). The spectrum at arbitrary points of the family can be obtained by shifting the dimension of each physical mode in (46) as \(f\to f+\Delta f\) with
\[\Delta f=\frac{e^{2\omega}}{4}\Big{(}(q_{\mathrm{L}}-q_{\mathrm{R}})+(q_{ \mathrm{L}}+q_{\mathrm{R}})\,(e^{-2\omega}+\zeta^{2})+4\pi\,p_{\mathcal{T}}\, \zeta\Big{)}^{2}-q_{\mathrm{L}}^{2}\,, \tag{49}\]
where \(q_{\mathrm{L},\mathrm{R}}\) denote the (integer-normalised) charges under \(\mathrm{U}(1)_{\mathrm{L},\mathrm{R}}\), respectively.
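To make these spectral formulae concrete, the following minimal Python sketch evaluates eqs. (47)-(49); the charge assignments in the example are illustrative placeholders, not values taken from the paper.

```python
import math

def f_round(j_plus, p):
    """Quantum-number combination f at the round point, eq. (48)."""
    return 4 * j_plus * (j_plus + 1) + sum((2 * math.pi * pa) ** 2 for pa in p)

def delta_f(omega, zeta, qL, qR, pT):
    """Shift of f along the (omega, zeta) family, eq. (49); pT is the
    charge denoted p_T in the text."""
    return (math.exp(2 * omega) / 4) * (
        (qL - qR)
        + (qL + qR) * (math.exp(-2 * omega) + zeta ** 2)
        + 4 * math.pi * pT * zeta
    ) ** 2 - qL ** 2

def dimension(f):
    """Conformal dimension of the primary, eq. (47)."""
    return -0.5 + 0.5 * math.sqrt(1 + f)

# A j+ = 1 mode with vanishing torus charges, at a deformed point:
f = f_round(1, (0, 0, 0, 0)) + delta_f(omega=0.1, zeta=0.2, qL=1, qR=-1, pT=0)
print(dimension(f))
```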
In the heterotic case, the \(\mathcal{N}=(0,4)\) supergroup organising the spectrum at the scalar origin is
\[\begin{split}\big{[}\mathrm{SU}(2)_{\mathrm{l}}\times\mathrm{SU}( 2)_{\mathrm{L}}\big{]}\times\big{[}\mathrm{SU}(2)_{\mathrm{r}}\times& \mathrm{SU}(2|1,1)_{\mathrm{R}}\big{]}\\ &\times\mathrm{U}(1)^{4}\times\mathrm{SO}(16).\end{split} \tag{50}\]
The spectrum follows from eq. (46) as a truncation that only keeps those states with integer spin under \(\mathrm{SU}(2)_{\mathrm{l}}\), and further supplemented at each level by 16 copies of the multiplet
\[\Big{(}\big{(}\Delta_{\mathrm{L}},0,j^{+}\big{)}\otimes\big{[}\Delta_{\mathrm{ R}},0,j^{+}\big{]}\Big{)}_{p_{4},p_{5},p_{6},p_{7}}, \tag{51}\]
with \(\Delta_{\mathrm{L}}=\frac{1}{2}+\frac{1}{2}\sqrt{1+f}\) and \(\Delta_{\mathrm{R}}\) in (47), forming an \(\mathrm{SO}(16)\) vector. For the \((\omega,\zeta)\) deformation, the conformal dimension of each physical field gets shifted as in eq. (49).
The masses \(m_{(0)}\) of all scalars in the spectrum can be retrieved using eqs. (46)-(49) at any point of the family of non-supersymmetric solutions (20), and analogously for the heterotic case. The Breitenlohner-Freedman bound [32] in \(3d\), _i.e._\(\big{(}m_{(0)}\ell_{\mathrm{AdS}}\big{)}^{2}\geq-1\), shows that there are only two potentially unstable types of modes for each level \(n=2\,j^{+}\), \(p_{a}=0\) in (28). As conjectured in [9], those modes have masses
\[\begin{split}-(4+2\,n)+(2+n)^{2}\ e^{2\omega}\ [2]\,,\\ -(4+2\,n)+(2+n)^{2}\ e^{-2\omega}\ \big{(}1+e^{2\omega}\,\zeta^{2} \big{)}^{2}\ [2]\,,\end{split} \tag{52}\]
with the integers between square brackets indicating their two-fold degeneracy. The stability condition is most restrictive at the gauged supergravity level \(n=0\). Therefore, the configuration is perturbatively stable if \((\omega,\zeta)\) lie within the range
\[e^{-\omega}\leq\frac{2}{\sqrt{3}}\,,\ \ \ \ \zeta^{2}+\left(e^{-\omega}-\frac{ \sqrt{3}}{4}\right)^{2}\geq\frac{3}{16}. \tag{53}\]
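As a sanity check of this stability window, here is a short Python sketch that scans the mode masses of eq. (52) against the \(3d\) BF bound; the level cutoff `n_max` is an arbitrary choice for illustration.

```python
import math

def mode_masses(n, omega, zeta):
    """Squared masses (in AdS units) of the two potentially unstable
    scalar towers of eq. (52), each two-fold degenerate."""
    m1 = -(4 + 2 * n) + (2 + n) ** 2 * math.exp(2 * omega)
    m2 = -(4 + 2 * n) + (2 + n) ** 2 * math.exp(-2 * omega) \
         * (1 + math.exp(2 * omega) * zeta ** 2) ** 2
    return m1, m2

def is_perturbatively_stable(omega, zeta, n_max=50):
    """Check the 3d BF bound (m l_AdS)^2 >= -1 level by level; eq. (53)
    states that the n = 0 condition is the strongest one."""
    return all(m >= -1.0
               for n in range(n_max + 1)
               for m in mode_masses(n, omega, zeta))

print(is_perturbatively_stable(0.0, 0.0))    # round point: stable
print(is_perturbatively_stable(-1.0, 0.0))   # outside region (53): unstable
```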
In this Letter, Exceptional Field Theory has been utilised to find a family of non-supersymmetric deformations of the \(\mathrm{AdS}_{3}\times S^{3}\times\mathrm{T}^{4}\) solutions of heterotic and type II supergravities. We showed that these new solutions are perturbatively stable within a finite region of the parameter space and that there exists a one-dimensional subspace where \(\mathcal{N}=(0,4)\) supersymmetry is preserved. Moreover, the holomorphicity arguments in the worldsheet formulation and boundary CFT\({}_{2}\) description suggest that these are solutions of string theory and not merely large-\(N\) configurations. It will be interesting to explicitly compute the 1-loop corrections à la [33; 34] in the future to check this expectation.
The stability of the solution against non-perturbative decay channels needs to be investigated. One possible decay channel for non-supersymmetric \(\mathrm{AdS}_{3}\) solutions is the destabilisation of the stack of branes that comprise them [35; 36; 37; 38]. We have explicitly checked that this is not the case for our non-supersymmetric solutions by considering probe F1- as well as D\(p\)- and NS5-branes with no worldvolume fluxes. These branes can be embedded in \(\mathrm{AdS}_{3}\), possibly wrapping the internal geometry, and their worldvolume actions show that they are attracted to the original stack, instead of emitted from it.
Another possible decay channel is the nucleation of bubbles, including Coleman-de Luccia bubbles [39] and bubbles of nothing [40; 41]. We leave this question for
future work, but in line with the arguments in [8], one could expect the family to be protected because it sits only a marginal deformation away from the SUSY vacua.
###### Acknowledgements.
We are grateful to Iosif Bena, Yolanda Lozano, Niall Macpherson, Emil Martinec, Chris N. Pope and Ergin Sezgin for helpful discussions and correspondence. We would also like to specially thank Emanuel Malek and Henning Samtleben for their feedback on a first version of this manuscript and collaboration on related projects. GL wants to thank the organisers of the 'Supergravity, Strings and Branes' workshop at Bogazici University, Turkey, for giving him the opportunity to present this work. CE is supported by the FWO-Vlaanderen through the project G006119N and by the Vrije Universiteit Brussel through the Strategic Research Program "High-Energy Physics". GL is supported by endowment funds from the Mitchell Family Foundation.
|
2309.12378 | Unsupervised Semantic Segmentation Through Depth-Guided Feature
Correlation and Sampling | Traditionally, training neural networks to perform semantic segmentation
required expensive human-made annotations. But more recently, advances in the
field of unsupervised learning have made significant progress on this issue and
towards closing the gap to supervised algorithms. To achieve this, semantic
knowledge is distilled by learning to correlate randomly sampled features from
images across an entire dataset. In this work, we build upon these advances by
incorporating information about the structure of the scene into the training
process through the use of depth information. We achieve this by (1) learning
depth-feature correlation by spatially correlating the feature maps with the
depth maps to induce knowledge about the structure of the scene and (2)
implementing farthest-point sampling to more effectively select relevant
features by utilizing 3D sampling techniques on depth information of the scene.
Finally, we demonstrate the effectiveness of our technical contributions
through extensive experimentation and present significant improvements in
performance across multiple benchmark datasets. | Leon Sick, Dominik Engel, Pedro Hermosilla, Timo Ropinski | 2023-09-21T11:47:01Z | http://arxiv.org/abs/2309.12378v2 | # Spatially Guiding Unsupervised Semantic Segmentation Through
###### Abstract
Traditionally, training neural networks to perform semantic segmentation required expensive human-made annotations. But more recently, advances in the field of unsupervised learning [11] have made significant progress on this issue and towards closing the gap to supervised algorithms. To achieve this, semantic knowledge is distilled by learning to correlate randomly sampled features from images across an entire dataset. In this work, we build upon these advances by incorporating information about the structure of the scene into the training process through the use of depth information. We achieve this by (1) learning depth-feature correlation by spatially correlating the feature maps with the depth maps to induce knowledge about the structure of the scene and (2) implementing farthest-point sampling to more effectively select relevant features by utilizing 3D sampling techniques on depth information of the scene. Finally, we demonstrate the effectiveness of our technical contributions through extensive experimentation and present significant improvements in performance across multiple benchmark datasets.
## 1 Introduction
Semantic segmentation plays a critical role in many of today's vision systems in a multitude of domains. These include, among others, autonomous driving, retail applications, face recognition, and many more [7, 17, 23, 27, 28]. Until recently, the main body of research in this area was focused on supervised models that require a large amount of pixel-level annotations for training. Not only is sourcing this image data often an effortful process, but also annotating the large datasets required for good performance comes at a high price. Several benchmark datasets report their annotation times. For example, the MS COCO dataset [18] required more than 28K hours of human annotations for around 164K images, and annotating a single image in the Cityscapes dataset [9] took 1.5 hours on average. These costs have triggered the advent of unsupervised semantic segmentation [8, 11, 13, 25], which aims to remove the need for labeled training data in order to train segmentation models. Recently, work by Hamilton et al. [11] has accelerated the progress towards removing the need for labels to achieve good results on semantic segmentation tasks. Their model, STEGO, uses a DINO-pretrained [6] Vision Transformer (ViT) [10] to extract features that are then distilled across the entire dataset to learn semantically relevant features, using a contrastive learning approach. The to-be-distilled features are sampled randomly from feature maps calculated from the same image, k-NN matched images as well as other negative images. Seong et al. [25] build on this process by trying to identify features that are most relevant to the model by discovering hidden positives. Their work exposes an inefficiency of random sampling in STEGO, as hidden positives sampling leads to significant improvements. But both approaches only operate in the pixel space and therefore fail to take into account the spatial layout of the scene. Not only do we humans perceive the world in 3D, but also previous work [5, 12, 26] has shown that supervised semantic segmentation can benefit greatly from spatial information during training. Inspired by these observations, we propose to incorporate spatial information in the form of depth maps into the STEGO training process. Depth is considered a product of vision and does not provide a labeled training signal. To obtain depth information for the benchmark image datasets in our evaluations, we make use of ZoeDepth [3], an off-the-shelf zero-shot monocular depth estimator, to obtain spatial information of the scene.
With our method, _DepthG_, we propose to **(1)** guide the model to learn a rough spatial layout of the scene, since
we hypothesize this will aid the network in differentiating objects much better. We achieve this by extending the contrastive process to the spatial dimension: we do not limit the model to learning Feature-Feature Correlations, but also let it learn _Depth-Feature Correlations_. Through this process, the model is guided towards pulling apart features with high distances in both the feature and the 3D space, and towards mapping them closer together if their distance is low in feature and depth space.
With the information about the spatial layout of the scene present, we furthermore propose to **(2)** spatially inform our feature sampling process by utilizing _Farthest-Point Sampling (FPS)_[21], which samples scenes equally in 3D. We show that this is beneficial for unsupervised segmentation, since for our evaluations on COCO-Stuff [4], we demonstrate state-of-the-art performance with _33% fewer feature samples_ per image compared to STEGO.
To the best of our knowledge, we are the first to propose a mechanism to incorporate 3D knowledge of the scene into unsupervised learning for 2D images _without_ encoding depth maps as part of the network input. This alleviates the risk of the model developing an input dependency, where its performance degrades at inference time since depth information is no longer available. Our approach does not rely on depth information during inference.
## 2 Related Work
### Unsupervised Semantic Segmentation
Recent works [13, 8, 11, 25] have attempted to tackle semantic segmentation without the use of human annotations. Ji et al. [13] propose IIC, a method that aims to maximize the mutual information between different augmented versions of an image. PiCIE, published by Cho et al. [8], introduces an inductive bias made up of the invariance to photometric transformations and equivariance to geometric manipulations. DINO [6] often serves as a critical component of unsupervised segmentation algorithms, since the self-supervised pre-trained ViT can produce semantically relevant features. Recent work by Seitzer et al. [24] builds upon this ability by training a model with slot attention [20] to reconstruct the feature maps produced by DINO from the different slots. The features of their object-centric model are clustered with k-means [19], where each slot is associated with a cluster. In their 2021 work, Hamilton et al. [11] have also built upon DINO features by introducing a feature distillation process with features from the same image, k-NN retrieved examples as well as random other images from the dataset. Their learned representations are finally clustered and refined with a CRF [15] for semantic segmentation. While STEGO's feature selection process is random, Seong et al. [25] introduce a more effective sampling strategy by discovering hidden positives. During training, they form task-agnostic and task-specific feature pools. For an anchor feature, they then compute the maximum similarity to any of the pool features and sample locations in the image that have greater similarity than the determined value. A more detailed introduction to both latter works is provided in Section 3.1.
### Depth For Semantic Segmentation
Previous research [12, 5, 26] has sought to incorporate depth for semantic segmentation in different settings. Wang et al. [26] propose to use depth for adapting segmentation models to new data domains. Their method adds depth estimation as an auxiliary task to strengthen the prediction of segmentation tasks. Furthermore, they approximate the pixel-wise adaptation difficulty from source to target domain through the use of depth decoders. Work by Hoyer et al. [12] explores three further strategies for how depth can be useful for segmentation. First, they propose using a shared backbone to jointly learn features for segmentation and self-supervised depth estimation, similar to Wang et al. [26]. Second, they use depth maps to introduce a data augmentation that is informed by the structure of the scene. And lastly, they detail the integration of depth into an active learning loop as part of a student-teacher setup.
Figure 1: **Guiding the feature space for unsupervised segmentation with depth information.** Our intuition behind the proposed approach is simple: For locations in the 3D space with a low distance, we guide the model to map their features closer together. Vice versa, the features are learned to be drawn apart in feature space if their distance in the metric space is large.
## 3 Method
In the following, we detail our proposed method for guiding unsupervised segmentation with depth information. An overview of our technique is presented in Figure 2.
### Preliminary
Our approach builds upon work by Hamilton et al. [11]. In their work, each image is 5-cropped and k-NN correspondences between these images are calculated using the DINO ViT [6]. Generally, STEGO uses a feature extractor \(\mathcal{F}\) to calculate a feature map \(f\in\mathbb{R}^{C\times H\times W}\) with height \(H\), width \(W\) and feature dimension \(C\) of the input image. These features are then further encoded by a segmentation head \(\mathcal{S}\) to calculate the code space \(g\in\mathbb{R}^{C\times I\times J}\) with code dimension \(C\). With the goal of forming compact clusters and amplifying the correlation of the learned features, let \(f\) and \(g\) be feature maps for a given input pair of \(x_{i}\) and \(y_{i}\), which are then used to calculate \(s:=\mathcal{S}(f)\) and \(q:=\mathcal{S}(g)\) from the segmentation head \(\mathcal{S}\). In practice, STEGO samples \(N^{2}\) vectors from the feature map during training. Hamilton et al. [11] introduced the concept of constructing the feature correspondence tensor as follows:
\[\boldsymbol{F}_{hw,ij}=\frac{f_{hw}\cdot g_{ij}}{\|f_{hw}\|\|g_{ij}\|} \tag{1}\]
where \(\cdot\) denotes the dot product. After the same computation for \(s\) and \(q\), we get \(\boldsymbol{S}_{hw,ij}\). Consequently, the feature correlation loss is defined as:
\[\mathcal{L}_{\text{Corr}}:=-\sum_{hw,ij}(\boldsymbol{F}_{hw,ij}-b)\max( \boldsymbol{S}_{hw,ij},0) \tag{2}\]
where \(b\) is a bias hyperparameter. Empirical evaluations have shown that applying spatial centering to the feature correlation loss along with zero-clamping it further improves performance. STEGO calculates these correlations for two crops from the same image and one from a different but similar image, determined by the k-NN correspondence pre-processing. Finally, negative images are sampled randomly. The final loss is a weighted sum of the different losses, where each of them has its individual weight \(\lambda_{i}\) and bias \(b_{i}\):
\[\mathcal{L}_{\text{STEGO}}=\lambda_{\text{self}}\mathcal{L}_{\text{self}}+ \lambda_{\text{knn}}\mathcal{L}_{\text{knn}}+\lambda_{\text{random}}\mathcal{ L}_{\text{random}} \tag{3}\]
After training, the resulting feature maps for a test image are clustered and refined with a conditional random field (CRF) [15].
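For concreteness, a minimal PyTorch sketch of the correspondence tensor and correlation loss of eqs. (1)-(2) follows; the tensor shapes and the bias value `b` are illustrative and not STEGO's actual hyperparameters, and the spatial-centering refinement is omitted.

```python
import torch
import torch.nn.functional as nnf

def correspondence(f, g):
    """Cosine-similarity tensor F_{hw,ij} of eq. (1).
    f: (C, H, W), g: (C, I, J) -> (H, W, I, J)."""
    f = nnf.normalize(f, dim=0)
    g = nnf.normalize(g, dim=0)
    return torch.einsum("chw,cij->hwij", f, g)

def corr_loss(f, g, s, q, b=0.25):
    """Feature correlation loss of eq. (2), without spatial centering."""
    F = correspondence(f, g)   # backbone features
    S = correspondence(s, q)   # segmentation-head codes
    return -((F - b) * S.clamp(min=0)).sum()

# Random stand-ins for DINO features and segmentation-head codes:
f, g = torch.randn(384, 28, 28), torch.randn(384, 28, 28)
s, q = torch.randn(70, 28, 28), torch.randn(70, 28, 28)
print(corr_loss(f, g, s, q))
```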
### Depth Map Generation
Since in many cases depth information about the scene is not readily available, we make use of recent progress in the field of monocular depth estimation [1, 2, 3, 16, 22] to obtain depth maps from RGB images. Recently, methods from this field have made significant progress in zero-shot depth estimation, i.e., predicting depth values for scenes from data domains not seen during training. This property makes them especially suitable for our method, since it enables us to obtain high-quality depth predictions for a wide variety of data domains without ever re-training the depth network. It also limits the computational cost of our method. We further discuss this aspect of our method in Section 5.2. We experiment with different state-of-the-art monocular depth estimators and use ZoeDepth [3] in our experiments. Given a cropped RGB image \(x_{i}\), we use the monocular depth estimator \(M\) to predict depth \(d(x_{i})_{ij}\in[0,1]\) with:
\[d(x_{i})=M(x_{i}) \tag{4}\]
After prediction, we transform \(d(x_{i})\) to be in \([0,255]\) and downsample it to match the dimensions of the feature map.
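A minimal sketch of this preprocessing step is shown below; the depth estimator is abstracted as a generic callable (standing in for ZoeDepth), and the input resolution is a placeholder.

```python
import torch
import torch.nn.functional as F

def prepare_depth(rgb_crop, depth_model, feat_hw):
    """Predict depth for a crop (eq. (4)), rescale to [0, 255] and
    downsample to the feature-map resolution.
    rgb_crop: (3, H, W); depth_model: callable returning an (H, W) map."""
    with torch.no_grad():
        d = depth_model(rgb_crop)                     # (H, W), raw depth
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)    # -> [0, 1]
    d = 255.0 * d                                     # -> [0, 255]
    d = F.interpolate(d[None, None], size=feat_hw, mode="bilinear",
                      align_corners=False)[0, 0]      # match feature map
    return d

# Usage with a dummy estimator standing in for the real depth network:
dummy = lambda x: x.mean(dim=0)
depth = prepare_depth(torch.rand(3, 224, 224), dummy, feat_hw=(28, 28))
print(depth.shape)
```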
### Depth-Feature Correlation Loss
With our depth-feature correlation loss, we aim to enforce spatial consistency in the feature map by transferring the distances from the spatial layout to the latent space.
In contrastive learning, the network is incentivized to decrease the distance in feature space for similar instances, therefore learning to map their latent representations closer together. Likewise, different instances are drawn further apart in feature distance. This can be achieved through a contrastive objective such as:
\[\mathcal{L}(\mathbf{z}_{i},\mathbf{z}_{j})=-\log\frac{\exp(\text{sim}( \mathbf{z}_{i},\mathbf{z}_{j})/\tau)}{\sum_{k=1}^{2N}\mathbb{1}_{[k\neq i]}\exp (\text{sim}(\mathbf{z}_{i},\mathbf{z}_{k})/\tau)}\]
where \(\text{sim}(\mathbf{z}_{i},\mathbf{z}_{j})\) computes the similarity score between two feature representations \(\mathbf{z}_{i}\) and \(\mathbf{z}_{j}\), and \(\tau\) is a temperature parameter that controls the sharpness of the probability distribution over similarities. \(\mathbb{1}_{[k\neq i]}\) is an indicator function that is \(1\) when \(k\neq i\) and \(0\) otherwise.
We assume the same concept to be true in 3D space: The spatial distance between two points from the same depth plateau is smaller, while the distance between a point in the foreground and one in the background is larger. Since, in both spaces, the concept of measuring difference is represented by the distance between two points, we propose to align them through our concept of _depth-feature correlation_: For large distances in the 3D space, we guide the network to produce vectors that are further apart, and vice versa. With this, we induce the model with knowledge about the spatial structure of the scene, enabling it to better differentiate between objects in the pixel and vector space. For the depth maps, just like for features, we compute a correspondence tensor.
Let \(u=d(x_{i})\) and \(v=d(y_{i})\) be the depth maps obtained for two different crops. The depth maps represent the estimated depths at each pixel of the respective image. We construct the depth correspondence tensor \(\mathbf{D}\), defined as follows:
\[\mathbf{D}_{hw,ij}=u_{hw}v_{ij}, \tag{5}\]
where \((h,w)\) and \((i,j)\) represent the pixel positions in the depth maps \(u\) and \(v\) respectively. Together with the zero clamping, our depth-feature correlation loss is defined as:
\[\mathcal{L}_{\text{DepthG}}:=-\sum_{hw,ij}(\mathbf{D}_{hw,ij}-b)\max(\mathbf{S}_{hw, ij},0) \tag{6}\]
where \(\mathbf{D}_{hw,ij}\) represents the depth correlation tensor, and \(\mathbf{S}_{hw,ij}\) represents the feature correlation tensor computed from the output features of the segmentation head \(\mathcal{S}\). By also using zero-clamping, we limit erroneous learning signals that aim to draw apart instances of the same class if they have large spatial differences.
With this, we extend the STEGO loss so it can be formulated as follows:
\[\mathcal{L}_{\text{Total}}=\mathcal{L}_{\text{STEGO}}+\lambda_{\text{DepthG}}\mathcal{L}_{\text{DepthG}} \tag{7}\]
with depth-feature correlation weight \(\lambda_{\text{DepthG}}\). By inducing depth knowledge during training _without_ encoding the depth maps as part of the model input, we alleviate the problem of the network becoming dependent on depth input, which is no longer available at test time. To the best of our knowledge, we are the first to achieve this depth distillation for unsupervised learning using only image input to the model.
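The following PyTorch sketch illustrates eqs. (5)-(7); the depth maps are assumed to be normalized for the example, and the bias and weight values are placeholders rather than tuned hyperparameters.

```python
import torch

def depth_corr(u, v):
    """Depth correspondence tensor D_{hw,ij} = u_hw * v_ij, eq. (5).
    u: (H, W), v: (I, J) -> (H, W, I, J)."""
    return torch.einsum("hw,ij->hwij", u, v)

def depthg_loss(u, v, S, b=0.5):
    """Depth-feature correlation loss of eq. (6); S is the correlation
    tensor of the segmentation-head codes, zero-clamped as in STEGO."""
    D = depth_corr(u, v)
    return -((D - b) * S.clamp(min=0)).sum()

u, v = torch.rand(28, 28), torch.rand(28, 28)    # depths of the two crops
S = torch.randn(28, 28, 28, 28)                  # stand-in for S_{hw,ij}
stego_loss = torch.tensor(0.0)                   # stands in for eq. (3)
total = stego_loss + 0.1 * depthg_loss(u, v, S)  # eq. (7), lambda = 0.1
print(total)
```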
### Depth-Guided Feature Sampling
We also aim to make the feature sampling process informed by the spatial layout of the scene. To perform sampling in the depth space, we transform the downsampled depth map \(d(x_{i})\) into a point cloud with points \(\{p_{1},p_{2},...,p_{n}\}\). On this point cloud, we apply farthest point sampling (FPS) in an iterative fashion by always selecting the next point \(p_{i_{j}}\) as the point with the maximum distance in 3D space with respect to the rest of the points \(\{p_{i_{1}},p_{i_{2}},...,p_{i_{j-1}}\}\). After having sampled \(N^{2}\) points, we end up with a set of samples \(\{p_{i_{1}},p_{i_{2}},...,p_{i_{N^{2}}}\}\), which are consequently converted to 2D sampling indices for the feature maps \(f\) and \(g\). In contrast to the data-agnostic random sampling applied in STEGO, our feature selection process takes into account the geometry of the input scene and more equally covers the spatial structure. This more equal sampling of the depth space further increases the effectiveness of our depth-feature correlation loss due to the increased diversity in selected 3D locations.
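A minimal sketch of this depth-guided sampling is given below; the pinhole back-projection with unit intrinsics is our simplifying assumption, as the exact point-cloud construction is not specified here.

```python
import torch

def depth_to_points(depth, fx=1.0, fy=1.0):
    """Back-project an (H, W) depth map into an (H*W, 3) point cloud
    with a simple pinhole model; the intrinsics are placeholders."""
    H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pts = torch.stack([xs * depth / fx, ys * depth / fy, depth], dim=-1)
    return pts.reshape(-1, 3)

def farthest_point_sampling(points, n_samples):
    """Iteratively pick the point farthest (in 3D) from those already chosen."""
    n = points.shape[0]
    chosen = torch.zeros(n_samples, dtype=torch.long)
    dist = torch.full((n,), float("inf"))
    chosen[0] = torch.randint(n, (1,)).item()
    for k in range(1, n_samples):
        d2 = (points - points[chosen[k - 1]]).pow(2).sum(dim=1)
        dist = torch.minimum(dist, d2)   # distance to nearest chosen point
        chosen[k] = int(dist.argmax())
    return chosen

depth = torch.rand(28, 28)
idx = farthest_point_sampling(depth_to_points(depth), n_samples=49)
h, w = idx // 28, idx % 28   # 2D sampling coordinates for f and g
print(h[:5], w[:5])
```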
### Guidance Scheduling
While our depth-feature correlation loss is effective at enriching the model's learning process with spatial information of the scene, we aim to keep it from interfering with the learning of feature correlations during model training. We hypothesize that our model benefits most from depth information towards the beginning of training, when its only knowledge is encoded in the feature maps output by the frozen ViT backbone. To give it a head
Figure 2: **Overview of the DepthG training process.** After 5-cropping the image, each crop is encoded by the DINO-pretrained ViT \(\mathcal{F}\) to output a feature map. Using farthest-point-sampling (FPS), we sample the 3D space equally and convert the coordinates to select samples in the feature map. The sampled features are further transformed by the segmentation head \(\mathcal{S}\). For both feature maps, the correlation tensor is computed. Following, we sample the depth map at the coordinates obtained by FPS and compute a correlation tensor in the same fashion. Finally, we compute our depth-feature correlation loss and combine it with the feature distillation loss from STEGO. We guide the model to learn depth-feature correlation for crops of the same image, while the feature distillation loss is also applied to k-NN-selected and random images.
start, we increase the weight of our depth-feature correlation loss at the start and gradually decrease its influence during training. Vice versa, the knowledge distillation process in the feature space will be emphasized more strongly as the model training progresses. In this way, the network builds upon the already learned rough structure of the scene achieved through our depth guidance process. We find an exponential decay of the weight for our loss component to work particularly well. Therefore, we update the weight \(\lambda_{\text{Depth}}\) and bias \(b_{\text{Depth}}\) every \(m\) steps according to:
\[\lambda_{\text{Depth}}(t)=\begin{cases}\lambda_{\text{Depth}}(t-1)^{\lfloor \frac{t}{m}\rfloor},&\text{if }t>0\\ \lambda_{\text{Depth}}^{\text{init}}&\text{if }t=0\end{cases} \tag{8}\]
and
\[b_{\text{Depth}}(t)=\begin{cases}b_{\text{Depth}}(t-1)^{\lfloor\frac{t}{m} \rfloor},&\text{if }t>0\\ b_{\text{Depth}}^{\text{init}}&\text{if }t=0\end{cases} \tag{9}\]
In practice, \(\lambda_{\text{Depth}}\) and \(b_{\text{Depth}}\) are never decayed to 0.
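Below is a literal Python reading of the update rules in eqs. (8)-(9); the update interval `m`, the initial values, and the lower clamp that keeps the weights from reaching 0 are all illustrative assumptions.

```python
def scheduled(value, t, m, floor=1e-4):
    """Update rule of eqs. (8)-(9), read literally: every m steps the
    current value is raised to the power floor(t / m). The clamp keeps
    the weight from ever decaying to exactly 0."""
    if t > 0 and t % m == 0:
        value = max(value ** (t // m), floor)
    return value

lam, b = 0.9, 0.6          # illustrative initial lambda_Depth, b_Depth
for t in range(1, 501):
    lam = scheduled(lam, t, m=100)
    b = scheduled(b, t, m=100)
print(lam, b)              # decayed quickly, but clamped above the floor
```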
## 4 Experiments
### Evaluation Settings
To evaluate our method, we largely follow the protocols from STEGO [11].
**Datasets and Model Sizes.** We conduct experiments on the COCO-Stuff [4], Cityscapes [9], and Potsdam-3 datasets. COCO-Stuff contains a wide variety of scenes, and its class distribution can be split into 101 classes (fine) and 27 classes (coarse). In our evaluation, we follow [11, 13, 25] and provide results on the coarse class split, COCO-Stuff 27. In contrast, Cityscapes contains traffic scenes from 50 cities from a driver-like viewpoint. Lastly, the Potsdam-3 dataset is composed of aerial, top-down images from the city of Potsdam. We use the DINO [6] backbones ViT-Small (ViT-S) and ViT-Base (ViT-B) with a patch size of \(8\times 8\), which were pre-trained in a self-supervised manner.
**Evaluation Protocols.** Similar to [11, 25], we evaluate our models in the unsupervised, clustering-based setting as well as the linear probe setting. Since the output of our model is a pixel-level map of features and not class labels, these features are clustered. Subsequently, the pseudo-labeled clusters are aligned with the ground truth labels through Hungarian matching across the entire validation dataset. To perform linear probing, an additional linear layer is added on top of the model and trained with cross-entropy loss to learn to classify the produced features.
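A minimal sketch of this cluster-to-label alignment, using the Hungarian algorithm from SciPy, might look as follows; the class count and random predictions are placeholders.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_match(pred, gt, n_classes):
    """Align cluster ids with ground-truth labels by maximising the
    pixel-overlap via the Hungarian algorithm."""
    cost = np.zeros((n_classes, n_classes), dtype=np.int64)
    for c in range(n_classes):
        for k in range(n_classes):
            cost[c, k] = np.sum((pred == c) & (gt == k))
    row, col = linear_sum_assignment(cost, maximize=True)
    mapping = dict(zip(row, col))
    return np.vectorize(mapping.get)(pred)

pred = np.random.randint(0, 3, size=(2, 8, 8))   # cluster ids
gt = np.random.randint(0, 3, size=(2, 8, 8))     # ground-truth labels
acc = np.mean(hungarian_match(pred, gt, 3) == gt)
print(acc)
```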
### COCO-Stuff
We present our evaluation on COCO-Stuff 27 in Table 1. For the ViT-S/8, our experiments show that our method is able to improve upon STEGO in most metrics, increasing unsupervised accuracy by **+8.0%** and unsupervised mIoU by **+1.1%**. When comparing our approach to Hidden Positives, a method with much more computational overhead, for the ViT-S/8, we show competitive performance for unsupervised accuracy and outperform their approach by **+1.0%** on unsupervised mIoU. When using the DINO ViT-B/8 encoder, our approach again outperforms STEGO as well as all other presented methods on unsupervised metrics. Most notably, we are able to increase the unsupervised mIoU by **+0.8%**. In their study on STEGO, Koenig et al. [14] observe that frozen DINO with the frozen STEGO layers on top already shows good performance for linear probing, even outperforming trained STEGO on linear mIoU.
### Cityscapes
We further evaluate our approach on the Cityscapes dataset [9], made up of various scenes from 50 different cities, annotated with 30 classes. As can be seen in Table 2, our method significantly outperforms STEGO as well as Hidden Positives on both metrics. For unsupervised mIoU,
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline Setting & & \multicolumn{2}{c}{Unsupervised} & \multicolumn{2}{c}{Linear} \\ \cline{2-5} Method & Model & Acc. & mIoU & Acc. & mIoU \\ \hline IIC [13] & R18+FPN & 21.8 & 6.7 & 44.5 & 8.4 \\ PiCIE [8] & R18+FPN & 48.1 & 13.8 & 54.2 & 13.9 \\ PiCIE+H [8] & R18+FPN & 50.0 & 14.4 & 54.8 & 14.8 \\ \hline STEGO [11] & ViT-S/8 & 48.3 & 24.5 & 74.4 & 38.3 \\ STEGO + HP [25] & ViT-S/8 & **57.2** & 24.6 & **75.6** & **42.7** \\ STEGO + _Ours_ & ViT-S/8 & 56.3 & **25.6** & 73.7 & 38.9 \\ \hline DINO [6, 14] & ViT-B/8 & 42.2 & 13.0 & 75.8 & 44.4 \\ DINOSAUR [24] & ViT-B/8 & 44.9* & 24.0* & - & - \\ STEGO [11] & ViT-B/8 & 56.9 & 28.2 & **76.1** & 41.0 \\ STEGO + _Ours_ & ViT-B/8 & **58.6** & **29.0** & 75.5 & **41.6** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Evaluation on COCO-Stuff-27.** We report results on COCO-Stuff with 27 high-level classes. Overall, our method outperforms STEGO and HP on unsupervised segmentation with the ViT-B/8, while showing competitive performance for the ViT-S/8. *Results from the paper obtained without post-processing optimization.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Method & Model & U. Acc & U. mIoU \\ \hline IIC [13] & R18+FPN & 47.9 & 6.4 \\ PiCIE [8] & R18+FPN & 65.6 & 12.3 \\ \hline STEGO & ViT-B/8 & 73.2 & 21.0 \\ STEGO + HP & ViT-B/8 & 79.5 & 18.4 \\ STEGO + _Ours_ & ViT-B/8 & **81.6** & **23.1** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Results on Cityscapes.** We report unsupervised accuracy and mIoU on Cityscapes. Our method outperforms both STEGO variants by substantial margins. Notably, our method is the first to improve upon unsupervised mIoU.
while Hidden Positives decreased performance compared to STEGO, we observe our approach to achieve a **+2.1%** increase. Similarly, we report state-of-the-art performance in accuracy, building upon Hidden Positives' already impressive improvements upon STEGO and outperforming it by **+2.1%**.
### Potsdam
Lastly, we evaluate our model on the Potsdam-3 dataset, containing aerial images of the German city of Potsdam. Contrary to the other benchmarks, which contain images in a first-person perspective, Potsdam-3 contains only birds-eye-view images, a perspective that is considered out-of-distribution for the monocular depth estimator. Despite this inherent limitation of our approach for aerial data, Table 3 shows that we achieve relatively commendable performance, improving upon STEGO but falling short of Hidden Positives.
### Qualitative Results
We present qualitative results of our method in Figure 3 and compare them with segmentation maps from STEGO. On multiple occasions, our depth guidance reduces erroneous predictions from the model caused by visual irritations in the pixel space. In the example of the boy with the baseball bat in Figure 3(a), false classifications from STEGO are caused by shadows on the ground. Our model is able to correct this. Furthermore, it goes beyond the noisy label and also correctly classifies the glimpse of a plant that can be seen through a hole in the background. This is an indication that our model does not overfit to the depth map, since this visual cue is only observable from the pixel space.
## 5 Ablations
### Individual Influence
We investigate the effect of our technical contributions on training our model with a ViT-S/8 backbone on COCO-Stuff 27. Our observations in Table 4 show that our depth-feature correlation loss itself already improves the performance of STEGO. This improvement is further increased through the use of FPS, which enables us to sample the
\begin{table}
\begin{tabular}{l l c} \hline \hline Method & Model & Unsupervised Accuracy \\ \hline \hline IIC [13] & R18+FPN & 65.1 \\ \hline STEGO & ViT-B/8 & 77.0 \\ STEGO + HP & ViT-B/8 & **82.4** \\ STEGO + _Ours_ & ViT-B/8 & 80.4 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Results on Potsdam.** We report unsupervised accuracy on the Potsdam dataset. Our method is able to improve upon STEGO, but falls short of catching HP. We hypothesize that with a zero-shot depth estimator more suitable for aerial images, the results for our method could further improve.
Figure 3: **Qualitative results.** We show qualitative differences for plain STEGO compared to STEGO with our depth guidance, using ViT-S models for COCO and ViT-B for Cityscapes. Where STEGO struggles to differentiate different instances, our model is able to correct this and successfully separate them for segmentation. In the case of the building in (a), our method alleviates visual irritations from the pixel space and significantly corrects the segmentation of the building. For the Cityscapes examples, our model is able to better handle visual inconsistencies from shadows.
depth space more equally and therefore encourages more diversity in the depth correlation tensor \(\mathbf{D}_{hw,ij}\). Intuitively, this sampling diversity significantly amplifies our depth-feature correlation for aligning the feature space with the depth space.
### Computational Cost
Our method only leads to an insignificant increase in runtime versus the baseline STEGO model, since we solely guide the loss as well as the feature sampling and do not introduce additional layers. In contrast, the competitive method Hidden Positives [25] relies on a computationally more expensive process to select features and introduces an additional segmentation head to fill their task-specific feature pool. To keep the computational overhead of our method low, we make use of a pre-trained monocular depth estimation network with impressive zero-shot capabilities. While a task-specific training of this network would increase the computational cost of our method, we consider this not a necessity, since the model's zero-shot capabilities generalize well to different scenes and domains. Therefore, in our experiments consisting of a diverse array of scenes, we do not re-train or finetune the depth estimator, and consider the additional computational cost for generating the depth maps to be negligible.
## 6 Limitations
While we have demonstrated our method's effectiveness for many real-world cases, its applicability is limited in settings unsuitable for depth estimation, such as slices of CT scans and other medical data domains. Furthermore, as the experiments on Potsdam-3 have shown, our method can improve unsupervised semantic segmentation despite suboptimal viewing perspectives for the monocular depth estimator, but we could not demonstrate state-of-the-art performance. We assume this represents a rare case where, for an increase in performance to be observed, the depth estimator would need to be retrained on domain-specific data. We also present failure cases of our model in Figure 4.
## 7 Conclusion & Future Work
In this work, we have presented a novel method to induce spatial knowledge of the scene into our model for unsupervised semantic segmentation. We have proposed the extension to correlate the feature space with the depth space and to use the 3D information to sample features more equally in a spatially informed way. Furthermore, we have demonstrated that these contributions produce state-of-the-art performance on many real-world datasets and thus further the progress in unsupervised segmentation. The applicability of our approach for other tasks remains to be explored, since we hypothesize it can be useful beyond unsupervised segmentation as part of any contrastive process. We consider this to be a promising direction for future work. Furthermore, it remains to be investigated which information could be useful to transfer our approach to medical data.
|
2301.01240 | Modeling Effective Lifespan of Payment Channels | While being decentralized, secure, and reliable, Bitcoin and many other
blockchain-based cryptocurrencies suffer from scalability issues. One of the
promising proposals to address this problem is off-chain payment channels.
Since, not all nodes are connected directly to each other, they can use a
payment network to route their payments. Each node allocates a balance that is
frozen during the channel's lifespan. Spending and receiving transactions will
shift the balance to one side of the channel. A channel becomes unbalanced when
there is not sufficient balance in one direction. In this case, we say the
effective lifespan of the channel has ended.
In this paper, we develop a mathematical model to predict the expected
effective lifespan of a channel based on the network's topology. We investigate
the impact of channel unbalancing on the payment network and individual
channels. We also discuss the effect of certain characteristics of payment
channels on their lifespan. Our case study on a snapshot of the Lightning
Network shows how the effective lifespan is distributed, and how it is
correlated with other network characteristics. Our results show that central
unbalanced channels have a drastic effect on the network performance. | Soheil Zibakhsh Shabgahi, Seyed Mahdi Hosseini, Seyed Pooya Shariatpanahi, Behnam Bahrak | 2022-09-11T08:06:51Z | http://arxiv.org/abs/2301.01240v1 | # Modeling Effective Lifespan of Payment Channels
###### Abstract
While being decentralized, secure, and reliable, Bitcoin and many other blockchain-based cryptocurrencies suffer from scalability issues. One of the promising proposals to address this problem is off-chain payment channels. Since not all nodes are connected directly to each other, they can use a payment network to route their payments. Each node allocates a balance that is frozen during the channel's lifespan. Spending and receiving transactions will shift the balance to one side of the channel. A channel becomes unbalanced when there is not sufficient balance in one direction. In this case, we say the effective lifespan of the channel has ended. In this paper, we develop a mathematical model to predict the expected effective lifespan of a channel based on the network's topology. We investigate the impact of channel unbalancing on the payment network and individual channels. We also discuss the effect of certain characteristics of payment channels on their lifespan. Our case study on a snapshot of the Lightning Network shows how the effective lifespan is distributed, and how it is correlated with other network characteristics. Our results show that central unbalanced channels have a drastic effect on the network performance.
Bitcoin, Lightning Network, Payment Channel, Lifespan, Random Walk.
## 1 Introduction
Bitcoin is the first decentralized cryptocurrency, introduced in 2008, which provides security, anonymity, transparency, and democracy without any trusted third party [1]. Most of these properties are achieved by using a blockchain as a distributed ledger. An inherent problem with using a blockchain over a network is that it sacrifices scalability [2, 3]. The reason is that all nodes, potentially tens of thousands, must exchange, store, and verify each and every transaction in the system [4]. Furthermore, each block has a limited size and blocks get generated at regular intervals (approximately every 10 minutes). This means that with the current block size of 1 MB, the throughput of Bitcoin is about 4.6 transactions per second, which is much slower than centralized systems like Visa, WeChatPay, and PayPal [5]; this makes the use of Bitcoin in everyday transactions impractical.
Another trade-off the Bitcoin consensus makes is that it ensures security by waiting for other miners to confirm a transaction by extending the chain past the block holding that transaction, which reduces the throughput. This way it makes sure that a double-spending attack is highly improbable. Currently, the standard waiting time for a block to be confirmed is 6 blocks, which is almost one hour [6].
Bitcoin's capacity limitations are being felt by users in the form of increased transaction fees and latency. With an increasing demand for making transactions, users need to pay more transaction fees in order to make sure that their transaction is more profitable for the miners and hence has a higher chance of making it into a block. Queuing of transactions and limited network bandwidth lead to a longer delay before a transaction appears in the blockchain.
There are many different proposals to solve the scalability problem. Most of the proposals fall into three categories: \(Layer0\), \(Layer1\), and \(Layer2\) solutions [7]. \(Layer0\) solutions try to enhance the infrastructure, like the network that connects the nodes. \(Layer1\) solutions try to enhance the blockchain's shortcomings by changing the consensus mechanism and protocols [8, 9]. \(Layer2\) solutions propose ways to move away from the blockchain, and for this reason, they are also called off-chain solutions [10].
In 2016 the idea of the Lightning Network (LN) was proposed to move the transactions to the second layer (off-chain) [4]. The Lightning Network consists of payment channels in a P2P fashion. Payment channels allow two parties to exchange payments with negligible time and cost, but both parties must freeze an initial fund in the channel so no one can spend more money than they own and no double spending occurs. It is important to note that the sum of funds in each channel remains constant throughout the channel's lifespan and only the channel's balance changes. When two parties that do not have a direct channel want to exchange payments, they can use other parties to route their payments. So a network of nodes is constructed and all the connected nodes can send each other payments.
This system moves the cost of submitting a transaction off the blockchain. Only the final states between two nodes will eventually make it into the blockchain, which significantly increases throughput. Furthermore, no time is needed for the transaction to be confirmed and all transactions in a channel happen almost instantly.
After several transactions through a channel, the channel starts to get unbalanced, meaning all of its funds have gone to one of the parties and the other node cannot route any more payments through the channel. In this case, it is best to close the unbalanced channel or open a new one.
In this paper, we investigate the expected effective lifespan of a channel in a payment network. Our contributions can be summarized as follows:
* We provide simulation evidence of how channel unbalancing impacts a channel's throughput. Moreover, we show how the performance of the payment network
can be affected if a number of channels become unbalanced.
* We present a mathematical model of payment channels to predict the expected time for a channel to get unbalanced considering the channel's position in the network and its initial balance. We call this time the _Effective Lifespan_ of the channel.
* We evaluate our model through simulation, and observe how the Effective Lifespan of a channel is affected if we change any of its characteristics.
* By analyzing a recent snapshot of the Lightning Network, we find the distribution of real-world channel lifespans and its correlation with the network's topological parameters. We also investigate the relationship between the centrality of a channel in the network and its effective lifespan.
## 2 Related Work
While the LN white paper [4] does not discuss channel re-balancing, there exists some research on channel balances and their significance.
The importance of channel balances is mainly discussed in four major areas: re-balancing, security, performance, and finance.
**Re-balancing**: [11] proposes a method for re-balancing payment channels. This work allows arbitrary sets of users to securely re-balance their channels. However, this paper does not discuss the application of re-balancing, and how frequently it should be performed. [12] also proposes methods for rebalancing LN channels, but does not discuss the frequency of rebalancing.
**Performance**: In [13] the authors discuss why it is in the best interest of the network to have balanced channels. They propose a method to re-balance some channels to improve the network's performance. [14] presents a method in which a node can make its channels balanced through circular subgraphs. It also develops a method for measuring imbalance in a payment network.
**Security**: There has been some research on the security aspects of channel unbalancing. In [14, 15], and [16] the authors describe a method in which it is possible for an adversary to uncover channel balances. Having unbalanced channels poses the threat of griefing attacks. The incentive for honest behavior in the LN channels is the penalty for misbehavior. If a node cheats by publishing an old contract, it will be penalized and all of the channel funds can be claimed by the victim. When channels are unbalanced, the penalty is smaller, so there is less incentive for honest behavior. In [17] the authors discuss some countermeasures, like watchtowers, to keep the misbehaving nodes from closing the channel.
**Financial**: Routing payments through a channel can generate revenue for the owner, so payment channels can be viewed as investments. In [18] the authors do an in-depth financial analysis of how much payment channels should charge for routing payments. One of the key factors in this analysis is the lifespan of payment channels. In order to analyze investing in a payment channel, nodes should be able to estimate how long the investment stays profitable and what the impact of channel unbalancing on the profits of a channel is. Branzei et al. [18] assume an equal probability of having a payment from each side in a channel and use the lifetime of channels for financial analyses. We will show how the lifespan of a channel could be affected by this probability.
In this paper, we focus on the details of estimating channel lifespans, considering parameters such as the placement of the channel in the topology and the payment rates between each pair of nodes, and we explain the importance of estimating channel lifespans. This gives us a better and more realistic estimation of the channel's lifespan compared to existing work. Moreover, we measure the impact of imbalanced channels on the network.
Despite the importance of the payment channel's lifespan, to the best of our knowledge, the expected lifespan of channels in the payment network has not been discussed in detail.
## 3 Technical Background
In this section, we provide a technical background to understand the remainder of this paper thoroughly.
### Payment Channels
A payment channel is a financial contract between two parties in a cryptocurrency like Bitcoin. The contract allocates a balance of funds from both parties. The contract is established by a 2-of-2 multisignature address, which requires the cooperation of both parties to spend the funds.
Payments are made off the blockchain by passing on a new version of the contract, with a different balance of allocated funds on the spending transaction, which both parties have to sign. The channel is closed when one of the parties publishes the latest version of the contract to the blockchain. We define the payment direction to be the direction in which funds are moving during a transaction.
In this paper, we call the sum of locked funds in a channel the channel's capacity. When all of the funds of a channel are allocated to one of the parties, the channel becomes _unbalanced_. In this case payments can only be made from one side of the channel. A channel's _effective lifespan_ is the time from creation of a channel until the first imbalance occurs. A channel's success probability is defined as the number of successful payments made through the channel divided by the total number of payment attempts.
Several connected payment channels can form a payment network, in the case of Bitcoin, this network is called the Lightning Network [4]. This network is used to route payments through intermediate channels between nodes who do not have a direct channel between them. We define a network's success probability as the total number of successful payments made on the network divided by the total attempts to route payments through the network [19].
### Random Walk
The random walk model has been used in a wide variety of contexts to model the movement of objects in different spaces. This paper uses a one-dimensional random walk to model the liquidity balance in a payment channel. Two endpoints on the left and right sides of the random walk are assumed to represent the channel imbalance condition.
In our model, each payment corresponds to one step of the random walk model, and the direction of the payment determines the direction of that step. Suppose we take probabilities \(p\) and \(1-p\) as the probabilities of the two payment directions (i.e., step directions). We can find the expected number of payments (steps) needed for the channel (the random walk model) to get unbalanced (to reach one of the endpoints).
### _Betweenness Centrality_
Betweenness centrality is a measure based on shortest paths for the importance of the location of a node or an edge in a graph. Betweenness centrality for an \(edge(a,b)\) in the network is defined as follows: \(\sum_{\begin{subarray}{c}s,t\in V\\ s\neq t\end{subarray}}\frac{\sigma(s,t|edge(a,b))}{\sigma(s,t)}\), where \(\sigma(s,t)\) is the total number of shortest paths between nodes \(s\) and \(t\) and \(\sigma(s,t|edge(a,b))\) is the total number of shortest paths between \(s\) and \(t\) that pass through \(edge(a,b)\).
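For illustration, edge betweenness centrality can be computed for a toy payment graph with NetworkX as sketched below; the graph itself is an arbitrary example.

```python
import networkx as nx

# Toy payment network; edge_betweenness_centrality returns, per edge, the
# fraction of all-pairs shortest paths passing through that edge.
G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)])
ebc = nx.edge_betweenness_centrality(G, normalized=False)
for edge, score in sorted(ebc.items(), key=lambda kv: -kv[1]):
    print(edge, score)
```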
## 4 Motivation
One of the important characteristics of a payment network is reliability. Reliability can be defined as the probability of payment success [19].
In this section, we analyze the payment routing failure probability of a singular channel after unbalancing, and the network's success probability of routing a payment when some channels are unbalanced.
### _Singular Channel_
We ran a simulator of a single payment channel to see how much the failure rate increases after the first time that the channel becomes unbalanced. Fig. 2 shows the failure rate after the first time a channel becomes unbalanced. During the simulation, 5000 payments were routed through an initially balanced channel. Then the simulator calculates the failure rate after the first time the channel becomes unbalanced. As Fig. 2 suggests, the probability of the direction of payments (\(p\)) is a key factor in determining how much the probability of payment success degrades after the first imbalance occurs. Channel capacity has little to no impact on how well the channel performs after unbalancing.
These results show that the probability of payment direction (\(p\)), which depends on the network topology and the network's transaction flow, is one of the most important parameters in determining the channel's lifespan; more importantly, they show the impact of unbalancing on the channel's success probability after the channel becomes unbalanced.
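A minimal simulation in the spirit of this experiment is sketched below; payment size, capacity, and the number of payments are illustrative values, not the exact settings used for Fig. 2.

```python
import random

def failure_rate_after_first_unbalance(p, capacity, n_payments=5000, seed=0):
    """Route unit payments through a channel that starts balanced; measure
    the failure rate of all attempts made after the first imbalance."""
    rng = random.Random(seed)
    bal = capacity // 2                # liquidity on one side of the channel
    unbalanced, attempts, failures = False, 0, 0
    for _ in range(n_payments):
        if bal in (0, capacity):       # all funds on one side
            unbalanced = True
        step = 1 if rng.random() < p else -1   # payment direction
        ok = 0 <= bal + step <= capacity       # enough liquidity?
        if unbalanced:
            attempts += 1
            failures += not ok
        if ok:
            bal += step
    return failures / attempts if attempts else 0.0

for p in (0.5, 0.6, 0.7):
    print(p, failure_rate_after_first_unbalance(p, capacity=100))
```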
### _Network Performance_
Using the CLoTH simulator [19] we simulate and measure the performance of the Lightning Network. In each iteration, we take channels from the given LN snapshot and make them unbalanced; we then measure the success probability after attempting \(5000\) payments. Choosing more central channels as unbalanced channels is more reasonable, because they route more payments and thus have a higher probability of becoming unbalanced in the real world. We considered two scenarios for selecting channels to unbalance: choosing channels randomly and choosing channels that have a higher betweenness centrality. As illustrated in Fig. 1, as the percentage of unbalanced channels increases, the routing success rate decreases dramatically for both channel selection scenarios. In the random selection scenario, it is noticeable that the first 10 percent of unbalanced channels have less effect on the network performance than the last 10 percent of unbalanced channels. We see that unbalancing channels with a higher betweenness centrality has a higher impact on the network performance in contrast with the random selection scenario. Therefore, per any percentage of unbalanced channels, selection by betweenness centrality is more effective.
Seres et al. [20] suggest that in the Lightning Network, the top \(14\%\) central channels will have the most significant impact on the network. In a different experiment, we made \(15\%\) of the network's channels unbalanced: we first sort the channels by betweenness centrality and take a window of \(15\%\) of the channels per experiment. We start with the \(15\%\) most central channels and move all the way up to the \(15\%\) least central channels. It can be inferred from the results in Fig. 3 that more central channels have more impact on the network success rate when they become unbalanced. As we can see in Fig. 3, the top \(15\%\) central channels have the most significant effect on the success rate when they become unbalanced. This confirms the result from Fig. 1.
Fig. 1: Relation of network success rate with percentage of unbalanced channels in the network.
Fig. 2: Failure rate after unbalancing.
## 5 The model
As we discussed in Section 4, channel balances have a significant effect on both channel and network performance. In this section, we introduce a mathematical model to determine the expected time for a channel to get unbalanced; we call this the channel's expected lifespan. We model the dynamics of a payment channel with a random walk problem. Each payment passing through the channel will represent a step the random walker takes. We will first discuss our assumptions and describe the model in detail. We then discuss how to find the model parameters. We proceed by analyzing how the expected lifespan is affected by changing the channel's characteristics.
### _Random Walk Model_
Take a payment channel between two nodes \(A\) and \(B\), and take their initial balance allocated for the channel to be \(F_{A}\) and \(F_{B}\), respectively. The goal is to determine the expected time it will take for this payment channel to become unbalanced for the first time. We make the following assumptions:
* All the payments have the same amount denoted with \(\omega\) (PaymentSize).
* The payments from each node come with a Poisson distribution.
Since the number of nodes is large and the probability of sending a transaction for a given time is small, we can assume that transaction arrival for each channel is a Poisson process for moderate time windows [21]. Although the dynamics of the network will change over time, we make the assumption of having a fixed topology.
We model the dynamics of a payment channel with a random walk problem. Each payment is simulated by a step the random walk takes. To simulate a payment channel, take the liquidity of node \(A\) as the distance of the random walk from the endpoint on the right hand side and the liquidity of node \(B\) as the distance from the endpoint on the left hand side.
The payment direction determines the direction of that step. So the payment direction probability is the probability of going to the right or left for the random walk in each step.
Let the random walk start at the origin of the number line. The two endpoints \(+a\) and \(-b\) are given by \(a=\lfloor\frac{F_{A}}{\omega}\rfloor\) and \(b=\lfloor\frac{F_{B}}{\omega}\rfloor\), respectively.
Since we assume that the payments from each side are made independently with a Poisson process, and the sum of two independent Poisson processes is itself a Poisson process, we can say that payments come to the channel with a Poisson distribution having:
\[\lambda_{payment}=\lambda_{A,B}+\lambda_{B,A}, \tag{1}\]
thus the relation between expected time and expected number of random walk steps is:
\[E_{time}=\frac{E_{steps}}{\lambda_{payment}}, \tag{2}\]
where \(E_{time}\) is the expected time until unbalancing and \(E_{steps}\) is the expected number of steps until unbalancing occurs.
The expected number of payments until unbalancing occurs can be a better metric depending on the application; when multiplied by the average fee per payment, it gives the expected routing income, and when divided by \(\lambda_{payment}\) it gives the expected lifespan.
The objective is to determine the time it takes for a channel to become unbalanced. We first find the expected number of steps needed for the random walker to reach \(+a\) or \(-b\) for the first time.
**Lemma 1.**_The expected number of steps to reach \(+a\) or \(-b\) for the first time starting from zero considering the probability \(p\) for the positive direction and \(q=1-p\) for the negative direction is:_
\[E_{steps}=\begin{cases}\frac{ap^{a}(p^{b}-q^{b})+bq^{b}(q^{a}-p^{a})}{(p-q)(p ^{a+b}-q^{a+b})}&p\neq 1/2\\ ab&p=1/2\end{cases} \tag{3}\]
We provide the proof of Lemma 1 in Appendix A.
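To make the model concrete, the following Python sketch (ours; the function and variable names are illustrative, not from the paper) evaluates Lemma 1 and combines it with equations (1), (2) and (5) to obtain an expected lifespan:

```python
import math

def expected_steps(a: int, b: int, p: float) -> float:
    """Expected number of steps for a walk from 0 to first hit +a or -b (Lemma 1)."""
    q = 1.0 - p
    if abs(p - 0.5) < 1e-12:
        return a * b
    num = a * p**a * (p**b - q**b) + b * q**b * (q**a - p**a)
    den = (p - q) * (p**(a + b) - q**(a + b))
    return num / den

def expected_lifespan(F_A: float, F_B: float, omega: float,
                      lam_ab: float, lam_ba: float) -> float:
    """Expected time to unbalancing, combining equations (1), (2), (3) and (5)."""
    a = math.floor(F_A / omega)           # right endpoint +a of the walk
    b = math.floor(F_B / omega)           # left endpoint -b of the walk
    lam = lam_ab + lam_ba                 # equation (1): total payment rate
    p = lam_ab / lam                      # equation (5): direction probability
    return expected_steps(a, b, p) / lam  # equation (2)

# Balanced 2.4 Msat channel, 60 ksat payments, symmetric 0.5 payments/day per direction:
print(expected_lifespan(1.2e6, 1.2e6, 60000, 0.5, 0.5))  # 400 days (a = b = 20, p = 1/2)
```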
We simulated a random walk which starts from point zero with equal probability of going to each side (\(p=\frac{1}{2}\)). The simulation ran 10000 times to find the distribution of the number of steps needed to reach \(+a\) or \(-b\). Fig. 4 illustrates the result of the simulation. We can observe that most of the time the random walk reaches one of the bounds in fewer than 400 steps, but there are a few situations where
Fig. 4: Distribution of expected lifespan with 10000 random walk simulations with \(p=\frac{1}{2}\) and \(a=b=1.2\,Msat\).
Fig. 3: Per each data point the \(i\)-th to \((i+4500)\)-th most central channels are unbalanced and the success rate of the network is measured. The total number of channels is 30457.
it takes a very large number of steps to reach these bounds. However, the average number of steps needed to reach these bounds is 400.5, confirming equation (3).
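This behaviour can be reproduced with a minimal Monte Carlo sketch (ours, not the authors' simulation code); here \(a=b=20\) corresponds to 1.2 Msat per side at \(\omega=60\,ksat\):

```python
import random

def walk_steps(a: int, b: int, p: float, rng: random.Random) -> int:
    """Steps until a +/-1 random walk started at 0 first hits +a or -b."""
    x, steps = 0, 0
    while -b < x < a:
        x += 1 if rng.random() < p else -1
        steps += 1
    return steps

rng = random.Random(0)
runs = [walk_steps(20, 20, 0.5, rng) for _ in range(10000)]
print(sum(runs) / len(runs))  # close to a*b = 400, with a heavy right tail
```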
### _Finding p_
In Section 5.1 we modeled the payment channel dynamics with a random walk, and a parametric formula was derived in Lemma 1.
A payment network can be formally expressed as an unweighted directed graph \(G\). \(V\) represents the set of all nodes, and the set of edges is denoted by \(E\). Each channel is represented by two edges from \(E\), one for each direction.
We define \(MRates\) to be the matrix of payment rates between each two nodes. The rate of payments (i.e., number of payments per day) from node \(i\) to node \(j\) is denoted by \(MRates_{ij}\).
\(\lambda_{a,b}\) represents the rate of payments transmitted over \(edge(a,b)\). It is the sum of the portions of the payment rates between each pair of nodes that pass through \(edge(a,b)\). So we have:
\[\lambda_{a,b}=\sum_{\begin{subarray}{c}s,t\in V\\ s\neq t\end{subarray}}\frac{\sigma(s,t|edge(a,b))}{\sigma(s,t)}MRates_{st}, \tag{4}\]
where \(\sigma(s,t)\) is the number of shortest paths from node \(s\) to node \(t\), and \(\sigma(s,t|edge(a,b))\) is the number of those shortest paths passing through \(edge(a,b)\) in the directed graph \(G\).
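Equation (4) can be computed directly from the graph, for example with NetworkX; the sketch below is ours, and the representation of \(MRates\) as a dictionary keyed by node pairs is an assumption made for illustration:

```python
import networkx as nx

def edge_rate(G: nx.DiGraph, MRates: dict, a, b) -> float:
    """lambda_{a,b} from equation (4); MRates[(s, t)] is the payment rate s -> t."""
    lam = 0.0
    for (s, t), rate in MRates.items():
        if s == t or not nx.has_path(G, s, t):
            continue
        paths = list(nx.all_shortest_paths(G, s, t))
        # Count the shortest paths that traverse edge (a, b).
        through = sum(1 for path in paths
                      if any(u == a and v == b for u, v in zip(path, path[1:])))
        lam += rate * through / len(paths)
    return lam

def direction_probability(G: nx.DiGraph, MRates: dict, a, b) -> float:
    """p from equation (5)."""
    lab, lba = edge_rate(G, MRates, a, b), edge_rate(G, MRates, b, a)
    return lab / (lab + lba)
```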
**Lemma 2**.: \(\frac{p}{q}=\frac{\lambda(a,b)}{\lambda(b,a)}\)_._
According to Lemma 2:
\[p=\frac{\lambda(a,b)}{\lambda(a,b)+\lambda(b,a)} \tag{5}\]
Therefore we can find \(p\) based on the network topology.
**Lemma 3**.: _If \(\forall s,t\in V:MRates_{st}=MRates_{ts}\) then \(p=0.5\)._
We provide the proofs of Lemmas 2 and 3 in Appendix A. If we assume that \(MRates\) is a symmetric matrix, then according to Lemma 3, \(p=0.5\) independently of the \(MRates\) matrix and the network topology.
### _Model Analysis_
In this section we analyze the effect of channel parameters on the channel's expected lifespan and perform a financial analysis for channel lifespan.
For more realistic parameter values we used a recent snapshot of the Lightning Network taken in February 2019 as a reference point. The average payment size is considered to be \(60000\,sat\)1[22] and the average channel capacity is \(2.4\,Msat\)2 according to the snapshot.
Footnote 1: satoshi
Footnote 2: million satoshi
For simplicity we use "lifespan" and "expected number of payments until the channel is unbalanced" interchangeably.
We first answer the question of how sensitive a channel's lifespan is to changes in \(p\). As demonstrated in Fig. 5, if the channel is initially balanced, the maximum lifespan occurs at \(p=\frac{1}{2}\). Also, the lifespan is more sensitive to changes in \(p\) when the capacity is higher. From this result we can infer that it is important for a node to make sure the channel is placed in a way that keeps \(p\) close to \(\frac{1}{2}\); otherwise the channel's lifespan is affected dramatically. A reasonable proposal for nodes who want to keep their channels active as long as possible is to charge routing fees in a way that encourages other nodes to route their payments through the node so as to achieve \(p=0.5\). Fig. 6 shows that if a channel is initially unbalanced, its maximum possible lifespan takes a hit. Although the maximum lifespan does not occur at exactly \(p=\frac{1}{2}\), it occurs at a point close to this value. So even if a channel is somewhat unbalanced, the nodes should try to keep \(p\) as close as possible to 50%.
We now answer the question of how the lifespan is affected by the channel capacity. As Fig. 7 suggests, the channel lifespan increases with capacity. It is noteworthy that the slope of this graph is increasing, so if a node doubles its channel capacity, the channel's lifespan will be more than doubled. Moreover, Fig. 7 shows the effect of a channel's initial imbalance on its lifespan.
Usually when a node wants to create a new channel
Fig. 5: Effect of payment direction probability on balanced channels, according to different channel capacities.
Fig. 6: The effect of payment direction probability on expected number of payments, according to different initial balance ratios. The channel capacity is considered 2.4 Msat.
with another node in the network, the only parameter it has control over is the amount of funds it puts in the channel, not the funds its partner puts in. This brings up the question: how is the channel's lifespan affected by the amount the other node puts in the channel if our fund stays at a fixed value? Figs. 8 and 9 illustrate this effect. Fig. 8 shows the maximum achievable lifespan over all \(p\) values and how it is affected by the fund that the other node commits to the channel: the maximum lifespan grows linearly with the initial fund of the other edge. Fig. 9 illustrates the effect of our edge capacity if the peer node's capacity is fixed. It shows that if \(p\) is in favor of payments in the direction of our edge (\(p\geq\frac{1}{2}\)), the lifespan increases almost linearly; otherwise (\(p<\frac{1}{2}\)), the other edge becomes the bottleneck and the funds we put towards the channel have little to no effect on the expected lifespan. If the funds we put towards the channel do not affect the channel's lifespan, we have wasted the opportunity cost of those funds.
## 6 Implementation and Evaluation
We provide a simulation proof of concept on a constructed Lightning Network to show the accuracy of the model discussed in Section 5. In this section we describe our methodology for creating data and calculating the accuracy of our model. We then analyze the results to see under which conditions the model performs better.
### _Methodology_
The testing pipeline shown in Fig. 10 uses the following modules:
#### 6.1.1 Network Generator
For each test, a random network was generated using NetworkX's [23]\(gnp\_random\_graph\) with the number of nodes being 50 and the channel existence probability being \(20\%\) (245 edges on average).
#### 6.1.2 \(Mrates\) Generator
As discussed in previous sections, the \(Mrates\) matrix holds the rates at which each pair of nodes send payments to one another. The \(Mrates\) generator takes two main parameters: SC and SK. SC determines the sparseness of the \(Mrates\) matrix, and SK determines the matrix skewness in relation to its main diagonal. A new matrix is generated for each test. In Table 1, the sparse coefficient and the skew were varied to test how the model performs in each scenario.
#### 6.1.3 Lifespan Predictor
The lifespan predictor takes the network and the \(Mrates\) matrix and, using the model discussed in Section 5, gives the expected time for each channel to become unbalanced.
#### 6.1.4 Payment Generator
Payment generator creates random payments in CLoTH simulator's input format [19]. These payments follow the \(Mrates\) generator values on average.
#### 6.1.5 Simulator
We used a modified version of the CLoTH simulator [19]. We modified CLoTH such that the simulator logs the unbalancing of channels and chooses paths randomly in cases where more than one shortest path exists.
The payment generator and simulator run 100 iterations per test.
Fig. 8: For a fixed a = 1.2 Msat, the effect of channel b’s capacity on the maximum possible lifespan in any p.
Fig. 7: Effect of channel capacity on the expected number of payments, according to initial balance ratios
Fig. 9: Having a fixed initial balance from peer node (b) analyzing the effect of our initial balance fund (a), according to different payment direction probabilities (p).
#### 6.1.6 Lifespan Calculator
This module aggregates the results of 100 iterations of the previous step and calculates the average lifespan and its error. This data will be used to determine the accuracy of the model.
#### 6.1.7 Model Evaluation
The error of each channel is calculated as \(\frac{|real-prediction|}{real}\). Some channels are positioned in a way that almost no payments pass through them; only after a long while, when most other channels are unbalanced, do payments start passing through them. We count these channels as abnormalities and do not consider them in the error calculations. These are usually the channels that are estimated to have a very long lifespan.
The means of the calculated errors are given in Table 1 for different SC and SK values.
As we can see, better results are obtained with smaller SCs (meaning a busier Lightning Network). It is also notable that the SK value has little to no effect on the model performance. This means that the model performs well whether \(p\) is close to \(0.5\) or far from \(0.5\).
## 7 Lightning Network Analysis
In this section we provide an analysis of channel lifespans for a recent snapshot of the Lightning Network. The simulation uses nodes and channels taken from a snapshot of the Lightning Network Mainnet [24] from February 2019.
In Section 5, we proposed a model for a payment channel using a random walk and derived a formula to predict expected channel lifespans. Moreover, the expected lifespan of a payment channel can be found using equation (2) if the payment rates are known. Lemma 3 shows that if we have the same rate for every pair of nodes, the probability of going to each side is equal to \(0.5\).
Because payment rates and channel balances usually are not public in the Lightning Network, we have to make assumptions about the distribution and amount of payments, and about channel balances. We assume that all payment rates have the same value \(r\), which means that the rates matrix (\(MRates\)) is symmetric. Thus according to Lemma 3, \(p=\frac{1}{2}\) for every channel in the network. According to equation (3), the expected number of payments is equal to \(a\times b\), where \(a=\lfloor\frac{F_{A}}{\omega}\rfloor\) and \(b=\lfloor\frac{F_{B}}{\omega}\rfloor\) for a bidirectional channel between \(A\) and \(B\). We assume that all channels are initially balanced, meaning \(a=b=\frac{C}{2\omega}\), where \(C\) is the channel's capacity.
According to previous results in Section 5 (equations (1) and (4)) we have:
\[\lambda_{payment}=(\sum_{\begin{subarray}{c}s,t\in V\\ s\neq t\end{subarray}}\frac{\sigma(s,t|edge(a,b))}{\sigma(s,t)}+\frac{\sigma( s,t|edge(b,a))}{\sigma(s,t)})r \tag{6}\]
We also know that \(\sum_{\begin{subarray}{c}s,t\in V\\ s\neq t\end{subarray}}\frac{\sigma(s,t|edge(a,b))}{\sigma(s,t)}\) is equal to the edge betweenness centrality of \(edge(a,b)\) (\(EBC(a,b)\)) in directed graph \(G\)[25].
Because all channels are bidirectional (\(\forall edge(j,i):\exists edge(i,j)\)), we have \(\forall s,t\in V:\)
\[\frac{\sigma(s,t|edge(a,b))}{\sigma(s,t)}=\frac{\sigma(t,s|edge(b,a))}{\sigma (t,s)}. \tag{7}\]
Assuming \(G^{{}^{\prime}}\) as an undirected graph that is derived from \(G\) we have:
\[EBC_{G}(a,b)=EBC_{G^{{}^{\prime}}}(a,b), \tag{8}\]
thus
\[\lambda_{payment}=2\times EBC_{G^{{}^{\prime}}}(a,b)\times r. \tag{9}\]
Substituting all of these results into equation (2), we have:
\[E_{time}=\frac{(\frac{C}{\omega})^{2}}{4\times 2\times EBC_{G^{{}^{\prime}}}(a,b) \times r} \tag{10}\]
In what follows, we first calculate all payment channels' lifespans in the LN snapshot using equation (10). Then we focus on the relation between edge betweenness centrality and lifespan of the channels.
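Combining equations (2), (9) and (10), the per-channel lifespans of a snapshot can be computed as in the following sketch (ours; note that NetworkX's unnormalized edge betweenness centrality counts each unordered node pair once, matching \(EBC_{G^{\prime}}\) here):

```python
import networkx as nx

def snapshot_lifespans(G: nx.Graph, capacities: dict,
                       omega: float = 60000.0, r: float = 0.0022) -> dict:
    """Expected lifespan in days per channel, following equation (10).

    capacities maps an undirected edge (u, v) to the channel capacity C in sat.
    """
    # Unnormalized EBC: sum over node pairs of sigma(s,t|e)/sigma(s,t).
    ebc = nx.edge_betweenness_centrality(G, normalized=False)
    out = {}
    for (u, v), C in capacities.items():
        e = (u, v) if (u, v) in ebc else (v, u)
        lam = 2.0 * ebc[e] * r            # equation (9): payments per day
        if lam == 0.0:                    # channel lies on no shortest path
            out[(u, v)] = float("inf")
            continue
        steps = (C / omega) ** 2 / 4.0    # a*b with a = b = C/(2*omega)
        out[(u, v)] = steps / lam         # equation (2)
    return out
```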
### _Distribution of Channels Effective Lifespan_
Equation (10) shows that the lifespan of a channel can be calculated from its edge betweenness centrality and initial fund. We assume that \(r=0.0022\) transactions per day [26] and \(\omega=60000\,sat\)[22]. The distribution of channel lifespans in our snapshot is shown in Fig. 11. Much like the distribution of channel capacities, which resembles a power-law distribution, Fig. 11 shows that there are many channels with a low lifespan and very few channels with a very high lifespan.
According to Seres et al. [20], the most effective channels are those with the highest betweenness centrality. That paper suggests that the top 14% of channels have the most significant effect on the network's performance.
Table 2 gives the average, standard deviation, and median lifespan for all channels in the network and for the top 14% most central channels.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline & \multicolumn{4}{c}{SK} \\ \hline & 1 & 4 & 6 & 10 \\ \hline \multirow{3}{*}{SC} & 0.9 & 0.15 & 0.10 & 0.09 & 0.10 \\ & 0.5 & 0.11 & 0.10 & 0.08 & 0.07 \\ & 0.3 & 0.09 & 0.07 & 0.05 & 0.06 \\ & 0 & 0.11 & 0.07 & 0.07 & 0.07 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Mean error between predicted and real lifespans.
Fig. 10: Model evaluation pipeline.
### _Betweenness-Lifespan Correlation_
As Seres et al. [20] suggest, the most central channels have the most impact on the network. As Fig. 12 shows, more central channels have a shorter lifespan because they route more payments per unit of time. In Fig. 12 we took batches of the most central edges and calculated the average centrality and the average lifespan per batch. The result shows that, in general, the more central a channel is, the sooner it will get unbalanced. We see an exception to this statement in the middle of the plot, where betweenness has a positive correlation with the average lifespan. This is due to the fact that some very central edges have a large capacity, so they can route more payments, given that lifespan increases quadratically with capacity.
In Section 7.1 we showed that channels with larger edge betweenness centrality values have a higher impact on the performance of the network. In this section, we have shown that these central channels have shorter lifespans. Therefore, the network success rate will decrease quickly due to unbalancing.
## 8 Conclusion
In this paper we modeled payment channel liquidity with a random walk to estimate how long it takes for a channel to become unbalanced and the effect of being unbalanced on a channel's probability of successful routing. We also analyzed how unbalanced channels degrade the network's performance, and the relation between a channel's centrality and its lifespan. We showed that the network's success probability is sensitive to the channels' unbalancing.
We also introduced a method to estimate the lifespan of a channel in a payment network which can be used for determining a good placement in the network. We provided a proof of concept for our model and showed the results are \(95\%\) accurate.
This work shows that merely allocating more funds towards a channel does not lead to a more successful channel. The results show that a channel's success in the network depends greatly on the network topology, the transaction flow, and the amount of funds the peer node puts in the channel.
We suggested the amount a node should invest in a channel to get the longest channel lifespan and maximize its return on investment. These results show that a misplaced channel can have a very short lifespan and lose up to 40% of its efficiency, so nodes could potentially create a market based on these criteria to sell each other good connections in the network.
## Appendix A Proofs
_Lemma 1. The expected number of steps to reach \(+a\) or \(-b\) for the first time starting from zero is_
\[E_{steps}=\begin{cases}\frac{ap^{a}(p^{b}-q^{b})+bq^{b}(q^{a}-p^{a})}{(p-q)(p ^{a+b}-q^{a+b})}&p\neq 1/2\\ ab&p=1/2\end{cases} \tag{11}\]
Let \(s_{x}\) be the expected number of steps to reach \(+a\) or \(-b\) for the first time starting from \(x\). Let \(p\) be the probability of going in the positive direction and \(q\) the probability of going in the negative direction (\(p+q=1\)). If the random walker starts from \(x\), it will go to \(x+1\) with probability \(p\) and to \(x-1\) with probability \(q\), so we can infer the recurrence equation \(s_{x}=1+qs_{x-1}+ps_{x+1}\). For the boundary conditions we have \(s_{a}=s_{-b}=0\), meaning that the expected number of steps needed to reach \(+a\) or \(-b\) starting from \(+a\) or \(-b\) is zero.
Shifting the index and solving for \(s_{x}\) gives:
\[s_{x}=\frac{1}{p}s_{x-1}-\frac{q}{p}s_{x-2}-\frac{1}{p} \tag{12}\]
The characteristic equation of (12) is:
\[(z^{2}-\frac{1}{p}z+\frac{q}{p})(z-1)=0 \tag{13}\]
Fig. 11: Histogram of expected lifespan for the LN snapshot in Feb 2019.
\begin{table}
\begin{tabular}{l l l} \hline & All Channels & Central Channels \\ \hline average & 1833.2 & 172.3 \\ STD & 7086.9 & 587.2 \\ median & 27.0 & 1.6 \\ \hline \end{tabular}
\end{table} TABLE II: Lifespan statistics of the LN snapshot (day).
Fig. 12: The relation between expected lifespan and betweenness centrality of channels in the LN snapshot.
If \(p=q=1/2\) we have \(\Delta=0\), therefore \(z_{1}=z_{2}=z_{3}=1\), so the expected number of steps needed to reach \(+a\) or \(-b\) starting from \(x\) is:
\[s_{x}=(a-x)(x+b) \tag{14}\]
If \(p\neq 1/2\) we have \(\sqrt{\Delta}=|\frac{1-2p}{p}|\), therefore \(z_{1}=z_{2}=1\), \(z_{3}=\frac{q}{p}\), and for the number of steps we have:
\[s_{x}=\frac{ap^{a+b}+bq^{a+b}}{(2p-1)(p^{a+b}-q^{a+b})}+\frac{1}{1-2p}x+\frac{(a+b)p^{a}q^{b}}{(2p-1)(q^{a+b}-p^{a+b})}\left(\frac{q}{p}\right)^{x} \tag{15}\]
We take \(x=0\), as this gives the expected number of steps to reach \(+a\) or \(-b\) starting from zero, so we have:
\[S_{0}=\begin{cases}\frac{ap^{a}(p^{b}-q^{b})+bq^{b}(q^{a}-p^{a})}{(p-q)(p^{a+b}-q^{a+b})}&p\neq 1/2\\ ab&p=1/2\end{cases} \tag{16}\]
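As a numerical sanity check (ours, not part of the original proof), the closed form in equation (15) can be verified against the recurrence and the boundary conditions:

```python
def s_closed(x: int, a: int, b: int, p: float) -> float:
    """Closed-form s_x from equation (15)."""
    q = 1 - p
    A = (a * p**(a + b) + b * q**(a + b)) / ((2*p - 1) * (p**(a + b) - q**(a + b)))
    B = (a + b) * p**a * q**b / ((2*p - 1) * (q**(a + b) - p**(a + b)))
    return A + x / (1 - 2*p) + B * (q / p)**x

a, b, p = 5, 3, 0.6
# Boundary conditions: s_a = s_{-b} = 0.
assert abs(s_closed(a, a, b, p)) < 1e-9 and abs(s_closed(-b, a, b, p)) < 1e-9
# Recurrence: s_x = 1 + q*s_{x-1} + p*s_{x+1} for interior points.
for x in range(-b + 1, a):
    lhs = s_closed(x, a, b, p)
    rhs = 1 + (1 - p) * s_closed(x - 1, a, b, p) + p * s_closed(x + 1, a, b, p)
    assert abs(lhs - rhs) < 1e-9
```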
### _Lemma 2. \(\frac{p}{q}=\frac{\lambda(A,B)}{\lambda(B,A)}\)_
In the assumptions of Section 5 it is assumed that each node sends its payments to other nodes with a Poisson distribution. The parameter of the distribution for \(edge(a,b)\) is \(\lambda(a,b)\), the payment rate from node \(a\) to node \(b\). Denote the random variable of payments from \(a\) to \(b\) by \(X\) and the random variable of payments from \(b\) to \(a\) by \(Y\). Thus we have:
\[P(X=n)=\frac{e^{-\lambda(A,B)}(\lambda(A,B))^{n}}{n!} \tag{17}\]
The total payment rate in each channel is the sum of the rates of its two edges. It is known that the sum of two independent Poisson random variables is itself Poisson, with rate equal to the sum of the rates.
When a payment arrives at the channel from one of the two Poisson sources, the probability that it is a payment from node \(a\) to node \(b\) (i.e., \(p\)) is:
\[p=P(X=1|X+Y=1)=\frac{\frac{e^{-\lambda_{x}}(\lambda_{x})^{1}}{1!}\times\frac{e^{-\lambda_{y}}(\lambda_{y})^{0}}{0!}}{\frac{e^{-(\lambda_{x}+\lambda_{y})}(\lambda_{x}+\lambda_{y})^{1}}{1!}} \tag{18}\]
Thus:
\[p=\frac{\lambda_{x}}{\lambda_{x}+\lambda_{y}}=\frac{\lambda(a,b)}{\lambda(a,b )+\lambda(b,a)} \tag{19}\]
### _Lemma 3. If \(\forall s,t\in V:MRates_{st}=MRates_{ts}\) then \(p=0.5\)._
We know from Lemma 2 that \(\frac{p}{q}=\frac{\lambda(a,b)}{\lambda(b,a)}\), so we have:
\[\frac{p}{q}=\frac{\sum_{\begin{subarray}{c}s,t\in V\\ s\neq t\end{subarray}}\frac{\sigma(s,t|edge(a,b))}{\sigma(s,t)}MRates_{st}}{\sum_{\begin{subarray}{c}s,t\in V\\ s\neq t\end{subarray}}\frac{\sigma(t,s|edge(b,a))}{\sigma(t,s)}MRates_{ts}} \tag{20}\]
Because all channels are bidirectional(\(\forall edge(a,b):\exists edge(b,a)\)) we have \(\forall s,t\in V:\)
\[\frac{\sigma(s,t|edge(a,b))}{\sigma(s,t)}=\frac{\sigma(t,s|edge(b,a))}{\sigma( t,s)} \tag{21}\]
On the other hand, if we have \(\forall s,t\in V:MRates_{st}=MRates_{ts}\), we can say:
\[\frac{\sigma(s,t|edge(a,b))}{\sigma(s,t)}\times MRates_{st}=\frac{\sigma(t,s|edge(b,a))}{\sigma(t,s)}\times MRates_{ts} \tag{22}\]
Then finally we have:
\[\lambda(a,b)=\lambda(b,a) \tag{23}\]
so
\[p=\frac{1}{2} \tag{24}\]
|
2309.13066 | Causal Discovery and Counterfactual Explanations for Personalized
Student Learning | The paper focuses on identifying the causes of student performance to provide
personalized recommendations for improving pass rates. We introduce the need to
move beyond predictive models and instead identify causal relationships. We
propose using causal discovery techniques to achieve this. The study's main
contributions include using causal discovery to identify causal predictors of
student performance and applying counterfactual analysis to provide
personalized recommendations. The paper describes the application of causal
discovery methods, specifically the PC algorithm, to real-life student
performance data. It addresses challenges such as sample size limitations and
emphasizes the role of domain knowledge in causal discovery. The results reveal
the identified causal relationships, such as the influence of earlier test
grades and mathematical ability on final student performance. Limitations of
this study include the reliance on domain expertise for accurate causal
discovery, and the necessity of larger sample sizes for reliable results. The
potential for incorrect causal structure estimations is acknowledged. A major
challenge remains, which is the real-time implementation and validation of
counterfactual recommendations. In conclusion, the paper demonstrates the value
of causal discovery for understanding student performance and providing
personalized recommendations. It highlights the challenges, benefits, and
limitations of using causal inference in an educational context, setting the
stage for future studies to further explore and refine these methods. | Bevan I. Smith | 2023-09-18T10:32:47Z | http://arxiv.org/abs/2309.13066v1 | # Causal Discovery and Counterfactual Recommendations for Personalized Student Learning +
###### Abstract
The paper focuses on identifying the causes of student performance to provide personalized recommendations for improving pass rates. We introduce the need to move beyond predictive models and instead identify causal relationships. We propose using causal discovery techniques to achieve this. The study's main contributions include using causal discovery to identify causal predictors of student performance and applying counterfactual analysis to provide personalized recommendations.
The paper describes the application of causal discovery methods, specifically the PC algorithm, to real-life student performance data. It addresses challenges such as sample size limitations and emphasizes the role of domain knowledge in causal discovery. The results reveal the identified causal relationships, such as the influence of earlier test grades and mathematical ability on final student performance.
Limitations of this study include the reliance on domain expertise for accurate causal discovery, and the necessity of larger sample sizes for reliable results. The potential for incorrect causal structure estimations is acknowledged. A major challenge remains, which is the real-time implementation and validation of counterfactual recommendations.
In conclusion, the paper demonstrates the value of causal discovery for understanding student performance and providing personalized recommendations. It highlights the challenges, benefits, and limitations of using causal inference in an educational context, setting the stage for future studies to further explore and refine these methods.
Causal discovery Counterfactual explanations Student performance
## 1 Introduction
What causes a student to be at risk of failing? By knowing the causes of student performance, we are able to provide personalized recommendations to improve pass rates and throughput. Much research has been carried out using machine learning to predict student performance [3, 2], and although predicting student performance is indeed valuable, we want to know not only who is predicted to be at risk, but also what is causing the poor performance.
One potential way is to perform interpretable (or explainable) machine learning [5]. This is to identify the most significant predictors of the outcome [9]: to explain why a student is at-risk [5]. The limitation of explainable machine learning is that we cannot assume the predictors to be _causal_ to the outcome, only correlated. Explainable machine learning can describe the student well, but we cannot assume those explanations to be causal.
To answer causal questions and determine if predictors (or variables) are causal, we need causal inference and causal discovery. To provide recommendations for an at-risk student on how to change some aspect of his/her studies in order
to pass, it is vital that we identify causal features in the data and not only correlated features. Only causal features can affect the outcome of the student's performance.
This study contributes two main ideas: using **causal discovery** to identify causal predictors of student performance and applying **counterfactual analysis** to show how to change causal features to achieve a desirable output for a student. We show how to take a real-life dataset and use causal discovery to estimate the true causal structure in the data which then allows us to generate counterfactual scenarios to provide personal recommendations for students.
## 2 Causal Inference and Discovery
Causal inference is the field concerned with finding the causes of outcomes. Judea Pearl defines causality as follows: A causes B if B _listens_ to A [6]. This means that if we change A, then we observe a change in B as well. A does not need to be a direct cause of B, but can also be a cause of events that cause B.
### Causal Graphs
In causal inference, we are interested in estimating if (and to what extent) an event (or feature) causes an outcome of interest. For example, do extra tutorials cause an increase in academic performance? Consider Figure 1 below which is a directed acyclic graph (DAG). This represents the true data generating process, the true causal structure in the data. Let \(y\) be the outcome of interest, say student performance, and let \(X\) be extra tutorials. Say we are interested in estimating the causal effect of extra tutorials (\(X\)) on student performance (\(y\)). However, there is a third variable, \(Z\), which in this case acts as a confounder, a parent, of both \(X\) and \(y\). Let's say \(Z\) refers to personal motivation of the student; the inner drive to succeed. What this specific causal graph is telling us is that the variable \(Z\) is a cause of both attending the extra-tutorials and the performance: more motivated and driven students will both perform better (\(y\)) _and_ use every opportunity to get better, by attending extra-tutorials (\(X\)). Therefore, if we are trying to estimate the true causal effect of extra-tutorials (\(X\)) on performance \(y\), if we merely estimate the causal effect of \(X\) on \(y\), there will be bias because of the confounding variable \(Z\).
To estimate only the causal effect of \(X\) on \(y\), we need to remove the effect of \(Z\) on \(X\). We can do this via Pearl's do-calculus, which blocks the backdoor path from \(X\) to \(y\). We perform \(\text{do}(X)\), which is to manipulate \(X\) to see its effect on \(y\). This can be done in practice by running randomized experiments or by carrying out matching or regression techniques. This essentially adjusts for, or controls, the variable \(Z\), i.e., fixes it so that it no longer has an effect on \(X\). This breaks any relationship that \(X\) has with \(Z\), as shown in Figure 2. A detailed description of causal inference and do-calculus is beyond the scope of this study, but this example is presented to show how causal inference can be used to estimate true causal effects.
The point of the above discussion is to show the power of having a _causal graph_ and knowing the true causal structure and true data generating process. If we didn't have such a graph, we might not have known that \(Z\) is a
Figure 1: DAG showing common parent confounding where \(Z\) is the parent of both \(X\) and \(y\).
Figure 2: DAG showing adjusting for \(Z\).
confounding feature that needs to be adjusted for using causal inference methods. Knowing the causal structure in the data shines light on exactly how to apply causal inference methods to estimate true causal effects.
Three basic causal structures exist. One is the fork, shown in the example we just discussed, where we have a confounder or parent of two other variables: \(X\gets Z\to y\). Another is a chain structure where one feature causes a second that causes the third: \(X\to Z\to y\). The last is a collider where two independent variables cause a third: \(X\to Z\leftarrow y\). These are three distinctly different causal structures. However, if you were to obtain a dataset with these three variables, you would not know how they are causally related. If we do not know the causal structure, then estimates of the treatment effect of one feature on another will most likely be biased.
### Causal Discovery
Therefore if we desire to more accurately measure treatment (causal) effects by applying causal inference methods, we must know what the data generating process (causal graph) looks like. This is where causal discovery comes in. Causal discovery aims to take a real-life dataset where we do not know the complete causal structure (or graph), and generate the causal graph. This causal graph can then guide us into a host of causal inference methods that could add much value to the problem we are trying to solve.
Causal discovery (CD) was first proposed in 2000 to study gene expression [7]. Over the last twenty years these methods have seen widespread use, and the number of CD algorithms has grown considerably [1]. CD methods generally fall into four categories: constraint-based methods, score-based methods, functional causal discovery, and gradient-based methods [1, 4]. In this study we limit ourselves to the first two. The aim of this study is to present the potential benefits of, and downsides to, using causal discovery for student performance; in future studies we will compare the different methods in more detail. It is vital to note here that causal discovery works best when a domain expert or prior domain knowledge is available to guide the causal graph. We next consider the two CD methods mainly used in this study.
#### 2.2.1 Constraint-based methods
This method is based on the concept of conditional independence tests. For example, consider two variables x and y. If we determine that x and y are correlated using statistical tests, then it could mean that x causes y (x \(\rightarrow\) y), that y causes x (y \(\rightarrow\) x), or that a third confounding variable exists such that x \(\leftarrow\) z \(\rightarrow\) y. Constraint-based methods therefore use conditional independence tests to estimate potential causal relations. However, the limitation is that, at best, these methods produce a Markov equivalence class (MEC): a set of causal graphs that all produce the same conditional independencies. In other words, these tests can at best discover a class of causal graphs consistent with the observed conditional independencies. On the positive side, we can then inspect the candidate graphs and use logic and domain knowledge to identify the true graph. More detail can be found in [1, 4]. The PC algorithm is an example of a constraint-based method and is perhaps the most commonly used CD method.
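The paper does not name its implementation; one widely used option is the causal-learn Python package, and the sketch below is written under that assumption (the data file name is hypothetical):

```python
# pip install causal-learn  (assumed; the paper does not specify its tooling)
import numpy as np
from causallearn.search.ConstraintBased.PC import pc

data = np.loadtxt("students.csv", delimiter=",")  # hypothetical: rows = students
cg = pc(data, alpha=0.05)   # conditional-independence tests at the 5% level
print(cg.G)                 # estimated graph (a Markov equivalence class)
```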
#### 2.2.2 Score-based methods
Whereas the PC method (constraint-based) starts with a fully connected graph and uses conditional independence to remove edges, score-based methods such as greedy equivalence search (GES) start with an empty graph and add and remove causal edges, again based on statistical tests. Each generated graph is an attempt to fit the observational data as well as possible. GES assigns a score to each graph, and the best-scoring graph aims to represent the most likely causal relationships in the data.
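Under the same causal-learn assumption as above, GES can be run analogously:

```python
from causallearn.search.ScoreBased.GES import ges

record = ges(data)    # data as in the PC example above
print(record["G"])    # highest-scoring graph found by the greedy search
```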
## 3 Counterfactual Analysis
A counterfactual represents a hypothetical scenario where one or more factors are changed while keeping everything else the same. The aim is to determine how an output will change in a counterfactual scenario. This idea is key to providing personalized recommendations to students or to any other field of interest.
The aim of counterfactual analysis is to predict how an outcome would have changed had we manipulated the input causal feature: what would we need to do to the causal inputs in order to obtain a desirable output. In the context of student performance, how would an individual student need to change some causal input in order to switch from being at-risk to passing?
In this study we follow the counterfactual method developed by Judea Pearl [6], and not those presented as counterfactual explanations in the explainable machine learning literature [5]. Reasons for this are beyond the scope of this study but
can be found in [8]. Note that counterfactual analysis is performed after we have performed causal discovery to find the causal structure.
Pearl's counterfactual method follows the following steps:
1. Abduction: Once we have the true causal structure and have estimated the structural causal models, we use existing measured data for an individual case of interest, and compute all exogenous (noise) inputs. Computing the noise variables for individual cases is a vital aspect of counterfactual analysis.
2. Action: After computing the noise variables, we intervene (perform do-calculus) on the causal input variable of interest. Here we sever the relationship that the input causal variable had with its parents [6, 8]. Intervening on a variable means all inputs into that variable no longer affect it.
3. Prediction: We now choose counterfactual inputs and using the noise variables calculated for the individual case, we predict counterfactual outcomes. Ultimately, we want to identify a desirable outcome (such as a student passing the course) and the necessary counterfactual inputs.
For a detailed description of Pearl's counterfactual analysis see [6, 8].
### Counterfactual Requirements: Causal and Actionable
In order for counterfactual recommendations to be meaningful, they need to be causal and actionable. Causality is satisfied by the causal discovery step, where we uncover the causal structure. However, even though a feature may cause the outcome, it might not be actionable. For example, we may find that age is an important cause of an output. This type of feature is not actionable: we can't physically manipulate someone's age to change the output. Or a causal feature may have already occurred, and we cannot go back in time to manipulate it.
What can we do in these cases? We can find related proxy factors. For example, we can ask ourselves: why does the age variable cause this output? Is experience in something else the true cause? Or, for a feature that has already occurred, is there something it is currently related to that might also be causal?
### Causal Inference Steps
Based on the above, we identify the causal inference steps needed to provide personalized counterfactual recommendations for students (or for similar cases).
1. Perform causal discovery: uncover/discover the true data generating process, the causal structure in the data, the DAG.
2. Identify the causal question you want to answer. Average treatment effect and/or counterfactual?
3. Perform do-calculus (adjust, control, etc.) to estimate average treatment effects.
4. Perform counterfactual analysis to provide personalized recommendations.
## 4 Methods
The above was applied in two parts: first to synthetic data (Section 4.1) and then to real-life data (Section 4.2). A synthetic causal structure (and data) was generated with the aim of seeing how well the causal discovery methods perform on data with a **ground truth** causal structure. How well do these methods discover the true causal structure? We study the PC and GES algorithms here.
Second, CD and counterfactual analysis were then applied to real-life data. The major challenge with real-life data is that we have no ground truth causal structure. We use the PC algorithm for CD here.
### Synthetic Student Causal Structure
Synthetic data was generated mimicking student data as shown in Table 1. The simulated data represents student information such as grades, gender, age etc.
The _causal structure_ was then generated according to the chain causal structure shown in Figure 3 (\(x_{7}\to x_{3}\to y\)) and the structural causal model (SCM) shown in Equations 1 and 2. This data therefore simulates a final grade \(y\) as a function of the other variables.
\[x_{3}=\beta_{0}x_{7}+e_{1} \tag{1}\]
\[y=\beta_{0}+\beta_{1}x_{1}+\beta_{2}x_{2}+\beta_{3}x_{3}+\beta_{4}x_{4}+\beta_{5}x_{5}+\beta_{6}x_{6}+e_{2} \tag{2}\]
We also include two noise terms in the data generating process, \(e_{1}\sim\mathcal{N}(0,1)\) and \(e_{2}\sim\mathcal{N}(0,2)\). The \(\beta\) coefficients from 0 to 6 were arbitrarily selected as follows: 0.4, 0.6, 0.4, 0.6, 0.7, 0.4, and 0.4. As can be seen from Figure 3, it is important in the data generating process to make sure the noise terms are distinct; otherwise we introduce confounding. That is, if the same error term feeds both \(x_{3}\) and \(y\), it acts as a parent of both \(x_{3}\) and \(y\), introducing problems when applying causal discovery. Errors (or noise) in statistical models refer to all inputs into the data that are not accounted for in the measured features.
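A minimal sketch of this data generating process (ours; note that with this SCM the marginal of \(x_{3}\) follows from Equation 1 rather than the nominal values in Table 1):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000                                 # larger samples gave stable DAGs
x1 = rng.normal(50, 5, n)                 # grades
x2 = rng.normal(20, 1, n)                 # age
x4 = rng.binomial(1, 0.6, n)              # gender
x5 = rng.binomial(1, 0.3, n)              # bursary
x6 = rng.normal(70, 5, n)                 # grades
x7 = rng.normal(50, 5, n)                 # grades
e1, e2 = rng.normal(0, 1, n), rng.normal(0, 2, n)   # distinct noise sources
x3 = 0.4 * x7 + e1                                   # equation (1)
y = 0.4 + 0.6*x1 + 0.4*x2 + 0.6*x3 + 0.7*x4 + 0.4*x5 + 0.4*x6 + e2  # equation (2)
data = np.column_stack([x1, x2, x3, x4, x5, x6, x7, y])
```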
### Real-life Data
The real-life data was taken from Smith et al. [9], which records student performance data for a first-year engineering mechanics course; all details can be found there. The dataset comprised 39 input features, such as gender, age, ethnicity, degree of study, high school grades, and first-year engineering grades, and one target feature, the final engineering mechanics grade. There were 878 observations, i.e., students, giving a dataset of shape (878, 40).
### Causal Discovery Methods
As mentioned earlier, the aim was to study how using CD can be used for personalized student recommendations. We mainly applied the PC algorithm. Future studies will include other methods.
## 5 Results
### CD on Synthetic Student Data
The causal graphs produced by the PC and GES algorithms were identical, as shown in Figure 4. The results closely match the ground truth causal graph in Figure 3, except for the relationship between x7 and x3, shown as nodes 6 and 2 in the graph, respectively. This can be attributed to the constraint-based methods being conditional independence methods that can at best produce Markov equivalence classes.
To show how these methods are somewhat volatile, PC and GES were again applied to the dataset (without seed) to produce the DAG in Figure 5. Here we can see that the algorithms (again producing identical DAGs) estimate no relationship between x5 (node 4) and y (node 7). This shows that even for relatively simple causal structures, the models
\begin{table}
\begin{tabular}{|l|l|l|} \hline Feature & Description & Statistics \\ \hline x1 & Grades & \(\mu\)=50, \(\sigma\)=5 \\ x2 & Age & \(\mu\)=20, \(\sigma\)=1 \\ x3 & Grades & \(\mu\)=45, \(\sigma\)=6 \\ x4 & Gender & \(p\) = 0.6 \\ x5 & Bursary & \(p\) = 0.3 \\ x6 & Grades & \(\mu\)=70, \(\sigma\)=5 \\ x7 & Grades & \(\mu\)=50, \(\sigma\)=5 \\ \hline \end{tabular}
\end{table}
Table 1: Description of features used to generate the synthetic student data for the chain causal structure.
Figure 3: Causal structure (data generating process) for synthetic student data, based on the chain causal structure.
can generate incorrect causal structures. However, we further found that as we increased the sample size, the models produced correct DAGs each time.
What we can see from the synthetic data, therefore, is that as the data size grows the models become more trustworthy, but at low sample sizes incorrect DAGs tend to be produced.
### Student Performance Data
Here we apply CD and counterfactual analysis to real-life student data. Using the PC algorithm, we obtained the causal structure shown in Figure 6. The dataset comprised 39 input features and 1 target feature. The DAG represents all 40 features.
It is more valuable to isolate the feature we are interested in, namely the final student performance (node 39), to see what is directly influencing it. Note that Python indexes data from 0 and not 1, hence the target variable being node 39 and not 40. The reduced DAG is shown in Figure 7. From this initial DAG produced by the PC algorithm, we can see that a few relationships don't make sense. First of all, the final student performance, node 39, has arrows pointing **to** other features, nodes 16 and 34. This is impossible, since those features occurred in time before node 39: nodes 16 and 34 refer to grades obtained either earlier in the semester or previously in high school. In fact, no feature in the dataset occurred _after_ the target feature. This is helpful because it allows us to introduce a constraint into the PC algorithm in the form of a priori knowledge. Using this constraint, we are able to educate the algorithm as to which causal directions are allowed and which are forbidden as it generates the DAG.
Figure 4: Causal graph of synthetic student data using the PC and GES algorithms (identical).
Figure 5: Second causal graph of synthetic student data using the PC and GES algorithms (identical).
#### 5.2.1 Priori Knowledge
Because node 39 occurs temporally after all the features in the dataset, arrows _from node 39 to_ any other feature were forbidden, the PC algorithm was applied again, and the DAG in Figure 8 was generated. Here we see all the arrows pointing to node 39 or to each other, but no arrows pointing away from node 39, which is correct. This also shows how a domain expert might introduce causal relationships into the model.
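A sketch of this constraint, again assuming the causal-learn API and its default one-indexed node names "X1"..."X40" (so the target, the paper's node 39, is "X40"):

```python
from causallearn.utils.PCUtils.BackgroundKnowledge import BackgroundKnowledge

bk = BackgroundKnowledge()
bk.add_forbidden_by_pattern("X40", "X.*")   # no edge may point from the target to any feature
cg = pc(data, alpha=0.05, background_knowledge=bk)
```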
Analyzing the actual features from Figure 8, we see three causal features into the target feature (node 39). Nodes 13 and 16 refer to the mechanics and math test grades obtained earlier in the semester, and node 34 refers to mathematics grades in high school. These results are plausible. It makes perfect sense that the previous mechanics test grade (node 13) is a cause of the final mechanics grade. It further makes sense that mathematics has a causal influence on engineering mechanics, since it contains a substantial math component, namely linear algebra and calculus. What the DAG further suggests is that the mechanics grades (node 13) and high school math act as confounders, parents, of the math test (node 16) and final mechanics grades. This suggests that mathematical ability is a cause of both the previous mechanics grades and the final grades, which is plausible.
As an initial observation, this suggests that success in engineering mechanics requires high performance in both high school and first-year engineering mathematics. Therefore, in general, if we are advising a student on improving his/her mechanics marks, we can advise them to improve their earlier mechanics and mathematics grades. That is very general, however. The causal structure is not an end in itself: we now have insight into how to perform causal
Figure 6: Causal graph of student performance data generated using PC algorithm
Figure 7: Causal graph isolating the feature of interest. The graph was generated without incorporating priori constraints. Impossible causal directions are included here.
inference to estimate the true causal (treatment) effects of another feature on the final mechanics grades (Section 5.3) and to perform counterfactual analysis (5.4).
### Estimating Treatment Effects
We could now ask causal questions about any of the causal inputs into node 39. Let us consider how node 16, the math test, causally influences the final mechanics grade (node 39). To do this we need to perform causal inference: control/adjust for the confounders, nodes 13 and 34. Once we control for those features, we can estimate the true treatment effect of the previous math test on the final grade. Discovering the DAG has helped us identify which features to control for in order to estimate the true treatment effect of any of those variables on the final grades. This is shown next.
This can be done using a regression analysis which includes nodes 16, 13 and 34 as the independent variables and node 39 as the dependent variable. Note that if we only include node 16 as the independent variable, then the regression model will estimate a biased coefficient (treatment effect). Figure 9 below shows that all three input variables are significant (\(p\) < 0.05), but more importantly, the coefficient of 0.199 for March.Test.MATH1014 (math test, node 16) refers to the true treatment effect of the math test on final mechanics grades. Had we _not_ included the other two variables, the estimated treatment effect of the math test would have been approximately 0.6, which would have overestimated the causal effect. We now have the true causal effect of the math test on final mechanics grades.
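A sketch of this regression using statsmodels (the column names other than March.Test.MATH1014 are our placeholders, and the file name is hypothetical):

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("students.csv")  # hypothetical file with the columns below
# Adjust for the confounders (nodes 13 and 34) alongside the treatment (node 16).
X = sm.add_constant(df[["March.Test.MATH1014", "mech_test_13", "hs_math_34"]])
fit = sm.OLS(df["final_grade_39"], X).fit()
print(fit.summary())  # the math-test coefficient is its adjusted treatment effect
```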
The coefficient of 0.199 indicates that for every one-unit increase in the math test, there is approximately a 0.2 increase in the final mechanics grade. A very important point to note is that these are average treatment effects. We could therefore advise an individual student that the better they perform in the math test, the better they will perform in the final grade; but that would not necessarily be a personal recommendation. For this, we need to turn to counterfactual analysis, shown next. Although not performed here, we could easily use causal inference methods to also estimate the true causal effects of the mechanics test (node 13) or high school math (node 34) on final grades. For example, we can see that node 13 has causal flow directly to node 39 as well as through node 16. This includes a chain-type causal structure, which requires specific causal inference methods.
Figure 8: Causal graph of only feature of interest data generated using PC algorithm
Figure 9: Regression results for estimating causal effect of mechanics test on final mechanics grades.
### Counterfactual Recommendation
The previous section was valuable for estimating average treatment effects. The aim, though, is not only to provide average treatment effect information, but to provide personal recommendations. Here we turn to Pearl's counterfactual method discussed in Section 3. To illustrate how we generate a counterfactual, consider Figure 10, which is a reproduction of Figure 8, but now with the important noise terms included. Noise terms refer to all other causes of variance in a variable that are not accounted for in the measured/observed variables. They are vital for generating counterfactuals, because (1) they are unique to an individual student and (2) the noise is held fixed while the causal features are changed. The assumption is that the noise remains invariant while the counterfactuals are changed.
Step 1 of Pearl's method is to perform Abduction. Because we aim to manipulate all three nodes (13, 16 and 34), that is, generate counterfactuals using all three nodes, we sever their relationships with their parents. Therefore, in that case, we don't need to calculate their noise terms, only the noise term \(u_{1}\), feeding into node 39 (the final mechanics grade). After severing the parent relationship, we end up with the causal graph in Figure 11 and the structural causal model (SCM) in Equation 3.
\[y=c_{1}n_{13}+c_{2}n_{34}+c_{3}n_{16}+u_{1} \tag{3}\]
In order to calculate the noise term \(u_{1}\) for a particular student, we need to input the observed data into the SCM. We know \(y\), \(n_{13}\), \(n_{34}\), \(n_{16}\) as well as the coefficients (see Figure 9). We input that into the equation and calculate \(u_{1}\). This is the first step of Abduction.
To illustrate this, we select actual observed data for a random student from the population who is predicted to fail the course, i.e. at-risk. Note that all data has been normalized for computation and anonymizing purposes. See the student's normalized data in Table 2.
We now feed the observed data from Table 2 into Equation 3 and compute \(u_{1}\) to be -0.763.
\[y =c_{1}n_{13}+c_{2}n_{34}+c_{3}n_{16}+\mathbf{u_{1}} \tag{4}\] \[-1.29 =0.19\cdot-2.57+0.486\cdot 0.06+0.187\cdot(-0.365)+\mathbf{u_{1}}\] (5) \[\mathbf{u_{1}} =-0.763 \tag{6}\]
We have now computed the noise term for this specific student. We can now apply the final step of Prediction: to calculate counterfactual quantities. We use the noise term and input counterfactual input features to compute a
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Mechanics Test (13) & Math Test (16) & High School Math (34) & Final Grade (39) \\ \hline
0.06 & -2.57 & -0.365 & -1.29 \\ \hline \end{tabular}
\end{table}
Table 2: Randomly selected at-risk student data.
Figure 11: Causal graph when manipulating nodes 13, 16 and 34. Parent relationships are severed.
Figure 10: Causal graph reproduced for counterfactual analysis. Noise terms are included here.
counterfactual output \(y\). Recall that the aim is to manipulate the input causal features to change the student's result from failing to passing. This is an optimization exercise that goes beyond the scope of this study. The point is that we have three options (nodes 13, 16, 34), and multiple combinations of those input features, that can be changed to affect the output. Because the data is normalized, the passing value (50%) corresponds to a z-score of -0.901 in our dataset. The student we selected has a z-score of -1.29, which is below passing. Therefore, in order to increase this student's mechanics grade from -1.29 to -0.901, we could, for example, keep everything else the same and only change the mechanics test (node 13) from a z-score of 0.06 to approximately 0.9. As this shows, there are multiple options and combinations that could achieve the desired improvement from -1.29 to -0.901; finding the best one is an optimization exercise. We can therefore use these results to advise the student on how to improve in order to pass the course.
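The abduction and prediction steps for this student reduce to a few lines of arithmetic; the following sketch (ours) reproduces the numbers above:

```python
c13, c34, c16 = 0.486, 0.187, 0.19               # regression coefficients (Fig. 9)
n13, n16, n34, y = 0.06, -2.57, -0.365, -1.29    # the at-risk student (Table 2)

# Abduction: recover the student's individual noise term from equation (3).
u1 = y - (c13*n13 + c34*n34 + c16*n16)           # = -0.763

# Prediction: smallest change to the mechanics test (node 13) that lifts
# the counterfactual grade to the passing threshold of -0.901.
passing = -0.901
n13_needed = (passing - (c34*n34 + c16*n16) - u1) / c13
print(round(u1, 3), round(n13_needed, 2))        # -0.763, ~0.86
```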
## 6 Discussion
The study presented in this paper explored the application of causal discovery methods and counterfactual analysis to student performance data. Two main contributions were made: (1) the use of causal discovery to reveal the underlying causal structure of the data, enabling informed causal inference, and (2) the implementation of Pearl's counterfactual method for generating personalized recommendations.
There are, however, some limitations to the current study. Although the features selected by causal discovery as causal inputs (nodes 13, 16 and 34) are indeed plausible, they are not actionable: the student cannot act on those specific features. These three features have already occurred, and so the student cannot go back and change his/her mechanics and math test grades or high school math grades. Nevertheless, as mentioned in Section 3.1, we can use proxies for the actual feature and, in this case, advise the student to focus a substantial amount of time on their math skills, specifically those related to mechanics. Identifying actionable features or proxy features is important. In future studies, we could make sure to include later assessments that are actionable.
A further challenge in using causal discovery is that we do not know the ground truth causal structure and SCMs. For this we need domain experts who may know the true causal relationships. Recall that, at best, the PC and GES algorithms produce Markov equivalence classes; a domain expert can then provide known causal relationships to the CD algorithm. Unless there are obvious causal relationships that are easy to spot, it is advised that CD algorithms not be used on their own without human experts guiding them.
Another major challenge, and a potentially large area for future work, is that this study focuses on historical data. It has not been applied in real time; that is, we have not yet applied it to a student early in the semester to provide personal recommendations on how to switch from being at risk to passing. This is perhaps the greatest challenge in counterfactual recommendations. Say we go through the steps outlined in this study and generate counterfactual recommendations for a student. For example, a month into the semester we identify an at-risk student, meaning we _predict_ that this student will fail at the end of the semester. We then use all relevant data and generate counterfactual recommendations that, if followed, _predict_ that the student will pass. The challenge here is two-fold. The first is: how do we ensure that he/she follows the recommendation fully? The student may follow it for some time and then fade away; then it is difficult to know whether our recommendations worked. The second challenge is: how do we keep all other things constant? A counterfactual assumes that we change only that thing (or those things), while all else remains the same. Say we recommended that the student improve her mathematics skills. She begins focusing on that, but perhaps by doing so she focuses less on other important things. This might change the data distribution (i.e., the causal structure) that we assumed when performing causal discovery and counterfactual analysis. This is therefore a challenging problem. But we believe this is a very promising set of methods that can substantially help students, and we need to invest more focus and study into solving these problems. Nevertheless, even though it is challenging to validate how effective these methods are, being able to identify the root cause of a student's poor performance, and to know that it is causal, is still of great value. This field is highly promising, not only for students but for any field where we desire to know the causal influences and generate counterfactual recommendations.
|
2306.17485 | Detection-segmentation convolutional neural network for autonomous
vehicle perception | Object detection and segmentation are two core modules of an autonomous
vehicle perception system. They should have high efficiency and low latency
while reducing computational complexity. Currently, the most commonly used
algorithms are based on deep neural networks, which guarantee high efficiency
but require high-performance computing platforms. In the case of autonomous
vehicles, i.e. cars, but also drones, it is necessary to use embedded platforms
with limited computing power, which makes it difficult to meet the requirements
described above. A reduction in the complexity of the network can be achieved
by using an appropriate: architecture, representation (reduced numerical
precision, quantisation, pruning), and computing platform. In this paper, we
focus on the first factor - the use of so-called detection-segmentation
networks as a component of a perception system. We considered the task of
segmenting the drivable area and road markings in combination with the
detection of selected objects (pedestrians, traffic lights, and obstacles). We
compared the performance of three different architectures described in the
literature: MultiTask V3, HybridNets, and YOLOP. We conducted the experiments
on a custom dataset consisting of approximately 500 images of the drivable area
and lane markings, and 250 images of detected objects. Of the three methods
analysed, MultiTask V3 proved to be the best, achieving 99% mAP_50 for
detection, 97% MIoU for drivable area segmentation, and 91% MIoU for lane
segmentation, as well as 124 fps on the RTX 3060 graphics card. This
architecture is a good solution for embedded perception systems for autonomous
vehicles. The code is available at: https://github.com/vision-agh/MMAR_2023. | Maciej Baczmanski, Robert Synoczek, Mateusz Wasala, Tomasz Kryjak | 2023-06-30T08:54:52Z | http://arxiv.org/abs/2306.17485v1 | # Detection-segmentation convolutional neural network for autonomous vehicle perception
###### Abstract
Object detection and segmentation are two core modules of an autonomous vehicle perception system. They should have high efficiency and low latency while reducing computational complexity. Currently, the most commonly used algorithms are based on deep neural networks, which guarantee high efficiency but require high-performance computing platforms. In the case of autonomous vehicles, i.e. cars, but also drones, it is necessary to use embedded platforms with limited computing power, which makes it difficult to meet the requirements described above. A reduction in the complexity of the network can be achieved by using an appropriate: architecture, representation (reduced numerical precision, quantisation, pruning), and computing platform. In this paper, we focus on the first factor - the use of so-called detection-segmentation networks as a component of a perception system. We considered the task of segmenting the drivable area and road markings in combination with the detection of selected objects (pedestrians, traffic lights, and obstacles). We compared the performance of three different architectures described in the literature: MultiTask V3, HybridNets, and YOLOP. We conducted the experiments on a custom dataset consisting of approximately 500 images of the drivable area and lane markings, and 250 images of detected objects. Of the three methods analysed, MultiTask V3 proved to be the best, achieving 99% \(mAP_{50}\) for detection, 97% MIoU for drivable area segmentation, and 91% MIoU for lane segmentation, as well as 124 fps on the RTX 3060 graphics card. This architecture is a good solution for embedded perception systems for autonomous vehicles. The code is available at: [https://github.com/vision-agh/MMAR_2023](https://github.com/vision-agh/MMAR_2023).
detection-segmentation convolutional neural network, autonomous vehicle, embedded vision, YOLOP, HybridNets, MultiTask V3
## I Introduction
Perception systems in mobile robots, including self-driving cars and unmanned aerial vehicles (UAVs), use sensors like cameras, LiDAR (Light Detection and Ranging), radar, IMU (Inertial Measurement Unit), GNSS (Global Navigation Satellite Systems) and more to provide crucial information about the vehicle's position in 3D space and detect relevant objects (e.g. cars, pedestrians, cyclists, traffic lights, etc.). Image and LiDAR data processing involve two main tasks: detection, which identifies objects and labels them with bounding boxes or masks, and segmentation, which assigns labels to each pixel based on its representation in the image. Instance segmentation assigns different labels to objects belonging to the same class (e.g. different cars), which allows all objects to be correctly identified and tracked. Typically, both tasks are performed by different types of deep convolutional neural networks. For detection, networks from the YOLO family (_You Only Look Once_[1]) are the most commonly used solution. For segmentation, networks based on the CNN architecture are used, such as U-Net [2] and fully convolutional networks for semantic segmentation, and Mask R-CNN for instance segmentation. It is also worth mentioning the increasing interest in transformer neural networks in this context [3]. However, the use of two independent models has a negative impact on the computational complexity and energy efficiency of the system. For this reason, network architectures that perform both of the above tasks simultaneously are being researched. There are two approaches that can be used to solve this challenge: using instance segmentation networks or detection-segmentation networks. Instance segmentation networks are a special class of segmentation networks and require the preparation of a training dataset that is common to all detected objects. In addition, their operation is rather complex, and only part of the results are used for self-driving vehicles (distinguishing instances of classes such as road, lane, etc. is unnecessary for further analysis and often difficult to define precisely). Detection-segmentation networks consist of a common part (called the backbone) and several detection and segmentation heads. This architecture allows the preparation of a separate training dataset for detection and often several subsets for segmentation (e.g. a separate one for lane and road marking segmentation). This allows the datasets to be scaled according to how important the accuracy of the module is. In addition, the datasets used can contain independent sets of images, which greatly simplifies data collection and labelling. The three architectures discussed so far (detection, segmentation, and detection-segmentation) are shown in Figure 1. In addition, limiting the number of classes reduces the time needed for post-processing, which involves filtering the resulting detections, e.g. using the NMS (Non-Maxima Suppression) algorithm. Segmenting the image into only selected categories can also reduce inference time and increase accuracy. All these arguments make detection-segmentation networks a good solution for embedded perception systems for autonomous vehicles.

Fig. 1: Illustration of the discussed network architectures.
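To make the shared-backbone idea concrete, the following minimal PyTorch sketch shows a network with a single feature extractor feeding separate detection and segmentation heads. The module names, head designs, and output shapes are illustrative placeholders, not the actual MultiTask V3, HybridNets, or YOLOP implementations.

```python
# Minimal sketch of a detection-segmentation network with a shared backbone.
# All module names and shapes are illustrative, not taken from the papers.
import torch
import torch.nn as nn
import torchvision

class DetectionSegmentationNet(nn.Module):
    def __init__(self, num_det_outputs: int = 8, num_seg_classes: int = 2):
        super().__init__()
        # Shared feature extractor: a ResNet-18 trunk without the classifier.
        resnet = torchvision.models.resnet18(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # (B, 512, H/32, W/32)
        # Detection head: predicts a dense grid of box/class outputs.
        self.det_head = nn.Conv2d(512, num_det_outputs, kernel_size=1)
        # Segmentation head: per-pixel class logits, upsampled to input size.
        self.seg_head = nn.Sequential(
            nn.Conv2d(512, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, num_seg_classes, kernel_size=1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        features = self.backbone(x)  # computed once, shared by both heads
        return self.det_head(features), self.seg_head(features)

model = DetectionSegmentationNet()
det_out, seg_out = model(torch.randn(1, 3, 320, 512))
print(det_out.shape, seg_out.shape)  # (1, 8, 10, 16) and (1, 2, 320, 512)
```

The key design point is that the backbone forward pass is computed once per frame; each additional head only adds the cost of a few small convolutions, which is what keeps the combined network cheaper than two independent models.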
In this paper, we compared the performance of three detection-segmentation networks: MultiTask V3 [4], HybridNets [5], and YOLOP [6]. We conducted the experiments on a custom dataset, recorded on a mock-up of a city. The road surface and road markings were segmented, and objects such as pedestrians, traffic lights, and obstacles were detected. To the best of our knowledge, this is the first comparison of these methods presented in the scientific literature.
The rest of the paper is structured as follows. Section II discusses the most important works on the use of neural networks for simultaneous object detection and segmentation. The architectures of the tested networks are then presented in Section III. The methods for training the neural networks are described in Section IV. The results obtained are summarised in Section V. The paper ends with conclusions and a discussion of possible future research.
## II Related works
Many different methods have been described in the scientific literature for the detection of drivable area and road markings, as well as for the detection of objects, e.g. pedestrians, cars, traffic signs, traffic lights, etc. One of the solutions available is the use of deep neural networks. These can be divided into detection, segmentation, and detection-segmentation networks.
Detection networks are designed to locate, classify and label existing objects in any image using a bounding box. This is a set of coordinates of the corners of the rectangles that mark the detected objects in the image. A conventional method of object detection is based on proposing regions and then classifying each proposal into different object categories. This includes network architectures based on regions with convolutional neural networks (R-CNN) [7]. Another approach considers object detection as a regression or classification problem in order to directly obtain the final results (categories and locations). These include, among others, the YOLOv7 network architectures [1].
Segmentation networks are based on an encoder-decoder architecture. They are used to classify each pixel in the image. Two types of segmentation can be distinguished: semantic and instance. A representative example of semantic segmentation is U-Net [2]. The encoder module uses convolution and pooling layers to perform feature extraction. The decoder module, in turn, recovers spatial details from the sub-resolution features while predicting the object labels. A standard choice for the encoder module is a lightweight CNN backbone, such as GoogLeNet or a revised version of it, namely Inception-v3 [8]. To improve the accuracy and efficiency of semantic segmentation networks, multi-branch architectures have been proposed. They allow high-resolution segmentation of objects in the image. To this end, multi-branch networks introduce a fusion module to combine the output of the encoding branches. This can be a Feature Fusion module in which the output features are joined by concatenation or addition, an Aggregation Layer (BiSeNet V2 [9]), a Bilateral Fusion module (DDRNet [10]) or a Cascade Feature Fusion Unit (ICNet [11]). Moreover, there is growing research interest in transformer-based neural networks for object detection and segmentation, such as DETR [12] and SegFormer [13]. For segmentation, only a few transformer architectures have been proposed so far, while for object detection there are many solutions, among which transformer-based methods achieve the best performance. Vision transformers offer robust, unified, and even simpler solutions for various tasks. Compared to CNN approaches, most transformer-based approaches have simpler pipelines but stronger performance. However, transformer-based methods require a lot of training data.
Many dedicated solutions require both detection and segmentation of objects in the image. It should be noted that once full segmentation (i.e. for all object classes under consideration) has been performed, there is no need to implement detection - the bounding boxes can be obtained from the masks of individual objects. However, networks implementing accurate multi-class semantic segmentation or instance segmentation are characterised by high computational complexity, as highlighted in the paper [14]. The authors showed that the performance of the three most accurate networks did not exceed 24 fps (frames per second) on an RTX 3090 and 12 fps on a GTX 1080 Ti graphics card. This shows that for this type of network, achieving real-time processing (60 fps) on an embedded computing platform is challenging. Hence the idea of combining fast detection with segmentation limited to a few classes with relatively little variation (such as roadway, road markings, or vegetation/buildings). A key feature of this type of solution is the encoder common to both functionalities. This approach makes it possible to run deep networks on embedded devices equipped with computing chips that consume less power but also have less computing power. Furthermore, as will be shown later, the process of training a detection-segmentation network is easier and faster than that of an alternative solution based on a segmentation network only. In the papers [15, 16, 4, 17, 5], detection-segmentation network architectures have been proposed that currently achieve the
best results. The training process typically uses the following datasets: _KITTI_, _Cityscapes_, VOC2012 or _BDD100k_[18, 19, 20, 21].
When pre-selecting the appropriate solutions for the experiments, we took into account the diversity of the proposed architectures, the fulfilment of the requirements related to the FPT'22 competition [22], as well as the possibility of quantising and accelerating the network on embedded computing platforms, i.e. eGPU (embedded Graphics Processing Unit) and SoC FPGA (System on Chip Field Programmable Gate Array). Therefore, we decided to use the following three networks in our research: MultiTask V3 [4], HybridNets [5], and YOLOP [6].
## III The considered detection-segmentation neural networks
The MultiTask V3 network [4] is a model proposed by the developers of the Vitis AI (AMD Xilinx) platform for users using neural networks on SoC FPGA platforms. A scheme of the MultiTask V3 neural network architecture is shown in Figure 2. It allows five tasks to be performed simultaneously - detection, three independent image segmentations, and depth estimation. The backbone of the network, which determines the underlying feature vector, is based on the ResNet-18 convolutional neural network. Subsequent features are extracted using encoders and convolutional layer blocks. Branches responsible for a given part of the network then generate the corresponding output using convolution, ReLU activation operations, and normalization. Due to the large number of tasks to be performed, the network was trained to segment road markings, lanes (including direction), and objects (pedestrians, obstacles, and traffic lights) separately. Detection was performed on the same set of objects. The model was trained using only our own custom datasets, which were transformed into the format recommended by the network developers. The resulting network processes images with a resolution of \(512\times 320\) pixels. In addition, thanks to the model quantization tools, it is possible to reduce the precision and run the network on SoC FPGA platforms using DPUs (Deep Learning Processor Units). The performance on the original _BDD100k_ dataset [21] was not given, as the network was not previously described in any scientific paper.
The second detection-segmentation neural network considered is YOLOP [6]. A scheme of the architecture is shown in Figure 3. It performs three separate tasks within a single architecture: detection of objects in the road scene, segmentation of the drivable area, and segmentation of road markings. The network consists of a common encoder and three decoders, with each decoder dedicated to a separate task. The drivable area represents all lanes in which the vehicle is allowed to move; opposite lanes were not taken into account. The network was originally trained on the _BDD100k_ dataset [21]. To reduce memory requirements, the images were scaled from a resolution of \(1280\times 720\times 3\) to a resolution of \(640\times 384\times 3\). The network achieved a \(mAP_{50}\) score for single-class detection (cars) of 76.5%, a drivable area segmentation mIoU of 91.5%, and a lane line segmentation mIoU of 70.5%.
The HybridNets [5] network is another example of a simultaneous segmentation and detection model. Like YOLOP, HybridNets only performs object detection and segmentation of road markings and the drivable area (without considering lane direction). A scheme of the architecture is shown in Figure 4. It does not have the semantic segmentation and depth estimation branches available in MultiTask V3. The network consists of four elements: a feature extractor in the form of EfficientNet V2 [23], a neck in the form of BiFPN [24], and two branches, one for a detection head similar to YOLOv4 [25] and the other for segmentation consisting of a series of convolutions and fusion of the outputs of successive layers of the neck. The network was initially trained on the _BDD100k_ dataset [21], whose images were scaled to a size of \(640\times 384\times 3\). It achieved a \(mAP_{50}\) for single-class detection (cars) of 77.3%, a drivable area segmentation mIoU of 90.5%, and a lane line segmentation mIoU of 31.6%.
## IV Experiments performed
A custom training dataset was prepared to compare the above-mentioned neural network models. It was divided into three subsets containing objects (pedestrian figures, obstacles,
traffic lights), road markings, and drivable area, respectively. The subsets were prepared based on the collected recordings from the city mock-up, which was constructed according to the rules of the FPT'22 competition [22]. Subsequently, labels were applied to the images obtained from the recordings. The road markings dataset was prepared semi-automatically by pre-selecting a threshold and performing binarisation. Annotations were then prepared for all sets using the Labelme software [26]. The resulting label sets were adapted to the formats required by the tools designed to train the aforementioned networks. The final dataset consisted of 500 images of the city mock-up with road markings, 500 images with the drivable area, and 250 images with objects. The size of the dataset was dictated by the small environment with few changes in lighting and camera angles, as the trained models were intended to be used only on the given mock-up. The prepared datasets were divided into training and validation subsets in an 80/20 ratio. The validation set was later used as the test set. This decision was made because the prepared dataset was relatively small (but still sufficient to properly train the models, as shown in Figure 6). An example of input data from a training set is shown in Figure 5.

Fig. 2: Scheme of the MultiTask V3 neural network architecture.

Fig. 3: Scheme of the YOLOP neural network architecture.

Fig. 4: Scheme of the HybridNets neural network architecture.
In the case of the MultiTask V3 network, a path to the prepared dataset was passed to the training program. The application managed the training sets independently, so it was possible to run the training procedure from start to finish on all sets. The network was trained using the default hyperparameters provided by the developers. The base learning rate was set to 0.01 and the optimiser was Stochastic Gradient Descent (SGD). Training included data augmentation in the form of random mirroring of the input images, photometric distortion, and random image cropping. The model was trained with a batch size of 16. As the MultiTask V3 network also performs object segmentation, the maximum number of epochs was set to the highest of all the models considered: a value of 450 epochs was chosen, after which no significant increase in validation results was observed.
The YOLOP training program did not allow different parts of the model to be trained simultaneously with independent sets. As the segmentation sets were different from the detection set, it was necessary to split the network training procedure. Training began with the backbone layers and the upper detection layers (the segmentation layers were frozen). Once this was completed, the layers responsible for segmentation were unfrozen, the remaining layers were frozen, and the training procedure was restarted. The network was trained using the default hyperparameters provided by the developers. The base learning rate was set to 0.01 and the optimiser was the Adam algorithm. Training included data augmentation in the form of random changes in image perspective and random changes in the image's colour hue, saturation, and value. The model was trained with a batch size of 2. Training was stopped after 390 epochs, as the validation results did not improve in the following steps.
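This freeze-then-unfreeze strategy can be sketched in a few lines of PyTorch. The attribute names (`backbone`, `det_head`, `seg_head`) refer to the illustrative multi-head model from the earlier sketch, not the actual YOLOP module names, and the training loops themselves are omitted.

```python
# Sketch of the two-stage training strategy: freeze one set of heads,
# train, then swap. `model` is assumed to be a multi-head network like
# the DetectionSegmentationNet sketch above.
import torch

def set_requires_grad(module, flag: bool):
    # Enable or disable gradient updates for every parameter in a module.
    for p in module.parameters():
        p.requires_grad = flag

# Stage 1: train backbone + detection head, freeze the segmentation head.
set_requires_grad(model.backbone, True)
set_requires_grad(model.det_head, True)
set_requires_grad(model.seg_head, False)
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=0.01
)
# ... run the detection training loop here ...

# Stage 2: freeze backbone + detection head, unfreeze the segmentation head.
set_requires_grad(model.backbone, False)
set_requires_grad(model.det_head, False)
set_requires_grad(model.seg_head, True)
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=0.01
)
# ... run the segmentation training loop here ...
```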
As with YOLOP, the HybridNets training program does not allow simultaneous training with two independent data sets, so a similar training strategy was used. First, the backbone and the detection branch were trained. The default hyperparameter settings provided by the developers were used, including the AdamW optimiser. Only two parameters were changed: a batch size of 4 and an initial learning rate of 0.001. The learning rate for detection training was changed because starting with the default value of 0.0001 did not show promising results. After 150 epochs, when no further performance improvement was observed on the validation set, training was stopped, the backbone and detection branches were frozen, and training was started on the segmentation set. This time the default hyperparameters were kept, including the learning rate of 0.0001. The segmentation branch was trained for 100 epochs until no improvement in performance was observed. In total, the network was trained for 250 epochs. Both branches were trained using the default data augmentation provided by the researchers, in the form of left-right flips, changes of hue, rotation, shear, and translation.
## V Results and Discussion
Figure 6 shows the results of the considered neural networks in terms of object detection, drivable area segmentation, and road marking segmentation for a view containing a straight road. To verify the effectiveness of the selected detection-segmentation neural network models, we compared the performance of each single-task scheme separately, as well as the multi-task scheme.
Fig. 5: Examples of training sets. Set (b) was generated for the MultiTask V3 network only, and sets (a), (c) and (d) for all models.

Table I shows the performance of the models on the NVIDIA GeForce GTX 1060M and NVIDIA GeForce RTX 3060 graphics cards. It can be seen that the YOLOP and MultiTask V3 networks process data in real time at comparable resolutions, while HybridNets is slightly slower. It should be noted that the original implementation of HybridNets was used. Unlike the YOLOP and MultiTask V3 models, it makes extensive use of subclassing to implement most of the layers used in the network. This may cause large discrepancies in the inference speed of the network compared to other models. Table II summarises the input image resolution and computational complexity of the selected neural networks. MultiTask V3 has the highest FLOPS value, especially when normalised with respect to the input image resolution, and the highest number of parameters. On the other hand, it achieved the best performance on both GPUs, possibly due to its highly optimised parallel implementation. We then performed an evaluation to assess the performance of each task: object detection, and drivable area and lane segmentation. We considered the object detection performance of the three models on a custom dataset. As shown in Table III, we use \(mAP_{50}\), \(mAP_{70}\), and \(mAP_{75}\) as the evaluation metrics for detection accuracy. For YOLOP and MultiTask V3, the \(mAP_{50}\) score is above 95%, proving that both networks have been trained successfully. For MultiTask V3, the score does not change much as the IoU (Intersection over Union) acceptance level increases, while for YOLOP it decreases slightly. This result shows that the detections made by MultiTask V3 are very similar to those provided by the validation dataset, while YOLOP's detections are close to them but do not overlap perfectly. The \(mAP_{50}\) score for the HybridNets architecture is about 84%. This score is lower than for the previous two architectures but still allows for acceptable detection accuracy. We used IoU and mIoU (mean IoU) as evaluation metrics for drivable area segmentation and lane segmentation accuracy. A comparison of the drivable area segmentation results for MultiTask V3, YOLOP and HybridNets is shown in Table IV. Note that one of the requirements of the FPT'22 competition is left-hand traffic. It can be seen that the best performance is achieved by the MultiTask V3 network. However, the other neural networks also perform very well, with an accuracy of no less than 84%. A high IoU score for the drivable area class for all networks shows that the predicted segmentations are almost the same as those in the validation dataset. Achieving such high results was predictable, as the drivable area surfaces are large, simple in shape, and uniform in colour. It is therefore relatively easy to distinguish them from the background. As the background is classified as any pixel not belonging to the drivable area class, the results obtained are even higher.
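For reference, the IoU and mIoU metrics used above can be computed from integer label masks as in the following sketch; the example masks and class count are made up for illustration.

```python
# Sketch of per-class IoU and mIoU for segmentation, computed from
# integer label masks of shape (H, W).
import numpy as np

def iou_per_class(pred: np.ndarray, target: np.ndarray, num_classes: int):
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        ious.append(inter / union if union > 0 else np.nan)  # absent class: undefined
    return ious

def mean_iou(pred, target, num_classes):
    return np.nanmean(iou_per_class(pred, target, num_classes))

pred = np.array([[0, 0, 1], [1, 1, 1]])
gt   = np.array([[0, 1, 1], [1, 1, 0]])
print(mean_iou(pred, gt, num_classes=2))  # (1/3 + 3/5) / 2 = 0.4667
```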
## VI Conclusion

In this paper, we compared three detection-segmentation neural networks (MultiTask V3, HybridNets, and YOLOP) on a custom dataset recorded on a mock-up of a city built according to the requirements of the FPT'22 competition. The dataset was created solely for training models to be used on the city mock-up. Due to the constant environmental factors and relatively few corner cases (such as intersections, turns, etc.), there was no need to obtain more data. However, for real-world applications, more work should be done on preparing the dataset. It should include more data, covering different locations and environments (lighting, weather factors, etc.), to make the models reliable in diverse conditions. The results obtained confirm the high attractiveness of this type of network: good detection and segmentation accuracy combined with real-time performance. Moreover, the training of these networks is simpler, since certain parts can be trained independently, even on separate datasets. Of the three methods analysed, MultiTask V3 proved to be the best, obtaining 99% \(mAP_{50}\) for detection, 97% MIoU for drivable area segmentation, and 91% MIoU for lane segmentation, as well as 124 fps on the RTX 3060 graphics card. This architecture is a good solution for embedded perception systems for autonomous vehicles. As part of future work, we plan to focus on several further steps in building an embedded perception system based on a deep convolutional neural network. First, we want to perform quantisation and pruning of the analysed network architectures to see how they affect efficiency and computational complexity. Next, we will run the networks on an eGPU (e.g. Jetson Nano) and an SoC FPGA (e.g. Kria from AMD Xilinx), and compare them on these platforms in terms of performance and power consumption. It is worth noting that initial tests on an eGPU with MultiTask V3 and YOLOP have shown that MultiTask V3 provides faster inference while consuming less energy. In the final step, we will add a control system to the selected perception system, place the selected computational system on a model of an autonomous vehicle, and test its performance on the created mock-up. Second, we will attempt to use _weakly supervised learning_ and _self-supervised learning_ methods, which, in the case of an atypical, custom dataset, would allow a significant reduction in the effort of labelling the learning data. Third, we also want to consider adding modules for depth estimation and optical flow, as elements often used in autonomous vehicle perception systems.
## Acknowledgment
The work presented in this paper was supported by the AGH University of Krakow project no. 16.16.120.773 and by the programme "Excellence initiative - research university" for the AGH University of Krakow.
|
2301.13644 | Exploring QSAR Models for Activity-Cliff Prediction | Pairs of similar compounds that only differ by a small structural
modification but exhibit a large difference in their binding affinity for a
given target are known as activity cliffs (ACs). It has been hypothesised that
quantitative structure-activity relationship (QSAR) models struggle to predict
ACs and that ACs thus form a major source of prediction error. However, a study
to explore the AC-prediction power of modern QSAR methods and its relationship
to general QSAR-prediction performance is lacking. We systematically construct
nine distinct QSAR models by combining three molecular representation methods
(extended-connectivity fingerprints, physicochemical-descriptor vectors and
graph isomorphism networks) with three regression techniques (random forests,
k-nearest neighbours and multilayer perceptrons); we then use each resulting
model to classify pairs of similar compounds as ACs or non-ACs and to predict
the activities of individual molecules in three case studies: dopamine receptor
D2, factor Xa, and SARS-CoV-2 main protease. We observe low AC-sensitivity
amongst the tested models when the activities of both compounds are unknown,
but a substantial increase in AC-sensitivity when the actual activity of one of
the compounds is given. Graph isomorphism features are found to be competitive
with or superior to classical molecular representations for AC-classification
and can thus be employed as baseline AC-prediction models or simple
compound-optimisation tools. For general QSAR-prediction, however,
extended-connectivity fingerprints still consistently deliver the best
performance. Our results provide strong support for the hypothesis that indeed
QSAR methods frequently fail to predict ACs. We propose twin-network training
for deep learning models as a potential future pathway to increase
AC-sensitivity and thus overall QSAR performance. | Markus Dablander, Thierry Hanser, Renaud Lambiotte, Garrett M. Morris | 2023-01-31T13:56:55Z | http://arxiv.org/abs/2301.13644v1 | # Exploring QSAR Models for Activity-Cliff Prediction
###### Abstract
**Introduction and Methodology:** Pairs of similar compounds that only differ by a small structural modification but exhibit a large difference in their binding affinity for a given target are known as activity cliffs (ACs). It has been hypothesised that QSAR models struggle to predict ACs and that ACs thus form a major source of prediction error. However, a study to explore the AC-prediction power of modern QSAR methods and its relationship to general QSAR-prediction performance is lacking. We systematically construct nine distinct QSAR models by combining three molecular representation methods (extended-connectivity fingerprints, physicochemical-descriptor vectors and graph isomorphism networks) with three regression techniques (random forests, k-nearest neighbours and multilayer perceptrons); we then use each resulting model to classify pairs of similar compounds as ACs or non-ACs and to predict the activities of individual molecules in three case studies: dopamine receptor D2, factor Xa, and SARS-CoV-2 main protease.
**Results and Conclusions:** We observe low AC-sensitivity amongst the tested models when the activities of both compounds are unknown, but a substantial increase in AC-sensitivity when the actual activity of one of the compounds is given. Graph isomorphism features are found to be competitive with or superior to classical molecular representations for AC-classification and can thus be employed as baseline AC-prediction models or simple compound-optimisation tools. For general QSAR-prediction, however, extended-connectivity fingerprints still consistently deliver the best performance. Our results provide strong support for the hypothesis that indeed QSAR methods frequently fail to predict ACs. We propose twin-network training for deep learning models as a potential future pathway to increase AC-sensitivity and thus overall QSAR performance.
**Keywords:** QSAR modelling; Activity cliffs; Activity cliff prediction; Machine learning; Deep learning; Molecular representation; Physicochemical descriptors; Extended-connectivity fingerprints; Graph isomorphism networks; Binding affinity prediction
## Introduction
Activity cliffs (ACs) are pairs of small molecules that exhibit high structural similarity but at the same time show an unexpectedly large difference in their binding affinity against a given pharmacological target [64, 67, 68, 69, 14, 47, 63]. The existence of ACs directly defies the intuitive idea that chemical compounds with similar structures should have similar activities, often referred to as the _molecular similarity principle_. An example of an AC between two inhibitors of blood coagulation factor Xa [43] is depicted in Figure 1; a small chemical modification involving the addition of a hydroxyl group leads to an increase in inhibition of almost three orders of magnitude.
For medicinal chemists, ACs can be puzzling and confound their understanding of structure-activity relationships (SARs) [19, 67, 77]. ACs reveal small compound-modifications with large biological impact and thus represent rich sources of pharmacological information. Mechanisms by which a small structural transformation can give rise to an AC include a drastic change in 3D-conformation and/or the switching to a different binding mode or even binding site. ACs form discontinuities in the SAR-landscape and can therefore have a crucial impact on the success of lead-optimisation programmes. While knowledge about ACs can be powerful when trying to escape from flat regions of the SAR-landscape, their presence can be detrimental in later stages of the drug development process, when multiple molecular properties beyond mere activity need to be balanced carefully to arrive at a safe and effective compound [67, 14].
In the field of computational chemistry, ACs are suspected to form one of the major roadblocks for successful quantitative structure-activity relationship (QSAR) modelling [63, 14, 26, 47]; abrupt changes in potency are expected to negatively influence machine learning algorithms for pharmacological activity prediction. During the development of QSAR models, ACs are sometimes dismissed as measurement errors [49], but simply removing ACs from a training data set can result in a loss of precious SAR-information [15].
Golbraikh et al. [26] developed the MODI metric to quantify the smoothness of the SAR-landscape of binary molecular classification data sets and showed that the SAR-landscape smoothness is a strong determinant for downstream QSAR-modelling performance. In a related work, Sheridan et al. [63] found that the density of ACs in a molecular data set is strongly predictive of its overall modelability by classical descriptor- and fingerprint-based QSAR methods. Furthermore, they found that such methods incur a significant drop in performance when the test set is restricted to "cliffy" compounds that form a large number of ACs. In a more extensive study, van Tilborg et al. [75] observed a similar drop in performance when testing classical and graph-based QSAR techniques on compounds involved in ACs. Notably,
in both studies this performance drop was also observed for highly nonlinear and adaptive deep learning models. In fact, van Tilborg et al. report that descriptor-based QSAR methods even outperform more complex deep learning models on "cliffy" compounds associated with ACs. This runs counter to earlier hopes expressed in the literature that the approximation power of deep neural networks might ameliorate the problem of ACs [79].

Figure 1: Example of an activity cliff (AC) for blood coagulation factor Xa. A small structural transformation in the upper compound leads to an increase in inhibitory activity of almost three orders of magnitude. Both compounds were identified in the same ChEMBL assay with ID 658338.
While these works provide valuable insights into the detrimental effects of SAR discontinuity on QSAR models, they consider ACs mainly indirectly by focussing on _individual_ compounds involved in ACs. Arguably, a distinct and more natural approach would be to investigate ACs directly at the level of compound _pairs_. This approach has been followed in the AC-prediction field which is concerned with developing techniques to classify whether a pair of similar compounds forms an AC or not. An effective AC-prediction method would be of high value for drug development with important applications in rational compound optimisation and automatic SAR-knowledge acquisition.
The AC-prediction literature is still very thin compared to the QSAR-prediction literature. An attempt to conduct an exhaustive literature review on AC-prediction techniques revealed a total number of 15 methods [4, 7, 10, 27, 29, 32, 34, 39, 44, 51, 52, 54, 57, 71], all of which have been published since 2012. Current AC-prediction methods are often based on creative ways to extract features from pairs of molecular compounds in a manner suitable for standard machine learning pipelines. For example, Horvath et al. [29] used condensed graphs of reactions [28, 35], a representation technique originally introduced for modelling of chemical reactions, to encode pairs of similar compounds and subsequently predict ACs. Another method was recently described by Iqbal et al. [34] who investigated the abilities of convolutional neural networks operating on 2D images of compound pairs to distinguish between ACs and non-ACs. Interestingly, none of the AC-prediction methods we identified employ feature extraction techniques built on modern graph neural networks (GNNs) [20, 25, 40, 76, 81] with the exception of Park et al. [54] who recently applied graph convolutional methods to compound-pairs to predict ACs.
In spite of the existence of advanced AC-prediction models there are significant gaps left in the current AC-prediction literature. Note that any QSAR model can immediately be repurposed as an AC-prediction model by using it to individually predict the activities of two structurally similar compounds and then thresholding the predicted absolute activity difference. Nevertheless, at the moment there is no study that uses this straightforward technique to investigate the potential of current QSAR models to classify whether a pair of compounds forms an AC or not. Importantly, this also entails that the most salient AC-prediction models [27, 29, 34, 44, 71] have not been compared to a simple QSAR-modelling baseline applied to compound pairs. It is thus an open question to what extent (if at all) these tailored AC-prediction techniques outcompete repurposed QSAR methods in the detection of ACs. This is especially relevant in light of the fact that several published AC-prediction models [27, 34, 44] are evaluated via compound-pair-based data splits which incur a significant overlap between training set and test set at the level of individual molecules; this type of data split should strongly favour standard QSAR models for AC-prediction, yet a comparison to such baseline methods is lacking.
We address these gaps by systematically investigating the abilities of nine frequently used QSAR models to classify pairs of similar compounds as ACs or non-ACs within three pharmacological data sets: dopamine receptor D2, factor Xa, and SARS-CoV-2 main protease. Each QSAR model is constructed by combining a molecular representation method (physicochemical-descriptor vectors (PDVs) [72], extended-connectivity fingerprints (ECFPs) [59], or graph isomorphism networks (GINs) [81]) with a regression technique (random forests (RFs), k-nearest neighbours (kNNs), or multilayer perceptrons (MLPs)). All models are used for two distinct prediction tasks: QSAR-prediction at the level of individual molecules, and AC-classification at the level of compound-pairs. The main contribution of this study is to shed light on the following questions:
* What is the relationship between the ability of a QSAR model to predict the activities of individual compounds, versus its ability to classify whether pairs of similar compounds form ACs?
* When (if at all) are common QSAR models capable of predicting ACs?
* When (if at all) are common QSAR models capable of predicting which of two similar compounds is the more active one?
* Which QSAR model shows the strongest AC-prediction performance, and should thus be used as a baseline against which to compare tailored AC-prediction models?
* Do differentiable GINs outperform classical non-trainable ECFPs and PDVs as molecular representations for QSAR- and/or AC-prediction?
* How could ACs potentially be used to improve QSAR-modelling performance?
## Experimental Methodology
### Molecular Data Sets
We built three binding affinity data sets of small-molecule inhibitors of dopamine receptor D2, factor Xa, and SARS-CoV-2 main protease. Factor Xa is an enzyme in the coagulation cascade and a canonical target for blood-thinning drugs [43]. Dopamine receptor D2 is the main site of action for classic antipsychotic drugs which act as antagonists of the D2 receptor [62]. SARS-CoV-2 main protease is one of the key enzymes in the viral replication cycle of the SARS coronavirus 2, that recently caused the unprecedented COVID-19 pandemic; it is one of the most promising targets for antiviral drugs against this coronavirus [74].
For dopamine receptor D2 and factor Xa, data was extracted from the ChEMBL database [45] in the form of SMILES strings with associated K\({}_{\text{i}}\) [nM] values. For SARS-CoV-2 main protease, data was obtained from the COVID moonshot project [1] in the form of SMILES strings with associated IC\({}_{50}\) [nM] values. SMILES strings were standardised and desalted via the ChEMBL structure pipeline [8]. This step also removed solvents and all isotopic information. Following this, SMILES strings that produced error messages when turned into an RDKit mol object were deleted. Finally, a scan for duplicate molecules was performed: if the activities in a set of duplicate molecules were within the same order of magnitude, then the set was unified via geometric averaging. Otherwise, the measurements were considered unreliable and the corresponding set of duplicate molecules was removed. This procedure reduced the data set for dopamine receptor D2 / factor Xa / SARS-CoV-2 main protease from 8883 / 4116 / 1926 compounds to 6333 / 3605 / 1924 unique compounds, whereby 174 / 21 / 0 sets of duplicate SMILES were removed and the rest was unified.
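A minimal sketch of the duplicate-handling step is given below. It assumes activities on the linear scale (K\(_{\text{i}}\) or IC\({}_{50}\) in nM) and replaces the ChEMBL-structure-pipeline standardisation with plain RDKit canonicalisation for brevity.

```python
# Sketch of duplicate handling: canonicalise SMILES with RDKit, then
# geometrically average duplicate sets whose activities lie within one
# order of magnitude, and drop the rest as unreliable.
from collections import defaultdict
import numpy as np
from rdkit import Chem

def deduplicate(smiles_list, activities):
    groups = defaultdict(list)
    for smi, act in zip(smiles_list, activities):
        mol = Chem.MolFromSmiles(smi)
        if mol is None:  # unparsable SMILES are deleted
            continue
        groups[Chem.MolToSmiles(mol)].append(act)  # canonical SMILES as key
    cleaned = {}
    for smi, acts in groups.items():
        if max(acts) / min(acts) <= 10.0:  # within one order of magnitude
            cleaned[smi] = float(np.exp(np.mean(np.log(acts))))  # geometric mean
        # else: measurements deemed unreliable, whole set removed
    return cleaned

data = deduplicate(["CCO", "OCC", "c1ccccc1"], [100.0, 120.0, 55.0])
print(data)  # {'CCO': 109.54..., 'c1ccccc1': 55.0}
```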
### Activity Cliffs: Definition of Binary Classification Tasks
The exact definition of an AC hinges on two concepts: structural similarity and large activity difference. An elegant technique to measure structural similarity in the context of AC analysis is given by the matched molecular pair (MMP) formalism [31, 38]. An MMP is a pair of compounds that share a common structural core but differ by a small chemical transformation at a specific site. Figure 1 depicts an example of an MMP whose variable parts are formed by a hydrogen atom and a hydroxyl group. To detect MMPs algorithmically, we used the mmpdb Python-package provided by Dalke et al. [17]. We restricted ourselves to MMPs with the following commonly used [27, 29, 71] size constraints: the MMP core was required to contain at least twice as many heavy atoms as either of the two variable parts; each variable part was required to contain no more than 13 heavy atoms; the maximal size difference between both variable parts was set to eight heavy atoms; and bond cutting was restricted to single exocyclic bonds. To guarantee a well-defined mapping from each MMP to a unique structural core, we canonically chose the core that contained the largest number of heavy atoms whenever there was ambiguity. Based on the ratio of the activity values of both MMP compounds, each MMP was assigned to one of three classes: "AC", "non-AC" or "half-AC". In accordance with the literature [5, 27, 29, 52, 77] we assigned an MMP to the "AC"-class if both activity values differed by at least a factor of 100. If both activity values differed by no more than a factor of 10, then the MMP was assigned to the "non-AC"-class. In the residual case the MMP was assigned to the "half-AC"-class. To arrive at a well-separated binary classification task, we labelled all ACs as positives and all non-ACs as negatives. The half-ACs were removed and not considered further in our experiments. It is relevant to know the direction of a potential activity cliff, i.e. which of the compounds in the pair is the more active one. We thus assigned a binary label to each MMP indicating its potency direction (PD). PD-classification is a balanced binary classification task. Table 1 gives an overview of all our curated data sets.
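The three-way labelling rule translates directly into code; the following sketch assumes linear-scale activity values for the two MMP compounds, with the PD label following from a simple comparison of the two values.

```python
# Sketch of the three-way MMP labelling rule, using the ratio of the
# (linear-scale) activity values of the two MMP compounds.
def label_mmp(act_1: float, act_2: float) -> str:
    ratio = max(act_1, act_2) / min(act_1, act_2)
    if ratio >= 100.0:
        return "AC"        # at least two orders of magnitude apart
    if ratio <= 10.0:
        return "non-AC"    # within one order of magnitude
    return "half-AC"       # removed from the classification experiments

print(label_mmp(3.2, 1500.0))  # 'AC' (ratio ~ 469)
print(label_mmp(50.0, 90.0))   # 'non-AC'
```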
### Data Splitting Technique
ACs are molecular pairs rather than single molecules; it is thus not obvious how best to split up a chemical data set into non-overlapping training- and test sets for the fair evaluation of an AC-prediction method. There seems to be no consensus about which data splitting strategy should be canonically used. Several authors [27, 34, 44] have employed a random split at the level of compound pairs. While this technique is conceptually straightforward, it must be expected to incur a significant overlap between training- and test set at the level of individual molecules. For example, randomly splitting up a set of three MMPs \(\{\{s_{1},s_{2}\},\{s_{1},s_{3}\},\{s_{2},s_{3}\}\}\) into a training- and a test set might lead to \(\{s_{1},s_{2}\}\) and \(\{s_{1},s_{3}\}\) getting assigned to the training- and \(\{s_{2},s_{3}\}\) getting assigned to the test set which leads to a full inclusion of the test set in the training set at the level of individual molecules. This molecular overlap is problematic for at least three reasons: Firstly, it likely leads to overly optimistic results for AC-prediction methods since they will have already encountered some of the test compounds during training. Secondly, it does not model the natural situation encountered by medicinal chemists who we assume will not know the activity value of at least one
compound in a test-set pair. Thirdly, the mentioned molecular overlap should lead to strong AC-prediction results for standard QSAR models, but to the best of our knowledge, no such control experiments have been run in the literature.
Horvath et al. [29] and Tamura et al. [71] have made efforts to address the shortcomings of a compound-pair-based random split. They came up with advanced data splitting algorithms designed to mitigate the molecular-overlap problem by either managing distinct types of test sets according to compound membership in the training set or by designing splitting techniques based on the structural cores of MMPs. However, their data splitting schemes exhibit a relatively high degree of complexity which can make their implementation and interpretation difficult.
We propose a novel data splitting method which represents a favourable trade-off between rigour, interpretability and simplicity. Our technique shares some of its concepts with the methods proposed by Horvath et al. [29] and Tamura et al. [71] but might be simpler to implement and interpret. We first split the data into a training- and test set at the level of individual molecules and then use this basic split to distinguish several types of test sets at the level of compound pairs. Let
\[\mathcal{D}=\{s_{1},s_{2},...\}\]
be the given data set of individual molecules. Furthermore, let
\[\mathcal{M}\subseteq\{\{s,\tilde{s}\}\ |\ s\neq\tilde{s}\ \text{and}\ s, \tilde{s}\in\mathcal{D}\}\]
be the set of all MMPs in \(\mathcal{D}\) that have been labelled as either ACs or non-ACs. Each MMP \(\{s,\tilde{s}\}\in\mathcal{M}\) shares a common structural core denoted as \(\text{core}(\{s,\tilde{s}\})\). We use a random split to partition \(\mathcal{D}\) into a training set \(\mathcal{D}_{\text{train}}\) and a test set \(\mathcal{D}_{\text{test}}\) and then define the following MMP-sets:
\[\mathcal{M}_{\text{train}} =\{\{s,\tilde{s}\}\in\mathcal{M}\ |\ s,\tilde{s}\in\mathcal{D}_{\text{train}}\},\] \[\mathcal{M}_{\text{inter}} =\{\{s,\tilde{s}\}\in\mathcal{M}\ |\ s\in\mathcal{D}_{\text{train}},\ \tilde{s}\in\mathcal{D}_{\text{test}}\},\] \[\mathcal{M}_{\text{test}} =\{\{s,\tilde{s}\}\in\mathcal{M}\ |\ s,\tilde{s}\in\mathcal{D}_{\text{test}}\},\] \[\mathcal{M}_{\text{cores}} =\{\{s,\tilde{s}\}\in\mathcal{M}_{\text{test}}\ |\ \text{core}(\{s, \tilde{s}\})\notin\mathcal{C}_{\text{train}}\}.\]
Here,
\[\mathcal{C}_{\text{train}}=\{\text{core}(\{s,\tilde{s}\})\ |\ \{s,\tilde{s}\}\in \mathcal{M}_{\text{train}}\cup\mathcal{M}_{\text{inter}}\},\]
which describes the set of MMP-cores that appear in \(\mathcal{D}_{\text{train}}\).
Note that \(\mathcal{M}_{\text{train}}\cup\mathcal{M}_{\text{inter}}\cup\mathcal{M}_{\text {test}}=\mathcal{M}\). The pair \((\mathcal{D}_{\text{train}},\mathcal{M}_{\text{train}})\) describes the training space at the level of individual molecules and MMPs, and can be used to train a QSAR- or AC-prediction method. A trained method can then classify MMPs in \(\mathcal{M}_{\text{test}}\), \(\mathcal{M}_{\text{inter}}\) and \(\mathcal{M}_{\text{cores}}\). \(\mathcal{M}_{\text{test}}\) models an AC-prediction setting where the activities of both MMP-compounds are unknown. \(\mathcal{M}_{\text{cores}}\) represents the subset of MMPs in \(\mathcal{M}_{\text{test}}\) whose structural cores do not appear in \(\mathcal{M}_{\text{train}}\cup\mathcal{M}_{\text{inter}}\); \(\mathcal{M}_{\text{cores}}\) thus models the difficult task of predicting ACs in a structurally novel area of chemical space. Finally, \(\mathcal{M}_{\text{inter}}\) represents an AC-prediction scenario where the activity of one MMP-compound is given _a priori_; this can be interpreted as a compound-optimisation task where one strives to predict small AC-inducing modifications of a query compound with known activity. An illustration of our data splitting strategy is given in Figure 2.
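A compact sketch of how the four MMP-sets can be derived from a compound-level split is given below. The data structures (a dictionary mapping each MMP, stored as a frozenset of two compounds, to its structural core) are an illustrative choice, not the authors' implementation.

```python
# Sketch of the MMP-set partition: given a compound-level split
# (d_train, d_test) and a dict `mmps` mapping each MMP (a frozenset
# of two compounds) to its structural core, derive the four MMP-sets.
def split_mmp_sets(d_train: set, d_test: set, mmps: dict):
    m_train, m_inter, m_test = [], [], []
    for pair in mmps:
        s, s2 = tuple(pair)
        if s in d_train and s2 in d_train:
            m_train.append(pair)        # both compounds in training set
        elif s in d_test and s2 in d_test:
            m_test.append(pair)         # both compounds in test set
        else:
            m_inter.append(pair)        # one compound in each
    # Cores appearing in the training space (M_train U M_inter):
    train_cores = {mmps[p] for p in m_train + m_inter}
    m_cores = [p for p in m_test if mmps[p] not in train_cores]
    return m_train, m_inter, m_test, m_cores

mmps = {frozenset({"s1", "s2"}): "coreA",
        frozenset({"s1", "s3"}): "coreA",
        frozenset({"s4", "s5"}): "coreB"}
print(split_mmp_sets({"s1", "s2"}, {"s3", "s4", "s5"}, mmps))
```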
We implemented our data splitting strategy within a \(k\)-fold cross validation scheme repeated with \(m\) random seeds. This generated data splits of the form
\[\mathcal{S}^{ij}=(\mathcal{D}^{ij}_{\text{train}},\mathcal{D}^{ij}_{\text{ test}},\mathcal{M}^{ij}_{\text{train}},\mathcal{M}^{ij}_{\text{test}}, \mathcal{M}^{ij}_{\text{inter}},\mathcal{M}^{ij}_{\text{cores}})\]
for \(i\in\{1,...,m\}\) and \(j\in\{1,...,k\}\) where \((\mathcal{D}^{ij}_{\text{train}},\mathcal{D}^{ij}_{\text{test}})\) represents the \(j\)-th split of \(\mathcal{D}\) in the cross validation
round with random seed \(i\).

| **Data Set** | **Dopamine Receptor D2** | **Factor Xa** | **SARS-CoV-2 Main Protease** |
| --- | --- | --- | --- |
| **Compounds** | 6333 | 3605 | 1924 |
| **MMPs** | 35484 | 21292 | 12594 |
| **ACs** | 461 | 1896 | 521 |
| **Half-ACs** | 3804 | 4693 | 1762 |
| **Non-ACs** | 31219 | 14703 | 10311 |
| **ACs : Non-ACs** | \(\approx 1:68\) | \(\approx 1:8\) | \(\approx 1:20\) |

Table 1: Sizes of our curated data sets and their respective numbers of matched molecular pairs (MMPs), activity cliffs (ACs), half-activity-cliffs (half-ACs) and non-activity-cliffs (non-ACs).

The overall QSAR- and AC-prediction performance of each model was recorded as the average over the \(mk\) training and test runs based on all data splits \(\mathcal{S}^{1,1},\ldots,\mathcal{S}^{m,k}\). We chose the configuration \((k,m)=(2,3)\), which gave a good trade-off between computational costs and accuracy and reasonable numbers of MMPs in the compound-pair sets. In particular, random cross-validation with \(k=2\) gave expected relative sizes of:
\[|\mathcal{M}_{\text{train}}|:|\mathcal{M}_{\text{inter}}|:|\mathcal{M}_{\text {test}}|=1:2:1.\]
On average, 12.7 %, 11.91 %, and 6.84 % of MMPs in \(\mathcal{M}_{\text{test}}\) were also in \(\mathcal{M}_{\text{cores}}\) for dopamine receptor D2, factor Xa, and SARS-CoV-2 main protease, respectively.
### Prediction Strategies and Performance Measures
In a data split of the form
\[\mathcal{S}=(\mathcal{D}_{\text{train}},\mathcal{D}_{\text{test}},\mathcal{M} _{\text{train}},\mathcal{M}_{\text{test}},\mathcal{M}_{\text{inter}},\mathcal{ M}_{\text{cores}})\]
each individual compound, \(s\in\mathcal{D}_{\text{train}}\cup\mathcal{D}_{\text{test}}=\mathcal{D}\), can be associated with an activity label \(a(s)\in\mathbb{R}\), defined as the negative decadic logarithm of the experimentally measured activity of \(s\). We stuck with the canonical units used in the ChEMBL database and the COVID moonshot project ([nM] for \(\text{K}_{i}\) and [nM] for \(\text{IC}_{50}\)); each activity label \(a(s)\) thus represents a standard \(\text{pK}_{i}\)- or \(\text{pIC}_{50}\) value (with an additive shift towards 0 caused by the units, which might slightly benefit prediction techniques initialised around the origin). We are interested in QSAR-prediction functions,
\[f:\mathcal{D}\rightarrow\mathbb{R},\]
that can map a chemical structure \(s\in\mathcal{D}\) to an estimate of its binding affinity \(a(s)\). The mapping \(f\) is found via an algorithmic training process on the labelled data set
\[\{(s,a(s))\ |\ s\in\mathcal{D}_{\text{train}}\}\]
and can then either be used to predict the activity labels of compounds in \(\mathcal{D}_{\text{test}}\), or it can be repurposed to classify whether an MMP forms an activity cliff (AC-classification) and what the potency direction of an MMP is (PD-classification). If \(\{s,\tilde{s}\}\in\mathcal{M}_{\text{inter}}\), then one can assume that the activity label of one of the compounds, say \(a(s)\), is known; \(f\) is then used to clas
sify \(\{s,\tilde{s}\}\) via:

\[\{s,\tilde{s}\}\mapsto\begin{cases}\text{Non-AC}&\text{if }|a(s)-f(\tilde{s})|\leq d_{\text{crit}},\\ \text{AC}&\text{if }|a(s)-f(\tilde{s})|>d_{\text{crit}}.\end{cases}\]

Here \(d_{\text{crit}}\in\mathbb{R}_{>0}\) is a critical threshold above which an MMP is classified as an AC. Throughout this work we use \(d_{\text{crit}}=1.5\) (in \(\text{pK}_{\text{i}}\)- or \(\text{pIC}_{50}\) units) since this value represents the middle point between the intervals \([0,1]\) and \([2,\infty)\) which correspond to absolute activity-label differences associated with non-ACs and ACs respectively.

If \(\{s,\tilde{s}\}\in\mathcal{M}_{\text{test}}\cup\mathcal{M}_{\text{cores}}\) then the activities of both compounds are unknown and we classify \(\{s,\tilde{s}\}\) via:

\[\{s,\tilde{s}\}\mapsto\begin{cases}\text{Non-AC}&\text{if }|f(s)-f(\tilde{s})|\leq d_{\text{crit}},\\ \text{AC}&\text{if }|f(s)-f(\tilde{s})|>d_{\text{crit}}.\end{cases}\]

PD-classification for MMPs is performed in a straightforward manner: the activity labels of both MMP-compounds are predicted via \(f\) and then compared to classify which compound is the more active one.

Figure 2: Illustration of our data splitting strategy. We distinguish between three MMP-sets, \(\mathcal{M}_{\text{train}},\mathcal{M}_{\text{inter}}\) and \(\mathcal{M}_{\text{test}}\), depending on whether both MMP-compounds are in \(\mathcal{D}_{\text{train}}\), one MMP-compound is in \(\mathcal{D}_{\text{train}}\) and the other one is in \(\mathcal{D}_{\text{test}}\), or both MMP-compounds are in \(\mathcal{D}_{\text{test}}\). We additionally consider a fourth MMP-set, \(\mathcal{M}_{\text{cores}}\), consisting of the MMPs in \(\mathcal{M}_{\text{test}}\) whose structural cores do not appear in \(\mathcal{M}_{\text{train}}\cup\mathcal{M}_{\text{inter}}\).
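The two thresholding rules and the PD rule translate directly into code; in the following sketch, the predicted values are assumed to come from any trained QSAR model \(f\) returning activity labels in log units.

```python
# Sketch of the thresholding rules above; all inputs are activity
# labels in log units (pKi or pIC50).
D_CRIT = 1.5

def classify_inter(a_s: float, f_s2: float) -> str:
    # Activity of one compound known (M_inter scenario).
    return "AC" if abs(a_s - f_s2) > D_CRIT else "non-AC"

def classify_test(f_s: float, f_s2: float) -> str:
    # Activities of both compounds unknown (M_test / M_cores scenario).
    return "AC" if abs(f_s - f_s2) > D_CRIT else "non-AC"

def classify_pd(f_s: float, f_s2: float) -> str:
    # Potency direction: which compound is predicted to be more active.
    return "first" if f_s > f_s2 else "second"
```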
The performance of \(f\) for standard QSAR prediction in \(\mathcal{D}_{\text{test}}\) is measured via the mean absolute error (MAE). For the balanced PD-classification problem we rely on accuracy as a suitable performance measure. For the highly imbalanced task of AC-classification, however, we use the Matthews correlation coefficient (MCC), as well as sensitivity and precision. For the relatively small SARS-CoV-2 main protease data set we sometimes encountered the edge case where there were no positive predictions; we then set MCC = 0 and ignored ill-defined precision measurements when averaging the performance metrics to obtain the final results.
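A sketch of the AC-classification metrics, including the stated edge-case handling, could look as follows using scikit-learn; the example labels are made up.

```python
# Sketch of the AC-classification metrics with the edge-case rule:
# MCC is set to 0 and precision is skipped when a model makes no
# positive predictions.
import numpy as np
from sklearn.metrics import matthews_corrcoef, precision_score, recall_score

def ac_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    mcc = matthews_corrcoef(y_true, y_pred) if y_pred.any() else 0.0
    sens = recall_score(y_true, y_pred)                       # AC-sensitivity
    prec = precision_score(y_true, y_pred) if y_pred.any() else None
    return {"MCC": mcc, "sensitivity": sens, "precision": prec}

print(ac_metrics([1, 0, 0, 1, 0], [1, 0, 1, 0, 0]))
# {'MCC': 0.1667..., 'sensitivity': 0.5, 'precision': 0.5}
```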
### Molecular Representation- and Regression Techniques
We constructed nine QSAR models via a robust combinatorial methodology that systematically combines three molecular representation methods with three regression techniques. This setup allows, for example, comparison of the performance of molecular representation methods across regression techniques, data sets and prediction tasks.
For molecular representation, we used extended-connectivity fingerprints [59] (ECFPs), physicochemical molecular descriptor vectors [72] (PDVs), and graph isomorphism networks (GINs) [81]. Both ECFPs and PDVs were computed via RDKit [42]. The ECFPs were chosen to use a radius of two, a length of 2048 bits, and active chirality flags. The PDVs had a dimensionality of 200 and were constructed using the general list of descriptors from the work of Fabian et al. [21]. This list encompasses properties related to druglikeness, logP, molecular refractivity, electro-topological state, molecular graph-structure, fragment profile, charge, and topological surface properties. The GIN was implemented using PyTorch Geometric [23] and consisted of a variable number of graph convolutional layers, each with two internal hidden layers with ReLU activations and batch normalisation [33]. We further chose the maxpool operator which computes the component-wise maximum over all atom feature vectors in the final graph layer to obtain a graph-level representation.
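For illustration, the ECFP settings described above (radius 2, 2048 bits, active chirality flags) correspond to the following RDKit call; the helper function and example molecule are our own.

```python
# Sketch of ECFP featurisation with RDKit: radius 2, 2048 bits,
# chirality flags active.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def ecfp(smiles: str, radius: int = 2, n_bits: int = 2048) -> np.ndarray:
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(
        mol, radius, nBits=n_bits, useChirality=True
    )
    arr = np.zeros((n_bits,), dtype=np.int32)
    DataStructs.ConvertToNumpyArray(fp, arr)  # bit vector -> numpy array
    return arr

x = ecfp("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, as an example input
print(x.shape, int(x.sum()))        # (2048,) and the number of set bits
```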
Each molecular representation was used as an input featurisation for three regression techniques: random forests (RFs), k-nearest neighbours (kNNs) and multi-layer perceptrons (MLPs). The RF- and kNN-models were implemented via scikit-learn [56] and the MLP-models via PyTorch [55]. The MLPs used ReLU activations and batch normalisation at each hidden layer.
The GIN was combined with the regression techniques as follows: For MLP regression, the GIN was trained with the MLP as a projection head after the pooling step in the usual end-to-end manner. For RF- or kNN-regression, the GIN was first trained with a single linear layer added after the global pooling step that directly mapped the graph-level representation to an activity prediction. After this training phase the weights of the GIN were frozen and it was used as a static feature extractor. The RF- or kNN-regressor was then trained on the features extracted by the frozen GIN. Figure 3 illustrates our combinatorial experimental methodology.
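A minimal PyTorch Geometric sketch of this frozen-GIN feature-extraction setup (layer sizes and helper names are illustrative assumptions, not the exact configuration used):

```python
import torch
from torch import nn
from torch_geometric.nn import GINConv, global_max_pool
from sklearn.ensemble import RandomForestRegressor

class GINEncoder(nn.Module):
    """GIN whose graph convolutional layers each wrap a two-layer MLP with
    ReLU activations and batch normalisation; graph-level readout via a
    component-wise max over the final atom feature vectors."""
    def __init__(self, in_dim: int, hidden_dim: int, n_layers: int):
        super().__init__()
        self.convs = nn.ModuleList()
        dim = in_dim
        for _ in range(n_layers):
            mlp = nn.Sequential(
                nn.Linear(dim, hidden_dim), nn.BatchNorm1d(hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim), nn.BatchNorm1d(hidden_dim), nn.ReLU(),
            )
            self.convs.append(GINConv(mlp))
            dim = hidden_dim

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = conv(x, edge_index)
        return global_max_pool(x, batch)  # graph-level representation

@torch.no_grad()
def extract_features(gin: GINEncoder, loader):
    """After end-to-end training, use the GIN with frozen weights as a
    static feature extractor for a classical regressor."""
    gin.eval()
    return torch.cat([gin(b.x, b.edge_index, b.batch) for b in loader]).numpy()

# e.g.: rf = RandomForestRegressor().fit(extract_features(gin, loader), y_train)
```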
### Model Training and Hyperparameter Optimisation
All models were trained using full inner hyperparameter-optimisation loops. Hyperparameters of RFs and kNNs were optimised in scikit-learn [56] by uniformly random sampling of hyperparameters from a predefined grid. The hyperparameters of MLPs and GINs were sampled from a predefined grid via the tree-structured Parzen estimator algorithm implemented in Optuna [2]. Deep learning models were trained for 500 epochs on a single NVIDIA GeForce RTX 3060 GPU via the mean squared error loss function using AdamW optimisation [46]. Weight decay, learning rate decay and dropout [65] were employed at all hidden layers for regularisation. Batch size, learning rate, learning rate decay rate, weight decay rate, and dropout rate were treated as hyperparameters and subsequently optimised. Note that the training length (i.e. the number of gradient updates) was implicitly optimised by tuning the batch size for the fixed number of 500 training
epochs. Further implementation details can be found in our public code repository1.
Footnote 1: [https://github.com/MarkusFerdinandDablander/QSAR-activity-cliff-experiments](https://github.com/MarkusFerdinandDablander/QSAR-activity-cliff-experiments)
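A sketch of such an inner optimisation loop using Optuna's tree-structured Parzen estimator (the grid values and the training/validation helpers are illustrative assumptions, not the exact settings used):

```python
import optuna

def objective(trial):
    # hyperparameters sampled from a predefined grid, as described above;
    # the concrete grid values here are illustrative assumptions
    params = {
        "batch_size": trial.suggest_categorical("batch_size", [32, 64, 128]),
        "lr": trial.suggest_categorical("lr", [1e-4, 1e-3, 1e-2]),
        "lr_decay": trial.suggest_categorical("lr_decay", [0.95, 0.98, 1.0]),
        "weight_decay": trial.suggest_categorical("weight_decay", [0.0, 1e-4, 1e-2]),
        "dropout": trial.suggest_categorical("dropout", [0.0, 0.25, 0.5]),
    }
    model = train_mlp(params, epochs=500)  # hypothetical training helper
    return validation_mae(model)           # minimise validation MAE

study = optuna.create_study(direction="minimize",
                            sampler=optuna.samplers.TPESampler())
study.optimize(objective, n_trials=50)
```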
## Results and Discussion
The QSAR-prediction-, AC-classification- and PD-classification results for all three data sets are depicted in Figures 4 to 9.
### QSAR-Prediction Performance
When considering the results depicted in Figures 4 to 9 with respect to QSAR-prediction performance, one can see that ECFPs tend to lead to better performance (i.e. a lower QSAR-MAE) compared to GINs, which in turn tend to lead to better performance compared to PDVs. In particular, the combination MLP-ECFP consistently produced the lowest QSAR-MAE across all three targets. These observations reinforce a growing corpus of literature that suggests that trainable GNNs have not yet reached a level of technical maturity by which they consistently and definitively outperform the much simpler non-differentiable ECFPs at important molecular property prediction tasks [13, 37, 48, 50, 60, 66, 80].
### AC-Classification Performance
The AC-MCC plots in Figures 4 to 6 reveal surprisingly strong overall AC-classification results on \(\mathcal{M}_{\text{inter}}\). This type of MMP-set models a compound-optimisation scenario where a researcher strives to identify small structural modifications with a large impact on the activity of query compounds with known activities. For this task, a significant portion of our QSAR models exhibit an AC-MCC value greater than 0.5 across targets, which appears impressive considering the simplicity of the approach. Exchanging \(\mathcal{M}_{\text{inter}}\) with either \(\mathcal{M}_{\text{test}}\) or \(\mathcal{M}_{\text{cores}}\) leads to a substantial drop in the AC-MCC to approximately 0.3, which appears to be mediated by a large drop in AC-sensitivity.
In most cases, GINs perform better than the other molecular representation methods with respect to the AC-MCC. Notably, kNN-regressors consistently perform best for AC-classification when combined with GIN-features; this supports the idea that GINs might have a heightened ability to resolve ACs by learning an embedding of chemical space in which the distance between two compounds is reflective of activity difference rather than structural difference. The combinations GIN-MLP, GIN-RF and ECFP-MLP exhibit particularly high AC-MCC values relative to the other methods. We recommend using at least one of these three models as a baseline against which to compare tailored AC-prediction models; the practical utility of
Figure 3: Schematic showing the combinatorial experimental methodology used for the study. Each molecular representation method is systematically combined with each regression technique, giving a total of nine QSAR models. Each QSAR model is trained and evaluated for QSAR-prediction, AC-classification and PD-classification within a 2-fold cross validation scheme repeated with 3 random seeds. For each of the \(2*3=6\) trials, an extensive inner hyperparameter-optimisation loop on the training set is performed for each QSAR model.
any AC-prediction technique that cannot outperform these three common QSAR methods is questionable.
Across all three targets, AC-sensitivity is moderately high on \(\mathcal{M}_{\text{inter}}\) but universally low on \(\mathcal{M}_{\text{test}}\) and \(\mathcal{M}_{\text{cores}}\). This is consistent with the hypothesis that ACs form one of the major sources of prediction error for QSAR models. The weak AC-sensitivity on \(\mathcal{M}_{\text{test}}\) and \(\mathcal{M}_{\text{cores}}\) indicates that modern QSAR methods are largely blind to ACs in novel areas of chemical space and thus lack essential chemical knowledge. GINs clearly outperform the other two more classical molecular representations across regression techniques with respect to AC-sensitivity. In particular, the GIN-MLP combination leads to the highest AC-sensitivity on \(\mathcal{M}_{\text{test}}\) and \(\mathcal{M}_{\text{cores}}\).
Figure 4: QSAR-prediction- and AC-classification results for **dopamine receptor D2**. For each plot, the \(x\)-axis corresponds to a combination of MMP-set and AC-classification performance metric and the \(y\)-axis shows the QSAR-prediction performance on the molecular test set \(\mathcal{D}_{\text{test}}\). The total length of each error bar equals twice the standard deviation of the performance metric measured over all \(mk=3*2=6\) hyperparameter-optimised models. For each plot, the lower right corner corresponds to strong performance at both prediction tasks.
GIN-MLP exhibits the highest AC-sensitivity in all examined cases and thus discovers the most ACs. The highly parametric nature of GINs, which makes them prone to overfitting, could at the same time enable them to model the jagged regions of the SAR-landscape that contain ACs better than classical task-agnostic representations.
There is a wide gap between distinct prediction techniques with respect to AC-precision: some models achieve a considerable level of AC-precision such that over 50% of positively predicted MMPs in \(\mathcal{M}_{\text{test}}\) and \(\mathcal{M}_{\text{cores}}\) are indeed actual ACs. Other QSAR models, however, seem to fail almost entirely with respect to this metric on \(\mathcal{M}_{\text{test}}\) and \(\mathcal{M}_{\text{cores}}\) and only deliver modest performance on \(\mathcal{M}_{\text{inter}}\). RFs tend to exhibit the strongest AC-precision and the weakest AC-sensitivity. This might be a result of their ensemble nature,
Figure 5: QSAR-prediction- and AC-classification results for **factor Xa**. For each plot, the \(x\)-axis corresponds to a combination of MMP-set and AC-classification performance metric and the \(y\)-axis shows the QSAR-prediction performance on the molecular test set \(\mathcal{D}_{\text{test}}\). The total length of each error bar equals twice the standard deviation of the performance metric measured over all \(mk=3*2=6\) hyperparameter-optimised models. For each plot, the lower right corner corresponds to strong performance at both prediction tasks.
which should intuitively lead to conservative but trustworthy predictions of extreme effects such as ACs.
### PD-Classification Performance
The ability of the evaluated QSAR models to identify the more active compound in an MMP is universally weak, with PD-accuracies clustering around 0.7 on \(\mathcal{M}_{\mathrm{inter}}\) and around 0.6 on \(\mathcal{M}_{\mathrm{test}}\) and \(\mathcal{M}_{\mathrm{cores}}\), as can be seen in the top rows of Figures 7 to 9. Predicting the potency direction for two compounds with similar structures, and thus usually similar activity levels, must be considered a challenging task. The combination ECFP-MLP reaches the strongest PD-accuracy
Figure 6: QSAR-prediction- and AC-classification results for **SARS-CoV-2 main protease**. For each plot, the \(x\)-axis corresponds to a combination of MMP-set and AC-classification performance metric and the \(y\)-axis shows the QSAR-prediction performance on the molecular test set \(\mathcal{D}_{\mathrm{test}}\). The total length of each error bar equals twice the standard deviation of the performance metric measured over all \(mk=3*2=6\) hyperparameter-optimised models. The precision of the AC-classification task is lacking for the ECFP + kNN technique on \(\mathcal{M}_{\mathrm{test}}\) and \(\mathcal{M}_{\mathrm{cores}}\) since this method produced only negative AC-predictions for all trials on this data set. For each plot, the lower right corner corresponds to strong performance at both prediction tasks.
in the majority of cases and we recommend starting with this model as a baseline for more advanced PD-prediction methods.
One can argue that the activity order of two similar compounds is of little interest if the true activity difference is small, as is often the case. We therefore also restricted PD-classification to predicted ACs. The three plots in the bottom rows of Figures 7 to 9 depict the PD-accuracy of each QSAR model on the subset of MMPs that were also predicted to be ACs by the same model. In this practically more relevant scenario PD-prediction accuracy tends to exceed 0.9 on \(\mathcal{M}_{\text{inter}}\) and 0.8 on \(\mathcal{M}_{\text{test}}\) and \(\mathcal{M}_{\text{cores}}\). The QSAR models investigated here are thus able to identify the correct activity order of MMPs if they also predict them to be ACs. The relatively rare instances in which the PD of a predicted AC is misclassified, however, reflect severe QSAR-prediction errors.
### Linear Relationship between QSAR-MAE and AC-MCC
Our experiments reveal a consistent linear relationship between the QSAR-MAE and the AC-MCC as can be seen in the left columns of Figures 4 to 6. A potential mechanism driving this effect could be that as the overall QSAR-MAE of a model improves, its accuracy at predicting activity differences between similar molecules might be expected to improve as well; previously misclassified MMPs whose predicted absolute activity differences were already close to the critical value \(d_{\text{crit}}=1.5\) might then gradually move to the correct side of the decision boundary and increase the AC-MCC. The results suggest that for real-world QSAR models the AC-MCC and the QSAR-MAE are strongly predictive of each other; while this observation only rests on nine models, it is highly consistent across MMP-sets and pharmacological targets.
Figure 7: QSAR-prediction- and PD-classification results for **dopamine receptor D2**. Each column corresponds to an upper plot and a lower plot for one of the MMP-sets \(\mathcal{M}_{\text{inter}}\), \(\mathcal{M}_{\text{test}}\) or \(\mathcal{M}_{\text{cores}}\). The \(x\)-axis of each upper plot indicates the PD-classification accuracy on the full MMP-set; the \(x\)-axis of each lower plot indicates the PD-classification accuracy on a restricted MMP-set only consisting of MMPs predicted to be ACs by the respective method. The \(y\)-axis of each plot shows the QSAR-prediction performance on the molecular test set \(\mathcal{D}_{\text{test}}\). The total length of each error bar equals twice the standard deviation of the performance metrics measured over all \(mk=3*2=6\) hyperparameter-optimised models. For each plot, the lower right corner corresponds to strong performance at both prediction tasks.
### Future Research: Exploring Twin-Network Training Schemes
ACs are rich in pharmacological information; at the same time, the experiments have shown that QSAR models exhibit low AC-sensitivity and thus frequently fail to predict ACs. In spite of this, to the best of our knowledge, no method has so far been described that tackles this problem by attempting to increase the AC-sensitivity of QSAR models. We propose twin-network training of deep-learning models as a potential strategy to increase AC-sensitivity. Comparatively little work has been done to investigate twin neural network architectures (also referred to as _Siamese_ networks [9, 12, 41, 70]) in computational drug discovery [3, 6, 11, 18, 22, 24, 36, 53, 58, 61, 73, 82]. However, twin networks provide a natural way to tackle chemical prediction problems on compound pairs such as AC-classification.
Instead of training a deep network, \(f\), on an individual compound, \(s\), with activity label, \(a(s)\), via a classical squared error loss, \((a(s)-f(s))^{2}\), we suggest training \(f\) on compound _pairs_, \(\{s,\tilde{s}\}\), using a pair-based loss:
\[w_{\{s,\tilde{s}\}}[(a(s)-f(s))^{2}+(a(\tilde{s})-f(\tilde{s}))^{ 2}\] \[+w_{\text{diff}}((a(s)-a(\tilde{s}))-(f(s)-f(\tilde{s})))^{2}].\]
The quantity \(w_{\{s,\tilde{s}\}}\) is used to specify the weight put on the compound pair \(\{s,\tilde{s}\}\) during training; \(w_{\text{diff}}\) determines the relative importance of predicting the individual activities of \(s\) and \(\tilde{s}\) versus predicting the activity difference associated with \(\{s,\tilde{s}\}\). Twin-network training could be conducted in two phases: first on general compound pairs in \(\mathcal{D}_{\text{train}}\times\mathcal{D}_{\text{train}}\) and then on MMPs in \(\mathcal{M}_{\text{train}}\). In the second phase, the weight function \(w_{\{s,\tilde{s}\}}\) could be used to assign training weights to MMPs proportional to their associated activity differences.
Figure 8: QSAR-prediction- and PD-classification results for **factor Xa**. Each column corresponds to an upper plot and a lower plot for one of the MMP-sets \(\mathcal{M}_{\text{inter}}\), \(\mathcal{M}_{\text{test}}\) or \(\mathcal{M}_{\text{cores}}\). The \(x\)-axis of each upper plot indicates the PD-classification accuracy on the full MMP-set; the \(x\)-axis of each lower plot indicates the PD-classification accuracy on a restricted MMP-set only consisting of MMPs predicted to be ACs by the respective method. The \(y\)-axis of each plot shows the QSAR-prediction performance on the molecular test set \(\mathcal{D}_{\text{test}}\). The total length of each error bar equals twice the standard deviation of the performance metrics measured over all \(mk=3*2=6\) hyperparameter-optimised models. For each plot, the lower right corner corresponds to strong performance at both prediction tasks.
MMPs that represent larger activity differences might encode structural transformations that are pharmacologically more relevant and thus should receive more attention during training. This weighting procedure could lead to increased AC-sensitivity and the extraction of more chemical knowledge. Our pair-based training strategy is depicted in Figure 10 and is based on a twin neural network model for AC-prediction with discrete outputs that we explored in a previous research study [16]. We intend to evaluate the proposed twin-network training scheme in a future study.
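A direct PyTorch rendering of the pair-based loss above might look as follows (the tensor-based interface and default weights are our own assumptions):

```python
import torch

def pair_loss(a_s, a_t, f_s, f_t, w_pair=1.0, w_diff=1.0):
    """Pair-based twin-network loss.

    a_s, a_t: true activity labels of s and s~; f_s, f_t: predictions of the
    twin network; w_pair plays the role of w_{s,s~} and w_diff weights the
    activity-difference term against the two individual squared errors."""
    individual = (a_s - f_s) ** 2 + (a_t - f_t) ** 2
    difference = ((a_s - a_t) - (f_s - f_t)) ** 2
    return (w_pair * (individual + w_diff * difference)).mean()
```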
## Conclusions
To the best of our knowledge this is the first study to investigate the AC-prediction capabilities of QSAR models. It is also the first work to explore the quantitative relationship between QSAR-prediction (at the level of individual molecules) and AC-prediction (at the level of compound-pairs). As part of our methodology we have additionally introduced a simple, interpretable, and rigorous data-splitting technique for pair-based prediction tasks.
When the activities of both MMP-compounds are unknown (i.e. absent from the training set) then common QSAR models exhibit low AC-sensitivity which limits their utility for AC-prediction. This strongly supports the hypothesis that QSAR methods do indeed regularly fail to predict ACs which might thus form a major source of prediction errors in QSAR modelling [14, 26, 47, 63]. However, if the activity of one MMP-compound is known (i.e. present in the training set) then AC-sensitivity increases substantially; for query compounds with known activities, QSAR methods can therefore be used as simple AC-prediction-, compound-optimisation- and SAR
Figure 9: QSAR-prediction- and PD-classification results for **SARS-CoV-2 main protease**. Each column corresponds to an upper plot and a lower plot for one of the MMP-sets \(\mathcal{M}_{\text{inter}}\), \(\mathcal{M}_{\text{test}}\) or \(\mathcal{M}_{\text{cores}}\). The \(x\)-axis of each upper plot indicates the PD-classification accuracy on the full MMP-set; the \(x\)-axis of each lower plot indicates the PD-classification accuracy on a restricted MMP-set only consisting of MMPs predicted to be ACs by the respective method. The \(y\)-axis of each plot shows the QSAR-prediction performance on the molecular test set \(\mathcal{D}_{\text{test}}\). The total length of each error bar equals twice the standard deviation of the performance metrics measured over all \(mk=3*2=6\) hyperparameter-optimised models. The accuracy of the PD-classification task for predicted ACs is lacking for the ECFP + kNN technique on \(\mathcal{M}_{\text{test}}\) and \(\mathcal{M}_{\text{cores}}\) since this method produced only negative AC-predictions for all trials on this data set. For each plot, the lower right corner corresponds to strong performance at both prediction tasks.
knowledge-acquisition tools. Furthermore, based on the observed potency-direction (PD) classification results, we can expect the estimated activity direction of predicted ACs to have a high degree of accuracy.
With respect to molecular representation, we have found robust evidence that non-trainable task-agnostic ECFPs still outcompete differentiable GINs at general QSAR-prediction. This adds to a growing awareness that standard message-passing GNNs might need to be improved further to definitively beat classical molecular featurisations such as ECFPs [13, 37, 48, 50, 60, 66, 80]. One potential angle to achieve this could be self-supervised GNN-pretraining, which has recently shown promising results in the molecular domain [30, 78]. However, while GINs appear to be inferior to ECFPs for QSAR-prediction, they tend to be advantageous for AC-classification; their highly parametric nature might simultaneously lead to increased overfitting but to a better modelling of the more jagged regions of the SAR-landscape. We thus recommend using GINs as an AC-classification baseline since such an agreed-upon baseline is currently lacking.
Finally, the low AC-sensitivity of QSAR models when the activities of both MMP-compounds are unknown suggests that such methods are still lacking essential SAR knowledge; on the flip side, it might be possible to boost QSAR-modelling performance and increase the amount of extracted SAR knowledge by developing techniques to increase AC-sensitivity. To this end, we propose an AC-sensitive twin-network [9, 12, 41, 70] training scheme for deep-learning models that we intend to explore in the future.
## Funding
This research was supported by the University of Oxford's UK EPSRC Centre for Doctoral Training in Industrially Focused Mathematical Modelling (EP/L015803/1) and by the not-for-profit organisation and educational charity Lhasa Limited ([https://www.lhasalimited.org/](https://www.lhasalimited.org/)).
## Abbreviations
* AC = Activity Cliff
* ECFP = Extended-Connectivity Fingerprint
* GIN = Graph Isomorphism Network
* GNN = Graph Neural Network
* kNN = k-Nearest Neighbour
* MAE = Mean Absolute Error
* MCC = Matthews Correlation Coefficient
* MLP = Multilayer Perceptron
* MMP = Matched Molecular Pair
* PD = Potency Direction
* PDV = Physicochemical-Descriptor Vector
* QSAR = Quantitative Structure-Activity Relationship
* RF = Random Forest
* SAR = Structure-Activity Relationship
## Availability of data and materials
All used data sets, the code to reproduce and visualise the experimental results, and the exact numerical results generated by the original experiments are available in our public code repository [https://github.com/MarkusFerdinandDablander/QSAR-activity-cliff-experiments](https://github.com/MarkusFerdinandDablander/QSAR-activity-cliff-experiments).
## Competing interests
The authors declare that they have no competing interests.
## Authors' contributions
The computational study was designed, implemented, conducted and interpreted by the first author M.D. The research was supervised by R.L., G.M.M., and T.H., who gave valuable scientific advice during weekly meetings. The computer code was written by M.D. The paper manuscript was
Figure 10: Twin-network training strategy for deep-learning-based QSAR models that might increase AC-sensitivity. Twin-network training could be conducted on general compound pairs and on MMPs, with larger weights given to MMPs associated with larger activity differences.
written by M.D. Feedback was provided by R.L., G.M.M. and T.H. during the writing process. The novel data splitting technique for MMP-data, the QSAR-modelling-based activity cliff prediction strategies and the proposed twin-network training scheme were developed by M.D. All scientific figures were designed by M.D., with input from G.M.M., R.L. and T.H. All chemical data sets were gathered and cleaned by M.D. All authors read and approved the final manuscript.
|
2309.17200 | Secure-by-design smart contract based on dataflow implementations | This article conducts an extensive examination of the persisting challenges
related to smart contract attacks within blockchain networks, with a particular
focus on the reentrancy attack. It emphasizes the inherent vulnerabilities
embedded in the programming languages commonly employed for smart contract
development, particularly within Ethereum Virtual Machine (EVM)-based
blockchains. While the concrete example used primarily employs the Solidity
programming language, the insights garnered from this study are readily
generalizable to a wide array of blockchain architectures. Significantly, this
article extends beyond the mere identification of vulnerabilities and ventures
into the realm of proactive security measures. It explores the adaptation and
adoption of dataflow programming paradigms, employing Domain-Specific Languages
(DSLs) to enforce security by design in the context of smart contract
development. This forward-looking approach aims to bolster the foundational
principles of blockchain security, offering a promising research direction for
mitigating the risks associated with smart contract vulnerabilities. The
objective of this article is to cater to a diverse audience, ranging from
individuals with limited computer science and programming expertise to seasoned
experts in the field. It provides a comprehensive and accessible resource for
fostering a deeper understanding of the intricate dynamics between blockchain
technology and the imperative need for secure smart contract development
practices. | Simone Casale-Brunet, Marco Mattavelli | 2023-09-29T12:48:27Z | http://arxiv.org/abs/2309.17200v2 | # Secure-by-design smart contract based on dataflow implementations
###### Abstract
This article conducts an extensive examination of the persisting challenges related to smart contract attacks within blockchain networks, with a particular focus on the reentrancy attack. It emphasizes the inherent vulnerabilities embedded in the programming languages commonly employed for smart contract development, particularly within Ethereum Virtual Machine (EVM)-based blockchains. While the concrete example used primarily employs the Solidity programming language, the insights garnered from this study are readily generalizable to a wide array of blockchain architectures. Significantly, this article extends beyond the mere identification of vulnerabilities and ventures into the realm of proactive security measures. It explores the adaptation and adoption of dataflow programming paradigms, employing Domain-Specific Languages (DSLs) to enforce security by design in the context of smart contract development. This forward-looking approach aims to bolster the foundational principles of blockchain security, offering a promising research direction for mitigating the risks associated with smart contract vulnerabilities. The objective of this article is to cater to a diverse audience, ranging from individuals with limited computer science and programming expertise to seasoned experts in the field. It provides a comprehensive and accessible resource for fostering a deeper understanding of the intricate dynamics between blockchain technology and the imperative need for secure smart contract development practices.
Keywords: Smart Contract Security · Dataflow · Blockchain · Ethereum · Solidity
## 1 Introduction
Smart contracts (SCs) were initially conceptualized in the early 1990s as digital agreements characterized by automated enforcement and execution of legally binding terms. In recent years, they have been integrated into blockchain technology. However, critical bugs and vulnerabilities in SCs have led to catastrophic consequences for deployed applications, necessitating further scientific research to enhance their security and reliability. Blockchain technology's foundational principle is the immutability and irreversibility of recorded data. This characteristic makes modifying deployed SCs infeasible, often requiring the creation of entirely new SCs for rectification. Since deployed SCs cannot practically be patched, rigorous pre-deployment testing and validation are crucial. Unfortunately, contemporary testing methodologies often fall short, resulting in errors and vulnerabilities with severe repercussions. A significant challenge arises from the disparity between the programming languages used in SC development and the unique characteristics of blockchain systems. Current programming techniques for smart contracts primarily rely on serial models, making it challenging to express parallel and distributed execution on a blockchain network. This limitation, described in recent papers such as [10] (see for example insights 4 and 13) and [20], contributes to the difficulty of achieving secure and correct-by-construction smart contract implementations.
Presently, SC implementation often relies on vague and underspecified "coding best practices" to compensate for these shortcomings. This deficiency stems from the absence of a suitable design methodology and the unreliability of analysis and verification tools. The question of "which programming model best suits SCs?" remains an open scientific problem, as highlighted in previous research [20, 19, 10, 18]. Previous attempts to identify an effective programming model have yielded limited results, with existing literature primarily offering diverse solutions (often lacking scientific verification) tailored for specific and straightforward use cases [18].
The aim of this work is to investigate the root causes of these vulnerabilities and of the challenges in achieving secure-by-design development techniques, which we attribute to the use of inappropriate programming languages. By examining a simple yet well-known example of a reentrancy attack (currently the most common attack and the one with the most catastrophic outcomes), we show that such vulnerabilities can be mitigated by adopting a dataflow programming model. This programming model, which has been successfully applied in fields such as video coding and genomics, enables the representation of a program (in this case, the smart contract) at a high level while ensuring both security and efficiency when implemented on parallel and heterogeneous architectures (in this case, the blockchain).
The paper is structured as follows: Section 2 provides a brief historical overview of reentrancy attacks. It illustrates their emergence in the early days of Ethereum in 2016 and how, despite the years that have passed, reentrancy attacks remain a prevalent threat in newly deployed smart contracts, emphasizing the urgency for effective mitigation strategies. An illustrative scenario involving a bank ATM is presented in Section 3 to facilitate understanding for non-technical readers. Subsequently, Section 4 delves deep into the technical aspects of reentrancy attacks, offering a detailed examination of the source code of a real-world smart contract. Through this analysis, readers can gain a profound understanding of why current programming languages for smart contract development are inadequate and fraught with risks. The limitations of current best practices are discussed in Section 5. Finally, the concept of a dataflow model for secure-by-design smart contract implementation is discussed in Section 6. Here, it is outlined how this model can be leveraged to construct inherently secure smart contracts. By doing so, the paper presents a potential solution to address the prevailing issues in the current ecosystem, where the development process heavily relies on the developer's experience rather than robust engineering methodologies. Section 7 concludes the paper, providing further research directions.
## 2 The DAO Hack and how the history was altered with a fork
In 2015, the nascent Ethereum community initiated discussions surrounding the concept of Decentralized Autonomous Organizations (DAOs). These blockchain-based entities were designed to facilitate coordinated human activities through the execution of verifiable code, primarily by utilizing smart contracts on the Ethereum blockchain. They aimed to enable decentralized decision-making regarding community protocols. In 2016, approximately one year after the Ethereum mainnet's launch, a DAO called "The DAO" was established. It operated as a decentralized, community-managed investment fund, with its smart contract deployed on April 30, 2016. Individuals acquired The DAO's community tokens by depositing Ether (ETH), and these ETH holdings constituted the investment funds managed by The DAO on behalf of its token-holding community. The DAO managed to attract nearly 14% of all ETH tokens in circulation at the time, boasting over 18,000 stakeholders. Unfortunately, on June 17, 2016, less than three months after its inception, The DAO's smart contract fell victim to a malicious hacker. Over the ensuing weeks, the hacker systematically drained a substantial portion of The DAO's smart contract balance. This security breach dealt a severe blow to The DAO, eroding the trust of its investors and severely denting the credibility of Ethereum and blockchain technology as a whole. Faced with a formidable decision, the Ethereum core team contemplated potential solutions to thwart the hacker. One option was to execute a fork of the Ethereum blockchain, effectively rewriting its history and creating an alternative reality. By forking Ethereum, the new branch would operate as if the hack had never transpired. If users adopted the new fork and abandoned the old one, the value of the hacker's ETH holdings would significantly decrease. This fork would invalidate the historical blocks containing the hacker's attack transactions. However, this drastic measure ran counter to the fundamental principles underpinning Ethereum. Those who supported the fork were essentially advocating for a world with two parallel Ethereum blockchains. Ultimately, the vote in favor of the fork prevailed with an 85% majority, leading to the fork's implementation on July 20, 2016, at block 1,920,000 [2], which contained the fix to "The DAO" (i.e., it allowed DAO investors to retrieve their funds). Consequently, two Ethereum chains now exist: Ethereum Classic (which retains the hack in its ledger) and the familiar Ethereum chain we know today (where the ledger's history was rewritten as if the attack had never occurred). Both chains have their native ETH tokens, which possess significantly different market values.
### Controversial issues
These events have sparked two opposing lines of discussion. The first concerns the legitimate but unethical nature of the attack. From a purely technical standpoint, the hacker's actions did not breach the parameters established in "The DAO" protocols or the algorithmic rules embedded in the smart contract. This viewpoint gains further weight when considering an open letter signed by the attacker (a copy of which can be accessed here [12]). Nevertheless, the ethical and moral dimensions of this action should not be underestimated. Despite its technical legality, appropriating funds in this manner is regarded as theft, giving rise to a significant ethical and moral dilemma. The second concerns blockchain as an immutable ledger: the outcome leads to a dilemma regarding whether the Immutability Theorem has been compromised due to consensus among network validators. Indeed, the introduction of a hard fork, while addressing the crisis immediately, opens a Pandora's box of philosophical questions that pertain to the very foundations of blockchain technology. It challenges the long-standing principle that code, once implemented, is sacrosanct and akin to law etched in stone, rendering any action permitted by the code inherently legitimate and unalterable once executed. In practical terms, the hard fork operates as a mechanism for temporal regression. Transactions recorded on the public ledger are effectively nullified, creating a reality in which the malicious hack appears never to have happened, as if the smart contract had never been published on the network. This decision carries profound implications, as it necessitates a compromise on the immutability of the blockchain, a fundamental principle of distributed ledger technology. This compromise was made in the interest of preserving the then-emerging Ethereum movement during a severe existential crisis. The immutable nature of the blockchain, once hailed as a cornerstone principle, was sacrificed in this instance in pursuit of the greater good.
## 3 Understanding Reentrancy
In the following section we are providing a non-technical explanation of the reentrancy attack using a "bugged" ATM analogy. Imagine you have 10,000 CHF in your bank account, and you walk up to an ATM to withdraw 200 CHF. You receive the 200 CHF, but you notice that your balance hasn't changed. So, you decide to withdraw another 200 CHF, and again, there's no change in your balance. You continue to withdraw increasingly larger amounts until the cash in your hand exceeds your total balance. You keep going, and only when you remove your card does your balance finally reflect what just happened: you now have 0 CHF in your bank balance but 200,000 CHF in your hands. All you know is that you now have 200,000 CHF in cash because the ATM kept withdrawing from your original balance without updating it after each withdrawal. Every time you selected "Withdraw 200 CHF," the ATM checked that your balance was sufficient (seeing your original 10,000 CHF balance) and withdrew from it. However, it never updated the balance to 9,800 CHF after each withdrawal. You effectively trapped the ATM in a loop of withdrawing from your initial balance indefinitely, and the money the ATM distributed to you came from the bank's funds, not necessarily your own. This is precisely what occurred in "The DAO" hack, where a similar vulnerability in The DAO's smart contract code allowed a malicious attacker to drain funds beyond the allocation to which they were entitled. This type of attack is known as a reentrancy attack (or exploit). Just like in the ATM example above, the malicious attacker repeatedly entered a transaction via a recursive call and continuously executed withdrawals without the balance ever being updated. The technical description of this attack is illustrated in the next section.
## 4 Technical analysis of the DAO attack
The smart contract "The DAO" is a Solidity-based smart contract (Solidity version v0.3.1) consisting of approximately 1200 lines of code, accessible at the Ethereum address 0xbb9bc244d798123fde783fcc1c72d3bb8c189413 (i.e., see [1]). As previously described, this smart contract was hacked for an amount of 50 million USD on June 17, 2016, by exploiting a flaw in the lines within the withdrawRewardFor(..) function [5]. This code defect was addressed by Lefteris Karapetsas in the fix titled "Protect against recursive withdrawRewardFor attack" [14], by moving the line containing the statement paidOut[_account] += reward; as depicted in Figure 1. In essence, what the hacker did was
withdraw their previously deposited ETH recursively using the splitDAO(..) function, which invoked the withdrawRewardFor(..) function up to a depth of 29 recursive calls. Consequently, transfers were executed 29 times without incrementing the value of paidOut[_account] with the already paid amount.
### A simplified version of The DAO smart contract
In order to provide a more comprehensive description of how it was possible to exploit "The DAO" smart contract (which was authored by highly experienced individuals) by leveraging the incorrect placement of a single line of code, we simplified the original source code. We rewrote it in Table 1 with only the functions necessary to understand its operation and how, by changing the order of just one line, it is possible to alter the transaction outcome. This highlights a fundamental discrepancy between the execution model of Solidity and that of the blockchain. In the following, we analyze block by block (identified by the numbered markers in Table 1) the various components of this smart contract:

1. It identifies the Solidity compiler version used to build the deployed smart contract bytecode.
2. This line defines the smart contract name (i.e., like a Java class).
3. This is an internal smart contract state variable that contains the ETH balance of each mapped address.
4. The function deposit(..) is used to deposit some ETH (i.e., defined by msg.value) on the smart contract. This function increments the balance of the caller (i.e., identified by msg.sender). By construction, the minimum deposit is 1 ETH.
5. The function daoBalance(..) returns the available ETH balance stored in the smart contract.
6. The function withdraw(..) is used to withdraw the caller's (i.e., identified by msg.sender) balance; it describes, in an equivalent manner, the functioning of the withdrawRewardFor(..) function [5] available in the original DAO.sol smart contract. The operations performed by the withdraw function are:
1. Check if the caller has sufficient funds by checking balances[msg.sender]
2. Withdraw the balance sending the funds to the msg.sender address
3. Update balances[msg.sender] by setting the value to 0
This contract, while appearing straightforward, harbors a significant concern within its withdraw(..) function. The anticipated execution sequence, which aligns naturally with our thought processes when using a sequential language like Solidity, unfolds as follows:
1. Invocation of the withdraw(..) function.
2. Within the function, a validation step is executed to ascertain the caller's possession of available funds. This validation relies on inspecting the balances(address => uint256) mapping.
Figure 1: The DAO fix on GitHub
3. If the caller possesses available funds, the function proceeds to transfer all those funds back to the caller. Conversely, if no available funds are detected, an error is generated, and the execution terminates.
4. As a final step, the balances mapping is updated to reflect a balance of 0 for the caller's address.
The pivotal question at this point centers on the locus of the issue. The core concern revolves around the fact that the caller of the withdraw(..) function can either be an external wallet (as in the case of individual users) or another smart contract.
In the case where the caller is a wallet, the withdraw(..) function operates smoothly without intrinsic issues. In contrast, in the latter scenario where the caller is a smart contract, complications arise due to the non-atomic nature of the function's execution. The intricacy lies in how this non-atomicity can be exploited by the invoking smart contract. To elucidate this concept, we employ the visual aid of Figure 2, representing each smart contract as a distinct entity denoted by a box. We further illustrate communication between these smart contracts as communication channels, akin to buffered interconnections. In this diagram we have two smart contracts communicating with each other: a) the DAO, which is the one we saw earlier, and b) the Attacker, which is the smart contract we use to perform the exploit. Let us see below how we can build the Attacker smart contract in order to drain all the funds, even those that do not belong to us, from the DAO with the few Solidity source code lines illustrated in Table 2.
Table 1: Simplified version of the original "The DAO" smart contract source code.

```solidity
// (1)
pragma solidity ^0.8.10;

// (2)
contract Dao {
    // (3)
    mapping(address => uint256) public balances;

    // (4)
    function deposit() public payable {
        require(msg.value >= 1 ether, "deposits must be no less than 1 Ether");
        balances[msg.sender] += msg.value;
    }

    // (5)
    function daoBalance() public view returns (uint256) {
        return address(this).balance;
    }

    // (6)
    function withdraw() public {
        // Check user's balance
        require(balances[msg.sender] >= 1 ether, "Insufficient funds. Cannot withdraw");
        uint256 bal = balances[msg.sender];
        // Withdraw user's balance
        (bool sent, ) = msg.sender.call{value: bal}("");
        require(sent, "Failed to withdraw sender's balance");
        // Update user's balance
        balances[msg.sender] = 0;
    }
} // (7)
```
Figure 2: Simplified execution flow of the DAO smart contract attack.
In the following, we analyze block by block (identified by the numbered markers in Table 2) the various components of this smart contract:

1. It identifies the Solidity compiler version used to build the deployed smart contract bytecode.
2. This is the interface of the DAO smart contract; it defines which functions of the DAO smart contract we can call from the Attacker smart contract.
3. This line defines the smart contract name.
4. This is the handle to the DAO smart contract, containing its public address.
5. This is the smart contract constructor function, used only during deployment, where the address of the DAO smart contract is provided.
6. This is the fallback function, a special Solidity construct that is triggered in specific situations, such as when the smart contract receives some ETH.
7. We implement and use the function attack(..) to launch the attack. We call the deposit function of the DAO smart contract, sending it 1 ETH, so that: a) it receives the minimum required deposit, and b) it records in its balances variable that we have 1 ETH we can withdraw. Finally, we call the withdraw(..) function of the DAO smart contract. This will then send the funds to this smart contract, and the fallback function will be triggered and executed. And this is where the problems begin, as we will see in detail in the section below.
#### The reentrancy attack
Now that we have seen the source code of the Attacker smart contract, let us assume that there are two users, which we will identify as userA and userB (in reality, they are identified by 42-character hexadecimal addresses, but this would unnecessarily complicate the discussion). Both userA and userB send 3 ETH to the DAO contract. So, we can start the discussion with the DAO contract in the following state:
* DAO balances[userA] = 3 ETH
* DAO balances[userB] = 3 ETH
* The total DAO smart contract balance is 6 ETH (i.e., the total ETH stored in the smart contract); this value can be retrieved via the DAO.daoBalance(..) function.
And now we are ready to launch the attack by calling the attack(..) function from the smart contract Attacker. What happens next is illustrated by the red arrows in Figure 3, which is:
1. The Attacker.attack(..) function is executed and:
    1. It calls the DAO.deposit(..) function, sending 1 ETH.
    2. DAO.balances[address(Attacker)] is set to 1 ETH.
    3. It calls the DAO.withdraw(..) function.
Table 2: Attacker smart contract Solidity source code.

```solidity
// (1)
pragma solidity ^0.8.10;

// (2)
interface IDao {
    function withdraw() external;
    function deposit() external payable;
}

// (3)
contract Attacker {
    // (4)
    IDao dao;

    // (5)
    constructor(address _dao) {
        dao = IDao(_dao);
    }

    // (6)
    fallback() external payable {
        if (address(dao).balance >= 1 ether) {
            dao.withdraw();
        }
    }

    // (7)
    function attack() public payable {
        // Seed the DAO with at least 1 Ether.
        require(msg.value >= 1 ether, "Need at least 1 ether to commence attack.");
        dao.deposit{value: msg.value}();
        // Withdraw from the DAO.
        dao.withdraw();
    }
} // (8)
```
2. The DAO.withdraw(..) function is called; the value of DAO.balances[address(Attacker)] is 1 ETH, so:
    1. 1 ETH is sent from the DAO contract to the Attacker contract.
    2. The new DAO balance is 5 ETH.
3. The Attacker.fallback(..) function is triggered, since 1 ETH has been received, and this function calls the DAO.withdraw(..) function.
4. The DAO.withdraw(..) function is called; the value of DAO.balances[address(Attacker)] is still 1 ETH, since it has never been updated: the line of the DAO smart contract code in Table 1 that resets the caller's balance has not yet been executed.
    1. 1 ETH is sent from the DAO contract to the Attacker contract.
    2. The new DAO balance is 4 ETH.
5. Steps 3 and 4 repeat until DAO.daoBalance(..) is 0, i.e., all funds have been drained.
The final status of the DAO contract is the following:
* DAO balances[userA] = 3 ETH
* DAO balances[userB] = 3 ETH
* DAO balances[Attacker] = 0 ETH
* The DAO smart contract balance is 0 ETH, since all the funds have been drained:
* address(DAO).balance = 0 ETH
* address(Attacker).balance = 6 ETH
Now we are going to look in detail at what is going on during the execution of these two smart contracts, and at why this exploit of such simple source code is rooted in the computation model underlying Solidity. The same conclusions can be extended to the other sequential programming languages used to develop smart contracts.
As you can see, the funds are sent from the DAO contract before the balances variable is updated; the Attacker fallback(..) function is triggered and calls the DAO withdraw(..) function, which still sees a stale view of the balances variable and therefore keeps sending the Attacker funds that are not its own (the DAO holds the funds, but they are intended for userA and userB).
The question that arises is: _"How can we prevent such exploits and attacks?"_ Presently, the prevalent approach employed in smart contract implementation appears to rely on a set of coding best practices. However, this approach poses a substantial challenge as it inherently compromises the security of the code, given the absence of a universally applicable methodology for its analysis, irrespective of the use-case scenario. In the next section, we will examine how these techniques are formulated and adopted.
Figure 3: DAO attack explained
## 5 The (fragile and difficult) use of coding best-practices
To date, best practices for smart contract development are a set of (non-standardized) rules based on the knowledge of experienced developers, who suggest coding rules in order to avoid well-known exploits. There are several collections of best practices for Solidity; the one we believe is most comprehensive is available here [11]. As you can see, these are _alchemical_ rules that look almost hobbyist (even if drafted by a company). For our particular example, the best practice we need to use in order to prevent the reentrancy attack is the following one:
"If no internal state updates happen after an ether transfer or an external function call inside a method, the function is safe from the reentrancy vulnerability".
This rule requires changing the order of operations in the DAO withdraw(..) function so that the caller's balance is reset to 0 before any ETH is sent to the Attacker smart contract. The new code would look like the one illustrated in Figure 4. This is feasible for this simple example but might not be for more complex ones. It's worth noting that this corresponds precisely to the fix that was implemented in the original source code in 2016, as previously illustrated in Figure 1. A second solution could be to use a kind of _mutex variable_ [17], such as the one illustrated in Figure 7 and discussed in the following section.
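For concreteness, a sketch of the reordered withdraw(..) function for the simplified DAO contract of Table 1, corresponding to the fix illustrated in Figure 4, could look as follows:

```solidity
function withdraw() public {
    // Check the caller's balance
    require(balances[msg.sender] >= 1 ether, "Insufficient funds. Cannot withdraw");
    uint256 bal = balances[msg.sender];
    // Reset the caller's balance BEFORE the external call,
    // so that a reentrant call sees a zeroed balance and reverts
    balances[msg.sender] = 0;
    // Only now send the funds
    (bool sent, ) = msg.sender.call{value: bal}("");
    require(sent, "Failed to withdraw sender's balance");
}
```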
### General considerations about using best practices
Security-by-design is a fundamental requirement of any technology today. However, this simple example shows how even the development of a smart contract of a few lines is prone to potentially catastrophic bugs. It is extremely difficult to write even simple smart contracts that are secure and reliable, since the development methodology is still based on best practices that are hard to apply to more complicated protocols requiring large amounts of code, and that may simply not be applicable in specific situations. Going back to the origin of the problem, the main difficulty in securely writing a smart contract can be traced to the divergence between the programming language's execution model and the real execution on the blockchain. This divergence can be identified as the main cause of problems in both development and analysis.
1. The difficulty of using an inappropriate language is evident if you look at the GitHub log of the various attempts by the DAO development team to fix the bug [15]. It must be noted that at that time no best practices were available.
2. The difficulty of analysing smart contracts developed with an inappropriate execution model is evident when we try to analyse the above example with current tools: the code of the second solution (i.e., the one with the mutex) generates false positives, making the analysis unreliable and leaving the security of the implementation dependent on the developer's experience.
Figure 4: DAO attack reentrancy fix by changing the order of a line.
Therefore, it is absolutely necessary to have a development methodology capable of providing security by design, which is a fundamental requirement for any kind of technology we have today. In the following section, we will see how the security-by-design requirement can be guaranteed by using a dataflow programming model and why, in our opinion, pursuing research in this direction is worthwhile.
## 6 The use of a Dataflow model
In recent decades, the rise of massively parallel architectures, coupled with the challenges of programming these architectures, has made the dataflow paradigm an attractive alternative to the imperative paradigm [8, 7]. The primary advantages of the dataflow paradigm are linked to its capacity to express concurrency without intricate synchronization mechanisms. This capability arises from the program's internal representation as a network of processing blocks that exclusively communicate through communication channels. In fact, these blocks operate independently and do not produce any side effects [16]. Consequently, this eliminates potential concurrency issues that may emerge when programmers are tasked with manually managing synchronization among parallel computations. Furthermore, this paradigm explicitly exposes all the inherent parallelism within a program. Over the past decade, a multitude of programming languages has emerged to model the semantics of dataflow programs [6, 13]. Imperative programming languages have been extended to incorporate parallel directives (e.g., Java, Python, C/C++), while native dataflow languages (e.g., Esterel, Ptolemy) have been newly specified. Within this diverse landscape of language extensions, RVC-CAL [3] distinguishes itself as the sole formally standardized programming language aligned with the dataflow model. It is capable of modeling complex and dynamic dataflow networks where the token production and consumption rates cannot be known at compile time. As depicted in Figure 5, an RVC-CAL actor is defined as a collection of atomic methods (i.e., functions), referred to as actions, accompanied by encapsulated state variables. These variables are inaccessible for access or modification by neighboring actors within the same network. During an actor's execution, only a single action is selected at any given time, with the concurrent execution of multiple actions being precluded. The selection of the action to execute is contingent upon the input token values and/or the actor's internal variables. One of the intriguing properties associated with the use of these high-level dataflow programming models is the ability to generate optimized and secure-by-design low-level code from this architecture-independent representation. Examples of synthesis tools specifically developed for the RVC-CAL programming language include the Open RVC-CAL Compiler (Orcc) [21], Exelixi [4], and Tycho [9].
Figure 5: An example of RVC-CAL dataflow network composed by 5 actors (A, B, C, D, E) and 5 buffers (b1, b2, b3, b4, b5). Each actor is composed by a set of input and output ports, a set of internal state variables, a set of actions (i.e., atomic functions) and a finite state machine (FSM).
### Dataflow-based smart contracts
The interconnected block representation we employed in Figure 2 to elucidate the interaction between two smart contracts, namely the DAO and the Attacker, essentially forms the foundation of any dataflow-based model, as seen earlier. As illustrated in Figure 6, the core characteristics of such a dataflow model are as follows:

A) Each box represents an actor, which corresponds to a smart contract, as previously demonstrated in Figure 2.

B) The exchange of information between two smart contracts can only occur through dedicated communication channels known as buffers. These buffers ensure that the order and consistency of data are preserved and guaranteed by the execution model itself.

C) Each function execution is inherently atomic. This means that, in advance, we are assured that the execution model prevents unpredictable effects resulting from changing the order of source code lines. A function execution adheres to the following sequence of operations:
* Input data is consumed from the input buffer(s).
* Subsequently, the execution occurs, during which state variables may be updated.
* Only at this point is output data placed in the output buffer(s).
The token production and consumption behavior of its actions determines the category to which each smart contract (actor) belongs. These categories include:
* Static: at each function execution, the actor consumes from its input buffers and produces on its output buffers a constant, fixed amount of data.
* Cyclo-Static: the amount of data consumed and produced varies from execution to execution but follows a repetitive and cyclic pattern.
* Dynamic: the amount of data produced and consumed is not known in advance.
These fundamental rules underlie a dataflow programming model, which can be extended to enhance the expressiveness and capability of representing various smart contract use cases. Consequently, creating a smart contract using a Domain Specific Language (DSL) similar to Solidity and RVC-CAL can resemble the example presented in Table 3. It is important to note that in this example, we employ a guard condition as a prerequisite for executing the action (function); if the prerequisite is not met, the action cannot be executed. In the following, we analyze each block, identified by the numbered markers in Table 3, to understand the various components of this smart contract:

1. Smart contract (actor) name.
2. Input port definition, the point where data can be read/received inside the actor.
3. Output port definition, the point where data can be written/sent outside the actor.
4. ETH balance of each mapped address.
5. A requirement for at least 1 ether available in the input port, involving popping (consuming) one value from the input port and updating the state variable.
6. The update of the balances variable occurs after the value is pushed onto the output port.
Figure 6: Smart contract dataflow model
However, it's crucial to understand that this message will be sent only once all operations are executed. This design choice fundamentally guarantees the absence of a reentrancy condition.
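Since the concrete DSL syntax is only sketched in this paper, the following is a speculative rendering of such an actor, reconstructed from the block-by-block description above (the keywords, port names and the push/pop primitives are our own assumptions, not the exact notation of Table 3):

```
actor Dao {                                   // (1) smart contract (actor) name
    in  uint256 deposits;                     // (2) input port
    out uint256 withdrawals;                  // (3) output port
    mapping(address => uint256) balances;     // (4) state variable

    action deposit() guard (available(deposits) >= 1 ether) {    // (5)
        balances[msg.sender] += pop(deposits);
    }

    action withdraw() guard (balances[msg.sender] >= 1 ether) {  // (6)
        push(withdrawals, balances[msg.sender]);  // token emitted only once
        balances[msg.sender] = 0;                 // the whole action completes
    }
}
```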
In essence, this model enables the generation of Solidity source code that is correct by construction. As previously mentioned, the dataflow model of computation hinges on the concept of atomicity in function execution. This concept can be translated into automated source code generation from the dataflow model to Solidity code, introducing, for example, a mutex variable, as seen in the generated source code illustrated in Figure 7. It's important to note that this source code, while deviating from Solidity best practices, is entirely correct, and it fundamentally enhances robustness by being invariant to the order of execution of its operations. This assures a security by design property, a critical requirement for smart contract development, which is currently lacking.
## 7 Conclusions
In summary, this study represents an initial exploration into the integration of dataflow programming models and Domain-Specific Languages (DSLs) within the domain of smart contract development, with a primary focus on enforcing security through the principle of security-by-construction. Our investigation has unveiled the fundamental attributes of dataflow-based models, demonstrating
Figure 7: Smart contract dataflow generated source code:

```solidity
contract Dao { // Solidity source code programmatically generated
    // ...
    bool internal locked = false; // new state variable acting as a mutex
    // ...
    function withdraw() public payable {
        require(!locked, "No reentrancy");
        require(balances[msg.sender] >= 1 ether);
        locked = true;
        uint256 bal = balances[msg.sender];
        (bool sent, ) = msg.sender.call{value: bal}("");
        require(sent, "Failed to send bal");
        balances[msg.sender] = 0;
        // ...
        locked = false;
    }
}
```

Table 3: Smart contract dataflow code using a DSL similar to Solidity and RVC-CAL
their innate capacity to articulate concurrency in a manner that obviates the need for intricate synchronization mechanisms. This approach effectively mitigates potential concurrency-related vulnerabilities, a significant concern within decentralized applications. Moreover, dataflow models provide a transparent framework for exposing inherent program parallelism, imparting an additional layer of security to the smart contract development process. As elucidated in our analysis, the adoption of DSLs analogous to Solidity and RVC-CAL facilitates the automated generation of low-level code that adheres to security-by-design principles, starting from high-level, architecture-agnostic representations. This approach reframes smart contract development, shifting the emphasis away from manual coding practices, which heavily rely on developer expertise, towards a systematic, inherently secure methodology. While our research marks an initial exploration of these concepts, it simultaneously beckons forth a promising trajectory for future research. Within the ever-evolving landscape of blockchain technology, where security stands as a paramount concern, this nascent study lays the foundational groundwork for a more resilient and secure future in smart contract programming.
|